YouTube comments of Michael Wright (@michaelwright2986).

  1. 357
  2. 281
  3. 192
  4. 164
  5. 141
  6. 132
  7. 78
  8. 78
  9. 60
  10. 57
  11. 53
  12. 45
  13. 40
  14. 38
  15. 32
  16. 32
  17. White, cis male, old--too old to be a boomer. One thing that, to someone of my age, seems to be missing is a discussion of the nature of gender roles and stereotypes. When I was young, there was a great deal of talk, and some actual activity, around redefining gender roles. At the time, a lot of it concerned such mundane propositions as that men actually could and should cook and do housework (Schwerpunkt: can your man clean the toilet properly?) and look after kids, and that women could chair meetings with men in them and could learn to fix their own cars: but there was also talk about men "getting in touch with their feminine side" and that sort of thing. And androgyny was a thing, and we all loved Marlene. The swerve away from talk about sex, using only "assigned gender" and "experienced gender", seems unfortunate in that it tends to preserve the notion of fixed gender characteristics. It is an undoubtable fact that some people are born the wrong sex and need to be reassigned to have a decent chance of flourishing. But one wonders how many of the children who feel dysphoria, and especially girls who want to be boys, are actually reacting to something wrong with society's construction of gender, not some sub-optimal feature of their embodied personality. I think it is not just recently that girls have thought "I wish I was a boy," and given the patriarchal nature of society that's not a surprise. I wonder if this is one of the areas in which science is of real but limited help. Gender has possibilities for nuance and ambiguity which don't produce good statistics. It is absolutely certain that the politicisation of the topic is doing no good to anybody except for the politicians and businesses that hope to profit from inflaming the socially conservative.
    32
  18. 30
  19. 27
  20. 26
  21. 25
  22. 23
  23. 20
  24. 19
  25. 18
  26. 18
  27. 16
  28. 16
  29. 16
  30. 15
  31. 15
  32. 15
  33. 14
  34. 14
  35. 14
  36. 14
  37. 14
  38. 13
  39. 13
  40. 13
  41. 13
  42. 13
  43. I saw a Sperrin at Farnborough when I was still in short trousers. It was acting as a testbed for the Gyron: we pronounced the engine's name with an initial affricative, like "gyroscope." The Gyron was going to be the Really Really Big jet, but it never seemed to get anywhere--that would be interesting to hear about. But the Gyron Junior did get some use--Wikipedia tells me it was used in the initial (underpowered) version of the Buccaneer, so not much more successful than big brother. That's a really interesting account. I didn't know that the Valiant was essentially carried on as a private venture. I'd always thought that the Ministry ordered two cutting edge aircraft (which ended up being the most successful and long lasting of the whole set), with the less adventurous Valiant as a safety development. And they ordered the Sperrin, just in case, and then there were four. But the Sperrin as initially intended, as a safety net for the two advanced aircraft, looks almost rational. Looking back, it looks like the British industry produced a profligate number of prototypes, all competing. I suppose the US produced a lot of different types, some of which failed; but they could afford it. France seemed to manage things with a bit more economy. Although British aviation enthusiasts have nothing but bad to say about the forced amalgamations, something like that was needed for a world where aircraft production was getting more and more capital intensive. When I was at secondary school, the Aviation Club (or whatever we were called) got taken on a Saturday to Hatfield where the Comet was being produced. What we were shown looked like a series of sheds, one with a Comet fuselage in a corner. Memory is highly fallible (I realise I can't place this visit before or after the disasters from memory--it must have been after, given my age), but the impression I carry with me is that it would all have looked a bit scruffy in the back garden of the bloke next door.
    13
  44. 13
  45. 12
  46. 12
  47. 12
  48. 12
  49. 12
  50. 11
  51. 11
  52. 11
  53. 11
  54. 11
  55. 11
  56. 10
  57. 10
  58. 10
  59. 10
  60. 9
  61. 9
  62. 8
  63. 8
  64. 8
  65. 8
  66. 8
  67. 8
  68. 8
  69. 8
  70. 8
  71. 8
  72. 8
  73. 8
  74. 8
  75. 8
  76. 7
  77. 7
  78. 7
  79. 7
  80. 7
  81. 7
  82. 7
  83. 7
  84. 7
  85. 7
  86. 6
  87. 6
  88. 6
  89. 6
  90. 6
  91. 6
  92. 6
  93. 6
  94. 6
  95. 6
  96. 6
  97. 6
  98. 6
  99. 6
  100. 5
  101. 5
  102. 5
  103. 5
  104. 5
  105. 5
  106. 5
  107. 5
  108. 5
  109. 5
  110. 5
  111. 5
  112. 5
  113. 5
  114. 5
  115. 5
  116. 5
  117. 5
  118. 5
  119. 5
  120. 5
  121. 5
  122. 5
  123. 5
  124. 5
  125. 5
  126. 4
  127. 4
  128. 4
  129. 4
  130. 4
  131. 4
  132. 4
  133. 4
  134. 4
  135. 4
  136. 4
  137. 4
  138. 4
  139. 4
  140. 4
  141. 4
  142. 4
  143. 4
  144. 4
  145. 4
  146. 4
  147. 4
  148. 4
  149. 4
  150. 4
  151. 4
  152. 4
  153. 4
  154. 4
  155. 4
  156. 4
  157. 4
  158. 4
  159. 4
  160. 4
  161. 4
  162. 4
  163. 4
  164. 4
  165. 4
  166. 4
  167. 4
  168. 4
  169. 4
  170. 4
  171. 4
  172. 4
  173. 4
  174. 4
  175. 4
  176. 3
  177. 3
  178. 3
  179. 3
  180. 3
  181. 3
  182. 3
  183. 3
  184. 3
  185. 3
  186. 3
  187. 3
  188. 3
  189. 3
  190. 3
  191. 3
  192. 3
  193. 3
  194. Thank you. Although it's not defined, New Atheism is useful as a label, at least as useful as the labels applied to literary or other artistic movements. And the chief feature of New Atheist discourse is that the practitioners don't understand Christianity; or at least, they think that all Christianity is like rather benighted forms of US White Conservative Evangelicalism (very marked in the anti-creationist rhetoric of Dawkins, who is English; he would be hard pressed to find a creationist in any of the long-established denominations in England). Many campaigning atheists aren't clear whether they're opposed to the idea of God, or opposed to organised religion. That's a distinction I'm sure you'll be pursuing, but we all know of intensely believing theists who have been strongly opposed to at least some forms of the religious group they were brought up in or which surrounds them (defiers of steeple houses, shall we call them). Meanwhile, as some people who don't believe in God form groups for mutual support and delight and to celebrate their world view, one watches with a certain wry amusement as the history of religion seems to begin to be played out in them. How long, I wonder, before a dispute about what it takes to be a true atheist? This, of course, only applies to public atheists, and especially atheist campaigners, and has nothing to say about the many people who live their lives without belief in any god or godlike entity (sorry, hard to express that in the light of the Christian theological tradition that says it is incorrect to say that God exists). It would be interesting to know if those many people are happier or not, and whether they feel themselves more open to flourishing than the typical believers/practitioners of a variety of faiths. For the avoidance of doubt, I'm not in the USA. That poor country, amongst its many ills, is under the baneful influence of a mistaken form of Christian belief so deviant as to perhaps amount to heresy. In the USA, Satanism has a lot of work to do (as long as they don't really believe in Satan).
    3
  195. 3
  196. 3
  197. 3
  198. 3
  199. 3
  200. 3
  201. 3
  202. 3
  203. 3
  204. I love this channel, but debunking woo is not the same as dismissing everything that is not physics as woo. That is just arrogance. Take "quantum healing": obviously woo, and probably a fraud upon the public designed to extract money from the gullible, but although you can't cure cancer by thinking, you certainly can produce effects in your body by conscious effort. Square breathing, for instance, lowers heart rate, and you can increase peripheral blood flow by conscious attention. I paid good money to a perfectly sane psychologist to learn such techniques to control my rage and desire to murder several of my bosses and colleagues. It works, though I don't know how; and it's hard to explain how it works, or even to describe the phenomenon, without using mentalist concepts. Nor does it do any good to pour scorn on research on ESP. It was, for a time, a possibility taken seriously by serious people. Properly investigated, it was shown not to happen--rather like the way the Society for Psychical Research put a great deal of effort into debunking fraudulent mediums, so that now mediumship is just a form of stage entertainment and exploitative psychotherapy. It's the same as the way historians of science point out that although alchemy is now obviously wrong, it took a lot of serious scientific analysis to distinguish alchemy from chemistry, the woo from the true. "Shut up and calculate" is a method that works, but to assume that it rules out attempts to "understand" what is going on is philosophically naive. I am not a philosopher, but I know some, and they aren't actually fools. But of course scientists have to denigrate all humanities studies because they're competitors for funding in universities.
    3
  205. 3
  206. 3
  207. 3
  208. 3
  209. 3
  210. 3
  211. 3
  212. 3
  213. 3
  214. 3
  215. 3
  216. 3
  217. 3
  218. 3
  219. 3
  220. 3
  221. 3
  222. 3
  223. 3
  224. 3
  225. 3
  226. 3
  227. 3
  228. 3
  229. 3
  230. 3
  231. 3
  232. 3
  233. 3
  234. 3
  235. Brilliant. Thanks to this video, I have got a long way with running a virtual machine, which I had previously regarded as an Arcane Mystery, not for the likes of me. My version of VirtualBox is 7.0.4 and I'm running it on the Thinkpad T420 I use for Adventures in Computing. One difference I see with 7.x is in installing Guest Additions. Your version offers a choice of physical drive or ISO. 7.0.4 doesn't offer that, only having an option for CD, but there is another command (something like "Load Guest Additions CD") which slightly counterintuitively means "tell the machine to look for the ISO in the D: drive." That didn't work, but I looked in Explorer for something that looked like the right file, and clicky-wickied, and it worked. I got a slight hiccup with USB. My venerable machine only has USB 2, so I clicked the radio button for the USB 2 controller, and that didn't work: it does work if I tell it to enable the USB 3.0 controller which, as a matter of fact, I don't have. I don't know why, but I can see a USB drive now. (For anyone who prefers the command line, a hedged sketch of the equivalent VBoxManage steps is below.) My only problem left is getting iTunes to see the WiFi network rather than the virtualised connection, so I can hook it up to AirPlay. This is a pretty niche requirement, and thanks to you I feel confident in trying to solve it. I spent a career in tertiary level teaching, some of it fluffy stuff, some of it with its own technicalities. I know absolutely first-rate exposition of difficult material when I see it, and you're a master. Just the right amount of PowerPoint and graphics--in this, I was actually grateful for having the main points all on one slide. Brilliant.
    2
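    A minimal, hedged sketch of a command-line route to the two steps described above, wrapping VirtualBox's own VBoxManage tool from Python. The VM name ("Win11") and the storage controller name ("IDE") are placeholders, not details from the comment; check yours with "VBoxManage list vms" and "VBoxManage showvminfo <name>", and run the modifyvm step with the VM powered off.

    import subprocess

    VM = "Win11"  # placeholder VM name -- substitute your own

    # Attach VirtualBox's bundled Guest Additions ISO to the VM's optical drive;
    # "additions" is a special medium value understood by VBoxManage.
    subprocess.run([
        "VBoxManage", "storageattach", VM,
        "--storagectl", "IDE", "--port", "1", "--device", "0",
        "--type", "dvddrive", "--medium", "additions",
    ], check=True)

    # Enable the xHCI (USB 3.0) controller, which, as the comment notes, can be
    # the setting that actually works even on an older USB 2 host.
    subprocess.run(["VBoxManage", "modifyvm", VM, "--usbxhci", "on"], check=True)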
  236. 2
  237. 2
  238. 2
  239. 2
  240. 2
  241. 2
  242. 2
  243. 2
  244. 2
  245. 2
  246. 2
  247. 2
  248. 2
  249. 2
  250. 2
  251. 2
  252. I think Simulated Intelligence might be a better term. I don't know what the Python code is like, and it is clear that Machine Learning can help humans greatly when very large data sets are relevant. But the summary of Macbeth is a sign that the value of these systems varies greatly depending on domain. Basically, the Macbeth summary is vacuous crap. Anyone who teaches English will have seen that sort of thing, and from time to time it probably causes one to ask why one is doing this job. It is likely to be accepted, because management requires a certain minimum pass rate, but you can't imagine that the person who produced it gained any value from the exercise at all, and would have been better off doing something else. But, even today, it might not be accepted, because there is a howling error. It is not Macbeth who goes mad, but Lady Macbeth. To tell the truth, Macbeth is not a play I think all that highly of (within the context of plays by Shakespeare), but the ChatGPT error makes me realise that one of the interests in the play is the way that the rather stolid Macbeth keeps on to the end, whereas the more ruthless and imaginative Lady M loses it. But the point is you need a human, with a prejudice against the output of high-powered chatbots, to detect the error. All this literary stuff is doubtless of little interest to the STEM crowd, but it should be, because this sort of activity is a surrogate for a great deal of human employment of intelligence. How does the latest statement from the Elbonian Foreign Office compare with earlier pronouncements? You might find a machine learning analysis useful, but God forbid anyone should rely on it, because of the capacity for error: and on the evidence so far, error is going to persist. As for the two brief statements about AI, the ChatGPT one is so flabby as to be useless--it's the sort of unexceptionable blandeur that CEOs pump out on public occasions, because it will never come back and bite them. Whereas the human response, though necessarily general, points to specific areas of concern and opportunity. The real danger is that, because it takes considerable active human intelligence to make good use of this machine learning output, too few people will be aware of its dangers, and what the computer says will be taken as an oracle. I look forward to the day when all medics can take advantage of automatic analysis of huge data sets, just as they now use sophisticated analysis and imaging; but I still want to be diagnosed by a doctor, not a robot. Oh, yes, and all that analysis will be great in government: but you still want elected politicians to set goals. After WW2, many governments set out on a path towards rough egalitarianism--not the total eradication of difference, but a levelling up, the assurance that no one would fall below a minimum level of material provision that was at quite a high standard. I was a huge beneficiary of that consensus. In the 1980s the mood changed to the pursuit of economic efficiency, and now prosperous countries have poverty and deprivation on a level not seen since the Great Depression. Machine Learning will have nothing to say about which of those paths to pursue, and for that choice to be made--or rather, for the path between extremes to be negotiated--we need human politicians, however prone to failure. And we must hope that the bad actors don't use AI to cook up all sorts of misinformation carefully designed to correlate with what people are already saying, and so believe, but to extend it and weaponise it.
In some areas, AI is overhyped; in others it is very powerful. Unfortunately, humans sometimes do bad things with powerful tools. Oh, and AI is not going to be part of our evolution until we can exchange genes with an Nvidia card (though perhaps research on that is already going on in basements across the globe).
    2
  253. 2
  254. 2
  255. 2
  256. 2
  257. 2
  258. 2
  259. 2
  260. 2
  261. 2
  262. 2
  263. 2
  264. 2
  265. 2
  266. 2
  267. 2
  268. 2
  269. 2
  270. 2
  271. 2
  272. 2
  273. 2
  274. 2
  275. 2
  276. 2
  277. 2
  278. 2
  279. 2
  280. 2
  281. 2
  282. 2
  283. 2
  284. 2
  285. 2
  286. 2
  287. 2
  288. 2
  289. 2
  290. 2
  291. 2
  292. 2
  293. 2
  294. 2
  295. 2
  296. 2
  297. 2
  298. 2
  299. 2
  300. 2
  301. 2
  302. 2
  303. 2
  304. 2
  305. 2
  306. 2
  307. 2
  308. 2
  309. 2
  310. 2
  311. 2
  312. 2
  313. Thank you. There is a strangely persistent belief in British pop history that radar was a British invention. This is probably due to the capacities for self-promotion of Watson-Watt, but it's odd that it has survived so long now that the 40-year history of radio-frequency detection has been studied. (BTW Watson-Watt genuinely was responsible for HF/DF, which located very short bursts of radio transmission: it used a technique he developed for detecting lightning strikes when he was a civilian scientist, and it mattered for submarine detection.) There's a common belief that Chain Home was very resilient precisely because it was rather crude in construction: do you think that's right? I realise that we assume the radar should have been the first target, because in any modern invasion it is--but that's with modern precision weapons, especially radiation-homing weapons. My sense is that the Dowding System worked for a variety of reasons. First, as you say, it was the integration and distribution of information which was the technical triumph. Perhaps more important was Dowding's perception that the object was not to achieve victory but to avoid defeat. An invasion would have needed air superiority, if not command of the air, as a necessary, though not sufficient, condition, and his job was to prevent that, which he did by ensuring that relatively small numbers of aircraft were fed into what we'd now call 'target-rich environments'. He resisted the urgings to mount giant Sky Battles, and got kicked out as thanks; but it was important that he understood how much air war in the 1940s was a question of attrition. As you say, radar at the level of Chain Home was no use for final interception (and it took a long while before British radar was good enough for interception at night). German radar could direct defending aircraft more precisely, but (was this 'because'?) was more limited in the number of interceptions it could handle. I assume the chief value of Chain Home was to give advance warning of the direction of an attack: I can't imagine that members of the Observer Corps were kept at a constant level of high alert, but I don't know of records of messages being sent out to wake them up and point them in the right direction--have you found any? The other great virtue was giving advance notice to the fighters to take off and climb to operational altitude in the right general area, particularly important with fighter escorts flying high cover; and also giving the assurance needed not to put up other squadrons, so preserving mission capability and going with Dowding's parsimonious approach to the battle.
    2
  314. 2
  315. 2
  316. 2
  317. 2
  318. 2
  319. 2
  320. 2
  321. 2
  322. 2
  323. 2
  324. 2
  325. 2
  326. 2
  327. 2
  328. 2
  329. 2
  330. 2
  331. 2
  332. 2
  333. 2
  334. 2
  335. 2
  336. 2
  337. 2
  338. 2
  339. 2
  340. 2
  341. 2
  342. 2
  343. 2
  344. 2
  345. 2
  346. 2
  347. 2
  348. 2
  349. 2
  350. 2
  351. 2
  352. 2
  353. 2
  354. Two urban myths could have been dispelled better. Certainly, MS never said that Windows 10 was the last version ever. But that official rebuttal is so full of obscurity vetted by the legal department that the tech journalists might have genuinely misunderstood it. And if at first you don't succeed, give up--that's not the attitude we expect from a predatory monopolist. If they'd just said "Look, we never said W10 was Windows' final form. <Legal cya material follows.>" they would at least have tried. Linux and the terminal. Shortly after Windows 11 appeared, and before I'd worked out how to turn off a lot of the obnoxious stuff, I was trying gently to propagate the virtues of Linux for some users, and someone fairly cluey about computing in a corporate environment said, "Oh, Linux, you have to do everything in the terminal." He was obviously thinking of the server people at work. But this myth persists because, if you go online looking for help, you'll see lots of how-tos using the CLI, even when GUI alternatives are available. I don't think this is flexing, mostly; if you spend all your working day in the CLI, it will be quicker to do simple jobs in the mode you're familiar with, and there are some things that you can only do in the terminal, and others that are easier (to go back to my beginnings, PIP B:=A:*.DOC is quicker and easier than copying all the originals, but leaving the back-ups, in a GUI file manager; a modern equivalent is sketched below). I don't know what to do about this, but at least the forums for desktop-oriented users might adopt a policy, or at least a guideline, or maybe a nudge, about preferring GUI methods unless the terminal is absolutely necessary (as some do). One last thing: do I believe in an urban myth? When RAID first appeared, I understood it was an acronym for Redundant Array of Inexpensive Disks, as opposed to the expensive (Enterprise, we would say nowadays) disks used for high availability. But so many acronyms have varying expansions, especially under the needs of social change. Was it always Independent disks?
    2
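    A modern equivalent of the PIP example above, as a small sketch: copy every .DOC file from one place to another while leaving the .BAK back-ups behind. The source and destination paths are invented stand-ins for drives A: and B:, not anything from the comment.

    import shutil
    from pathlib import Path

    src = Path("/mnt/a")   # stand-in for drive A:
    dst = Path("/mnt/b")   # stand-in for drive B:
    dst.mkdir(parents=True, exist_ok=True)

    # Copy only the originals (*.DOC); *.BAK files are simply never matched.
    for doc in src.glob("*.DOC"):
        shutil.copy2(doc, dst / doc.name)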
  355. 2
  356. 2
  357. 2
  358. 2
  359. 2
  360. 2
  361. 2
  362. 2
  363. 2
  364. 2
  365. 2
  366. 2
  367. 2
  368. 2
  369. 2
  370. 2
  371. 2
  372. 2
  373. 2
  374. 2
  375. 2
  376. My recent experiences with Windows make me think of Microsoft as a corporate predator, seeking to ambush its users to get more access to their information, or maybe old-fashioned money. It has great difficulty just shutting up and letting me get on with what I'm doing. It's the barefaced impudence with which it does it that is especially annoying. The lack of customisability in the interface is, perhaps, more understandable. The big bucks are in mass orders for corporations, and there are good reasons for big organisations to want all their desk-top computers to look and work the same, especially as hot-desking becomes another way of cutting people costs. Since there are also good reasons for an individual not to get too dependent on their own special way of setting up a computer, but just to get the most out of what the manufacturer gives you, probably the best we can hope for is something intelligently designed, and not changing just to give the designers a stunning innovation to put on their CV. As you were talking about Microsoft losing its monopoly, I thought at first this was wishful thinking (MacOS is annoying, too, and Linux won't get mass acceptance): but then I realised that ChromeOS will do most of what most people want to do on the desktop, and although Google are quite as nefarious as Microsoft, they tend to be a bit more subtle about the way they grab all the details of your on-line life. Microsoft, Google, Zuckerberg, Musk--it is all a bit dystopian, isn't it? Your fury is understandable.
    2
  377. 2
  378. 2
  379. 2
  380. 2
  381. 2
  382. 2
  383. 2
  384. 2
  385. 2
  386. 2
  387. 2
  388. 2
  389. 2
  390. 2
  391. I greatly admire your ability to present content clearly, and I learned from this, but I'd suggest the first section does not start from the right place, and misses a chance to free people from some bugbears about sound quality. I think there are two things that need making clear right at the start: the limitations of human hearing, and the fact that audio digitisation is a process of sine-wave reconstruction, not the smoothing out of little isolated steps (like the columns of a histogram). Both of these apply just as much to uncompressed formats: when the consortium introduced the CD format, this was based on a huge amount of study of the limits of human hearing, done by record companies as well as tech companies, and the record people would not want anything that would make their stuff sound bad (DGG was big into this). This established the parameters for the CD, and these are the outer limits. So people who pay extra for "high definition" audio at high sampling rates and bit depths for listening (as opposed to editing) are not getting anything tangible for their money (though doubtless it makes them feel special, and that is always worth spending money on--I do mean that). People who were brought up in the analogue world, where there was always a small (though diminishing) gain to be had from getting a better cartridge or a more exactly controlled turntable, often have difficulty in accepting that good enough is truly good enough, and that though you can get something that measures better, it is a physiological impossibility for it to sound better--just as you could build a computer monitor that emitted in the deep infra-red, but it wouldn't look any different. Also, it seems that the most discriminating hearing is to be found in young females, who are not strongly represented among the people who think a lot about audio techniques and equipment. The other thing that needs explaining -- and even a mathematical dunce like me can kind of grasp it -- is that the digitisation and reconstruction of a sound wave is a precise reconstruction of sine waves up to a certain frequency, and not a smoothing out of lumpiness (which implies approximation); a tiny numerical sketch of this is below. A misapprehension here leads to a misunderstanding of what is an easy sound to compress, and what is hard. The step-smoothing notion leads to the idea that an instrument whose sound is very close to a pure sine wave is hard -- so, solo flute is difficult, people sometimes think. Doubtless they think of the sound of a flute as pure, and of digitisation as a pollution of analogue purity. Whereas in fact a flute is very easy for digitisation, and hence for lossy compression. What is difficult is a Nordic death metal drummer at the end of a session hitting everything as hard as possible as often as possible, with diminished precision. These two things are hard for lots of us to grasp, at first, but getting them straight can end up saving a lot of disk space (not that that matters now) and a lot of anxiety about quality, leaving more money to spend on speakers, where it makes a difference, and more mental energy to spend on listening to music, which is the main point of the exercise. All this is from the point of view of the listener: for editing, of course, you want to stay well clear of the minima to make life easy: and maybe that's another clarification that could be made: there are significant differences in the requirements of the editor and the end listener, and your attention is perhaps more addressed to the needs of people who edit? Love your work.
    2
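    A tiny numerical sketch of the reconstruction point above, in Python with numpy (the tone frequency, sample rate, and grid sizes are illustrative choices, not figures from the video): sample a band-limited tone, then rebuild the waveform between the samples with Whittaker-Shannon (sinc) interpolation, which is a sum of wave-shaped functions, not a smoothing of steps.

    import numpy as np

    fs = 8_000        # sample rate, comfortably above twice the tone frequency
    f0 = 440.0        # a pure tone, like the idealised flute mentioned above
    duration = 0.02   # seconds

    # Coarse digital samples, plus a very fine time grid standing in for "analogue".
    n = np.arange(int(duration * fs))
    samples = np.sin(2 * np.pi * f0 * n / fs)
    t_fine = np.linspace(0, duration, 20_000, endpoint=False)

    # Whittaker-Shannon reconstruction: one shifted sinc per sample, summed.
    recon = np.array([np.sum(samples * np.sinc(fs * t - n)) for t in t_fine])

    # Compare away from the ends (truncating the infinite sum causes edge error).
    original = np.sin(2 * np.pi * f0 * t_fine)
    mid = slice(len(t_fine) // 4, 3 * len(t_fine) // 4)
    print("max mismatch over the central half:", np.max(np.abs(recon - original)[mid]))

    With an unbounded run of samples the mismatch would vanish exactly for any signal band-limited below half the sample rate; the small residual here comes only from truncating the sum at the edges of the snippet.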
  392. 2
  393. 2
  394. 2
  395. 2
  396. 2
  397. 2
  398. 2
  399. 1
  400. 1
  401. 1
  402. 1
  403. 1
  404. 1
  405. 1
  406. 1
  407. 1
  408. I found this very interesting, but I think you've oversimplified what manuscripts mean as evidence for historical fact. You are dead right in saying that the gap in years between the events and the first MS is of little to no importance. What really counts is the number of generations of copying, because it's pretty much an axiom that every act of copying introduces mistakes (Jewish scrolls of the Tanakh for use in the synagogue are perhaps an exception, but that's the product of extraordinary care, and I'd bet there'll be some errors even in that body of texts). So, if you have a MS that was written a thousand years after the composition of the text, but was copied directly and carefully from the author's fair copy, it is vastly more authoritative than a MS that's only a hundred years after the date of composition, but is a copy of a copy of ..., with some of those copies made either carelessly, or (worse) by people trying to "correct" or "improve" what they had in front of them. This applies to all manuscript evidence of all texts from before the introduction of printing. The second point, which you rather conflate with this, is the difference between Caesar's accounts of his wars, which are undoubtedly spun like a top, and are propaganda, but are at least trying to look like an accurate narrative of events, and the gospels, which are intended to communicate faith, and interpret the life of Jesus. All history is full of interpretation, but the accounts of leaders who are meant to be inspiring are, as you say, a different genre: the gospels probably sit between the popular accounts of the life of George Washington and the official biography of Kim Jong-Il. I suppose Billy Graham was trying to counter the Jesus mythicists, but it's a very poor argument, as you say, even against that futile hypothesis.
    1
  409. 1
  410. 1
  411. At least you see that anti-intellectualism doesn't confine itself to the humanities. You should have added that there is particular hate directed in that direction because it's an area in which women are in the majority: there's probably a big overlap between STEM bros and incels. But the abolition of the Humanities was going fast in New Zealand before this thesis hit the headlines. It's all playing into the populist mode. Headlines indeed: condemn a thesis on its title. Which fits well with the MAGA contempt for *all* credentialed experts, especially medical. Prevent a disease by injecting poison into your arms? Obviously wrong, if you just use common sense. BTW, I don't know if the thesis was bullshit or not. There is bullshit in the Humanities, mostly generated by the dedicated followers of fashion. There is too much research in the Humanities, and in the field of EngLit, a lot of it looks very unlike what I understand as the study of literature, and more like half-baked philosophy, politics, and psychology. But some of it is good, and asks us to look at what we take for granted. I'd be interested in a thesis on smell. Think of the late 18th and early 19th centuries. When the Prince Regent (to be George IV of the UK) first encountered his bride-to-be, Princess Caroline, he needed a large brandy because she wasn't much into personal hygiene. OTOH, it is said that Lord Nelson exhorted Lady Hamilton not to wash for a while, in expectation of his return. And to think that our attitude to smell is unrelated to our attitude to other social groups would be naive. Might be worth having a look at that. Who knows if the thesis did that, or just regenerated a lot of theoretical boilerplate, but it might be worth more than a YAST* thesis. And if you need to moan about the poor suffering taxpayer, it wasn't your taxes that paid for it, and Humanities research is dirt cheap compared with anything else except, perhaps, Theology. *Yet Another String Theory
    1
  412. 1
  413. 1
  414. 1
  415. 1
  416. 1
  417. 1
  418. 1
  419. 1
  420. 1
  421. 1
  422. 1
  423. 1
  424. 1
  425. 1
  426. 1
  427. 1
  428. 1
  429. 1
  430. 1
  431. 1
  432. 1
  433. 1
  434. 1
  435. 1
  436. 1
  437. 1
  438. 1
  439. Some of the effects of computational photography used to be achieved with (less precise) chemical methods: you could enhance the sharpness of an image by controlling the dilution of the developer and the frequency of agitation as it acted on the negative (enhanced sharpness, technically acuity, might be gained at the expense of fineness of detail), and that was a big thing in the 1960s. Doing this, and much more, digitally gives greater precision and control. At the moment, the basic happy snap from a phone camera is getting to the point where it looks a little over-processed--hyper-reality rather than reality. I notice this when I compare results from my iPhone 13 Pro with the pix I get from "real" cameras with bigger sensors (and much less processing power). But I also like the phone look, most of the time. And you aren't doomed to have it. I haven't tried using the RAW images from my phone, because I'm not doing anything serious any more, but the option is there on many makes of phone. What I would like is more ability to control the degree of processing, as you get on a stand-alone camera. Some control is there on phone cameras, but there could be more. But phone cameras are really the social equivalent of the box camera (or the Instamatic, like the little Rollei at the beginning of the video), and it's astonishing how technically capable they are. I would never have dreamed of taking a movie of a bumble bee enjoying itself in the flower of an artichoke, but it was a piece of cake with an iPhone SE (1st gen).
    1
  440. 1
  441. 1
  442. 1
  443. 1
  444. 1
  445. 1
  446. I love your work, but this especially was deeply moving--I speak as someone whose chief intellectual concern is being expunged from universities. In particular, though, two specific things struck me: the first was the Library as an institution of cultural imperialism (which does not invalidate the work done there--intellectuals work in the crevices and shadows of the exercise of power); and the other was the point that Alexandria is humid. I'd never thought of that. But, with papyrus, it means the library was doomed to disappear without a constant process of renewal. And, of course, when it comes to destruction by fire, shit happens: cf Notre Dame in Paris, and the 1992 fire at Windsor Castle in the UK. Both could have been started for all sorts of political reasons, but in fact the calamities were just dumb bad luck. On the numbers, two things occur to me. One is that it is easy to get confused about what counts as a book: for us, the Iliad is one book (as, for instance, in the translation by Emily Wilson plug plug--one ISBN); but it is divided into 24 "books," the division apparently done by Alexandrian scholars and so presumably corresponding to a scroll each in the library. On the other hand, there were presumably many works of which multiple copies were held: those same Alexandrian scholars worked on establishing the text of Homer, and you can't do that without multiple copies--especially because, in the world of manuscripts (texts written by hand), every copy is an edition. Thank you so much for this. I am off to the merch shop.
    1
  447. 1
  448. 1
  449. 1
  450. 1
  451. 1
  452. 1
  453. 1
  454. 1
  455. There's some very good evidence against a simple connection between critical thinking and atheism. First, it's easy to find theologians who are ferocious in their analytical thinking, and retain religious faith. I'm thinking mostly of medieval Christian academic theologians (like Gaunilo of Marmoutiers, a Benedictine monk who analysed and rejected Anselm's ontological argument for the existence of God). And on the other hand New Atheists do not by any means employ critical thought all the time. I lost what faith I had shortly after, and fairly certainly in part because, I read Dawkins' _The Blind Watchmaker_; I came back to a kind of Christianity after, and because, I read the first third of _The God Delusion_ and chucked it out because it seemed almost entirely devoid of analysis, and its mis-analysis of Christianity made it clearer to me what I actually kind of believed, or at least was prepared to take a punt on. Reading a bit of thoughtful modern theology made me realise that a lot of magical thinking that was kind of a drag on my mind was NOT a necessary part of Christianity. And I've been similarly Enlightened by some Jewish thinkers whose works I've read. Where there is a link between rationality and loss of faith may be where a person's religion is intricately involved with beliefs that are entirely incompatible with the normal use of human reason. I think of people with a belief in the inerrancy of the Bible, read literally. I note that Bart Ehrman, as a biblical scholar, has moved from belief to atheism as he has thought more about the nature of the biblical texts; but he seems to me to be a very Evangelical atheist. My examples are all from Christianity--and most of the discussion is about this faith. I'd really like to know how it goes in "Hinduism".
    1
  456. I really must make a contribution to my distro of choice. Thank you for making that clear. Recently I noticed some comments on a couple of channels expressing the notion that you have to be constantly fiddling with Linux to keep it going. IME, just doing the things a normie does with a computer, I find Linux less of a faff than Windows. But it does suggest that some Linux advocacy is counter-productive, in its desire to reveal further intricacies of the bowels of the system. It really doesn't help as much as people think to talk about distros as "good beginners' distros." That implies that Linux is an end in itself, that the user is embarking on a "Linux journey" towards the kind of inwardness that lets them save an e-commerce cluster by the cunning use of cat, grep, and some utility nobody has heard of. The point for most folk is that Linux gives you a system that you can spend less time thinking about, on which you have to spend less effort fending off efforts to trick you into signing up for something you don't need, and on which you can just get on with doing normal human stuff. One part of the FOSS campaign that I am beginning to take more seriously is "free as in libre," that is, not losing control of your data: the software-as-a-service push is making that more and more relevant. I recently tried to restore files from a copy of my OneDrive folder (it had everything local as well as in the cloud); without activating a OneDrive account, it was gibberish, and felt like a ransomware attack. Luckily I had two other backups of my disk, but I suddenly saw that Stallman had a point. So, whilst the F in FOSS has never meant "free as in beer," and I would really happily pay for a perpetual licence for Lightroom on Linux, I wouldn't want to subscribe to it. So, yes, FOSS is not Linux, but one of the virtues of the Linux world is that it is free of subscription services, and we need to watch for that if commercial software vendors do start making their stuff available on Linux. And I really must chuck some money at the devs of my beginners' distro.
    1
  457. 1
  458. 1
  459. 1
  460. 1
  461. I think this video is conflating two things. One is the general likability (a hard word to spell, even for a native English speaker) of scientists to the general population. As people have said, 7% is pretty good: try lawyers or politicians or car salespeople (or even lecturers in literature). The other is hostility from activist groups, who are committed to a fringe point of view (or sometimes a batshit point of view, or sometimes a frankly malevolent point of view) and enjoy expressing hate against anyone who opposes them. Sometimes I think the chief motivation for being an activist is the opportunity to get in a good hate. Social media makes it much easier for them to make that hate personal. In this case, shooting the messenger is definitely one motivation; another is probably cognitive dissonance--the activists kind of know their position is at least questionable; and the third is just being haters. But as well as these groups, I do find amongst some of my friends a kind of generalised anti-science. I think that is because they know that science has greatly increased human power to change the environment, but they think not of medicine and public health but of improved weapons, and of industrialisation on a huge scale before we understood the consequences. And some scientists don't help: the sort who dismiss caution over genetic modification, and the desire to balance benefits against potential risks, as "anti-scientific," as though that were the worst thing you could be. Scientists like that aren't likable.
    1
  462. 1
  463. 1
  464. 1
  465. 1
  466. 1
  467. 1
  468. 1
  469. Off and on Linux flirter for 20 years. User for 10, all GUI, Mint/Cinnamon for preference: easy to do now I've retired, but I'm not sure how I'll go shortly when I'm co-operating again on a book, and might have to go back to MS Word, which means Windows. Also I need a Windows machine for iTunes to organise my music for my iPhone. The new interest in Linux is because Windows 11 has become so predatory against its users. "Tell us everything about yourself. Entrust all your data to our cloud, where it will all be encrypted so that no one (apart from us) can access it, including you if you let your subscription lapse." Why Linux won't work for some people: 1: Some people use one or two pieces of software for their professional work: typically Adobe. The decision tree here goes: I need to use (say) Photoshop; what systems (hardware + software) will give me a good experience? The operating system isn't the choice, it's a consequence of prior choices. This is the same as people who have got some very expensive piece of manufacturing hardware which is old but still functional and central to their business, so they need to nurse into life some antique PC with a Centronics port because that's the machine's interface. 2: People who have a lot of experience with Windows or Mac, and know how to do out-of-the-way things, and try to do the same with Linux, and can't. It's partly that a lot of this is what is sometimes called "implicit knowledge": stuff you know without knowing how you got to know it. The charitable reading of the Linus fiasco is that he wanted to set up an advanced gaming rig with his Windows knowledge (the uncharitable interpretation is that he thought a bad-faith video about how Linux is too complicated and broken would be good commercially). This accumulated knowledge can be a real change-stopper, depending on how old you are and what sort of appetite you have for learning new stuff. For pretty much everybody, changing from Windows is a rational choice, but there are some people for whom Mac is a better alternative. (I used to like OS X in the days of the Big Cats, but too much of its functionality is hidden for me these days -- like menu items that only appear when you hold down the splat key while clicking on the menu.) And then there's gaming, but isn't everything better on a console anyway? I don't game.
    1
  470. 1
  471. 1
  472. 1
  473. 1
  474. 1
  475. 1
  476. 1
  477. 1
  478. 1
  479. 1
  480. 1
  481. This is excellent, but I'd want to add other terms in your discussion of world-views from about 13:00. First, I don't think we have to choose between naturalism, dualism, and immaterialism (or idealism, as it used to be known). It seems to me that it is possible to hold that the basic stuff of the universe has both a mind-like nature and a physical (material) nature: like dualism but without an absolute split. This seems to be the pov of Thomas Nagel (an atheist) in Mind and Cosmos when he declares himself a "neutral monist." It seems to me evident that there are facts of existence that we can't explain without using mentalist language, but dualism seems very problematic for accounting for how much our mind is intertwined with the electro-chemical functioning of our brains. (Test: describe two 12 year olds playing a game of chess, using only statements about brain states and functions.) Second, on epistemology: I am not happy with the only choices being "Science + Reason" and "Divine Revelation." There are things we know by intuition and by non-rational (which is not anti-rational) observation. I think of the way the arts can inform our understanding, but also things like human love, friendship, hostility, and hatred. There's also the way in games that if you're one-on-one with another player, you can sometimes just know whether they're going to come in hard on you. There's the way we can talk about a great poem being "inspired" without necessarily being divinely inspired. And on the other hand, I am very hesitant about what we might mean by calling the Bible "divinely inspired": certainly some meanings of that phrase mean things I couldn't believe. I realise that these positions might be a bit nuanced for a mass survey in a PhD thesis, but they're important to at least some people. For the record, I'm a church-going Christian (Anglican, which some people might regard as CINO), having been in and out of belief for most of my life. BTW, there's a typo in one of your slides: POSTITIVE ATHEISM -> POSITIVE ATHEISM, if you ever feel like fixing it. It's always the block caps where the last typo lurks.
    1
  482. 1
  483. 1
  484. 1
  485. 1
  486. 1
  487. 1
  488. 1
  489. 1
  490. 1
  491. 1
  492. 1
  493. 1
  494. 1
  495. 1
  496. 1
  497. 1
  498. 1
  499. 1
  500. 1
  501. 1
  502. 1
  503. 1
  504. 1
  505. 1
  506. 1
  507. 1
  508. 1
  509. 1
  510. 1
  511. 1
  512. 1
  513. 1
  514. 1
  515. 1
  516. I don't agree with that pronunciation you found: the second syllable has the ee sound, and typically the stress is on the second syllable, too. I do know Ancient Greek, btw, and I have thought a bit about asceticism, but I don't always get English pronunciation of Greek words right, so I checked Wiktionary (a good online solution for all your dictionary needs). I have no problem at all with the from-the-ground-up, learn-each-tool-as-you-need-it approach to Linux, or any other OS. For many people I'm sure it's the best way to learn an OS, and indeed, since I started on CP/M, I've had something of that trajectory, even though my switch to Linux was pretty much entirely about convenience, as I wanted something that's less intrusive and less of a faff than Windows. But I do have issues about calling that approach ascetic. As I understand it, asceticism is about two things: one is freeing yourself from the complications of unnecessary possessions and material concerns (a kind of wellness play); the other is to simplify your life to the utmost so as to concentrate on ultimate value, whether you think of that value as freedom from all illusions about the nature of "reality", or whether it's something for which the word you reach for is god, or the divine. The minimalist approach to Linux (why not Gentoo?) would certainly not leave anyone with much time to think about anything else, but if you take the asceticism angle seriously, it would seem to imply that the ultimate concern is, indeed, Linux. Nothing wrong with that, as long as you don't frighten the noobs who are just looking for a better Windows than Windows (where have I heard that before?); but I'm not sure whether many historical ascetics would agree with it (though, how monastic are the Shaolin martial arts monks?). Another practice that goes with asceticism is anchoritism: withdrawing into a secluded or solitary life as an anchorite or hermit (all anchorites are ascetics, but not all ascetics are anchorites). Hmmm. Anyone going to talk about basements? The notion that someone who's become one with Linux, as described, would be happy with macOS is a bit laughable. Sure, it's BSD underneath, with a funny kernel, but Windows is supposed to be VMS underneath, and when ordinary people talk about those OSes they are thinking of the DE (there's not even a Mac server edition these days, is there?), and I hate the modern Mac experience because it's as opinionated as Gnome, and far too eager to leap in and do what it thinks you want it to do.
    1
  517. 1
  518. 1
  519. 1
  520. 1
  521. 1
  522. 1
  523. 1
  524. 1
  525. 1
  526. 1
  527. 1
  528. 1
  529. 1
  530. 1
  531. 1
  532. 1
  533. 1
  534. 1
  535. 1
  536. 1
  537. 1
  538. 1
  539. 1
  540. Originally, j wasn't a separate letter, but just a fancy way of writing i. You see it particularly at the beginning or end of words in medieval English spelling. So, when simply writing numbers in ordinary narrative, with no need for tamper-evident records, you still see 3 written as iij, and 6 as vj (a toy sketch of the convention is below). Fun fact: although differentiation of i and j as separate letters representing different sounds started as early as the 16th century, it took a long while for this to become fully established--especially since at first most people who could write could write Latin, in which j is a purely calligraphic variant. Washington DC has streets named by letters, so: A, B, ...G, H, I, K, L.... There is folk-lore that this is the result of a feud, and that whoever was in charge of street naming had a deadly enmity against someone called Jay. Nice story, and it tells us how Americans think about the conduct of politics, but it was just that J still wasn't firmly enough established to count as a separate item in a sequence. Those who have had the privilege of flying in the Queen of the Skies can perhaps remember the seat lettering in Economy on a 747: ABC (aisle) DEFG (aisle) HJK. Same structure, but the once-mighty Boeing company chose the fancy member of the pair. Indo-Arabic numerals start appearing in English manuscripts in the late Middle Ages--I remember a late 15th c. manuscript that uses Indo-Arabic numerals for folio numbers (which might, admittedly, be a bit later than the original writing). At first, some of the numbers are written in a different rotation from the modern usage, so 4 (IIRC) is written as though, to our eyes, it's lying on its back. These were the orientations of Arabic writing--I don't know why they were later rotated.
    1
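    A toy Python sketch (purely illustrative, not from the comment) of the convention described above: small numbers written additively in lowercase roman numerals, with a final i written as the fancy letter j, giving iij for 3 and vj for 6.

    def medieval_roman(n: int) -> str:
        """Lowercase, additive roman numeral for 1..39, with a terminal i written as j."""
        if not 1 <= n <= 39:
            raise ValueError("sketch only handles 1..39")
        numeral = ""
        for value, glyph in ((10, "x"), (5, "v"), (1, "i")):
            count, n = divmod(n, value)
            numeral += glyph * count
        # The medieval habit: the last i in a run is written as j.
        if numeral.endswith("i"):
            numeral = numeral[:-1] + "j"
        return numeral

    print(medieval_roman(3), medieval_roman(6), medieval_roman(14))  # iij vj xiiij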
  541. 1
  542. 1
  543. 1
  544. 1
  545. 1
  546. 1
  547. 1
  548. 1
  549. 1
  550. 1
  551. 1
  552. 1
  553. Are there any studies of the economics of massive projects like this? I don't mean cost benefit analyses, so I guess it's really finance rather than economics, but basically: Where does the money end up? A collider is a very large hole in the ground, full of things made of steel, copper, and more exotic materials. Do you get particle physicists to dig the hole? Hell no, you get one of the specialised companies, with their highly skilled and nomadic work force, who dig holes, whether for water pipes or underground trains or Elon's latest X-wormerator to extrude people through the earth. The highly skilled, and rightly highly paid, workforce comes to the new site, and spends money in the local economy, which generates taxes for the host nation. The tunnelling companies presumably pay taxes somewhere. And so on with all the equipment inside the hole. A large part of the operation can be seen as the transfer of money from science budgets in the funding nations to the companies that do most of the engineering and the nations where they pay taxes. Whereas if you funded, let us say, theoretical research, the capital costs would be much lower; the theorists could even move into the space freed up by the abolition of the Humanities, though they will have to compete with the expanding Department of Social Media and Marketing. And you could fund small experiments, too, that know what they're looking for. Instead of the mega-enterprise, a model might be the sort of labs where the foundations of modern physics were experimentally explored, that seem to have required a good supply of string and sealing wax, and a very good glass-blower.
    1
  554. 1
  555. 1
  556. 1
  557. 1
  558. 1
  559. 1
  560. 1
  561. 1
  562. 1
  563. 1
  564. 1
  565. 1
  566. 1
  567. I'm a non-technical user of Mint, but NOT a new user. Been using it for years. That's a distinction worth bearing in mind: it's not that I don't YET know stuff, it's that I've got other things I'd rather be deep into than Linux technicalities, and I rely on useful sources, such as yours (thank you), to help me sort out what I really need to know to use Linux felicitously. It sounds presentational, really. So "Verified" is not a guarantee of absence of malware, but no such guarantee is possible, I think? Someone sufficiently motivated and resourced could presumably infiltrate malware into the Microsoft Store (probably starting from Petrograd). So the question is, for a non-technical user, are they better off sticking to Verified flatpaks? (I actually want to know, and so far I have the impression that the answer is "Yes," to some degree.) And if so, how to present the information? Remembering that non-technical users get MEGO (my eyes glaze over) pretty quickly. A question I'd like the answer to is: which source is least likely to serve up malware--the distribution's repo, Verified Flathub, unverified Flathub, a random binary, a random flatpak? I've got a clue, but I'd like to know the detailed rankings. Or perhaps it's not possible to give more than a general answer, which would be good to know. Last, I take the point about what happens if flatpaks are not available through the preferred source. The answer might seem to be to say, "VLC is great (for example); we think you should install it from our repository, rather than this unverified flatpak." Given that the Mint package manager now shows traditional packages and flatpaks on the same page, this seems like a reasonable idea? And a way of combatting the erosion of safety measures (some clown will always tear down the fence at the top of the cliff). Oh, and post-lastly, are there any advantages for the user in installing flatpaks? Is the sandboxing of any security benefit, for the user? Any benefits in app updates? I observe in the Mint package manager that typically flatpaks are a more recent version than what's in the distribution's repository, but I have come to conclude that that's not necessarily an advantage. The moral is maybe one of the things I learned in an early part of my experience with computers: don't be an early adopter. Wait for someone else to find the bugs (and now the scams). (And, BTW, never ever install version x.0, and with Microsoft wait for v. 3.1.)
    1
  568. 1
  569. 1
  570. 1
  571. 1
  572. 1
  573. 1
  574. 1
  575. 1
  576. 1
  577. 1
  578. 1
  579. 1
  580. 1
  581. 1
  582. 1
  583. 1
  584. 1
  585. 1
  586. 1
  587. 1
  588. 1
  589. 1
  590. 1
  591. 1
  592. 1
  593. 1
  594. 1
  595. 1
  596. 1
  597. 1
  598. 1
  599. 1
  600. 1
  601. 1
  602. 1
  603. 1
  604. 1
  605. 1
  606. 1
  607. 1
  608. 1
  609. 1
  610. 1
  611. 1
  612. 1
  613. 1
  614. 1
  615. 1
  616. 1
  617. 1
  618. 1
  619. 1
  620. 1
  621. 1
  622. 1
  623. 1
  624. 1
  625. 1
  626. 1
  627. 1
  628. 1
  629. 1
  630. 1
  631. 1
  632. 1
  633. 1
  634. 1
  635. 1
  636. 1
  637. 1
  638. 1
  639. 1
  640. 1
  641. 1
  642. 1
  643. 1
  644. 1
  645. 1
  646. 1
  647. 1
  648. 1
  649.  @adamplentl5588  Well, according to the source of all knowledge, the International Humanist and Ethical Union says, _inter alia_, that Humanism "stands for the building of a more humane society through an ethic based on human and other natural values in the spirit of reason and free inquiry through human capabilities." Note the use of the word "humane", which implies some positive value to humanness. To try to build an ethics or morality or value system on a purely human basis implies that we are apt to bear the weight. And it genuinely seems to me that that is an open question. We are truly capable of great acts of good; and also of great acts of evil and destruction. See the history of most major religions for appalling examples of the human capacity for evil; and they are human acts, though done under the delusion that they were God's will. It's possible to say that we should just work for the best considering only human values; but it seems extremely hard to agree on what human values might be. Competition or cooperation? Probably both, either in a blend, or in different contexts, but how do you get to agree? It looks a bit like the attempt to gain the authority of a religion, without openly making a metaphysical commitment; assuming that values we can all agree on are "natural". I think there's a lot to be said for basing common life on a principle of naming what is obviously wrong, and trying to fix it, without any deep metaphysical rootedness. But whilst most of us would agree that having a lot of people homeless and sleeping on the streets is wrong, there will be some people who will question whether it's wrong (not, I hope, any people you or I know), and huge disagreement on how to fix it. Sorry to take up your time when you obviously have much more important things to do, but I really don't think "humanism" is on a par with any of the other ethical bases you mention; not least because utilitarianism and consequentialism seem to me both compatible with whatever "humanism" might be, if it is not an identification of humanity as itself a source of positive value. "Deontology" in a secular form just seems to be plucking moral absolutes out of a hat.
    1
  650. 1
  651. 1
  652. 1
  653. 1
  654. 1
  655. 1
  656. 1
  657. 1
  658. 1
  659. 1
  660. 1
  661. 1
  662. 1
  663. 1
  664. 1
  665. 1
  666. 1
  667. 1
  668. 1
  669. 1
  670. 1
  671. 1
  672. 1
  673. 1
  674. 1
  675. 1
  676. 1
  677. 1
  678. 1
  679. 1
  680. 1
  681. 1
  682. 1
  683. 1
  684. 1
  685. 1
  686. 1
  687. 1
  688. 1
  689. 1
  690. 1
  691. 1
  692. 1
  693. 1
  694. 1
  695. 1
  696. 1
  697. 1
  698. 1
  699. 1
  700. 1
  701. 1
  702. 1
  703. 1
  704. 1
  705. 1
  706. 1
  707. 1
  708. 1
  709. 1
  710. 1
  711. 1
  712. 1
  713. 1
  714. 1
  715. 1
  716. 1
  717. 1
  718. 1
  719. 1
  720. 1
  721. 1
  722. 1
  723. 1
  724. 1
  725. 1
  726. 1
  727. 1
  728. 1
  729. 1
  730. 1
  731. 1
  732. 1
  733. 1
  734. 1
  735. 1
  736. 1
  737. 1
  738. 1
  739. 1
  740. 1
  741. 1
  742. 1
  743. 1
  744. 1
  745. 1
  746. On AI. On the big claims for AI, I am deeply sceptical; sticking strictly to the material basis of consciousness, AI so far models only the electrical connections of the brain and leaves out the emotional/chemical part of the thing (Damasio, Descartes' Error, for what I mean). BUT I have become converted to the extreme usefulness of 'AI' for specific functions since I discovered how much SpeechNotes improved voice-to-text transcription running an LLM locally, on Linux, with modest hardware. So I'd suggest you might attend, not to the big claims, but to quite specific uses of cognitive computing for practical tasks, running locally. For instance, it seems to me that it should be possible to get a scam detector system that runs entirely locally on the sort of adequate hardware we all have now, and which would be more accurate than traditional spam filters. Telegram is always a no-no, but some scams take a bit longer to detect. I have no clue how it would be done, and I might be wrong, but maybe it would be interesting to explain why I'm wrong (there's a rough sketch of the kind of thing I mean just below). Similarly, as Windows becomes less and less tolerable, we're all trying to get away from Adobe. Are there ways in which the new computing could help with the analysis of graphics projects, maybe even generating hints on how you could achieve specific results with non-Adobe software? For example: - AI: The sky looks a funny colour in this photo. Do you want to change it? - Human: Yes, I should have used a polarising filter when I took the photo. In Photoshop I'd do <procedure>. How could I do that in DarkTable? - AI: Well, let me see .... What you could try is <procedure>. Now that might be too blue sky for where we are, or could only be done by major collective effort, but it would help me to know what I might expect, and not expect, from new developments in computing -- stuff that would be helpful for me, not just help Predatory Commerce try to make more money out of me. This, I think, would fit with what I see as three main streams in your channel. The first is the Maker stream (we used to call it Hobbyist, I think, but it is truly beyond that); software maker rather than hardware, but still. The second is your attention to where computing is going (both Big Computing and Small Computing). The third, and what really engages me personally, is your careful and clear explanation of what computing can do for us normies who quite like the hardware and software, but are only really involved because it helps us do stuff. And maybe you could run a competition for the most entertaining AI hallucination of the month? As for a prize, I used to reward co-workers with an Internet Gift Voucher, good for one fantasy of your choice. Maybe now we could have AI-assisted fantasies? I live a long way away from England these days, but it took me back when I saw that to venture into the open air in the middle of summer, you needed a pretty serious-looking jacket. Hope this helps, one way or another. Love the channel.
    1
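  To make the scam-detector wish a bit more concrete, here is a minimal sketch of the sort of thing I have in mind, assuming the llama-cpp-python bindings and a small quantised model file on disk; the model path, the prompt wording, and the one-word-answer trick are my own guesses, not a tested recipe:

# Minimal sketch: ask a locally run LLM whether a message looks like a scam.
# Everything stays on the local machine; no text is sent to anyone's cloud.
# The model file name is a placeholder -- any small instruction-tuned GGUF
# model that llama-cpp-python can load should do for an experiment.
from llama_cpp import Llama

llm = Llama(model_path="models/small-instruct.gguf", n_ctx=2048, verbose=False)

def looks_like_scam(message: str) -> bool:
    prompt = (
        "You are a cautious email and message filter. Reply with exactly one "
        "word, SCAM or OK.\n\nMessage:\n" + message + "\n\nReply:"
    )
    out = llm(prompt, max_tokens=4, temperature=0.0)
    return "SCAM" in out["choices"][0]["text"].upper()

print(looks_like_scam("You have won a prize! Contact us on Telegram with your bank details."))

  Whether something like that actually beats a well-tuned traditional spam filter is exactly the sort of question I'd like to see tested on modest hardware.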
  747. 1
  748. 1
  749. 1
  750. 1
  751. 1
  752. 1
  753. 1
  754. 1
  755. 1
  756. 1
  757. 1
  758. 1
  759. 1
  760. 1
  761. 1
  762. 1
  763. 1
  764. Suggestions about redirecting the students' creativity, as at 9:25, are obviously made by people who have no experience of educational institutions, apart from having attended one with an intention to learn (which makes them a minority from the start). If you consider what has been happening at this school--I mean, the lived reality of the context described in the use-case--then you realise that for the poor bastard asking this question, burning down the school (7:45) could be a beneficial side effect. Alternative suggestions also do not take seriously the chronic underfunding of public schools in many parts of the world: in the USA, for instance, I gather that school teachers have to buy pens and paper for the kids out of their own, tax-paid, incomes. So the probable situation is that the school has the person who's asking the question, and one tech, to look after the kit. How long would it take to disconnect a speaker on a PC? Remember that you not only have to do the deed, but also travel from computer to computer, probably including travelling between buildings. And all this, BTW, would have to be done out of hours, so as not to disrupt the creative little angels while they're (ostensibly) learning. I guess, on average, about half an hour per machine. Might be twenty minutes, if the logistics are favourable. How many computers? Sounds like they've got a lot. Maybe 200? So that's 100 hours, or two and a half weeks' work (in a civilised country with a 40-hour week: even in the USA, management would have to budget 10 days to get the job done; the sum is spelled out below). So, by the time you get to the end of the process, the computers you did first have already had new, louder, speakers put in by the kids. And you've made it a competition, so it's a matter of principle now. Who gets to turn the school PC into a theremin? I know, I'm cynical about kids--and I haven't even been a school teacher. And many students could doubtless be diverted to more useful aspects of computer systems programming, though of course you'd have to fit it in with the government-mandated syllabus. Or you could set up a computer club, so that's someone who volunteers to give up another evening a week for the privilege of looking after other people's children, and organising the use of facilities, and ensuring that they don't try a ransomware attack on the local hospital, all on an income that pays a poor hourly rate even if you just stick to the official part of the job. But even if all this worked, you wouldn't get everyone. A few little twats would just like causing trouble for the sake of causing trouble (or maybe shit posting is not really a thing?), so it would start again, and others would then join in, and you're back at square one, though maybe with a more sophisticated lot of troublemakers because of all they've learned in Computer Club. In the circumstances, the question, certainly asked after much thought and in desperation, seems entirely sensible. To people who have never been in a classroom, teaching seems easy and obvious. And bits of it are, indeed, good. So why don't you go and frigging do it? But in the real world, I think the questioner would have been justified in asking for a good implementation of EOU.
    1
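  For anyone who wants to check the arithmetic, the back-of-envelope sum fits in a few lines of Python (the 200 machines and the 20 to 30 minutes per machine are my guesses, as above):

# Back-of-envelope labour estimate for physically disconnecting PC speakers.
# Machine count and minutes per machine are guesses, as in the comment.
machines = 200
minutes_per_machine = 30   # maybe 20 if the logistics are favourable

total_hours = machines * minutes_per_machine / 60
weeks_at_40h = total_hours / 40

print(f"{total_hours:.0f} hours, roughly {weeks_at_40h:.1f} weeks at 40 h/week")
# prints: 100 hours, roughly 2.5 weeks at 40 h/week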
  765. 1
  766. 1
  767. 1
  768. 1
  769. 1
  770. 1
  771. 1
  772. 1
  773. 1
  774. 1
  775. 1
  776. 1
  777. 1
  778. 1
  779. 1
  780. 1
  781. 1
  782. 1
  783. 1
  784. 1
  785. 1
  786. 1
  787. 1
  788. 1
  789. 1
  790. 1
  791. 1
  792. 1
  793. 1
  794. 1
  795. 1
  796. 1
  797. 1
  798. 1
  799. 1
  800. 1
  801. 1
  802. 1
  803. 1
  804. I think the change in museums started before the internet--certainly before the Web--and in part it's a good thing. Museums used to be, in part, collections of weird and rare stuff, like books on the history of languages used to be all about exceptions. Then the idea got around that they ought to be telling the story of the main streams of development, and that is good. But then museums had to justify their (inadequate) funding by crowds through turnstiles, and some did get dumbed down. York in England is a classic place for museology. One of its museums is a very early introduction of the Folkmuseum idea to the UK, and it's a great museum of everyday life--if you're old enough, you can join in the exclamations of "My grandma used to have a room just like that!" There are also a ton(ne) of churches in York that no longer have congregations. A lot were turned into museums, but there aren't all that many artefacts, so they used the space to tell relevant stories about craftwork and building and such, with good signage, using the building itself as a physical contextualisation. The star used to be Jorvik, which is a museum housing an archaeological excavation; at first it was a brilliant combination of a house of scholarship displaying current research and an innovative exhibition which engaged the ordinary interested person and gave a real feel of the sights, sounds, and smells of the Viking-era settlement. Alas, commercial pressures mean it's now a theme-park ride, with lots of gamified learning activities for the kiddies, and the seriously interesting and, beyond that, the scholarly important stuff is pushed to the background. But what I'm sure museums have to stick to, and what they have over the internet, is the artefacts. The things themselves. It's great seeing all this stuff on Ian's channel, and InRange, and Othais and May, but there is no substitute for the things themselves, whether it's guns or paintings or medieval manuscripts. Must be a hard time to work in museums.
    1
  805. 1
  806. 1
  807. 1
  808. 1
  809. 1
  810. 1
  811. 1
  812. 1
  813. 1
  814. 1
  815. 1
  816. 1
  817. 1
  818. 1
  819. 1
  820. 1
  821. 1
  822. 1
  823. 1
  824. 1
  825. 1
  826. The placebo problem is not just a witticism. I am mildly afflicted with depression, and during a very stressful time at work started taking an SSRI (selective serotonin reuptake inhibitor: the currently widely prescribed class of antidepressants). My doctor told me it would take some time to work as the drug titrated up to the effective level, but in a couple of days I felt relief. Not numbing: I still knew that my situation at work was shit, but I wasn't oppressed into hopelessness by the realisation of this. I wondered why it worked so quickly. Then a series of studies was reported, showing that the effects of SSRIs were not distinguishable from placebo. I'd also retired, so I thought I would try gradually weaning myself off the drug. After about a week of gradual reduction of dose, disturbing forms of ideation re-appeared. Ten years on, I've tried the same reduction, with the same results. My provisional conclusion is that SSRIs are indeed placebos, but such powerful placebos that they work even on people who believe them to be placebos (like me). My doctor thinks that perhaps the brain has enough serotonin all the time, but the SSRIs might affect the way a particular brain processes it. I always take my doctor's hypotheses seriously, but I think that perhaps there are funny things going on in the mind, and for things that have a large mental component (yes, the mind is a thing, even though we don't understand it) it is EITHER hard to distinguish placebo from pharmacological activity that works sometimes OR there are super-placebos or meta-placebos that sometimes work on people who believe them to be placebos. Or you could call it magic.
    1
  827. 1
  828. 1
  829. 1
  830. 1
  831. 1
  832. 1
  833. 1
  834. 1
  835. 1
  836. 1
  837. 1
  838. 1
  839. 1
  840. 1
  841. 1
  842. 1
  843. 1
  844. 1
  845. 1
  846. 1
  847. 1
  848. 1
  849. 1
  850. 1
  851. 1
  852. 1
  853. 1
  854. 1
  855. 1
  856. 1
  857. Plain text is universal? Yeah, right. When I started with little computers, some of my colleagues were using an IBM word processor. Guess who got to be the expert on translating EBCDIC to ASCII? (These days that's a few lines of Python, as sketched below.) My wife and I were doing a book: the general editor had support at his university computer department, who worked with LaTeX. So guess who learned how to put in LaTeX codes using the simple, austere word processor we were using then. And then the editor lost the support, and I forget what format we ended up with, but I know one author in the bibliography had a Polish given name which was spelt with a z with a dot over it. Long before Unicode: so I bought a copy of WordPerfect, and learned it (in so far as anyone actually learned WordPerfect, rather than being quick at navigating the cheat-sheet template). The problem, I think, is that a lot of the people who pontificate about Linux are developers and sysadmins (to whom, respect) for whom writing is producing documentation for other professionals. But a lot of writing IRL is for publication, either in dead-tree or e-book format, and what publishers want is Word-format files, and they want authors to do all the formatting for what used to be called camera-ready copy. (Maybe if you're a best seller, this doesn't apply, but it's the way it works in academic publishing.) For this purpose, word processors don't do a fully professional job, but they will produce a passable result that's good enough for academic publishing. Though I observe that publishers still have difficulties with getting footnotes done properly in ebooks. Publishers (outside the technical sphere, perhaps) do not want LaTeX any more than they want nroff; they want .DOC or .DOCX. Commercial and advanced FOSS word processors can get incompatible (hell, MS Word can be incompatible with itself if there's enough of a gap in versions and platforms), but that only applies to pretty recondite sorts of usage. These days, for the sort of thing that markdown does, the compatibility is good. Especially if you use .RTF, which is proprietary, indeed, but MS is not making any money out of it, and .RTF will tell you if you're doing something too intricate for it. Where word processors can be, and certainly used to be, evil is when there's a monopoly. Microsoft used to change the .DOC format with every upgrade. This would drive massive sales of upgrades by a simple mechanism. It used to be a rule in large organisations that the person who had the very latest desktop PC was the CEO's PA. So, an EDICT would be issued from the desk of the Supreme Manager. It would be typed up (and probably corrected for grammar and spelling) by the CEO's PA (or, as it was in those days, Secretary) and she (as it was in those days) would promulgate it to the masses. Since the CEO's PA/Secretary was a very intelligent and capable person (probably smarter than the CEO), she was in complete command of the new version of Word, and would use its new features. So when the message came to the peons, and they opened it in their old versions, they could not access the guidance of the Dear Leader in all its fullness, and so each department paid for upgrades, and so was increased Bill Gates' fortune (ill-gotten, but now used well). And if you want pure, undistracted composition of a first draft, nothing beats paper and a 2B pencil.
    1
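  The EBCDIC chore, at least, has become trivial: Python ships codecs for the common EBCDIC code pages, so a rough sketch looks like this (the file names are placeholders, and code page cp500 is an assumption -- IBM shops used several):

# Translate an EBCDIC text file into UTF-8 (a superset of ASCII).
# cp500 is EBCDIC "International"; cp037 is the US/Canada variant.
with open("report.ebcdic", "rb") as f:
    ebcdic_bytes = f.read()

text = ebcdic_bytes.decode("cp500")      # EBCDIC bytes -> Python string

with open("report.txt", "w", encoding="utf-8") as f:
    f.write(text)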
  858. 1
  859. 1
  860. 1
  861. 1
  862. 1
  863. 1
  864. 1
  865. 1
  866. 1
  867. 1
  868. 14:20 Psychopath. Are you suggesting that Gnome should adopt Reiser Sans as the default? All the font minutiae are real, but often in quite compartmentalised use cases. Actual dead-tree printing has had over 500 years of this, and for a book I'm not sure there's anything much better than the font Nicolas Jenson designed in Venice in the second half of the fifteenth century: but it looks pretty crap on a computer screen. Also, it's not obvious that different alphabets should belong to the same font family, though it is clear that one should pay some attention to how a particular Arabic or Hebrew or Georgian script looks alongside a particular Latin face. Each of them has its own tradition of calligraphy and type design, quite separate from the Latin tradition, so there's no a priori reason why they should belong together in the same act of design. Which means that wanting to have one font to rule them all is likely to introduce complications which could be avoided by accepting that a system could have a variety of fonts available, even for the default display font, depending on the default language. Possibly even a different font for languages using the Latin alphabet with a lot of diacritics. One thing I find troubling in the discussion is that there is no mention of readability studies. There are the obvious abominations like I and l being indistinguishable (as in the font I see on YouTube now), and my pet hate of l being hard to distinguish from i in some quite fashionable fonts; and then there's telling the difference between rn and m, which is unnecessarily hard in some sans-serif faces. But there have been more general studies, taking into account different levels of visual acuity and stuff. BTW, making a Bold by just tweaking some parameters on a base font is, I think, regarded as devil's work by font designers. Even scaling by point size can be usefully tweaked, if you're aiming for the font beautiful. A distro using a clone of Comic Sans? To go alongside Hannah Montana OS?!
    1
  869. 1
  870. 1
  871. 1
  872. 1
  873. 1
  874. 1
  875. 1
  876. 1
  877. 1
  878. 1
  879. 1
  880. 1
  881. 1
  882. 1
  883. 1
  884. 1
  885. 1
  886. Groupthink is real, and when it leads to opposition to, say, nuclear power, it's harmful. But you can't keep science and politics separate (that is a statement about what is possible, not what is desirable). We need to do something -- well, I'm nearly 80 and have no kids, so I'll say YOU LOT have to do something if you want to avoid really bad times -- and doing something is what politics is about. Al Jaber, also, was not making some innocent remark; the major producers of fossil fuels have been trying to stifle honest discussion by setting up climate science. Sure, it is important to remember what the main point is; and if carbon capture can be made to work, that would be wonderful. So in that sense fossil fuels are not the core problem. But, as far as I know, there is science to say that the major cause of excessively rapid climate change is the release of carbon dioxide and methane into the atmosphere; and that the chief source of these gases is the use of fossil fuels. So, by a chain of reasoning that even Tucker Carlson could follow, there is science behind concern about the use of fossil fuels. When does carbon capture reach a scale to allow us to use fossil fuels like we do at the moment? About the same time that nuclear fusion becomes a practical source of power, I'd bet. The other reason why it is intellectual purity to the point of naivety to decry the mixing of politics and science is that a lot of the denialist rhetoric is an attack on science as an institution. It's all a big conspiracy to do down the salt of the earth God, guns, and family Americans. If these people were susceptible to reason, it would be good to ask how the big oil companies find reserves. They employ expert people to suggest where to look. Who are these people? Geologists. Scientists, who don't seem to be interested in overturning the master conspiratorial narrative about where hydrocarbons come from (and who probably don't believe in the literal truth of the Genesis story). Politics is dirty. Science is cleaner, though in fact there's a lot of internal politics in science, as in every part of institutionalised intellectual activity (string theory? new particle accelerators?), but it's only really cleaner because the stakes are, for the most part, small. But if the climate deniers carry on with an unremitting, orchestrated narrative that has no relationship to the truth, it is an unfortunate necessity that people wanting to oppose them have to think about the political effectiveness of their statements, more than the nuances of precision. At least it's not as bad as my old subject, LitCrit, which is almost entirely politics these days.
    1
  887. 1
  888. 1
  889. 1
  890. 1
  891. 1
  892. 1
  893. 1
  894. 1
  895. 1
  896. 1
  897. 1
  898. 1
  899. 1
  900. 1
  901. 1
  902. 1
  903. 1
  904. 1