YouTube comments of LoneTech (@0LoneTech).
-
It's quite a bit more complicated. For instance, with the structs shown, the linked list is likely twice as large even before taking memory allocator overhead into account. Just as importantly, traversing the entire list is not necessarily your most important operation, and the linked list makes operations like insertion, removal and append (if you track the tail) quite fast, while the array may need to copy all other elements each time. As for heaps, two different things commonly go by that name: the memory allocation heap and the heap queue. The heap queue algorithm maintains a partial ordering within an array at limited cost, such that you can always extract the first (smallest or largest) item. It maintains a form of binary tree, and the technique is applicable to other structures.
In short, horses for courses.
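As a concrete example of the heap queue, Python's heapq module keeps that partial order inside a plain list:

import heapq

tasks = []                            # a plain Python list used as the backing array
for priority in [5, 1, 4, 1, 3]:
    heapq.heappush(tasks, priority)   # O(log n) per insert; only a partial ordering is kept
while tasks:
    print(heapq.heappop(tasks))       # always yields the current smallest: 1 1 3 4 5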
-
USB is most certainly not using set 2 - for the reason why, look up what the scancode is for the arrow keys, which have a prefix for set 1 compatibility, or the Pause key which is 7 bytes long. USB keycodes are in alphabetic order and Z in particular is 1D (see HID usage tables, section 10 keyboard). Your firmware might be translating it for legacy OS support, however. Set 3 was more logical in that it used a single bit for press/release and one byte for each event. Also, the polling in USB is performed by the host controller, not the CPU, so we still have an interrupt for when things have changed. There are also buffers in both keyboards and port controllers for PS/2, AT etc, but the CPU typically has to handle them byte by byte.
-
The summary in the first sentence is a bit off target; the first part describes a type class (which Monad is; another example is Num for numeric, which contains arithmetic operations), and the second part is already enabled by sum types holding fields (which Maybe is; so is Either, and both have instances of Monad). But neither describes the concept of a monad. The prelude describes it thusly: "it is best to think of a monad as an abstract datatype of actions".
A monad in Haskell provides a way to sequence processing steps; they could pass information from one to the next, return information, be skipped, repeated, or even reordered. For instance, the STM monad (which allows shared mutable state in multithreaded programs without locking) can repeat actions which it determines need to be retried, and the Maybe monad can skip actions it determines receive no input. The IO monad is the most flexible, as it can perform anything at all, including non-deterministic behaviour; it is also isolated, in that the only thing that performs IO actions is the main function. In effect, the main function's job is to produce the sequence of actions that shall be performed. The Either monad can be easily applied as exceptions are, in that it carries information down the fast Left path as well as the thorough Right path, and the Control.Exception module contains support for translating between this form and IO monad exceptions.
Many monads do much simpler work; for instance Identity, First, Last, Max, Min, List (called simply [] in the library), Product and Sum. These are used to take the same internal steps and produce different results from them. State provides a way to carry data past steps, so the steps themselves don't need to forward all state. All of these are deterministic and polymorphic.
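If a sketch helps, the skipping idea behind Maybe can be roughly transliterated into Python (the bind helper below is my own analogy, not Haskell's >>= or any library API):

def bind(value, step):
    # Maybe-style sequencing: once an earlier step produced nothing, later steps are skipped.
    return None if value is None else step(value)

def half(n):
    return n // 2 if n % 2 == 0 else None   # "Nothing" for odd numbers

print(bind(bind(8, half), half))   # 2
print(bind(bind(7, half), half))   # None; the second half() is skipped entirely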
-
This exercise is designed to demonstrate the flexibility of semaphores and test students' understanding of them. Unfortunately, it revealed flaws in your understanding, not a great point to start teaching from.
This video didn't describe the challenge very well. Sadly, neither did the original paper. This is why you both missed relevant points of synchronization; in the original paper, santa may finish deliveries before the reindeer arrive at the sleigh. In yours, the sleigh does not exist and the reindeer may go back on vacation before santa finishes his deliveries.
The "sneaky bug" reveals a notable misunderstanding of what semaphores do. This bug is imaginary, as semaphores can hold a number of posts. Santa posted 9 times, so 9 times a reindeer will get out. It doesn't matter that santa moved on, the count is in the semaphore; like a counting turnstile. The behaviour you worried about could occur with POSIX condition variables, where you can only wake already waiting threads.
The use of mutexes missed the point that semaphores can do that too (it's why C++ named their methods release and acquire, instead of signal and wait). Perhaps more concerning is that you overserialized independent blocks by sharing the lock for different variables.
The explanatory value of flashing up miniature pictures of large chunks of code, that we have neither the time nor resolution to read, is doubtful.
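A small Python sketch of that counting behaviour, with threading.Semaphore standing in for whatever semaphore API the video used; the nine posts are remembered even though nobody is waiting yet:

import threading

sleigh = threading.Semaphore(0)
for _ in range(9):
    sleigh.release()        # santa posts nine times and moves on

woken = 0
for _ in range(9):
    sleigh.acquire()        # each acquire consumes one stored post; none are lost
    woken += 1
print(woken)                # 9 - the count lives in the semaphore, not in a waiting thread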
-
It's a deep topic. At its root is atomicity, which means something that won't be interrupted (may still be more than one cycle, but nothing affecting it should overlap). The precise mechanisms vary in complexity, but most CPUs have a feature where a sufficiently complicated operation can be performed without interruptions, even from other CPU cores (e.g. the LOCK signal on 386). Compare-and-swap is one of the easier to understand, and load-link/store-conditional is a more flexible variant. These aren't limited to CPUs either; there are memory chips that support operations like atomic increment, such that the value need not be loaded into the processor. For many programs, the entire concept can be abstracted away; STM (software transactional memory) allows focusing on the logic rather than the sequence. Some atomics are quite expensive, e.g. serializing all threads in a GPU work group or CPUs contending for one cache line (which may take hundreds of cycles to load), while others can be very fast (bitwise and/or in a workgroup can be as fast as a broadcast).
A mutex is very simple with compare-and-swap: acquire compares to unlocked and, if equal, swaps to locked; if not, it yields and repeats.
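A toy Python model of that acquire loop; compare_and_swap here is a stand-in for the hardware instruction (a plain lock provides the atomicity the hardware would guarantee), so it illustrates the logic rather than being a real spinlock:

import threading
import time

class Word:
    # Models one memory word with an atomic compare-and-swap.
    def __init__(self, value):
        self.value = value
        self._guard = threading.Lock()   # stand-in for the hardware atomicity guarantee

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self.value == expected:
                self.value = new
                return True
            return False

UNLOCKED, LOCKED = 0, 1

def acquire(word):
    # Compare to unlocked; if equal, swap to locked; otherwise yield and repeat.
    while not word.compare_and_swap(UNLOCKED, LOCKED):
        time.sleep(0)   # yield to other threads

def release(word):
    word.compare_and_swap(LOCKED, UNLOCKED)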
-
@Mad_Elf_0 I want to treat it as impossible, because the number of discs is a non-negative integer. It's just not part of the problem domain, and in a statically typed language, shouldn't be a possible call. It's just as helpful as passing a non-numeric object. The problem operates in discrete non-negative integer space and functions correctly if and only if it reaches zero for all its reductions. If your program reaches an impossible state, failing as early as possible (typically, user input validation) helps pinpoint why. In the interest of failing early, it would make sense to assert that the number is a non-negative integer, but it would not help to make a subset of invalid values appear valid.
This is like someone telling you to solve the puzzle with nothing but a ball. There are no discs or pegs; you can't perform the task. You're proposing that "I did it!" is the proper response.
Adding a successful return when your function is called with bad input removes the call site's option to handle the problem, because you never alerted it. This is why "except: pass" is generally a very bad idea; you don't know why there was an exception (or even that there was one, or which one), and you've removed the opportunity for someone else to interpret it. An example of one that should generally not be silenced, certainly not in a loop, is KeyboardInterrupt - the user has told you to stop, probably for real-world reasons your program doesn't understand.
PEP 20, the Zen of Python, captures this as "Errors should never pass silently." You should run "import this" from time to time.
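In code, failing early looks something like this sketch (the recursive hanoi signature is just an assumption for illustration):

def hanoi(n, source, target, spare):
    # Reject impossible states at the boundary instead of pretending they succeeded.
    if not isinstance(n, int) or n < 0:
        raise ValueError(f"disc count must be a non-negative integer, got {n!r}")
    if n == 0:
        return                      # nothing to move; the only valid base case
    hanoi(n - 1, source, spare, target)
    print(f"move disc {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)

hanoi(3, "A", "C", "B")             # prints the seven moves
# hanoi(-1, "A", "C", "B") would raise ValueError right here, where the bad input originated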
-
I also once paid for YouTube Premium and stopped because of their hostility. For starters, I requested the function to pay instead of getting ads years before they even introduced Red; and when they did, they decided to refuse me that service for another year or so just because of where I live (Sweden). Before offering it, they more than quadrupled the amount of ads, and they've kept going since. Well, I tried once they let me. Turns out, not only didn't I consistently get the ads removed, they told my mother she wasn't my family, based on a hidden requirement and hidden measurements. They never told me, the person paying.
So one thing they'd have to do is publicly apologize for actively treating their paying customers worse than their ad-suffering viewers. Google spyware don't get to tell me who my family is. It's offensive and likely illegal that they spy in the first place, and using it to target and abuse customers is perverse.
Another thing they'd have to do is actually fight fraud. Currently they're fighting the people trying to remove spam from the comment sections, not the spammers. And their ad system is at least 70% fraud too, without even the flagging option. Flagging comments doesn't get them removed, and it's not even possible to flag ads.
One more thing. It would be nice if they actually took viewers into account at all before breaking things, e.g. dislike statistics and stereoscopic video. There are swathes of videos out there that were once correctly tagged for 3D viewing (and a mechanism to add the tags if they were missing) but youtube simply broke them.
-
+Ströbröd Stolbarn You've merely shown that to alternate between different pairs (x,x) or (y,y), a transition (x,y) is required. Extend it to longer strings and you'll have one of either for every pair of digits; for instance the pattern 1,1,2,2,1,1,2,2.. has a balanced set, 1,1,1,1,1.. has only 1,1 pairs, and 1,2,1,2.. has only transitions. A random pattern would be expected to approach an even distribution, since each following digit has an equal chance of adding an equal pair or a transition.
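A quick simulation (a sanity check, not a proof) backs that expectation up:

import random

prev = random.choice("12")
pairs = transitions = 0
for _ in range(100_000):
    cur = random.choice("12")
    if cur == prev:
        pairs += 1          # an (x,x) pair
    else:
        transitions += 1    # an (x,y) transition
    prev = cur
print(pairs, transitions)   # both land near 50,000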
-
@ricardoalves9605 A cogent argument, but ultimately futile. The point is that "intelligent design" is not a theory to be weighed on evidence and probability, possibly to be refuted; they'll just take your point and say it's there "to test your faith", "his plan is inscrutable" or whatever (frequently just outright deny it). What truly baffles me is that these deniers literally argue against observing the fascinating wonder and complexity of the world we live in; placing limits on what their supposed creator may have done based on some preacher, book, or personal preconception. And to do so they'll gladly build nonsensical caricatures of what they think their supposed opponents believe, including completely ignoring what words mean (typical triggers: theory, gravity, evolution). They're not in the conversation to communicate, but indoctrinate. Remarkably, the entrance point to such indoctrination relies on doubt; they train their victims to reject notions, never to test them. XKCD 258.
-
Am I weird for finding this last digit pattern intuitive? It does relate to the base chosen and the nature of primes. Primes are separated by composite numbers, and the two primes that are factors of our base can be ignored, which is how we ended up with the possible last digits. The first prime that isn't one of them, 3, represents a pattern: 3, 6, 9, 12, 15, 18, 21, 24, 27, 30. Remove the numbers excluded by the base primes (2 and 5), and we get last digits of 3 9 1 7. I would expect primes to have a preference for following that cycle in order, though it does seem to be more complex than that; the chain from primes ending in 7 doesn't quite agree with the curve. That could be from looking at too small a piece of the sieve, though; repeating the process we did for 3 on 7 yields a second pattern of 7 1 9 3, the reverse, since 7 and 3 add up to 10. Those two and a simple incremental order will probably be shifting in weight at a beat frequency, eventually tuned by more primes joining in (those ending with 1 reinforcing incremental patterns, 9 reverse patterns, and 3 and 7 as already observed). Lower primes produce a higher density of composite numbers and therefore affect the distribution more.
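For anyone curious to check a larger piece of the sieve, here is a small Python count of last-digit transitions between consecutive primes (the exact counts depend on how far you sieve; the comment at the end states the tendency, not precise numbers):

from collections import Counter

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

ps = [p for p in primes_up_to(1_000_000) if p > 5]            # skip the base primes 2 and 5, plus 3
counts = Counter((a % 10, b % 10) for a, b in zip(ps, ps[1:]))
for pair in sorted(counts):
    print(pair, counts[pair])   # repeats such as (1, 1) should show up noticeably less often than (1, 3)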
-
"Assign multiple variables": This doesn't work exactly the same way as multiple assignment statements. In fact, that's the main reason it exists; you can do "a, b = b, a" to swap two variables. With sequential assignments, that would require a third variable.
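For instance (the right-hand tuple is built before any name is rebound):

a, b = 1, 2
a, b = b, a              # right side is evaluated first, then unpacked; no temporary name needed
assert (a, b) == (2, 1)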
Color: That's not even a Python feature, but a terminal escape sequence. For a Pythonic way to do it, look at e.g. the click library's echo and style functions.
Concat without +: This is string literal concatenation. Unlike +, it operates at compile time, not run time. There's no problem assigning it to a variable, but you broke the strings apart when you removed the parentheses.
Substring with in: "I don't really like" is a very vague argument. The problem is repeating the search using both in and index. You can also use the exception as your conditional:
try:
    print(haystack.index(needle))
except ValueError:
    pass
id to get identity: Do not use id to check whether variables refer to the same object; that is already what the "is" operator does, e.g. "assert data1 is data2", which is both clearer and more efficient. Do not save an id by itself, as it could be reused if the object is freed.
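A quick illustration of the difference:

data1 = [1, 2, 3]
data2 = data1              # a second name for the same object
assert data1 is data2      # identity: exactly what comparing id() values tried to express
assert data1 == [1, 2, 3]  # equality: a separate but equal object also passes this test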
Aliases: data1 and data2 were just equal names referring to the same object. Names hold references to values, they do not define them.
"Primitive types": All values in Python, including types, are objects. You can create new immutable types easily, e.g. using namedtuple. The closest you get to a "primitive" type is types that have a literal syntax, but that includes mutable ones like list, set and dict. There are also objects that can only hold specific kinds of values, like in array and ctypes.
"Replace list vs replace list content": id is a property of an object, not of a name. The name data within change_list is what was changed, not the object it initially pointed to.
"Check if exists": Those aren't tiers of nothing; None is a value like any other, a falsy one with a singleton type. It's used as a placeholder; the most common occurrence is when you don't specify anything to return. In general, looking for existence of a name is rather strange metaprogramming; per duck typing, you'd normally just try to use things, and maybe handle if it fails, like hasattr does.
I'm tired and won't be writing exhaustive commentary.
-
Sadly, Windows hasn't been about technology development - ever, as far as I can tell. It remains about market dominance, and recently transitioned to a model of bloated spyware and advertisement, which you keep paying top dollar for. Meanwhile we get to enjoy side effects like how the GPD Win Max 2 doesn't turn its fan off during suspend (half the time it goes full speed instead), or how the password is now renamed a PIN because I have a fingerprint sensor.
Oddly, Microsoft do make scattered contributions elsewhere. It's just fascinating how the good stuff never makes it into their products, like transactional file system updates to install patches in the background.
The primary upside of Windows is work on making it backwards compatible, which is a mess on its own; the settings you get access to in a PIF or LNK just scratch the surface, and that those two are different systems is just one example of how inconsistent it all is. But then it's super frequently wasted as software just breaks down when your screen is bigger than 1024 pixels, or drive bigger than 2GiB, etc etc.
-
@tiarkrezar There are limits, yes, but they're likely scaled to the number of threads. For instance, this coalescing might use CU-wide reduction logic, and serialize only if there are multiple addresses with contention within a SIMD group. E.g. for one write instruction, the GPU expects to create 16 writes, but 7 go to address A and 4 to address B; the first cycle enqueues the 7 to address A as one coalesced operation and the remaining 5 non-contentious writes as individual operations, then the second cycle enqueues address B as a coalesced operation. These operations might then be distributed to memory controller channels where the actual RAM reads and writes take place at some point. If it goes through cache lines (read-modify-write operations likely do), it's quite likely several of these operations hit the same cache lines before they're written back out to RAM. The sieve program is likely limited by the write command rate, similar to the fill rate for graphics, and won't need to run the compute units at boost frequencies.
-
Yep, this tradeoff between polling and interrupts does exist! Interrupts cause a context switch, which involves saving some state and jumping to another routine, possibly waking up a CPU that was sleeping entirely. Spinning the CPU on a single task can achieve faster reaction times on many, but not all, processors (the exceptions tend to have hardware contexts, like SPARC register windows, ARM fast interrupts or XMOS threads). The point I was making is that the CPU receives the interrupt when the message has arrived; it is not involved while the keyboard is transferring, and does not need a round trip to interrogate the keyboard. It's more like the mail man leaves the note at the desk you're already working than you answering the bell. (On PS/2, there's a high risk the CPU is fast enough to switch back and forth for each byte, which can get costly as many messages are several bytes long.)
-
@artit91 That's almost fully mistaken. The stack is the simplest, most fundamental form of dynamic memory allocation; it uses one pointer, with one side being free memory and the other occupied. alloca() can therefore just change the stack pointer, relying on the frame pointer to restore it, while functions not using dynamic allocation on the stack (like alloca or dynamically sized automatic arrays) can outright omit the frame pointer.
malloc's heap allocation, by contrast, requires tracking at least two pieces of information per chunk of memory: its size and whether it's free. Added to this is often a heap size, which may or may not be dynamic (the program break). On top of that it requires the appropriate search algorithms to transfer, split and merge chunks, and has to handle fragmentation (when you can't allocate a size that is free, because the free memory is divided up by allocated memory). The advantage is that you can free things in arbitrary order.
For machines without an MMU, stack placement is trivial; you place static allocations at one end of memory and start the stack pointer at one end of the remaining free space. The heap is frequently statically sized, such that stack exhaustion occurs when the stack and heap meet. Often it is assumed that they never will meet, so there's no overhead checking for it, which an MMU would otherwise assist with.
The stack is so simple and fundamental it exists even on some computers that don't support indirect memory addressing in the first place, like PIC10.
What does typically complicate matters is multithreading, when you need more than one stack. That's why we have calls like pthread_attr_setstacksize, so you can allocate a suitable size for threads depending on their task.
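Python exposes the same knob, for what it's worth; a minimal sketch (sizes and platform minimums vary):

import threading

# Request a 4 MiB stack for threads created after this call, the same idea as
# pthread_attr_setstacksize; 0 would mean "use the platform default".
threading.stack_size(4 * 1024 * 1024)

t = threading.Thread(target=lambda: print("running with the requested stack size"))
t.start()
t.join()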
-
@TheZabbiemaster Packages are containers for system components; not just programs, e.g. notepad, but libraries, e.g. direct3d, drivers, data such as timezones, keyboard mappings, and UI text in various languages, documentation and so on. The point of a package system is to track these components so you can install, remove or upgrade them. E.g. if the library handling WebM has a bug, we fix that package rather than everything that contains a web view separately.
Linux distribution packages appear "many" compared to Windows for a few reasons. One is that Windows bundles them to crazy degrees, e.g. service packs, the .NET framework, suites like Office. Note that the latter isn't included, and that's a reason for Microsoft to ensure what is included underperforms. Another is that Linux distributions unify the view; we have rough cuts like the Tasksel profiles, but make little effort to hide the full lists. Some distributions take different approaches, too, like not separating development sections from runtime parts, or platform-specific binaries from shareable data.
But the core answer is simpler. We have many packages because there are many users, developers and devices, and they all have choices. I don't have to stop using my SpaceOrb 360 just because Windows 11 exists.
-
I'm an extremist! I hold the extreme opinion that accusations do not imply (in the logical sense, i.e. prove) guilt, and that punishment, if used, should be scalable to relevance. For instance, I believe copying a music recording is less serious than murder, and fictional tales are less serious than direct abuse of people, even if drawn as pictures. Another of my questionable beliefs is that ideas for legislation should be weighed more by their societal effects than the income of the organisation that proposed them.
I find it sad how the word "moderate" has been coopted in politics, by the way.
Far less extreme is my opinion that Youtube ostensibly letting fraudsters (the biggest ad category here) set the rules is a bad idea.
Always remember, the first target of marketers is the ad buyers. Their goal is profit for themselves, not their "clients". That's a con game itself, with the "DRM" business providing an egregious example (making products worse to allay induced fears, with only one winner - and that's not the publishers or authors).
-
Operating systems are indeed written in particular styles; mostly state machines, but with some higher-level algorithms in scheduling and memory management, and a focus on actual steps in hardware. While they likely won't have a mark-and-sweep style garbage collector, they will have reference counting and implicit priorities (e.g. which memory page can you discard when you need more? when do you run the work of wiping freed pages for reuse?). The key tends to be isolation: if you're mutating your state, make sure you're alone in accessing that state. If your work takes time, see if you can do it without interrupting other work. Side effects remain anathema, so make very clear where they exist, e.g. scheduling points. If your work doesn't take notable time, make sure to squeeze it in before you pay the overhead of switching back to user space.
OS developers thus focus on costs and risks that are often further from mind in functional styles, even if they're not necessarily less efficient at them. If we could have a class of drivers that operate entirely in STM with bounded RAM, that could provide a considerable gain. Engineers have planned for such usages, e.g. with microkernel architectures and multiple privilege rings, but it hasn't quite caught on. I think two factors tend to dominate why not: short path to understanding what precisely is executed, and (sometimes excessive) focus on overhead.
The really sad thing is, a lot of that overhead could have been removed on a hardware level. If you have the interest, take a look at e.g. CRASH/SAFE.
A seemingly more successful initiative today is F* and Project Everest.
-
@frustrateduser9933 I believe the case I was thinking about involved Peter Bergman, at least that case was widely publicized (possibly starting with Östran). At the time he had some vision, using a computer with great magnification (Synskadades Riksförbund and Aftonbladet referred to him as blind). Radiotjänst i Kiruna did everything they could to be unreasonable, including immediately shouting they wouldn't be paying anything back after being specifically instructed that was their duty by the supreme court. I personally found it most remarkable when they specifically published a statement claiming their interpretation was what decided what the law said, not the intent or product of the lawmaking process.
The core of the case was that they'd decided to unilaterally claim every computer was a TV receiver, specifically so they could extort more money. They claimed public service had become more expensive to produce and income was dropping while their public accounting disagreed. The law stated that a TV receiver was a device built for the purpose (e.g. a computer with a TV card), and had a specific definition of TV transmissions which excluded any Internet streaming service. In the end their campaign really highlighted that SVT's broadcasting license didn't actually cover Internet streaming. Another fun revelation was that RIKAB employed their own court that never ruled against them or even acknowledged arguments; they only existed to force people to go to court at least twice, and if you did win the case (in the second court) to show you had no TV, nothing stopped RIKAB from again deciding you did anyway the next day. I'm a little uncertain why they employed squealers to point out people as having TVs when they didn't have any burden of proof anyhow, but those were both incompetent and deceitful.
We ended up with a reform where the public service is now paid for by plain taxation per individual. Not necessarily more fair, but less actively corrupt.
-
I totally agree on preferring Rust over C++, but it isn't a magical silver bullet. What you want isn't just CUDA in Rust, but the best pieces of Halide, Chapel and Futhark. Chapel has a strong concept of domain subdivisions and distributed computing, Halide has algorithm rearrangements, Futhark has a less noisy language with some strong library concepts like commutative reductions and tooling that can autotune for your data. You'd also want a reasonably integrated proof system, as in Idris 2.
The core thing that Chapel and Halide bring is the ability to separate your operational algorithm from your machine optimizations. E.g. if you chunk something for optimization, the overall operation is still the same. Futhark does some of that too, but only profile guided. Some fields approach this by separately writing formal proofs that two implementations are equivalent instead, but it's a much smoother process if you can maintain that as you write, like Idris attempts.
-
You're right, pipelining is time domain multiplexing at the function block level. If you have some fairly complex function, it takes time to complete, as the signal propagates through multiple layers of logic. If we add registers spread out in that deep logic, the depth between registers is lower so we can raise the frequency, but the new registers must then be filled with more data. The stages of the pipeline are like work stations along a conveyor belt. It's the same in CPUs; a pipelined CPU has multiple instructions at varying stages of completion. A revolver (barrel) CPU, such as XMOS XS1, runs instructions from multiple threads to ensure they're independent (generic name SMT, hyperthreading is one example). MIPS instead restricts the effects, such that the instruction after a branch (in the delay slot) doesn't need to be cancelled. DSPs like GPUs specialize in this sort of thing, and might e.g. use one decoded instruction to run 16 ALU lanes for 4 cycles (described as a wavefront or warp).
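If the conveyor belt picture helps, here's a toy software model of it (purely illustrative; real pipelines are registers and combinational logic, not Python lists). Each "cycle" every occupied stage does one step of work, so latency stays at the pipeline depth while throughput approaches one result per cycle:

from collections import deque

def pipeline(items, stages):
    slots = deque([None] * len(stages))    # slots[i] holds whatever currently occupies stage i
    feed, finished = iter(items), []
    while True:
        done = slots.pop()                 # the item leaving the final stage this cycle
        if done is not None:
            finished.append(done)
        nxt = next(feed, None)
        slots.appendleft(nxt)              # a new item (or a bubble) enters stage 0
        for i, work in enumerate(stages):
            if slots[i] is not None:
                slots[i] = work(slots[i])  # every occupied stage advances in the same cycle
        if nxt is None and all(s is None for s in slots):
            return finished

print(pipeline(range(4), [lambda x: x + 1, lambda x: x * 10, str]))   # ['10', '20', '30', '40']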
-
They are indeed distinct, and there are other definitions too. For instance, threading predates multithreading; it's when a set of instructions is assembled into one subprogram, like beads on a string, as used in threaded Forth implementations.
Concurrency is multiple things (co-) existing at the same time (current). Multithreading or multiprocessing is when we have more than one thread of control in the same system. Parallelism usually refers to when processing might be concurrent, i.e. more than one thread is actually running (as opposed to time sharing), whether through SMT, SIMD, SMP, coprocessing or distribution; typically to speed computation up.
In this example, no requirement of parallelism is present, but asynchronous concurrent multithreading is assumed. Santa is an example of single thread concurrency in that he may be awakened for multiple tasks. A more typical example might be a chat program awaiting either user or network input.
It's a synchronization challenge, with unclear requirements, where you're expected to use shared memory and only the one synchronization tool (semaphores). In modern software (from about a decade before this problem was posed) we'd probably use a higher level construct, such as MVar or channels, to associate the relevant data (e.g. count in room or wakeup source) with the event.
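For comparison, here's the channel flavour sketched in Python, with queue.Queue standing in for an MVar/channel; it isn't a solution to the full problem, just a demonstration of the relevant data riding along with the wakeup:

import threading
import queue

wakeups = queue.Queue()            # each message carries who woke santa and why

def reindeer(i):
    wakeups.put(("reindeer", i))   # back from vacation

def santa(expected):
    for _ in range(expected):
        who, i = wakeups.get()     # blocks until a message arrives; none are ever lost
        print(f"santa woken by {who} {i}")

helpers = [threading.Thread(target=reindeer, args=(i,)) for i in range(9)]
boss = threading.Thread(target=santa, args=(9,))
for t in helpers + [boss]:
    t.start()
for t in helpers + [boss]:
    t.join()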
-
Like any other feature, it's somewhere, maybe. The list of things moved recently includes comments, descriptions, playlists, titles, chats, controls, and recommendations. Things outright removed include annotations, metadata tags, and playback corrections. There's inevitably more.