Comments by "clray123" (@clray123) on "Fireship"
channel.
@CrystalStearOfTheCas Because if all your conversations are accessible to government agents, it is easy for a future government to retroactively outlaw what you said and arrest you for that "crime". Or even today they could claim that you said or sent something "illegal" which you actually didn't (how are you going to prove in court that you did NOT say or send it, i.e. that the "evidence" was fabricated, if the accuser is known to have fully legal access to your communications?).
As a result, eliminating privacy has the effect of shutting down certain kinds of conversation altogether. This is especially true if you are in a politically precarious position, e.g. a member of the opposition whom the current government wants to eliminate as a threat to its power.
Basically, it is reminiscent of how the Spanish Inquisition operated to eliminate its political enemies: accuse someone of blasphemy, stage a big public trial, punish the "crime" severely so that nobody else even thinks of saying anything against you. This is what we are facing now with the modern attempts to ban encryption or eliminate private communications. Your life may literally depend on it (as some people in countries like Russia, China, and Ukraine have already learned).
@zainnatour4792 No, these models cannot be "easily" trained to do what good programmers do, which is essentially predicting the future and predicting human behavior: the consequences of your decisions with regard to correctness, performance, user experience, handling of exceptional situations, productivity of future maintainers, total cost of ownership caused by a particular implementation, trends in the popularity of programming languages/libraries/frameworks, etc. The best they can do is parrot code examples, but for that purpose looking up stuff on StackOverflow is entirely sufficient, and chances are you will also get to see some intelligent discussion there, unlike the "commentary" which the AI generates along with the copied code snippet. These models struggle and fail at basic logic tasks like adding numbers correctly (unless they cheat and resort to using a calculator); they are far from the sort of causal and diagnostic reasoning that is required for successful software development.