Comments by "Mikko Rantalainen" (@MikkoRantalainen) on "Sabine Hossenfelder" channel.
@BillyViBritannia Have you been in lots of meetings during your life? Many organizations employ people who need a lot of support to be brave enough to express their real thoughts in meetings.
@annekincannon-kf3hx Are you saying you would say "My first belief is that this idea is good" instead of "My first feeling is that this idea is good"?
@jonnawyatt I agree that meetings are typically pretty bad. However, unless we can get everybody on board with async, text-based communication and understandable replies, meetings are the least problematic known way to get information from people who are not capable of effective async communication.
@chrisk.7418 Both parties must end up with the same symmetric key for communication to be successful. I meant this full key that both parties share. Of course, with modern perfect forward secrecy the symmetric key is created per session using a method that never requires transmitting the bits of the symmetric key over the network (encrypted or otherwise), and as a result, not even a quantum computer can later extract the symmetric key from recorded traffic. That said, if the handshake algorithm used to create the same symmetric key for both parties is vulnerable, then an attacker might be able to crack the handshake using only the captured bits related to the handshake. As far as I know, modern handshake algorithms are not vulnerable, but if you used e.g. the basic variant of the Diffie-Hellman algorithm, an attacker with the ability to compute discrete logarithms (which might be possible with a future quantum computer) could recreate the symmetric key from the captured traffic.
@chrisk.7418 The handshake works by both parties creating a big random number and sending the result of a clever computation on that number to the other party; each party can then do another clever computation with their own random number, and both parties end up with the same number. And yes, the handshake is a bit similar to asymmetric ("public key") encryption, which is why that part might be vulnerable to quantum computers. The symmetric encryption part cannot be cracked with quantum computers, so any future attack will have to target the handshake or earlier parts.
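The "clever computation" described above can be sketched as a toy Diffie-Hellman exchange. This is only an illustration of the idea: the values of p and g here are classic textbook toy values, not anything a real protocol would use (real implementations use ~2048-bit primes or elliptic curves).

```python
import secrets

# Public parameters, agreed in the clear (toy values, NOT secure):
p = 23   # prime modulus
g = 5    # generator

a = secrets.randbelow(p - 2) + 1   # Alice's secret "big random number"
b = secrets.randbelow(p - 2) + 1   # Bob's secret "big random number"

A = pow(g, a, p)   # the computed value Alice sends over the wire
B = pow(g, b, p)   # the computed value Bob sends over the wire

# Each side combines the other's public value with its own secret:
key_alice = pow(B, a, p)   # (g^b)^a mod p
key_bob = pow(A, b, p)     # (g^a)^b mod p
assert key_alice == key_bob   # both ends derive the same symmetric key
```

An eavesdropper sees only p, g, A and B; recovering a or b from those requires a discrete logarithm, which is what a future quantum computer could make feasible.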
Exactly this. If you don't have a method that can randomly de-escalate with high enough probability soon enough to avoid launching nukes, the nukes will eventually be launched.
2:28 I'm watching this from Finland and I can tell that we have the very same problems here, except that the winter is longer and colder than in Germany.
I think that whenever April 1 nerd jokes are mentioned, RFC 1149 (and RFC 2549 which amends it) would deserve a mention.
@sluggo206 Exactly. For "web 2.0" (or simply HTML5 if you don't need the marketing speak) it's called microformats. It basically re-uses the HTML5 attribute called "class" and defines explicit meanings for some common words such as "summary", "location" etc. The idea is that authors may accidentally use the correct word, and the markup makes semantic sense even if you're not using a microformats parser. However, whenever I see microformats in the real world, there seems to be about a 50% chance of incorrect usage, even when it's obvious from the context that the intent was to use microformats rather than to accidentally use the same word. As a result, I expect any metadata in general to have a slim-to-none chance of being correct, because use of microformats is rare and even in those rare cases where they are used, they're wrong half the time.
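The class-attribute mechanism described above can be sketched with a crude stand-in parser. The snippet below uses microformats2-style h-event class names; a real consumer would use a proper mf2 parser, and the event details here are made up for illustration.

```python
from html.parser import HTMLParser

# Hypothetical event page fragment marked up with microformat classes:
HTML = """
<div class="h-event">
  <span class="p-name">Big Concert</span>
  <time class="dt-start" datetime="2024-06-01">June 1</time>
  <span class="p-location">Helsinki</span>
</div>
"""

class ClassCollector(HTMLParser):
    """Collects (class name -> text) pairs - a crude stand-in for a
    real microformats parser, just to show the re-used "class" idea."""
    def __init__(self):
        super().__init__()
        self.stack = []   # class lists of currently open elements
        self.found = {}   # class name -> first text content seen
    def handle_starttag(self, tag, attrs):
        self.stack.append(dict(attrs).get("class", "").split())
    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()
    def handle_data(self, data):
        text = data.strip()
        if text and self.stack:
            for cls in self.stack[-1]:
                self.found.setdefault(cls, text)

parser = ClassCollector()
parser.feed(HTML)
print(parser.found["p-location"])   # Helsinki
```

The point is that the markup degrades gracefully: a browser that knows nothing about microformats still renders a perfectly sensible page.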
@vylbird8014 The idea is that if the whole intent of your website is not to get visitors but to publish information, then using microformats allows the information to spread faster and more accurately. For example, if you have a website about a big concert, do you really want lots of visitors on your site, or do you really want people to be aware of the concert with an accurate date and location? That said, if you run Twitter, Facebook or Instagram, you obviously do not want to export any data from your system, because the whole idea of your site is to make people visit it. And you definitely want to prevent other people from getting the information, to keep a monopoly on said information.
@NJ-wb1cz Yes, I agree that machine learning is not AGI, but I would argue that machine learning is AI, especially in the modern usage of the words "artificial intelligence". However, once you get e.g. GPT-4 to give answers close enough to correct, you can then use GPT-4 to train a proper AGI-like system faster, because GPT-4 can emit close-to-correct responses faster than humans, so a possible AGI system could be trained by GPT-4 faster than by humans. And of course, a proper AGI system trained by GPT-4 would slowly learn that GPT-4 sometimes makes mistakes, because a proper AGI system could cross-correlate multiple answers from GPT-4.
@JimBob1937 WCAG 2.1 AA only requires that the content is accessible, but it doesn't require markup declaring that a string like "760 United Nations Plaza, Manhattan" on a random website is actually an address that can pinpoint a location pretty accurately worldwide. Microformats are about the ability to use markup to declare that that piece of string is an address. My point is that an AGI or even an LLM can already easily figure that out, and that doesn't require humans or even programmers to detect such strings and introduce markup declaring them as addresses. As a result, I don't believe the semantic part of Web 3.0 is going to happen at large scale. The only exception would be if caching the results of AI computation were deemed worth doing. In that case, the site owner could run the pages through some kind of AI system which adds the missing markup, which reduces the processing power needed by the reader of the website. However, there's no additional information introduced, just caching of the computed results.
@cjbottaro When was that poll done? The polls I've seen seem to suggest that the average guess is getting much closer to year 2030 the more recent the poll is.
@tehm-tpc I think it's important to understand that we don't have perfect knowledge today (in databases or anywhere else) and will not have perfect knowledge in the future either. If humans do not use the database directly but through some kind of natural language interface, the chances of fixing the underlying mistakes in the data get slimmer and slimmer.
@minhtrietvo8448 Is that your guess or do you have some data to back it up?
@Mythhammer As I see it, even an LLM has generated much higher quality output than expected when the network is huge enough. As a result, it seems probable that "true intelligence" happens automatically given a complex enough computing system. I wouldn't be overly surprised if even an LLM were good enough for AGI if you could scale the LLM to brain-like numbers: the brain is estimated to have 500–1000 trillion synapses, so instead of 175 billion parameters for GPT-3 or 1760 billion parameters for GPT-4, you would need to run an LLM with 1,000,000 billion parameters to match the human brain. However, the algorithms currently used for LLMs do not scale to 1,000,000 billion parameters on any sensible hardware. We need either much faster computer systems or better algorithms. Tesla Dojo seems to be a bet on just building better computers, and we'll see around 2025 whether it turns out to be a good idea or not. Of course, even if it were possible to use an LLM for everything, it is probably not the best idea to use an LLM for AGI because the interface is so limited.
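The scale gap argued above is easy to put in numbers. Note that the parameter counts are the figures assumed in the comment (the GPT-4 count is a rumor, not an official number):

```python
# Back-of-the-envelope scale gap between brain synapses and LLM parameters.
synapses = 1000e12      # upper estimate: 1000 trillion synapses
gpt3_params = 175e9     # published GPT-3 parameter count
gpt4_params = 1760e9    # rumored figure assumed in the comment

gap_gpt3 = synapses / gpt3_params   # ~5714x
gap_gpt4 = synapses / gpt4_params   # ~568x
print(f"brain vs GPT-3: {gap_gpt3:.0f}x, brain vs GPT-4: {gap_gpt4:.0f}x")
```

So even taking the rumored GPT-4 size at face value, a synapse-for-parameter match is still a few hundred times larger than today's models.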
@tehm-tpc I was trying to argue that the data already in plain text has an unknown amount of bad data. Adding additional metadata (e.g. semantic markup) to it doesn't improve the quality of the data. And if the metadata is mostly invisible to most users, the chances of even the metadata fully matching the plain-text data are slim to none.
@timetraveler_0 Do you think that the human brain has some special (yet unknown) parts in addition to neurons and synapses? If the human brain has nothing but neurons and synapses, then a large enough deep learning network will be enough to accurately model the features of a brain. In addition, it seems clear that the human brain cannot use backpropagation for learning, and backpropagation is currently considered the best method for training a network, so an AI should be able to learn faster with the same neuron and synapse count. The problem is processing speed. Current estimates for the future Tesla Dojo system (expected to be running around year 2025) set its processing power at about 10% of the processing power of the human brain for running the neural network. If a human takes about 3 years to learn to speak simple things, an AGI running at 10% of human-brain speed would require 30 years to demonstrate the knowledge level of a 3-year-old child. Who is going to pay for running said system for that long? (And just to be clear: the human brain cannot compute more than the best supercomputers we have, but when the best supercomputers emulate the human brain, the emulation has such high overhead that the resulting performance is still pretty small compared to the human brain.)
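The 10%-speed argument above is just a slowdown multiplier, sketched here with the figures assumed in the comment (the Dojo estimate is speculative, not a confirmed specification):

```python
# Hardware running at 1/N of brain speed takes N times the wall-clock time.
slowdown = 10        # assumed: Dojo at ~10% of human-brain speed
human_years = 3      # assumed: time for a child to learn simple speech

agi_years = human_years * slowdown
print(agi_years)     # 30
```

Any claimed speed fraction plugs into the same one-line estimate, which is why the economics of running such a system matter more than the raw feasibility.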