Comments by "sharper68" (@sharper68) on "Science Time" channel.

  1. 5
  2. 4
  3.  @enumclaw79  We have barely started looking and already we are finding noticeable "bugs". So perhaps you are right and there are more bugs to find that we do not yet have the tests to see. The idea that you and I cannot conceive of how this technology might work does not change the probability that it might work someday. As in-the-trenches experts, our skill set as engineers is innately limited by our understanding of existing technology, tools, and available topologies when evaluating a working version of this system. To say it could never exist feels premature given the rate of our change and the theoretical leaps promised by proposed future tech. The idea that this system could not run without constantly noticeable bugs ignores that the subjects of the simulation could simply be configured to ignore them. Maybe the bugs are occurring but, like badly scripted bots, we do not see them, yet. The kinds of bugs we have faced in the past, and that you outlined, will be mitigated. Details easy for any programmer to miss will not be missed by these systems and subsystems created by the constructs. This kind of expert system will work in conjunction with human designers to create next-gen tools, which will allow complexity and topologies currently unmanageable. They will be built in a way that allows the constructs within the simulation to meet the application's needs. The idea that it can't be done feels too attached to an appeal to the status quo. Citing that the concept was at home in the 1970s is not the compelling point you think it is. You were the one who played the argument-from-authority fallacy in your reply, and it weakened your unearned credibility with me. There is no consensus among current experts that this can't be done in the future. No amount of attacking my personal credibility changes that. Your offense that I called your novel a book is you expecting respect while showing none. It is hard to take your scolding seriously.
  4. 1
  5. Your assertions are all based on your understanding of the technology we use now, and the limits you outline may not exist in the future for us, let alone in a system as complex as the one that could be simulating our reality. You are projecting our systems' limitations onto this proposed one. We have no analogue to a system that could do a fraction of what this proposed simulator would need to do. The bugs and errors which you assert make the system impossible may well exist; the fix would be a simple matter of effective, instant error handling and a filter so we do not perceive the errors occurring. Without access to the back end, even modern programs with proper error handling have no idea they hit a bug once the error is handled. One would need to go to the logs to see what occurred, and we may not have access to those here. The simulation does not need to be perfect code; it would require excellent, seamless error handling to make it work. I have a degree in software engineering too. No one has tried to write anything like this, and our expertise would make us like babies in the face of a system complex enough to do what simulation theory proposes. That is to say, your professional take on this is not as valuable as you pretend it is. If we were ever to write simulations of this complexity, it will not be you or I writing that base code; it will be written by AIs directed to write a tool which will write a more complex tool, and so on. We as engineers will only be working on the very earliest iterations of the system. Comparing something like the shuttle's software to this proposed simulation is like comparing a Ruko toy to Commander Data of Star Trek. We have many server options providing 24/7 uptime with redundancy, failover, and multiple sites. There is no reason why this kind of simulation could not have a more advanced version of the same thing with fewer errors or issues. Problems would get covered up as soon as they happen without the user even noticing. This is not proof we are in a simulation, just that your spin is not as definitive as you think it is.
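The "handled error the user never sees" pattern the comment leans on is ordinary practice. A minimal Python sketch (the function name and logger name are hypothetical, for illustration only): the fault is recorded on a back-end log stream, and the caller receives a seamless fallback value instead of an exception.

```python
import logging

# Back-end log stream; only someone reading the logs would know a fault occurred.
log = logging.getLogger("simulation.backend")

def divide(a, b):
    """Return a / b; on a fault, log it back-end-side and return
    a patched fallback so the caller never notices the bug."""
    try:
        return a / b
    except ZeroDivisionError as err:
        log.error("handled fault: %s", err)  # visible only in the logs
        return 0.0  # seamless fallback; execution continues normally

result = divide(1, 0)  # no exception reaches the caller; result is 0.0
```

From the caller's side nothing unusual happened, which is exactly the comment's point: without log access, a handled bug is invisible.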
  6.  @enumclaw79  Our eventual advanced simulations will not likely be "man-made". They will be constructed by specially created AI tools built to write better tools to write this kind of application. The system would be designed not to be prone to the errors, or would be able to correct and adjust for them in real time. The idea that no system could do this does not strike me as well thought out. I see no reason why something like this could not exist given time and resources. With the expansion of technology and AI, I expect we will be creating highly detailed simulations ourselves within the imaginable future. You do not seem to understand that zero-downtime systems that manage uptime with parallel processing on distributed machines run all the time, despite the bugs inherent in our code. We write durable, very complex code all the time; bugs are not the inherent problem you are pretending they are. By the very nature of any working system they are manageable, and most systems work within a specific operational range as dictated by the demands of the service. All errors can be managed, and hardware can be configured to be parallel and redundant. With the proper application of focus and resources there never has to be an unhandled error, and post-submit audits can check for badly generated data within a specified margin to confirm smooth running. That is now. It will be even more so when we have a flexible, fast, hyper-intelligent utility to monitor and react appropriately to each error in real time. The limits you are discussing are not eternal; they are largely based on the way code is written and the tools we have to write it now. The idea that we will never reach a place where a system like this can be created does not seem certain at all. We cannot do so with our current software and tech, but no one said we can. The idea that no one could ever create a system like this because of your bug concern seems short-sighted. It is the kind of thing a 1990s programmer would say about a computer that sits in the palm of your hand with more processing power than the space shuttle. The best scientific evidence for simulation theory may be an example of us finding a bug: that photons do not act the same way when observed and unobserved in the double-slit experiment might very well be an oversight in the logic of the simulation. You are correct that bugs exist even in advanced code, but they do not necessarily break it.
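The zero-downtime claim above rests on a standard failover pattern: a request is tried against redundant replicas in turn, so a crash in one replica never surfaces to the caller. A toy Python sketch, with both replica functions hypothetical stand-ins:

```python
def flaky_primary(x):
    raise RuntimeError("primary replica down")  # simulated fault

def healthy_secondary(x):
    return x * 2  # redundant replica serving the same request

def serve(request, replicas):
    """Try each redundant replica until one answers; the caller
    never learns that earlier replicas failed."""
    last_err = None
    for replica in replicas:
        try:
            return replica(request)
        except Exception as err:  # failover: absorb the fault, move on
            last_err = err
    raise last_err  # a total outage only if every replica is down

answer = serve(21, [flaky_primary, healthy_secondary])  # succeeds despite the crash
```

The caller gets a normal answer even though the first replica threw, which is the sense in which production systems already run "despite the bugs inherent in our code".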
  7. 1
  8.  @enumclaw79  I never said we could write these systems today, and it is a silly strawman to claim anyone asserts we can. I have laid out the rudiments of the kind of tech and processes that would be required to build a system like this. I disagree that this system could not be written for the reasons you cite. A 1970s-style assertion that assumes we never move past or mitigate your stated limitations is not well supported. It does not seem to me that these issues are insurmountable given extra resources, topologies, and tools. You are not making a scientific assertion when you say this can't be done; this is not hard science we are discussing, but potentials and concepts. You reject the idea based on the limitations of the tech and systems we are currently handcuffed by, and that does not seem apt here. Many of your concerns are conceivably mitigated by the processes I have outlined earlier. I can think of ways this might be done, and I can imagine another mind solving it. This is a science that is rapidly changing: many hard-and-fast rules about system design from the 1970s are no longer true today. They were solved by next-gen languages, raw horsepower, and topology innovation. It may be far-fetched, but it is most certainly a hypothesis. That you do not think it likely does not change its nature. It may be a failed hypothesis, but it is based on data and observations that invite speculation. Physicists have no mandate to explain the nature of this system in order to make this particular hypothesis. At this stage in the investigation, our expertise as system engineers is largely irrelevant; on a system of this complexity we would be cave dwellers looking at a rocket to the moon. I do not expect our experts to be able to design a system of this complexity and demand today. That does not mean it can't be done. If I can imagine ways to mitigate some of your concerns using contemporary concepts (as we do now), then I see no reason to wave away the idea that these issues could be addressed. We are not talking about making anti-grav here; the idea that no one could build a fault-management system for this process is not the same thing. The error issue you highlight is conceivably solvable with horsepower, redundant parallel processes, and filters to mitigate the fallout. Now give me your actual road-map solution for anti-grav and I will concede your point. Arguments from authority are fallacies for a reason; your claim that it can't be done is as empty as mine that it is conceivably true. That we do not understand how to build this kind of process does not change the nature of the hypothesis, which you reject out of the limitations of our current tech, not the base concept.
  9.  @enumclaw79  I reject your spin too; the idea that this cannot be done is not a widely accepted concept even among your peers. That you are fairly alone in making a definitive statement on this does not give you the unearned cred you seem to demand. Saying it can't be done because we cannot mitigate errors seems silly on its face. The fact that you cannot imagine how to deal with the concerns raised does not mean they can't be dealt with. Your reasons for flatly rejecting the idea are short-sighted and somewhat myopic, and your overly definitive assertions are empty in context. The issues stopping us from creating this system are simply not eternal or un-addressable in the way the production of grav plating is, given our current understanding of both subjects. You are just saying "nuh uh" because you correctly recognize a technical problem we have not yet solved. As I stated earlier, your desire to inject your expertise into this issue is largely pointless. Your arguments are only definitive in the context of your own experience and current tech limitations, and they prove literally nothing. Hand-wave these objections away at your leisure; you do seem deeply invested in this subject in a personal way that seems to bias you. None of the claims here are definitive, nor do they have the data backing them to make the idea "hard science". But calling on a modern software engineer's skills to effectively say the simulation can't be done because we can't do it now is not compelling as a debunking of the concept. I am no giant proponent of the SH myself; it is mostly interesting. There is not enough info for me to make any kind of definitive assertion about its veracity. There are a few compelling arguments countering it; I just do not find yours to be very effective or convincing. If indeed we are living in a simulation featuring literally trillions of animal minds, it is not crazy to assume the writers beat your 1970s limitation-as-truism. Software issues are mitigated with redundancy and resources applied to the problem, allowing all usable systems to work within an acceptable tolerance of errors. In this context it is hard to imagine the requirements necessary to deal with your specific concern, but that is not remotely the same as our total lack of understanding of how to produce gravity on command.
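The "acceptable tolerance of errors" idea is the same move an error budget makes in operations work: a system is considered healthy as long as its observed error rate stays under a chosen threshold. A toy Python sketch, with the 0.1% budget an arbitrary assumption:

```python
def within_tolerance(error_count, total_requests, budget=0.001):
    """True if the observed error rate stays inside the acceptable
    tolerance (here a hypothetical 0.1% error budget)."""
    return (error_count / total_requests) <= budget

ok = within_tolerance(5, 10_000)        # 0.05% error rate: acceptable
too_many = within_tolerance(50, 10_000) # 0.5% error rate: out of budget
```

A system can be riddled with handled faults and still sit comfortably inside its budget, which is the sense in which "usable" never required "bug-free".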
  10.  @enumclaw79  There are several plausible ways it could be done. That we do not have a reasonable way to do it now does not mean your expertise as a computer engineer who cannot imagine how it is done is critical to the discussion. Perfect code does not exist and does not need to in order to design the construct. All that is needed is correct error handling and/or external auditing of the process along with data fixing. The idea that this is impossible, or that the basic structure that will be the core of these systems is not already being written, is asserted ignorance. AI-generated code will be used as a tool to create and manage interactions between tools in a way a person would not be able to. It will enable the creation of processes that would be conventionally impossible, extending our ability to innovate the subsystem integration that will be integral to a process this complex. AI use in development will give us an advanced generation of tools, opening up options currently not at our fingertips. One does not need to predict every outcome; one needs to create the basic structure and include an algorithm that allows failover into a self-learning corrective process that iteratively corrects the issue or fails gently until corrected externally. Patching data retroactively would undo any damage the errors might cause going forward, rolling back concerns about error stacking. One could conceivably brute-force fix these issues given the cycles to do so. I agree with your assessment of what these tools will do in the short term; I continue to disagree with you on the simulation thing for many of the reasons cited. That we do not have the technology or topologies to make such a thing now does not mean it is unimaginable, even from a technical perspective. As a final note on your error concerns: we may be facing the errors you cite all the time in this simulation. In business today we have bugs that cause damage; we fix them and then fix the data so the issue goes away entirely. As long as this proposed system can patch the data and put things right after a bug, we as participants would not need to know it happened. If this is true, it would flatly undermine whatever questionable value your expertise has in validating or refuting this concept. As entities in this imagined box, we may not be equipped to understand how to actually build such a thing, but that truth would not preclude it being real.
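The fix-the-data-afterwards move described above is what a post-submit audit job does in ordinary business software: scan stored records against an invariant and rewrite the ones a bug corrupted. A toy Python sketch, with the records and the non-negative-balance invariant purely hypothetical:

```python
# Toy "database" of records; a handled bug wrote a bad value into one.
records = [
    {"id": 1, "balance": 100},
    {"id": 2, "balance": -50},  # corrupted by an earlier bug
    {"id": 3, "balance": 75},
]

def audit_and_patch(rows):
    """Post-submit audit: find rows violating the invariant
    (balance >= 0) and patch them retroactively, in place."""
    patched_ids = []
    for row in rows:
        if row["balance"] < 0:
            row["balance"] = 0  # roll back the damage
            patched_ids.append(row["id"])
    return patched_ids

fixed_ids = audit_and_patch(records)  # later readers see only clean data
```

After the audit runs, nothing in the data itself betrays that the bug ever fired; only the audit's own record (here, the returned IDs) knows.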