YouTube comments of Slipoch (@slipoch6635).

  1. 1600
  2. 90
  3. 35
  4. 35
  5. 32
  6. 27
  7. 23
  8. 18
  9. 17
  10. 14
  11. 13
  12. 12
  13. 11
  14. 9
  15. 8
  16. 7
  17. 7
  18. 7
  19. 6
  20. 6
  21. 6
  22. 5
  23. 5
  24. 5
  25. 5
  26. 4
  27. 4
  28. 4
  29. 4
  30. 3
  31. 3
  32. 3
  33. 3
  34. 3
  35. 3
  36. 3
  37. 3
  38. 3
  39. 3
  40. 3
  41. 3
  42. 3
  43. 3
  44. 3
  45. 3
  46. 3
  47. 3
  48. 3
  49. 3
  50. 3
  51. 3
  52. 2
  53. 2
  54. 2
  55. 2
  56. 2
  57. 2
  58.  @chrischance1063  Myself, I always go with a PC: simply more options, and I can fit 13 hard drives in my current case without modding ;) so a heap of fast RAID storage space. Try to get a 16:10 monitor that can be rotated, preferably 2K at 75+ Hz. If you really need a laptop, maybe wait for the new AMD CPUs (4000 series) to come out, as they will draw less power but offer better per-core performance than Intel; this speeds up your compile times, particularly on large multi-project solutions, and also means your battery will last longer. I always go with Windows/Linux as you have more IDEs available; VS is only available on Mac in a stripped-down form. There are also only about 20% of users on Mac, so program development for PC is far more likely; across most of Europe it's only about 10-15% on Mac. A 2TB hard drive is the minimum; you can always expand further with a Thunderbolt drive array or a NAS. Don't worry about a good graphics card unless you are gaming or developing games; if you are using the UE4 engine, try to get one of the NVIDIA 20-series cards, as they have released prosumer drivers for graphics and game creation. Try for good-speed RAM, and avoid Corsair, as they use the lowest-standard memory chips, so you might as well buy non-brand-name in that instance. Make sure you get a good keyboard: Toshiba and Lenovo keyboards are good on laptops, and any decent mechanical keyboard on PC is good, but it depends on taste; I use one that cost $70 (AUD) with Cherry Brown switches that is pretty decent. If you are getting a laptop, try for 17", as that usually means a bigger keyboard and it also makes it easier to read text.
    2
  59. Those wage increases will have little to no effect on the economy or on prices in the supermarket (actually, give it 2 years and we should see a 5%+ strengthening of the economy if past evidence over 20 years in 15 western nations, including Australia, holds true). The supermarkets have been recording year-on-year record profits since ~2017; I will note that the producers are not seeing any of that, particularly the farmers. If the costs had gone up, the supermarkets would have made modest profits in line with what had gone before, but it's like when the storm hit the banana plantations: the farmers were selling product off cheap while the supermarkets were selling it for $11-$20 per kg. I think we also have the problem of hedge funds and other investment groups buying housing to keep off the market to accrue value (they keep these empty), like in 2009 when Domain said there were no more empty houses in Sydney and the ABS promptly pointed out there were 200k new houses that had not been rented in over 2 years. Remove negative gearing from everything except the first 1-2 investment houses (make this apply up the ownership chain, so a company cannot own a bunch of smaller companies each with 2 investment properties; each property counts right up the line to all businesses with a stake). This will drive down prices. Remove the condition that first home buyers have to buy a new house; this has been driving shitty building for some time. Limit the amount of rent you can charge to the value of the property.
    2
  60. 2
  61. 2
  62. 2
  63. Sorry - we DO know when most of the pyramids were built, to within the dynasty at the very least, and down to a specific ruler for the most prominent pyramids. We know how they were built (1st-year engineering at university teaches you how these were done, along with Mayan/Aztec pyramids and the transportation methodologies). We know who built most of them (some of them mention the architects in the hieroglyphics; the Rosetta Stone includes instructions on how to obey foremen in multiple languages). As for why: well, it started as small monuments (like we do with gravestones) and got more and more complex and ostentatious as time went on; you can see the progression of the designs and methodology quite clearly when you look at the oldest ones in the Valley of the Kings. Add in that it was supposed to be their abode in the afterlife and you have a compelling reason to make a fancy house. Tesla also failed to be able to use energy from the atmosphere, claimed the energy was alive (he thought thinking was external electrical currents - close, but not quite right), and failed to understand why DC was lethal (he electrocuted and killed an elephant to try to prove it was safe). Good when it came to electrical engineering, he was rubbish when it came to physics. Tesla was also led down the garden path by the gnostics (proven wrong by the Rosetta Stone, and by Tacitus's writings and various other writings from the periods). The Gaia series is factually incorrect on most of the 'history' it espouses (some coming from Gardiner and Grahame of the wiccan beliefs in the 70's, most is more modern); it includes no research, no fact checking and bugger-all logic, and it is also factually incorrect on a lot of physics. Remember, if you have a hypothesis it MUST fit all the facts, and explain them better than current theory. This stuff does not. Electroplating of the pyramids' capstones was carried out with the acid jars, as the acid used plus the metal purity was not enough to do anything else (think less than a 5V battery); presumably this was done because you would get more lightning strikes on them (and this is only on the more modern of the pyramids). To the others in the chat: yes, the founding fathers were largely patent thieves; I don't know of a single invention of Ben Franklin or Edison that was not already completed commercially in another country (Marconi, Tesla, etc.). This is the issue with ethnocentric teaching and a general lack of education in the teaching sector of the American school system.
    2
  64. 2
  65. 2
  66. 2
  67. 2
  68. 2
  69.  @lalotz  Full-stack is a bit of a misleading term; no-one is truly full-stack, otherwise you would have to do SQL/DBMS as well as server admin as well as FE as well as BE, etc. But if you wish to do programming on large-scale stuff and do a 'bit of everything', then don't get a Mac: you will be artificially limited in software choices and they run poorly, and there are a lot more programming tools out there for Linux and Windows that are simply not allowed to run on Mac. Try to get a desktop if possible, as you will then have upgrade options and more power available to you. Grab a 16:10 monitor, or one that can be swivelled to vertical, as well as a landscape 2K-4K monitor with reasonable colour accuracy (Philips make good colour-accurate monitors for a good price). The reason I say 2K/4K is because a lot of websites are designed for a max 1920px screen width, and seeing the page on a larger pixel width can show you if you have a site that doesn't scale well (don't turn on zoom for this). Try for a slightly higher refresh rate as well. Get a decent amount of RAM if you are going to be running multiple sites on your system at once; 16GB is a minimum, especially if you are going to be running Adobe's suites as well. Get a decent multi-threaded CPU: while IDEs don't tend to use all cores, the compile will (if you are compiling). AMD's new chips look great for this, or the Ryzen 7s-9s. Get a minimum of 1TB of hard drive space; this will fill up over a year or two, very quickly if you work on large-scale projects that integrate stuff like Salesforce, so make sure you can expand the hard disks later on. An OK graphics card like a 1660 will do the job in all these. If you have to choose a lappy, the high-end Dell XPS models are good, but do not expect good support unless it is bought through a business - better than Mac support, but only marginally. Asus typically make very decent machines in their 'art' line, with good service from what I have seen in Australia. For bang vs buck, Origin PC / Metabox make very high-powered laptops that can operate well and are expandable; they are rebadged Clevos, so I'm unsure of longevity, support, etc. Hope this helps and isn't too confusing.
    2
  70. 2
  71. 2
  72. 2
  73. 2
  74. 2
  75.  @manlymcstud8588  Actually, the studies are peer-reviewed papers published in high-impact-factor academic journals, and covered the UK, Germany, Sweden, I think Australia, the few companies in the US, and several other nations. They compared the overall productivity of the employees before and after shifting to remote work, and also accounted for other factors such as a change in software infrastructure, etc. If I am a contractor you are right, I need to be responsible for my own gear, but if I am an employee I am typically paid less than a contractor, nor do I (typically) set my own times for work, have to have business insurance, have greater rights over my code/product/service delivery, and a bunch of other things that a contractor gets to control. If a company allows workers to work remotely, and you are an employee not a contractor, then it is the responsibility of the company to provide the equipment for their employees to do the job. This is also their business decision, as they can weigh the costs vs the benefits of allowing this. As to your arguments:
1) A business does not pay for any of these at all except under certain circumstances; an employee (under Australian law) does not need to pay for insurance related to the business in either circumstance. In a lot of situations an employee may live within walking distance of the office; this has no impact on the business, nor should it be taken into account in this reasoning. But if an employee works in an office, then the business has to pay for desks, computers, power, floor space and a lot of other overheads that they will pay the same or less to have in someone's home. The largest cost here is typically floor space in an office building, which is usually pretty expensive. Point 1 is not really relevant, as most of these are examples with no business impact.
2) Yes, you can in Australia too, as long as your work is not paying for it, and any expenses are tax deductible as well. But if I have to buy a computer specifically for a certain job (let's say they want it on a VPN and not used for private use, or whatever), and the computer costs ~$2k, then if I buy it and it comes off my taxable income I will only get the equivalent of ~$700 back in tax savings at a 35% income tax rate (not including GST), UNLESS this pushes me below one of the tax thresholds (then the impact will be slightly larger). This means that I, as an employee using a computer required by the employer, have just had to pay ~$1,300 (the arithmetic is sketched after this comment).
3) Advertising and data harvesting are the two that immediately spring to mind as an answer. Also, plenty of companies in the tech & industrial sectors have started paying for food and snacks for employees to retain staff and boost morale. Hell, in the industrial industries, starting about 20 years ago, a lot of companies will buy you a vehicle that you own after 5 years of employment; this is to retain staff longer term. Obviously this doesn't happen at Hungry Jack's or Maccas. Also, this isn't about convenience, this is about flexibility and the choices the company makes: if they choose to allow flexi-work arrangements, then they shouldn't use that as an opportunity to get their employees to buy office equipment to use at work, whether that be at home or in the office itself.
4) Actually, they found that interns are typically forking out ~$10k per year in the banking sector on office supplies for where they are interning, as they are (commonly) not being paid back. For someone who is not being paid in the first place, this is a lot of money. Depending on costs, yes, sometimes they can be low amounts - I rarely use pens, for example, so when I do I use my own - but if you are in an industry that uses a high amount of consumables, this can rack up pretty quickly. All this being said, the responsibility is still with the business that chose to allow remote working to make sure that if someone requests supplies, they can supply to the job's requirements; it is not the responsibility of an employee to support the company employing them. Their responsibility is to do what they have been hired to do to the best of their abilities.
5) Hell yes on the daycare. I have a private office in my house, as do most of the others working remotely in my place of work, but yeah, occasionally we get interruptions, though it's usually over pretty quickly. This is also where using a headset instead of the webcam microphone makes sense, as the headset mics can use noise reduction to cut background noise massively. The only time it's been an issue for us is when there is no headset (someone using a Mac, usually). I used to spend $10 Australian (about $7.30 USD) a month to get to and from work. I know the guys in the bigger cities pay more for transport, but even then the bulk tickets bring that back down to ~$30 AUD per month, and given our fuel is $2 a litre (~$6-$7 a gallon) I find it hard to believe you are spending > $70 a month on fuel unless you are driving a long distance.
Your point is valid: if someone wants the benefit of working from home, then that's cool, and any extras they would normally take into the office, or personal costs on business gear, should not come out of an employer's pocket. But by the same token, the business is responsible and liable for providing the equipment you need to do your job, the same way they would be in an office situation. As I stated previously, the business makes immediate benefits in this arrangement that the employee doesn't 'see'; nor am I saying employees should be given more due to those savings - the savings are the driving carrot behind a lot of businesses doing this, from being able to reduce or reallocate the floor space of their offices, to being able to hire even more staff before having to expand into new offices. This is a huge expense/saving calculation. To give you an example: when floor space in a skyscraper is being allocated to a business, it typically costs about $25k per walled-off area to put in (I think this is exorbitant), and this doesn't include monthly rent, fixtures, equipment, bond, etc. This is the reason behind so many businesses adopting the cubicle-farm (open plan) approach to offices even though it is noisier, more disruptive (a ton of quality published papers), etc., as it can save them a good couple of hundred grand in their initial office setup costs. Then the rental costs may knock you for another $100k per year or more for an office of that size. I am not saying the company should fork out more, just that they should buy for their employees what they would normally have to buy them in the office, and not rely on them providing office equipment to do their job. See the top of this for a more detailed explanation of what I was saying about studies (I was thinking of actual academically published papers, not 'studies' in the media). I do wonder about the impact micro-management made on these results, as it is much harder to micro-manage projects remotely.
I also note there have been a bunch of companies in Aus that have gotten rid of middle managers during covid. From the people I know in some of these companies, it seems these people were micro-managers, and as soon as everyone was working remotely things went a lot smoother - but this is from a single industry and is not randomly selected, so it can't count. I guess in summary my POV is that if a business wants me to work for them (remotely or in the office) I am not going to pay them for that privilege, or if I do have to fork out anything to use my own gear, I'm going to look at the wage and deduct those amounts from it when considering the job. I mean, this is why Maslow's hierarchy of needs principles even exist (the basis of the capitalist style of management structure), as this was his first recommendation to the mining companies who hired him in the 20's.
    2
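Below is a minimal C# sketch of the tax-deduction arithmetic from point 2 of the comment above, assuming a flat 35% marginal rate and the illustrative $2k price; it ignores GST and tax-bracket thresholds, exactly as the comment notes. Numbers and names are illustrative only.

```csharp
// Rough illustration of the point-2 arithmetic: an employee buying a $2,000
// work computer and claiming it as a deduction only gets back their marginal
// tax rate on that amount, so most of the cost still comes out of their pocket.
// Flat 35% rate assumed; GST and tax-bracket thresholds ignored.
using System;

class DeductionSketch
{
    static void Main()
    {
        decimal purchasePrice = 2000m;   // computer required by the employer
        decimal marginalRate  = 0.35m;   // assumed marginal income tax rate

        decimal taxSaved    = purchasePrice * marginalRate;   // ~$700
        decimal outOfPocket = purchasePrice - taxSaved;       // ~$1,300

        Console.WriteLine($"Tax saved:     {taxSaved:C}");
        Console.WriteLine($"Out of pocket: {outOfPocket:C}");
    }
}
```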
  76. 2
  77. 2
  78. 2
  79. 2
  80. 2
  81. 2
  82. 2
  83. 2
  84. 2
  85. 1
  86. 1
  87. 1
  88. 1
  89. Yeah, the bird example is terrible; you would have the bird parent, which would have a bool state or a function like CanFly, or something to that effect, to keep your inheritance levels low (I limit to 2 for anything). Or have the bird super-class as an interface, then parent the interface to another sub-type interface or class. But the main point of the example is that every sub-type should work in any situation you can use the super-type for. A better example would be ShoppingCartItem, where the shopping cart can hold many types of item that all inherit from ShoppingCartItem (there's a rough sketch of this below). The ShoppingCartItem would have virtuals that calculate tax (GST here) and the cost per unit, and have values for quantity (by unit type) and a var for the unitType enum. A lot of examples would then say you should have a subclass called 'Fruit'; tbh I think that is a bit too much, and I would use an enum for the category of item instead. Or you could use interfaces by category if extra variables were required for particular categories or category groups. This way the generic functions at the top level, and all methods and functions that accept that object type, would be consistent. In this manner, if you have a checkout object it can have a list of items and work out totals, taxes, discounts, etc. without inherently knowing what specific subtypes it has in the list. You can have the shopping cart work out its totals only when the list of items changes, then simply share that object between all the views, rather than calculating the totals and information for each view. I've been going through Rust tutorials, and having programmed in ASM/C/C++/C#, IMO Rust is definitely OOP (at least from what I have seen so far). As far as getters and setters go, the only reason I really use them is either when I am forced to (for binding), or when logic/restrictions are required in the get or set and the state as a whole may need to be modified (such as updating totals in the shopping cart) or an event fired. The problem as I see it is that traditional Java training has buggered OOP by forcing people to use a factory for everything, trying to force everything to be an object, forcing tons of layers of abstraction, etc. And unfortunately this is where a lot of people get their OOP knowledge from.
    1
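A minimal C# sketch of the ShoppingCartItem idea described in the comment above. The class and enum names, the 10% GST rate, and the GST-free subtype are illustrative assumptions, not the commenter's actual code; the point is that every subtype stays usable wherever a ShoppingCartItem is expected, and the cart recalculates totals only when its list changes.

```csharp
// Sketch of the ShoppingCartItem idea: subtypes substitute cleanly for the base
// type, and the cart totals things up without knowing the concrete subtypes.
using System;
using System.Collections.Generic;
using System.Linq;

enum UnitType { Each, Kilogram, Litre }
enum ItemCategory { Produce, Dairy, Household }   // enum instead of a Fruit subclass

class ShoppingCartItem
{
    public string Name { get; set; } = "";
    public ItemCategory Category { get; set; }
    public UnitType Unit { get; set; }
    public decimal Quantity { get; set; }
    public decimal PricePerUnit { get; set; }

    // Virtuals so a subtype (e.g. a GST-free item) can override the rules,
    // while still working anywhere a ShoppingCartItem is expected.
    public virtual decimal Subtotal() => PricePerUnit * Quantity;
    public virtual decimal Tax() => Subtotal() * 0.10m;   // GST, assumed 10%
}

class GstFreeItem : ShoppingCartItem
{
    public override decimal Tax() => 0m;   // still substitutable for the base type
}

class ShoppingCart
{
    private readonly List<ShoppingCartItem> _items = new();
    private decimal _total, _tax;

    public decimal Total => _total;
    public decimal Tax => _tax;

    public void Add(ShoppingCartItem item)
    {
        _items.Add(item);
        Recalculate();     // totals only recomputed when the list changes
    }

    private void Recalculate()
    {
        _tax   = _items.Sum(i => i.Tax());
        _total = _items.Sum(i => i.Subtotal()) + _tax;
    }
}
```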
  90. 1
  91. 1
  92. 1
  93. 1
  94. I had to get data from a PIC database once. It was hellish. For those that do not know, each field on each record can hold one piece of data of every type, so not only do you need to know the table and field (which the query language doesn't seem to like giving you), but you also have to know what datatype the specific data is and hope like hell the interface forced the users to use specific datatypes for the fields (they never do). I wished it had more tools like SQL does. That being said, when you deal with objects (where the data is always an n-dimensional object which may then relate to other objects of varying dimensional depth) vs 2-dimensional related data, a relational database may not be appropriate. For instance, VelocityDB is a NoSQL db that is used in air traffic control due to its speed and reliability features; each aeroplane has a distinct set of objects associated with it which are never related by themselves. So here is a different method of db interaction than using SQL or a 2D relational model like most SQL dbs use. These dbs generally interact with your development language in its own native manner (in most cases). MongoDB now uses Realm's (NoSQL) paid cloud sync as its backend instead of the original MongoDB; the foundation of Realm is a completely open-source project, and it uses APIs in your language to access it in an object-oriented way. This includes the migration and the schema, and it means that if you really want, you can change the schema in dynamic ways (personally I would avoid this ;). So I'm unsure why he thinks there is no competition; there is, and in certain areas it is actually becoming more prevalent. In this manner you can interact with your data in the same way you would any native object you had created, rather than using a different language between your code and your data.
When I ran benchmarks between Postgres, MSSQL, Realm, VelocityDB, Raven and others, loading and reading 1 million small animal records (40 data points + % accuracy + history + the pedigree, with all of their own records recursively), the slowest were the SQL dbs. In our testing, loading the animal data into the dbs took the SQL dbs hours (MSSQL would crash unless you treated it special, Postgres coped OK); on the same server, Raven took ~31-40s, Realm took 24-26s, and VelocityDB took 12s. These were all saved, then reloaded fresh, and random lookups were then conducted and benchmarked again. SQL would take ~15-40s for a single animal record with 6 generations of pedigree (up to 65 individuals), something that can slow things down immensely when you need to push 500-1000 animals through the crush; just the loading times for the animal data worked out to around ~7-10 hours, let alone then checking the data, recording ~5-10 trait data points, and herding them into the correct areas and waiting for the data to be saved before the next animal loaded. When we tested using the NoSQL db, it could pull up all the same info in ~15-25ms (most of this being the I/O time). So yeah, no idea where he gets the idea that there is no competition with SQL.
He seems to be conflating SQL with SQL tools for writing queries, the relational db model, and dodgy methods of using SQL inside other languages (using strings for queries is a good way to get hammered by injection attacks - please start setting up and using stored procedures; there's a sketch after this comment). I still work with SQL on a daily basis and write new SQL queries by hand, as I have never found a tool that works really well at making good, optimised queries. It's not hard (heck, I found React and Angular harder back in the day).
    1
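A small C# (ADO.NET) sketch of the "use stored procedures and parameters instead of string-built SQL" point above. The stored procedure name, parameter and connection string are made up for illustration; the pattern is what matters - values go in as typed parameters and are never concatenated into the query text.

```csharp
// Sketch of calling a stored procedure with a typed parameter via ADO.NET
// (Microsoft.Data.SqlClient). Procedure and parameter names are hypothetical.
using Microsoft.Data.SqlClient;
using System.Data;

class AnimalLookup
{
    public static void LoadPedigree(string connectionString, int animalId)
    {
        using var conn = new SqlConnection(connectionString);
        using var cmd  = new SqlCommand("dbo.GetAnimalPedigree", conn)
        {
            CommandType = CommandType.StoredProcedure
        };
        // The value is sent as a typed parameter, never spliced into SQL text,
        // so user input can't rewrite the query.
        cmd.Parameters.Add("@AnimalId", SqlDbType.Int).Value = animalId;

        conn.Open();
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            // read pedigree rows here
        }
    }
}
```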
  95. 1
  96. 1
  97. 1
  98. 1
  99. 1
  100. Sorry, but the only academic papers I have seen showing any possible cause of autism found a significant correlation with how close the mother lived to a main road during the later terms of pregnancy. Also, chemical farming started in the 1930s-60s, and DDT was banned in '72, for crying out loud. If you look at a lot of these conditions you mention, they tend to be more prevalent in cities, where the food eaten is largely the same as what is eaten in smaller towns and rural communities (due to the distribution method most supermarket chains use). This negates the food being the cause (except in some specific provable cases), and rather indicates a difference in the environment of the area. An MD != a scientist, particularly in the US. I know people who are researchers in this area, and this is pop-science and in some cases very/outright misleading. To give an example: we now have a whole spectrum of autism, many of whom function fine in society. If you go back to the 1970s, the only ones diagnosed were those unable to function (like the example of the child given). What about the ones who were OK in company but had number obsessions later on? Or the ones who simply have trouble reading emotions on other people's faces? They were NOT diagnosed as children and were not counted toward the stats quoted in the 70s. The doctor here has no evidence of how this was accounted for, he just says they were not missed. The endocrine system is also interesting, as the major normal factor is distance to the coast and the amount of seafood consumed by a population. However, any exposure to radiation will also have an effect on the thyroid; for example, trace amounts of natural uranium found in drinking water will have a major effect on the median thyroid health of the drinkers. This is correlational, but causation was proved in the early radiation trials the US government conducted on the army. Fast food has been shown to have a direct impact on both type 1 & 2 diabetes. Coeliac disease is most prevalent in Ireland (~10% of the population), and has been for at least a good 300 years, where chemical farming started far later than in the US. Also, a significant number of diagnosed coeliacs who were tested in a double-blind published trial (in the journal Nature, I believe) were found not to be affected by the condition at all, but had been misdiagnosed. Sorry, but a lot of this is junk. Yes, some chemicals are really bad for you; yes, Monsanto is prolly a major culprit in a lot of issues. However, you need to present scientific research, not an opinion piece. You also cannot lay everything at its door, as there are a lot of factors involved, and until you account for those you cannot rule them out.
    1
  101. 1
  102. 1
  103. 1
  104. 1
  105. 1
  106. Hmm, well, the bloke at MS I know was working waterfall post-NT4. I'm unsure what on specifically, but he's an assembler programmer, so it would be very low-level boot/driver/OS work. I'm unsure why you think a small team means waterfall wasn't followed? I was part of a 3-person team working on application software and we had to do the project using waterfall. Doesn't matter anyway, it just seemed an odd assumption. OS/2 Warp (around the '95 days) was actually pretty decent IME, but IBM never bothered to really push it for non-business use, which is a shame, as it was working considerably better than Win 95 and pre-SP2 98. Getting decent software for it was a bugger though. One of the more famous groups that used waterfall for many years (they may still) was Toyota (kanban boards were used within their waterfall system) for their manufacturing software. Cisco has used, and still uses, both agile and waterfall for different things. As mentioned earlier, a LOT of older 3D rendering and engineering/architectural software was developed using waterfall. I have worked in several jobs where fully functional software has been released and one major version has been supported and updated over more than 10 years, and it was developed using waterfall. I have also worked agile in small and larger teams, and have also supported original waterfall projects using agile for updates. Now, for patching, updating, anything with ongoing development and large change, etc., I prefer to use agile. But if we want a robust code structure for a new project, particularly if the end-user is not going to be involved in the development at this stage, then I think it's either CI with a shedton of qualitative & edge-case tests and code reviews/pair programming (or other fallbacks to reduce poor code quality), or mapping the planned software out using a waterfall methodology. If you then use agile for the implementation stage, that is fine, but the amount of crap I have seen in code where one thing has been done a few different times in the same codebase, because a new team member didn't know it had already been done elsewhere, is pretty much something I see on every agile project I have worked on (none have been CI/CD). The rate of this issue occurring in waterfall was greatly reduced, as there is less unplanned change. Perhaps you could do a vid on your favourite methods for ensuring the existing team is replaceable with others who do not have full coverage of the code, and how you would structure something so that the wheel is not reinvented for a new feature when it exists elsewhere, even when it may not be obvious? I go with the flow for the most part. If I find we need a lot more planning, I'll do the planning. If something is a pain to modify each time we touch it, then I will flowchart it, mapping out what it is actually doing and every point of the software that touches it. If it is a small change I will just get stuck in and do it. If it is large, I will change the flowchart and try to push it to be more modular and efficient (and more obvious). The above has resulted in one piece of layered agile import code going from taking 2-3 hours (often getting timeouts) and using 4-6GB of RAM, to taking 10 minutes and using 500MB. It also avoided a couple of edge-case gotchas later on that would have killed it (and would not have been caught by the tests) when the imported data was a bit weird (this was a very oddly designed data set and was very mutable, but the client had no control over it).
Horses for courses: whatever is the simplest way to make it a robust long-term solution should be the way to follow. Usually I use agile, but for more planned, less mutable work I will use a general-overview waterfall, with agile for the actual implementation phase and to handle any change to the overall plan.
    1
  107. 1
  108. 1
  109. 1
  110. 1
  111. 1
  112. 1
  113. 1
  114. 1
  115. 1
  116. 1
  117. 1
  118. 1
  119. 1
  120. 1
  121. 1
  122. 1
  123. 1
  124. 1
  125. 1
  126. 1
  127. 1
  128. 1
  129. 1
  130. 1
  131. Tesla: actually, this is the wireless quote: "When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole" - nothing to do with size or what it is used for. More like an AI that hits a singularity, not a voluntary one but an inevitable one. He did describe a communication device small enough to fit in a pocket, but nothing to do with wireless in that same device. Lucky Tesla didn't live too long; he was a major backer of eugenics and was always trying to create a giant death ray from electrical currents. Refrigeration is incorrect as well: "Artificial refrigeration began in the mid-1750s, and developed in the early 1800s. In 1834, the first working vapor-compression refrigeration system was built. The first commercial ice-making machine was invented in 1854. In 1913, refrigerators for home use were invented." You also ignore meat lockers, which my family had been using since Aus was settled, only stopping in my grandparents' time, and the ice cellars used for refrigeration in Europe. Arthur C. Clarke predicted geosynchronous orbit a few years before it was done, after talking to Russian and US astrophysicists. Also, newspads and datapads were a staple in science fiction from the 1940s onward; Arthur Conan Doyle used one in some of his own science fiction. Clarke was pretty damn visionary in a lot of ways. Asimov - yeah, legend. He was also a lecturer and the most published author of the 20th century. He also hated computers; he is the origin of the story of someone throwing a computer out the window, which he did, going back to the typewriter. :) Apple, 1987: we actually already had large tablets (IBM et al.) coming out in '89, and we already had basic video. Again, we had these concepts from sci-fi since around the 40s-50s, including the voice recognition. Nearly all of these concepts were covered by Harry Harrison in detail in the 70s & 80s. Kurzweil also based his sun hypothesis on Dyson (RIP) spheres (1960s-70s). Where's William Gibson? In the 70s he predicted the online world and VR, as well as cybernetic modifications, and the fact that it was mostly corporatised and had popups everywhere; he also predicted groups of 'hidden' areas on the web, including a group hidden in 'the walled city' that would take part in social anarchy. I think if you are going to mention their predictions you should also mention all the ones that would never work (Tesla's death ray, all the things Apple said it could do when it couldn't, or claiming inventions when they were already invented (tablets, GUIs, etc.)). Also, you should probably mention the people who actually invented stuff using techniques different to what others proposed, i.e. actual WiFi came from astronomers in Australia and works significantly differently to what Tesla was proposing (Tesla: electronic consciousness almost, little to no physics involved, just charged particles); those 6-8 guys figured out how it would work and what it could be used for from analysing quasars.
    1
  132. 1
  133. 1
  134. 1
  135. 1
  136. 1
  137. 1
  138. 1
  139. 1
  140. 1
  141. 1
  142. 1
  143. 1
  144. 1
  145. 1
  146. 1
  147. 1
  148. 1
  149. 1
  150. 1
  151. 1
  152. 1
  153. 1
  154. 1
  155. 1
  156. 1
  157. 1
  158. 1
  159. 1
  160. 1
  161. 1
  162. 1
  163. 1
  164. 1
  165. 1
  166. 1
  167. 1
  168. 1
  169. 1
  170. 1
  171.  @d.sherman8563  Having done both professionally for a number of years, as well as programming on software projects for over a decade, I know that the requirements for programming are different to FE/FED/scripting on websites, as you have several elements that are usually not present, or present in much lower quantities, in FED, due to the nature of the scripting languages used. This changes the requirements of the system. Yes, FE can be challenging, particularly with so many JS frameworks and libraries that often conflict with each other, and the loosely typed nature of most scripting languages; however, the scope of difference between OpenGL programming and using CSS/JS is very large, and the difference between doing C# db manipulation and displaying the output data in a React table is very large too. Some things cross over, but not everything. How many projects you have in the one solution, for example: a large-scale multi-national website will have 3, maybe 4 when done efficiently, maybe 5 if you have to integrate with a POS (not point of sale ;) like SAP. The herd management suite and other software projects I used to work on had 10+ in each final solution, due to the differences in complexity, industrial hardware integration, and functionality; we also had installers, libraries we created, the updater, obfuscation, and other utility-based systems that had to be integrated into each piece of software. Power/RAM in the software cases above makes the difference between a 20+ minute compile time and a 2-minute compile time. Memory makes a difference to how much the software can hold in RAM before falling over. Compatibility makes the difference between shipping to 20% of the market and 80%. Long-term reliability concerns will raise your costs of shipping updates and increase your hardware expenditure. Connectivity will limit the access speed to your server. There are a lot more examples; the reason this was pointed out in the first place was because all the emphasis was placed on components that would attract a FED, in a system that is below par for the average dev machine.
    1
  172. 1
  173. Prior to the articles you use, Putin had lamented the splitting up of the USSR and the end of the Cold War; he believed that the USSR would have 'won' the Cold War and believed in Russian superiority as a race. He stated he wanted to reinstate the Russian empire, and to this end he planned the invasion of Crimea. He is a wannabe emperor, so the justifications he uses are exactly that: he wants x, so he will use any argument to get it or to justify it, including faking attacks (the already-disabled man in the fake bombing, etc.). He then cries wolf when the other nations defend themselves and blames it all on 'foreign influences' - similar to the Nazis blaming the Jews, Romanians, and foreign-minded people for all their issues pre-WW2. He used this justification for the invasion and suppression of the Georgian people, using a similar tactic to Crimea: 1. plant agents within a city on the border and bribe officials; 2. claim the whole country wants to rejoin, based on these agents stirring up trouble and protests in that city, and use corrupted officials to sway things in the direction of Russia; 3. invade and leave behind FSB agents to quell any citizen protests and further the narrative of a foreign influence leading 'Russians' astray in the country, and that the true citizens want to 'reunify'. Added to this is the estimation that Russia is almost out of some of its most lucrative mined resources, and these resources are also found in quantity in Ukraine, etc. This gives a pretty big motive for invasion.
    1
  174. 1
  175. 1
  176. 1
  177. 1
  178. 1
  179. 1
  180. 1
  181. 1
  182. 1
  183. 1
  184. 1
  185. 1
  186. 1
  187. 1
  188. 1
  189. Dunno about your opinion on DRY. I had an old legacy function I had to update that was 5,000 lines long (written by a VB programmer); 90% of it was repeated code with minor alterations, and within ~1 hour I had got this down to ~200 lines of code total (there were outside effects & params I didn't want to touch), with the original function being ~100 lines long and then around 3-5 other functions that took the modified settings and did the processes accordingly (a rough sketch of this kind of clean-up is below). In this case DRY helped the process run about 10x quicker and made it so much easier to read. But there are extremes people take it to that are ridiculous. BUT if you look at id's source code for Doom, most functions are very short (a lot under 10 lines) and do only one thing. Their naming conventions though... ew. In another example from the VB code, they had loops doing the same thing in different spots (literally identical originally); at some point they had updated how this was done in most of the areas and forgotten a couple, which caused intermittent issues. They also used foreach loops that were very long and had continue buried inside nested ifs multiple times in the code. I limit myself to 2 levels max within loops and aim for a 40-line max function length, but I don't worry too much if it is obvious what is going on and it is a bit longer; tests are often longer, for example. BUT I also do not make a function for 1-4 lines unless it will be called from multiple other functions and has to work in the same way each time. This way, when change is inevitably enacted, you only have to change one function and don't have to go hunting. Java is a PITA and inherited a lot of poor decisions from C, and IMO most Java training & books are not a good example of setting up OOP principles; a lot of Java books tend towards over-complication (factories... as far as the eye can see...) and an over-reliance on abstraction over procedurality. I found the uni courses in C++ vastly better for OOP training than the Java ones.
    1
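A rough C# sketch of the kind of DRY clean-up described above: near-identical blocks that differ only in a few values collapse into one parameterised helper driven by a settings object. The report/section names are invented for illustration and are not from the original legacy code.

```csharp
// Before: the same ~30 lines repeated per section with only the title, source
// and rounding changed. After: one helper, called once per settings entry, so
// a logic change is made in exactly one place.
using System.Collections.Generic;

class ReportBuilder
{
    record SectionSettings(string Title, string Source, int DecimalPlaces);

    public string Build()
    {
        var sections = new List<SectionSettings>
        {
            new("Sales",     "sales_table",     2),
            new("Inventory", "inventory_table", 0),
            new("Returns",   "returns_table",   2),
        };

        var output = "";
        foreach (var s in sections)
            output += BuildSection(s);   // change the logic once, every caller gets it
        return output;
    }

    string BuildSection(SectionSettings s)
    {
        // ...the formerly copy-pasted body lives here, driven by the settings...
        return $"{s.Title}: data from {s.Source}, rounded to {s.DecimalPlaces} dp\n";
    }
}
```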
  190. 1
  191. 1
  192. Processor speed is required for fast compiling; on large projects this may be necessary because you need to fix bugs and recompile all the time. It's not necessary for small web projects, but large-scale dbs with ML etc. may require it for more efficient finds, searches and learning. RAM may become necessary if you work on a project that requires large amounts of it or runs a lot of services. Extra RAM also allows you to run on-the-fly code analysis tools, large-scale services that may later be deployed on servers, etc., i.e. if you are running a database with 2 trillion entries, ML and indexing may require large amounts of RAM to index efficiently. Unix != Linux - there are a number of very important differences and a few incompatibilities; while Bash etc. may have the same commands, under the hood it is very different. Also, if you wish to create a program with a UI, it is NOT transferable between MacOS and Linux. MacOS has a 20% speed reduction in processor power; this means slower compiles and slower single/multi-threaded program function. GitHub is good, except when you work on anything that requires security or is related to trade secrets/work you don't want someone else being able to access. When you work on complex projects they can get large VERY quickly - I have a good 2TB of project space on a secondary drive - so I would recommend a Thunderbolt hard drive, as then you can work on the project in realtime from an external drive. And backups: make lots of backups. lol, Sketch! POS software, not as badly programmed as Adobe's, but not great; it's only in use in Mac environments and not used very much professionally. Macs are only in high use in America - incidentally, San Francisco is one of the highest points of use - anywhere else in the world it is less than or around 20% usage. For Mac programming you can run MacOS in a VM; this way you can run more powerful CPUs and more RAM than Macs allow and actually get past the 20% issue. You can also program frontends & backends cross-platform in C# now: frontends in Mono/Xamarin et al. and backends in .NET Core (.NET Core is not a fully developed ecosystem yet, but it's showing promise). Another issue with the Apple ecosystem is the poor hardware design of their laptops, with bending boards, the crap keyboards (the current one is better, but more like a lappy from the early 2000s than the 90s like the previous one), the T2 chip interfering, chips coming loose due to heat, and lower-spec chips being used (i.e. a chip designed for short bursts at 5V used for a constant 5V stream). Add in their atrocious customer support compared to any other professional system (HP/Dell/Lenovo next-day onsite servicing, same-day servicing for higher end, vs Apple's lack of technical support and habit of buggering up their software).
For PCs: a standardised x86 assembler language, meaning your higher-level languages tend to be more stable with fewer major changes; in the last 10 years Apple programming has had 3 major shifts that affected me and screwed up a lot of projects. You also have Apple overriding FreeBSD (the UNIX backbone of MacOS) core stack platforms like networking, whereas Windows typically subscribes to industry standards except in certain cases (OpenGL etc.). You have a wider range of capabilities on PC - I can run a boot of Linux, Windows and MacOS (using a VM) on a wide range of hardware, allowing me to test different configurations. The ability to buy more stable hardware and replace parts when needed. Higher-res screens with OLED and the ability to send true 10-bit/12-bit image signals to monitors (MacOS only allows 8-bit+FRC and their screens are not DCI-P3 accurate). The ability to load whatever software you want - MacOS tries to limit devices to using their store. The ability to use random PCI cards - these can be Linux/Windows cards, some may be for a specific OS or flavour of Linux, but specialist boards can take input from industrial/mining equipment, vehicles, and scientific instrumentation, and may include a wide variety of port types (like d3, sfb, custom optics, etc.); I have never seen these work on a Mac, and they are not usually even recognised on the MB.
    1
  193. 1
  194. 1
  195. 1
  196. 1
  197. 1
  198. 1
  199. 1
  200. 1
  201. 1
  202. 1
  203. 1
  204. 1
  205. 1
  206. The examples on this site are not brilliant. Your code examples are a bit fraught, because you use examples where your code already breaks the guidelines, and then you use them not working to strawman the guideline argument. SOLID only really works if you actually follow all the principles, as each relies on the others in part.
Single responsibility - your render_into may be quicker to script as it is, but if the window object changes then it's screwed and you have to modify this along with your window object changes. If you returned the relevant information and your window object could take it and render the data independently, then you would be OK, as any large window changes would be self-contained and the output from the other object would be self-contained. As long as your output is normalised, you are fine. Every legacy project I have worked on that has not followed this guideline has all kinds of bugs occurring, because the load is shared between different objects and classes (and sometimes between projects), and at points in the past they have modified one but not the other; no error was thrown, so the issue was missed until, multiple successive changes later, discrepancies showed up. E.g. totals in a sidebar and on the main part of a checkout page not being the same. The Doom source really does follow single responsibility well: each function and object has its own scope that it sticks to. The locality of behaviour comes with good function naming (something I think id could do better), so you know exactly what is happening at each stage of the locality. For example, your render_into function is probably running from a locality where you are doing rendering. So why not return the data from the function and do the rendering in the rendering locality, instead of calling it from a rendering-unrelated object? In this case it also means your stack would be a bit better, because if there is an internal object error it occurs on the object, and if there is an error with the window, it occurs in the area relevant to doing the rendering for the window.
Open-closed limits bugs to the new changes; I see this as mostly for finished libraries/sections of code - the library does its one thing and works. So for example, I have a library that allows reading from a file and finding data within it efficiently (reading 6GB of zipped XML). This is finished code and is in use in multiple projects, so if, for example, I wanted to update it to read data into JSON or create SQL inserts from the data being read, instead of modifying the library as it stands, I could simply extend it in the one project, or within the library project if more projects need it, without touching the existing codebase (a rough sketch of this extension approach is after this comment). Same if I wanted it to read JSON as well: I would extend the existing project to allow another config option and add the code that reads the JSON file into my existing objects and passes it to the existing output functions. The function of the library is not changing fundamentally, so why would we touch that part of it and risk breaking it for every project currently using it? This will not work all the time if you are not using the S in SOLID, because you have that overflow of responsibility and by design you will have to modify the original code, as you showed in your example.
Liskov - you don't HAVE to write child-parent classes, but if you do, don't screw it up. If your child cannot be used in a function where the parent is expected, then you have screwed it up. Most IDEs and languages follow this principle inherently. So if you need child class x in a function, then ask for that, not the parent. If you have a function that needs to determine what child class is in use and do different things, then either overloading or a switch/hashtable lookup can work.
Interface segregation - keep it simple, keep it stupid.
Dependency inversion - keeps the code self-contained. Your example on the interface was a good one. If you do the S and this, then you can extend the interface with custom code but never touch the classes themselves, meaning you can achieve the O. I think it is down to the programming team to come up with the code rules they want to follow, but one of the biggest problems I have ever seen is exactly your example of doing an unrelated task inside an object.
    1
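A rough C# sketch of the open-closed extension approach described above: the existing reader library is left untouched, and new output formats (JSON, SQL inserts) are added as new implementations of an output interface. All type names here are invented for illustration, not the commenter's actual library.

```csharp
// Open-closed sketch: extend by adding new implementations, don't modify the
// finished, in-use library code.
using System.Collections.Generic;
using System.Text.Json;

// --- existing, finished library code: not modified ---
interface IRecordOutput
{
    string Format(IDictionary<string, string> record);
}

class ZippedXmlReader
{
    // Reads the zipped XML and yields flat records (body omitted in this sketch).
    public IEnumerable<IDictionary<string, string>> ReadRecords(string path)
    {
        yield break;
    }
}

// --- extensions: new behaviour added without touching the classes above ---
class JsonOutput : IRecordOutput
{
    public string Format(IDictionary<string, string> record) =>
        JsonSerializer.Serialize(record);
}

class SqlInsertOutput : IRecordOutput
{
    public string Format(IDictionary<string, string> record) =>
        $"-- INSERT statement would be built here from {record.Count} fields";
}
```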
  207. 1
  208. 1
  209. 1
  210. 1
  211. 1
  212. 1
  213. If inheritance isn't good for it, then don't use it. If you are building your code well, then step 17 would be 1-3 levels down from step 1 in inheritance, and you can easily check the conditions on step 1 in relation to your object. If you are going beyond that, then you might need to refactor your code and maybe think about it differently (or stop listening to Java training). Maybe an interface would suit it better, or maybe the objects should be broken up a bit more, or that function you dumped into the object should be taken out and used for the whole program. The issue here lies with the way you have designed the code, not with the fact that you can inherit from another class. Take programming a game, something done more often in OOP these days than not. You may have Entity -> DynamicEntity -> PlayerCharacter, and alongside the PC may be NPCs (a rough sketch of this layout is after this comment). An entity may just have a var for the name of the mesh used, a position in worldspace co-ords (often a struct or object itself), an overlap function IsOverlapping(Entity obj), etc. A dynamic entity may have the list of animations, bone structure, min/max movement speed, health, killDistance or whatever. The player character may have number of lives, functions that fire events when the PC is killed, points total, weapons and inventory, etc., whilst the sibling NPC might just have a string for the mesh used as a weapon, aggro factors, etc. I remember the A-Life system Stalker came up with that had all creatures carry hunger/satiation, fear, and risk values, as well as their food types, etc.; this was applied to all creatures in the game, right down to human NPCs. The creatures would take bigger risks depending on their hunger level, so you could sit on a hill and watch a bunch of dogs hunt down the huge pig things, or the dogs would avoid them because they still posed a risk and the dogs had not reached the point where they would risk it. This sort of system is ideal for inheritance. But if each creature has a distinct set of values and works in a completely different way, then you wouldn't use it.
    1
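A shallow-hierarchy C# sketch of the Entity -> DynamicEntity -> PlayerCharacter / NPC layout described above. Member names, the overlap test and the A-Life style drive values are illustrative assumptions only.

```csharp
// Two levels of inheritance: shared data/behaviour lives as high as it applies,
// and siblings (PlayerCharacter, Npc) only add what is unique to them.
using System;
using System.Collections.Generic;

struct WorldPosition { public float X, Y, Z; }

class Entity
{
    public string MeshName = "";
    public WorldPosition Position;

    public bool IsOverlapping(Entity other)
    {
        // placeholder test; a real game would use bounds/colliders
        float dx = Position.X - other.Position.X;
        float dz = Position.Z - other.Position.Z;
        return (dx * dx + dz * dz) < 1.0f;
    }
}

class DynamicEntity : Entity
{
    public List<string> Animations = new();
    public float MaxSpeed;
    public int Health;
    public float KillDistance;
}

class PlayerCharacter : DynamicEntity
{
    public int Lives = 3;
    public int Points;
    public List<string> Inventory = new();
    public event Action OnKilled;            // fired when the player dies

    public void TakeHit(int damage)
    {
        Health -= damage;
        if (Health <= 0) OnKilled?.Invoke();
    }
}

class Npc : DynamicEntity                    // sibling of PlayerCharacter
{
    public string WeaponMeshName = "";
    public float AggroRadius;
    public float Hunger;                     // A-Life style drive values
    public float Fear;
}
```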
  214. Several points: Apple signed up to the voluntary tech consortium in the EU 10 years ago, and the first proposal was to use the then-proposed USB-C as a standardised connector; Apple championed this. Apple then kept trying to delay the action (the last attempt was ~7 years ago) and use loopholes. The EU has now enforced the decision made 10 years ago. Please note this was a consortium of IBM, Apple, ARM, etc., NOT a government decision on what tech to use; Apple agreed to and promoted the use of USB-C. Another point: you talk about USB-C as a communication standard, which it is not. USB-C is the connector itself; there are then various standards that can run over USB-C, such as DisplayPort and Thunderbolt. As long as the port and cable support the same standard (and they are usually backwards compatible) then you are fine, and it also means you have a port whose use can change over time and have functionality added to it. The laws allow this to change. MagSafe is 1. not new; 2. not safe - if someone has a pacemaker they cannot use MagSafe (hell, even Apple warns about using it in the US, which has bugger-all safety laws), as it interferes with the pacemaker's operation; 3. it uses a LOT more power than what the phone receives, so if you are wirelessly charging at 36W the phone may only be receiving 5-20W to charge with. It's wasteful, creates excess heat, and is pointless, as it has to be physically present anyway (the controller and port for USB-C are not large and take up a lot less room than some of the unnecessary rubbish in phones).
    1
  215. 1
  216. 1
  217. 1
  218. 1
  219. 1
  220. 1
  221. 1
  222. 1
  223. 1
  224. 1
  225. 1
  226. 1
  227. 1
  228. 1
  229. 1
  230. 1
  231. 1
  232. 1
  233. 1
  234. 1
  235. 1
  236. 1
  237. 1
  238. 1
  239. 1
  240. 1
  241. 1
  242. 1
  243. 1
  244. 1
  245. 1
  246. 1
  247. 1
  248. 1
  249. 1
  250. 1
  251. 1
  252. 1
  253. An Aussie bloke (Gareth Lee) had this issue with his Apple battery 2 years ago, and he also warned Apple at the time. There was another case in the UK where someone replacing the battery themselves accidentally punctured it by removing the sticky tape on it (part of the plastic casing broke on removal) and it exploded, and in the last months of 2017 and the beginning of 2019 there were several cases of Apple batteries causing fires in stores and homes. Apple are currently being sued (as of Feb, I think) over an apartment fire that killed 1 person in New Jersey, blamed on an iPad battery fire in 2017; I don't know if the investigation placed the source of the fire on the iPad, but I suspect they wouldn't have a case otherwise. Lithium-ion batteries are not safe when the trickle charge is not stopped once they are already at 100%, when the walls are not correctly made, or when the cells are too close together. Samsung learned this the hard way and then did one of the most successful recalls (well, the second one anyway) in history, for both speed and percentage returned. It took them way too long to get there from the initial issues occurring, but their response was pretty damn good compared to Apple's 2+ year average. I can personally vouch for 2 instances of battery overheating and expanding in normal use in laptops; these occurred in 2015 & 2016, and Apple was notified at the time (so 3 & 4 years ago). We get bloody hot in Oz though, so maybe the occurrence is higher here due to thermal runaway? The issue, as I see it, is that Samsung, Apple, and many others continue to use the same battery suppliers who have cut corners in the past and made unsafe batteries. Anyways, just my opinion and experience.
    1
  254. 1
  255. 1
  256. 1
  257. 1
  258. 1
  259. 1
  260. 1
  261. 1
  262. 1
  263. 1
  264. 1
  265. 1
  266. 1