Comments by @GSBarlev on the "Fireship" channel.
Eh. Jensen wants you to believe that you need to pay him a fortune for a 4090 and sell your kidney to afford the power bill, but the truth is, not really. I've run Stable Diffusion XL and Mistral 7B on my local computer, which, sure, has 128GB of RAM, but only a 12GB last-gen Radeon GPU. Hooked up a Kill A Watt, crunched some numbers, and I can answer maybe a dozen proompts for about $0.14 in electricity. Sure, GPT-4 Turbo, Sora and Gemini are an order of magnitude or two more expensive, but they're also running on dedicated (and thus higher-efficiency) server hardware. There are plenty of reasons to be anti-AI, but environmental impact isn't one of them, especially since, once people realize that these capabilities are massively over-hyped, demand is going to be nowhere near the numbers that Altman and Huang are forecasting.
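
For anyone who wants to sanity-check that, here's a minimal sketch of the wall-meter arithmetic: watts at the plug times time per prompt gives kWh, times your electricity rate gives dollars. Every number in it (system draw, minutes per prompt, $/kWh) is an assumed placeholder, not a measurement from the comment; swap in your own Kill A Watt reading and utility rate.

```python
# Rough per-prompt electricity cost from a wall-power meter reading.
# All values below are hypothetical placeholders, NOT measured numbers.
SYSTEM_WATTS = 500        # assumed whole-system draw under load (Kill A Watt reading)
MINUTES_PER_PROMPT = 10   # assumed time for one generation on a modest GPU
RATE_USD_PER_KWH = 0.15   # assumed residential electricity rate

def cost_per_prompt(watts: float, minutes: float, rate_per_kwh: float) -> float:
    """Electricity cost in USD to answer a single prompt."""
    kwh = watts * (minutes / 60) / 1000   # watt-hours -> kilowatt-hours
    return kwh * rate_per_kwh

per_prompt = cost_per_prompt(SYSTEM_WATTS, MINUTES_PER_PROMPT, RATE_USD_PER_KWH)
print(f"~${per_prompt:.3f} per prompt, ~${12 * per_prompt:.2f} for a dozen")
```

With these placeholder values the total lands in the same ballpark as the ~$0.14-per-dozen figure above, and the formula makes the broader point explicit: the cost scales with how long the GPU stays pinned and what your utility charges, not with which model you happen to load.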