Youtube comments of Shawn Fumo (@ShawnFumo).
-
@user-ze2zm4sz1b you don’t actually need NFTs for this. What you can do is use digital signatures. For a photo, the camera takes the original raw image and runs it through a one-way hash: it takes the entire image data and metadata (time, location, etc.) and produces a small sequence of data. If even one bit of the original were different, it’d change the resulting hash.
Then the camera uses a secret key to encrypt that hash, which can then be decrypted with the manufacturer’s known public key.
So now you can decrypt that hash and compare it to a hash of the current image and make sure they’re the same. If they are, it is exactly the image the camera took, unless the secret key was stolen somehow. Then if you need to prove the authenticity, you can keep the old file around (like having the negatives with a film camera).
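A minimal sketch of that hash-sign-verify flow. Real cameras would use public-key signatures (e.g. Ed25519) so anyone can verify without the secret; HMAC with a shared secret stands in here so the sketch runs with only the Python standard library, and the key and metadata values are made up for illustration.

```python
import hashlib
import hmac
import json

# Stand-in for the camera's private signing key (hypothetical value).
CAMERA_SECRET = b"demo-secret-key"

def hash_image(image_bytes: bytes, metadata: dict) -> bytes:
    """One-way hash over the raw image data plus its metadata."""
    h = hashlib.sha256()
    h.update(image_bytes)
    h.update(json.dumps(metadata, sort_keys=True).encode())
    return h.digest()

def sign(image_bytes: bytes, metadata: dict) -> bytes:
    """Camera-side: sign the hash (HMAC stands in for a real signature)."""
    digest = hash_image(image_bytes, metadata)
    return hmac.new(CAMERA_SECRET, digest, hashlib.sha256).digest()

def verify(image_bytes: bytes, metadata: dict, signature: bytes) -> bool:
    """Viewer-side: recompute the hash and check it against the signature."""
    expected = hmac.new(CAMERA_SECRET,
                        hash_image(image_bytes, metadata),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

raw = b"pretend raw sensor data"
meta = {"time": "2024-05-01T12:00:00Z", "location": "Boston"}
sig = sign(raw, meta)
print(verify(raw, meta, sig))            # untouched image verifies
print(verify(raw + b"\x00", meta, sig))  # a single changed byte fails
```

The point of the demo is the last two lines: flipping even one byte of the image (or the metadata) produces a different hash, so the stored signature no longer matches.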
The only issue is that the different manufacturers need to build this into the cameras.
Though you could sign any image you want yourself. It wouldn’t prove that you made it originally and that it wasn’t a deepfake by you or someone before you, but it’d still prove that you signed it. So the White House could sign any images or videos to prove they are approved by the gov. That could help for that Ukraine example, where it’d be suspicious if it wasn’t signed by the Ukraine gov.
But that still needs its own infrastructure. We have SSL (https) built into websites, but our image viewers/players aren’t checking signatures to warn us. And re-encoding a pic/video would kill a signature. When you upload a png file to Facebook, they turn it into a lower quality jpeg, YouTube makes multiple versions of videos at diff bitrates, etc. YouTube could tell you that the original was signed by X. You’d have to trust them (unless they let you download the original file), but it’d still be better than what we have now.
I think Blockchain is trickier since while it could show a chain of custody, it can’t show you how a file was modified at each step unless you store all the versions somewhere, which could be large. And if you store it centrally, you don’t need blockchain since you can just sign each version and stick them all in one file anyway.
Hopefully diff companies will start to figure out some kind of standard as we get more and more fakes happening. It took a while for secure websites to become the norm. We just don’t have a ton of time considering how fast this is all moving.
-
I do think we're going to be in for a hard time with stuff like this. Definitely not a bad idea to set up a code word with family in case someone spoofs a voice, etc. The most fundamental problem, though, is that it feels almost impossible to actually control all of this. I think people are under the impression that the only way to train and run these things is with oodles of money and compute, but that is changing quite fast. Like a recent research paper showed how to train an image model from scratch (not at the level of Flux, but better than Stable Diffusion 1.5/2) for less than $2,000, with 37m images (SD 1.5 cost $300k and used 4b training images). And even though SD took a lot to train, the result is a 2GB file that you can run locally offline on an iPhone. And those can already easily be modified to impersonate various people, change (inpaint) existing images, etc.
And that's all with the current hardware and current techniques. In a year or two, maybe it costs $200 to train a Flux-quality model and maybe you can run that on a new generation phone. Obviously video generation is a lot more intensive, but what is cutting edge now will be easier in a few years as well. Once the techniques are known publicly, it feels like it is pretty hard to actually regulate this stuff. Sure, we can clamp down on the big companies like OpenAI and Google and Grok that make it super easy for anyone to do online, but that doesn't really stop the fact that anyone can download an open weights model from online and use it however they want.
-
@chazmuzz Yeah, though it doesn’t even have to be blockchain. The easiest thing (which we should pressure companies for) is for the manufacturers of cameras/camcorders to digitally sign the raw files.
That way, if you kept the equivalent of a film negative, you have some pretty good proof of authenticity.
It certainly doesn’t solve all the problems, but it’d be a good first step. And I’m guessing YouTube, Facebook, etc keep the originals that were uploaded to them, even if they give out compressed versions. They could validate the original signature and sign the new compressed one with their own signature, perhaps with some description of how it was edited (like taking just a portion of an original video, or changing contrast on an image), and a copy of the original signature.
In that scenario you need to trust YouTube and Facebook, but it’s better than nothing. And then you know which service it came from, and law enforcement can ask them for the original file.
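That platform-side step could look something like the sketch below: the service records the original's hash and signature, a description of the edits, and then signs the whole record. This is only an illustration of the idea (HMAC with a made-up platform key stands in for a real signature, and the manifest fields are hypothetical, loosely in the spirit of content-provenance standards like C2PA).

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for the platform's private signing key.
PLATFORM_SECRET = b"platform-demo-key"

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def platform_manifest(original: bytes, original_sig: str,
                      compressed: bytes, edits: str) -> dict:
    """Sign the compressed copy while recording the original's provenance."""
    manifest = {
        "original_hash": sha256_hex(original),
        "original_signature": original_sig,   # e.g. the camera's signature
        "edits": edits,                       # human-readable edit description
        "compressed_hash": sha256_hex(compressed),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["platform_signature"] = hmac.new(
        PLATFORM_SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def check_manifest(manifest: dict) -> bool:
    """Verify the platform's signature over everything else in the record."""
    body = {k: v for k, v in manifest.items() if k != "platform_signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["platform_signature"])

original = b"raw uploaded video bytes"
compressed = b"re-encoded 1080p bytes"
m = platform_manifest(original, "camera-sig-hex", compressed,
                      "transcoded to 1080p, trimmed first 5s")
print(check_manifest(m))
```

The key property is that the re-encoded copy carries a verifiable pointer back to the original: you still have to trust the platform's key, but tampering with any field (the edit description, either hash) breaks the platform signature.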
-
This is a good point, though I definitely think there are opportunities to improve the situation without going too far. Camera manufacturers can digitally sign original files, so there can be the equivalent of “negatives” to check back with. As with film negatives, they can be lost/destroyed, but are helpful when trying to prove something is authentic.
And software like Lightroom could sign an image edited with it, along with some info on the original image’s signature: the hash of the original image, say, and the fact that it was signed by Sony. It could be configured to pass through certain metadata, but with privacy in mind, like saying the originally signed image was taken in Boston rather than giving someone’s exact home coordinates. And you could sign that edited image before uploading it to social media, and Facebook could sign it again as it makes its own compressed version.
It isn’t perfect since you need to trust each of the steps and you may purposefully be losing some info at each step for privacy.
But it still could help things. If image viewers/players displayed signatures, it’d be a start. Like if Ukraine always signed its videos, it’d be suspicious if Instagram said the original upload wasn’t signed by them. And if a random image in Discord was signed by Facebook along with the uploader’s handle, a journalist could message that user to ask if they have the original picture from the camera, etc.
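The privacy pass-through step above can be sketched too: the editing software keeps the original's hash (so the chain stays checkable) but coarsens sensitive metadata before it gets re-signed. All names and values here are hypothetical illustrations, not a real Lightroom feature.

```python
import hashlib

def coarsen(metadata: dict) -> dict:
    """Blur location to roughly city scale before passing it along."""
    out = dict(metadata)
    if "lat" in out and "lon" in out:
        # one decimal place is ~11 km of resolution: enough to say
        # "Boston", not enough to reveal a home address
        out["lat"] = round(out["lat"], 1)
        out["lon"] = round(out["lon"], 1)
    return out

def editor_record(original_bytes: bytes, original_signer: str,
                  metadata: dict) -> dict:
    """What the editing software would include in its own signed output."""
    return {
        "original_hash": hashlib.sha256(original_bytes).hexdigest(),
        "original_signer": original_signer,  # e.g. "Sony"
        "metadata": coarsen(metadata),
    }

rec = editor_record(b"raw image bytes", "Sony",
                    {"lat": 42.3601, "lon": -71.0589})
print(rec["metadata"])  # coordinates rounded, exact location gone
```

So each hop deliberately loses a little precision for privacy, while the original hash still lets someone with the "negative" prove where the chain started.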
-
@tttttttttttttttp12 I tend to think most of the good and bad claims are true at the same time. I think we're all in for a rough time pretty soon, no matter what we do (art, programming, even physical labor eventually). And certainly a lot of money pouring into research is because they think it'll help their bottom line in the end.
I don't think it is all necessarily bad. Like back when I was learning to draw, while it was interesting, I think part of the reason I didn't stick with it was not having a clear "voice" on what I'd want to use it for that would make all the time to get better at it worth it. When I messed around with MidJourney back when it was new, there was the side-effect of feeling more in touch with visual creativity in terms of what themes, color schemes, styles, etc. I most connected with. Not as a pure consumer, but by being involved with new images appearing. Turns out they weren't the same kinds of images as those I might like to just look at. That kind of experience (and also the lack of control in generators) made me want to put more effort toward learning to draw/paint than before using it. I think others have had that experience too, a kind of gateway, like people who learn programming to help them make stuff in a video game.
One good/bad aspect on training sources is that it seems like far fewer images are needed than first thought. There was a recent research paper about how they trained a good image model (not Flux level, but better than Stable Diffusion 1.5/2) from scratch for less than $2,000, using 37m images, about a third of which were synthetic. That's starting to get to the number where you could probably source all public domain images to train it. On the bad end, it also means the tech is pretty much uncontrollable (at least for regular image generators), since almost anyone could spend $2,000, and it'll just keep getting cheaper with new hardware and techniques.
In any case, try not to let what other people are doing take the joy of a creation away from you. Even though anyone can buy cheap items of all sorts from any big box store, there's still a market for hand-made things. And there's still many reasons to create things apart from selling them. I know that only helps so much if you want a standard career in it without having to sell via being a personality, but I think we're all in that boat. I predict everything is going to go a bit topsy turvy soon.