The White House wants to ‘cryptographically verify’ videos of Joe Biden so viewers don’t mistake them for AI deepfakes::Biden’s AI advisor Ben Buchanan said a method of clearly verifying White House releases is “in the works.”
the technology to do this has existed for decades and it’s crazy to me that people aren’t doing it all the time yet
Honestly I’d say that’s on the way for any video or photographic evidence.
You’d need a device private key to sign with, probably internet connectivity for a timestamp from a third party.
Could have lidar included as well, so you can verify the camera isn’t just pointed at a screen playing something fake.
Is there a cryptographically secure version of GPS too? Not sure if that’s even possible, and it’s the weekend so I’m done thinking.
It’s way more feasible to simply require social media sites to do the verification and display something like a blue check on verified videos.
This is actually a really good idea. Sure there will still be deepfakes out there, but at least a deepfake that claims to be from a trusted source can be removed relatively easily.
Theoretically a social media site could boost content that was verified over content that isn’t, but that would require social media sites to not be bad actors, which I don’t have a lot of hope for.
I agree that it’s a good idea. But the people most swayed by deepfakes of Biden are definitely the least concerned with whether their bogeyman, the “deep state,” has verified them.
So basically Biden ads on the blockchain.
Cryptography ⊋ Blockchain
A blockchain is cryptography, but not all cryptography is a blockchain.
…no
Think of generating an md5sum to verify that the file you downloaded online is what it should be and hasn’t been corrupted during the download process or replaced in a Man in the Middle attack.
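For anyone who hasn’t done it, it’s roughly this (just a sketch; the filename and the published hash here are made up, and MD5 only catches accidental corruption, you’d want SHA-256 or better for tamper resistance):

```python
import hashlib

# Hash the downloaded file and compare against the value the publisher posted.
PUBLISHED_HASH = "9e107d9d372bb6826bd81d3542a419d6"  # hypothetical value from the download page

def file_md5(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if file_md5("downloaded_file.iso") == PUBLISHED_HASH:
    print("hash matches the published value")
else:
    print("hash mismatch: corrupted or swapped out")
```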
generating an md5sum to verify that the file you downloaded online
I’m more interested in how exactly you’d implement something like this.
It’s not like videos viewed on TikTok display a hash for the file you’re viewing, and users wouldn’t look at that data anyway, especially those that would be swayed by a deepfake…
Likely it would be a service provided by the White House press corps, and media outlets could then rehost the videos with the White House watermark.
Digital signature. A watermark may be useful so that an unauthorized user can’t easily hide the source without noticeably defacing the photo, but it doesn’t prevent anyone from modifying it.
A digital signature is a somewhat similar idea, except that signature verification fails if there are any changes. This is tough to do with a photograph, where some applications may blindly re-encode it or change the resolution, so those workflows would need to be fixed.
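For the curious, here’s roughly what that looks like with an Ed25519 signature (just a sketch using Python’s `cryptography` package; the “photo” bytes are made up, the point is that changing even one byte breaks verification):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher signs the exact bytes of the published file.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

original = b"...raw bytes of the published photo..."
signature = private_key.sign(original)

# Anyone with the public key can check the signature.
public_key.verify(signature, original)  # passes silently

# Re-encoding, resizing, or editing changes the bytes, so verification fails.
tampered = original.replace(b"photo", b"fake!")
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("Signature check failed: file was modified")
```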
You could argue this is a good use case for blockchain, certainly much better than those stupid monkey images. When Jon Stewart parodies a politician, there should be a verifiable chain of evidence from the White House release to the news bureau to his studio, before they alter the lighting to highlight orange skin tone for yucks.
The question is how does the USER verify the authenticity. They just see a video, not a signature.
They shouldn’t have to actively verify that, but yeah, I don’t know if there is a relevant file format though
I once worked with signed XML, where the signature field is really no different from any other field, but with binary data. That data used a private key to sign a checksum of the file. For tools that understand the format, you just verify the trust chain against cert authority public keys using your local keystore. It just worked, with no action required of the user and no internet required.
- if you edit the signature, the trust chain will fail validation
- if you edit other data, the signed checksum would not match and validation would fail
- if you edit the checksum, the key would no longer match and validation would fail
It’s actually been a lot of years, so I hope I’m remembering it accurately
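The same idea in miniature, as a from-memory sketch (not real XML signature tooling, and no cert chain here, just the checksum-plus-signature part, which is why each of those edits breaks validation):

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signer_key = Ed25519PrivateKey.generate()

document = b"<data>official press release</data>"
checksum = hashlib.sha256(document).digest()   # checksum of the file
signature = signer_key.sign(checksum)          # private key signs the checksum

def validate(doc, chk, sig):
    if hashlib.sha256(doc).digest() != chk:        # edited data -> checksum mismatch
        return False
    try:
        signer_key.public_key().verify(sig, chk)   # edited checksum/signature -> key mismatch
    except InvalidSignature:
        return False
    return True

print(validate(document, checksum, signature))                 # True
print(validate(b"<data>edited</data>", checksum, signature))   # False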
Fucking finally. We’ve had this answer to digital fraud for ages.
Sounds like a very Biden thing (or for anyone well into their Golden Years) to say, “Use cryptography!” but it’s not without merit. How do we verify file integrity? How do we digitally sign documents?
The problem we currently have is that anything that looks real tends to be accepted as real (or authentic). We can’t rely on humans to verify authenticity of audio or video anymore. So for anything that really matters we need to digitally sign it so it can be verified by a certificate authority or hashed to verify integrity.
This doesn’t magically fix deepfakes. Not everyone will verify a video before distribution, and you can’t verify a video that’s been edited for time, reformatted, or broadcast on TV. It’s a start.
We’ve had this discussion a lot in the Bitcoin space. People keep arguing it has to change so that “grandma can understand it,” but I think that’s unrealistic. Every technology has some inherent complexities that cannot be removed and that people have to learn if they want to use it. And people will use it if the motivation is there. Wifi has some inherent complexities people have become comfortable with. People know how to look through lists of networks, find the right one, enter the passkey or go through the sign-on page. Some non-technical people know enough about how Wifi should behave to know the internet connection might be out or the router might need a reboot. None of this knowledge was commonplace 20 years ago. It is now.
The knowledge required to leverage the benefits of cryptographic signatures isn’t beyond the reach of most people. The general rules are pretty simple. The industry just has to decide to make the necessary investments to motivate people.
The President’s job isn’t really to be an expert on everything, the job is more about being able to hire people who are experts.
If this was coupled with a regulation requiring social media companies to do the verification and indicate that the content is verified then most people wouldn’t need to do the work to verify content (because we know they won’t).
It obviously wouldn’t solve every problem with deepfakes, but at least it couldn’t be content claiming to be from CNN or whoever. And yes, someone editing content from trusted sources would make that content no longer trusted, but that’s actually a good thing. You can edit videos to make someone look bad, you can slow them down to make a person look drunk, etc. This kind of content should not be considered trusted either.
Someone doing a reaction video going over news content or whatever could have their stuff be considered trusted, but it would be indicated as content from the person that produced the reaction video, not as content coming from the original news source. So if you see a “news” video that has its verified source as “xXX_FlatEarthIsReal420_69_XXx” rather than CNN, AP News, NY Times, etc., you kinda know what’s up.
I don’t blame them for wanting to, but this won’t work. Anyone who would be swayed by such a deepfake won’t believe the verification if it is offered.
Agreed and I still think there is value in doing it.
I honestly do not see the value here. Barring maybe a small minority, anyone who would believe a deepfake about Biden would probably also not believe the verification and anyone who wouldn’t would probably believe the administration when they said it was fake.
The value of the technology in general? Sure. I can see it having practical applications. Just not in this case.
Sure, the grandparents that get all their news via Facebook might see a fake Biden video and eat it up like all the other hearsay they internalize.
But, if they’re like my parents and have the local network news on half the damn time, at least the typical mainstream network news won’t be showing the forged videos. Maybe they’ll even report a fact check on it?!?
And yeah, many of them will just take it as evidence that the mainstream media is part of the conspiracy. That’s a given.
It helps journalists, etc., when files have digital signatures verifying who is attesting to them. If the WH has their own published public key for signing published media and more, then it’s easy to verify whether or not you have originals.
Problem is that broadly speaking, you would only sign the stuff you want to sign.
Imagine you had a president that slapped a toddler, and there was a phone video of it from the parents. The White House isn’t about to sign that video, because why would they want to? Should the journalists discard it because it doesn’t carry the official White House blessing?
It would limit the ability for someone to deepfake an official edit of a press briefing, but again, what if he says something damning and the ‘official’ footage edits it out? Would the press discard their own recordings because they can’t get them signed and they’re therefore not credible?
That’s the fundamental challenge in this sort of proposal, it only allows people to endorse what they would have wanted to endorse in the first place, and offers no mechanism to prove/disprove third party sources that are the only ones likely to carry negative impressions.
I don’t think that’s what this is for. I think this is for reasonable people, as well as for other governments.
Besides, passwords can be phished or socially engineered, and some people use “abc123.” Does that mean we should get rid of password auth?
I’m sure they do. AI regulation probably would have helped with that. I feel like congress was busy with shit that doesn’t affect anything.
I see no difference between creating a fake video/image with AI and Adobe’s packages. So to me this isn’t an AI problem, it’s a problem that should have been resolved a couple of decades ago.
I salute whoever has the challenge of explaining basic cryptography principles to Congress.
Might just as well show a dog a card trick.
That’s why I feel like this idea is useless, even for the general population. Even with some sort of visual/audio based hashing, so that the hash is independent of minor changes like video resolution which don’t change the content, and with major video sites implementing a way for the site to verify that the hash matches one from a trustworthy keyserver equivalent…
The end result for anyone not downloading the videos and verifying it themselves is the equivalent of those old “✅ safe ecommerce site, we swear” images. Any dedicated misinformation campaign will just fake it, and that will be enough for the people who would have believed the fake to begin with.
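For what it’s worth, the visual-hash part is doable today; it’s the trust part that’s hard. Something like a perceptual hash survives re-encodes and resizes, unlike a byte-level hash. Rough sketch using the imagehash package (my assumption about how you’d implement it; the file names and threshold are made up, and real content-ID systems are fancier):

```python
from PIL import Image
import imagehash

# Perceptual hashes of the "same" frame before and after a re-upload/re-encode.
original = imagehash.phash(Image.open("frame_from_official_video.png"))
reencoded = imagehash.phash(Image.open("same_frame_after_reupload.jpg"))

# Subtraction gives the Hamming distance; small distance = visually the same content.
print(original - reencoded)        # typically small for the same frame
print(original - reencoded <= 8)   # the "close enough" threshold is a judgement call
```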
Should probably start out with the colour mixing one. That was very helpful for me to figure out public key cryptography. The difficulty comes in when they feel like you are treating them like toddlers, so they start behaving more like toddlers. (Which they are 99% of the time.)
Why not just official channels of information, e.g. a White House Mastodon instance with politicians’ accounts, government-hosted and auto-mirrored by third parties?
So should Taylor Swift
This doesn’t solve anything. The White House will only authenticate videos which make the President look good. Curated and carefully edited PR. Maybe the occasional press conference. The vast majority of content will not be authenticated. If anything this makes the problem worse, as it will give the President remit to claim videos which make them look bad are not authenticated and should therefore be distrusted.
It needs to be more general. A video should have multiple signatures. Each signature relies on the signer’s reputation, which works both ways. It won’t help those who don’t care about their reputation, but will for those that do.
A photographer who passes off a fake photo as real will have their reputation hit, if they are caught out. The paper that published it will also take a hit. It’s therefore in the paper’s interest to figure out how trustworthy the supplier is.
I believe Canon recently announced a camera that cryptographically signs photographs at the point of creation. At that point, the photographer can prove the camera, the editor can prove the photographer, the paper can prove the editor, and the reader can prove the newspaper. If done right, the final viewer can also prove the whole chain, semi-independently. It won’t be perfect (far from it) but it might be the best we’ll get. Each party wants to protect their reputation, and so has a vested interest in catching fraud.
For this to work, we need a reliable way to sign images multiple times, as well as (optionally) encode an edit history into them. We also need a quick way to match a cryptographic signature to a known public key.
An option to upload a timestamped key to a trusted 3rd party would also be of significant benefit. Ironically, blockchain might actually be a good use for this, in case a trusted 3rd party can’t be established.
Great points and I agree. I also think the signature needs to be built into the stream in a continuous fashion so that snippets can still be authenticated.
Agreed. Embed a per-frame signature into every key frame when encoding. Also include the video file timestamp. This will mean any clip longer than around 1 second will include at least 1 signed frame.
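Something along these lines, as a sketch (ignoring the actual container format; assume you can attach a small metadata blob to each key frame, and the frame bytes here are placeholders):

```python
import hashlib
import struct
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()

def sign_key_frame(frame_bytes, timestamp_ms):
    # Bind the frame content to its position in the stream, so a signed
    # frame can't be reused at a different point in a doctored video.
    digest = hashlib.sha256(struct.pack(">Q", timestamp_ms) + frame_bytes).digest()
    return camera_key.sign(digest)

# At encode time: for every key frame, store (timestamp, signature) as metadata.
# Any clip longer than ~1 second then contains at least one verifiable frame.
key_frames = [(0, b"key frame 0 bytes"), (1000, b"key frame 1 bytes")]
signed = [(ts, sign_key_frame(data, ts)) for ts, data in key_frames]
```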
I’ve thought about this too, but I’m not sure it would work. First, you could hack the firmware of a cryptographically signing camera. I already read about a camera like this that was hacked and the private key leaked. You could have an individual key for each camera and then revoke it, maybe.
But you could also photograph a monitor or something like that, or use a specially altered camera lens.
Ultimately you’d probably need something like quantum entangled photon encoding to prove that the photons captured by the sensor were real photons and not fake photons. Like capturing a light field or capturing a spectrum of photons. Not sure if that is even remotely possible but it sounds cool haha.
I don’t think that’s practical or particularly desirable.
Today, when you buy something, e.g. a phone, the brand guarantees the quality of the product, and the seller guarantees the logistics chain (that it’s unused, not stolen, not faked, not damaged in transport, …). The typical buyer does not care about the parts used, the assembly factory, etc.
When a news source publishes media, they vouch for it. That’s what they are paid for (as it were). If the final viewer is expected to check the chain, they are asked to do the job of skilled professionals for free. Do-your-own-research rarely works out, even for well-educated people. Besides, in important cases, the whole chain will not be public to protect sources.
It wouldn’t be intended for day-to-day use. It’s intended as an audit trail/chain of custody. Think of it more akin to a git history. As a user, you generally don’t care; however, it can be excellent for retrospective analysis when someone/something does screw up.
You would obviously be able to strip it out, but having it as a default would be helpful with openness.
I don’t understand your concern. Either it’ll be signed White House footage or it won’t. They have to sign all their footage otherwise there’s no point to this. If it looks bad, don’t release it.
The point is that if someone catches the President shagging kids, of course that footage won’t be authenticated by the WH. We need a tool so that a genuine piece of footage of the Pres shagging kids would be authenticated, but a deepfake of the same would not. The WH is not a good arbiter since they are not independent.
Politicians and anyone at deepfake risk wear a digital pendant at all times. The pendant displays continually rotating time-based codes. People record themselves using video hardware which cryptographically signs its output.
Only a law/Big 4 firm can extract video from the official camera (which has a twin for hot swapping).
But we are talking about official WH videos. Start signing those.
If it’s not from the WH, it isn’t signed. Or perhaps it’s signed by whatever media company is behind its production or maybe they’ve verified the video and its source enough to sign it. So maybe, let’s say the Washington Post can publish some compromising video of the President but it still has certain accountability as opposed to some completely random Internet video.
Then this exercise is a waste of time. All the hard hitting journalism which presses the President and elicits a negative response will be unsigned, and will be distributed across social media as it is today: without authentication. All the videos for which the White House is concerned about authenticity will continue to circulate without any cause for contention.
Anyone can digitally sign anything (maybe not easily or for free). The White House can verify or not verify whatever they choose, but if you, as a journalist let’s say, want to give credence to video you distribute, you’ll want to digitally sign it. If a video switches hands several times without being signed, it might as well have been cooked up by the last person that touched it.
That’s fine?
Signatures aren’t meant to prove authenticity. They’re proving the source which you can use to weigh the authenticity.
I think the confusion comes from the fact that cryptographic signatures are mostly used in situations where proving the source is equivalent to proving authenticity. Proving a text message is from me proves the authenticity as there’s no such thing as doctoring my own text message. There’s more nuance when you’re using signatures to prove a source which may or may not be providing trustworthy data. But there is value in at least knowing who provided the data.
It would become quite easy to dismiss anything for not being cryptographically verified simply by not cryptographically verifying.
I can see the benefit of having such verification but I also see how prone it might be to suppressing unpopular/unsanctioned journalism.
Unless the proof is very clear and easy for the public to understand the new method of denial just becomes the old method of denial.
Once people get used to cryptographically signed videos, why only trust one source? If a news outlet is found signing a fake video, they will be in trouble. Loss of said trust, if nothing else.
We should get to the point we don’t trust unsigned videos.
If a news outlet is found signing a fake video, they will be in trouble.
I see you’ve never heard of Fox News before.
https://en.wikipedia.org/wiki/Fox_News_controversies#Video_footage_manipulation
Yeah, good luck getting the general public to understand what “cryptographically verified” videos mean.
It could work the same way the padlock icon worked for SSL sites in browsers back in the day. The video player checks the signature and displays the trusted icon.
It needs to focus on showing who published it, not the icon
Democrats will want cryptographically verified videos; Republicans will be happy with a stamp that has Trump’s face on it.
I mean, how is anyone going to cryptographically verify a video? You either have an icon in the video itself or displayed near it by the site, which means nothing, since fakers will just copy that into theirs. Alternatively you have to sign or make file hashes for each permutation of the video file sent out. At that point, how are normal people actually going to verify? At best they’re trusting the video player of whatever site they’re on to be truthful when it says that it’s verified.
Saying they want to do this is one thing, but as far as I’m aware, we don’t have a solution that accounts for the rampant re-use of presidential videos in news and secondary reporting either.
I have a terrible feeling that this would just be wasted effort beyond basic signing of the video file uploaded on the official government website, which really doesn’t solve the problem for anyone who can’t or won’t verify the hash on their end.
Maybe some sort of visual and audio based hash, like MusicBrainz IDs for songs, which are independent of the file itself and based instead on the sound of it. Then the government runs a server kind of like a PGP key server. Then websites could integrate functionality to verify it, but at the end of the day it still works out to an “I swear we’re legit, guys” stamp for anyone not technical enough to verify independently themselves.
I guess your post just seemed silly when the end result of this for anyone is effectively the equivalent of your “signed by trump” image, unless the public magically gets serious about downloading and verifying everything themselves independently.
Fuck trump, but there are much better ways to shit on king cheeto than pretending the average populace is anything but average based purely on political alignment.
You have to realize that to the average user, any site serving videos seems as trustworthy as youtube. Average internet literacy is absolutely fucking abysmal.
In the end people will realise they can not trust any media served to them. But it’s just going to take time for people to realise… And while they are still blindly consuming it, they will be taken advantage of.
If it goes this road… Social media could be completely undermined. It could become the downfall of these platforms and do everyone a favour by giving them their lives back after endless doom scrolling for years.
“Not everybody will use it and it’s not 100% perfect so let’s not try”
Just make it a law that if as a social media company you allow unverified videos to be posted, you don’t get safe harbour protections from libel suits for that. It would clear right up. As long as the source of trust is independent of the government or even big business, it would work and be trustworthy.
As long as the source of trust is independent of the government or even big business, it would work and be trustworthy
That sounds like wishful thinking
Back in the day, many rulers allowed only licensed individuals to operate printing presses. It was sometimes even required that an official should read and sign off on any text before it was allowed to be printed.
Freedom of the press originally meant exactly that this is not done.
You understand that there is a difference between being not permitted to produce/distribute material and being accountable for libel, yes?
“Freedom of the press” doesn’t mean they should be able to print damaging falsehood without repercussion.
What makes the original comment legally problematic (IMHO), is that it is expected and intended to have a chilling effect pre-publication. Effectively, it would end internet anonymity.
It’s not necessarily unconstitutional. I would have made the argument if I thought so. The point is rather that history teaches us that close control of publications is a terrible mistake.
The original comment wants to make sure that there is always someone who can be sued/punished, with obvious consequences for regime critics, whistleblowers, and the like.
We need to take history into account but I think we’d be foolish to not acknowledge the world has indeed changed.
Freedom of the press never meant that any old person could just spawn a million press shops and peddle whatever they wanted. At best the rich could, and nobody was anonymous for long at that kind of scale.
Personally I’m for publishing via proxy (i.e. an anonymous tip that a known publisher/person is responsible for) … I’m not crazy about “anybody can write anything on any political topic and nobody can hold them accountable offline.”
So your suggestion is that libel, defamation, harassment, et al are just automatically dismissed when using online anonymous platforms? We can’t hold the platform responsible, and we can’t identify the actual offender, so whoops, no culpability?
I strongly disagree.
That’s not what the commenter said and I think you are knowingly misrepresenting it.
I am not. And if that’s not what’s implied by their comments then I legitimately have no idea what they’re suggesting and would appreciate an explanation.
Jesus, how did I get so old only to just now understand that press is not journalism, but literally the printing press in ‘Freedom of the press’.
The general public doesn’t have to understand anything about how it works as long as they get a clear “verified by …” statement in the UI.
The problem is that even if you reveal the video as fake, the feeling it reinforces in the viewer stays with them.
“Sure, that was fake, but the fact that it seems believable tells you everything you need to know.”
“Herd immunity” comes into play here. If those people keep getting dismissed by most other people because the video isn’t signed they’ll give up and follow the crowd. Culture is incredibly powerful.
Maybe the White House should create a hash of the video and add it to a public blockchain. Anyone can then verify if the video is authentic.
- Anybody can also verify it if they just host the hash on their own website, or host the video itself.
- Getting the general populace to understand blockchain implementations or how to interface with them is an unrealistic task
- What does a distributed zero-trust model add to something that is inherently centralized, requiring trust in only 1 party?
Blockchain is the opposite of what you want for this problem, I’m not sure why people bring this up now. People need to take an introductory cryptography course before saying to use blockchain everywhere.
Putting it on the blockchain ensures you can always go back and say “see, at this date/time, this key verified this file/hash”… If you know the key of the uploader (the white house), you can verify it was signed by that key. Guatemala used a similar scheme to verify votes in elections using Bitcoin. Could the precinct lie and put in the wrong vote count? Of course! But what it prevented was somebody saying “well actually the precinct reported a different number” since anybody could verify that on chain they didn’t. It also prevented the precinct themselves from changing the number in the future if they were put under some kind of pressure.
All of this could be done without blockchain. Once they sign a signature with their private key they can’t unsign it later. Once you attest something you cannot un-attest it.
Just make the public key known and sign things. Please stop shoehorning blockchain where it doesn’t belong, especially when you aren’t even giving any examples of things blockchain is doing for you, at 100000x the cost and complexity, that normal crypto from the 80s/90s can’t do better.
Trusted timestamping protocols and transparency logs exist and do that more efficiently.
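For anyone wondering what a transparency log buys you over “just a signature,” the core idea is an append-only hash chain. Here’s a bare-bones sketch (names made up; real systems like RFC 3161 timestamping or Certificate Transparency are far more robust and involve independent witnesses):

```python
import hashlib
import json
import time

log = []  # a minimal append-only log: each entry commits to the previous one

def append_entry(video_hash):
    prev = log[-1]["entry_hash"] if log else "0" * 64
    body = {"video_hash": video_hash, "time": int(time.time()), "prev": prev}
    entry_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "entry_hash": entry_hash}
    log.append(entry)
    return entry

append_entry(hashlib.sha256(b"official briefing video bytes").hexdigest())
# Anyone mirroring the log can detect if an earlier entry is altered or removed,
# because every later entry_hash would stop matching.
```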
Tinfoil hat time. It’s probably because they need to start creating AI videos to show he’s ‘competent and coherent,’ and they’ll say their tests prove that it’s a real video, not a fake. And since the government said it’s true, morons will believe it.
I don’t have nearly enough tinfoil for this
If Trump is any indication, no politician will ever need to be ‘competent and coherent’ ever again, constituents would vote in a literal corpse if it had a sign propped on it saying “gays bad”
I’ve always thought that bank statements should require cryptographic signatures for ledger balances. Same with individual financial transactions, especially customer payments.
Without this we’re pretty much at the mercy of trust with banks and payment card providers.
I imagine there’s a lot of integrity requirements for financial transactions on the back end, but the consumer has no positive proof except easily forged statements.