We knew back in 1988 that this would eventually happen. And it will get worse. Species are going extinct at a much faster pace than any of the earlier great extinction events.
Maybe we should finally lift the taboo on atomic power and start solving this problem.
While your comment is what we all (mostly) think, a grease fire is quite a good joke - I appreciate you can make it on here without massive dickbags downvoting you or you being shadow banned.
Detecting real video as fake seems problematic in that it might lead to apathy – folks just don’t believe any video anymore. Similar to Trump’s “everything is fake news” approach.
Thus far these detectors kind of suck, both for deepfakes and AI generated text. They’re biased against non-native speakers and using them in a scholarly setting can result in punishing students that aren’t cheating.
The genie was let out of the bottle much too early.
I used to work in the field of image forensics a few years ago, right as GAN technology was entering the scene. Even when it was only making 200x200-pixel faces, everyone in the industry was starting to panic. Everything we had at the time was based on detecting inconsistencies in the pixel content, repeating structures that indicated copy/paste attacks, or looking for metadata inconsistencies.
For pixel inconsistencies, you can look at how the JPEG image is encoded and search for blocks that aren’t encoded consistently. This paper covers DCT and some others: scholar.google.com/scholar?q=dct+image+forensics&… That’s just one example, but it’s ultimately looking for things like someone photoshopping a region out or patching something in.
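To give a feel for the DCT angle, here’s a toy numpy sketch (not the method from any particular paper): JPEG quantizes each 8x8 block’s DCT coefficients onto a grid, so a block that was pasted in after compression tends to sit off that grid. The quantization step of 16 is an assumption for the demo.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2.0 / n)

D = dct_matrix()

def quantization_residual(block, step=16.0):
    # Mean distance of the block's DCT coefficients from the nearest
    # multiple of the assumed quantization step; ~0 means "consistent
    # with JPEG compression", large means "doesn't fit the grid".
    c = D @ block @ D.T
    return np.abs(c - step * np.round(c / step)).mean()

# Demo: a block that went through simulated JPEG quantization sits on
# the grid; a raw pasted-in block does not.
rng = np.random.default_rng(0)
raw = rng.uniform(0, 255, (8, 8))
jpeg_block = D.T @ (16.0 * np.round(D @ raw @ D.T / 16.0)) @ D
patched_block = rng.uniform(0, 255, (8, 8))

clean_score = quantization_residual(jpeg_block)
patch_score = quantization_residual(patched_block)
```

Real tools sweep this over every block and flag regions whose residuals disagree with the rest of the image.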
Similarly, copy-move detection looks for “edges” and “intersections” in images and creates constellations of points, which you can compare using scale-invariant transforms to look for duplicates. This article covers an example where North Korea tried to make their landing force look more impressive: theguardian.com/…/north-korea-photoshop-hovercraf…
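The real methods use scale-invariant keypoints so they survive resizing and rotation, but the core idea fits in a few lines. This is a deliberately simplified exact-match sketch: fingerprint every 8x8 block and flag identical blocks that are suspiciously far apart.

```python
import numpy as np

def find_copy_move(img, bs=8, min_dist=16):
    # Report pairs of identical blocks that are far apart -- the
    # signature of a copy/paste within the same image.
    h, w = img.shape
    seen = {}
    matches = []
    for y in range(h - bs + 1):
        for x in range(w - bs + 1):
            # coarse quantization makes the fingerprint mildly noise-tolerant
            key = (img[y:y+bs, x:x+bs] // 8).tobytes()
            if key in seen:
                y0, x0 = seen[key]
                if abs(y - y0) + abs(x - x0) >= min_dist:
                    matches.append(((y0, x0), (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Demo: clone one region of a random image onto another, then detect it.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
img[40:56, 40:56] = img[4:20, 4:20]   # the duplicated "hovercraft"
matches = find_copy_move(img)
```

Swap the exact byte-hash for SIFT-style descriptors and you get robustness to the scaling and rotation a forger would actually apply.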
The problem is that when the entire image is forged, there is no baseline to detect against. The whole thing is uniformly fake. So we’re back to the old “I can tell by looking at it” which is extremely imprecise and labor intensive. In fact, if you look at how GANs work, it’s trivial to embed any detector algorithm into the training process and make something that also defeats that detector.
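That last point is easy to demonstrate with a toy: if the forger can query the detector (or its gradients), they just add its score to their loss and descend. The linear logistic “detector” here is a stand-in I made up so the gradient is readable; real detectors are CNNs, but the math is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a deployed detector: logistic score, ~1 means "flagged fake".
w = rng.normal(size=16)

def detector(x):
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# A generated sample the detector currently catches.
x = w.copy()
before = detector(x)

# "Embed the detector in training": add -log(1 - D(x)) to the
# generator's loss and descend its gradient, which for this detector
# is D(x) * w. The sample walks off the decision surface.
for _ in range(100):
    x -= 0.1 * detector(x) * w
after = detector(x)
```

After a few dozen steps the same sample scores as clean, which is exactly why publishing a detector tends to have a short shelf life.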
Those super embarrassing things you did at that party were obviously a deepfake by an angry neighbor or jilted lover. Dumb things politicians say, deepfake smear campaigns, all of them. Those politicians all have an IQ of 180, or how else would they have gotten into Congress and the White House, etc.
Apparently if people are gullible enough you can even use video calls to commit crimes, no AI needed!
It was through a video call – in which only a photograph of a man who looked just like Minister Jackson, wearing a cap and glasses, was visible – that the impostor began giving orders to the two workers, who work at night. First, they removed 50 laptops from the different floors of the ministry. newsrnd.com/…/2023-07-22--negro-chico---the-priso…
This seems like a very bad idea. I’m concerned that having a test might cause people to suspend their critical-thinking responsibility, and it may have other issues, like being inaccurate or causing deepfake tech to simply leapfrog over it – and then benefit from fake authenticity measurements.
“In essence, the accuracy is entirely dependent on the difficulty of the test.”
AKA doesn’t work for shit. Don’t bother with it. And even if it did work, it’ll be an arms race and right now the deepfakes have the nuke and Intel has a musket.
Climate change has been particularly unsettling this summer because it has felt like a constant reminder that it can’t be escaped. Even if you aren’t in Florida or Arizona or Greenland, even if you feel like you’re insulated up in Vermont or Canada – bam, 11 inches of rain in 24 hours.