
nyan ,

You may be able to prove that a photo with certain metadata was taken by a camera (my understanding is that that’s the method), but you can’t prove that a photo without it wasn’t, because older cameras won’t have the necessary support, and wiping metadata is trivial anyway. So is it better to have more false negatives than false positives? Maybe. My suspicion is that it won’t make much difference to most people.

T156 , (edited )

A fair few sites will also wipe image/EXIF metadata for safety reasons, since photo metadata can include things like the location where the photo was taken.

restingboredface ,

It’s of course troubling that AI images will pass through this service unidentified (I’m also not at all confident that Google can do this well or consistently).

However, I’m also worried about the opposite side of this problem: real images being mislabeled as AI. I can see a lot of bad actors using that to discredit legitimate news sources or stories that don’t fit their narrative.

Dagamant ,

I watched a video on methods for detecting AI generation in images. One of them was comparing the noise across color channels: cameras produce slightly different noise in each channel, while AI generators don’t. There are also tells like JPEG compression artifacts showing up in images saved in other formats.

So there are technical solutions to it but I wouldn’t know how to automate them.
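A minimal sketch of the channel-noise idea (not the exact method from the video; it assumes a simple median-filter residual as the noise estimate, and `photo.jpg` is a placeholder):

```python
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

def channel_noise_profile(path: str) -> list[float]:
    """Estimate per-channel noise as the std-dev of a high-pass residual.

    Camera sensors tend to show slightly different noise levels in
    R, G and B; many generated images are suspiciously uniform.
    This is a heuristic, not a reliable detector.
    """
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    profile = []
    for c in range(3):
        channel = img[:, :, c]
        residual = channel - median_filter(channel, size=3)  # high-pass
        profile.append(float(residual.std()))
    return profile

# Compare the spread of the three channel estimates.
noise = channel_noise_profile("photo.jpg")  # hypothetical file
print(noise, "spread:", max(noise) - min(noise))
```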

AbouBenAdhem ,

Those would be easy things to add, if you were trying to pass it off as real.

apfelwoiSchoppen ,

Google is planning to roll out a technology that will identify whether a photo was taken with a camera, edited by software like Photoshop, or produced by generative AI models.

So they are going to use AI to detect AI. That should not present any problems.

hemko ,

They’re going to use AI to train AI*

So nothing new here

apfelwoiSchoppen ,

Use AI to train AI to detect AI, got it.

tal , (edited )

looks dubious

The problem here is that if this is unreliable – and I’m skeptical that Google can produce a system that works across the board – then you have a synthesized image that now has Google attesting that it’s non-synthetic.

Maybe they can make it clear that this is a best-effort system, and that they will only flag some of them.

There are a limited number of ways that I’m aware of to detect whether an image is edited.

  • If the image has been previously compressed via lossy compression, there are ways to modify the image to make the difference in artifacts at different points of the image more visible, or – I’m sure – to look for such artifacts statistically (error-level analysis; a sketch follows this list).
  • If an image has been previously indexed by something like Google Images, and Google’s index is sufficient to permit fuzzy search for portions of the image, then they can identify an edited image by finding the original (a second sketch below illustrates the idea with perceptual hashes).
  • It’s possible to try to identify light sources based on shading and specular highlights in an image, and to look for points of the image that don’t match. There are complexities to this; for example, a surface might simply be shaded in such a way that it looks like light is shining on it, as with a realistic poster on a wall. For generation rather than photomanipulation, better generative AI systems will probably also tend to make this go away as they improve; it’s a flaw in the image.

But none of these is a surefire mechanism.
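For the first bullet, here is a minimal error-level-analysis sketch with Pillow (the quality setting and file names are placeholders; real forensic tooling is far more careful):

```python
import io

from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-compress the image and diff against the original; regions
    edited after the last save often recompress differently."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf).convert("RGB")
    diff = ImageChops.difference(original, recompressed)
    # Differences are faint; rescale so they're visible to the eye.
    max_diff = max(hi for _lo, hi in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * 255 // max_diff))

error_level_analysis("suspect.jpg").save("ela.png")  # inspect by eye
```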
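And for the second bullet, the fuzzy-lookup idea can be approximated with perceptual hashes (this uses the third-party ImageHash library; Google’s actual index is surely far more robust, and the file names are hypothetical):

```python
from PIL import Image
import imagehash  # pip install ImageHash

# Perceptual hashes survive re-encoding and mild edits, so an index of
# them supports "find the original" lookups for edited copies.
original = imagehash.phash(Image.open("indexed_original.jpg"))
candidate = imagehash.phash(Image.open("possibly_edited.jpg"))

# A small Hamming distance suggests the same underlying photo.
print("hamming distance:", original - candidate)
```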

For AI-generated images, my guess is that there are some other routes.

  • Some images are going to have metadata attached. That’s trivial to strip (see the sketch below), so not very good if someone is actually trying to fool people.
  • Maybe some generative AIs will try doing digital watermarks. I’m not very bullish on this approach. It’s a little harder to remove, but invariably, any kind of lossy compression is at odds with watermarks that aren’t very visible. As lossy compression gets better, it either automatically tends to strip watermarks – because lossy compression tries to remove data that doesn’t noticeably alter an image, and watermarks rely on hiding data there – or watermarks have to visibly alter the image. And that’s before people actively developing tools to strip them. And you’re never gonna get all the generative AIs out there adding digital watermarks.
  • I don’t know what the right terminology is, but my guess is that latent diffusion models try to approach a minimum error for some model during the iteration process. If you have a copy of the model used to generate the image, you can probably measure the error from what the model would predict – basically, how much one iteration would change an image or part of it. I’d guess that that only works well if you have a copy of the model in question or a model similar to it.

I don’t think that any of those are likely surefire mechanisms either.
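As an illustration of how trivial stripping is: re-encoding the pixels through a fresh image object drops EXIF and any other container-level provenance tags. A sketch with Pillow (file names are hypothetical):

```python
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    # Copy only the pixel data into a brand-new image object; EXIF,
    # XMP and other container-level tags simply don't come along.
    img = Image.open(src)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst)

strip_metadata("tagged.jpg", "anonymous.jpg")  # hypothetical paths
```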

SchmidtGenetics ,

I guess this would be a good reason to include some EXIF data when images are hosted on websites; from my limited understanding, that’s one of the only ways to tell that an image is genuine.

CatsGoMOW ,

EXIF data can be faked.
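For illustration: EXIF lives in the file container and nothing binds it to the pixels, so fabricating “camera” tags takes a few lines of Pillow (file names here are hypothetical):

```python
from PIL import Image

img = Image.open("generated.png").convert("RGB")  # hypothetical file
exif = Image.Exif()
exif[271] = "Canon"                  # tag 271: Make
exif[272] = "Canon EOS 5D Mark IV"   # tag 272: Model
exif[306] = "2019:06:15 12:00:00"    # tag 306: DateTime
img.save("looks_like_a_photo.jpg", exif=exif.tobytes())
```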

SchmidtGenetics , (edited )

I guess, but the original image would be somewhere for Google to scrape, to compare against and find an earlier version. That’s why you don’t just look at the single image; you scrape multiple sites looking for others as well.

There are obviously very specific use cases that can take advantage of brand-new images created on a computer, but there are still ways of detecting that with other methods, as explained by the user I responded to.

conciselyverbose ,

No, the default should be removing everything but maybe the date because of privacy implications.

SchmidtGenetics ,

include some EXIF data

That’s what I said.

Date, device, edited: that can all be included; location doesn’t need to be.

conciselyverbose ,

The device is no more anyone else’s business than anything else.

It should absolutely not be shared by default.

SchmidtGenetics ,

To prove the legitimacy of the image? It’s a great data point that’s pretty anonymous; they don’t need to include the MAC, SIM, serial, or other information.

conciselyverbose ,

A. It’s not even the weakest of weak evidence of whether a photo is legitimate. It tells you literally zero.

B. Even if it was concrete proof, that would still be a truly disgusting reason to think you were entitled to that information.

SchmidtGenetics ,

You can use metadata to help prove an image is real, and you can’t prove something is real without it, so it’s the only current option. It tells you a lot; you just don’t want people to know it, apparently, but that doesn’t change the fact that it can be used to legitimize an image.

What’s disgusting about knowing whether an image was taken on a Sony DSLR, an Android phone, or an iPhone? And “entitled”…? This is so you can prove your image is real. What the hell are you talking about here?

conciselyverbose ,

No, you cannot use metadata as even extremely weak evidence that an image is real. It is less than trivial to fake, and the second anyone even hints at making it a standard approach, it will be on every photo anyone uses to mislead anyone.

Most photos on the internet are camera phones, and you absolutely are not entitled to know what phone someone has. Knowing someone’s phone has infinitely more value to fingerprinting a user than including metadata could ever theoretically have to demonstrate whether a photo is legitimate or not.

Photos without a specific, on record provenance from a credible source are no longer useful for evidence of anything. You cannot go back from that.

SchmidtGenetics ,

Metadata creates a trail: if you want to claim ownership of an image and I show an image with earlier metadata, whose is the real one? Yes, it can be faked, but it can also be traced. That’s not a reason to not do something, the hell? That’s like suggesting you can’t police murders because someone can fake a murder.

What is identifiable about the type of phone you have…?

And without that EXIF data you can’t prove any of that… you realize this, yeah…?

AbouBenAdhem ,

The problem here is that if this is unreliable…

And the problem if it is reliable is that everyone becomes dependent on Google to literally define reality.
