

OpenAI to Identify AI-Generated Images, Unveils New Tool to Authenticate AI Content


Image Credit: Generated by Ideogram.ai

OpenAI recently unveiled a new tool for detecting and understanding AI-generated images, a significant step toward helping people judge whether what they see online is authentic. The company also joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), the group behind an open standard for certifying where digital content comes from. OpenAI has endorsed that standard and begun applying it to images generated by its image model, DALL·E.

In a blog post, OpenAI noted that the internet is increasingly filled with AI-generated images and videos, and argued that people need a reliable way to tell when content was produced by a machine rather than by a person. To help with this, the company is taking two steps.

First, OpenAI joined the C2PA committee and backed its widely adopted standard for digital content certification, which is meant to make online content more trustworthy. The C2PA specification embeds provenance labels in an image's metadata to record where it came from. If a photo was taken with a camera, the label says so; if it was generated by an AI, the label records which AI made it. These labels are designed to be tamper-evident, so they are hard to alter or strip unnoticed even when the image is shared, cropped, or edited.
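To make the idea concrete, here is a minimal Python sketch of what such a provenance label might convey. The dictionary below is a simplified illustration, not the real C2PA format, which is a cryptographically signed binary manifest embedded in the image file; the structure and helper function here are assumptions made for clarity.

```python
# Illustrative sketch only: a simplified stand-in for a C2PA-style
# provenance manifest. Real C2PA manifests are signed binary
# structures embedded in the image; these fields are hypothetical
# simplifications for explanation purposes.

manifest = {
    "claim_generator": "DALL-E",  # which tool produced the image
    "assertions": [
        {"label": "c2pa.actions", "data": {"action": "created"}},
    ],
    "signature": "<cryptographic signature over the claim>",
}

def describe_provenance(manifest: dict) -> str:
    """Summarize where an image came from based on its manifest."""
    generator = manifest.get("claim_generator", "unknown")
    actions = [
        a["data"].get("action")
        for a in manifest.get("assertions", [])
        if a.get("label") == "c2pa.actions"
    ]
    if "created" in actions:
        return f"Created by: {generator}"
    return f"Provenance recorded by: {generator}"

print(describe_provenance(manifest))  # -> "Created by: DALL-E"
```

The signature is what makes the scheme tamper-evident: if someone edits the image or the label, the signature no longer matches, which is how edits can be detected.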

The second step is a new detection tool, which OpenAI has not formally named and simply calls its "image detection classifier." The classifier predicts whether an image was generated by DALL·E. According to OpenAI, it is quite accurate at this: it correctly identified about 98% of DALL·E images when distinguishing them from real photographs. It is less reliable at separating DALL·E's output from images made by other AI generators, getting roughly 5 to 10 out of every 100 wrong in that setting.
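In evaluation terms, those figures describe detection rates measured over labeled image sets. The sketch below shows how such rates would be computed; `is_dalle` is a hypothetical stand-in for the classifier, which is not publicly available, and the evaluation sets are assumed to exist.

```python
from typing import Callable, Iterable

# Hypothetical stand-in for OpenAI's classifier, which is in limited
# testing and not publicly available. It answers one question:
# "was this image generated by DALL-E?"
ClassifierFn = Callable[[str], bool]

def flag_rate(image_paths: Iterable[str], is_dalle: ClassifierFn) -> float:
    """Fraction of the given images that the classifier flags as DALL-E."""
    paths = list(image_paths)
    return sum(is_dalle(p) for p in paths) / len(paths)

# With labeled evaluation sets, the figures in this article would read:
#   flag_rate(dalle_images, is_dalle)    -> ~0.98 (correct detections)
#   flag_rate(other_ai_images, is_dalle) -> ~0.05-0.10 (images from other
#       AI models mistaken for DALL-E's output)
```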

OpenAI is now opening the classifier to a limited group of testers, including research labs and science-focused journalism groups, and asking for feedback so it can improve the tool.

Together, the new classifier and OpenAI's participation in the C2PA committee are meant to help people trust what they see online. The goal is to make it clear when content was generated by a machine rather than created by a person, so that, with these tools and standards in place, everyone can be more confident about what is real on the internet.