By Lindsay Chelf

How to identify AI-generated photos

If you’ve read our e-book or blogs on artificial intelligence, you are likely already aware of the many benefits of AI. Whether it’s improving productivity, advancing research or enhancing user experiences, AI is increasingly impacting our personal and professional lives—and doing so in ways that are becoming harder to discern.

One common type of AI-generated content is photographs (or rather, images that are photorealistic). To create these images, generative AI tools based on machine learning, such as DALL-E and Midjourney, are first “trained” on text captions paired with corresponding images to “learn” what objects look like. For example, the text “a red apple hanging from a tree branch” is paired with a photograph of—you guessed it—a red apple hanging from a tree branch. After processing countless text-image pairs like this, the program can eventually create its own image of a red apple hanging from a tree branch when prompted by a user.
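The pairing idea above can be sketched in a few lines of code. This is a deliberately tiny, illustrative toy—real models like DALL-E learn from millions of pairs using neural networks—where "images" are just three-number color vectors and "training" is simple averaging; all names and values here are made up for illustration:

```python
from collections import defaultdict

# Toy "images" as (R, G, B) color vectors, each paired with a caption.
training_pairs = [
    ("red apple", (0.9, 0.1, 0.1)),
    ("red barn", (0.8, 0.2, 0.1)),
    ("green apple", (0.2, 0.8, 0.2)),
]

# "Training": accumulate the average image vector seen alongside each word.
sums = defaultdict(lambda: [0.0, 0.0, 0.0])
counts = defaultdict(int)
for caption, image in training_pairs:
    for word in caption.split():
        for i, value in enumerate(image):
            sums[word][i] += value
        counts[word] += 1

word_vectors = {w: [v / counts[w] for v in sums[w]] for w in sums}

def generate(prompt):
    """'Generate' an image vector by blending the learned word vectors."""
    words = [w for w in prompt.split() if w in word_vectors]
    return [sum(word_vectors[w][i] for w in words) / len(words) for i in range(3)]

print(generate("red apple"))  # a reddish vector blended from "red" and "apple"
```

Prompting this toy with "red apple" yields a color dominated by red, because every training pair containing those words leaned red—the same pattern-matching principle, at a vastly larger scale, is what lets a real model render a convincing apple it has never actually seen.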

It’s no surprise that this technology continues to evolve and improve at an incredible pace, making it increasingly difficult to identify when a photo is authentic and when it has been created by AI. However, there are certain telltale signs—for the time being, that is—that can help you spot fakes at a single glance.

Use a critical eye

Despite all its capabilities, AI still struggles with the intricacies of human anatomy. If you suspect a picture depicting a person isn’t real, look at:

  • Fingers and hands: AI will often insert extra fingers or will merge them together in odd shapes, making hands the first place you should check for authenticity.

  • Hair: Human hair, with all its little imperfections, is difficult for AI to imitate. The result is hair that looks too perfect, like it was painted on with a brush in completely straight lines, or too weird, like locks that appear out of nowhere or loop back into themselves.

  • Skin: Like hair, skin is tricky for AI to simulate. While real people have minor imperfections, blemishes and texture to their skin, AI-generated humans tend to have impossibly perfect porcelain-smooth skin (though it’s important to note that filters and Photoshop can do the same to an authentic photo!).

  • Accessories: Look closely at glasses or jewelry, because AI will often generate misplaced, mismatched or distorted accessories. For example, glasses may have hair cutting through them and earrings may appear to melt into earlobes.

If you’re feeling up for a challenge, test your ability to identify AI-generated people using the above tips at the website Which Face is Real. Not all AI-generated photos are of people, however. When you don’t have the above human elements to study, look at:

  • Text: While AI can generate passably realistic imagery, it has a much harder time generating legible text. Look for garbled characters, misspelled words and strange shapes where readable text should be.

  • Patterns and symmetry: AI struggles with generating complex visual patterns, such as brick walls and decorative wallpapers or carpets, as well as symmetrically balanced visuals, such as a building with repeating architectural features, fences or the pane dividers on a window. While the pattern or balance may start out correctly, it usually degrades into bizarre-looking visuals.

  • Nonsensical logic: Though AI can outperform humans in many tasks, thinking isn’t one of them. Look for things that don’t make logical sense, like a bird with two pairs of wings or a mirror on a wall that isn’t reflecting what’s in the room around it.

  • Backgrounds: Don’t focus just on the subject in a photo—scan the background for inconsistencies and abnormalities. Buildings may be warped, furniture may be missing legs or crowds may be full of people with extra limbs and warped faces. Some AI generators heavily blur the backgrounds of the images they generate to hide these obvious flaws, so a fully blurred background is another sign an image may be fake.


Ask questions

Even if a photo passes your visual inspection, it’s still a good idea to sharpen your media literacy skills by practicing what is referred to as SIFT: Stop. Investigate the source. Find better coverage. Trace the original context.

  1. Stop: It’s only natural to want to have your beliefs confirmed with visual evidence, but it’s important to pause and think before sharing. Say, for example, you come across a photo on social media that shows a politician you dislike being hauled away by police in handcuffs. Your first instinct may be to share it with everyone you know—especially those who disagree with you politically—but before doing so, slow down and consider what you’re looking at. Does it confirm any biases you have? Is it depicting something that’s too good to be true?

  2. Investigate the source: While most AI-generated images are harmless, some are created to mislead, stoke fear or spread confusion. Using the above example of a photo of a politician being arrested, there are likely numerous people and organizations who would benefit from spreading that kind of misinformation. Ask yourself: Who’s sharing the image? Are they a reputable source? Do they have an agenda or a bias of their own?

  3. Find better coverage: If a politician really was arrested in such a dramatic fashion, you can be certain that legitimate news sources would be covering it. Are you able to find additional information from multiple trustworthy sources? Are there other photos from different angles? Or is the single image you saw on social media the only evidence that this event even occurred?

  4. Trace the original context: Even if an image is authentic, it can be presented in a misleading manner. Think about the example photo of the politician in handcuffs. Instead of being arrested, could they have been participating in a lesson on police safety for schoolchildren? Could it have been a staged photo op intended for use in a political ad?


Turn to tools

If you see an image you’re just not sure about (or if you want to avoid having to do a thorough investigation every single time you come across a suspicious image), there are tools out there to help determine an image’s veracity:

  • Hive Moderation AI-Generated Content Detection allows you to upload an image to determine whether or not it was created by AI. It also has a similar tool for text you suspect of being AI-generated.

  • Google’s About This Image can provide details to help you determine if an image is real or not, such as when it first appeared on the internet, where else it has been seen online and other important contextual information.

  • TinEye’s Reverse Image Search is similar to Google’s tool in that it can help you find out where an image originated. You can also compare different versions of an image, which may help you discover if it’s been modified in any way.
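One technique behind comparing versions of an image, as tools like TinEye do, is perceptual hashing: reduce a picture to a short fingerprint that stays the same under small edits but differs for unrelated pictures. The sketch below is a simplified average-hash over a made-up 4×4 grid of grayscale values (a real tool would first downscale an actual image file); the pixel data and function names are purely illustrative:

```python
def average_hash(pixels):
    """Hash a grid of 0-255 grayscale pixels: '1' where brighter than average."""
    avg = sum(pixels) / len(pixels)
    return "".join("1" if p > avg else "0" for p in pixels)

def hamming(a, b):
    """Count differing bits; a small distance suggests the same picture."""
    return sum(x != y for x, y in zip(a, b))

original = [10, 20, 200, 210, 15, 25, 205, 215,
            12, 22, 202, 212, 11, 21, 201, 211]
edited = [p + 5 for p in original]  # slightly brightened copy
other = [200, 210, 10, 20, 205, 215, 15, 25,
         202, 212, 12, 22, 201, 211, 11, 21]  # a different image

print(hamming(average_hash(original), average_hash(edited)))  # 0 -> same picture
print(hamming(average_hash(original), average_hash(other)))   # 16 -> different
```

Because the hash only records which pixels sit above the image’s own average brightness, uniformly brightening a copy leaves the fingerprint unchanged, while a genuinely different image produces a distant one—a simple illustration of how modified versions of a photo can still be matched back to the original.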

It’s important to recognize that AI detectors aren’t infallible, and as AI-generation technology continues to advance, detection tools must keep up or else risk becoming unreliable.


It can be troubling to think about how difficult it is becoming to distinguish AI-generated images from real photographs, but using the above advice and tools will help train your brain to recognize what is real and what is fake. Even if you are unable to determine an image’s authenticity and AI-detection tools are inconclusive, relying on the SIFT method is a great way to protect yourself against the spread of misinformation.

Still have questions? AOE has several team members certified by the Marketing Artificial Intelligence Institute who are ready to help you navigate the complex and rapidly evolving world of AI, so reach out and let’s get a conversation started!

