After my recent revised article on gamified journalism, which also touched on artificial intelligence (AI) tools, I felt it important to point out how we can recognise whether content is AI-generated, because the lines between human- and machine-generated content continue to blur. I find AI really useful for building on and enhancing content I had in mind, as well as for helping me avoid the blank-page, no-idea problem. I still curate, adapt and most frequently fully rewrite the rather bland articles churned out by AI. But let’s not be naive: advances in artificial intelligence, coupled with the sophisticated use of psychological and behavioural tactics in social media manipulation, have made it increasingly difficult to trust what we see online.
According to a 2024 Europol assessment, up to 90% of digital content could be AI-generated or AI-enhanced by 2026 (Europol, 2024). Meanwhile, disinformation campaigns driven by political actors, ideological groups, or even state-sponsored operatives continue to manipulate public sentiment on social platforms. These trends are not only technological but deeply societal, ethical, and regulatory in scope.
Since I am based in Europe, I felt it important to take the European perspective on this; I know other regions of the world may have completely different views. I actually like the fact that there is legislation that helps protect us and makes companies play fair with our data, something that, I have to say, many global giants do not care about. Recognising the signs of AI-generated content versus human-driven manipulation is no longer a niche skill; it is digital survival.
1. Understanding the Players: AI vs Social Manipulation
| Factor | AI-Generated Content | Human-Manipulated Messaging |
| --- | --- | --- |
| Creation Process | Generated via large language models, image generators, or video synthesis tools. | Created by individuals or groups seeking influence, often using coordinated inauthentic behaviour (CIB). |
| Intent | Ranges from productivity and customer service to entertainment and content scaling, but can also be misused. | Always strategic, with political, financial, or ideological intent. |
| Style and Tone | Often overly formal and grammatically perfect; occasionally lacks emotional depth or specificity. | Emotionally charged, provocative, and often pushes polarising narratives. |
| Pace and Volume | High volume, rapid output with consistent tone. | Lower volume per account, but often spread via bot networks to mimic grassroots support. |
2. Recognising AI-Generated Content
As a Content Consumer
- Over-polished language: Perfect grammar, overly generic or repetitive phrasing, lacking specificity.
- Lack of emotional depth: Misses the cultural or emotional subtext that humans typically convey.
- Predictable sentence structure: Long, flowing sentences that over-explain or repeat concepts.
- Anomalous imagery: AI-generated visuals may feature distorted hands, inconsistent shadows, or strange details (e.g., extra limbs or mismatched clothing).
- Shallow understanding: Often rehashes widely available information without true analysis or lived experience.
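To make the “predictable sentence structure” tell concrete, here is a toy script that measures how much sentence lengths vary in a piece of text; AI drafts often have an unusually even rhythm. This is only an illustrative sketch of the idea, not a reliable detector, and the crude sentence splitting is my own simplifying assumption:

```python
import re
import statistics

def sentence_length_spread(text: str) -> float:
    """Standard deviation of sentence lengths, counted in words.

    A very low spread means every sentence is roughly the same
    length, which is one weak hint of machine-generated prose.
    Treat it as a curiosity, never as proof."""
    # Crude split on end punctuation; good enough for a sketch.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # too little text to say anything
    return statistics.stdev(lengths)

# Perfectly uniform sentences score 0.0 -- suspiciously even.
print(sentence_length_spread("One two three. One two three. One two three."))
```

A human draft, with its mix of short punchy lines and longer rambles, will normally score well above zero.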
From a Content Producer’s Lens
- Fast publishing cadence: If a social media account posts high volumes daily across multiple formats (text, video, images), AI automation is likely.
- Style uniformity: Repetitive sentence structures, similar tone regardless of topic.
- Limited interactivity: AI-generated content lacks spontaneous reactions or meaningful engagement in comments.
For me, if an article leaves you uninspired and only surface-level informed, you have most likely hit an AI-generated piece. There are several article opener phrases that are complete AI giveaways, such as “In the age of…” (usually in far more flowery language), or a headline split into two parts by a colon.
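Those giveaways can even be roughed out as a toy heuristic. The phrase list below is purely my own illustrative assumption, and a match is a hint, never proof; plenty of humans write like this too:

```python
import re

# Hypothetical examples of opener cliches, plus the two-part
# "Topic: Subtitle" headline pattern. Extend or replace at will.
OPENER_TELLS = [
    r"^in the (age|era|world) of\b",
    r"^in today's fast-paced\b",
    r"^in an ever-(changing|evolving)\b",
]

def opener_tell_count(headline: str, first_sentence: str) -> int:
    """Count how many of the two crude 'AI opener' tells are present."""
    hits = 0
    if ":" in headline:  # headline split in two by a colon
        hits += 1
    opening = first_sentence.strip().lower()
    if any(re.search(p, opening) for p in OPENER_TELLS):
        hits += 1
    return hits  # 0 = no tells, 2 = both tells present
```

A score of 2 would make me read the rest of the piece with one eyebrow raised, nothing more.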
That said, after several decades of blogging for business and personal interest, I know a human can be equally flawed. I am currently refreshing some of my old content on the site and have often wondered what I was trying to achieve with a specific post. I evolved and learned, and so does AI. The more you work with it and correct it, the better it can mimic your voice and content. As they say, in the long run it will be the human using AI who replaces the one who doesn’t. Again, as long as it is ethical and built on your own content and views, I feel it can make things better. If you are simply ripping off other people’s content, then I find that fundamentally wrong.
Either way, transparency is key. And for full transparency: I write or co-write, edit and curate all the content on this blog. I do take help from custom GPTs that I created based on the blog library and my own insights, as well as some spelling and grammar AI tools to help keep things readable. I also ask OpenAI’s deep research function to take deep dives into specific questions or topics, such as this one.
3. Recognising Human Manipulation
From a Reader’s Viewpoint
- Emotional provocation: Designed to inflame anger, fear, or outrage — particularly around political, health, or social issues.
- Selective framing: Cherry-picked facts, false balance, or unverified statistics intended to support a one-sided narrative.
- Echo chambers: Distributed in networks or communities that reinforce bias without challenge.
- Hashtag hijacking: Sudden surge of similar messages or memes across multiple profiles (often using bots).
From the Sender Side
- Fake profiles: Accounts with stock photos, strange bios, or recent creation dates often spread manipulated content.
- Coordinated Inauthentic Behaviour: Identified by EU disinformation watchdogs like the European Digital Media Observatory (EDMO) as state-linked or corporate-led manipulation.
- IP pattern clustering: Campaigns traced to specific geographic locations or troll farms (e.g., St Petersburg, North Macedonia).
During the 2019 EU elections, the EU’s Rapid Alert System flagged multiple attempts to manipulate voter sentiment via false Facebook pages posing as news outlets. I would say that probably every election anywhere in the world in the last decade has seen some form of manipulation or other. Voter manipulation is nothing new, of course; it originated with political parties themselves, using propaganda to get elected. The main difference today is that technology has evolved to such an extent that it is harder and harder to detect whether content is genuine or designed simply to manipulate, influence or emote.
The movie ‘The Great Hack’, and much of the research and work by David Carroll and many others, shows how social media companies and political analytics firms can target very small groups of people and still make enough of a difference. If the margin between winning and losing is small, every small group you can influence is an important one. That is when my cynical self starts questioning results and official narratives a lot more. Not everything in media, and certainly not in politics, is actually in our interest or for public and social benefit; some of it serves to deepen the pockets or power of a select few.
In the right hands, used ethically, technology can advance society greatly; in the wrong hands (read: power-hungry, greedy, or any other vice), it can be equally destructive.
4. Recognising Manipulated or AI-Generated Visual Content
Visuals — once considered “proof” — can no longer be trusted without scrutiny. Still images and videos are now easily fabricated or altered to mislead, provoke, or deceive.
4.1. AI-Generated Images
AI tools like Midjourney, DALL·E, and Stable Diffusion can generate entirely fictitious visuals that look authentic at a glance — from fabricated news scenes to fake celebrity photos.
Key Red Flags:
- Anatomical distortions: Extra fingers, twisted limbs, or asymmetrical faces.
- Incoherent text or signage: Backgrounds with gibberish writing or warped logos.
- Unnatural lighting: Inconsistent shadows or reflections.
- Metadata gaps: AI images usually lack EXIF data (location, camera settings, timestamp).
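On the metadata point: you can check a JPEG for the EXIF marker in a few lines of standard Python, without any imaging library. This is a minimal sketch, and missing EXIF is a weak signal at best (social platforms routinely strip metadata on upload), so absence proves nothing on its own:

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF (APP1) segment.

    AI image generators typically emit files with no camera EXIF data,
    so a missing segment is one weak red flag -- but platforms strip
    metadata too, so treat this as a hint, not a verdict."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # malformed stream; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start-of-scan: no more metadata segments
            break
        # Segment length includes its own two bytes, not the marker.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True  # APP1 segment carrying EXIF found
        i += 2 + length
    return False
```

Running it over a phone photo will normally return True; over a typical AI-generated download, often False.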
Remember the viral 2023 AI image of the late Pope Francis in a designer coat? It fooled millions before being debunked by experts (BBC Verify). Or the doctored image of an immigrant’s hand which a certain president considers real. Some fakes are really obvious, but others are very close to reality, and unless the technology companies behind them are held to account when fakes are created, this will, in my view, only get worse.
4.2. Deepfakes and Synthetic Video Content
Deepfakes use machine learning to swap faces, voices, or speech patterns — often to impersonate public figures or fabricate endorsements.
Common Indicators:
- Unnatural eye and mouth movements.
- Voice-lip sync discrepancies.
- Inconsistent lighting across scenes.
- Blurred or glitchy edges around faces during motion.
4.3. Visual Social Engineering Tactics
Manipulators also use real images out of context, doctored visuals, or emotive scenes to provoke reaction or amplify division.
Tactics Include:
- Recontextualisation: Using an old photo (e.g., a 2010 earthquake) to represent a current event.
- Doctored propaganda: Adding symbols, slogans, or flags to change the meaning.
- Emotional baiting: Images of crying children or disasters to drive virality without accuracy.
4.4. Detection and Regulation in the EU
Detection Tools:
- InVID – Video verification for journalists.
- FotoForensics – Analyses image tampering and metadata.
- Hive AI – Detects deepfakes and synthetic media.
- Microsoft Video Authenticator – Scores authenticity of video frames.
Regulatory Context:
- The AI Act (2024) mandates clear labelling of synthetic audiovisual content.
- The Digital Services Act (DSA) obliges major platforms to detect, label, and remove harmful or manipulated visual media.
EU Principle: Transparency is paramount. Users must know when they are engaging with synthetic media.
The detection tools are not foolproof, and regulation is usually a few steps behind, but I feel it is good to have some boundaries, especially when company executives are not stepping up to do the right and ethical thing themselves. In gamification, it has been clear for some time that everything we design has winners and losers, great uses and unethical ones.
5. Ethical and Regulatory Dimensions in the EU
| Aspect | AI Content | Manipulated Content |
| --- | --- | --- |
| Primary Concern | Lack of transparency, misinformation (“hallucinations”), and potential for misuse. | Deliberate deception, polarisation, suppression of public discourse. |
| Legal Framework | Governed by the EU Artificial Intelligence Act: high-risk systems must meet transparency and risk management obligations. | Covered under the Digital Services Act (DSA), which obliges platforms to detect and remove disinformation and bot-driven manipulation. |
| Enforcement Mechanism | Risk-based classification system, with fines of up to 7% of global turnover for the most serious violations. | Platforms face steep penalties for non-compliance, with independent audits mandated for very large online platforms (VLOPs). |
Relevant Legislation:
- EU AI Act (2024) – Sets obligations for transparency when users interact with AI (e.g., chatbots, content generators).
- Digital Services Act (DSA) – Aims to create a safer digital space by regulating the spread of harmful or illegal content.
Ethical Dilemma: While AI can help fact-check or counter disinformation, it also risks creating convincing “fake news” at scale. Similarly, while human messaging is essential for activism, it can be hijacked for populism, conspiracy theory promotion, or destabilisation efforts. In the gamification space, some of us with strong ethical views signed a voluntary code of ethics to create gamification for the greater good of humanity and to refrain from unethical use. I feel something similar is necessary for social media, AI and other technological advancements that touch human behaviour and impact our community and planet.
6. Benefits and Risks: A Balanced View
| Benefit | Challenge |
| --- | --- |
| AI: Enables faster content production, assists those with disabilities, supports multilingual communication. | Can be exploited to flood platforms with synthetic propaganda, deepfakes, or fake influencers. |
| Human Messaging: Empowers grassroots movements, enables whistleblowing, supports democracy. | Can devolve into manipulation, radicalisation, and in-group/out-group polarisation. |
7. Building Digital Resilience
What Can You Do
- Pause before sharing: Emotional content thrives on immediacy. Delay = better judgement.
- Source triangulation: Cross-check key claims against independent, reputable outlets.
- Use EU-supported fact-checking tools, such as the fact-checking organisations listed on the EDMO portal.
- Install browser tools: Tools like NewsGuard, BotSentinel, or InVID can help analyse sources and detect tampering.
- Look for transparency indicators: Tools that label AI content (required under the AI Act) or human authenticity badges help build trust.
- Follow fact-checkers in the press and on social media, and submit claims you doubt to them.
- Accept that you will get it wrong some of the time, and be big enough to apologise to your following and rectify your error. Be sceptical and transparent. Don’t become a spreader of misinformation.
- Learn, and educate yourself and the people around you, on how to spot a fake and how to question your information sources.
Education is Our Best Defence
As AI technology advances and information warfare becomes more subtle, recognising the difference between algorithmic content and deliberate human deception becomes a human and democratic necessity. I believe that in today’s turbulent climate, with wars, hate and democracy under threat in so many places, we have to learn to be sceptical and maybe a touch paranoid. Not everything is as it first seems, and some people, including those in high places, do not have our best interests at heart.
Whether you’re a regular person, educator, policymaker, or business leader, the takeaway is simple:
“Ask not just what the message is, but why it exists, who created it, and how it makes you feel.” And if in doubt follow the money and power trail.
It will take a concerted effort from all of us to keep our communities and society functioning. Encouraging transparency and ethical uses, and teaching human behavioural values, will matter in the long run. Really, we shouldn’t need laws to tell us the right way to do things, but I am glad I live on a continent where regulations are most often in place to protect the majority of us, even if I may grumble when proving compliance costs a lot.
I personally value democratic systems with proportional representation, so that most of us can have a voice. Democracy is messy by nature, in my opinion, and hard to get 100% right, if that is even realistic. But it is also built on the basis that many viewpoints can come to the same table and discuss a workable or agreeable scenario. To get there, we have to be willing to listen to and learn from opposing viewpoints. The more technology enables the communication processes that shape our society, the more responsibility it carries to behave in a transparent and ethical fashion, or be forced to do so. In the meantime, educate yourself and keep questioning.
Further Resources
- EU AI Act Summary: https://artificialintelligenceact.eu
- Digital Services Act Resources: https://digital-strategy.ec.europa.eu
- EDMO Portal: https://edmo.eu
- Media Literacy Now EU Toolkit: https://medialiteracynow.org