
How AI can go wrong

Artificial intelligence is no longer science fiction. It’s here – embedded in everyday apps, services, and systems.

But while it promises efficiency and innovation, AI can also go dangerously off track. And when it does, it’s often consumers who pay the price.

This guide explores the many ways AI can go wrong, equipping you with the knowledge to recognise the dangers and understand what needs to change.

Join the Claim is not a law firm. This information is for general guidance only and does not constitute legal advice.


Introduction

The rise of AI has brought a wave of innovation, but it’s not all smooth sailing. While artificial intelligence offers smarter services and slicker tech, it also brings risks that are often hidden beneath the surface. From how your data is collected to how life-changing decisions are made on your behalf, AI is raising urgent questions about fairness, consent, and accountability.

Consumers are already feeling the impact – whether it’s your work being used to train machines, your credit score being affected by opaque algorithms, or your face being scanned without you knowing.

In the UK, many of these issues are already covered by existing laws, particularly data protection rules. But those laws weren’t designed with modern AI in mind, which is where the gaps begin to appear.

Data misuse: the invisible cost of convenience

We all rely on apps, platforms, and smart tools to get through the day. But while these services make life easier, they often come at a hidden price: your personal data.

AI systems run on data. The more they have, the better they perform. But not all data is collected ethically or used responsibly.

Your data, their training set

Many AI tools are trained using vast amounts of real-world data scraped from websites, social media, and public platforms. That might sound harmless, but often, this includes personal information: names, photos, opinions, purchase histories, and more.

Some companies argue this information is “public” and fair game. But being online doesn’t mean you’ve consented to have your data fed into a machine learning model, especially one that might be commercialised or reused without your knowledge.

Under UK GDPR, organisations must have a lawful basis to use your personal data, and they must be transparent about how it is used. Simply being publicly available does not automatically remove your rights.

What can go wrong?

When AI systems handle vast amounts of data, things can go wrong in ways that are invisible – until they're not. The real-world cases below highlight some of the biggest risks to watch out for:

The case against Microsoft and Google

In 2024, concerns emerged around how Microsoft and Google may have used personal data from everyday tools like Gmail, Word, Chrome, Outlook, and Teams to train their AI systems.

UK law firms are now investigating claims that this data was used without proper consent – a potential breach of data protection laws. If confirmed, those affected could be entitled to compensation for the misuse of their information. The investigation includes products used by millions of people, such as Xbox Live, Google Maps, YouTube and more. If you’ve used these services, your data may be involved.

Grok chatbot conversations exposed in search results

xAI’s chatbot Grok came under scrutiny after reports revealed that hundreds of thousands of user conversations were publicly accessible through Google search results.

According to reports, Grok users were given the option to “share” their chat transcripts. However, what many didn’t realise is that this didn’t just create a private link – it made those conversations publicly available online, where search engines could index them.

As a result, anyone could come across chats covering everything from everyday queries to far more sensitive topics, including health concerns, personal relationships and financial information.

Copyright abuse: creators left in the cold

One of AI’s flashiest tricks is generating content: artwork, articles, music, even deepfake videos. But AI doesn’t create from scratch – it mimics what it has seen.

To learn how to write like a journalist or paint like an illustrator, AI models are trained on millions of real pieces of content, often without asking permission or compensating the original creators. This raises serious copyright questions. If an AI tool can spit out a new image in your style, is that theft? If it uses your blog posts to learn how to write persuasive copy, should you be paid?

What can go wrong?

The case against Stability AI

Legal challenges have been brought against AI developers over how copyrighted content is used to train models.

In a landmark UK case, Getty Images brought claims against Stability AI in the High Court, arguing that its images had been used without permission to train the Stable Diffusion model.

In November 2025, the High Court rejected Getty’s claim for secondary copyright infringement. The court found that the AI model itself did not store or reproduce the original images, meaning it could not be treated as an “infringing copy” under UK copyright law.

However, the court did find limited and historic trade mark infringement, where generated images included Getty-style watermarks.

Importantly, some of the most significant copyright questions were not decided. Getty dropped key parts of its case during the trial, including arguments about infringement during the training process and in the outputs generated by the AI.

Algorithmic bias: when AI reinforces discrimination

AI is only as fair as the data it learns from. If the training data contains bias – and it often does – then the AI will learn to replicate and reinforce it.

This is particularly dangerous in high-stakes decisions: hiring, credit scoring, insurance pricing, and criminal justice. If historic data reflects discrimination, an AI tool will carry that forward.

What can go wrong?

Complaint against HireVue and Intuit over AI hiring discrimination

In 2025, the American Civil Liberties Union (ACLU) filed a complaint against Intuit and its hiring tech provider, HireVue. 

The complaint followed a case where a Deaf Indigenous woman was denied a promotion after being evaluated using a video interview platform.

The ACLU alleges that the system relied on automated speech recognition and non-verbal analysis, which failed to accommodate her disability and misjudged her communication style.

According to the complaint, the employee requested adjustments — including captioning support — but these were not provided. She was later rejected for the role, with the decision allegedly influenced by how the system assessed her communication.

The ACLU argues this may amount to a breach of disability rights and anti-discrimination law.

However, both Intuit and HireVue have denied the allegations. HireVue has stated that its AI-based assessment tools were not used in this instance, and Intuit has said it provides reasonable accommodations to candidates.

57% of Black people and 52% of Asian people expressed concern about facial recognition in policing, compared to 39% in the general population.

Lack of accountability: who takes the blame when AI goes wrong?

When AI gets it wrong, who is responsible? That’s one of the thorniest legal questions of our time.

Part of the problem lies in how AI systems are designed and deployed. Many decisions are made with little human oversight, especially when algorithms are outsourced or embedded in larger systems. That creates confusion over who owns the outcome – the tech company, the business using the AI, or no one at all.

Unlike a human employee, an AI tool can’t apologise or be disciplined. And tech companies often hide behind layers of complexity, claiming that outcomes are generated by “the model” and can’t be easily explained. This makes it incredibly hard to challenge unfair decisions or seek redress.

Without transparency, people can be left with no explanation and no clear route to challenge what happened. As a result, consumers are left chasing shadows, trying to hold someone accountable for decisions no one seems to own.


Deepfakes and misinformation: truth under threat

AI tools can now create incredibly realistic fake videos, images, and audio clips. These “deepfakes” are more than a novelty – they pose real risks.

From political propaganda to revenge porn, deepfakes have been used to deceive, harass, and manipulate. AI-generated misinformation can spread faster than fact-checkers can keep up, especially on social media.

What can go wrong?

Martin Lewis deepfake investment scam

In 2023, a frightening deepfake featuring consumer finance expert Martin Lewis appeared on Facebook and Instagram, promoting a fake investment scheme tied to Elon Musk.

The AI-generated video was highly convincing and went viral, causing public alarm. The clip prompted Martin Lewis to warn: “This is frightening … they are going to get better.”


AI-generated abuse and harmful content: when things escalate

In some cases, AI tools are being used to generate harmful, abusive or even illegal content at scale.

When safeguards fail

In early 2026, Grok – xAI’s chatbot, integrated into X (formerly Twitter) – came under scrutiny after researchers said it had generated millions of sexualised images in a matter of days, including images that appeared to depict children.

The controversy sparked widespread backlash and led to regulatory scrutiny in the UK, with concerns raised about whether the platform had adequate safeguards in place.

The wider trend is deeply concerning.

According to the Internet Watch Foundation, more than 8,000 AI-generated images and videos of realistic child sexual abuse were identified in 2025 — a rise on the previous year.

Some of the most serious material fell into the highest harm categories under UK law, highlighting how AI is being used to create more extreme and more realistic abuse content.

Experts warn that these tools are lowering the barrier to harm. Content that once required technical skill can now be generated with a simple prompt.

Everyday people becoming targets

There are growing concerns about how AI tools are being used in schools and workplaces. Reports in the UK have highlighted cases where pupils allegedly created fake, sexualised images of teachers and shared them online, causing serious distress.

These cases show how quickly AI-generated abuse can move from online spaces into real-world environments — affecting people’s reputations, wellbeing and ability to work.

In the UK, creating or sharing non-consensual intimate images — including AI-generated deepfakes — may be illegal. But enforcement can be difficult, especially when content spreads quickly or originates from overseas platforms.

AI in sensitive settings: where consent and clarity vanish

AI isn’t just shaping the content you see online – it’s now entering more sensitive spaces, like your workplace, your school, and even your GP’s surgery. And while these tools are often introduced to increase efficiency, they can create serious risks when they’re deployed without transparency or regulation.

In these contexts, people may not know that their words, decisions, or personal information are being shared with AI. Even more worrying, they may not have been given the option to consent in the first place.

What can go wrong?

Doctors found using unapproved AI

In June 2025, a Sky News investigation revealed that some NHS clinicians were using unapproved generative AI tools to transcribe and summarise patient appointments.

The software had not yet received NHS-wide approval, and there were concerns that patients were not clearly informed their consultations were being processed by AI. As a result, NHS bosses wrote to GPs and hospitals demanding that they stop using AI tools that potentially breach data protection rules and put patients at risk.

The Sky News report also flagged a critical issue: AI “hallucinations” – where the technology invents false information and presents it as fact. In a healthcare setting, that kind of error isn’t just misleading – it could be life-threatening.

AI hallucinations: when machines make things up

One of the most unsettling flaws in generative AI is its tendency to “hallucinate” – a term used when AI systems confidently produce false or misleading information. These errors are systemic: generative models produce output by predicting what is statistically likely given their training data, rather than by checking facts, so they can fabricate details while sounding authoritative.

Hallucinations often sound convincing, which makes them hard to detect. They can range from subtle factual errors to complete fabrications. In high-stakes situations – like healthcare, finance, or law – these made-up statements could have serious real-world consequences.

What can go wrong?

UK lawyers warned about using AI

In 2025, the UK High Court publicly reprimanded legal professionals who submitted documents containing fake case-law citations generated by AI.

In a major lawsuit (estimated to be worth £89 million), the claimant’s legal team presented 45 cited cases – 18 of which were entirely made up by generative AI. Dame Victoria Sharp warned that such mistakes threaten public trust, could lead to contempt proceedings, and might even bring criminal charges for perverting the course of justice.

88% of UK adults believe the government should have the power to stop harmful AI products.

Legal grey areas: the rules haven’t caught up

AI is evolving faster than the laws that govern it.

The UK has taken a “pro-innovation” approach to AI regulation. Instead of introducing a single overarching AI law, existing regulators — including the ICO, CMA and Ofcom — are expected to oversee AI within their existing remits. This differs from the approach taken by the EU AI Act, which introduces more prescriptive rules for high-risk AI systems.

Playing catch-up

Without clear, AI-specific regulations, tech companies often make up their own rules. They decide how data is collected, how algorithms are used, and what consumers are told — or not told — about how these systems operate.

While existing UK laws — including data protection, consumer protection and equality laws — already apply to AI, they were not designed with modern generative systems in mind. This creates uncertainty around how those rules should be interpreted in practice.

This lack of clarity can leave consumers in the dark about their rights and options for redress when something goes wrong.

Even when regulators do intervene, they’re often hampered by slow processes, limited resources, and legal frameworks that struggle to keep pace with fast-moving technology. Meanwhile, companies continue to develop and deploy new AI systems at speed.

AI regulation is coming – but not quickly enough

In the UK, the Information Commissioner’s Office (ICO) has raised concerns about AI use, but enforcement actions remain rare due to gaps in existing frameworks.

The UK government has signalled that more formal AI regulation is coming, but the timeline and scope remain uncertain.

In 2025, ministers confirmed plans for a more comprehensive AI framework covering issues like safety, transparency and copyright.

However, rather than introducing a single, standalone AI law, the current direction suggests a continuation of the UK’s sector-led approach — with regulators expected to apply existing rules to AI use cases.

This has drawn mixed reactions. Some argue it supports innovation, while others — including consumer groups and parts of the creative sector — say it risks leaving harmful gaps in protection.

Copyright and AI: a shifting position

Proposals had suggested allowing AI developers to train models on copyrighted content unless rights holders actively opted out. That approach was strongly opposed by artists and rights groups — including figures like Elton John and Kate Bush — who argued it would undermine the UK’s creative sector.

In March 2026, the government stepped back from this position, confirming it “no longer has a preferred option” and would take more time to decide how copyright law should apply to AI.

This means there is currently no clear or settled framework for how AI companies can use copyrighted material in the UK. While ministers say they are trying to balance the needs of the creative industries and the AI sector, campaigners remain concerned that ongoing uncertainty could leave both creators and consumers exposed.

What needs to happen next

AI doesn’t have to be dangerous. But right now, the balance of power is tilted too far in favour of big tech. Here’s what needs to change:

Stronger regulation

Updated laws to cover AI-specific harms and ensure clear consumer protections.

Greater transparency

Making AI systems explainable and accountable.

Informed consent

Requiring opt-in for data used to train AI.

Fair compensation

Paying creators whose work trains or is mimicked by AI.

Independent oversight

Establishing watchdogs with teeth to hold AI systems and companies to account.

FAQs about AI and your rights

As AI becomes more common, many people are unsure what their rights are when things go wrong. Here are answers to some of the most common questions.

Could I be owed compensation if my data was used to train AI?

Possibly. If your personal data was used to train an AI system without your knowledge or permission, and it breaches UK GDPR or data protection laws, you may be entitled to compensation.

Can I challenge a decision made about me by AI?

You have the right to challenge automated decisions that significantly affect you – for example, being denied credit, a job, or access to services. Under UK GDPR (Article 22), you can request a human review of the decision, but this right only applies to fully automated decisions that produce legal or similarly significant effects. Not all algorithmic decisions qualify.

Can AI companies use content I’ve posted publicly online?

This is a grey area. Some companies claim content made public online is fair game, but if it contains copyrighted material or personal data, UK law may offer protection. Legal challenges are already under way.

What are AI hallucinations, and why do they matter?

AI hallucinations happen when systems generate false or misleading content that sounds convincing. These can be dangerous in sensitive settings like law or healthcare, especially if they’re trusted without verification.

Is AI regulated in the UK?

Partially. While UK GDPR and consumer protection laws apply, there are still major gaps when it comes to regulating AI specifically. The government has signalled that further regulation is coming, but no dedicated AI law has yet been introduced.

How would I know if AI has misused my data or content?

It’s not always obvious. You might spot unfair treatment, see your content copied by an AI tool, or discover that your personal data was scraped for training. If you’re unsure, check our latest claims and investigations.

How Join the Claim can help

AI can be a powerful force for good. But it needs guardrails. As consumers, we have the right to demand better, safer, and more responsible systems – before the harms become irreversible.

At Join the Claim, we work with trusted law firms to help consumers understand their rights and take action when needed.

If you believe your data or rights have been misused by AI, we can help you check if there is a live case, confirm your eligibility and connect you with legal experts.

Disclaimer 

This guide is for general information only and should not be taken as legal advice. While we aim to provide accurate and up-to-date content, the legal and regulatory landscape surrounding AI is complex and rapidly evolving.

Join the Claim is not a law firm and does not provide legal representation. If you require legal advice or have specific questions about your rights, we recommend seeking guidance from a qualified legal professional.

