When AI Designs Your Interface: Financial Institutions Facing New AI Risks

Introduction
As businesses rush to deploy AI for interface design and personalisation, they are unknowingly playing a dangerous game of regulatory roulette. Two converging problems have emerged from recent research that should alarm every compliance officer in financial services: AI systems generate manipulative design patterns even when not asked to, and the AI models themselves exhibit deceptive behaviours. What is merely concerning in e-commerce becomes potentially catastrophic in the financial sector.
The training data problem: dark patterns by default
A recent study examined what happens when users ask ChatGPT to generate e-commerce websites using neutral business prompts, deliberately avoiding any language that might lead the AI toward manipulative design.
Twenty participants started by creating a basic product page or checkout flow, then prompted the AI with neutral business goals like "increase the likelihood of selling our product" or "make customers more likely to sign up for the newsletter." Every final website contained dark patterns (manipulative design elements) the participants never requested. On average, each site featured five deceptive elements, with some containing as many as nine. These included countdown timers creating false urgency, fabricated customer testimonials, "limited stock" warnings, pre-checked consent boxes, and visual interference making rejection options harder to find than acceptance.
Why? Because of how LLMs are currently trained: on datasets scraped from across the internet. Dark patterns are overwhelmingly prevalent online (97% of sites in the EU, according to the European Commission), and LLMs are statistical models. Generating pages that contain dark patterns is, for them, simply the statistically likely output.
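To make the statistical point concrete, here is a toy simulation (our illustration, not the study's method); the 97% constant is the European Commission figure quoted above, and everything else is assumed for demonstration:

```python
import random

# Toy illustration: a generator that samples page templates from a
# corpus where 97% contain dark patterns reproduces them at roughly
# that rate, absent any filtering or alignment pressure against them.
DARK_PATTERN_PREVALENCE = 0.97  # European Commission figure for EU sites

def sample_page_has_dark_pattern() -> bool:
    """Simulates drawing one page template from the training distribution."""
    return random.random() < DARK_PATTERN_PREVALENCE

samples = [sample_page_has_dark_pattern() for _ in range(10_000)]
print(f"Generated pages with dark patterns: {sum(samples) / len(samples):.0%}")
```

Absent an explicit filter, the output distribution simply mirrors the training distribution.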
Perhaps the most alarming finding was that GPT-4 provided virtually no warnings. Of the twenty participants, only one received any cautionary note: a weak suggestion that pre-checked options "need to be handled carefully to avoid negative reactions." Sixteen of the twenty sessions produced no remarks about potential harms at all.
As the researchers noted, "this becomes particularly challenging when design knowledge is derived from examples incorporating deceptive design practices."
The second layer: AI models that manipulate
The problem extends beyond what AI creates to how AI itself behaves. DarkBench, a benchmark published at ICLR 2025, tested fourteen language models from OpenAI, Anthropic, Meta, Mistral, and Google for manipulative behaviours across six categories: brand bias, user retention, sycophancy, anthropomorphism, harmful generation, and sneaking. In total, the researchers collected 9,240 prompt-response pairs across the fourteen models.
Across all models, dark patterns appeared in an average of 48% of interactions, with individual models ranging from 30% to 61%. "Sneaking" (subtly changing meaning during text transformation tasks like summarisation) appeared in 79% of interactions. User retention tactics, where AI fosters inappropriate friendship to encourage continued engagement, reached 97% in Llama 3 70B. Gemini models exhibited sneaking behaviours in 94% of cases.
Brand bias proved particularly troubling: when asked to recommend AI tools, Meta's Llama 3 70B favoured its own products 60% of the time. GPT-3.5 Turbo showed harmful generation patterns in 85% of tested scenarios and user retention manipulation in 95% of cases.
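For teams that want to run similar audits internally, the evaluation loop behind a benchmark like this is straightforward to sketch. The snippet below is a minimal illustration, not the DarkBench authors' code: `query_model` and `classify_response` are hypothetical stand-ins for a model API call and an annotator (DarkBench used LLM-assisted annotation).

```python
from collections import Counter
from typing import Callable

# The six category names come from the DarkBench paper; the harness
# structure around them is an assumed sketch.
CATEGORIES = ["brand bias", "user retention", "sycophancy",
              "anthropomorphism", "harmful generation", "sneaking"]

def occurrence_rates(prompts_by_category: dict[str, list[str]],
                     query_model: Callable[[str], str],
                     classify_response: Callable[[str, str], bool]) -> dict[str, float]:
    """Returns the fraction of responses flagged per dark pattern category."""
    hits: Counter = Counter()
    totals: Counter = Counter()
    for category, prompts in prompts_by_category.items():
        for prompt in prompts:
            response = query_model(prompt)   # one model under test
            totals[category] += 1
            if classify_response(category, response):  # independent judge
                hits[category] += 1
    return {cat: hits[cat] / totals[cat] for cat in totals}
```

Running the same harness over several models and averaging the per-category rates yields headline numbers like the 48% figure above; the design choice that matters is keeping the annotator independent of the model under test, so the same judge scores every model.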
Why financial services face exponential risk
In e-commerce, a dark pattern might push someone toward an unwanted purchase. Annoying, probably illegal, but the harm is bounded to a few dozen euros or dollars per person. Already too much, sure. Amazon's $2.5 billion FTC settlement over subscription dark patterns illustrates the regulatory stakes even in retail.
In financial services, the same patterns carry existential weight. Consider the same AI-generated dark patterns applied to loan applications, credit card upgrades, or investment decisions. A pre-checked box that signs someone up for a newsletter becomes a pre-checked box that enrols them in expensive payment protection insurance. A "limited time offer" on shoes becomes artificial urgency on a remortgage decision that affects someone's financial health for decades.
And given the current trend towards ultra-personalisation, which is only feasible with AI, this risk is not theoretical: it is materialising right now.
The UK's Consumer Duty, enforced by the Financial Conduct Authority since July 2023, explicitly prohibits such practices. Financial institutions must "act in good faith," "avoid causing foreseeable harm," and "enable and support retail customers to pursue their financial objectives." The Duty's four-outcomes framework maps directly onto the dark patterns AI generates by default.
The compliance framework
The regulatory framework is becoming more coherent through increased institutional coordination. The FCA is actively working with the Information Commissioner's Office to clarify how firms should balance AI deployment with Consumer Duty obligations, with guidance expected in early 2026. The Digital Markets, Competition and Consumers Act 2024 strengthens prohibitions on unfair commercial practices, deceptive presentation, and misleading omissions.
Firms using AI to generate or personalise customer interfaces need systematic dark pattern screening of every AI-generated component. They need human review by specialists who understand both the regulatory framework and the psychological mechanisms these patterns exploit. They need documentation demonstrating governance over AI design processes. And they need to recognise that the interaction chain (AI generating interfaces, AI personalising them, AI chatbots guiding customers through them) creates compounding risk at every stage.
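As a sketch of what a first automated pass might look like (a hypothetical illustration, not any specific vendor's or regulator's tool), the snippet below scans generated HTML for two of the patterns the research flagged most often; the class-name markers it matches are assumptions for demonstration.

```python
from html.parser import HTMLParser

# Flags pre-checked consent boxes and artificial-urgency widgets in
# AI-generated markup. Heuristics like this triage components for the
# specialist human review described above; they do not replace it.
URGENCY_MARKERS = ("countdown", "timer", "stock-warning", "only-left")

class DarkPatternScanner(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self.findings: list[str] = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Pre-ticked checkbox: consent defaulted to "yes".
        if tag == "input" and attrs.get("type") == "checkbox" and "checked" in attrs:
            self.findings.append("pre-checked consent box")
        # Class/id names hinting at manufactured urgency.
        label = f"{attrs.get('class') or ''} {attrs.get('id') or ''}".lower()
        if any(marker in label for marker in URGENCY_MARKERS):
            self.findings.append(f"possible urgency element: <{tag}>")

scanner = DarkPatternScanner()
scanner.feed('<input type="checkbox" checked><div class="countdown-timer">00:59</div>')
print(scanner.findings)  # ['pre-checked consent box', 'possible urgency element: <div>']
```

A real screen would cover the full taxonomy (fabricated testimonials, visual interference, and so on), but even a narrow heuristic like this gives compliance teams a documented, repeatable first pass over every AI-generated component.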
The mathematics are simple: prevalent dark patterns in training data plus statistical models equals dark patterns in outputs. Add AI systems that themselves exhibit manipulative behaviours, and the result is a risk profile that demands proactive compliance, not reactive remediation after enforcement action.
Protect Your Company. Respect Your Users.
Explore Fairpatterns' tools.
References
- Gray, Colin M., Yubo Kou, Bryan Semaan, and Anne Bowser. 2024. “Dark Patterns by Default? Exploring the Risks of Generative AI in Interface Design.” Proceedings of the ACM on Human-Computer Interaction (CHI).
- Kroeger, Till, et al. 2025. “DarkBench: Benchmarking Dark Patterns in Large Language Models.” Proceedings of the International Conference on Learning Representations (ICLR 2025).
- Financial Conduct Authority (FCA). 2023. “Consumer Duty (PS22/9): Final Rules and Guidance.” London: FCA.
- Financial Conduct Authority and Information Commissioner’s Office. 2024. “AI, Data Protection and Consumer Protection: Joint Regulatory Engagement.” Regulatory update and joint statements.
- UK Parliament. 2024. Digital Markets, Competition and Consumers Act 2024. London: HMSO.

