Designing Trustworthy AI Products

A DataCamp webinar with Marie Potel-Saville and Sara Vienna
Intro
In a recent webinar hosted by DataCamp, Marie Potel-Saville, co-founder and CEO of FairPatterns, joined Sara Vienna for a practical discussion on how product teams can design trustworthy AI products.
The session explored how seemingly small interface decisions—such as prompts, defaults, consent flows, or cancellation paths—can shape user behavior and sometimes become AI dark patterns. These patterns may reduce user control, exploit attention, or nudge users, intentionally or not, toward decisions they would not otherwise make.
The speakers emphasized that trustworthy AI does not start with compliance checklists or model selection. It begins much earlier, when users first encounter an interface and interact with its design choices.
Watch the Webinar
You can watch the full session here: https://www.datacamp.com/resources/webinars/designing-trustworthy-ai-products
Key Ideas from the Discussion
Several important themes emerged during the conversation:
- Dark patterns in AI products can go beyond traditional UX manipulation, especially when conversational systems or AI companions create emotional pressure or dependency.
- Transparency and user control are essential. Users rarely explore settings, so product teams must make data usage, consent, and model behavior understandable from the first interaction.
- Designing for critical thinking can reduce automation bias. Interfaces can encourage reflection through uncertainty cues, verification steps, or intentional friction in high-risk contexts.
The speakers also highlighted that trust is not only a product feature but an organizational outcome, shaped by incentives, metrics, and internal culture.
Thank You
Many thanks to Sara Vienna for the insightful discussion and to Rhys for facilitating the session.

