
If you have seen: 

  • Glossy responsible AI principles

  • All of the buzzwords: "human-centered" and "sustainable" and "accountable" AI

  • A dedicated AI ethics person (who has been bullied onto sick leave)

  • Promises that a chatbot is safe for kids (for months after suicides were reported)

  • The blame put on users for the wrong prompts (when the guardrails were not deployed) 

Or if you tried to bring up an ethical problem and were labelled a complainer or difficult, put on a performance improvement plan, pushed out, or burned out. 

My research collects these stories from people in product development, consulting, academia, and NGOs: ethical problems in AI, and what happens when people speak up. 

100% confidential to you and your organization. 

 

Can we talk?

You have seen AI ethics-washing

How I protect your confidentiality

You will be anonymous

No details will be included that could identify you, your organization, or anyone else involved. 

The point of this research is to center the voices of people who speak up, not to identify or punish people or organizations. 

You can change your mind

If you change your mind after you share your story, that is fine. This is your story.

No data in the cloud; no AI

I transcribe interviews by hand, and once I do, I delete all details that could identify you. Then I delete the recording file.

I don't put any data into an AI system.

All of my writing is saved locally, not in the cloud. 

Independent research
 

I am conducting this research without affiliation to or funding from any institution, so that I have no reporting or documentation obligations. 

Share with me:
AI ethics problems & failures

Stories of ethics concerns that were ignored, dismissed, or punished. What you saw, what you said, and what happened next.

  • Safety concerns overridden by product launch pressure

  • Bias discovered but not addressed

  • Ethics reviews that were never meant to see daylight

  • Retaliation after speaking up

  • Marketing campaigns about ethical AI

Share with me:
AI ethics successes & hope

Times when the right thing happened. Visions of how AI should be developed, sold, and used ethically. Stories that give you hope. 

  • A team that listened and changed course

  • An accountability model you admire

  • Your vision for ethical AI 

  • Identifying discrimination and changing the product

"The way that hope is most often grounded in memory, because you can't see the future but you can understand the patterns and possibilities if you know the past." -Rebecca Solnit, No Straight Road Takes You There

How can we talk? 

However you feel most comfortable. Some write, some meet with me online, some meet in person. 

Your confidentiality is what matters most. 

Email me (encrypted)

Message, call, or voice message

+47 4547 1880

Signal

WhatsApp

SMS

Messenger

Online or in-person meeting

Trondheim, Norway

My name is Ley Muller

I publish under Ashley E. Muller

I am a critical researcher.

To me, this means a duty to constantly make and re-make the world into a better place. 

I care about people who are outsiders. Most of the time, I acutely feel like an outsider myself.

I have lived outside my home country (the US) since I was 23; I operate (quite obviously) in my third language. When I work with immigrants, refugees, queer people, and people facing mental health crises, I feel most connected.

When I talk to feminist killjoys and whistleblowers, those who refuse to accept daily racism or workplace takedowns, I know that there is hope. 

 

My network and my life are full of kind and clever people who challenge me, support me, and teach me. I have talked to so many more while doing this research: people who care about AI harms, who want AI development / research / consulting / policy to be more just. I hope this book will do them justice. 

[Headshot of Ley: black turtleneck, light skin, short hair, smiling]
  • LinkedIn

My AI credentials:

  • Founder, Values-driven AI consulting

  • IAPP Artificial Intelligence Governance Practitioner

  • Nordic leader of Women in AI Governance

  • Research and education lead of Women in AI Norway

  • Senior Fellow with the Aula Fellowship for AI, Science and Technology Policy

  • ISO 42001 Lead AI Auditor (Global Skills Development Council)

  • Built, led, and scaled up AI teams and projects in the state and municipal sectors

  • Worked as senior AI product manager

  • 8 articles and one upcoming chapter on AI

My research credentials:

  • PhD in addiction medicine, master's in applied social sciences

  • 45 articles

  • Forthcoming book chapter on power and AI

  • Co-editor, forthcoming book on AI

  • Technical Advisory Group on the mental health impacts of COVID-19, World Health Organization European Office

