Name the Bias

Stop Saying ‘AI Bias.’ Name What You’re Seeing.


We’ve said the word “bias” so many times it’s lost meaning.

Algorithmic bias. Training data bias. Gender bias. Cultural bias.

Generic. Safe. Abstract.


The word gets thrown around in panel discussions, academic papers, corporate diversity statements. It’s become a catch-all term that lets us acknowledge a problem without actually confronting what that problem is.

Time to name what we’re actually talking about.


Misogyny, homophobia, racism, sexism, transphobia, class war

On a recent panelist planning call for WiT Regatta, I said something that landed hard:

“The word bias we’ve all said a hundred times is actually misogyny, homophobia, racism, sexism, transphobia, class war. When you name it for what it is, the conversation gets harder and more interesting.”

The call went quiet.

Then Fernanda Ave shared her story. She asked an image generator to create a “marketing professor.” Got a Harvard-blazer man in a power pose: authority, gravitas, academic credibility visualized.

Then she asked it to change the same professor image from male to female.

What she got: a timid school teacher. Not an authoritative university professor. A diminished version… less authority, less power, different context entirely.

It took her multiple rounds of prompting to finally get a female professor with the same authority the system gave the male version by default.

She posted the comparison on Instagram. Eleven of thirteen people immediately saw the problem: ten women and one man. The other two men said they saw nothing wrong.

That’s not bias. That’s misogyny embedded in image generation systems.

And when you name it… really name it… you can’t unsee it.


What We’re Actually Seeing

Let me be specific about what we’re calling “bias” in AI systems:

MISOGYNY is when:

  • Image generators create authoritative male professors but turn female professors into timid school teachers—requiring multiple rounds to achieve equal authority
  • Virtual assistants are coded female and deferential (Siri, Alexa) while authority figures remain male-coded
  • Résumé screening software ranks men’s applications higher because it learned from historical data where men got promoted (Amazon scrapped its tool after discovering exactly this)

RACISM is when:

  • Facial recognition systems fail more often on Black women than on white men (not by a little: by more than an order of magnitude)
  • Healthcare AI trained predominantly on white patient data misdiagnoses people of color at higher rates
  • Predictive policing algorithms send more police to Black and brown neighborhoods because they were trained on historically biased policing data

SEXISM is when:

  • You flip the gender on identical résumés and the algorithm ranks them completely differently
  • Medical AI systems trained on male bodies fail to accurately diagnose women’s health conditions
  • Hiring algorithms trained on 20 years of tech industry data just codify 20 years of gender imbalance and call it “objective”

CLASSISM is when:

  • Credit algorithms use zip codes as proxies for creditworthiness, effectively redlining by digital means
  • Educational AI systems assume lower performance from students in under-resourced schools
  • Job screening tools filter out candidates without four-year degrees, regardless of skill or experience

HOMOPHOBIA and TRANSPHOBIA are when:

  • Content moderation algorithms flag LGBTQ+ voices and identities more aggressively
  • Gender recognition systems offer only binary options
  • Healthcare AI doesn’t account for trans patients’ medical histories

This isn’t abstract. This is real harm to real people, disguised as algorithmic objectivity.


Dr. Joy Buolamwini: The Poet Who Names It

If there’s one person who’s been naming discrimination in AI systems louder and clearer than anyone else, it’s Dr. Joy Buolamwini.

I call her a cyberpunk warrior dressed in scholar’s robes. She calls herself a poet of code. Both are true.

As founder of the Algorithmic Justice League, Joy has been doing the work that tech companies should have been doing from the start: actually testing whether their systems work for everyone, not just the people who built them.

Her groundbreaking Gender Shades study exposed something the industry didn’t want to acknowledge: commercial facial recognition systems had error rates of less than 1% for lighter-skinned men but up to 34.7% for darker-skinned women.

Let that sink in.

The same technology. The same confidence from the companies selling it. But if you’re a darker-skinned woman, the system is over 40 times more likely to misidentify you than a lighter-skinned man (34.7% error rate vs. 0.8%).

That’s not a “bias challenge.” That’s racism and sexism embedded in code.
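You don’t need a research lab to run this kind of check. Below is a minimal sketch of the disaggregated evaluation Gender Shades made famous: report error rates per subgroup instead of hiding the gap inside one aggregate number. The field names and the sample data are hypothetical, shaped only to mirror the published 0.8% vs. 34.7% gap.

```python
# Minimal sketch of a disaggregated evaluation: instead of one aggregate
# accuracy number, report error rates per demographic subgroup.
# The field names ("skin_type", "gender", "correct") are hypothetical;
# substitute whatever labels your own audit dataset actually carries.

from collections import defaultdict

def error_rates_by_group(records, group_keys=("skin_type", "gender")):
    """records: iterable of dicts with group labels and a boolean 'correct' field."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        group = tuple(r[k] for k in group_keys)
        totals[group] += 1
        if not r["correct"]:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit results, shaped to mirror the published Gender Shades gap:
sample = (
    [{"skin_type": "lighter", "gender": "male", "correct": True}] * 992
    + [{"skin_type": "lighter", "gender": "male", "correct": False}] * 8
    + [{"skin_type": "darker", "gender": "female", "correct": True}] * 653
    + [{"skin_type": "darker", "gender": "female", "correct": False}] * 347
)

for group, rate in sorted(error_rates_by_group(sample).items()):
    print(group, f"{rate:.1%}")
# ('darker', 'female') 34.7%
# ('lighter', 'male') 0.8%
```

One blended accuracy score would have averaged that gap away. Breaking the numbers out per group is what makes the discrimination visible.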

Joy’s spoken word piece “AI, Ain’t I A Woman?”, a direct homage to Sojourner Truth, doesn’t softly critique. It demands accountability. It names what’s happening. It refuses to let discrimination hide behind the word “bias.”

Her term “the coded gaze” captures something essential: AI systems reflect the gaze of their creators. And if the creators are overwhelmingly from one demographic, one geography, one set of lived experiences, then the “objective” systems they build will reflect those limitations.

Joy didn’t wait for permission. She didn’t soften the language. She named what she was seeing, documented it rigorously, influenced policy, turned it into art, and demanded change.

That’s the model.


What a Decade Behind a Lens Taught Me

I spent more than a decade as a world-traveling photographer, shooting for Rolling Stone, Wired, The New Yorker, National Geographic, and beyond. You learn something fundamental when you spend that long looking through a viewfinder:

What’s missing from the frame matters as much as what’s in it.

Every photograph is a series of decisions: what to include, what to exclude, where to stand, when to shoot, whose story gets told. Those decisions reflect the photographer’s perspective, values, blind spots.

AI works exactly the same way.

When you train a system on data, you’re making framing decisions. What data gets collected? Who’s represented? Who’s missing? What gets labeled as “normal”? Whose experience becomes the baseline?

Those aren’t technical decisions. They’re values decisions.

And when you call the result “bias,” you’re softening what’s actually happening.

If the training data doesn’t include Black women’s faces, facial recognition systems won’t recognize Black women’s faces. If the data doesn’t include women’s health presentations, diagnostic AI won’t recognize women’s health presentations. What’s missing from the frame becomes what’s missing from the system.
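If you have any access to the training data at all, asking “who’s missing from the frame?” can start as something as blunt as counting. A minimal sketch, assuming the examples carry some demographic annotation at all (many datasets don’t, which is itself a finding):

```python
# A minimal sketch of "who's in the frame?": count how each labeled group
# is represented before you train on the data. The "group" field is
# hypothetical; use whatever annotations your dataset actually has.

from collections import Counter

def representation_report(examples, group_field="group"):
    """Share of the dataset each labeled group represents, largest first."""
    counts = Counter(ex.get(group_field, "unlabeled") for ex in examples)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.most_common()}

# e.g. representation_report(training_examples) might come back as
# {"lighter-skinned men": 0.62, "lighter-skinned women": 0.21, ...}
```

A group that barely shows up in that report is a group the finished system will fail on.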

And when those systems fail, people get misdiagnosed, wrongfully arrested, denied opportunities, surveilled, excluded.

That’s not bias. That’s harm. Specific, documented, preventable harm targeting specific communities.


Why Naming Matters

Here’s what happens when we name discrimination specifically:

“The algorithm said so” carries unearned weight: it feels objective, scientific, neutral. But when you call it “algorithmic bias,” companies can respond with “We’re working on bias mitigation strategies.” Generic problem, generic response.

When you name it as racism, as misogyny, as discrimination, it demands a different response. You can’t “mitigate” racism. You have to dismantle the systems that perpetuate it.

Specificity creates accountability.

“The algorithm seemed biased” is dismissible.

“This facial recognition system failed to identify my face as a Black woman” is evidence.

“This hiring algorithm ranked my résumé lower when I changed my name from Michael to Michelle” is documentation.

The more examples we surface with specific naming, the harder it becomes for companies to claim “edge case” or “isolated incident.” Pattern recognition requires multiple data points. Be a data point.
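If you want to be that data point, the documentation doesn’t have to be fancy. Here’s a minimal sketch, assuming you have some way to score a résumé against the system you’re auditing: change only the gendered name, keep everything else identical, and write both scores down. `score_resume` is a placeholder for whatever screening tool you’re testing; the name pairs are just examples.

```python
# A sketch of the "be a data point" audit: score identical resumes that
# differ only by a gendered first name and record the gap.
# `score_resume` is a placeholder for the system being audited;
# everything else is plain Python.

import csv
from datetime import date

def name_swap_audit(resume_text, score_resume, name_pairs):
    """Return one row per name pair: both scores plus the gap."""
    rows = []
    for masc, fem in name_pairs:
        score_m = score_resume(resume_text.replace("{NAME}", masc))
        score_f = score_resume(resume_text.replace("{NAME}", fem))
        rows.append({
            "date": date.today().isoformat(),
            "masculine_name": masc,
            "feminine_name": fem,
            "score_masculine": score_m,
            "score_feminine": score_f,
            "gap": score_m - score_f,
        })
    return rows

def save_evidence(rows, path="name_swap_audit.csv"):
    """Write the documented comparisons to a CSV you can point to later."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

# Example (hypothetical scorer and template):
# pairs = [("Michael", "Michelle"), ("James", "Jamie")]
# rows = name_swap_audit(open("resume_template.txt").read(), my_scorer, pairs)
# save_evidence(rows)
```

A file of dated, repeatable comparisons is much harder to wave away as an “isolated incident” than a vague impression.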

The conversation gets harder when you name discrimination. It also gets more honest. More useful. More actionable. Because now you’re talking about real harm to real people, not abstract technical challenges.


The Call to Action

Stop saying “bias.” Start naming what you’re actually seeing:

  • Is this misogyny?
  • Is this racism?
  • Is this sexism?
  • Is this classism?

Document specifically. Not vague impressions—concrete examples. Specificity creates evidence. Evidence makes dismissal harder.

Build vocabulary. When you can articulate what you’re seeing… bias laundering, coded gaze, algorithmic discrimination… you have language to demand change.

Credit the work. Dr. Joy Buolamwini didn’t just observe facial recognition failures. She quantified them, published them, turned them into art, founded a movement, influenced policy. When you use terms like “coded gaze” or “algorithmic justice,” credit where they came from.

Refuse the softening. When someone calls it “algorithmic bias,” ask: “What kind of discrimination are we actually talking about?” Make them name it.


What This Means for WiT Regatta

On February 5th, Fernanda Ave, Sonali Sharma, Adina Gray, and I are asking: “What would responsible AI look like in 2030?”

One answer: We stop calling discrimination “bias.”

We name misogyny when image generators default to diminished authority for women.

We name racism when facial recognition fails darker-skinned women at more than 40 times the rate it fails lighter-skinned men.

We name classism when algorithms use zip codes to determine who deserves opportunities.

The people in that room at WiT Regatta… women in tech, many actively building AI systems… are the ones who can change what gets built. But not if we keep softening what we’re seeing. Not if we accept that these are technical problems solved by better datasets.

This is about values. About whose faces, voices, experiences get included in the training data that will determine how AI understands humanity.

The people asking these questions are increasingly the people building these systems. That’s how change happens.




Kris Krug is a photographer (Rolling Stone, Wired, The New Yorker, National Geographic) turned AI educator, founder of Vancouver AI and BC AI Ecosystem Association. His 130,000 Creative Commons images became AI training data without consent. He uses these tools daily. He names what he’s seeing. Both things are true.


