Here’s what kept happening. I’d be leading a cohort of TheUpgrade… journalists, PR folks, sales leaders, creatives… and about halfway through week two, the same question would surface every single time. Someone would pause mid-exercise and ask, “Okay, but what about the ethics of this?”
Not in a hand-wringing way. In a genuine, working-professional way. They’d just used ChatGPT to draft a donor letter, or summarize a 300-page government report, or analyze a bunch of interview transcripts, and they could feel something shift. The speed was real. The output was real. And the questions were real too.
Is it cool that I didn’t tell the subject I used AI for this?
Am I allowed to train a model on our internal documents?
What happens when this thing is confidently wrong and I don’t catch it?
Should I be using this at all for decisions that affect people’s lives?
I’d been answering those questions one at a time for two years. Sometimes I had good answers. Sometimes I punted. Sometimes I’d say “let me get back to you” and then spend a weekend reading frameworks from UNESCO, OECD, NIST, and MITRE ATLAS, trying to give a straight answer.
Eventually I realized: we need a program for this. Not a webinar. Not a PDF. A real, assessment-based certification that produces people who know how to think about this stuff on Monday morning when their boss says “let’s throw AI at this.”
That’s RAP. It’s live. First cohort starts in May. Here’s the whole story.

The gap we kept running into
Every organization I talk to is deploying AI right now. Nonprofits, government departments, marketing agencies, law firms, Indigenous economic development corporations, school boards, you name it.
Almost none of them have governance frameworks. Not in a “they’re bad people” way. In a “nobody taught them how” way. The courses that exist are either academic theory with zero practical application, or vendor sales pitches dressed up as training, or narrow technical certifications that assume you’re already an ML engineer.
None of them were answering the question the journalists and PR folks and sales leaders kept asking me: how do I, a working professional who is not an AI researcher, actually assess whether this deployment is responsible?
I started sketching what a useful answer would look like. Four weeks. Assessment-based, not attendance-based. Built around practical artifacts — things you actually build and keep — not slide decks. Covers the full spectrum: technical foundations, bias and privacy, deployment ethics, the human stuff. Multi-framework because the world is multi-framework. Small cohort because the real learning happens in the discussion, not the lecture.
Then I realized I couldn’t build it alone. I’m a photographer who ended up running an AI ecosystem. I know community. I know teaching. I know how to translate technical ideas for working professionals. But I’m not the person who should be designing the assessment rubric for evaluating model governance.
So I called Martin.
The Martin call
Martin Lopatka is BC + AI founding member #31. PhD in forensic statistics. MSc in AI. Mozilla alum with production ML systems experience. He spent years doing the actual work of building responsible AI assessment frameworks inside a company that takes that stuff seriously.
I told him what I was seeing. The gap. The cohort questions. The weekend-warrior framework reading. The fact that we had this whole community of working professionals ready to do the work if somebody would just give them the map. He was in.

Then I called Sarah.
Sarah Downey is based in Victoria, 20+ years in nonprofit and social impact leadership, and she’s been quietly running facilitated conversations for nonprofit leaders on ethical AI governance for a while now. Her ethos: Stay Curious. Stay Connected. Stay Human. When I described the program, she didn’t flinch. She said “I’m in.” Founding member #138. She’s bringing the facilitation craft and the values-centered lens that mission-driven organizations need.
The three of us are the core. And that’s when RAP went from sketch to real.

What’s actually in the program
Four weeks. 90-minute live sessions once a week. Pre-readings and exercises between sessions. Cohort capped at 30 people because the conversation is the whole point and you can’t have a real conversation with 200.
Week 1 is Foundations. How these systems actually work. The accuracy problem — why models confabulate with full confidence and what to do about it. Global frameworks: UNESCO, OECD, NIST, IEEE. You leave week one with a Personal AI Inventory — every AI system you use or oversee, mapped.
Week 2 is Core Ethics. Bias and prediction. Privacy, consent, surveillance. Copyright and creative ownership. This is the week that shows up in lawsuits. Artifact: Ethics Assessment — your actual systems evaluated against the frameworks.
Week 3 is Societal Impact. Deployment readiness — when is this thing ready, and when should it just not be deployed at all? Labor displacement and worker surveillance. Environmental costs. The questions most companies skip because they’re uncomfortable. Artifact: Deployment Checklist you can actually use to make decisions.
Week 4 is the Human Element. Authenticity and deepfakes. Human-AI relationships and vulnerable populations. Creativity, agency, meaning. The stuff that doesn’t fit in a compliance document but determines whether we come out of this decade as better humans or worse ones. Artifact: Ethics Impact Assessment — your capstone.
Then there’s the part I’m most proud of. After the program, we build you a Custom GPT loaded with your coursework. Your inventory, your assessments, your context, your frameworks. It becomes your ongoing ethics practice partner. Not a chatbot you consult once and forget. An actual tool that knows how you think about this stuff and helps you stay sharp after the cohort ends.
That’s new. Nobody else is doing that. It’s the thing Martin got excited about when we designed it: turning coursework into infrastructure.

Who this is actually for
Three groups.
Leaders and executives overseeing AI initiatives who need governance frameworks that actually work… not policy documents that sit in a drawer until the incident happens.
Career transitioners with some runway or a severance package who are pivoting into AI governance, or AI practitioners who want to differentiate themselves as someone who understands ethics, not just implementation.
People who want to build an AI practice. Not certificate-mill graduates, but people who walk out with frameworks, artifacts, and a Custom GPT they’ll use for years.
If you’re looking for theory and vibes, this is the wrong program. If you want a compliance box to check, also wrong. If you’re ready to actually do the work of becoming the person in your org who knows how to think about this stuff? That’s who we built it for.
The pricing, and the math
Let me be straight about this because I hate when people bury it.
Standard price is $1,500 CAD. Early bird is $1,200 (until April 15 — that just passed for this cohort, so we’re at $1,500 going forward).
BC + AI members pay $750. Half. That’s the big discount.
Membership costs $340/year. So the real math looks like this: if you’re not already a member, you join BC + AI for $340, then register for RAP at the member price of $750. You spend $1,090 total instead of $1,500. You pocket $410. And you get the rest of the membership benefits for the year: Friday office hours, Discord with 850+ members, meetup priority, all future certification discounts.
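If you want to sanity-check that math, here it is as a few lines of throwaway Python (the numbers are just the prices listed above; nothing about the snippet is official):

```python
# RAP pricing, as listed above (all CAD)
standard_price = 1500   # non-member price
member_price = 750      # BC + AI member price (half off)
membership = 340        # annual BC + AI membership fee

join_then_register = membership + member_price   # 340 + 750 = 1090
savings = standard_price - join_then_register    # 1500 - 1090 = 410

print(f"Join-first total: ${join_then_register} vs ${standard_price} standard")
print(f"Savings: ${savings}")
```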
That’s not a gimmick. It’s how we’ve designed the whole organization: membership should be the obvious economic move if you’re engaging with our stuff. Nobody should pay full price while members pay half; we want you in the community.
The stuff I’m not going to pretend about
A few honest notes.
Cohort 1 is a pilot. We’re running it smaller and tighter on purpose. We’ll iterate through Cohort 2 (August-September, online, refined) and Cohort 3 (October, in-person intensive during BC + AI Festival Week). If you join Cohort 1, you’re shaping the program. That’s the deal. Some people want that. Some don’t. Pick accordingly.
The assessment is real. Weekly quizzes with an 80% pass threshold (unlimited attempts; we’re not trying to trick you). Practical exercises you actually submit. A final assessment. You earn the certification. I’d rather have 20 people finish than 30 people who half showed up walk away with a piece of paper.
This is not a technical deep-dive. If you want to argue about transformer architecture, this isn’t that. Martin could teach that course, but it’s a different course. RAP is for working professionals who need to make responsible decisions about AI systems, not build them from scratch.
We don’t pretend ethics is solved. I’m stealing this from Sarah: stay curious, stay connected, stay human. The goal isn’t to hand you the right answer. It’s to give you the frameworks, the judgment, and the practice to navigate the wrong ones.
Why BC + AI is the right place to build this
One last thing, because I know somebody’s going to ask.
BC + AI Ecosystem Association is a nonprofit. We’ve turned down $10K sponsors whose practices conflicted with our values. We open every meetup with Indigenous ceremony — not as acknowledgment theater, but as structural grounding for the conversation. We’re on the unceded territories of the Musqueam, Squamish, and Tsleil-Waututh peoples.

An ethics certification from a community that’s willing to walk away from money it disagrees with hits different than one from a vendor with a product to upsell. That’s not marketing — that’s just accurate.
We built RAP because somebody had to, and because we thought we were the right somebody. I still think that.

What to do next
Cohort 1 starts May 22, 2026. 30 seats. I don’t know how long they’ll last; we’ve had soft interest through the meetups and Discord already.

→ Join BC + AI — get member pricing, plus everything else we do
→ Register for RAP — lock in your seat
→ Questions? Reply to the email, show up at Friday office hours (12-1 PM PT, free, open to all BC + AI members), or message me on Discord. I read everything.
Shoutout to Martin Lopatka for carrying the curriculum weight on this, to Sarah Downey for bringing the facilitation craft, and to everybody in the AEFL (AI Ethical Futures Lab) community who’s been stress-testing these ideas with us every month.

Technology isn’t neutral and neither are we. Come build something with us.
— Kris
