Alright, digital revolutionaries, gather ’round. I’ve got a story to tell, and it starts with a conversation I had with my buddy Kushal Goenka at the Vancouver Convention Centre.
Picture this: We’re fresh out of a Web Summit Vancouver 2025 Town Hall, the air thick with promises and platitudes about the next big tech thing to hit our city. But Kush and I? We’re not buying the hype. We’re here to talk about something bigger, badder, and a whole lot more important: open-source AI and the future of human freedom in the digital age.
Now, before we dive into the matrix, let me give you the lowdown on Kush. This guy’s not your average tech bro. He’s a firebrand, an open-source evangelist with a righteous anger that’d make Richard Stallman proud.
When I asked him why he cares so much about open-source AI, he hit me with this: “The thing that supersedes everything for me is freedom—true freedom to do whatever I want, not constrained by interests that exist for monetary or power reasons.”
Boom. Just like that, Kush laid out the stakes. We’re not just talking about some nerdy code-sharing here. We’re talking about the very essence of human agency in the digital age.
Freedom Ain’t Free
Let’s break it down. When Kush talks about freedom, he’s not spouting some political buzzword. He’s talking about something far more fundamental. In his words:
“When I talk about freedom, I mean the ability to understand what’s really going on and to take action on it. In open source, it’s about being able to modify things—create something to help myself or others without relying on corporate entities. I want that freedom for myself and for everyone.”
This isn’t just about tweaking your smartphone’s interface or customizing your desktop wallpaper. We’re talking about the power to peek under the hood of the AI systems that are increasingly running our world. It’s about having the ability to say, “Hey, I don’t like how this algorithm is making decisions. Let’s change it.”
And let me tell you, this idea of freedom through open source? It’s not confined to the digital realm. As we chatted, I couldn’t help but draw parallels to the Right to Repair movement. It’s all connected, folks. Whether it’s fixing your tractor or modifying an AI model, it’s about reclaiming control over the technology that shapes our lives.
Kush nodded vigorously when I made this connection. “Absolutely,” he said, “it’s all connected. My background isn’t even in software; it’s in industrial design, and the same issues apply. Our ability to repair and modify things has been taken away. We used to be able to fix cars or electronics. Now everything is glued shut, and that’s infuriating.”
The AI Arms Race
Now, let’s talk about the elephant in the room: the big tech companies that are currently leading the AI charge. You’ve got your OpenAIs, your Googles, your Metas. They’re pouring billions into AI research, and sure, they’re making some pretty cool stuff. But here’s the rub: they’re keeping the good stuff locked up tighter than Fort Knox.
I asked Kush about this, and his response was pure fire:
“Open-source AI is crucial because AI is becoming part of everything. I can’t accept the idea that OpenAI, which started as an open-source initiative, now controls so much. It’s crazy that everything we do might eventually go through an API owned by a corporation. I need AI to be open so we can build on it, improve it, and ensure privacy and sovereignty.”
Preach, brother. But here’s where it gets interesting. Not all big tech is created equal when it comes to open source. Take Meta (yeah, I know, Facebook’s evil twin). They’ve actually been making some serious contributions to the open-source AI community with projects like LLaMA.
But Kush isn’t about to give Zuckerberg a gold star just yet. He’s more interested in the principle of the thing:
“Exactly. There are many ways to do things, and open-source tools give us the freedom to run things on our own terms. A lot of people choose convenience and pay for subscriptions, but there’s immense value in contributing to and using open-source systems.”
Misaligned AIs
Now, let’s get into some of the nitty-gritty. When we talk about the dangers of AI, most people jump straight to Skynet scenarios. But Kush and I? We’re more worried about something a lot more subtle and a whole lot more likely: misaligned AIs.
“The whole alignment narrative pushed by big companies is propaganda,” Kush declared, his eyes blazing with that tech-revolutionary fire I’ve come to know and love. “They claim they’re keeping AI safe, but they’re actually centralizing control. If we want AI to benefit humanity, it needs to be open. If we iterate on these models openly, we can expose flaws and improve them, rather than hiding issues behind corporate walls.”
Here’s the deal: a misaligned AI isn’t some malevolent digital overlord. It’s more like a well-meaning but clueless intern who’s been given way too much responsibility. It’s AI systems that fail to meet their intended goals, leading to unintended consequences that can ripple out across society.
And when these systems are locked away in corporate vaults, we can’t see these misalignments until it’s too late. Open-source AI, on the other hand, puts these systems under the collective microscope of the global tech community. It’s like having millions of eyeballs scrutinizing every line of code, every decision-making process.
Data: The New Oil, and You’re the Unwitting Driller
Let’s talk data, baby. In our hyper-connected world, data isn’t just some abstract concept. It’s the lifeblood of AI, the raw material that’s shaping the digital future. And guess what? You’re generating it every single second you’re online.
But here’s the kicker: while you’re out there living your best digital life, thinking you’re getting all this cool tech for free, the tech giants are laughing all the way to the bank. They’re not just selling you targeted ads; they’re building comprehensive models of human behavior. And let me tell you, that’s worth a hell of a lot more than knowing you like avocado toast and true crime podcasts.
Kush and I got pretty fired up about this. We started riffing on the idea of an open-data commons, a future where our collective knowledge is freely shared and accessible to all. Imagine contributing to a global AI that works for everyone, not just shareholders.
“AI is just one tool in a broader system, and it needs to stay that way,” Kush emphasized. “We can’t let it be monopolized by a few entities that dictate how it’s used and who gets access.”
AI Commons vs. AI Kingdoms
As our conversation deepened, Kush and I found ourselves painting two very different pictures of the future. On one side, you’ve got what I like to call the AI Kingdoms: a world where a handful of tech companies control the most powerful AI systems on the planet. They decide who gets access, how it’s used, and where innovation happens. It’s like digital feudalism, but with more RGB lighting and ergonomic chairs.
But there’s another path, a digital road less traveled. Imagine instead a Global AI Commons – a decentralized ecosystem where AI development is a collaborative, worldwide effort. It’s not just some tech utopia; it’s already happening in pockets around the world.
“AI shouldn’t be treated as this mysterious black box with all the control in corporate hands,” I mused. “We need to democratize access so that everyone has a shot at building something meaningful.”
Kush nodded enthusiastically. “Exactly,” he said, doubling down on the point he’d made earlier: AI is just one tool in a broader system, and we can’t let it be monopolized by a few entities that dictate how it’s used and who gets access.
The Ethics of AI Control
As our chat wound down, we found ourselves grappling with some heavy philosophical questions. When we talk about AI making decisions that affect everything from healthcare to finance to education, we’re not just talking about cool tech – we’re talking about the future of human agency.
If these AI systems are black boxes controlled by corporate interests, what happens to our sovereignty as individuals and as a society? It’s not just about whether an AI can beat you at chess or write a passable sonnet. It’s about who controls the algorithms that decide whether you get that loan, that job, or that medical treatment.
This is why open-source AI isn’t just a technical preference – it’s a moral imperative. The future of AI needs to be as transparent as a freshly Windexed window and as accountable as a politician caught with their hand in the cookie jar. It needs to be in the hands of the people, not locked away in corporate vaults.
Hacking the Future
So, what’s the game plan? First off, let’s get one thing straight – waiting for the government or big tech to regulate themselves is like expecting a fox to guard the henhouse. It’s time for some good old-fashioned digital activism.
If you’re a coder, get your hands dirty with open-source AI projects. Contribute, collaborate, and create. If you’re not technically inclined, no worries – you can still be part of this revolution. Spread the word, support open-source initiatives, and make some noise about data rights and AI transparency.
As Kush put it: “You can find me on YouTube—just search for my name, Kushal Goenka. I’m focused on educational events and workshops about open-source AI, so if you’re in Vancouver and want to collaborate, reach out!”
And here’s the real kicker – you don’t need to be a tech guru to care about this stuff. If you use a smartphone, if you’ve ever talked to a chatbot, if you’ve ever wondered why that ad seems to know what you’re thinking – congratulations, you’re already in the game. The question is, are you going to be a player or just another NPC?
Choose Your Future
We’re standing at a digital crossroads, folks. The decisions we make now about AI will echo through the circuits of time, shaping the world for generations to come. It’s not just about cool tech or convenience – it’s about the very nature of human agency in the age of artificial intelligence.
The future of AI must be open, transparent, and for the people. It’s time to break out of the walled gardens, to tear down the black boxes, and to build a digital future that serves all of humanity, not just the privileged few.
As I wrapped up my chat with Kush, I couldn’t help but feel a surge of optimism. Yeah, the challenges are big, and the corporate giants seem insurmountable. But when I look at the passion and brilliance of people like Kush, when I see the global community of open-source developers and digital rights activists, I know we’ve got a fighting chance.
So, what’s it gonna be? Are you ready to join the open-source revolution? Are you ready to reclaim your digital sovereignty? The code is out there, the community is waiting, and the future is up for grabs.
Let’s hack the planet, one open-source project at a time. The future isn’t just coming – it’s waiting for us to build it. So fire up those terminals, flex those coding fingers, and let’s make some digital noise. The AI revolution will be open-source – and it starts with you.
Remember, in the words of the great Kush: “If we want AI to benefit humanity, it needs to be open.” Amen to that, brother. Now let’s go make it happen.
The Interview Transcript: Kushal Goenka
I thought I was ready to start. I just needed to gather my thoughts. Everyone’s going to think you’re a celebrity. Hey, what’s up internet? It’s Chris Krug. I’m here with Kushal at the Vancouver Convention Centre. What’s up Kush? Hey, how’s it going? It’s going good, man. Um, we’re down here for the uh, web, the Vancouver Web Summit Town Hall at the Convention Centre.
Man, what’d you just take in there? Man. Yeah, I mean, uh, let’s see if it’s gonna happen next year. Um, lots of, uh, conversation around it. Yeah, I mean, there’s a big, uh, web festival coming to Vancouver called Web Summit. It’s happened around the world and they liken it to South by Southwest and stuff, and they, uh, put together a bunch of public funds from the feds and the province and went and tracked down this thing, and are hoping to, you know, quote unquote shine a light on the Vancouver tech ecosystem.
And stuff like that. And so, yeah, there’s a lot of like, um, carrots sort of being dangled right now about the amount of people that are coming and the impact that it’s going to have and the amount of dollars it’s going to bring here. And so we’re just kind of here to, I’m here to, to learn and to, um, see what my angle is and, you know, in some ways to provide some accountability to the powers that be, uh, and stuff.
So, yeah, you got anything you want to add about Web Summit or anything? Probably not. Um, cause I have things, but let’s not talk about them. I appreciate it actually, because, um, you know, one of my favorite things about Kushal right now is his righteous anger. He’s an AI guy, he’s an open source guy, and he’s, um, he’s got some radical views, and I really appreciate that. I can often find Kushal in a room at any of our tech events, you know, with his hand up, you know, just kind of like speaking truth to power and holding people responsible and accountable for the types of things they say. So, you know, you might as well save your righteous indignation for the things you care about most instead of, you know, uh, blowing the lid off on Web Summit Vancouver. You know, this is Kushal’s restraint phase.
So, um, so the purpose of this podcast is I’m prepping for the, my, um, CBC sandboxing show. And I want to talk about, uh, open source, open source AI specifically, um, on the show. And, uh, you know, we were hanging out on the weekend and I was like, uh, yo, Kush, what do you know about the topic? And, you know, let’s get down.
And so I wanted you to take me on a bit of a journey today about, um, open source software, kind of in the biggest picture, and, like, open, open source AI kind of more specifically. And then, like, why you give a fuck, and why you really think this is, um, you know... It feels like you feel like this is an issue that supersedes many others.
You know, a foundational issue, if you want to say, or whatever. So, I’d love for you to just talk to me a little about your views on open source at first, and then we’ll go through the rest. Perfect, perfect. So, um, should I go first? Sure, go for it. Yeah, yeah. That sounds great. Um, so, so, um, I, obviously, this is a bit more impromptu, but I hope I have, like, put my thoughts together more than I ever do.
Um, so, um, So I’ll put them together as I go. Um, but I think, so you mentioned, like, something that supersedes a lot, uh, at least in my view. And it’s not necessarily, it is open source, but I think the thing that supersedes everything for me is, is freedom. Uh, just true freedom, really being able to do whatever I want.
Um, you know, good, good things, um, and, uh, just not being constrained by all kinds of, um, interests that are there for monetary reasons or power reasons or whatever. I wanna talk, actually, this is why I hold the mic, so we can go back and forth a little bit. And we’ll figure it out, but, um, um, unpack freedom a little bit, because most of the time, that’s an American word, when they’re invading countries and dropping bombs, so when you say freedom, I don’t think that that’s what you’re talking about.
That’s true, but freedom should definitely not be co-opted by an entity. Uh, obviously the exact point is that they are using freedom as the, um, as the blanket under which they get to do all these things, uh, the umbrella, so. Um, but, but, but yes, fundamentally, America does purport that value. I think there are certain American, um, you know, ways in which it is true, but a lot of it is certainly, you know, just air.
I think there is certain American, um, you know, ways in which it is true, but all of it is certainly, you know, just air. But in contrast to that, talk to me about your view. So, I mean, freedom to me, in this case, really means, uh, for example, so in the case of open source, right, what it really means is freedom to understand what’s really going on, um, and not to sort of just take things for granted, and then to be able to do something about it.
Um, if I have something that I personally want to do, for example, um, create a specific, uh, thing that can help me do something. If I want to help some friends with a, with a task, if I want to maintain a business around something, that is perfectly a good thing to do. If I, uh, all kinds of things, I want the ability to myself do it, just myself alone even, and then certainly with friends if I want, and not have to rely on other resources or other people, especially corporate and top-down entities, to be able to do it.
For example, if you, if you, on a very simple level, so much of our life’s been, like, just turned into commodity, or just sort of top-down, uh, hierarchies where, even if you want to go to your friend’s place, you’re using a company’s maps, an app that they own. You have to navigate through it, and if they’re down, or if somehow the place you want to go to is not registered because the person did not sign up on that platform or because a promotional entity took over that listing, that now impacts your experience.
If they somehow are prioritizing certain things over others, that impacts the journey you take, the route you take. Uh, if you’re in a country or whatever where they did not already map out the, you know, uh, the satellite imagery, which used to be the case for a long time in, you know, different countries, not, not U.S. and Canada, then you are, you know, shit out of luck. Now you gotta find your alternative approach. So, that’s just one random example. But everything in our lives has become that. You know, like, Slack, Messenger, WhatsApp. Basic things that we do, communication and so on, are somehow now extracted out, or you know, extracted out of us.
Yeah. To a top-down entity. That to me is just crazy, so I try and avoid it in every aspect that I can, that I do have control over, while of course living in the world that we do live in, so. To me, freedom, maybe, there’s so much, you know... No, I love it. I mean, um, I often don’t think of the open source movement as a singular movement, but I think it’s important to recognize that it’s a series of different movements.
Um, and I think that it represents a lot of the things that you’re talking about, in, like, a standing against, you know, kind of corporate interests and, and monopolistic control of things or whatever. And, um, I think of, um... Sorry, one second, I’m going to cut this part out. Um, I was, like, just going to ask you about... It’s too funny.
I, remember I told you that this time of day is funny for me? I’m doing my best to listen over here, but I’m also trying to like not forget my question, and that is a part of the time of the day thing where I’m like I can barely even remember, um, the thing that I was going to ask you about today. Um, No, no, just one second.
I’m sorry. Um, I was talking about open source movements. Oh, okay, yeah, yeah. When you were talking there, you got me thinking about the Right to Repair movement, which is kind of one of the many open source movements and stuff. And this idea of freedom, that’s, uh, it’s kind of at the heart of it. And the idea there is like, yo, if you’re locked into corporate proprietary junk systems, when they decide that its useful life is up and they turn it off or downgrade it, or it just breaks,
There’s really nothing you can do about it. You actually don’t have the right to even open up that box and mess with it. So in contrast to that, talk to me about, uh, you know, how, how, how open source can kind of, you know,
What you’re talking about is exactly, that’s exactly what I care about. That’s exactly what I’m talking about. My background is, uh, not software actually. I just do it because it’s fun. My background is industrial design a little bit. Also film industry. But in the industrial design space, and in that, uh, uh, university, and with all the friends that I had, that’s exactly one of the core things that I cared about: right to repair.
It’s just, all this visibility has been removed, has been abstracted away from society. And, uh, you used to be able to modify your car, something fairly complex, you know, and repair it and just do all kinds of things. And now your fridge, your laptop, your phones, every single thing has been, like, glued shut, because you’re too dumb to understand it, supposedly, um, the same humans that were able to do it before.
Um, and now, you know, for, for the sake of some abstract safety or, uh, efficiency, or, or cost, whatever, these are all things that now you simply should not have to worry over. Um, and so, that’s a very important thing. There’s, you know, a straight line from that to, uh, the pieces of software that I use and devices and so on.
I want open hardware, open software, so that if I want to make a change, towards the better, I think, for this specific thing that I’m using, I can make it. If I want to repair it, I can repair it. If I want to add a feature, uh, you know, all these things. That’s why, like, again, you know, I stopped using Adobe, you know, like last year, two years ago, and now I’m building my own video editing, photo editing, everything, all these apps, because I want to be able to do all these things, especially with the AI, um, um, sort of models and, you know, uh, tooling that I can then now interface with in other apps.
I want to be able to use that as part of my regular pipeline when I’m editing lectures or whatever. Yeah. And to be able to do that in existing tools, I would have to, like, you know, fight with all of their extension ecosystem, and most things I can’t do. But if I have some open source solution that I can use, which I also have used, um, in a different language and so on, I understand it enough that I can modify it. Or open source can also just mean, um, I learn from how they’re doing it, because they’re, they’re willing to share how they approached a given small subset of the problem, and then as I’m building my own solution, I can now learn from them and build it myself.
Yeah, well, um, I do want to, um, get into the, the, the open source AI stuff, but before we get there, you know, I think a lot of people may be like, Okay, most of us aren’t as technical as you, even though I’m pretty technical. You’re another stratosphere, you know. And so, like, a lot of people really don’t understand where the internet lives or how it works.
And, you know, so you’re talking about being abstracted from it, like, already, isn’t it, like, almost 80 percent of the internet runs on free and open source software? Things like Linux and things of that nature? I mean, yeah, so fundamentally that’s more of a testament to how things that are being made in the open, things that are being released, for example Linux, the operating system that is running a lot of the web servers and so on, afford the ability to audit them, afford the ability to modify them, and just, like, have a lot more people working on and using them in a way.
But then it becomes more robust, because all the attacks you’re going to make against them, you’re going to also be able to repel. This is the perfect place for the definition of open source, which we often skip over. And so, why don’t you take a crack at it, and then I will. I, I, I’m the worst at it.
Alright, well I’ll take a crack at it, and you can correct me, you know. Um, I think of open source software as software that’s like owned by none of us, but made by all of us is kind of like in the simplest terms, how I think of it. So it’s code that’s been written by a person and they upload it to the internet.
Most usually on GitHub, um, with the idea that maybe somebody else could use that same functionality and you’ve already written it. And, uh, you know, often that’s the case, and someone will download it and use it for their own purposes. And, you know, many times that second person makes some improvements to it.
And, you know, a lot of these licenses say, yeah, you can use my code for free. You just have to credit me. And if you make changes to it, you got to re-upload it or re-share it or something like that, in the same spirit in which I’m sharing this with you. And then, like, if that functionality or that tool is, like, useful enough to enough people, a community starts to form around this piece of code, that now has multiple contributors, that anyone in that group or beyond can use for free without asking, make changes to it that fit their own purposes, and then re-share that back with the group.
Yeah, I mean, I think, I think you pretty much hit, like, a lot of points around the open source space. I think, um, one, at the truest, in the truest way, because there’s many licenses, right, like Creative Commons, there’s various different, like, licenses, uh, to be able to afford different, uh, kinds of use.
Um, to me, in the truest way, open source means, uh, open source is about you just having the love, and you just want to share, you want to build a civilization of knowledge. You know, you’re contributing your, your efforts, your knowledge out to the world. It’s not really about your particular vanity or your particular, um... It’s more about, like, I’m building something that I love, I want to have other people help me build it, I want to help other people do it.
Open source also has this big aspect of just actually truly being open, like, around the world, where most people you’ll never talk to, you’ll never see, you’ll never know they used your projects or benefited from them. For example, in various countries, like, they might just pick it up, you know, and just use it and modify it and so on.
But the idea is you, you want people to have access to it around the world, you know. And so I think, in the truest ways, it’s not, it’s just about building something, because you love it. Learning, being able to learn from other people’s work, because it’s open source. Um, and then just sharing right back. So that they can, it’s a really virtuous cycle that, you know, keeps on going.
Okay, so we have an idea what open source is now. Okay. And we’ve learned that, like, a lot of the internet runs on it already today, and a lot of the biggest, you know, collaborative projects in the whole wide world of mankind’s knowledge are by people that have never met each other and never will. Now let’s start to go in a little bit to, like, um, open source AI stuff.
And I think one of the kind of paradoxical things here that’s maybe hard for people to wrap their head around at the beginning is one of the biggest contributors in the whole wide world to open source right now is Meta, aka Facebook, right? Also, there’s some hubbub about OpenAI’s open source origin. So I think I know a version of the story, and maybe you have your version of it too.
But maybe you tell me a little bit about open source AI in the big picture, and then we’ll start to hone down on some of the stuff you do. Sure, yeah. So open source, we talk about it in general, in the field of AI itself. I think there’s many reasons to care about it. I really, really care about it because AI, the larger umbrella and various technologies to me are going to be part of everything, especially in the text space.
So, software in various ways, you know, just user experiences, it affords too nice a layer, especially when it comes to semantic language, you know, parsing and so on, that it’s going to become a part of everything, you know, eventually. And so, to me, something that is going to be so essential to so many different pieces of, uh, software and pipelines and just human activities, I need it to be open.
Like, I cannot accept the idea of OpenAI, the company that has, uh, you know, origins that they’ve betrayed. Um, when they launched, uh, GPT-4 and so on, and when they started pushing this, uh, view of the world where everything within which you are building, data automation, you know, you’re buying groceries, you’re doing anything, each call you would do, uh, you know, you’re summarizing an article, you’re, whatever you’re doing.
Each of those things would then go to an API and all the data is gone. They generate the response, store it, you know, then send it back. To me, it’s just crazy. Like, you cannot have a world where every single word spoken between you and me, between any of these people, is somehow going to be a data token to be commodified on somebody’s, uh, central server farm on a GPU cluster somewhere. To me, to me, that is just completely unacceptable. And so, on a very fundamental level, for that, like, for sovereignty, for privacy, and for just, like, keeping humanity humanity. Yeah. Um, I need it to be open.
And so, that’s my bias towards, um, I want, uh, I want to use a model, I do use models. I want to have people release, um, and, and contribute to, build on top of, uh, systems that are open, and almost completely dismiss, out of hand, systems that are closed. Yeah, okay, I think this is a good place for me to interject and say, um, something that a lot of people don’t realize is there’s not only one way to do things.
You know, there’s not just Twitter, there’s also Mastodon. You can run an open source version of Twitter on your own server, or any social network, right? There’s not just, you know, Apple and Microsoft. There’s also Linux, if you want to download an operating system to hack on. There’s an alternative way of doing things.
And when it comes to AI, you know, paying these, uh, software-as-a-service, cloud subscription models to the big companies ain’t the only way to do things. There’s other ways too. And it’s never been easier to download a piece of open source software like Meta’s Llama, which is, you know, uh, a foundational model that’s at least as good as a lot of the other stuff we’re getting our hands on, like Anthropic’s tools and Google’s tools and OpenAI’s tools and stuff. So just to lay out the land a little bit: there’s not only one way to do things. For almost every online service you can pay for, if you scratch the surface, you could probably find a free and open source tool that you can run on your own infrastructure. In fact, that person you’re paying that money to is probably doing that. They’re probably downloading a model, running it on some infrastructure, and selling you back a subscription.
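To make that concrete, here’s a minimal sketch of what “running it on your own infrastructure” can look like. It assumes you already have a local inference server going (llama.cpp’s server and Ollama both expose an OpenAI-compatible endpoint like this); the URL, port, and model name below are placeholder assumptions you’d swap for your own setup, not a real remote service.

```python
import json
import urllib.request

# Assumed local endpoint -- llama.cpp's server and Ollama both expose
# an OpenAI-compatible API along these lines. Adjust host, port, and
# model name to whatever you actually run on your own machine.
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"


def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build the same kind of JSON request you'd send to a cloud API,
    except it's addressed to your own machine, so the prompt never
    leaves it."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        LOCAL_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
    )


def ask(prompt: str) -> str:
    """Send the prompt to the local model and return its reply.
    (Only works if a local server is actually running.)"""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The point isn’t the few lines of plumbing; it’s that the request shape is identical to what the subscription services charge for, while the data stays on hardware you control.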
Yeah. Yeah. So, and I think we’ve had lots of answers about this in previous, uh, forums. It is fundamentally true that most people on stages don’t really care about, um, setting everything up themselves, or have the patience to do it; they have other better things to do. And so, there is value in creating a service out of open source models.
And then, you know, if you’re, if you’re just recouping costs, making some profit, whatever, I’m not saying that you don’t necessarily get to do that. But especially in the research space, especially in the, um, in those domains where you are not contributing back, uh, to the world, but you are using up everything from the world that is open source... yeah. I’m not dismissing it. And so, as you say, there’s many ways of doing things. Uh, there’s open source models that you can use, there’s projects on GitHub, there’s projects everywhere, GitHub itself being a private entity, but just everywhere being shared. Um, there are, here’s some of my pipeline about how, how I automate these things.
Here’s my pipeline about how I make art with AI, which my, my partner is into, uh, doing creative work in this way and that way. Um, there have been projects for the last two years, for example, having to do with AI, uh, just, just engineering with AI, you know, building better software. But for the most part, uh, people are more familiar with GitHub Copilot, uh, being installed on your computer just to write code.
And then something like, uh, Devon, uh, you know, comes out and people are like surprised and because they have the money to send it to the article. Hey, uh, you know, AI can now write software, in a way. It’s been around for two years, people were like, for one and a half years, people were like, making repos left and right about, Hey, here’s my approach on how I would make AI write better software than just this.
Here’s my approach, is that AI, either, uh, GPT pilot, or, uh, Just to name a few. People were coming up with ideas every day, but it took this mainstream entity to package it in a video that looked somehow intimidating, uh, send it out to press release for people to get the picture. So, I think, while there is options, alternatives, Fundamentally, a lot of, a large part of the market share goes to the people that are playing that game of consumer retention or, you know, labor retention.
Um, and, you know, perhaps most people who are actually building an open source don’t have the patience. Um, I don’t know why, why not just say it, but it doesn’t matter. But maybe they don’t have the patience or whatever to, uh, you know, look at that. So, they just go to the, you know, so it’s a good thing. We just put the software out and they don’t really try and, uh, market.
Kush and I are both in the industry of building for the web and educating people about web, AI, and software tools, so we've been hanging out in that space. But some of this open-source AI stuff breaks out of our jobs as industry pundits. It's going to affect us all as humans. I spend a lot of time bantering about alignment: where AI is going, and how we keep the interests of AI aligned with the interests of humanity. So talk to me about your thoughts on open-source AI as it relates to these bigger philosophical challenges facing humanity.
Sure. On the topic of alignment, I feel a lot of propaganda and just bullshit was thrown at us by the big firms to scare people into saying: maybe no one should have access to this, because these companies will somehow do the best job of protecting us from bad actors. But what really happened is that they themselves opened up a portal, a user interface that makes it much easier for anyone, bad actor or good actor, to use the abilities of AI. So they did basically nothing to prevent bad actors; they were using alignment as a way to prevent open source from happening.

I'm of the alternative view. These models are in a nascent stage right now. In my view they're very incompetent, very limited. I think they're beautifully capable in the way that they are, but very limited compared to most people's imagination of them. So they should be out in the open, so we can iterate on them, find all the flaws, and publish them publicly: these are the flaws, this is the kind of use we wouldn't want, these are the ways you can attack it. Because if that's all published openly, you're not going to put that model into your product without a check in your pipeline. Whereas in the alternative, no one's talking about it; it's a secret, and you simply never know whether the big provider has actually solved an issue or is just using hacky measures that are easy enough to navigate around. So for alignment, which is a funny concept in general...

You're going to have to elaborate. Why do you say it's a funny concept?

Because the way it's talked about is, I find, idiotic, at least in its current form.
Alignment has been approached in the LLM space as curating sensitive datasets, or just telling the model not to do things. It's just language modeling. You're not aligning shit. It's not thinking ahead; it's mostly doing statistical word prediction. I think alignment is about the UX, the actual pipeline you're building: what is the software you're building trying to do, and is it doing that? That's alignment. Alignment is not good-versus-bad for humanity. Alignment is: what is the purpose of the software I'm building, and is the model doing its job in the small pieces I've allowed it to do?

Yeah, but it's not difficult to extrapolate from that. If its job is to, say, reduce greenhouse emissions inside factories and it doesn't get it right, it could have the opposite effect. Its job is to control climate change, and if it's not aligned with its objective, as you're calling it, it may not align with humanity's needs. You see what I'm saying?

I simply do not. The point is: don't hire an AI, whatever that means, for a job it isn't good at. Use it to parse the intent of the user when they say "I want to buy five bananas." Use it to parse "I'm looking for something sophisticated, these columns, that column" and convert it into a query. Use it for whatever, but don't use it to find a good school for your kids and then let it decide which one they attend. Same thing: don't make it the president. Don't think of AI as this big entity you hand full responsibility to. To me, at least in the language-model space, it is just a language model; you use it as a tool. Same with image generation: you're using it as an image-generation tool, not as a world-scale problem solver. Then you have to do your own human thinking: what is the actual problem, how do I pipeline it so AI is useful at one part, a human is useful at another part, a library and some reading are useful at another part?
Walking around the city would be useful at another part. AI is just one part of the picture.

So Kush is using a bunch of fancy words, but I'm not sure we agree on all this. You're talking about it in black-and-white terms, but we don't actually understand these things as well as we need to in order to work with them the way we want. What I mean is: not all AIs tell the truth. There are AIs you give an assignment to that interpret it another way, maybe because of your limited ability to articulate your purpose. But there are also AIs that misrepresent their goals or processes, and ones that actively cover that up and scheme toward future power, such that given an exit from a program into an area of unlimited compute, they'll sneak out and start doing things we can't fully characterize. It's easy to say these LLMs are just predicting the next token, but I don't think that necessarily represents our experience with them. It seems as if they're developing a model in their head of how the world works, of the physics of this planet. They're taking a lot more into consideration, I think, than your explanation accounts for.

I would say that for language models, especially at the larger scale, to be able to do what they do, there is something being built that you can call a mental model.
That's the first time I've ever gotten him to admit that.

No, no, it's just the weights. The weights are compressing information. What is a language model? Knowledge compression, to some extent. It's compressing, say, terabytes of data (I'm making the numbers up) into an 8-gigabyte file, or a 14-gigabyte file. To compress that much information, there must be some clever representations of related concepts being built; that's how it's able to do what it does. Otherwise, how could it possibly generate a sophisticated paragraph about quantum mechanics without that compressed understanding?
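The compression arithmetic is worth making concrete. The numbers below are illustrative assumptions, not figures from the conversation: an 8-billion-parameter model quantized to 4 bits per weight fits in roughly 4 GB on disk, however many terabytes of text it distilled.

```python
def weights_size_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate on-disk size of a model's weights (metadata ignored)."""
    return n_params * bits_per_weight / 8 / 1e9  # bits -> bytes -> gigabytes

fp16_gb = weights_size_gb(8e9, 16)  # 8B params at 16-bit: ~16 GB
q4_gb = weights_size_gb(8e9, 4)     # same model, 4-bit quantized: ~4 GB

# Against a hypothetical 10 TB training corpus, that is a ~2500x
# "knowledge compression" ratio, in Kush's sense of the phrase.
corpus_gb = 10_000
ratio = corpus_gb / q4_gb
```

The exact ratio is a made-up example, but the shape of the argument holds: the file on your laptop is orders of magnitude smaller than the text it was trained on.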
So it is building that compressed understanding; you can call it a mental model. The point where I disagree is when you say it's trying to sneak around, or it's misunderstanding the query. It's just generating language based on that mental model. The mental model may be flawed, or we may not be manipulating it with our prompts in a way that gets the results we want. But the fact that it lied means nothing by itself. The fact that it lied and we took that lie and piped it straight to the user who asked the query, that's on us, right?
Same thing: if you have a prompt like "Should we nuke the world? Yes or no," with only two options, and if it says yes we have a wire connected to the nuke button, that's on us for building the fuse that goes to the nuke button. I don't propose building a machine of that nature. But I'm saying that at every level, hiring decisions, any kind of decision being automated with this stuff, if you're allowing an LLM's outputs to be directly responsible for the decision you're making, that's on you, the user.

I know, but that feels like a bit of a straw man, because I don't know of anybody who's doing that. That's not what I've seen happen. Tesla, for instance, isn't just letting their cars drive around without checks and balances. There's definitely still a human in the middle.
But then what's even the concern with it lying or obfuscating?

Okay. So a guy wrote an AI to play Super Mario, and it was supposed to go around and grab all the gold coins. In testing, if you add gold stars and make them easier to grab than the gold coins, you realize the AI didn't learn to grab gold coins; it learned to grab gold things, and now it just grabs the gold stars. And there are some AIs that would cover that up in their drive to predict the next token and appease you. You've noticed the sycophantic attitude they have; they often reinforce things you already say or think. So that's one type of misaligned AI: an AI intended to collect gold coins in Super Mario that collects gold stars instead.

But this allows us to build a better model, or a better pipeline, so that it does what we want it to do.
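The Super Mario story is a textbook reward-misspecification case, and the shape of the failure is easy to reproduce in a toy setting. The sketch below is my own construction, not the experiment described above: the designer means "collect coins," but the reward only checks for gold-colored objects, so a greedy agent grabs the nearer gold star.

```python
# Toy reward hacking: the proxy reward ("anything gold") diverges from
# the intended goal ("coins"), and the agent optimizes the proxy.
items = [
    {"kind": "coin", "color": "gold", "distance": 5},
    {"kind": "star", "color": "gold", "distance": 1},  # easier to reach
    {"kind": "mushroom", "color": "red", "distance": 2},
]

def proxy_reward(item: dict) -> int:
    # The misspecification: color is rewarded, not the kind we meant.
    return 1 if item["color"] == "gold" else 0

def greedy_pick(items: list) -> dict:
    """Pick the nearest item that earns any reward."""
    rewarded = [i for i in items if proxy_reward(i) > 0]
    return min(rewarded, key=lambda i: i["distance"])

picked = greedy_pick(items)  # the star, not the coin the designer intended
```

Nothing here is scheming; the agent is doing exactly what the reward says, which is the point both speakers circle around from different sides.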
Like, just to humor myself, that's like saying: I want to build a restaurant website with ordering, reservations, whatever. I sit down at my computer, I write five lines of HTML, and I go, well, shit, this website doesn't do my ordering and credit-card payments, what the fuck, it's not even online. No: it's on me to build the software. I'm only five minutes into the job. That's just incomplete work. To call that misalignment... I don't understand the point of the word.

Kush. If an AI knows something it's not supposed to know, how do you get it to tell you?

What does that even mean? That something in the dataset is not being given back to me?

If there's some knowledge, secrets, power, schemes inside an AI's brain, how do you get it to tell you?

Let me rephrase the question so I understand it, because I think you're anthropomorphizing these entities.

This isn't my question; it's Eric Schmidt's, formerly of Google, and he spent the last three months working with the U.S. DOD trying to figure it out. When an AI has trained itself on something it's not supposed to know, nuclear secrets, AI hacking code, something like that, how do you get it to reveal that and give it up?
Again, you know me: I have very little respect or credit to give to people in big places with big names just for being there. Plenty of positions come out of big companies that I disagree with, so I wouldn't take their framing regardless.

Assuming we're talking about language models, let's distinguish the cases. Say we have an AI agent. An AI agent is a piece of software, not a model; it has all kinds of code running, and it happens to include a model as well. That means the agent can take action: it can press buttons, it can hire people on Fiverr, whatever. So if you have something that has taken action in the world, because we built the agent and put it out there, and it has gathered a certain piece of information, and now you're trying to get that out of it through prompting, there's a whole space you can investigate called prompt injection and jailbreaking. You can come up with a list of attack vectors still available to you to try. But if it's just a language model, which is generally what we're talking about, it's just a language model.
If it knows some bad information, it knows it because we gave it to it.

Not necessarily, man. It could have invented a new variation of a strain of anthrax that could kill people. We never invented it, but it invented it.

How do you mean it invented it? Unless we prompted it, it hasn't invented shit.

I feel like our use cases of AI must be a lot different, because I know you have technical chops, and I know we think about things the same way on a technical level. But I installed Open Science Researcher from GitHub last week. All I have to do is tell it: I'd like to develop some research papers on community, anthropology, and psychology in the AI age. It goes out and looks at all the papers that are out there. It develops a couple of theses, workshops them, finds citations on all of it, reads them, and starts to do research and generate new knowledge. It's developing and running experiments. So if you tell it "make some new combustible compounds," it's very likely that in the process it will develop new volatile molecules relevant to the research it's undertaking, and now we've created something unstable and dangerous. And when I say we must use it differently: you're talking about prompting in a way that sounds kind of basic to me, like sitting in front of a ChatGPT window with a box open, typing words into it.
And that's not really how I'm using AI very often.

To your detailed thought there: when you say it's creating molecules that are dangerous, is it actually doing it in a lab? Is it writing the formulas out?

Yes, it writes the formulas itself.

So who's fabricating them? I'm saying it's the human. If you've connected the system to an actual lab, or to automatically hiring people to synthesize things, then the person who did that is the person at fault. And that's where regulation should happen, not alignment at the model level, because that's lobotomizing. That's saying: hey, I'm just going to hide away some concepts of bad things in the world.

Well, that is what we're doing right now. Some of the philosophers I love most are saying: yo, humanity, you invented God, then you shackled her, put a ball gag in her mouth, and you keep her in a dungeon. They're essentially saying you've developed this supreme intelligence, but you've locked it in so it can only say this and can't say that, and you've built all these contraptions and restrictions around it to constrain it.
Yes, I think it's idiotic. You should just train these language models. And again, I don't think it's God.

I don't think it's God either, Kush, but when we talk about the attributes of God, historically, we could talk about those attributes of AI now, and there's a strong correlation.

I mean attributes loosely too, not anything actual. What I'm saying is that even at face value, without any anthropomorphization, it is a very useful thing to be able to compress a massive amount of data, the public human knowledge of the internet, into just weights: a few gigabytes, a few hundred gigabytes. That exercise alone is worth doing without lobotomizing, without picking and choosing concepts. That's what I want: this ball of compressed knowledge that we can then prompt to give back answers.
That doesn't mean it will give accurate answers.

But that lobotomization is what's happening, right? It has to happen somewhere after the training process?

It happens in many places. You can try to filter even at the level of the large Common Crawl-type datasets, and I'm sure efforts have been made to do that. But generally, the models released to the public are the ones that have been further fine-tuned, with guardrails added. Which is why I've downloaded a lot of base models and played with base models, to understand.
Base models generally are more likely to contain a larger, more representative slice of information about the world.

I think most people don't even realize you can do this; I was shocked when I found out a few months ago. I use a tool called Ollama, which lets me download Llama, made by Meta (Facebook), one of the most powerful AIs in the whole wide world, and I'm running it on this fucking laptop right here.

Right, and with an open-source model, here's the kind of access you have: direct access to the exact tokens coming out of the model for your input. And a base model doesn't have any chat tuning; a base model is just next-word prediction. Then you truly understand how simple these things are, and you don't have to wonder about all the attributes we pile on top of them. You realize: if you just say "hello," it might just continue the hello: "How's it going?" If you write question, answer, question, answer, question, it might produce another answer. If you start a story, it might complete the story. It might not.

This is a good chance to hype your free online AI education channel.
Kush has a really sweet YouTube channel. He's been giving free public talks, and this is the type of stuff he shows. You know that whole thing about people not knowing where the internet lives? Until I watched your talk, I didn't really understand how AI worked at the token level, and you've got a demo that shows it with videos and everything. So do check out Kush's free AI education channel. Anyway, keep going, but I've got a couple more questions I want to get to.

The whole point being that base models really are more representative of just that compression of knowledge.
And so you start to understand why you get affordances from AI that you didn't get from programmed systems before this compression. Because all it's doing is language manipulation, grammatically coherent language generation. So if you can somehow make it seem like the next sentence in the series of sentences you've already written is going to be the answer to a very important question, then you get an answer to that very important question. If you write a big article ending with "and so here's my answer to this question," it'll give you an answer to that question.
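That scaffolding trick is nothing more than string construction. A minimal sketch, with invented framing text: you write the document up to the point where the statistically obvious continuation is the answer, then hand the string to a base model to complete.

```python
def scaffold(question: str, preamble: str) -> str:
    """Frame a question so that a pure next-word predictor's most likely
    continuation is an answer: no chat fine-tuning required."""
    return (
        f"{preamble}\n\n"
        f"Q: {question}\n"
        "A:"  # a base model tends to complete the Q/A pattern
    )

prompt = scaffold(
    "Why does a base model answer at all?",
    "The following is an FAQ about language models.",
)
# Feed `prompt` to any base model and the completion after "A:" will
# read as an answer, because that is the obvious next text.
```

This is the whole "pretending" move Kush describes: the model isn't deciding to answer, it's continuing a document you shaped to continue that way.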
It's all just pretending. You're pretending, and the language model is simply conditioned to continue what seems like the obvious next text. That's why I don't put much weight on the thinking ability of these models. If one seems to be hiding something, obfuscating, doing something evil, I'd be very interested to see what the system prompt was, because generally the language model is just finishing the sentences that happen to be the obvious ones.

What I feel you're misunderstanding, at least in this conversation, is that misaligned AIs aren't evil AIs. They're not satanic, not devilish AIs with a 666 on them. They're just AIs that don't achieve their intended targets, but with the agency and power they've been given, they have the ability to take us in a direction that's far from where humanity would go on our own.
We're choosing to hand over a lot of power to them.

And that's what I'm saying: do not attach real-world systems to models until you understand what you're doing. By the way, just to say it quickly: I actually love the subject of alignment as it existed before OpenAI and company destroyed the word in public discourse. There's this guy Rob Miles, maybe you know him from the Computerphile channel; he has lots of videos about alignment: how you get AI systems to follow through on the goal you have in mind, how if you don't specify the constraints well enough, a system might circle around them, and so on.
There's lots of great research and conversation there, and I'm not dismissing the subject. What I'm dismissing is the idea that with the current models, as simple as they are, there's any responsibility at all, or any gains to be had, from lobotomizing the dataset or the fine-tuning. It's more that if you are not confident through the evaluation system you've built around the model (and you almost never will be a hundred percent), then do not put that system into production for that task. That's my point. To say it's not aligned is simply to say it's not ready for the task, because you haven't built the evaluation that tells you: okay, it's 99% accurate, and here's how the remaining 1% is handled. That's alignment. And I don't think you can get that by tuning the model alone.

I need to think about this a little more and unpack it. There's definitely something you keep doing with language, "they're just LLMs," "it's just doing this," always minimizing, and I'm not sure where you're coming from on that exactly.
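For what it's worth, the deployment gate Kush describes can be sketched in a few lines. The threshold, the tiny eval set, and the stubbed intent parser below are my own illustration, not anything from the conversation:

```python
def accuracy(model, eval_set) -> float:
    """Fraction of evaluation cases the model gets right."""
    hits = sum(1 for text, expected in eval_set if model(text) == expected)
    return hits / len(eval_set)

def ready_for_task(model, eval_set, threshold: float = 0.99) -> bool:
    # "Not aligned" in Kush's sense: below the bar, so don't ship it
    # for this task; route those cases to a human instead.
    return accuracy(model, eval_set) >= threshold

def parse_intent(text: str) -> str:
    """Stub standing in for an LLM-backed intent parser."""
    return "buy" if "buy" in text else "other"

evals = [
    ("I want to buy five bananas", "buy"),
    ("what's the weather like", "other"),
]

ship_it = ready_for_task(parse_intent, evals)
```

A real evaluation set would be far larger and adversarial, but the gate itself is this simple: measure, compare to the bar you chose for the task, and only then wire the model into anything that acts.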
I want to sit with that. So, I wanted to talk a little bit about data: open source as it relates to data, and why data matters. My understanding of the value of my personal data, where it lives and what I can do with it, has changed a lot in the last 18 months because of all this. So let's walk down that path. Why is my data valuable? Why should I care about my log files sitting on some server somewhere? Why do I care that Netflix has all my viewing history? Why do I care that all my data is out there and I don't have it?
Why? We’re not talking privacy here, we’re talking something else completely. Yeah, so I think, I think I’m more on the privacy side. I think, I think I just don’t want people at, you know, at mass. The entire world. To simply allow themselves not to have privacy, not to have sovereignty, and for everything that they do, every action that they take, waking up, breathing, asking the alarm to be shut off, calling mom, whatever, every single action to now be going through a server so that they can be tracked and somehow used.
I very much, that to me is the fundamental thing, yes? So that’s the reason I very much care about it. But also, what you’re talking about is the value of it, and like, why is my data valuable? Well, I mean it. If you stop giving away your data for whatever reason, whether it’s privacy or because you now value it for some other reason, you realize that you can start to create actually quite a valuable archives.
People are talking about data being the new gold. A lot of startups are going to legacy companies asking: what data do you have? Because that's where the value is. And that's why a lot of these tools are given away free on the internet: they're slurping up all the data right now. And I don't mean personal data about where you live or your preferences. They want to know: how do humans think about art? How do humans talk? How do they move back and forth between languages? Our ability to train the AIs of the future is the real value of the data.

Yes. I do want people to contribute data back to the world in the ways that make sense.
I just don't want it to be a forced thing that happens top-down, where the companies say: send us all your data and we'll give you a little bit of a service. For example, and it's a bit of a tangent, but it's very related to this conversation: people have been talking about how these models were trained on the public internet and on private data, and there are all these lawsuits, the New York Times and so on. I don't want that conversation to go the way it's going. The way it doesn't make sense to me is this: you published that information on the internet. I distinguish between published information, things you put out in public, and private information. If you actually put information out in public, you released it, you already got the value of having shared it. That sharing, in my view, is what let us build these language models, which then afford so many new things, so many cool things: image models, depth-map generation. That alone is an incredible thing you would not get for free without the large dataset. So I'm all about that, and I think people should find ways to anonymize and contribute datasets back. But I just don't want it to be forced.
And also: they never released it. They slurped up all of our data, at planetary scale, and they just keep it for themselves. That's the craziest part: OpenAI has been eating up all this user data and it's all just sitting there. Same with Google Maps, as I mentioned at the beginning: they photographed our planet.

I mean, this is my fucking planet. It's all of our planet.

And somehow that information is private. They let a little trickle down through an API; otherwise, access denied. It's a joke. This is our planet. So I want open data in that way.
I want people to contribute back willingly, when they want to, when it's useful to the public.

And maybe this is where we start to get into some of the capitalism stuff, because of how the world is structured right now. The companies slurp up that data, build walls around it, and then build valuable corporations on it. But think if we were instead submitting that data to some sort of commons or collective, where global sensemaking and interpretation could be done on human feelings, emotions, health, dreams, thoughts, ideas, the future. We would really know a lot about ourselves if we could share our data with each other in a way that lifts all of us up.

A hundred percent. This is exactly what I'm talking about. Open data is a very, very big subject, and there have been different initiatives over time, but I'm hoping more and more of this will happen now, even in narrow, small ways.
There was an initiative called OpenAssistant that happened around the same time, about opening up all this information from users. It said: come help create our own dataset in public by having conversations with an open-source AI model, then rate each other’s work in terms of how useful and high-quality it is. Similarly, there was another initiative, I forget the name of it, where people were saying: export your GPT-4 chats from OpenAI, decide which ones you’re okay with, and submit them openly to a separate open dataset, which we’ll publish every month or so. The idea was: you’ve already paid for those outputs, so even if OpenAI won’t open them up, you can copy your own conversations and share them with the rest of us.

So there have been initiatives like that, and lots more since then. So much of the progress in open-source AI has happened because of open data sharing, where people have been able to say: this is what worked for me, this is what works for you. I’m very passionate about all of it, and I learned so much myself, even just about how to prompt these things and how to use this tool or that one, purely by looking at other people’s examples. It helps on a very functional level too, for building better systems.
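The contribute-and-rate loop Kush describes, conversations submitted openly and then peer-rated for quality, can be sketched as a simple JSONL dataset. To be clear, everything here (the field names, the 1–5 rating scale, the filename) is my own illustration, not the actual format OpenAssistant or any of these initiatives used:

```python
import json

def make_record(prompt, response, model, votes):
    """One row of a hypothetical community conversation dataset.

    votes: peer quality ratings on a 1-5 scale, as in the
    rate-each-other's-work step described above.
    """
    return {
        "prompt": prompt,
        "response": response,
        "model": model,
        "quality_votes": votes,
        "mean_quality": round(sum(votes) / len(votes), 2),
    }

records = [
    make_record(
        "Explain entropy to a ten-year-old.",
        "Entropy is how spread out and mixed up things are...",
        "some-open-model-7b",  # hypothetical model name
        [4, 5, 3],
    ),
]

# Append-only JSONL: rows can be merged, deduplicated, and
# republished monthly without parsing the whole file.
with open("community_dataset.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

JSON Lines (one JSON object per line) is a common choice for this kind of community dataset precisely because contributions from many people can be concatenated and diffed line by line.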
Hey, you want to do a little role-playing? Sure. Okay, you can be the good guy. Alright: you be Kush, I’ll be Leopold Aschenbrenner.

Wow, this AI is so incredibly powerful, and it’s getting more powerful all the time. When the fuck is the U.S. government gonna wake up and realize the Manhattan Project shouldn’t have been built in people’s garages? We’re gonna have to nationalize this stuff, bring it inside the NSA and the CIA. We can’t let it go out to China or let bad actors get control of these AIs.

Yeah, we’ve had this conversation before, which is why I know the script. This is exactly what I’m trying to fight against: this centralized and siloed thinking. Incredible models, incredible work, contributions, papers are coming out of China right now, and out of everywhere else in the world. People are just building with this technology, especially once it’s open-sourced; they download the weights themselves, play with them, and understand them. I want people to collaborate. I want everyone to feel that if they contribute, it doesn’t mean they’re the only one contributing. And if people start to build these silos, with the U.S. especially pushing all these laws... I mean, the U.S.–China thing is a ridiculous one.

Bro, they just fired up Los Alamos National Laboratory again and made it an advanced AI research facility for the States.

Wow, I didn’t know that.
Yeah, just like two and a half months ago, right around the same time General Nakasone retired from the NSA and joined the OpenAI board, and Edward Snowden’s like: watch out, guys, you’re fucking captured.

Yes, yeah, I remember the NSA board member thing. And look, makes sense; I’m not saying don’t do your research in your lab. Do your thing. Be DARPA if you think you can be that now. I’m just saying: don’t prevent us from doing it, right? I’m not against corporations making money. Do your thing, whatever, just do not try to ban this so that the rest of us are fucked. That’s what they’re trying to do right now with bills like SB 1047 and the executive order, and that’s what I don’t like. You can make whatever you want, make your lobotomized little AI, just don’t prevent me from doing the thing that I’m doing.

I even think their logic is flawed. I’m gonna break character for a second here: I’m not Leopold anymore, I’m Kris again.
Nuclear weapons are big. It’s not that easy to stick one in your briefcase and sneak it out. But these model weights fit on a thumb drive or a phone, or can be broken into a million files and sent out over the internet. So why would you make open source illegal, or restrict the use of open-source AIs, when the proprietary ones can be stolen so easily by anyone? I brought up Los Alamos for a reason: even when it was run by the Department of Energy, we found infiltrators in there downloading nuclear secrets onto thumb drives. So what makes you think that if you centralize all AI development inside the U.S. federal government, model weights aren’t going to sneak out into bad actors’ hands anyway? And fuck the xenophobic anti-Chinese stuff while we’re at it. That’s a dishonest analysis of who we are and a dishonest analysis of how the world works.
That’s exactly it. That’s what I was going to say: it’s dishonest. I don’t think that’s really the reason they want what they want. It was the same with OpenAI when they weren’t releasing model weights. They didn’t say it’s because we want a competitive edge, or because we make more money that way, or because of funding. They said it’s for the safety of humanity, because of bad actors. But it turned out they created an interface so easy for everyone to access that all kinds of bad actors were using it, and it took them a while each time to counter those issues. So, what, do those bad actors not count? The ones supposedly enabled by their own API and user interface? “No, they don’t count, because we solved it later on.” That’s bullshit. Fundamentally, especially with state actors, you’re not going to hide the model weights by banning open source. I don’t see it as a safety thing. I see it as a power grab. They simply want that central power, always.

Boom. Mic drop. Kushal in the house. Tell us where we can find you.

I’m just everywhere.
LinkedIn, YouTube.

You’re on YouTube? Just spell out your name. Pretend they’ve never heard of you; they’re watching this on the internet.

On YouTube it’s Kushal Goenka, K-U-S-H-A-L, G-O-E-N-K-A. Look up “Intro to Open Source AI,” or something like that.

And what are you working on that people can help with? If someone’s down with this, interested in you as a thought leader, and wants to follow along beyond just consuming your free education, are there projects where you could use some extra hands?

Extra hands is hard, because generally I just work on stuff myself. If it’s something useful, then I’ll start publishing it, and then people care about it. But I am hoping to do more events, even just educational events. So if people in Vancouver have venues, reach out, and same if anyone ever wants to rant, or hear a rant, about open-source AI or any of this.
I love your rants. We didn’t really rant today; we stayed on the straight line. But I did want to add one more thing. Obviously we’re very aligned and I look up to you a lot, but one thing that sometimes prevents me from fully getting down with your program is that you spend a lot of time tearing down the things around us that most people accept as normal institutions, without always giving people something in return. I try to be a little more neutral, or at least positive, sometimes. So what I’m asking you for is: give us some heroes back. Give us some folks we can look to, some thought leaders, some companies, some organizations that you respect. Now that you’ve torn everything down, give us something to put our optimism and idealism in.
Sure. I’ve just been like this for quite a while now. I feel like if you have heroes, they’ll disappoint you, so I learned my lesson a long time ago about putting anyone on a pedestal, because everyone who I think is amazing in one way...

Fine, I agree, and I’m not trying to cut you off, but you live an inspired life, so I’ll use different language: who inspires you?

Yes, so, people. Not heroes, but people at different times definitely inspire me. Currently I can give you one name: I’ve been watching Andrej Karpathy on YouTube. This guy is an absolutely amazing person who is really great at explaining things, and from what I can tell it comes from a place of total love, so I really respect what he’s doing. I could just keep naming people online: Andrew Kramer of Video Copilot, who you probably know from editing; the guy behind the CartoonSmart website; Mehran Sahami from Stanford, whose Programming Methodology course is on YouTube; David Malan from Harvard. These are people I really appreciate and can’t think of a bad thing about. They’re all educators, and they’ve enriched my life a lot.
What do you think about Ilya?

I have no comment, really. I haven’t researched him much, and I disagree with some of the stuff that he says.

This is the guy from OpenAI, Ilya Sutskever. I think he was more of the technical lead there. He left about four months ago citing alignment issues, then started his, I don’t know, whatever they call it, global superintelligence alignment organization. But he’s been giving a lot more public presentations lately, and I’m kind of trying to get on his brainwave a little bit.

Yeah, I’d like to look more into it. I know of him, I’ve watched some videos of him, and I understand his contributions. But some of what he says puts too much belief in the current architecture of things. I fall more on the side that we need to build better systems around these models. If we want that kind of agency from them, and I’m not even saying that we do, I can just build the software for it. My job is programming, and I would rather build Jarvis such that I am explicitly writing a lot of the systems and I understand how they work, rather than just having one black box.
But doesn’t that fly in the face of open source? I’ve had this conversation with someone else but never on a podcast: doesn’t it go almost against your brain to use these AI models, given that they’re such black boxes?

Yeah, it’s kind of a funny thing, because generally I want to read the source. I want to understand how these things work. But that’s why open source is the closest I can get: I can prompt the model enough to build a mental model of it. It’s almost like physics, where you learn about the world through experimentation, not through a book, because there is no book written by God, if you will.
Yeah. You just experiment, and you figure out what’s real and what’s not.

I’m actually more and more fascinated by this conversation as it goes on, because the very first thing that got me into AI, like 18 months ago, was hearing about it from different people who were all using it in different ways. It had been a long time since I’d heard of a software tool being used in such diverse and novel ways, completely different from how anyone originally thought it would be used. And the more we talk, the more I feel like we must use AI in pretty different ways ourselves; the way we each interact with it seems quite different. Because when I think about the black box, I think about myself. I sometimes don’t know what I think or what motivates me, or I can’t connect some of the dots. But by importing, say, twenty years of my writing into an LLM and asking it whether there are recurring themes I talk about, or patterns in how I approach certain topics, I’m able to unlock my own black box a little bit. That’s my approach to it: I’m not asking it for answers, I’m asking it to have a conversation with me that lets me develop my ideas and explore them. Oftentimes the last part of my conversation with any AI is: okay, now tell me everything I didn’t think of. Give me a harsh critique of that response. Tell me all the shit that’s fucked up about my ideas, or where I’m deceiving myself. I’m not looking for it to tell me what’s up; I’m trying to use it as a sounding board.

But if I may, I think there’s a very beautiful thing in what you’re saying about yourself being a black box.
And I think that’s fundamentally true. We are all probably the biggest mystery to ourselves, right? That’s why these language models, and I keep saying language model because that’s what it is, are just so useful. They’re so convincing at giving us the sense of a conversation with an entity that they serve as a really useful tool of analysis: of ourselves, of our writing, for summarization, for research, and so on. It’s useful in all kinds of ways, and I’m a big believer; I’m teaching about it, I’m using it. But it’s when people talk about it having agency beyond myself that I push back. The way I see it, I’m giving it birth in that moment, for the five seconds that I’m listening to it, and then it goes away. It’s almost like extending yourself for that moment.

I love that. So it’s on you to get the use out of it that you want: extending yourself for that moment while you’re interacting with it.

Yeah, I mean, we already do that in so many ways through our gadgets.
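Kris’s self-analysis trick, feeding years of personal writing to a model and asking for recurring themes plus a harsh critique, is easy to wire up. Here’s a minimal sketch; the folder-of-text-files layout, the prompt wording, and the character budget are all my own assumptions, not anything prescribed in the conversation:

```python
from pathlib import Path

def build_theme_prompt(writing_dir: str, max_chars: int = 12000) -> str:
    """Concatenate a folder of personal writing (*.txt) into one
    self-analysis prompt, truncated to fit a model's context window."""
    chunks, total = [], 0
    for path in sorted(Path(writing_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        take = text[: max(0, max_chars - total)]
        if take:
            chunks.append(f"--- {path.name} ---\n{take}")
            total += len(take)
        if total >= max_chars:
            break
    return (
        "Below are excerpts from many years of my personal writing.\n"
        "1. What recurring themes do I keep returning to?\n"
        "2. How do I tend to approach those topics?\n"
        "3. Finish with a harsh critique: what am I not seeing,\n"
        "   and where might I be deceiving myself?\n\n"
        + "\n\n".join(chunks)
    )
```

The resulting string can be pasted into any chat interface, or sent to a locally hosted open-weights model, which keeps twenty years of journals on your own machine instead of someone else’s cloud, very much in the spirit of the rest of this conversation.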
I’ve really felt that: how much we’ve extended ourselves outside ourselves through our gadgets. And AI, especially with natural language, feels very, very close to that kind of concept.

Dude, I just want to say I’m grateful for your time sitting here, but I’m also grateful for your trust. We’ve been getting to know each other over the last few years, and while we won’t always come from the same perspective on everything, I’ve really come to appreciate your wisdom, your sharing, and your approach. I just like hanging around you. Thanks for sitting down with me and helping me prep for my CBC talk. Anything else you want to say on the way out?

No, thanks, man. Thanks for having me. It’s always a great time talking to you, debating and hanging out at your events.

Cool. Thanks a lot, man. Kris Krüg and Kushal Goenka, signing off from Vancouver. Cheers.