LLMs and Beyond: Reframing the Future of Artificial Intelligence

Let’s be real for a minute: there’s no shortage of opinions floating around about AI, and a lot of them are, frankly, missing the point. It’s easy to get caught up in the hype, the fear, or the doom-laden predictions of a tech apocalypse, but somewhere amid all the noise there’s a conversation we actually need to be having.

That’s where Berit Anderson’s recent take on AI and LLMs comes into play. Sure, she’s got some points worth chewing on, but let’s not pretend it’s the last word on the subject. Her take feels like it’s stuck in second gear, when we need to be revving up to handle what’s really ahead.

So, let’s strip this down, pull apart the ideas, and see where she hits, where she misses, and where we need to be pointing the discussion as we navigate this wild AI frontier.


The AI Haves vs. Have Nots: The New Digital Caste System

First up, let’s give credit where it’s due—Anderson’s got a decent grasp on the whole “AI Haves vs. Have Nots” situation, but it’s like she’s just peeking over the edge of the abyss without really diving in. We’re not just talking about a few tech companies jostling for top dog status; we’re on the verge of creating a new digital caste system. Imagine a world where AI literacy is as essential as knowing how to read or write, and suddenly access to AI becomes the golden ticket to success. If that doesn’t set off alarm bells, you’re not paying attention.

We’ve got to get serious about this—are we laying the groundwork for a future where the AI-savvy elite leave everyone else choking on their digital dust? Because if that’s the path we’re on, it’s not enough to just talk about it—we need to act. We need to be building bridges now, making sure everyone’s got access to the tools and education they need to thrive in an AI-driven world. Otherwise, that gap Anderson mentions? It’s going to become a chasm, and by then, it’ll be too damn late to do anything about it.

The implications are staggering:

  • Are we creating an insurmountable gap between the AI-savvy and the AI-illiterate?
  • How can we ensure universal access to AI education and tools?
  • What are the societal risks if we fail to address this divide?

LLMs: Far from Done, Just Getting Started

Now, let’s talk about Anderson’s claim that LLMs are “done.” I mean, seriously? That’s like saying the internet was done after people figured out how to send emails: shortsighted and completely missing the point. LLMs aren’t just some passing fad that had its moment in the sun; they’re evolving, adapting, and just getting warmed up.

Sure, today’s LLMs aren’t perfect—they make mistakes, remix old ideas, and sometimes spit out nonsense. But here’s the thing: innovation isn’t about getting everything right on the first try. Remember when smartphones were clunky, battery-draining bricks? Fast forward to today, and they’re practically extensions of our very souls. LLMs are on a similar trajectory. The real magic is going to happen when we start combining them with other emerging tech—think blockchain, IoT, quantum computing. It’s in these intersections that LLMs will really start to shine, transforming industries in ways we can’t even fully grasp yet. Declaring them “done” now is like calling the Wright brothers’ first flight the peak of aviation—it’s wildly premature and, frankly, kind of absurd.


The Economic Realities: Time to Get Creative

Anderson’s breakdown of the financial dynamics between AI powerhouses like OpenAI, Anthropic, and their cloud-hosting puppet masters is a decent wake-up call, but it’s also where her analysis starts to feel a bit stale. The centralized, top-down model she describes is exactly what we should be tearing apart, not accepting as the inevitable status quo. The AI economy doesn’t have to be this pyramid scheme where value trickles up to a select few while the rest of us fight over the scraps.

Let’s flip the script. Imagine a decentralized, blockchain-driven AI ecosystem where value isn’t just hoarded at the top but shared across creators, users, and communities. This isn’t some utopian fantasy—it’s a vision of AI that’s actually worth fighting for. We’re talking about turning a system that’s built for exploitation into one that’s designed for empowerment. Anderson’s take? It’s like describing the Titanic’s layout while ignoring the iceberg up ahead. We need to be thinking bigger, bolder, and with a hell of a lot more creativity.

It’s time to envision a new paradigm:

  • Could a decentralized, blockchain-driven AI ecosystem democratize value creation?
  • How might we transition from exploitation to empowerment in the AI economy?
  • What role should regulation play in shaping a fairer AI landscape?
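To make “value shared across creators, users, and communities” more than a slogan, here’s one way the mechanics could look. This is a hypothetical sketch, not Anderson’s proposal and not any real protocol: a revenue pool is split in proportion to whatever contributions a community chooses to track (compute donated, data licensed, feedback given). The function name, contribution units, and rounding scheme are all illustrative assumptions.

```python
from fractions import Fraction

def split_revenue(revenue_cents: int, contributions: dict[str, int]) -> dict[str, int]:
    """Hypothetical sketch: split a revenue pool across contributors in
    proportion to their recorded contributions. Works in integer cents and
    uses largest-remainder rounding so payouts always sum exactly to the pool."""
    total = sum(contributions.values())
    if total == 0:
        raise ValueError("no contributions recorded")
    # Exact proportional share for each contributor, kept as a fraction
    exact = {who: Fraction(revenue_cents * amt, total)
             for who, amt in contributions.items()}
    payouts = {who: int(share) for who, share in exact.items()}  # floor each share
    # Hand the leftover cents to the largest fractional remainders
    leftover = revenue_cents - sum(payouts.values())
    for who in sorted(exact, key=lambda w: exact[w] - payouts[w], reverse=True)[:leftover]:
        payouts[who] += 1
    return payouts
```

The point of the toy isn’t the arithmetic; it’s that distribution rules like this can be made transparent and auditable, which is exactly what a top-down pyramid isn’t.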

Environmental Concerns: The Silent Crisis

Anderson gives a nod to the energy consumption of AI data centers, and while that’s cute, it’s also not nearly enough. We’re on the edge of a full-blown environmental crisis if we don’t start taking this issue seriously. This isn’t just a side note—it’s the next battle we need to be gearing up for. The tech industry has always been about pushing boundaries, and it’s time to push toward sustainability with the same zeal we reserve for innovation.

Think AI-driven smart grids that optimize energy use or quantum computing that slashes the energy footprint of data centers. These aren’t just nice-to-haves—they’re non-negotiables if we want a future where we can still breathe the air and enjoy the planet we’re so eager to tech-ify. Anderson’s take is like acknowledging there’s a leak in the boat but not bothering to bail out the water. We need to lead this charge, not just for the sake of progress, but for the sake of our very survival.

The energy consumption of AI is more than a footnote—it’s an impending crisis that demands immediate action:

  • How can we push for sustainable AI development with the same vigor we apply to innovation?
  • What role can AI play in optimizing energy use and combating climate change?
  • Are quantum computing and other emerging technologies the key to greener AI?
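One concrete, already-practical version of “AI that optimizes energy use” is carbon-aware scheduling: defer a flexible workload (like a training run) to the hours when the grid is cleanest. Here’s a minimal sketch assuming you have an hourly carbon-intensity forecast; the function name and data shape are illustrative, not any particular grid API.

```python
def greenest_window(intensity: list[float], duration: int) -> int:
    """Return the start hour of the contiguous window of `duration` hours
    with the lowest total grid carbon intensity (e.g., gCO2/kWh per hour).
    Scheduling a deferrable job in this window minimizes its emissions."""
    if duration <= 0 or duration > len(intensity):
        raise ValueError("duration must fit within the forecast")
    # Sliding-window sum over the hourly forecast
    window = sum(intensity[:duration])
    best_sum, best_start = window, 0
    for start in range(1, len(intensity) - duration + 1):
        window += intensity[start + duration - 1] - intensity[start - 1]
        if window < best_sum:
            best_sum, best_start = window, start
    return best_start
```

Swap in a real forecast feed and a job queue, and this one idea alone can shave a meaningful slice off a data center’s footprint without touching the hardware.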

The China Factor: Innovation, Not Just Catch-Up

When it comes to China, Anderson gets it half-right. Yes, they’re not just playing catch-up—they’re rewriting the playbook. But here’s where the analysis needs to shift gears. The real question isn’t whether we can outpace China in this AI arms race; it’s whether we can figure out how to turn this competition into collaboration. AI doesn’t have to be a winner-takes-all scenario where we duke it out until only one country is left standing.

Instead, let’s look at how we can foster international cooperation, share breakthroughs, and build an AI landscape that benefits everyone. Anderson’s approach feels like a relic of Cold War thinking, where the only options are win or lose. But we’re living in a new era, where the biggest wins will come from collaboration, not conflict. It’s time to stop thinking in terms of borders and start thinking in terms of a global AI community, where progress is shared and everyone moves forward together.

The narrative of an AI arms race with China is outdated and dangerous. We need to shift the conversation:

  • How can we foster international cooperation in AI development?
  • What models of shared progress and ethical AI governance can we create?
  • How do we balance national interests with the global benefits of AI advancement?

The Human Element: Shaping the Future of Creativity and Cognition

And finally, let’s talk about what really matters—the human element. AI isn’t just reshaping industries; it’s reshaping us—how we think, create, and connect. Anderson touches on this, but it’s more like a passing glance than a deep dive. The truth is, AI isn’t just a tool; it’s becoming an extension of our cognition, a partner in creativity, and that’s where the conversation needs to get serious.

We’re not just bystanders in this evolution; we’re the ones who will define AI’s role in society. As AI becomes an extension of human cognition, we face profound questions about creativity, identity, and the nature of intelligence:

  • How do we ensure AI amplifies rather than replaces human creativity?
  • What does it mean to be human in an age of increasingly intelligent machines?
  • How can we shape AI development to enhance rather than diminish human potential?

We stand at a crossroads in human history, facing a future where the boundaries between human and machine intelligence blur. It’s not enough to be passive observers in this revolution. We must actively shape the role of AI in society, ensuring it serves to elevate humanity rather than subjugate it. The future of AI is, ultimately, the future of us all.

