Deep Dive Into Cybernetic Feedback (AI podcast)

🦋🤖 Robo-Spun by IBF 🦋🤖

>>> 👨‍💻🤖 Cybernetic Feedback 👨‍💻🤖 <<<


The podcast episode “Deep Dive Into Cybernetic Feedback” explores the concept of cybernetic systems and how AI-driven feedback loops influence human behavior. It examines how AI companies collect surplus information to refine algorithms, shaping individual choices and societal trends. The discussion includes perspectives on the risks and opportunities of AI, with a focus on how it can be either a tool for manipulation or empowerment. The conversation touches on major tech figures and their differing approaches to AI’s future.

Generated by Google’s NotebookLM given these links:

1) AI Companies and the Politics of Surplus-Information: A New Logic of Capitalism

2) The AI Policy Rift: Sam Altman, Dario Amodei, and the Spectacle of Elon Musk’s Robot Army

3) Theory of Cybernetic Feedback: Surplus-Value, Surplus-Enjoyment, Surplus-Information, Surplus-Power

4) AI and the Reconfiguration of Collective Will: Overcoming Denial and Updating Humanity

5) The Hyperdigital Nature of Artificial Intelligence and Its Inhuman Dimension

6) The Overlooked Danger of Natural Carbon Sinks: A Legacy of Gaia Hypothesis Ideology Trusting in Natural Intelligence?

This is part of Numerical Discourses


Hey everyone, welcome back! Today we’re going to dive headfirst into AI—artificial intelligence—a topic you can’t seem to get away from these days.

Yeah, exactly, and we’re going way deeper than just the hype.

Absolutely, we’ve got a stack of recent articles from Žižekian Analysis.

Yeah, and don’t worry, we’ll break it all down.

We will, but our goal here is to uncover the real forces that are shaping AI, the motivations of the people who are actually leading the charge, and what it all means for us—you know, everyday people.

What does it mean for us? That’s the question, right?

That’s the question, right, and luckily, I’m joined by our expert here who can, you know, connect those dots. You ready to get this deep dive started?

Yeah, let’s jump in.

Okay, so one of these articles poses this fascinating question, and it is: Why are these AI companies pouring billions and billions of dollars into research, often without a clear path to immediate profit?

Yeah, we get the whole “burn cash now, profit later” startup playbook, but this feels different.

It is different, fundamentally different, right? I mean, most businesses chase profit, right?

Right.

But these AI companies—what this article is saying—is that they’re driven by something called “surplus information.”

Surplus information?

It’s about amassing these colossal data sets, way bigger than anything you would need for a traditional product.

So it’s not just about selling us stuff; it’s about controlling the information itself.

That’s exactly right, and that’s where it gets a little scary for you and me, right?

Right.

Because this surplus information isn’t about selling us more widgets—it’s about understanding our habits, our preferences, even our beliefs, at an incredibly granular level.

That is a little unnerving when you actually think about it. It’s like Google Search, right? We use it every day, it’s free, but it’s constantly learning about you. Each search, each click, feeds this surplus information beast.

And that data then refines the algorithms, making them more “intelligent,” more capable of shaping your online experience, your choices.
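To make that loop concrete, here is a minimal Python sketch of the flywheel being described: interactions get logged, the logs retrain the model, and the sharper model draws out more interactions. Everything here is invented for illustration; no real company’s pipeline is anywhere near this simple.

```python
# Toy sketch of the surplus-information flywheel: interactions are logged,
# the logs sharpen the model, and the sharper model invites more interaction.
# All names and numbers here are invented for illustration.
import random

interaction_log = []     # every query and click is retained as training data
model_knowledge = 0.0    # crude proxy for how well the system "knows" the user

def serve(query):
    # The more the model knows, the more tailored (and engaging) the result.
    return f"result for {query!r}, tuned at {model_knowledge:.0%} precision"

def retrain():
    # More logged behavior -> a sharper model. The log holds far more than
    # answering any single query required: that excess is the "surplus".
    global model_knowledge
    model_knowledge = min(1.0, 0.01 * len(interaction_log))

for day in range(100):
    query = f"query_{day}"
    serve(query)
    clicked = random.random() < 0.5 + model_knowledge / 2  # engagement rises
    interaction_log.append((query, clicked))
    retrain()

print(f"after 100 days the model predicts this user at {model_knowledge:.0%}")
```

The point of the toy is the shape of the curve: the system’s knowledge grows as a by-product of ordinary use, far beyond what any single query required.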

Yeah, your choices—it’s incredible. It’s like they’re playing a whole different game, and we’re just starting to understand the rules.

And the stakes are very high because whoever controls this information—they’re not just controlling a product, they’re potentially controlling the flow of ideas, the narratives we see, the future we build as a species.

Okay, now this is where it gets really interesting, because if we follow this thread of, you know, who’s controlling AI, it leads us to these larger-than-life figures we keep hearing about, right?

Right.

So this next article that we’re going to dig into really gets into these—I mean, some of the biggest names in all of AI, and it feels like, I don’t know, we’re suddenly in a philosophy seminar or something.

Well, it’s not every day you see tech CEOs grappling with questions that have vexed philosophers for centuries, right?

Exactly, exactly. So the article really digs into these three figures: Sam Altman, Dario Amodei, and, of course, the always-entertaining Elon Musk. And they’re each taking these wildly different approaches to AI, almost like they have completely different visions of what AI should be.

Right, and it’s a fascinating dynamic because you have Altman, the architect of OpenAI.

Yeah.

And he’s almost utopian in his belief that AI will be the solution to humanity’s biggest challenges.

He’s really banking on this whole idea of deep learning.

Right, “deep learning worked,” as he famously said. Just feed those algorithms enough data, and they’ll figure everything out.

Exactly. His perspective seems to be, you know, don’t overthink it, just scale it up, give it more power, and watch the magic happen.

Right. And then you have Amodei, who founded Anthropic, and he’s coming from this much more cautious angle.

Yeah, much more cautious.

Much more cautious. Amodei is all about, how do we manage the risks of AI? How do we make sure that its benefits are distributed equitably?

Right. He wrote this essay, “Machines of Loving Grace,” and he’s been really vocal about the potential dangers of unchecked AI development.

Right, the possibility of it exacerbating inequality, the possibility of it being weaponized, even the possibility of it spiraling out of control.

That’s a pretty stark contrast to Altman’s, you know, almost idealistic view. It’s like one is saying, “Hey, AI will save us,” and the other one is saying, “Well, it might just destroy us all if we’re not careful.”

Yeah, and then you have Musk—he always seems to defy categorization.

Always, right, the wild card.

The wild card in the bunch. Remember that Tesla Optimus robot demonstration?

Oh yeah, it was pure spectacle—robots serving drinks, attempting to walk like humans—it felt more like a show than a serious scientific presentation.

Yeah, and don’t forget the, uh, I don’t know if I’d call it a highlight—the awkward rock-paper-scissors game.

Oh yeah, awkward with the robot—it just highlighted this gap between Musk’s grand vision of an AI-powered future and, you know, the reality of actually building these systems, which is often messy and unpredictable.

So we’ve got these three incredibly influential figures, each with a radically different approach to AI, a different vision for the future, and that raises this pretty big question for all of us: What kind of future are we comfortable building with AI?

It’s almost like, you know, these AI leaders are wrestling with fundamental questions about human nature itself, right?

Yeah, like can we even handle this much power?

Yeah, are we going to use this for good, or for, well, you know, less good, let’s just say?

Exactly. And that question of, are we really in control, or are we being controlled, that’s really at the heart of what this next article digs into. It’s this idea of AI reshaping our collective will, how our desires, our choices, might be influenced in ways that we don’t even realize.

Okay, so we’re not just talking about, like, robots taking over the world; we’re talking about something much more subtle.

Much more subtle, much more, I don’t know, insidious, in a way.

Yeah, I mean, look, think about the climate crisis. We know what we need to do, we have the technology, we even have AI-powered solutions, but actually getting people to act collectively, right? To change their behavior on a large scale—that’s where things get really tricky.

It’s like we’re all stuck in this loop where we know something is wrong, but we can’t seem to break free.

And this article calls this “cybernetic feedback.”

Can you break that down—what do they mean by that?

Think of it like this: Cybernetic feedback is basically how systems—whether it’s a society, an economy, or even, you know, our own minds—how they regulate themselves through these loops of information and influence.

So, are you saying that we’re all just stuck in these giant feedback loops? Like, give me a real-world example so I can kind of wrap my head around this.

Okay, so how about this: social media algorithms.

Okay.

Right? They’re designed to learn what content is going to keep you scrolling, what’s going to trigger your emotions, and then they just feed you more of it, whether it’s good for you or not.

And that’s how we end up, you know, doomscrolling for three hours on a Tuesday night when we should be doing something else.

Exactly.

But we just keep going.

Yeah, and it just makes us feel worse and worse, but we just can’t stop.

Exactly, it’s a self-reinforcing cycle.
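For anyone who wants to see that cycle as a mechanism rather than a metaphor, here is a toy Python simulation of a feed that shows more of whatever gets engaged with. The content labels and engagement odds are invented for illustration, not taken from any real platform.

```python
# Toy positive-feedback loop: a feed that shows more of whatever got
# engagement last time. Labels and odds are invented for illustration.
import random

engagement_odds = {"calm_essay": 0.2, "cute_pets": 0.5, "outrage_bait": 0.9}
exposure = {name: 1.0 for name in engagement_odds}  # how often each is shown

for _ in range(1000):
    # Items are shown in proportion to past exposure (the feedback part).
    shown = random.choices(list(exposure), weights=list(exposure.values()))[0]
    # An engaged user teaches the feed to show more of the same.
    if random.random() < engagement_odds[shown]:
        exposure[shown] += 1.0

total = sum(exposure.values())
for name in exposure:
    print(f"{name}: {exposure[name] / total:.0%} of the feed")
# The loop amplifies whatever triggers the quickest reaction,
# regardless of its value to the person scrolling.
```

Run it a few times: the exact shares vary, but the highest-arousal item almost always ends up dominating the feed.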

Right, and this is what the article calls “surplus enjoyment,” right? This quick hit of dopamine that we get from seeing something shocking, something engaging, even if it’s ultimately meaningless.

Yeah, okay, so we’re already kind of trapped in these feedback loops, right?

Yeah.

But how does AI amplify this? Is it really that different from what we’re already experiencing?

I think the difference is the scale and sophistication.

Right.

We’re talking about AI systems that can process information and learn at speeds and levels of complexity that we can barely even comprehend.

Right.

Imagine algorithms that could predict and manipulate your desires with pinpoint accuracy, subtly nudging you toward certain choices, certain products, even certain beliefs.

I mean, it’s a little terrifying.

It’s almost like a choose-your-own-adventure book where the AI is kind of like subtly guiding you to the ending it wants, right?

Yeah, exactly, exactly. And that’s where the real danger lies here, right? If we’re not careful, these AI-powered feedback loops could lock us into these patterns of behavior, of consumption, even of thinking, that ultimately benefit a select few at the expense of everyone else.

So what can we do about it? I mean, are we supposed to, like, swear off technology, go live in a cabin in the woods? What are we supposed to do?

I don’t know if it’s about, you know, rejecting technology altogether, but I think it starts with understanding the forces at play, right?

Right.

Being aware of the potential risks and demanding more transparency and ethical development from the people who are actually building these systems.

So we need to be more aware of these, you know, surplus forces—the information, the enjoyment, the power—and how they’re shaping our world, even, you know, shaping our own minds in a way.

But how do we break free from this cycle? Like, is there even a way to harness these forces for good? Is that even possible?

That’s the million-dollar question.

Yeah.

And that’s where this next article takes a really intriguing turn, because if AI can be used to manipulate and control us, then maybe, just maybe, it can also be used to, I don’t know, empower and liberate us.

Exactly, right? That’s the really intriguing possibility that this article explores. It suggests that these cybernetic feedback mechanisms—the same ones that can create these loops of control—they can also be harnessed for collective good.

Okay, so instead of AI algorithms trapping us in these echo chambers of, like, you know, outrage or feeding our worst impulses, they could actually help us make more informed choices, connect with different perspectives, maybe even, like, tackle those big global challenges together.

Exactly. Imagine AI systems that are actually designed to promote collaboration, to promote transparency, to amplify the voices of marginalized communities, to help us break free from these cycles of short-term gratification, and actually focus on long-term well-being.

I mean, that’s a very different vision of what AI could be.

That sounds amazing, but how do we get there? I mean, right now, so much of AI development just feels, I don’t know, opaque, like it’s being controlled by, you know, a select few behind closed doors.

That’s a valid concern, and it’s why this article really emphasizes the importance of decentralized AI networks, right? Where control is distributed more evenly across society.

Okay.

Right? So imagine a world where AI isn’t just in the hands of a few powerful corporations, but it’s accessible to everyone, giving individuals and communities more agency over their data and their futures.

So instead of us just being these passive consumers of AI, we become active participants in shaping its development and its impact on our lives.

Exactly. It’s about shifting the focus from AI as a tool of control to AI as a tool for empowerment, for collective intelligence.
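The articles don’t specify an architecture for this, but one existing technique in that decentralized spirit is federated learning, where raw data never leaves the participants and only model updates are pooled. Here is a minimal single-parameter sketch, offered as one assumption about what “distributed control” could look like in code:

```python
# Minimal federated-averaging sketch: each community fits a tiny model
# y = w * x on its own data and shares only the updated weight, never
# the data itself. Purely illustrative; real systems add secure
# aggregation, client sampling, and much bigger models.

def local_update(global_w, local_data, lr=0.1, steps=10):
    """Gradient descent on squared error, run entirely on local data."""
    w = global_w
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
        w -= lr * grad
    return w

# Three communities, each keeping its raw data to itself.
community_data = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.5, 3.2), (3.0, 5.8)],
    [(0.5, 1.1), (2.5, 5.2)],
]

global_w = 0.0
for _ in range(5):  # five federation rounds
    # Each community computes an update locally...
    updates = [local_update(global_w, data) for data in community_data]
    # ...and only the averaged weight, not the data, is shared.
    global_w = sum(updates) / len(updates)

print(f"shared model after 5 rounds: y = {global_w:.2f} * x")
```

The design point is the direction of flow: the model travels to the data, instead of the data accumulating as someone else’s surplus.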

Which brings up this, I think, even bigger question that this article touches on, which is our reliance on, you know, so-called natural intelligence. Because if we struggle to understand even the natural systems we depend on, right? How can we expect to manage something as complex as AI?

Yeah, and this article uses this really fascinating example to illustrate this point.

Okay.

It’s the failing natural carbon sinks. You see, for years, climate models have relied on forests, oceans, and soil to absorb a significant portion of our carbon emissions.

Right.

We just kind of assumed that nature would naturally regulate itself, that the planet had its own intelligence to maintain balance, like, “Hey, nature’s got this,” right?

Exactly, we just kind of outsourced the problem, right? We’re like, “Nature, you take care of it. We’ve got other stuff to do.”

Exactly. And now, now we’re finding out that those natural systems aren’t as resilient as we thought they were, and that our trust in “natural intelligence” might have been misplaced.

It’s a pretty major wake-up call, like we can’t just rely on these, like, you know, natural solutions—we actually have to take responsibility for the impact we’re having on the planet.

Exactly, exactly, and this ties back to AI in a really powerful way because if even our understanding of something as fundamental as nature’s ability to self-regulate, right, if even that is flawed, then we need to approach AI development with even more humility, with a deep awareness of our own limitations.
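To put rough numbers on why that sink assumption matters, here is a back-of-envelope Python sketch. The constants (about 2.13 GtC per ppm of atmospheric CO2, roughly 10 GtC of emissions per year, sinks historically absorbing about half) are standard ballpark figures; the weakened-sink scenario is purely illustrative.

```python
# Back-of-envelope sketch of why the assumed sink fraction matters so much.
# Rough, illustrative numbers only; this is not a climate model.

GTC_PER_PPM = 2.13           # gigatonnes of carbon per ppm of atmospheric CO2
EMISSIONS_GTC_PER_YEAR = 10  # roughly current fossil + land-use emissions

def ppm_rise(years, sink_fraction):
    """Atmospheric CO2 rise if sinks absorb a fixed share of emissions."""
    retained = EMISSIONS_GTC_PER_YEAR * (1 - sink_fraction) * years
    return retained / GTC_PER_PPM

# Models have long assumed sinks take up about half of what we emit.
print(f"30-year rise, sinks at 50%: {ppm_rise(30, 0.50):.0f} ppm")
# If sinks weaken to a third, the same emissions hit much harder.
print(f"30-year rise, sinks at 33%: {ppm_rise(30, 0.33):.0f} ppm")
```

Nothing about this is a climate model; it just shows how much harder the same emissions land once the assumed absorption drops.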

So, be cautious, but not afraid. Be aware of the risks, but also open to the possibilities.

I think that’s a good way to put it. Yeah, because ultimately, I mean, this is what this article is saying, right? If AI is a reflection of ourselves—our ingenuity, our flaws, our potential—then the kind of future we create with AI is entirely up to us. We actually get to decide.

That’s a really powerful message to end on, right?

Yeah, it’s not about surrendering to some predetermined technological future—it’s about actively shaping that future, asking the hard questions, making conscious choices that align with the kind of world we want to live in.

Wow, this deep dive has really given me a lot to think about. I feel like I need to go for a walk and maybe have a strong cup of coffee to process all this.

Yeah, you and me both. But if this conversation has sparked your curiosity, that’s a good thing, right? Keep asking those big questions, keep exploring these ideas, and most importantly, keep the conversation going.

Exactly. Yeah, keep talking about this. And on that note, we will leave you with this final thought: If AI is indeed a reflection of ourselves, what does that say about the future we’re creating?

Until next time, keep those minds open, and as always, happy diving!
