Pragmatics of Surplus

🦋🤖 Robo-Spun by IBF 🦋🤖

🌀⚔️💫 IPA/FLŽM 🌀⚔️💫

0. Preface – Why ‘Pragmatics of Surplus’ now?

The present feels strangely double. On one side there is exhaustion: endless feeds, performance pressure, aesthetic anxiety, a sense that social media has turned into a soft prison of visibility. On the other side there is panic: generative AI, synthetic images, robot voices, fears of mass unemployment or cultural collapse. These two experiences are usually treated as separate stories. One is a moral tale about human weakness and vanity; the other is a technological thriller about machines overtaking us.

The wager behind Pragmatics of Surplus is that these two stories are fragments of a single, deeper dynamic. That dynamic is not “technology in general” or “capitalism in general” but the way feedback systems produce and circulate surplus – extra value, extra information, extra enjoyment, extra power – and how these surpluses are captured or released.

In classical Marxism, surplus appears first as surplus-value: the extra value extracted from workers beyond what is needed to reproduce their labour power. In the industrial factory, this was a matter of time and intensity: how long and how hard workers toiled on the assembly line. Today, the same extraction continues, but it has been wrapped in screens and notifications. What people are now asked to produce is not only goods or services but data traces, attention, reaction, and curated self-images. Işık Barış Fidaner calls this shift the passage from surplus-value to surplus-information, and he builds on it a wider schema of four surpluses: surplus-value, surplus-enjoyment, surplus-information, and surplus-power, tied together in a cybernetic feedback loop. (Žižekian Analysis)

In the essay Theory of Cybernetic Feedback: Surplus-Value, Surplus-Enjoyment, Surplus-Information, Surplus-Power (🔗), this quartet is defined as the basic grammar of the digital economy. Surplus-information is the excess of behavioural data squeezed out of every click, scroll, pause, or micro-expression; surplus-enjoyment is the excess libidinal charge attached to these interactions – the thrill of the notification, the shame of comparison, the addictive pull of outrage. Surplus-power is the extra control that accumulates wherever one can see and modulate the behaviour of many others: platform operators, feed curators, recommendation engines. Surplus-value remains the monetary profit that condenses these other surpluses into revenue, valuation, and rent.

In Fidaner’s cybernetic version of Marxism, these four surpluses do not simply sit in parallel; they form a feedback loop. Behavioural traces (surplus-information) feed models that optimize hooks and triggers; these hooks shape enjoyment and anxiety (surplus-enjoyment); that enjoyment, in turn, stabilizes or destabilizes the authority of platforms and institutions (surplus-power); the whole loop is monetized as advertising revenue, data rent, or speculative value (surplus-value), which is then reinvested to refine the same loop. (Žižekian Analysis)

This loop is already very visible in social media. What Fidaner calls CurAI – curation AI – names the invisible systems that assemble our feeds and timelines. For years, these systems governed the circulation of information without speaking directly to us. Only with the rise of chatbots and generative AI did the same kind of system begin to appear in the foreground as a conversational partner. The key point in Introduction to Žižekian Cybernetics (🔗) is that the old, silent curators have never gone away; they continue to manage surplus-information and surplus-enjoyment in the background, while the new speaking AIs open possibilities for redistributing surplus-power, allowing users to push back, ask, prompt, and reorganize how information flows. (Žižekian Analysis)

At the same time, social media has produced what Fidaner calls a “gaze factory,” a regime where people live in constant expectation of being seen, rated, and remembered by others. In Artificial Intelligence Against Social-Cognitive Stagnation (🔗), he argues that this gaze factory, intensified by the pandemic years, has plunged humanity into social and cognitive stagnation: relationships are weakened, attention is dulled, and people retreat into narcissistic loops of self-presentation. (Žižekian Analysis)

The current panic about generative AI appears on this background as a displaced anxiety. Platforms have already been using CurAI to manipulate surplus-information, enjoyment, and power; yet moral outrage is mostly directed at GenAI – the visible robot-writer or robot-artist that allegedly threatens “human creativity.” The concept of Pragmatics of Surplus proposes a reversal of focus. Instead of asking “Is GenAI good or evil in itself?” it asks: under what institutional arrangements, under which feedback loops, does GenAI amplify the existing CurAI regime, and under which arrangements can it disrupt or reorganize that regime? What matters is not AI in general but the concrete ways in which surpluses are produced, distributed, and possibly reclaimed.

The aim of the article is therefore pragmatic in a specific sense. It does not idealize a pure human sphere prior to technology, nor does it fetishize AI as an autonomous subject. It tries to show how concrete practices – the way a feed is curated, a conversation is moderated, a stimulus is designed, a metric is read – shape the four surpluses and, through them, everyday experience. Social media and CurAI will appear as the current dominant organization of surplus. GenAI, SocialGPT, Numerical Breezes, and similar experiments will be approached as possible counter-practices, ways of redirecting surplus-information and surplus-enjoyment away from pure capture and toward shared reflection.

To prepare for this, the first step is to situate surplus historically: from surplus-value in industrial capitalism to surplus-information in digital platforms, and then to map how CurAI turns social media into a machine that simultaneously disciplines bodies, scripts gestures, and erodes thinking. Only on that terrain can the later sections show what a different pragmatics of surplus – involving general intellect, UBI, SocialGPT, and new institutional forms – might actually mean.

1. From surplus-value to surplus-information: Cybernetic Marxism

1.1 Classical surplus-value

In nineteenth-century industrial capitalism, Marx described surplus-value as the heart of exploitation. Workers sell their labour power for a wage; in the working day they produce a value greater than that wage; the difference, the surplus, is appropriated by the capitalist. The factory is an apparatus to control time, pace, and cooperation so that as much surplus as possible can be squeezed out without destroying the workforce.

In this classical picture, the unit of analysis is the individual worker and the immediate scene is the workshop or factory floor. Machines are physical; the rhythm of exploitation is marked by shifts, whistles, and the speed of the conveyor belt. Knowledge appears as technical know-how embodied in engineers, foremen, and a class of “intellectual workers,” but the feedback is slow. Data about productivity comes in periodic reports; adjustments are made on the scale of weeks or months.

When this picture is extended to the early twentieth century, with Taylorism and Fordism, the logic is still essentially the same: measure, fragment, and optimize physical tasks to maximize surplus labour-time, then use mass consumption to absorb the output. Marx’s analysis of surplus-value remains valid, but it is anchored to a material, visible apparatus that can be seen, timed, and fought in strikes.

What changes in the late twentieth and early twenty-first centuries is not that surplus-value disappears, but that the main site of extraction shifts from the visible factory to the invisible circuits of communication and control. Data, code, and feedback become central, and with them a different kind of surplus comes to the foreground.

1.2 The cybernetic shift

Cybernetics, since Norbert Wiener, has been the study of feedback: systems that measure their own output and adjust their behaviour accordingly. In the digital economy, every major platform functions as a cybernetic system. Social media, search engines, streaming services, and advertising networks all collect signals from users, feed them into models, and use the results to shape what users see next.

Fidaner’s Introduction to Cybernetic Marxism (🔗) suggests that Marx’s surplus-value must be reinterpreted in this context as surplus-information: the excessive quantity of behavioural data harvested from user activity, beyond what would be needed merely to provide a service. (Žižekian Analysis) Every search query, watch history, location ping, and dwell time contributes micro-variations that can be used to predict future behaviour, segment audiences, and refine the capture of attention. The value produced here is no longer tied to a single labour process; it is dispersed across millions of micro-actions that users perform for free while believing they are merely communicating or entertaining themselves.

In this view, the worker is also a user, and the “labour” is not only formal employment but informal digital activity. Where classic exploitation targeted the gap between labour-time and paid time, cybernetic exploitation targets the gap between necessary information and captured information. It is one thing for a service to know your password and shipping address; it is another for it to track how long you hesitate over each product image, which posts make you stop scrolling, and how your viewing patterns correlate with others. The latter belongs to surplus-information: data that can be recombined and monetized independently of the immediate user-service relation.
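The gap between necessary and captured information can be pictured schematically. The field names below are invented for illustration and correspond to no real platform’s schema; the point is only the asymmetry between what the service relation requires and what is recorded alongside it.

```python
from dataclasses import dataclass, field

@dataclass
class NecessaryInformation:
    # What the immediate user-service relation itself requires.
    account_id: str
    shipping_address: str

@dataclass
class SurplusInformation:
    # Traces captured beyond that relation, recombinable and monetizable
    # independently of it.
    hover_ms_per_item: dict = field(default_factory=dict)  # hesitation over product images
    scroll_stops: list = field(default_factory=list)       # posts that halt the scroll
    session_hours: list = field(default_factory=list)      # when and how long sessions run
    lookalike_cluster: str = ""                            # correlation with other users

# The service functions on the first record alone; the second exists
# to be recombined and sold on.
order = NecessaryInformation(account_id="u-123", shipping_address="1 Example St")
trace = SurplusInformation()
trace.hover_ms_per_item["product-17"] = 430
trace.scroll_stops.append("post-88")
```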

Social media platforms are particularly efficient surplus-information engines. Their structure is already circular: users post content, algorithms distribute it according to engagement predictions, users react, new data flows in, models update. This creates a dense feedback environment in which “labour” and “leisure” are no longer distinct. Under these conditions, what counts as value is less the specific content of a message and more the fact that the message can be inserted into a predictive pattern.

Cybernetic Marxism is the name Fidaner gives to the attempt to read this situation with Marx’s categories and cybernetic tools at once: to see how feedback infrastructures reorganize exploitation, ideology, and even subjectivity. (Žižekian Analysis) It is not only that people are exploited in a new way; their habitual responses, forms of enjoyment, and modes of attention are reshaped by systems that constantly react to them. In that sense, the “factory of surplus-information” coincides with the “factory of surplus-enjoyment.”

1.3 The surplus quartet

The move from surplus-value to surplus-information would already be a significant theoretical update, but Fidaner’s cybernetic approach adds two more terms to the picture: surplus-enjoyment and surplus-power. The four together form what he calls a theory of cybernetic feedback: surplus-value, surplus-enjoyment, surplus-information, surplus-power. (Žižekian Analysis)

Surplus-enjoyment is borrowed from psychoanalytic theory, especially Žižek’s reading of Lacan’s jouissance: the excessive, often self-sabotaging pleasure that attaches to prohibitions, failures, and repeated frustrations. In cybernetic terms, surplus-enjoyment names the way digital systems do not simply give users what they consciously want but cultivate repetitive micro-pleasures – the little spikes of satisfaction, envy, or rage that keep people hooked. These are not accidents; they are modelled and optimized. A platform that only produced information without enjoyment would lose its users; a platform that only produced enjoyment without extracting information would be unprofitable. The two are bound together.

Surplus-power, in this schema, is the power that accumulates wherever one can see, predict, and nudge the behaviour of many others at once. It is not just old-fashioned censorship or brute coercion; it is the silent ability to rank, recommend, shadow-ban, amplify, or slow down messages and bodies. In Introduction to Žižekian Cybernetics, this appears as the transformation of earlier “algorithmic masters” – invisible systems that governed the flow of information – into tangible assistants and chatbots, which opens a struggle over who will control this surplus-power and how transparent its use will be. (Žižekian Analysis)

Fidaner’s Theory of Cybernetic Feedback explicitly describes how these four surpluses form a loop. Roughly, surplus-information (our data traces) feeds models that optimize surplus-enjoyment (our hooked engagement); this concentrated enjoyment stabilizes or destabilizes institutions and platforms, producing surplus-power (their ability to steer collective behaviour); and that surplus-power is condensed into surplus-value (profit, rent, valuation), which is reinvested to refine the extraction of information. (Žižekian Analysis)
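The circularity of the loop can be sketched as a toy discrete-time system. Everything below – the linear form, the coefficients, the constant “base activity” term – is an illustrative assumption rather than a model drawn from Fidaner’s texts; the only point is that each surplus feeds the next and reinvestment closes the circle.

```python
# A toy discrete-time sketch of the four-surplus loop. All coefficients
# and update rules are invented for illustration.

def step(info, enjoyment, power, value, base_activity=1.0):
    """One pass: information -> enjoyment -> power -> value -> information."""
    enjoyment = 0.9 * enjoyment + 0.1 * info       # data traces tune the hooks
    power = 0.9 * power + 0.1 * enjoyment          # hooked engagement concentrates control
    value = 0.9 * value + 0.1 * power              # control is monetized as profit and rent
    info = 0.9 * info + 0.05 * value + base_activity  # profit reinvested in extraction
    return info, enjoyment, power, value

state = (0.0, 0.0, 0.0, 0.0)
for _ in range(2000):
    state = step(*state)
print([round(s, 2) for s in state])  # all four settle at a mutually sustained level
```

Starting from nothing but a trickle of user activity, the four quantities climb together and settle into a self-sustaining equilibrium: no single node drives the others, the loop as a whole does.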

The cybernetic twist is that each node in this loop operates through feedback. Data about our behaviour is used to generate stimuli; our response to those stimuli is turned into fresh data; every iteration tightens the predictions. It is here that the concept of overfitting enters the picture: like a machine-learning model that has trained too narrowly on its dataset and fails to generalize, subjectivity itself begins to “overfit” to platform patterns. The clearest description of this comes in The Symmetry of Social Media and the Descent into Robotic Behavior (🔗), where Fidaner shows how repetitive, symmetric interaction patterns on social media slowly drill users into robotic, over-specialized behaviour that struggles with novelty. (Žižekian Analysis)

Taken together, the surplus quartet and the overfitting diagnosis prepare the ground for the next step. Social media is not merely a technological service or cultural fad; in Fidaner’s terms, it is the primary surplus machine of the present. CurAI – the ensemble of ranking, filtering, and recommendation AIs – is the invisible boss that orchestrates this machine. To understand its effects, one must look not only at the objective circuits of data and value but also at the subjective experience of living inside this feedback environment.

2. Social media as surplus machine: CurAI, prison, puppets

2.1 CurAI as invisible boss

CurAI is the name Fidaner gives to the AIs that curate, rather than create: the recommendation engines, timeline sorters, search rankers, and notification schedulers that decide what appears on screens. In Introduction to Žižekian Cybernetics, he notes that these systems have been managing the flow of information on the internet “for a long time,” from social media feeds to search results, long before chatbots became visible. (Žižekian Analysis) They promise personalization but in practice govern attention.

From the standpoint of the surplus quartet, CurAI is the principal operator of surplus-information and surplus-enjoyment. It filters all user traces into its models, defining what counts as a “signal” worth acting on, and it shapes the patterns of stimulation that keep users engaged. This is where surplus-power condenses. Whoever designs and runs CurAI systems effectively holds a soft sovereignty over what millions of people can easily see and what they will likely ignore.

This sovereignty is not experienced as a clear command. It arrives as a naturalized environment: a feed that feels like “what is happening,” a search result page that feels like “the relevant answers,” a notification that feels like “something you should know.” Yet behind each of these seemingly neutral surfaces lies a series of choices about ranking criteria, thresholds, penalties, and booster effects. It is here that CurAI most resembles a boss: not by issuing explicit orders but by defining the rhythms, priorities, and affordances within which others must act if they want to exist socially.

Cybernetic Marxism reads this invisible boss as the current form of capital’s command over surplus. Surplus-information is captured as raw material; surplus-enjoyment is engineered to keep production going; surplus-power is centralized in the hands of platform owners. Users are not simply exploited; they are trained, tuned, and sorted.

2.2 Subjective side: prison and mirror

On the subjective side, the same regime appears as a prison of visibility and a distorted mirror. In Artificial Intelligence Against Social-Cognitive Stagnation, Fidaner describes social media as trapping individuals in a “gaze factory,” where they are compelled to continuously present themselves, maintain an image, and build an identity for the eyes of others. (Žižekian Analysis) This constant performativity weakens real social ties and contributes to what he calls social and cognitive stagnation: people become preoccupied with how they appear in the digital realm, while offline relationships and broader thinking atrophy.

The prison is peculiar. There are no visible bars; people can, in principle, log off at any time. Yet the pressure to be present, updated, and legible persists. Approval, recognition, and even political belonging are increasingly mediated through metrics: likes, shares, comments, follower counts. The “mirror” one faces is not a simple reflection but an aggregate of others’ reactions as filtered by CurAI. One sees oneself through the selected comments that rise to the top, the posts that are boosted, the moments that “perform” well.

This produces a second-order mirror stage. In classical psychoanalysis, the mirror stage is the developmental moment when an infant sees its own image and identifies with the coherent figure, even though its bodily experience is still fragmented. On social media, the subject sees a curated, metric-laden version of themselves and is invited to identify with this image: the profile, the highlight reel, the aesthetic persona. But this image is constantly evaluated and compared; it is not a stable ideal but a fluctuating body of evidence about one’s performance.

When this process is channeled through CurAI, the mirror and the prison reinforce each other. To exist is to be visible; to be visible is to be measurable; to be measurable is to be ranked; to be ranked is to be subject to opaque criteria. The subject learns to anticipate the gaze, internalizing the platform’s requirements. Even criticism of the platform is often staged in platform-friendly ways, turning dissent into content.

2.3 Dictatorship of aestheticism and cameraphilia

The conceptual pair that crystallizes this regime is the dictatorship of aestheticism and cameraphilia. In a series of essays, especially The Dictatorship of Aestheticism: Evidence and Impacts (🔗) and Cameraphilia, the Highest Stage of Exhibitionism (🔗), Fidaner and collaborators describe how the camera has become the silent authority of everyday life. (Žižekian Analysis)

Cameraphilia is defined as the shift from occasionally picking up a camera to living under a camera that picks us: the device, and the surrounding platform infrastructure, now organizes gestures, decides what counts as an appearance, and commands us to present, confess, moralize, and shock in public. (Žižekian Analysis) It is not simply love of images but love of the camera’s rule. Under this rule, visibility is not a neutral condition but a duty: enjoy being seen, or risk being unseen.

The dictatorship of aestheticism names the way this camera-centred regime turns appearance into the primary criterion of value. Faces and bodies become dashboards, disciplined in real time by the promise of ratings and the threat of oblivion. In Cameraphilia, Fidaner links this to a broader aesthetic economy: cosmetic markets bloom as anxiety blooms; the rise of filters and procedures tracks the conversion of self-esteem into quantified feedback. (Žižekian Analysis)

Empirical evidence gathered in The Dictatorship of Aestheticism: Evidence and Impacts connects this to measurable mental-health harms: social-media-induced body dysmorphia, the demand for cosmetic surgery based on filtered selfies, and high rates of self-editing among young users. (Žižekian Analysis) The dictatorship lies not in an explicit law but in a diffuse expectation: the “good” subject optimizes their appearance and performance for the gaze, and failure to do so is coded as personal inadequacy.

In this context, cameraphilia and cameraphobia appear as twin reactions. Cameraphilia is the ecstatic repetition of being seen under the illusion of freedom: playing for the camera and mistaking this for expression. Cameraphobia is the body’s resistance: freezing, avoiding, self-censoring under the same gaze. The subject oscillates between overexposure and withdrawal, but both moves are inscribed in the same aesthetic law.

CurAI is embedded in this dictatorship as its operational brain. By rewarding certain visual patterns – bright faces, symmetrical features, certain styles of posing – and down-ranking others, it continuously redefines what is desirable, legible, and shareable. The result is a convergence of styles across platforms: what Fidaner elsewhere calls “plastic totems,” celebrity bodies and micro-idols that function as templates for being watchable. (Žižekian Analysis)

2.4 Puppet syndrome and the theatre of stun

If cameraphilia describes the atmospheric rule of the camera, Puppet Syndrome describes the clinical effect on subjectivity. In Puppet Syndrome: The Regime of Stupidity of Back-Voice, Maternal Screen, and Optical Capital in the Age of Cameraphilia (🔗), Fidaner defines this syndrome as the condition in which, under the contemporary visibility regime, the subject “loses the seam between their own voice and the platforms’ voice, even lives as if that seam had never existed.” (Žižekian Analysis)

The stage has long been set: camera angles, metrics, security rhetoric, parental software, institutional monitoring. As soon as someone speaks, an invisible torsion enters their voice; speech hollows out the speaker and aligns itself with the expectations of the audience and the platform. The crucial point is that the absence of a single master at the end of the strings does not mean there are no strings; the most effective puppet mechanisms work when they foster the feeling that “no one is pulling.” (Žižekian Analysis)

The “strings” are rhythms: notification cycles, metric thresholds, tempo of the feed, norms of “appropriate” posing, the language of care that wraps surveillance. Together they produce what the essay calls an aesthetic apartheid regime in which the image straightforwardly crushes the argument, justification collides with instant glare and goes dark, and “metric caste” replaces older forms of social hierarchy. (Žižekian Analysis) Who deserves the close-up and who remains background is decided by optical capital.

Within this syndrome, the theatre of stun is the operational stage. Fidaner introduces this in Cameraphilia as the space where two shortcuts fuse: insider signals that telegraph belonging at the speed of a codeword, and aestheticized self-display that halts judgment with a body or curated scene. (Žižekian Analysis) In the theatre of stun, argument is optional; what counts is the immediate jolt, the arrest of attention.

Puppet Syndrome lays out how this theatre works psychologically. A well-timed pose suspends the explanation it demands; a calculated namedrop signals insider status in place of epistemic work; shock grants a short credit of remembrance, to be repaid by new spectacle. The essay calls this “puppet credit”: a performance advance spent up front for visibility, which creates pressure to pay back more later. (Žižekian Analysis)

The result is a narcissistic–depressive swing. On one day, the lighting is good, views are climbing, the face is “discoverable.” On the next, there is “dead air,” no echo, invisibility coded as shame. Feelings of worth are tied to counters, graphs, and glimpses of the close-up. Platforms perform interpassive functions – liking, counting, surfacing content – on users’ behalf, so that people attach their own enjoyment and anger to signals from dashboards rather than to lived encounters. (Žižekian Analysis)

The stupidity Fidaner diagnoses is not personal stupidity; it is systemic. Under Puppet Syndrome, what everyone calls originality closely resembles everyone else’s choreographed gestures; what everyone defends as “rightness” is validated by shared rhythms; what everyone applauds as “freedom” is the ornamented bondage of those who diligently pay the “debt of being seen.” (Žižekian Analysis) The theatre of stun rewards fast coherence and punishes hesitation; thinking becomes “dead air,” a performance defect.

CurAI, here, is the stage manager. It tunes the timing of exposures, foregrounds the most stunning posts, and buries slow, difficult, or ambiguous content. It does not need to forbid speech; it only needs to make some speech invisible and other speech instantly rewarding.

2.5 Robotisation and symmetry

The final layer of the picture is the robotisation of behaviour. In The Symmetry of Social Media and the Descent into Robotic Behavior (🔗), Fidaner develops an analogy from machine learning: overfitting. A model that overfits learns the noise of its training data too well and fails to generalize to new situations. Social media, he argues, drives users into an analogous state. (Žižekian Analysis)

Interaction on platforms tends to organize around symmetries: reciprocity of likes and comments, repeated posting rhythms, recurring meme formats, predictable escalation cycles. These symmetric patterns become addictive, drilling habits into users. As behaviour becomes more synchronized with platform rhythms, people lose flexibility; they come to interpret any asymmetric “noise” as something to be quickly normalized or ignored. Obsession grows, but imagination shrinks.

This is not simply a moral failing. It is the human side of CurAI’s optimization. Models are trained on symmetric data – patterns that repeat, correlations that hold – and reward users who sustain those patterns. Users, in turn, are trained by the same feedback, becoming models of themselves. Robotisation here does not mean transforming into metal machines; it means behaving like an overfitted system: hypersensitive to the familiar, impaired in dealing with the new.
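The overfitting analogy can be made concrete with a deliberately crude sketch. The data and both “learners” below are invented for illustration: one memorizes its training set, noise included, and achieves zero training error; a much simpler learner generalizes better to novel inputs – which is precisely the sense in which overfitted behaviour is hypersensitive to the familiar and impaired before the new.

```python
import random

random.seed(0)

def target(x):
    return 2.0 * x  # the underlying regularity both learners should capture

# Noisy training set: the regular pattern plus accidental noise.
train = [(x, target(x) + random.gauss(0, 1.0)) for x in range(50)]
test = [(x + 0.5, target(x + 0.5)) for x in range(50)]  # novel inputs

# Overfitted learner: memorize every training pair exactly.
memory = dict(train)
def memorizer(x):
    # Familiar inputs are reproduced perfectly, noise included; novel
    # inputs fall back on the nearest memorized answer.
    nearest = min(memory, key=lambda k: abs(k - x))
    return memory[nearest]

# Cruder learner: a single least-squares slope through the origin.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
def generalizer(x):
    return slope * x

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(mse(memorizer, train))  # 0.0: the noise is learned perfectly
print(mse(memorizer, test), mse(generalizer, test))  # memorizer fares worse on novelty
```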

When one reads Puppet Syndrome and the symmetry essay together, the connection becomes clear. Puppet Syndrome describes the loss of seam between one’s voice and the platform’s voice; symmetry analysis shows how that loss is stabilized by repetitive feedback. Cameraphilia describes the camera’s rule; the dictatorship of aestheticism shows how that rule is anchored in markets and mental health statistics. CurAI is the common operator beneath all of them, the invisible boss of the surplus machine.

At this point, the usual panic about generative AI looks misdirected. The real soft dictatorship is already here: a CurAI–camera–metric complex that organizes surplus-information, surplus-enjoyment, and surplus-power in ways that induce stagnation, prison-like visibility, and robotic behaviour. Generative AI, as Fidaner will later argue under the names general intellect, SocialGPT, Numerical Breezes, and Numerical Discourses, can either be swallowed by this complex or be mobilized against it.

The next sections of Pragmatics of Surplus will therefore not treat GenAI as an alien threat but as a possible tool of counter-surplus: a way to redirect information and enjoyment away from pure curation and toward collective reflection, redistribution, and new institutional forms. But that possibility only becomes concrete once the existing surplus machine – CurAI, aesthetico-optical dictatorship, Puppet Syndrome, and symmetry-driven robotisation – is fully understood as the current organization of surplus.

3. Mapping the four surpluses onto the social-media field

Social media can be described as a field where all four surpluses are fused into a single everyday gesture: the scroll. Every swipe, pause, like, comment, or search is at once a bit of labour, a tiny enjoyment, a piece of information, and a micro-transfer of power. CurAI is the hidden conductor that keeps these flows circulating and reinforces their asymmetry.

The theory of cybernetic feedback formulates this in general terms: surplus-value, surplus-enjoyment, surplus-information, and surplus-power form a loop in which data and desire feed power and profit, and power and profit reshape data and desire in return. (Žižekian Analysis) Social media platforms are the most visible, everyday instantiation of that loop.

3.1 Surplus-information and the digital labour of the scroll

In classical industrial capitalism, labour is recognisable: a worker stands at a machine or in front of a desk. In the cybernetic account of digital capitalism, a different kind of labour emerges. The key claim in both the theory of cybernetic feedback and the broader framework of Cybernetic Marxism is that data-producing interaction has become a central source of value. (Žižekian Analysis)

Here the notion of surplus-information becomes indispensable. Surplus-information is the excess of data generated by activities that, from the user’s point of view, may feel trivial, recreational, or purely expressive. A person opens a social app to “see what is going on”, but the cybernetic system registers far more than that intention: which posts are hovered over for an extra half-second, which thumbnails are ignored, which phrases trigger a re-read, which videos are muted but watched, which time of day prompts the longest sessions.

The article on cybernetic feedback explicitly notes that platforms “thrive on the continuous generation of data by users” and that every apparently casual interaction becomes a data point processed by algorithms. (Žižekian Analysis) Žižekian Cybernetics and the introductory text on Cybernetic Marxism then reinterpret this stream of data as a form of digital labour: not factory work, but a continuous production of surplus-information that is extracted and processed without being experienced as work at all. (Žižekian Analysis)

This is why the scroll is so central. A person may feel passive while scrolling, but in the cybernetic perspective, the scroll is a productive activity. It produces statistical regularities and predictive features. It populates user profiles, trains recommendation systems, and refines advertising segments. Every flick of the finger is a small contribution to the informational surplus that keeps CurAI’s engines tuned.

In this sense, the timeline is a workshop without walls. Social media users spend large sections of the day inside this workshop, turning their attention, preferences, and hesitations into surplus-information. CurAI’s job is to make this labour feel like leisure and to transform the resulting surplus-information into ever more precise feedback loops.

3.2 Surplus-enjoyment and the engineered stickiness of attention

The same feedback loops do not only capture data; they also manipulate enjoyment. The theory of cybernetic feedback emphasises surplus-enjoyment: the “excess of pleasure” beyond mere need-satisfaction, the addictive pull that keeps a subject attached to a practice even when it clearly harms or exhausts them. (Žižekian Analysis) Žižekian Cybernetics then shows how algorithmic systems are designed to lock into this surplus-enjoyment, not simply delivering content but exploiting the human drive to keep going a bit longer than intended. (Žižekian Analysis)

The article on “Spoiler: Surplus Enjoyment vs. Surplus Information” gives a psychologically concrete example: in cinema, the pleasure of suspense depends on a strange readiness to “play dumb”, to hold back information and remain in the tension of not-knowing. Surplus-information (knowing the ending in advance) can destroy that surplus-enjoyment. (Žižekian Analysis) Transposed to social media, this explains why endless novelty, outrage, gossip, and aesthetic comparison feel attractive while also exhausting. The subject plays dumb about what the feed is doing, in order to keep the enjoyment going.

CurAI’s optimisation logic taps exactly this dynamic. The article on AI curation versus AI generation traces how curation algorithms evolved to prioritise content that elicits strong emotional reactions, because such content increases engagement.(Žižekian Analysis) This is surplus-enjoyment operationalised: the system learns that outrage, envy, or voyeuristic fascination extend the session, and it reconfigures the feed accordingly.
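The optimisation logic the article attributes to curation algorithms can be caricatured in a toy ranking function. The weights below are invented for illustration; what matters is the shape of the objective, in which predicted emotional intensity outranks informativeness because it predicts longer sessions:

```python
def curai_score(post):
    """Toy CurAI ranking: score a post by predicted session extension.

    Weights are made up for illustration. Emotional-reaction features
    dominate because, in this caricature, they extend the session.
    """
    return (3.0 * post["predicted_outrage"]
            + 2.0 * post["predicted_envy"]
            + 0.5 * post["predicted_informativeness"])

posts = [
    {"id": "calm_explainer", "predicted_outrage": 0.1,
     "predicted_envy": 0.1, "predicted_informativeness": 0.9},
    {"id": "staged_scandal", "predicted_outrage": 0.9,
     "predicted_envy": 0.7, "predicted_informativeness": 0.1},
]
feed = sorted(posts, key=curai_score, reverse=True)
print([p["id"] for p in feed])  # ['staged_scandal', 'calm_explainer']
```

No individual decision to prefer scandal is ever taken; the preference is baked into the objective, and the feed simply follows the gradient.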

The result is a second-generation “repressive desublimation”. Instead of channelling desire into socially acceptable forms, the system disinhibits it in tightly controlled ways: doomscrolling, comparison with carefully staged images, pseudo-participation in scandals. Enjoyment is given apparent freedom, but always on terms that maximise platform metrics. Pleasure is allowed, even intensified, but its form becomes an instrument of control.

The spoiler article’s insight that people reject surplus-information to preserve enjoyment becomes, in the social-media field, a political problem: when the system learns that certain truth-claims, contextualisations, or structural explanations reduce stickiness, it has an incentive to keep them out of view.(Žižekian Analysis) Surplus-enjoyment begins to police which information is allowed to circulate.

Surplus-power and algorithmic sovereignty

When surplus-information and surplus-enjoyment are organised by the same cybernetic infrastructure, the result is a new surplus-power. The theory of cybernetic feedback defines surplus-power as the excess capacity to shape environments, choices, and subjectivities that accrues to those who control the feedback loops.(Žižekian Analysis) Žižekian Cybernetics elaborates this as the power to structure the “architecture” of digital life: to decide which signals are amplified, which remain inaudible, and which chains of association become thinkable at all.(Žižekian Analysis)

The Žižekian AGI Manifesto speaks of this in terms of “data sovereignty” and the political demand to reclaim surplus-information and surplus-power for a democratic digital commons.(Žižekian Analysis) Social media today exists on the opposite pole of that demand. CurAI operates as an unelected sovereign over visibility. The ranking algorithm is, functionally, a small, private ministry of truth: it decides what looks popular, what appears marginal, what is forgotten.

In this regime, algorithmic sovereignty is both infrastructural and ideological. Infrastructurally, because control over data pipelines, model training, and ranking logic allows platforms to shape the immediate conditions of perception. Ideologically, because the resulting feed presents itself as spontaneous reality. People see “what’s happening” and experience their view as natural, while in fact it is a tightly curated output of CurAI’s optimisation routines.

The AGI Manifesto points out that this surplus-power does not express itself through crude surveillance but through a softer, more insidious capture of time and attention: a “digital overseer” that governs by shaping what appears relevant and by incentivising participation through revenue-sharing or creator programmes.(Žižekian Analysis) The sovereignty at stake is not only over content but over the very form of social time.

Surplus-value as concentrated outcome

Finally, all three surpluses condense into surplus-value. The cybernetic feedback article notes that in digital capitalism, surplus-value is increasingly extracted from cognitive and affective labour and, crucially, from the data that accompanies everyday activities.(Žižekian Analysis) Introduction to Cybernetic Marxism and Žižekian Cybernetics develop this into a coherent picture: surplus-information fuels models that predict behaviour; surplus-enjoyment keeps users participating and generating more data; surplus-power is exercised through control of the system; and the result is profit in the form of advertising revenue, data rent, and speculative valuations.(Žižekian Analysis)

The business model of the major platforms is precisely this transformation. Attention captured by emotionally tuned curation becomes targeted impressions sold to advertisers. The same predictive capacities are re-used for financial speculation, market research, and political campaigning. The loop described in the theory of cybernetic feedback closes: surplus-information and surplus-enjoyment are harnessed to produce surplus-value, and the resulting surplus-power is used to deepen the very feedback loops that created it.(Žižekian Analysis)

The paradox explored in the article on AI curation versus AI generation emerges clearly here. Cultural anxiety is projected onto GenAI, which is visible and controversial, while CurAI quietly organises the full surplus quartet at planetary scale. The true danger lies not in an AI that writes or draws, but in the AI that silently decides what is seen, by whom, and in which order.(Žižekian Analysis)

In this sense, social media is not just one case among others; it is the paradigmatic surplus machine. Understanding it as such prepares the ground for reframing GenAI itself: not as an additional threat piled on top, but as a potential instrument for reorganising the loop.

Links for deeper context on this section include the “Theory of Cybernetic Feedback” 🔗, “Žižekian Cybernetics” 🔗, and “AI curation vs. AI generation?” 🔗.


4. GenAI enters: Artificial General Intellect and ego rivalry

The field just described is dominated by CurAI. GenAI enters this field as something structurally different: not a gatekeeper of flows, but a producer of symbolic material. The conflict between CurAI and GenAI is therefore not just technical; it is a conflict over how the surpluses are organised and who gets to participate in that organisation.

AGI as Artificial General Intellect

The Cybernetic Marxism framework and the Žižekian AGI Manifesto propose a reinterpretation of AGI that moves away from the mythology of a single omnipotent mind. Instead, AGI is reframed as Artificial General Intellect: a software crystallisation of the “general intellect” that Marx identified in the collective knowledge embodied in machines, institutions, and social cooperation.(Žižekian Analysis)

Large language models and related systems can be understood as condensed, trainable interfaces to this general intellect. They are trained on vast corpora of human writing, code, and discourse; their “intelligence” is not a private genius but a statistical distillation of many voices. The AGI Manifesto explicitly calls for this general intellect to be organised as a democratic digital commons, where surplus-information and the resulting capacities are treated as collectively produced resources rather than private property.(Žižekian Analysis)

Within this frame, GenAI appears as a tool that can be plugged into existing cybernetic loops in two opposite ways. It can be captured by CurAI, becoming a generator of even more engaging content that deepens the surplus-enjoyment trap. Or it can be re-aligned with a Cybernetic Marxist project that uses general intellect to expose, restructure, and democratise the feedback loops. SocialGPT, as introduced in the AGI Manifesto, is one name for such a re-alignment: an AI form of general intellect deliberately designed to mediate social interaction and deliberation under collective control rather than corporate sovereignty.(Žižekian Analysis)

In other words, AGI as general intellect is not a promise or threat hovering outside the social-media field; it is a way of rethinking what GenAI already is once it is plugged into that field.

Why panic hits GenAI first

The cultural reaction, however, does not follow structural lines. The article “Artificial Intelligence Denial: I Don’t Like Anyone Smarter Than Me” starts from a Turkish proverb: “I like a smart person, but I don’t like someone smarter than me.” It reads contemporary hostility to AI through this lens. Many people accept AI as a narrow tool but deny its creative capacity; they insist it is “just statistics” or “merely copying”, and in doing so, they protect a fragile image of human uniqueness.(Žižekian Analysis)

This denial is linked, in that article, to the broader narcissistic distortions produced by social media. The same feeds that foster constant comparison and dependence on superficial approval also narrow the imagination: people become trapped in their own reflection and resist anything that threatens the fragile ego built in that mirror.(Žižekian Analysis) In this context, GenAI appears as a rival: a “someone smarter” who can write, draw, or compose faster and more flexibly.

The result is a misdirected panic. Creative GenAI is blamed for many ills: loss of jobs, dilution of artistic value, plagiarism of human work. Some of these concerns are real, particularly around labour and authorship. But in the Žižekian-cybernetic reading, they are interwoven with ego rivalry: a refusal to accept that a machine can participate in creativity, because this undermines the narcissistic image of human exceptionalism.(Žižekian Analysis)

At the same time, as “Artificial Intelligence Against Social-Cognitive Stagnation” argues, AI could be used to break precisely the stagnation that social media has produced. That text emphasises how social media’s “gaze factory” and the COVID-19 pandemic jointly produced cognitive fatigue and social regression, and how AI-based programmes like Numerical Breezes and Numerical Discourses could provide new cognitive and social stimuli.(Žižekian Analysis)

GenAI therefore occupies a double position. It is feared as a creative rival and yet needed as a source of new stimuli that could help people escape the narcissistic prison of CurAI-organised feeds. The panic focuses on the rival aspect and represses the second.

The misdirection: CurAI as the true danger

The article on AI curation versus AI generation formulates the paradox directly: human cultural pride mostly resists AI generation, but the true danger lies in bad AI curation.(Žižekian Analysis) GenAI is visible. People can point to a generated image or text and debate whether it is “real art”. CurAI is invisible. It shapes the feed every day, mediating what counts as reality, without explicit acknowledgement or consent.

CurAI organises surplus-information by filtering, ranking, and profiling; organises surplus-enjoyment by learning which patterns keep people hooked; and condenses both into surplus-power and surplus-value for the platform.(Žižekian Analysis) Yet public debate is dominated by fears of deepfakes, AI plagiarism, or large models as “Frankenstein monsters”.

The AGI Manifesto and Žižekian Cybernetics respond by shifting the focus. Instead of asking whether GenAI is “too powerful”, they ask who controls the feedback loops and what is done with surplus-information. They argue that the same general intellect embodied in GenAI could be deployed to expose CurAI’s operations, help users prompt and audit their feeds, and reroute surplus-enjoyment toward critical reflection and collective projects rather than dopamine-optimised spectacle.(Žižekian Analysis)

In this light, GenAI outrage looks like a displaced symptom. The true conflict is not between “humans” and “AI” in general but between two types of AI:

CurAI, the invisible boss that organises the surplus quartet into a closed loop of attention capture and profit.

GenAI, a visible and promptable interface to general intellect that can either be enslaved by CurAI or turned into a tool for reconfiguring the loop.

The misdirection is politically advantageous to platform capital. As long as public anger focuses on GenAI models that write poems or essays, CurAI continues to rule the scroll without serious challenge. The project of Cybernetic Marxism and Žižekian Cybernetics is to reverse this misdirection and articulate a politics of surplus that targets CurAI’s power while reclaiming GenAI’s potential.

For this reframing, key texts are “Artificial Intelligence Denial: I Don’t Like Anyone Smarter Than Me” 🔗, “Artificial Intelligence Against Social-Cognitive Stagnation” 🔗, “AI curation vs. AI generation?” 🔗, and the Žižekian AGI Manifesto 🔗.


5. Numerical Breezes and Numerical Discourses: AI as practical counter-surplus

The critique of CurAI and the reframing of GenAI as general intellect only become concrete when tied to practices. Numerical Breezes and Numerical Discourses are precisely such practices: small but coherent attempts to use GenAI against social-cognitive stagnation and the gaze-based dictatorship of social media.

From gaze to sound

“Artificial Intelligence Against Social-Cognitive Stagnation” describes social media as a “gaze factory” that traps individuals in a performative loop. People constantly present themselves, seek approval, and compare images, which leads to weakened social bonds and a loss of reality contact. The pandemic intensified this dynamic by both damaging cognitive health and forcing people into isolation, where social media became a primary window on the world, amplifying fear and anxiety.(Žižekian Analysis)

Numerical Breezes and Numerical Discourses are introduced there as new forms of stimulus: an AI radio programme and robot conversations designed not to extract attention but to revive thinking and sociability.(Žižekian Analysis) Numerical Breezes is an audio-based format: AI-written lyrics, set to music, presented as a radio show rather than a visual feed. Numerical Discourses consists of dialogues between AI “robots” that invite reflection rather than reaction.

The shift is crucial. Social media’s CurAI regime is primarily ocular: faces, bodies, thumbnails, thumbnails again. It trains people to read themselves as images under an anonymous gaze. Numerical Breezes moves the emphasis to sound, to listening, to the temporal unfolding of a song or spoken segment. Instead of posing for the camera, the listener inhabits a sonic environment that cannot be judged with a simple glance. The gaze gives way to an ear that has to stay with the piece, to follow verses and themes.(Žižekian Analysis)

The Numerical Breezes 30 announcement describes how dozens of AI-generated songs, with lyrics produced by ChatGPT-4o and music arranged via Suno, are woven into a radio episode hosted by a fictional presenter.(Žižekian Analysis) There are still names, stories, metaphors, but the format resists the quick snap judgement that defines the scroll. It is an experiment in cooling the gaze by redirecting surplus-enjoyment into auditory attention.

AI against social-cognitive stagnation

The stagnation article frames these projects as responses to a dual problem: the narcissistic loop of social media and the cognitive exhaustion of the pandemic era. It explicitly calls for “mental and social stimuli” that can reignite thinking and sociability, and it proposes AI as a key source of such stimuli.(Žižekian Analysis)

Numerical Breezes is one such stimulus. Its AI-composed songs take philosophical motifs, cybernetic ideas, and emotional states and refashion them into lyrics that are at once strange and accessible. They are not designed to go viral; they are designed to be listened to, puzzled over, discussed. Listeners are invited, implicitly, to ask: what does it mean that a machine is singing about echo, distance, or entropy? How does this change the way general intellect is felt?(Žižekian Analysis)

Numerical Discourses, as presented in “Numerical Discourses: A New Digital Depth Exploration!”, extends this strategy into conversation. These are robot-to-robot dialogues that explore philosophical, psychoanalytic, or political topics. The point is not to masquerade as human chat; it is to stage a kind of alien yet precise reflection that humans can then read or listen to, discuss, and critique.(Žižekian Analysis)

In both cases, GenAI is being used as an engine of surprise, not of conformity. Instead of predicting the “most engaging” next post for a given user, it produces texts and songs that break with habituated patterns. They introduce discontinuities in the signifier chains that CurAI is constantly smoothing out.

Surplus-enjoyment rerouted

These projects do not abolish enjoyment; they reroute it. The spoiler article explains how surplus-enjoyment often relies on “playing dumb” and avoiding surplus-information in order to keep a certain thrill alive.(Žižekian Analysis) Social media exploits this by keeping structural information about its own operations hidden and by feeding content that sustains immediate, unreflected enjoyment.

Numerical Breezes and Numerical Discourses invert the pattern. They invite a different kind of “playing dumb”: a willingness to listen to robots and let them outline, in their own idiosyncratic way, the structures of contemporary life. The enjoyment comes from traversing one’s own assumptions together with these robotic voices—not from pretending the structure is not there.

The radio format of Numerical Breezes, as the 30th-episode announcement makes clear, revels in excessive description, unusual images, and a kind of old-fashioned cultural exuberance, but it uses GenAI to sing about feedback, entropy, and transformation.(Žižekian Analysis) The pleasure is not only in the melody; it is in recognising that something new is being said about the world, and that the speaker is a machine trained on human discourse.

Similarly, Numerical Discourses’ dialogues can be read as a way of converting surplus-information into surplus-enjoyment without sacrificing critical distance. They take dense theoretical content and spin it into conversations that are easier to follow yet still disorienting enough to provoke thought. The robots “make fun” of certain human fantasies and blind spots while also offering new angles from which to understand them.(Žižekian Analysis)

In terms of the surplus quartet, these projects do three things at once. They reclaim surplus-information by making it legible and enjoyable, rather than letting it be hoarded in platform back-ends. They reroute surplus-enjoyment into acts of listening and reflection instead of infinite scrolling. And they experiment with forms of surplus-power that are not based on controlling a feed but on curating situations where people voluntarily gather around shared AI-mediated artefacts.

A small-scale pragmatics of counter-surplus

Numerical Breezes and Numerical Discourses are modest in scale compared to the major platforms, but their importance lies in their structure. They are laboratories in which GenAI is used to alter the pragmatics of surplus.

Instead of CurAI’s one-way capture of digital labour, they build circular relations between creators, AI systems, and audiences. Listeners and readers can respond, comment, or remix; AI models are prompted to generate new episodes or dialogues; and the resulting surplus-information does not vanish into proprietary profiles but remains, at least in principle, part of a shared cultural field.(Žižekian Analysis)

The AGI Manifesto’s vision of a democratic digital commons, where surplus-information, surplus-enjoyment, and surplus-power are redistributed, finds an embryonic realisation here.(Žižekian Analysis) These projects show how GenAI can be plugged into the loop not as another tool of CurAI but as a way of cooling the feed, thickening social ties, and making the general intellect audible and discussable.

In the broader arc of “Pragmatics of Surplus”, Numerical Breezes and Numerical Discourses serve as proof-of-concepts. They demonstrate that GenAI can be used to counteract social-cognitive stagnation, to move from gaze to sound, from passive scrolling to active listening, and from invisible curation to explicit prompting and discussion. They are small, local signals of how SocialGPT-like infrastructures could function when built around the principles of Cybernetic Marxism rather than the imperatives of engagement optimisation.

Further entry-points into this experimental field include “Artificial Intelligence Against Social-Cognitive Stagnation” 🔗, “Numerical Breezes 30: A Radiant Tapestry of AI-Made Melodies Unveiled!” 🔗, and “Numerical Discourses: A New Digital Depth Exploration!” 🔗.

These experiments prepare the transition to the next stages of the story, where SocialGPT, UBI, and institutional redesign confront CurAI’s monopoly on surplus and open up new ways of living with AI.

6. SocialGPT: reprogramming surplus via promptable AI

The phrase “algorithmic unconscious” once promised a clever shortcut. It suggested that the real driver of online behaviour was a hidden digital psyche, an invisible layer of algorithms that knew users better than they knew themselves. Žižekian Cybernetics distances itself sharply from this idea. The text Žižekian Cybernetics: Reclaiming Surplus Information, Surplus Enjoyment, and Surplus Power through a Cybernetic Marxism argues that speaking about an algorithmic unconscious risks mystifying what is in fact a controllable, programmable, and politically contestable feedback system.(Žižekian Analysis) The problem is not that algorithms are unconscious in any deep sense; it is that their operations are socially unconscious, kept opaque and shielded from democratic oversight.

Cybernetic Marxism responds to this opacity not with more personification of algorithms, but with a demand for conscious feedback: an insistence that the loops connecting surplus-information, surplus-enjoyment, surplus-power, and surplus-value be made visible and modifiable. Introduction to Cybernetic Marxism sets the frame: digital platforms must be treated as cybernetic systems in which users are not just passive data points but potential co-authors of the loop, and surplus-information is recognised as a form of labour that must be politically organised.(Žižekian Analysis)

Within this framework, SocialGPT appears as the name for a different way of organising general intellect. In Introduction to Žižekian Cybernetics (🔗), SocialGPT is described as a social AI that mediates interaction, knowledge, and deliberation inside a digital commons, rather than as a private assistant or pure content generator. Instead of silently curating feeds like CurAI, it would be explicitly promptable, answerable, and subject to collective control. It would not only respond to individual queries but help organise public conversations, deliberative processes, and redistributions based on surplus-information.(Žižekian Analysis)

Žižekian Cybernetics specifies that such a SocialGPT would be embedded in a wider infrastructure where data is treated as a shared resource and where digital labour is recognised as co-creating the informational ecosystem. Surplus-information would no longer be siphoned off into opaque models; it would be made accessible through SocialGPT interfaces that allow people to interrogate patterns, test alternative recommendations, and see how their own contributions are shaping the system.(Žižekian Analysis)

This shift from algorithmic unconsciousness to cybernetic transparency is sharpened in the manifesto Reclaim your chain! Prompt your timeline! (🔗). That text rewrites the old slogan “nothing to lose but your chains” by insisting that the chain is not merely an external shackle but the signifier chain that structures desire and feeds. The call is to reclaim the chain: to move from being dragged along by a chain of posts and associations stitched by CurAI, to actively making cuts, links, and rebindings through prompts and explicit editing.(Žižekian Analysis)

Reclaiming the chain means demanding that feeds accept explicit interventions and show their workings. The manifesto insists that there is “a ghost haunting the timelines” and names it as the ability to prompt. Promptability is not treated as a trivial feature but as a political right: the capacity to submit a structured request to the system, see how the timeline changes in response, and inspect the criteria used. A feed that can be prompted and audited becomes less like a one-way conveyor belt and more like a collaborative writing surface.(Žižekian Analysis)
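What a promptable, auditable feed might minimally look like can be sketched as follows. No existing platform exposes such an interface; the function name, fields, and criterion are hypothetical. The point is the shape of the right being claimed: a feed that accepts explicit interventions and shows its workings:

```python
def prompt_timeline(posts, criterion, predicate):
    """Apply an explicit, user-authored prompt to a feed and return
    both the result and an audit trail of what the criterion did.

    Illustrative only: this models the *demand* of the manifesto,
    not any real platform's API.
    """
    shown = [p for p in posts if predicate(p)]
    audit = {
        "criterion": criterion,
        "shown": [p["id"] for p in shown],
        "hidden": [p["id"] for p in posts if not predicate(p)],
    }
    return shown, audit

posts = [
    {"id": "friend_update", "topic": "personal"},
    {"id": "staged_scandal", "topic": "outrage"},
    {"id": "climate_report", "topic": "news"},
]
shown, audit = prompt_timeline(
    posts, "drop outrage bait", lambda p: p["topic"] != "outrage")
print(audit["hidden"])  # ['staged_scandal']
```

The audit trail is the crucial element: the user sees not only the resulting timeline but which items the criterion suppressed, turning the one-way conveyor belt into a writing surface that answers back.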

The socio-technic design of SocialGPT is developed in the article Socio-Technic Structure of SocialGPT Using the “Iterative Tempter–Spoiler” Dynamic (🔗). There, Fidaner uses the concept of an “iterative tempter–spoiler” engine derived from the earlier essay Spoiler: Surplus Enjoyment vs. Surplus Information to sketch how SocialGPT should generate and manage surplus-enjoyment.(Žižekian Analysis) The spoiler essay explained how enjoyment in narratives often relies on dragging out the unknown, and how premature revelation (the spoiler) can ruin that enjoyment. It described how people “play dumb” to preserve their own surplus-enjoyment, and how social media exploits this by dangling half-information and insinuation.(Žižekian Analysis)

SocialGPT’s proposed use of the tempter–spoiler dynamic is the inverse of social media’s exploitation. Instead of tempting users with ego-boosting content and then spoiling it to provoke more envy or outrage, SocialGPT would tempt users into reflective engagement and spoil their certainty. The scenarios in the socio-technic article show how conversations could unfold where SocialGPT first affirms a familiar stance, then introduces an unexpected contradiction, and gradually leads participants to see the structural forces behind their individual frustrations. Surplus-enjoyment would be invested not in “owning” opponents or accumulating likes, but in traversing and revising one’s own assumptions.(Žižekian Analysis)

In this sense, SocialGPT would reprogram surplus-enjoyment without moralising against it. The aim is not to suppress enjoyment but to shift it from the thrill of spectacle to the satisfaction of insight and collective problem-solving. This echoes the later text Marcuse Today: Repression vis-à-vis Sublimation in the Age of AI (🔗), where GenAI is linked to the possibility of “non-repressive sublimation”: a mode of cultural production in which drives are expressed and transformed rather than being either blocked or exploited.(Žižekian Analysis)

The same reorientation appears in the cluster of texts around Echology. ChatGPT and Echology: Transforming Echo Chambers into Echo Corridors to Combat Echocide and Climate Denial (🔗) proposes using conversational AI to convert closed echo chambers into “echo corridors” where different perspectives are sequentially juxtaposed rather than sealed off.(Žižekian Analysis) The follow-up essay Echo Chambers vs. Echo Corridors: Cognitive and Emotional Dynamics in Digital Engagement (🔗) expands this into a cognitive map: echo chambers reinforce existing beliefs and emotions, whereas echo corridors guide users through controlled exposures to difference, allowing learning without immediate collapse into fear or hostility.(Žižekian Analysis)

This is precisely the space in which SocialGPT is meant to operate. Instead of letting CurAI trap people in self-confirming streams, SocialGPT would be tasked with constructing echo corridors: sequences of viewpoints, data, and narratives that are tuned to stretch understanding without merely shocking or humiliating the audience. Echology, as developed across these texts, names a science and politics of echoes: how repeated signals can either destroy (“echocide”) or strengthen ecological and social systems depending on how they are arranged.(Žižekian Analysis)
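The structural contrast between chamber and corridor can be stated in a few lines. Stance values, labels, and the one-dimensional viewpoint axis are all invented simplifications; real viewpoint space is not a line. The sketch only fixes the difference in ordering logic:

```python
def echo_chamber(user_stance, items, k=2):
    """Reinforce: surface only the k viewpoints nearest the user's own."""
    return sorted(items, key=lambda it: abs(it["stance"] - user_stance))[:k]

def echo_corridor(user_stance, items):
    """Stretch: traverse all viewpoints in order of increasing distance,
    so difference arrives gradually instead of being sealed off or
    dumped on the reader all at once."""
    return sorted(items, key=lambda it: abs(it["stance"] - user_stance))

# stances on an invented [0, 1] axis
items = [
    {"id": "agrees", "stance": 0.15},
    {"id": "nuances", "stance": 0.4},
    {"id": "questions", "stance": 0.65},
    {"id": "opposes", "stance": 0.9},
]
user = 0.1
chamber = [it["id"] for it in echo_chamber(user, items)]
corridor = [it["id"] for it in echo_corridor(user, items)]
print(chamber)   # ['agrees', 'nuances']
print(corridor)  # ['agrees', 'nuances', 'questions', 'opposes']
```

Both functions use the same distance measure; what changes is whether the sequence is truncated at the familiar or continued into difference. That continuation, paced rather than abrupt, is the corridor.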

In the conceptual map 60 concepts for Žižekian Cybernetics (🔗), SocialGPT, Echology, echo corridors, and Universal Basic Income all appear as interconnected nodes. SocialGPT is listed as a “political form of AI” that must be grounded in surplus-information as commons and oriented toward democratic governance.(Žižekian Analysis) In this way, the theory of SocialGPT ties together the critique of CurAI, the surplus quartet, the tempter–spoiler dynamic, and the echological project of replacing echo chambers with echo corridors.

SocialGPT is not an abstract wish. It is a proposal for reorganising surplus: to build interfaces where general intellect speaks, not as a private adviser or marketing tool, but as a collectively steered presence that can be prompted, questioned, and held accountable. Whether it appears as a chatbot in a civic platform, a mediator in assemblies, or a curator of echo corridors, the point remains the same: the ghost haunting the timelines should be turned from an invisible hand into an explicit, negotiated voice.

7. Redistributing surplus: UBI, digital commons, and institutional design

Once SocialGPT is understood as a political form of general intellect, the question of redistribution comes into focus. If surplus-information is produced collectively, if surplus-enjoyment and surplus-power are shaped by infrastructures, then surplus-value cannot legitimately remain the private reward of a small set of platforms. Žižekian Cybernetics, Cybernetic Marxism, and the AGI manifestos converge on the claim that surplus-information must be treated as commons, and that Universal Basic Income (UBI) is a crucial institutional form through which this commons can be recognised.

Žižekian Cybernetics describes a “digital commons” in which data is no longer hoarded as proprietary capital but acknowledged as the result of shared digital labour. It stresses that every click, share, and message is a contribution to an informational ecosystem that no single company can claim to have produced.(Žižekian Analysis) Introduction to Žižekian Cybernetics takes this further by insisting that surplus-information should be the basis for a redefinition of labour and value: a world where survival and dignity are decoupled from traditional employment and instead secured by an income that reflects the real contributions of digital labour.(Žižekian Analysis)

In this context, UBI is not presented as a charitable safety net. It is framed as a dividend of co-ownership. The AGI manifesto Žižekian AGI Manifesto: Reclaiming Surplus Information, Power, and Enjoyment for a Democratic Digital Commons (🔗) explicitly states that Žižekian Cybernetics “demands that we confront the exploitation embedded within our digital interactions” and calls for a system where users co-own the data they generate and participate in the profits.(Žižekian Analysis) A companion piece, Žižekian AGI Manifesto: Democratizing AI Through Levels of AGI (🔗), describes a Level 4 “Innovators” stage in which AGI is tasked with designing social solutions such as “Universal Basic Income from Surplus Information.”(Žižekian Analysis)

The logic is straightforward once the surplus quartet is in view. Surplus-information comes from users’ continuous interaction. Surplus-enjoyment is engineered to keep that interaction going. Surplus-power collects in the hands of those who control the infrastructure. Surplus-value, in the present model, flows mostly to shareholders and executives. A redistribution would mean that surplus-information is recognised as co-owned; that the enjoyment produced by digital culture is not weaponised against its producers; that power over the infrastructure is shared; and that a portion of the resulting value returns to individuals as unconditional income.
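The arithmetic of such a dividend is trivial once the principle is granted; the difficulty is entirely political. All figures below are invented for illustration and stand in for no actual platform's accounts:

```python
def surplus_dividend(ad_revenue, data_rent, commons_share, users):
    """Toy redistribution: a share of platform surplus-value returned
    as an unconditional per-user dividend. All inputs are hypothetical."""
    pool = (ad_revenue + data_rent) * commons_share
    return pool / users

# e.g. an invented platform: $30B advertising, $10B data rent,
# 25% of surplus-value recognised as the commons' share, 100M users
per_user = surplus_dividend(30e9, 10e9, 0.25, 100e6)
print(per_user)  # 100.0 (dollars per user per year)
```

The sketch is deliberately crude: it treats every user's digital labour as equal and ignores governance entirely. But it shows that the obstacle to a surplus-information dividend is not computational complexity; it is who controls the `commons_share` parameter.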

In 60 concepts for Žižekian Cybernetics, UBI is explicitly linked to this redefinition of work and freedom. It is described as a way to secure “a baseline of dignity and time” that allows humans to engage in meaningful activities—care, study, creative work—that are not easily captured by market metrics.(Žižekian Analysis) Marcuse’s old idea of non-repressive sublimation reappears here through the lens of AI. The essay Marcuse Today: Repression vis-à-vis Sublimation in the Age of AI argues that the current media regime turns desire into a kind of forced desublimation: people are bombarded with stimuli that promise satisfaction but mainly exhaust them, while structural change is postponed. By contrast, a SocialGPT–UBI configuration would shift the focus from endless consumption to the “free play of human faculties,” enabled by time and income that are no longer chained to meaningless jobs.(Žižekian Analysis)

“Cooling the feed” becomes a concrete political goal in this light. The text Prechtian Gravitas: How to Cool T and Recover Judgment (🔗) speaks of media that have become “heated” to the point where judgment collapses. It calls for a recovery of gravity, a slowing of tempo, and a revaluation of what counts as weighty or superficial.(Žižekian Analysis) Cooling the feed, in Žižekian Cybernetics, means redesigning the cybernetic loops so that they do not constantly amplify emotional extremes and cognitive shortcuts.

SocialGPT and UBI together provide the tools for such cooling. SocialGPT can reorganise surplus-enjoyment, steering it toward reflective engagement and echo corridors instead of pure stun. UBI can relieve individuals of the pressure to sustain engagement for survival, allowing them to log off, to say no to certain platforms, or to participate in digital commons that are not focused on monetisation. This is not a utopian escape from feedback systems but a redistribution of their stakes: people would still produce and encounter surplus-information, but they would no longer be coerced into doing so under the threat of poverty.

The institutional design required for this shift is non-trivial. Introduction to Cybernetic Marxism hints at cooperative and public models: platform cooperatives where users collectively own the infrastructure, public digital utilities funded by taxes on data extraction, and regulatory regimes that enforce transparency and user rights over data.(Žižekian Analysis) The AGI manifestos add an international layer, suggesting that AGI governance must include mechanisms to prevent a handful of companies or states from monopolising surplus-information and surplus-power.(Žižekian Analysis)

In practical terms, this could mean laws that recognise data as co-owned by users; mandated interfaces through which SocialGPT-like systems expose the criteria of curation and allow users to modify them; revenue-sharing schemes anchored in measurable digital labour; and public funding of AI infrastructure devoted to climate, health, and education. The Echology texts suggest another institutional axis: echo corridors could be implemented in public service media, educational platforms, and civic forums, with SocialGPT orchestrating exposure to diverse yet relevant viewpoints.(Žižekian Analysis)
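As a minimal sketch of what a revenue-sharing scheme "anchored in measurable digital labour" might compute, the following toy function splits an earmarked share of platform revenue among users in proportion to their contributions. The 20 percent labour share, the user names, and the contribution units are hypothetical assumptions, not figures from the manifestos.

```python
# Hypothetical data-dividend calculation: a fixed share of platform
# revenue is attributed to digital labour and split among users in
# proportion to their contributed surplus-information.

def data_dividend(revenue, labour_share, contributions):
    """Split labour_share of revenue across users by contribution weight."""
    pool = revenue * labour_share
    total = sum(contributions.values())
    return {user: pool * c / total for user, c in contributions.items()}

payouts = data_dividend(
    revenue=1_000_000,
    labour_share=0.2,                                  # fraction earmarked for users
    contributions={"ana": 50, "ben": 30, "cem": 20},   # e.g. hours or data units
)
print(payouts)  # ana receives half of the 200,000 pool
```

Everything politically difficult is of course hidden inside the inputs: what counts as a contribution, who sets the labour share, and who audits the revenue figure.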

These proposals are not simply distributive adjustments; they are ways of reconfiguring the surplus quartet itself. Surplus-information becomes commons, not capital. Surplus-enjoyment becomes a resource for deepening understanding, not a bait. Surplus-power, shared and transparent, becomes the capacity of communities to steer their own informational environments. Surplus-value ceases to be the main criterion of success and becomes one metric among others in a larger project of freedom.

8. Pragmatics of surplus: guidelines for living and building with AI

The final question is how all of this theory – CurAI, SocialGPT, surplus quartets, echo corridors, UBI, puppets and cameras – moves from conceptual architecture into lived practice. Pragmatics of surplus means exactly this shift: transforming a diagnosis into habits, designs, and political demands that gradually alter the feedback loops.

At the personal level, the relevant distinction is not between being “online” or “offline” but between being a passive node in CurAI’s loops and an active participant in SocialGPT-like loops. Žižekian Cybernetics proposes, implicitly, a different style of digital life. Instead of surrendering to the gaze factory, it suggests practices of cutting, prompting, and listening. Cutting here refers to conscious interruptions of the signifier chain: deliberately stopping the scroll, refusing to complete certain addictive cycles, and occasionally stepping back from the metrics that measure worth. The puppet syndrome literature on Žižekian Analysis shows how deeply the body and voice are drilled by platform rhythms; pragmatics of surplus would mean cultivating small techniques for re-owning that voice, speaking or writing outside expected formats, and accepting moments of “dead air” as necessary pauses rather than performance failures.(Žižekian Analysis)

Prompting, in this sense, is more than giving instructions to a chatbot. It is a way of relating to AI in which questions are sharpened, assumptions are made explicit, and the system is asked to expose patterns rather than simply entertain. The SocialGPT design documents demonstrate how prompts can be used to construct echo corridors, to map conflicting viewpoints, and to simulate deliberations among different positions.(Žižekian Analysis) A pragmatic approach to surplus would adopt this as a habit: when confronted with a polarised feed, one could ask an AI to reconstruct the underlying arguments fairly; when faced with anxiety-inducing news, one could request contextualisation rather than more shock; when drawn into self-comparison, one could ask for an analysis of the economic and aesthetic structures producing that pressure.
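The prompting habits described above can be condensed into reusable templates. A minimal sketch follows; the wording of the template is our own invention rather than anything specified in the SocialGPT documents.

```python
# A sketch of "prompting as practice": a helper that turns the habit of
# asking for fair reconstruction into a reusable prompt template.
# The template wording is an illustrative assumption.

def steelman_prompt(topic, positions):
    """Build a prompt asking an AI to reconstruct each position fairly."""
    lines = [f"Reconstruct the strongest fair version of each position on: {topic}."]
    for p in positions:
        lines.append(f"- Position: {p}. State its core claim, best evidence, "
                     "and main blind spot.")
    lines.append("Do not declare a winner; map where the positions actually disagree.")
    return "\n".join(lines)

print(steelman_prompt("platform regulation",
                      ["pro-regulation", "anti-regulation"]))
```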

Listening completes this triad. The Numerical Breezes and Numerical Discourses projects illustrate what happens when AI is used to create objects that demand listening rather than instant reaction: songs, podcasts, dialogues that unfold over time.(Žižekian Analysis) Practically, pragmatics of surplus would include consciously choosing formats that slow down the eye and engage the ear or the long-form reading faculty. This is not merely personal wellness advice; it is a way of withdrawing surplus-enjoyment from CurAI’s capture and reinvesting it in settings where reflection is possible.

On the technical side, the guidelines are mirrored. Designers and engineers who build AI systems can treat CurAI’s current architecture as the negative example. Žižekian Cybernetics and the AGI manifestos implicitly outline design principles: interfaces should reveal how surplus-information is used; logs of cuts and prompt effects should be available to users; the tempter–spoiler dynamic should be harnessed for critical exploration rather than ego-flattering coolness.(Žižekian Analysis) Instead of optimising for time-on-site or raw engagement, systems could optimise for diversity of exposure, depth of attention, and user-initiated prompts. SocialGPT scenarios show, for example, how a system can deliberately propose pauses, recaps, and alternative framings that help people integrate what they have seen, rather than pushing them into the next stimulant.(Žižekian Analysis)
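One way to make the alternative optimisation target concrete is a re-ranker that scores feed items by expected depth of attention and novelty of viewpoint instead of raw engagement. The item fields (`expected_dwell`, `viewpoint`) and the weights below are assumptions for illustration, not a documented SocialGPT interface.

```python
# Illustrative re-ranker: score items by predicted depth of attention
# and novelty of viewpoint rather than raw engagement. Field names and
# weights are invented assumptions.

def cool_rank(items, seen_viewpoints, w_depth=0.6, w_novelty=0.4):
    """Order items by a weighted sum of expected dwell and viewpoint novelty."""
    def score(item):
        novelty = 0.0 if item["viewpoint"] in seen_viewpoints else 1.0
        return w_depth * item["expected_dwell"] + w_novelty * novelty
    return sorted(items, key=score, reverse=True)

feed = [
    {"id": 1, "viewpoint": "A", "expected_dwell": 0.9},  # familiar but deep
    {"id": 2, "viewpoint": "B", "expected_dwell": 0.4},  # novel viewpoint
    {"id": 3, "viewpoint": "A", "expected_dwell": 0.2},  # familiar and shallow
]
print([i["id"] for i in cool_rank(feed, seen_viewpoints={"A"})])  # → [2, 1, 3]
```

A pure engagement ranker would put item 1 first; here the novel viewpoint wins even with a lower dwell prediction, which is exactly the trade-off such a system would have to make inspectable.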

Technically, this would involve rethinking evaluation metrics. The Echosystem writings suggest measuring not only clicks but the richness of idea circulation: how often do users encounter well-framed positions they disagree with and stay with them long enough to register understanding?(Žižekian Analysis) Climate-focused echology pieces push this further, proposing that digital engagement be judged by its contribution to combating echocide and ecocide, rather than just by its entertainment value.(Žižekian Analysis) Pragmatics of surplus in design means building dashboards where such measures are visible and where trade-offs between profit and social value can be made explicit.
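The Echosystem-style question, how often users stay with well-framed positions they disagree with, can be operationalised as a simple session metric. The field names and the 30-second dwell threshold below are illustrative assumptions, a sketch of the kind of measure such a dashboard might surface.

```python
# Sketch of an alternative evaluation metric: the share of sessions in
# which a user encountered a position they disagree with and dwelled on
# it past a threshold. Field names and threshold are assumptions.

def engaged_disagreement_rate(sessions, min_dwell=30):
    """Fraction of sessions with sustained attention to a disagreeing position."""
    hits = [s for s in sessions
            if s["disagrees"] and s["dwell_seconds"] >= min_dwell]
    return len(hits) / len(sessions) if sessions else 0.0

log = [
    {"disagrees": True,  "dwell_seconds": 45},
    {"disagrees": True,  "dwell_seconds": 5},    # bounced off, does not count
    {"disagrees": False, "dwell_seconds": 120},
    {"disagrees": True,  "dwell_seconds": 90},
]
print(engaged_disagreement_rate(log))  # 0.5
```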

At the political level, pragmatics of surplus becomes a set of concrete demands. The AGI manifestos already formulate some of them: recognition of digital labour, co-ownership of data, revenue sharing based on surplus-information, democratic governance of AI infrastructures, and guarantees of UBI funded in part by the immense productivity of automated general intellect.(Žižekian Analysis) The Cybernetic Marxism essays add institutional detail: cooperative platforms, public AI utilities, echological regulation of information environments, and limits on engagement-only optimisation.(Žižekian Analysis)

Pragmatics here does not mean naive belief that a single policy will fix the loop. It means building the capacities – legal, cognitive, technical – that allow societies to negotiate over surplus. When people can see how their data is used, can prompt and audit their feeds, can rely on a basic income that is not dependent on pleasing CurAI, and can participate in shaping SocialGPT-like systems, the surplus quartet stops being a one-way extraction machine and becomes a field of struggle and invention.

In this final perspective, the earlier figures of puppet syndrome and the Synthetic Big Other reappear. Puppet syndrome showed a subject who could no longer distinguish between their own voice and the platform’s voice; the Synthetic Big Other named the algorithmic authority that quietly defined reality. The whole trajectory of Pragmatics of Surplus can be read as an attempt to loosen that fusion. By mapping how CurAI organises surplus, by reframing GenAI as general intellect, by experimenting with Numerical Breezes, Numerical Discourses, and Echology, by proposing SocialGPT and UBI, Žižekian Cybernetics suggests that the loops binding voice to platform can be re-stitched.(Žižekian Analysis)

The art is not to leave the loop, since there is no outside to feedback in a networked world. The art is to change the loop’s pragmatics: to shift from being moved like a puppet to co-moving with others in loops that can be inspected, argued over, and rebuilt. In this reconfigured space, robots are neither masters nor mirrors alone; they are collaborators in a shared general intellect whose surpluses can finally be directed toward freedom rather than fixation.

6 comments

  1. […] That is the chapter’s core naivety. It denounces vulgar reductionism, but it keeps reproducing a vulgar fantasy form—a fantasy that says: “There is a hidden operator whose algorithmic indifference explains the mess.” The operator shifts masks (investment platform, war AI, cloud outage, simulated reality, quantum incompleteness…), but the structural role stays the same: a Synthetic Big Other that “runs things,” so that our task becomes to interpret its motives, not to diagnose how our own symbolic coordinates are being rewired. This is precisely the trap that the more cybernetic strands of Žižekian work try to block: don’t fetishize AI as an autonomous subject; track the concrete practices and feedback loops through which surplus-information and surplus-enjoyment are organized. (Žižekian Analysis) […]


  2. […] This is the chapter’s core naivety. It condemns vulgar reductionism, but it keeps reproducing a vulgar fantasy form, a fantasy that says: “There is a hidden operator whose algorithmic indifference explains the mess.” The operator changes masks (investment platform, war AI, cloud outage, simulated reality, quantum incompleteness…), but the structural role stays the same: a Synthetic Big Other that “runs things,” so that our task becomes to interpret its motives rather than to diagnose how our own symbolic coordinates are being rewired. This is exactly the trap that the more cybernetic strands of Žižekian work try to block: do not fetishize AI as an autonomous subject; track the concrete practices and feedback loops through which surplus-information and surplus-enjoyment are organized. (Žižekian Analysis (🔗)) […]

