Remember cat videos, one of the internet’s first ‘innocent’ icons: they looked like a small economy of nonsense, convincing no one, accountable to no one, just glanced at and passed by; but today that innocence itself should be read as an illusion bound to the feed’s law of ranking. Because the feed is not a shop window where contents are lined up one after another, but a regime that distributes visibility: what decides which image will be ‘the world’ is scoring and ranking more than content. The fairground of autonomous agents that Hegel called the ‘spiritual animal kingdom’ is updated here as well: while everyone thinks they are doing ‘the thing itself,’ what they in fact do is produce a signal compliant with the metric; content is the packaging of the signal; and the cat is the cheapest, least risky, most serially reproducible raw material of this packaging. The result is the concrete form of the final ‘enshittification’ (platform decay): an experience first polished to attract users, then muddied as it serves advertisers and interest partners; what remains is a ranking machine that pushes infinite variations of the same cat, and the digital mud it produces. (🔗)
Recommendation systems: Why ranking became more decisive than ‘content’
What is called the ‘feed’ in social media appears at first glance like contents being lined up one after another; in reality, it is a ranking problem. This distinction is not simple, because the moment ranking changes, the ‘visible world’ changes. Even under the conditions of ‘the same platform, the same accounts, the same day,’ when the ordering rule is changed, the reality that users come into contact with turns into another reality. That is why recommendation systems become more decisive than content; content gains or loses its social effect as it passes through ranking.
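The claim that the ordering rule alone changes the ‘visible world’ can be made concrete with a toy sketch. Nothing below belongs to any real platform; the posts and fields are invented, and the two sort keys stand in for a chronological and an engagement-ranked feed:

```python
# Toy sketch: the same three posts, two ordering rules, two different "worlds".
# All data and field names are invented for illustration.
posts = [
    {"id": "a", "timestamp": 3, "engagement": 10},
    {"id": "b", "timestamp": 2, "engagement": 90},
    {"id": "c", "timestamp": 1, "engagement": 40},
]

# Rule 1: a chronological feed (newest first).
chronological = sorted(posts, key=lambda p: p["timestamp"], reverse=True)
# Rule 2: an engagement-ranked feed (most interacted-with first).
by_engagement = sorted(posts, key=lambda p: p["engagement"], reverse=True)

print([p["id"] for p in chronological])  # ['a', 'b', 'c']
print([p["id"] for p in by_engagement])  # ['b', 'c', 'a']
```

Same platform, same accounts, same day: only the sort key differs, and the reader of each feed meets a different reality.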
Hegel’s move in the Phenomenology (Phänomenologie des Geistes) was to show that what consciousness (Bewußtsein) thinks of as ‘what stands outside’ is constituted through its own acts. Spirit (Geist) here is not a mystical substance; it means the objectification of people’s shared practices. Recommendation systems demand a Hegelian reading at exactly this point: the feed’s seeming like an ‘externally imposed technique’ conceals the fact that the feed is also tied to metrics derived from user behaviors. Over time, ranking turns into a regulator that decides not only the question ‘what is interesting,’ but the distinction between ‘what is visible’ and ‘what is invisible.’
An early and instructive threshold of this transformation is the 2006 News Feed controversy. When News Feed first appeared, it was a design that, instead of leaving people’s actions toward one another on individual profile pages, gathered them into a central feed. When the backlash grew, Facebook announced additional privacy controls for News Feed and Mini-Feed; the matter of ‘who sees what’ was turned into a management problem in the interface from the start (🔗). The critical context here is this: this move is not merely a privacy setting; it is a declaration that the feed is not simply a record of ‘what happens,’ but a mechanism that distributes visibility.
Instagram’s 2016 announcement that it would abandon chronology and switch to showing ‘the posts we think you care about first’ is another clear threshold in the centralization of ranking (🔗). This announcement, citing the rate of content users ‘miss,’ declares that the feed will now be constructed not like a timeline but like a probability calculation. Once you move to a probability calculation, content is handled not as ‘itself’ but as a ‘likelihood of interest’; the feed operates not despite the user, but through the user.
Facebook’s 2018 claim that it recalibrated News Feed toward ‘meaningful social interactions’ shows that ranking is not only technical optimization but a way of setting norms (🔗). Even if the word ‘meaningful’ sounds like a moral declaration here, its practical counterpart is which kinds of interaction will receive more weight in ranking. In other words, the language of ‘a better experience’ is the language of choosing criteria; and choosing criteria decides which contents will survive.
TikTok’s 2020 explanation of how it builds the For You feed is one of the starkest disclosures of this order. TikTok writes that recommendations are ranked by the combination of various factors, tuned starting from new user interests and even by ‘not interested’ signals; the feed is defined as the continuous processing of user behavior with feedback (🔗). Here ‘recommendation’ is not presenting content, but world-building from behavior. For the same reason, in everyday experience ‘the algorithm’ is perceived not as a tool but as an environment; because what determines the environment is the visible order.
Instagram’s 2022 offering of chronological views like ‘Following’ and ‘Favorites’ is also prone to misunderstanding in this context. This move, rather than bringing chronology back, reinforces the legitimacy of the default ranking by making chronology an option; it produces a sense of control, but does not dislodge the center of the basic order (🔗). Thus ranking not only gets ahead of content; it becomes the precondition of the act of ‘seeing content.’ Content is no longer the beginning; it becomes ‘content’ after being selected by ranking.
Sense-certainty: The conversion of what is thought to be ‘this post’ into a metric
In Hegel, sense-certainty (sinnliche Gewißheit) is consciousness’s barest claim: ‘This, now, here.’ At this level consciousness thinks it has the most immediate contact with reality. Yet precisely this claim to immediacy dissolves within consciousness’s own operations; because to determine, point out, and share the ‘this’ with others, a form is required. Form does not preserve singularity; it generalizes singularity. The claim to immediacy turns into mediation most quickly.
In social media, nostalgia for a ‘chronological feed’ is the modern guise of sense-certainty. The desire to ‘see what happens as it is’ is described as if it were a direct window. But the 2006 example shows that this window was designed as a door from the very first moment. The backlash created by News Feed and the privacy controls developed in response make it clear that ‘seeing’ itself is managed: who will see what, in what form, and in what context is an object of settings (🔗). Sense-certainty here is not the spontaneity of ‘what appears in the feed’; it is the feed’s collecting and distributing visibility. ‘This post’ is no longer only content; it has turned into an appearance permitted by the feed.
In 2016, Instagram legitimizes moving away from chronology with a quantitative justification such as ‘users miss an average of seventy percent of the feed’ (🔗). This sentence is the breaking point of sense-certainty: the value of ‘what is happening now’ is redefined through the ‘rate of being missed.’ Seeing is now arranged not through singularity but through a sense of lack. The sense of lack provides the psychological ground needed for the transition to the ranking mechanism; because ‘seeing’ ceases to be an experience in itself and becomes an access problem that must be optimized.
Thus the social media counterpart of sense-certainty turns into this: while it is thought that one is looking at a singular post, in fact that post’s appearing ‘here and now’ is already the result of a selection. When selection is made invisible, experience is lived as immediacy; when selection becomes visible, it is lived as ‘manipulation.’ From a Hegelian perspective, the issue is not the truth or falsity of the accusation of manipulation, but that the fantasy of immediacy already rests on a mediated structure. What is called ‘this post’ is, from the very first glance, the production of an appearance bound to a metric.
Perception: The splitting of content into a bundle of qualities and the conversion of aesthetic selectivity into signal
At the level of perception (Wahrnehmung), consciousness no longer remains in the plain singularity of the ‘this’; it holds the thing as the unity of properties. A post, a video, or a photo is no longer merely ‘this content,’ but ‘this bundle of qualities.’ Inside this bundle are many elements such as speed, rhythm, framing, emphasis on face and body, density of language, headline structure, dose of conflict, sound selection, visual brightness, and the ‘readability’ of the composition. Perception processes these elements not with a conscious aesthetic theory but with reflexes; yet when reflex repeats, it becomes preference. And when preference is recorded, it becomes a metric.
Instagram’s 2016 announcement makes exactly this transformation clear. The feed’s ‘interestingness’ order will be constructed with variables such as ‘the likelihood you’ll be interested in the content, your relationship with the person who posted, and timeliness’ (🔗). This language shows that it treats content not as a ‘thing’ but as a ‘composition of qualities.’ Even ‘relationship’ can be reduced, rather than being a human relationship, to measurable traces of past interactions. Even temporality becomes, not the immediacy of ‘now,’ but a variable the model will score.
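The three variables the announcement names can be pictured, very loosely, as inputs to a single score. The sketch below is an assumption for illustration only: the field names, the [0, 1] scaling, and the weights are invented, not Instagram’s.

```python
# Hypothetical scoring of the three announced variables: interest likelihood,
# relationship, timeliness. Weights and scales are invented assumptions.
def score(post, w_interest=0.5, w_relationship=0.3, w_timeliness=0.2):
    # Each field is assumed to be pre-normalized to [0, 1].
    return (w_interest * post["interest"]
            + w_relationship * post["relationship"]
            + w_timeliness * post["timeliness"])

posts = [
    {"id": "friend_photo", "interest": 0.4, "relationship": 0.9, "timeliness": 0.8},
    {"id": "viral_clip",   "interest": 0.9, "relationship": 0.1, "timeliness": 0.3},
]
feed = sorted(posts, key=score, reverse=True)
print([p["id"] for p in feed])  # ['friend_photo', 'viral_clip']
```

Even in this caricature, ‘relationship’ enters not as a human bond but as a number; the point of the passage survives translation into code.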
This approach is a sufficient starting point for understanding how recommendation systems work in practice: while perception categorizes contents, it simultaneously categorizes the user. The user’s ‘aesthetic selectivity’ is decisive here; because selectivity produces not only what is liked but also what is ignored. Swiping past, not waiting, not reading, not stopping—these are also the negative preferences of perception and, in many systems, carry signal value. That is why ‘impatience’ becomes a metric as much as ‘liking’ does. Once it becomes a metric, personal taste is no longer personal; it is converted into a generalizable pattern of behavior.
TikTok’s For You explanation makes this context even clearer. TikTok writes that the system ranks recommendations according to various factors and is tuned not only by signals with which ‘you express interest,’ but also by negative signals such as ‘not interested’ (🔗). That is, just as perception breaks down and classifies contents, it also breaks down and classifies user reactions. This classification then returns to the user as a feed; the user lives the organized likeness of their own reflexes as if it were ‘the world.’
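How a negative reaction can carry signal weight just like a positive one can be sketched with a minimal update rule. The rule, the learning rate, and the topic weights below are assumptions for illustration, not TikTok’s actual mechanism:

```python
# Minimal sketch of positive/negative signals tuning a per-topic interest weight.
# The update rule and all numbers are invented for illustration.
interest = {"cats": 0.8, "finance": 0.5}

def update(topic, signal, lr=0.2):
    # signal = +1.0 for e.g. watching to the end, -1.0 for "not interested";
    # the weight is clipped to stay within [0, 1].
    interest[topic] = min(1.0, max(0.0, interest[topic] + lr * signal))

update("cats", -1.0)     # a "not interested" tap on a cat video
update("finance", +1.0)  # a finance clip watched to the end
print({k: round(v, 2) for k, v in interest.items()})  # {'cats': 0.6, 'finance': 0.7}
```

Swiping past without tapping anything would simply be a weaker negative signal in the same scheme; silence, too, is classified.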
The critical consequence of the level of perception is this: content production is also adapted to this logic of fragmentation. The producer is pushed to produce not ‘a singular narrative’ but ‘a quality set compliant with the metric’; because content is recognized first through qualities, then distributed. This is not done by a direct command; it is done by selection pressure. Some forms circulate more, others circulate less; circulation is the binding of perception to a metric. Thus aesthetic selectivity ceases to be only the viewer’s private preference; it turns into a general language of ranking. Perception constitutes not what it thinks it sees, but what it will be compelled to see.
Understanding and force: The model as an ‘inner world,’ ranking as ‘law’
In Hegel, understanding (der Verstand) begins by assuming an inner world (Inneres) behind appearance (Erscheinung). This assumption is not merely that consciousness thinks ‘there is an order behind it’; it is sharper: there must be a law (Gesetz) that guarantees the consistency of appearance, otherwise appearance falls into a heap of contingencies. The social media experience resembles a Hegelian experience of ‘understanding’ at exactly this point: it is sensed that the feed is not random; the question ‘who is shown what, and why’ turns into the belief that behind appearance there is an inner mechanism. In practice this inner mechanism is called a ‘model’; but the model is not a fixed formula, it is a continuously updated network of relations. It is lived like a law because it reproduces; but it cannot be explained like a law because its raw material consists of users’ scattered acts.
At this level the concept of ‘force’ (Kraft) comes into play. In Hegel, force is what produces appearance but does not directly appear in appearance; it is known by its effect, not by itself. In the recommendation system too, force does not stand as the ‘quality’ of individual contents; it reveals itself through the outcomes created by ranking. When a platform says ‘we are updating ranking,’ what looks like a small adjustment from the outside changes the topology of content circulation. Facebook’s 2018 announcement that it changed News Feed ranking toward ‘meaningful social interactions’ clearly shows this logic of force: the law will be established not from the value of ‘content’ but from the type of interaction; moreover, by writing that it will reduce what is called engagement-bait, comment/reaction begging, it defines for itself which interaction will count as ‘real’ (🔗).
The level of understanding draws this from such declarations: so behind the content world there is a ‘regime of criteria,’ and this regime can be changed. But the same understanding quickly hits a contradiction. If there really is a law, why can no one learn clearly how it works? Here the algorithm’s ‘becoming opaque’ cannot be reduced to an accident; opacity is the structural tension between understanding and force. Because the law is produced from user acts, it changes as it is explained. The reason it changes as it is explained is not only ‘they want to hide it’; there is a more fundamental reason: the law operates less like an explainable set of rules and more like a statistical tuning order, and the order becomes a target at the moment it explains itself; and once it becomes a target, it corrupts the object of measurement. That is why platforms, on the one hand, publish texts about ‘how we recommend,’ and on the other hand, avoid precise formulas in these texts.
TikTok’s text explaining the For You feed establishes this inner-law idea in a naked way: it says recommendations begin with ‘interest signals’ and are tuned also with ‘not interested’ signals, producing a ranking output through the combination of many factors (🔗). In this account, the law is not a single rule but a weighted sum; and force becomes visible right here, because how the weights change is lived by the user as the experience of ‘how the world changed.’ The headings YouTube lists for recommendations strengthen the same schema: data such as watch history, search history, subscriptions, and likes are used to produce a ‘likelihood of interest’; it is explicitly written that likes, in particular, help predict the likelihood of being interested in similar videos in the future (🔗).
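The phrase ‘a weighted sum’ can be taken almost literally as a sketch. The factor names and weights below are invented; the only point is that retuning the weights, with no change to the contents themselves, flips the order of the world:

```python
# Sketch: ranking as a weighted sum; changing weights re-ranks identical items.
# Factor names and values are illustrative assumptions, not any real platform's.
def rank(items, weights):
    def score(item):
        return sum(w * item[factor] for factor, w in weights.items())
    return [i["id"] for i in sorted(items, key=score, reverse=True)]

items = [
    {"id": "long_essay", "watch_time": 0.9, "shares": 0.2},
    {"id": "hot_take",   "watch_time": 0.3, "shares": 0.9},
]

print(rank(items, {"watch_time": 0.7, "shares": 0.3}))  # ['long_essay', 'hot_take']
print(rank(items, {"watch_time": 0.3, "shares": 0.7}))  # ['hot_take', 'long_essay']
```

A 2018-style shift toward ‘meaningful interactions’ is, in this caricature, just the second call: same items, new weights, new world.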
At this point, the model’s being lived as an inner world and ranking’s being lived as a law merge. While the inner world is an assumption posited to explain appearance, in the feed experience the inner world effectively turns into a center of governance. User and producer look at the same inner world and see different things: the producer wants to pass through the model’s gate; the viewer wants to escape the model’s gate. Both appear to demand the same thing: transparency. Yet the demand for transparency itself is limited by the nature of the law; because the law is not a fixed ‘rulebook,’ but a variable field of force arising from the sum of acts. Opacity becomes a governance technique of this field of force: the law both exists and, when asked ‘exactly where is it,’ slips; because the law is rewritten forward while being measured backward.
This tension also opens a broader context of institutions governed by metrics. Metricization becomes a mode of governance not only of social media but of many domains; the aim is to replace meaning itself with measurement. An example where this transformation is discussed within a frame like ‘when the objective function is set up wrong, the institution loses its meaning’ can also be seen in analyses emphasizing that metric and algorithmic logic can erode institutional value (🔗). The connection here is that in social media the ‘inner world’ is not only technical, but a managerial regime: the law is positioned not behind appearance as explanation, but above appearance as rule.
The inverted world: The substitution of quality by likes and the reversal of the value criterion
Hegel’s moment of the inverted world (verkehrte Welt) is that the ‘law,’ instead of explaining appearance, emerges from within appearance through an inverted functioning. It is not a simple surprise like sweet turning out sour; what is at stake is the displacement of the criterion. In the social media context, inversion is the silent transformation of the sentence ‘good content gets likes’ into the sentence ‘content that gets likes is counted as good.’ This transformation is not done with a declaration of intent; it is done with the institutionalization of the metric. When the metric is institutionalized, quality ceases to be a trait sought in content; it becomes the name of what can pass the metric.
A crude threshold of this inversion on the producer side is clearly seen in YouTube’s 2018 raising of thresholds for the Partner Program. YouTube announces that it ties the gate of eligibility for ad revenue to the conditions of 1,000 subscribers and 4,000 watch hours in the last 12 months; below the threshold, independently of any ‘good-bad’ debate, one remains outside the system (🔗). The same thresholds are also seen to be justified in the Google product blog; that is, the threshold is not a decision of a single department, but is embraced as the platform’s regime of value (🔗). Here inversion operates not as ‘the reward goes to the good,’ but as ‘the reward threshold is the definition of the good.’ The producer has to prove quality not with narrative but with the metric; the metric ceases to be an indicator about quality and turns into the measure that constitutes quality.
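The threshold logic is simple enough to state as code. The two numbers are the ones the 2018 announcement states (1,000 subscribers, 4,000 watch hours in 12 months); the function itself is a reconstruction for illustration, omits YouTube’s other eligibility conditions, and the actual rules have changed since:

```python
# The 2018 YouTube Partner Program thresholds as announced; this function is a
# reconstruction for illustration, not YouTube's code, and omits other conditions.
def ypp_eligible(subscribers: int, watch_hours_12mo: float) -> bool:
    return subscribers >= 1_000 and watch_hours_12mo >= 4_000

print(ypp_eligible(950, 12_000))   # False: no 'good-bad' debate enters the gate
print(ypp_eligible(1_000, 4_000))  # True
```

Below the line, the quality of the watch time is irrelevant; the threshold is the definition, not the indicator, of the good.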
On the viewer side, inversion is more cunning, because on the surface the language of ‘taste’ is still spoken. People live the like as a judgment; yet the like is at the same time a distribution command. Facebook’s 2018 change makes this double function explicit: the aim of ‘meaningful interaction’ strengthens an order in which more comments and shares mean more visibility; but the same text, by writing that it will reduce engagement-bait, declares that even the begging done for visibility will be regulated by the system itself (🔗). The inversion here is two-layered. In the first layer, likes replace quality. In the second layer, because techniques for producing likes are not counted as ‘natural,’ they are suppressed; that is, the system tries to manage its own metric-dependence—produced by itself—again through the metric. This is the governance form of inversion: the law stigmatizes its own side effect as ‘abnormal’; but the side effect is born from the law’s logic.
At this point, ‘opacity’ turns into the fuel of inversion. If the rule were clear, the producer would exploit the rule by the shortest route; the viewer too would mechanize the feed by knowing exactly what will happen when pressing which button. But the platform wants both to steer behavior and to present its steering as ‘natural preference.’ That is why the law is not fully explained; yet because it is not fully explained, it appears ‘arbitrary.’ The experience of arbitrariness is the psychological face of inversion: people begin to live the generalized result of their own acts as an external power.
The Cambridge Analytica scandal was a threshold that made visible how inversion operates not only as an aesthetic and economic force but also as a political one. The scandal’s exposure grew around allegations that data collected via Facebook were used for profiling and targeting; here the issue was not a singular ‘bad actor,’ but that likes and similar signals could become raw material for behavior prediction (🔗). Inversion sharpens at this point: while the like is lived as an innocent gesture of approval, the generalized trace of the same gesture can be converted into a capacity for prediction and intervention. Thus the sentence ‘I just liked it’ turns, at the system level, into the result ‘I gave myself over to the metric.’ Even if the content of the law does not change, the social meaning of the law turns upside down; because the metric not only selects content, it also selects the subject.
Self-consciousness: The economy of recognition (Anerkennung) and the Like becoming the new social currency
In Hegel, self-consciousness (Selbstbewußtsein) is born when consciousness finds itself not in the world of objects but in another consciousness. Knowing oneself is tied to being recognized by another; recognition (Anerkennung) is not an ornament, it is the condition of the subject’s having itself. Social media does not leave this relation at the level of ‘communication’; it measures recognition, stores it, redistributes it. Once measured recognition becomes the raw material of the feed, recognition becomes both a social and a technical object. The Like is the point of intersection of these two domains: it means both ‘I saw you’ and ‘multiply you.’
In YouTube’s recommendations text, the function of likes directly declares this duality: it is explicitly written that liked videos are used to predict the likelihood of being interested in similar videos in the future (🔗). This sentence shows that the Like is now less a ‘judgment’ than a prediction input. Once it is a prediction input, the Like not only records the past; it constitutes the future. The level of self-consciousness becomes technical here: while the person signals what they think they ‘like,’ at the same time they hand the system what they will be steered to like. Recognition ceases to be the ground on which the subject constitutes itself; it becomes the subject’s handing itself over to an external model.
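The mechanism ‘a like today predicts interest tomorrow’ can be sketched with a deliberately crude similarity measure; shared tags stand in for whatever representation a real model uses, and everything below is an illustrative assumption:

```python
# Sketch: a recorded like becomes a prediction input for ranking new videos.
# Tag overlap is a stand-in for a real similarity model; all data is invented.
liked = [{"id": "v1", "tags": {"cat", "slowmo"}}]
candidates = [
    {"id": "v2", "tags": {"cat", "kitchen"}},
    {"id": "v3", "tags": {"news", "debate"}},
]

def predicted_interest(video):
    # Score = largest tag overlap with any previously liked video.
    return max(len(video["tags"] & past["tags"]) for past in liked)

ranked = sorted(candidates, key=predicted_interest, reverse=True)
print([v["id"] for v in ranked])  # ['v2', 'v3']
```

The like does not merely record a past judgment; it parameterizes the next feed.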
TikTok’s For You explanation establishes the same point in another language: interactions, viewing behaviors, and negative signals are combined to create a personalized feed (🔗). Self-consciousness here is bound to a continuously feedback-driven circuit of recognition. The person thinks they express themselves, but the manner of expression is measured; the metric is then returned to the person as ‘this is who you are.’ For the producer, the same circuit reduces the question ‘who am I’ to the question ‘which format performs’; because recognition is granted not through content but through the metric.
This reduction means the transformation of recognition into social currency. To be a currency is not merely to be ‘cared about a lot’; it is to gain an exchangeable value. The Like governs this exchange in two directions. In one direction it opens, for the producer, the road to visibility and the possibility of income; in the other direction it produces, for the viewer, identity belonging and an in-group position. Once recognition becomes tokenized, the circulation of the token becomes more real than ‘content.’ That is why platforms sometimes attempt to reduce the token’s visibility; but they cannot remove the token itself, because the system’s law operates through the token.
Instagram’s tests of hiding Like counts and later its move to give users the option to hide Like counts show this contradiction well. On the one hand, it is accepted that Like counts produce competition and pressure; on the other hand, the Like is an indispensable signal for measurement and ranking. That is why the solution is not to eliminate the Like, but to manage the visibility of the Like. Instagram’s 2021 text announcing that ‘everyone can choose to hide public Like counts’ accepts the ‘stress’ side of the recognition economy while leaving the technical function of recognition in place (🔗). TechCrunch’s 2019 reports also show that hiding likes was tested in different countries and that this was carried out together with the platform’s broader measurement regime (🔗). The self-consciousness moment here is this: recognition remains the ground on which the person constitutes themself, but because it has now been moved onto a numerical surface, the person begins to see their own recognition as a ‘score.’ When the score is visible, comparison and competition increase; when the score is hidden, suspicion and paranoia increase; because the law continues to operate in the background, only the signs are veiled.
The Hegelian tension of self-consciousness is that recognition can be both necessary and destructive. In the context of social media, this tension knots itself in the Like’s being at once a gift and a command. Recognition is lived as reciprocity; but the moment it is bound to scale, reciprocity breaks down. Because reciprocity is possible through two subjects seeing one another; whereas here recognition is translated into a ‘pattern’ seen by a model. Recognition translated into a pattern, when it returns, is felt no longer as an experience belonging to the person, but like a judgment belonging to the system. For this reason, the level of self-consciousness is not merely a ‘desire to be liked’; it turns into the necessity of knowing ‘to which law the like is bound.’ When the necessity of knowing cannot be met, opacity becomes the condition of continuity of the recognition economy.
Master–slave: The platform’s seeming like the master, remaining dependent on user–producer labor
Hegel’s master–slave dialectic (Herrschaft und Knechtschaft) allows reading the modern platform relation without leaning on the fairy tale of a ‘malicious center’; because what is decisive here is not so much one side’s crude command over the other, but the inversion of dependency that is necessary for the command to be able to work. The platform seems like the master because it is the gate of the feed; it distributes visibility, closes it, opens it, changes the rules, renames the criteria. The producer and the viewer, meanwhile, are lived as the side that waits in front of this gate, adapts, is tried again every day. But Hegelian attention goes not to the asymmetry hidden inside this picture, but to the ground on which the asymmetry is established: the master cannot produce its own power directly; it can harvest it only through the slave’s act, its labor, its ‘work.’ The platform’s power, too, is nothing other than the conversion into metrics of seemingly ‘free’ acts such as users clicking, watching, scrolling, sharing, liking, complaining, and producing content.
The first layer of this dependency is the data chain. In the advertising and measurement ecosystem, platforms long assumed cross-app tracking capacity as if it were ‘natural infrastructure.’ Apple’s putting App Tracking Transparency into effect with iOS 14.5 made visible that this assumption was in fact tied to an external permission regime; that is, mastery was leaning on the door of another mastery (🔗). The inversion here is not that the myth ‘the platform knows everything’ is broken because of a technical detail; the inversion is the revelation that knowing itself is a labor chain. The viewer, with the identity crumbs they drag across apps; the producer, by producing content and pulling the viewer there; the advertiser, by committing budget and demanding feedback—continuously reweave this chain. The platform appears as master, but it is not a self independent of the slave’s work.
The second layer concerns cutting off the ‘outside eye.’ The master’s power comes not only from holding the door, but also from monopolizing knowledge about how the door works. That is why platforms either bring third-party tools and the independent ecosystem inside or throw them out. X’s announcement that it would end free API access is, in this context, a ‘door lock’ move; because the API is the circulation channel of both alternative interfaces and measurement and oversight (🔗). Even if the announcement is framed in languages like ‘cost’ and ‘sustainability,’ at the Hegelian level the issue is this: the master, in order to bind the slave’s labor more tightly, makes the slave’s tools themselves dependent on it. The slave’s labor is not only producing content; the slave’s labor is also producing and operating the tools that can measure the master. The API lock narrows the public circulation of this second labor.
The third layer is the baring of community labor. The blackout wave that emerged around API pricing on Reddit revealed that what the platform packages as ‘community’ is in fact a work regime; when moderation labor and continuity of use are withdrawn, the master’s ‘spontaneously functioning order’ stumbles (🔗). With a Hegelian gaze, what appears here is that the slave not only produces, but also maintains the order. The platform presents the order as ‘rule’; but the survival of the rule depends on the sum of thousands of small decisions, complaints, cleanups, promotions, and exclusions. The slave establishes the master’s world every day anew by ‘processing’ it; the master, too, appropriates this as if it were its own nature.
The fourth layer is naked sovereignty that appears when content circulation is cut. Meta’s decision to restrict access to news in Canada showed how the power of ‘yes/no’ can be exercised in an instant; for news publishers the issue is not only loss of revenue, but the experience of distribution being reduced to a single button (🔗). This move can be read as the master’s arbitrariness, but the Hegelian tension is more ruthless: even this arbitrariness gains meaning through the slave’s labor. If there is no news, time, attention, and interaction flow elsewhere; that is, by ‘cutting,’ the platform redirects the slave’s act into other channels. The master’s command does not abolish the slave’s work; it reshapes it.
The fifth layer is the narrowing of transparency. The shutdown of tools like CrowdTangle makes it harder to see from outside how public circulation is shaped; this is not a detail ‘for researchers’ only, it is a rearrangement of the knowledge leg of the master–slave relation (🔗). As transparency narrows, the ‘algorithm’ appears like a more opaque inner force; whereas opacity is not the magic of power, but the management of dependency. To the extent that the slave can see the results of its own work less, it becomes more inclined to take the master as ‘absolute’; and the master uses this illusion to draw the slave’s work more steadily.
The language of flattery (Schmeichelei): Like/Share governing circulation as operation, not as language
Flattery (Schmeichelei) is not, in the everyday sense, saying words that will please someone; in the Hegelian context, flattery is a language regime that instrumentalizes the recognition relation. Here language works not to speak truth, but to sustain the relation and to reproduce a balance of power. That is why social media interactions, while seeming like ‘expression,’ increasingly turn into ‘operation’ in a more naked way: Like and Share, rather than reporting a judgment, trigger a distribution mechanism. The clearest side of this transformation becomes visible in the platforms’ own texts. TikTok writes openly that recommendation systems personalize through ‘interactions,’ that user interactions determine ranking as signals; that is, praise is converted into the raw material of the system’s decision (🔗). YouTube, too, systematically explains in its own descriptions that recommendations derive from past interactions and behavior patterns; recommendation multiplies not what the viewer ‘likes,’ but the metric the viewer taught through liking (🔗).
At this point, flattery operates in two directions at once. From the producer’s perspective, flattery ceases to be a matter of looking cute to the viewer; it turns into the work of producing metric-compliant signals. Title, thumbnail, rhythm, repetition, call-to-action sentences, and emotional tone are optimized less through ‘what is said’ than through ‘how it is made to hit.’ From the viewer’s perspective, flattery ceases to be a gesture of support to the producer; it becomes the act of teaching one’s own attention and selectivity to a model. Praise here is not the content of a relation, but the key card that puts the relation into circulation.
Facebook’s 2018 line shows how the language of flattery is institutionalized by being renamed ‘more meaningful.’ While the platform sets up the frame of giving weight to ‘meaningful interactions’ in News Feed ranking, it places interaction not only as a measure of social benefit but as the heart of the ranking law; that is, flattery means not ‘speaking better,’ but ‘a more effective signal’ (🔗). (🔗) Thus Like and Share cease to be a judgment on content; content is reduced to the carrier of the judgment derived by Like and Share. The inversion of language is this: while people drop a Like to say ‘I agree,’ the system reads the Like as the command ‘show more’; people, without intending ‘show more,’ have produced a distribution order.
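The inversion described above, a Like read not as agreement but as a ‘show more’ command, can be sketched in a few lines. The signal names and weights below are invented for illustration and are not any platform’s actual formula:

```python
# Hypothetical sketch: interaction signals converted into a distribution score.
# All weights and field names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Signals:
    likes: int = 0
    shares: int = 0
    watch_seconds: float = 0.0
    comments: int = 0

# Invented weights: e.g. a share "commands" more circulation than a like.
WEIGHTS = {"likes": 1.0, "shares": 3.0, "watch_seconds": 0.1, "comments": 2.0}

def rank_score(s: Signals) -> float:
    """Collapse interaction signals into a single ranking score."""
    return (WEIGHTS["likes"] * s.likes
            + WEIGHTS["shares"] * s.shares
            + WEIGHTS["watch_seconds"] * s.watch_seconds
            + WEIGHTS["comments"] * s.comments)

before = rank_score(Signals(likes=10, shares=2, watch_seconds=300))
after = rank_score(Signals(likes=11, shares=2, watch_seconds=300))
# The extra like records no judgment anywhere; it only raises the item's
# distribution score -- the "show more" command the paragraph describes.
assert after > before
```

The point of the sketch is that content never appears in the function at all: only its signal residue does.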
The natural consequence of this operationalization is the disciplining of flattery channels. Features like hashtag following once allowed the viewer and the producer to live an illusion of relatively visible control over circulation; because the statement ‘I follow this tag’ was an explicit preference marker. When such channels are removed or restricted, the language of preference shifts to more implicit signals; thus flattery becomes less something ‘said’ and more something ‘done.’ Instagram’s removing the hashtag-following option strengthens this line; discovery is pulled toward a more closed, more model-centered, and more externally unreadable order (🔗). (🔗) Moves like limiting the number of hashtags per post against hashtag spam also fit the same logic: the flattery trick is explicitly suppressed, but the flattery economy does not disappear; it is only sustained under the mask of ‘natural behavior’ (🔗). (🔗)
The dark side of the language of flattery concerns structure, not ‘good will.’ Once praise becomes the main input of distribution, praise can no longer remain innocent as a relational gesture; because the material counterpart of praise is visibility, and the material counterpart of visibility is money, prestige, and power. That is why flattery is lived at once as solidarity and becomes a cog of an exploitation mechanism. A local conceptual reading explicitly discusses this inversion between ‘being praised’ and ‘being operated’; the material function of praise within circulation inverts the meaning of praise (🔗). (🔗) In a more theoretical frame, a reading that the interaction economy has the subject live by ‘converting its own desire into a metric outside’ conceptualizes why the language of flattery continually reproduces itself; here praise turns into the return vehicle of the subject’s own selectivity (🔗). (🔗)
The law of the heart: The claim ‘my taste is universal’ generalizing and returning as an alien force
Hegel’s moment of the ‘law of the heart’ (Gesetz des Herzens) provides the sharpest lens for modern recommendation systems; because the tragedy here is not that a bad external power comes and corrupts an innocent inner world. The tragedy is that the inner world attempts to universalize its own law, and the universalized law then returns to the subject as if it were an alien force. ‘My taste’ is first lived as a right. Then this taste is externalized through measurable acts such as Like, Share, watch time, scroll speed, pausing, and commenting. Then this externalized act is generalized; because the model learns not the caprice of individual persons, but the common metric of similar behavior patterns. In the final stage, the common metric stands across as if it were the ‘general order,’ and the subject experiences the order it produced as an imposition. The ‘madness of self-conceit’ (Wahnsinn des Eigendünkels) Hegel described is updated precisely here: the subject’s inner law becomes the objective world; the subject, by taking that world as ‘outside me,’ becomes angry with itself.
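The four-step loop just described (act, then signal, then generalization, then alien order) can be illustrated with a toy aggregation. The users, topics, and the simple averaging rule are all invented; real systems use far richer models, but the structural point survives:

```python
# Toy illustration of the "generalization" step: the model learns not one
# person's caprice but the common pattern of similar users, then returns
# that pattern to each of them as "the order". All data is invented.
import statistics

# Each user's taste, externalized as per-topic engagement rates (hypothetical).
users = {
    "a": {"cats": 0.9, "news": 0.2, "music": 0.4},
    "b": {"cats": 0.8, "news": 0.1, "music": 0.5},
    "c": {"cats": 0.7, "news": 0.3, "music": 0.3},
}

def general_law(users):
    """Aggregate individual signals into one shared metric per topic."""
    topics = next(iter(users.values())).keys()
    return {t: statistics.mean(u[t] for u in users.values()) for t in topics}

law = general_law(users)
feed_order = sorted(law, key=law.get, reverse=True)

# Every user now receives the same generalized ordering -- derived from
# their own acts, yet standing over against each of them as "world order".
print(feed_order)  # cats first, since the common metric ranks it highest
```

No individual user chose this ordering, yet every input to it came from individual users: that gap is exactly where the order is experienced as imposition.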
The technical face of this inversion is not hidden in the platforms’ recommendation narratives; on the contrary, it is declared. TikTok explains personalization through weighting user interactions as signals; that is, ‘the world I see’ is ‘the generalized return of my interactions’ (🔗). (🔗) YouTube, too, constructs recommendations through predicting orientation toward similar content based on past behaviors; the viewer’s liking here is not only feeling, but the input of the future order (🔗). (🔗) That is why the complaint ‘the algorithm became opaque’ often misses a more fundamental alienation established before opacity: because the law already derives from the subject’s act, a gap opens between the subject’s ‘one-to-one intention’ and the subject’s ‘behavior poured into metrics.’ Opacity is the interface regime that makes this gap manageable; it is not the cause, but the governance form of the results.
The return of the law of the heart becomes even clearer on the stage of ‘control options.’ Instagram’s offering chronological views like Following and Favorites looks like an answer to the heart’s demand ‘my order’; but the same announcement also openly says that more recommendations and more personalization will be added over time (🔗). (🔗) At the Hegelian level, the meaning of this is: the heart demands its own law in the form of an ‘option’; but the general order remains as the default feed. The option does not replace the law; it becomes a buffer that absorbs the unease produced by the law. The moment the heart says ‘I want,’ even the form of what it wants is determined by the general order.
This inversion has become visible not only in the company–user relation, but also at the level of law. The Digital Services Act’s requiring large platforms to offer at least one recommendation option not based on profiling is the institutional translation of the law of the heart; the demand ‘my order’ turns into the clause ‘non-profiled feed’ (🔗). (🔗) The paradox here is this: law wants to limit the objectification of the inner law, but the limitation is again tied to the option interface. Thus opacity ceases to be a technical secret and turns into a governance form; the option exists, but the option’s real effect can dissolve within the architecture of the default.
The ‘sterile world’ desire of the law of the heart can be seen clearly in the example of political content. Instagram’s bringing automatic limitations to recommendations of political content and making this limit changeable in settings is the objectification, as platform policy, of the heart’s wish ‘not to encounter what disturbs’; but classifying what is ‘political’ inevitably produces an ambiguous boundary, and the ambiguous boundary becomes the raw material of new complaints (🔗). (🔗) While the heart universalizes its own comfort, it turns the conditions of its own comfort into an external power for others. What it wants happens; but the moment it happens, it ceases to be ‘my wish’ and is lived as ‘platform imposition.’
The same mechanism appears on the producer side as the experience of an ‘embargo.’ In Turkey, independent news sites’ experiencing sharp drops in Google Discover and News traffic as a kind of invisible embargo is the return of the law of the heart at a social scale; because here the issue is not the quality of individual contents, but that with a change in the ranking law the world appears overnight like another world (🔗). The law’s changing is lived like an ‘attack from outside,’ but the raw material on which the law works is signal residue produced for years by everyone’s everyday preferences. When the law of the heart is universalized, it no longer belongs to the heart; it stands against the heart as ‘world order,’ and the heart, because it cannot recognize its own product, takes it for an alien power.
The spiritual animal kingdom: The discourse of ‘the thing itself,’ fake authenticity, and signal engineering
The moment Hegel calls the spiritual animal kingdom (geistiges Tierreich) is the plane where consciousness constitutes itself as if it were a ‘world of autonomous agents,’ where everyone believes they do their own work in their own name, but precisely for that reason a shared illusion is produced. Here the claim ‘I am only doing my job’ ceases to be a personal declaration of intent; with behaviors being added onto one another an order emerges, and that order begins to operate like an objective necessity independent of individual intentions. In the social media ecosystem, the contemporary name of this is the discourse of ‘authenticity’ and ‘quality’: the producer assumes they produce for the sake of the ‘thing itself’ (Sache selbst), and the viewer assumes they select ‘what is truly good.’ But at the practical level on which recommendation systems operate, what is done is producing signals more than producing content; content is the carrier of the signal, and the signal is the metric of circulation.
This distinction becomes naked at monetization gates. YouTube’s change dated 16 January 2018, while preserving the language of ‘rewarding quality,’ in fact makes the metric a gate mechanism: subscriber and watch-time thresholds are defined for entry into the Partner Program; thus the link between ‘good work’ and ‘work that passes the metric’ is inverted. The threshold is presented like a technical condition, not a judgment; but the producer’s everyday rationality quickly turns into this: first the threshold, then the content. The authenticity narrative is now not an aesthetic claim, but a threshold-passing technique; the rhetoric of ‘the thing itself’ works like a moral language that conceals the command of statistics (🔗). (🔗)
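The gate mechanism described here reduces to a threshold check. The numeric values below match the publicly announced 2018 Partner Program thresholds (1,000 subscribers and 4,000 watch hours over 12 months); the function itself is only an illustration, not platform code:

```python
# Minimal sketch of a monetization gate: entry is decided by whether the
# metric clears a bar, never by a judgment on the work itself.
# Threshold values are the publicly announced 2018 figures.
SUBSCRIBER_THRESHOLD = 1_000
WATCH_HOURS_THRESHOLD = 4_000  # watch hours over the trailing 12 months

def passes_gate(subscribers: int, watch_hours_12mo: float) -> bool:
    """'Good work' is never evaluated; only whether the metric clears the bar."""
    return (subscribers >= SUBSCRIBER_THRESHOLD
            and watch_hours_12mo >= WATCH_HOURS_THRESHOLD)

assert passes_gate(1_200, 5_000)
assert not passes_gate(900, 10_000)  # quality of content never enters
```

The producer’s rationality of ‘first the threshold, then the content’ follows directly from this shape: the function’s arguments are the only things worth optimizing.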
This logic becomes clearer in the system’s self-defense reflex. At some point the platform has to limit practices like ‘repetition,’ ‘serial production,’ and ‘artificial multiplication’ in the name of ‘authenticity’; because signal engineering hollows out the circulation order and also lowers the value of the metric. In this respect, the sharpening of the ‘inauthentic content’ line in YouTube’s channel monetization policies is important: the problem is no longer only copyright or ‘harmful’ content; it is production forms tuned to deceive the metric itself (🔗). (🔗)
At this threshold, the character of the spiritual animal kingdom is completed: while everyone speaks of ‘work,’ in reality the work is the sustaining of the signal. The resemblance of formats to one another, titles imitating one another, rhythm being copied, templates that provoke reactions being multiplied do not require a psychological explanation like ‘creative poverty’; it is enough that what the system rewards is not ‘work’ but ‘metric.’ For this reason, findings saying that a serious portion of the videos recommended to new user accounts can consist of low-quality attention-farm content point to something harsher than ‘quality fell’ nostalgia: the metric can also turn into a self-feeding dump; because the metric does not represent content, it produces content (🔗). (🔗)
At this point, the ‘algorithm’ is not an external agent, but the internal mechanism of the spiritual animal kingdom’s total behavior. Platforms like TikTok defining recommendation through ‘interaction signals’ is the confession of this internal mechanism: actions like watching, rewatching, liking, sharing, and commenting produce a language of value independent of content; the discourse of ‘authentic work’ is the moral cover pulled over this value language (🔗). (🔗) The translation of this mechanism into everyday language is tied to the same conclusion in a local analysis that describes the TikTok algorithm as operating like ‘voting’: ‘like’ is not a judgment about content, but a command about circulation (🔗).
Hardening and cutting: API locks, news embargoes, category filters, and the experience of ‘imposition’
If the spiritual animal kingdom could continue operating as a calm ‘format economy,’ the inversion could be read only as a slow erosion of quality. But as the circulation order grows, the power struggle over the metric hardens, and the governance technique takes the form of ‘cutting’: access channels are closed, measurement tools are restricted, some content classes are systematically excluded from the feed or brought back in. This hardening is often legitimized with reasons like ‘security,’ ‘quality,’ and ‘user experience’; but its practical result is this: the experience generalizes that what was lived like ‘natural circulation’ can, with a single decision, become ‘yes/no.’
The API lock is the clearest form of this. When a platform closes channels that allow external observation and building alternative interfaces, it prunes not only developers but also public oversight; because independent measurement is the main way of making the inner law externally visible. X’s restricting API access and moving to a paid regime, even if presented like ‘regulating the ecosystem,’ in practice functioned as a cutting move that narrowed outside eyes; the public debate of this move was tied to the idea that the platform’s arbitrary control capacity increased (🔗).
A news embargo is a cruder cutting: a certain content type is removed from circulation across an entire country. Meta’s step on news access in Canada concretizes this power of “yes/no”; in response to the legal framework in Canada, the company explicitly declared that it had ended the sharing and viewing of news content on its platforms (🔗). (🔗) The issue here is not the question “do you like the news”; within the signal economy, news is not merely a content type, but a form of organizing public time. When it is cut overnight, on the user side this is lived as an “external imposition”; whereas the cutting is the logical result of the concentration of power produced by prior aggregate behavior: the metric that has squatted at the center of circulation turns, in a moment of crisis, into a crude switch.
A category filter, by contrast, is the more sophisticated form of cutting: circulation is reorganized through classes such as “politics,” “sensitive content,” “low-quality content,” “spam.” Here ambiguity is not a mistake, but a function; because as long as class boundaries remain unclear, the user cannot get a clear answer to the question “why am I seeing this,” and becomes more inclined to experience the order as a personal injustice. Meta’s back-and-forth approach around political content recommendations is a good example of this regime of ambiguity: in one period it is tied to cutting recommendations with the claim that “users want to see less political content,” in another period it is expanded again with the loosening of the approach; in the end, the question “what is politics” remains surrendered to the company’s classification decisions (🔗).
Another form of cutting being lived as “embargo” at a local scale is also seen in search and discovery traffic. The Reuters report stating that independent news sites in Turkey experienced a sharp drop in Google Discover and Google News traffic and that this created a risk of closure shows, through a current event, how the law of ranking returns to the content producer as an “external force”: the metric changes, revenue falls, public visibility narrows, and the producer side experiences this not as a technical update but as a political shutdown (🔗). (🔗)
The common denominator of these hardenings is this: the order of circulation is now determined not only according to “what is produced,” but according to “through which channel it is measured” and “into which class it is placed.” Cutting moves look to the user and the producer like an arbitrariness imposed from outside; but behind this appearance, there is the signal economy’s own internal logic. As the metric is objectified, the hand that holds it also becomes visible; as it becomes visible, the hand hardens.
Law and counter-oversight: The institutionalization of transparency demands and their conflict with ambiguity
As hardening accumulates, the counter-move cannot remain at the level of a “moral complaint”; it takes on an institutional form. The obligations shaped around the European Union’s Digital Services Act (DSA) can be read as an attempt to draw an external boundary around the recommendation systems’ “inner law”: the issue is no longer transparency left to the platform’s goodwill, but mandatory choice and auditable accountability. One of the most critical points of this framework is the provision requiring very large online platforms and search engines to offer at least one option in their recommender systems that is “not based on profiling”; that is, a feed alternative is defined for the user that is not surrendered to the inner law of behavioral history (🔗).
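The obligation named here, at least one recommender option ‘not based on profiling,’ reduces at the interface level to something like the following sketch. The item fields, the profiled score, and the chronological fallback are assumptions chosen for illustration:

```python
# Sketch of a DSA-style feed option: a default path ranked by a behavioral
# prediction, and one alternative "not based on profiling" (here, plain
# reverse-chronological order). Data and fields are invented.
from datetime import datetime, timedelta

now = datetime(2024, 1, 1)
items = [
    {"id": 1, "posted": now - timedelta(hours=5), "predicted_interest": 0.9},
    {"id": 2, "posted": now - timedelta(hours=1), "predicted_interest": 0.2},
    {"id": 3, "posted": now - timedelta(hours=3), "predicted_interest": 0.6},
]

def feed(items, profiled: bool):
    if profiled:
        # Default: ranked by the model's prediction from behavioral history.
        return sorted(items, key=lambda i: i["predicted_interest"], reverse=True)
    # Non-profiled alternative: newest first, no behavioral history consulted.
    return sorted(items, key=lambda i: i["posted"], reverse=True)

assert [i["id"] for i in feed(items, profiled=True)] == [1, 3, 2]
assert [i["id"] for i in feed(items, profiled=False)] == [2, 3, 1]
```

Note how small the code difference is, and how large the experiential one: which `profiled` value is the default, and how findable the toggle is, carries the entire weight the following paragraphs discuss.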
But there is a gap between the legal text and interface practice. Saying “the option exists” does not mean that the option is de facto accessible; design can render the option invisible, meet the obligation on paper, and sustain the regime of ambiguity. The DSA Observatory’s review discussing how platforms can in practice circumvent rules through manipulative design directly targets this conflict: the problem is not only “is the algorithm closed or open”; it is how options are presented, through which steps they can be found, and in which direction they push user behavior (🔗). (🔗)
Counter-oversight is not limited to the user interface; researcher and regulatory oversight also rests on data access. Recommender-system audits get stuck on the questions “which data will be accessed, with which metric will it be measured, by which method will it be tested”; because the platform’s inner law is a production line designed precisely not to be seen. The analysis in Tech Policy Press discussing recommender-system audits in the DSA context sets up the tension between ambiguity and accountability on this technical-governance plane: an audit is not only the job of writing a good report; without data access, method standards, and enforcement capacity, it carries the risk of remaining symbolic (🔗). (🔗)
For this reason, the “demand for transparency” itself also carries an inversion potential. The platform can package transparency demands in the form of “report,” “option,” “label,” and produce a new layer of legitimacy without truly limiting the inner law. The real measure of counter-oversight is not individual information texts, but under what conditions cutting moves, category filters, and signal engineering are constrained. Even if law appears here like an external intervention, the ground of the conflict does not change: the issue is the problem of the metric that consciousness itself produces then managing it back. The DSA’s aim is to break this back-management through “options” and “audits”; but as long as the regime of ambiguity can reproduce itself through interface and data control, the conflict does not close.
Beautiful soul: The claim of purity, complaint turning into signal, and the inversion of judgment
In Hegel’s stage of moral consciousness, “conscience” (Gewissen) is like a threshold: the word the subject says about itself is no longer merely a declaration of intent, but a claim to reckon with the order of the world. At this threshold, the social media experience sharpens in that an external coercion felt as “they are doing this to me” is in reality the objectified return of the act itself. The complaint about the feed’s “inner law” is often derived from the form of selectivity the complainant maintains within the feed; but when the place it derives from is forgotten, the law appears like an alienated force. This forgetting is not only ignorance, but the very operation of the modern measurement regime: the metric arises from singular acts; at the moment it becomes the total of singular acts, it ceases to be “my act” and stands across as “order.”
The “beautiful soul” (schöne Seele) appears precisely here. The beautiful soul sets itself as “clean” against the dirty world; it judges the world according to its own measure of purity; but this judgment itself remains dependent on the circulation it judges. In platform language, this is the paradox of the complaint “the algorithm broke”: the complaint condemns the field of visibility without leaving the field of visibility; it speaks with the fuel of the field it condemns. The complaint text produces measurable traces such as comment, quote, share, reaction, watch time, and repeat visit; and these traces are the raw material of “circulation decisions.” Thus the complaint ceases to be objection and turns into a mode of participation; to the extent that it becomes a mode of participation, objection is recoded in the system’s own language as “signal.” This should not be read as a moral character flaw; it should be read as a logic of measurement and ranking architecture.
The technical counterpart of this logic is that interaction is now tied not to “expression,” but to a “prediction” mechanism. YouTube explicitly writes that a like does not remain a mere statement of feeling; it is used as a sign that helps predict the probability of future interest (🔗). (🔗) TikTok also describes recommendation logic through “interactions” and signals derived from user behaviors (🔗). (🔗) The reason a complaint cannot remain as “pure speech” is, before the platform’s bad faith, that speech is already operationalized: speech turns into metric; metric into ranking; ranking into world-experience. The beautiful soul’s claim “I am outside” collapses here; because it is not outside, it is inside the metric.
Another face of this collapse is seen in attempts to hide metrics. Instagram’s move to expand tests of hiding like counts shows that the platform accepts metric pressure but ties the solution not to “external critique,” but to an interface setting (🔗). (🔗) The gesture here resembles the beautiful soul’s typical gesture: “dirtiness” (metricization) is complained about; then the dirtiness is placed inside the order as if it were “a cleaned-up option.” But what becomes an option often changes not the structure, but the way the structure is felt. In appearance there is “purification”; in reality the metric regime remains in place, only becoming less discussed. For this reason, the beautiful soul’s critique is inverted along with judgment itself: the voice that says “I reject the system that dirties me” turns the trace left by its rejection into the system’s fuel.
This inversion is not only individual psychology, but a cultural form of appearance. The timeline’s presenting ordinary pieces as if they were an “objectified” totality through curation and algorithmic boosting shows that pure subjective expression has long been placed inside an already assembled apparatus (🔗). (🔗) In such an order, something like “pure speech” would be possible only if it could speak without confusing itself with operation; whereas platform architecture has bound speech and operation to each other from the start. The beautiful soul is therefore not an innocence looking from outside, but a performance of innocence produced from inside: it cannot speak of dirt without participating in dirt’s circulation; and when it speaks of dirt, it participates.
In local debates, this inversion also becomes visible through the link between “being praised” and “being operated”: praise turning into an operation that produces visibility makes it easier for complaint too to be articulated into the same operation (🔗). The issue here is not to say “people are hypocrites”; the issue is that in the circulation economy of language, both praise and complaint are tied to the same metric machine.
Forgiveness and reconciliation: The cycle that does not close without mutual recognition and limited re-subjectification
In Hegel’s route, the beautiful soul is not a “final word”; it is a blockage. The blockage is that the judgment “I am right” suspends the world instead of explaining the world. The exit comes not with a declaration of moral superiority, but with a two-way movement: confession (Geständnis) and forgiveness (Verzeihung). Confession is the subject’s recognizing its own share; forgiveness is refusing to freeze the other as only “evil” (Böse). Without these two coming together, reconciliation (Versöhnung) does not form; because reconciliation is not emotional softening, but the re-establishment of recognition (Anerkennung). In the social media context, recognition is not only the demand “like me,” but the acceptance of reciprocity in a shared production of metric: who produces what, what fuels what, which complaint is tied to which ranking, which “innocent preference” turns into which collective law.
At this point, forgiveness is not absolving the platform. On the contrary, it is accepting that constituting the platform as an “external enemy” breaks the subject away from its own act. Platform decisions produce real consequences; but these consequences operate not only with the platform’s arbitrariness, but together with the social production of the metric. The meaning of forgiveness is not to ignore the agent’s singularity; it is not to reduce the relation between agent and structure to the fairy tale of “one-sided evil.” When this reduction is made, complaint turns into a kind of pure judgment; pure judgment, too, ends up producing signal and feeding the same cycle. Reconciliation begins with a movement of consciousness that can say “I, too, am here” at some point in the cycle; the name of this movement is “re-subjectification,” but it is not an absolute salvation: the metric regime continues, only becoming less fetishized.
A social and institutional face of this re-subjectification is seen in demands for transparency and options. In the European Union’s Digital Services Act, for very large platforms, the condition of offering at least one option in recommender systems that is not based on profiling is explicitly written (🔗). (🔗) This is an intervention against the “inner law” remaining as absolute mystery: the aim is for the user to be treated not only as a measured object, but as a subject addressed with an option. But at the same time, the hiding of this option in the interface, its being hard to find, or its being neutralized with manipulative design shows that the regime of ambiguity can be sustained not only in the model, but also in design (🔗). (🔗) Audit discussions also emphasize that recommender-system examinations get stuck on problems of data access, method transparency, and meaningful accountability; that is, even the demand for “transparency” meets the institutional resistance of the metric regime (🔗). (🔗) For this reason, reconciliation does not end with a single move of law; law only opens a framework in which confession and forgiveness can be possible on an institutional ground.
The reason the cycle does not close is that recognition remains one-way. The user, by saying “they are manipulating me,” declares themselves innocent; the platform, by saying “I am providing you a good experience,” legitimizes itself; the producer, by taking refuge in the purity of “the thing itself,” denies optimization; but in fact everyone functions as different gears of the same metric machine. Forgiveness is the loosening of this mutual denial. To say “I, too, produce” is not only producing content; it is accepting that with the whole of behaviors such as metric, interaction, complaint, avoidance, blocking, quoting, one participates in producing a ranking order. The platform, too, must drop the lie “I only reflect”; because the platform is the agent that not only collects signal, but selects and changes the objective functions that make signal valuable.
Naming this two-way movement can sometimes partially remove complaint from being signal. The concept of ‘enshittification’ (‘boka sarma’ in the Turkish original) tries to name a degradation dynamic in which platforms first treat users well, then squeeze producers, and finally everyone; to the extent that the name moves pure grumbling closer to a language of analysis, it can loosen the cycle (🔗). (🔗) But naming alone is not enough; because the cultural attractiveness of complaint still produces circulation. The limit of reconciliation emerges here: there is no full “exit,” because a publicness lived outside the feed is now the exception. Limited re-subjectification is possible: to form complaint not as “pure judgment,” but as a confession that includes one’s own share; to move toward limiting the platform as an agent without reducing it to “singular evil”; to think recognition not only as a like-token but as the production of a shared order. Hegel’s scene of forgiveness and reconciliation is precisely for this reason not a “moral lesson,” but a closing technique: the way to close the cycle is not to flee outside the cycle, but to mutually recognize how the cycle is constituted.