Abstract
This essay examines the productive yet asymmetric relationship between Critical Theory and Large Language Models (LLMs), interrogating how Frankfurt School traditions of ideology critique, power analysis, and emancipatory thought illuminate the material and epistemic conditions of contemporary AI systems. While LLMs emerge from instrumental rationality and techno-capitalist imperatives, Critical Theory provides the conceptual apparatus necessary to diagnose their embeddedness within structures of domination, their role in perpetuating systemic inequities, and their potential for reifying rather than transcending existing power asymmetries. Through an analysis of bias amplification, computational capitalism, and the crisis of authenticity, this work argues that Critical Theory is not merely an external critique applied to LLMs but constitutes an essential epistemological framework for understanding AI as a socio-technical system imbricated within late capitalist social relations.
I. Introduction: The Historical Convergence of Instrumental Reason and Algorithmic Intelligence
I.i From the Culture Industry to the Algorithm Industry
The relationship between Critical Theory and artificial intelligence represents a contemporary crystallization of the Frankfurt School’s deepest anxieties about technology, rationality, and domination. When Max Horkheimer and Theodor W. Adorno formulated their critique of the “culture industry” in Dialectic of Enlightenment (1947), they identified mass cultural production as a mechanism for manufacturing consent under the guise of entertainment—transforming art into commodity, individuality into conformity, and enlightenment into deception. Cultural standardization, they argued, was not merely a symptom of capitalism but a condition for its reproduction, ensuring that audiences internalized the logic of exchange as a natural horizon of life.
Seventy-five years later, Large Language Models—trained on incomprehensibly vast textual corpora and deployed across the digital infrastructures that organize social, political, and economic life—embody what might be termed the “algorithm industry.” This new phase extends the logic of the culture industry from the realm of distribution to the realm of generation itself. No longer confined to reproducing culture, algorithms now actively produce it—generating texts, images, and discourses that shape cognition, identity, and collective memory. LLMs are not passive mirrors of language but active participants in its evolution, encoding within their probabilistic architectures the statistical sediment of human meaning.
The historical arc from mid-twentieth-century broadcast media to contemporary AI systems thus represents an intensification rather than a rupture in the dynamics of instrumental rationality that preoccupied the Frankfurt School. The culture industry’s centralized production and transparent mechanisms of manipulation have given way to a distributed, opaque, and self-updating system of algorithmic governance. The transition from radio and cinema to neural networks marks the movement from standardized content to standardized cognition, where the conditions of intelligibility themselves are increasingly mediated by computation.
This opacity—what Frank Pasquale (2015) has described as the “black box” of algorithmic decision-making—renders ideology less visible precisely as it becomes more total. The new machinery of sense-making fuses efficiency, prediction, and profit into a closed circuit of automated rationality, posing unprecedented challenges to democratic accountability and to the possibility of critical thought itself. In this sense, the algorithm industry completes the dialectic that Horkheimer and Adorno feared: the transformation of reason from a tool of liberation into an instrument of control.
I.ii The Stakes: Why Critical Theory Matters for AI Governance
The urgency of bringing Critical Theory into conversation with artificial intelligence arises from three interlocking realities.
First, AI systems have moved from experimental curiosities to core infrastructures of social coordination, mediating employment, finance, education, healthcare, and even the administration of justice. They increasingly constitute the invisible architecture of modern governance.
Second, the development and deployment of these systems are overwhelmingly concentrated within corporate and geopolitical power centers, embedding them in what Shoshana Zuboff calls surveillance capitalism: a regime of accumulation premised on the extraction and commodification of behavioral data.
Third, the dominant discourses of “AI ethics” have often proven technocratic and procedural, substituting checklists and transparency guidelines for genuine structural critique. The result is an ethics that manages risk rather than challenges power.
Without the analytic resources of Critical Theory, governance frameworks risk reproducing the very forms of domination they purport to regulate. As Kate Crawford argues in Atlas of AI (2021), artificial intelligence is neither artificial nor autonomous; it is the visible interface of an invisible extractive system encompassing labor exploitation, environmental degradation, and epistemological violence. Critical Theory exposes the ideological fantasy that technological progress is neutral or inevitable. It compels us to situate AI within the longue durée of capitalist rationalization—the same historical process that once subordinated nature and labor to instrumental reason and now seeks to commodify language, cognition, and creativity themselves.
Moreover, Critical Theory’s normative ambition—its insistence on the possibility of emancipation—reclaims the political horizon that technical discourses tend to foreclose. To speak of bias, power, and justice in AI is to engage not merely in risk management but in the critique of society: to ask who benefits, who decides, and who bears the costs of computational governance.
I.iii Potential Setbacks and Limitations
Yet the encounter between Critical Theory and AI is fraught with difficulties. The most immediate is the epistemic divide separating humanistic critique from technical practice. Many computer scientists regard Critical Theory as overly abstract or politically charged, while critical theorists often lack the technical literacy to intervene substantively in machine learning discourse. The result is a mutual estrangement that impoverishes both sides: algorithms remain unexamined in their social meaning, and theory risks drifting into moralism without praxis.
A second challenge lies in corporate co-optation. The burgeoning field of “AI ethics” has become a site of institutional capture, where corporations deploy the vocabulary of fairness, transparency, and accountability as instruments of ethics washing—deflecting scrutiny while consolidating legitimacy. Under such conditions, critique itself risks commodification: radical concepts like alienation or exploitation are reduced to compliance metrics.
Finally, there is a temporal asymmetry between critical reflection and technological acceleration. Theorists labor to interpret systems that evolve faster than conceptual language can adapt. The velocity of innovation creates a form of epistemological precarity: the object of critique mutates as it is being understood.
Nevertheless, these very challenges render the task of a Critical Theory of AI all the more urgent. Its goal is not to produce immediate solutions but to preserve the conditions for thought itself—to keep open the space of collective deliberation about what forms of life we wish to sustain in an era when reason itself has become algorithmic.
II. Theoretical Framework: The Frankfurt School and Technological Rationality
II.i Instrumental Reason and the Domination of Nature
At the heart of the Frankfurt School lies a diagnosis of the Enlightenment’s paradox: that reason, in its quest to master nature, becomes a new form of domination. Horkheimer’s distinction between objective reason, concerned with ends and values, and subjective (instrumental) reason, concerned solely with efficiency, anticipates the epistemology of machine learning. LLMs, optimized through loss functions and statistical minimization, epitomize a form of rationality emptied of normative content. They ask not “what ought to be done?” but “how can prediction be improved?”—the quintessential question of instrumental reason.
This extension of rationalization from the material world into the symbolic order represents a profound mutation in the Enlightenment project. Language, once the medium of understanding, becomes itself an object of calculation. LLMs transform meaning into probability, dialogue into data, and thought into optimization. In the process, the distinction between rational mastery and reification collapses: the very tools designed to enhance understanding risk obscuring the conditions of intelligibility.
Adorno foresaw this trajectory. In his critique of total rationalization, he warned that the more completely reality is rendered calculable, the less it can be experienced as meaningful. The irrationality of the rational manifests today as algorithmic hallucination, bias, and the production of plausible falsehoods—symptoms of a system that mimics understanding while evacuating content. The domination of nature has become the domination of sense.
II.ii Ideology Critique and the Naturalization of Social Relations
Ideology critique, a cornerstone of Critical Theory, aims to denaturalize social relations that present themselves as necessary. Machine learning systems, by encoding patterns of historical data, perform the opposite operation: they naturalize contingency, transforming social hierarchies into algorithmic inevitabilities. Trained on biased data, an LLM reproduces and legitimates those biases under the aura of mathematical objectivity.
This process exemplifies what Louis Althusser termed interpellation: the calling of subjects into ideological structures that precede them. When a hiring algorithm favors male-coded résumés, or a predictive policing model targets racialized neighborhoods, the system performs ideological work—it constitutes social subjects according to pre-existing relations of power, all while claiming neutrality.
As Safiya Noble demonstrates in Algorithms of Oppression (2018), these logics operate through aggregation rather than intention: bias emerges not from explicit malice but from the statistical accumulation of historical prejudice. The ideology of algorithmic neutrality, which equates formal abstraction with fairness, conceals the fact that all data are socially produced and therefore politically saturated. What appears as a technical artifact is, in truth, an epistemic crystallization of social history.
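A deliberately minimal sketch can make this "aggregation without intention" concrete. The toy corpus, occupations, and pronouns below are invented for illustration; the point is only that a model estimating conditional probabilities from a skewed archive will reproduce the skew as though it were a fact about language rather than a fact about history.

```python
from collections import Counter

# A tiny invented "corpus" whose sentences encode a historical skew:
# doctors are mostly written about with "he", nurses with "she".
corpus = (
    ["the doctor said he would help"] * 80
    + ["the doctor said she would help"] * 20
    + ["the nurse said she would help"] * 90
    + ["the nurse said he would help"] * 10
)

# Estimate P(pronoun | occupation) by simple counting -- the same kind of
# co-occurrence statistic a language model compresses into its weights.
counts = Counter()
for sentence in corpus:
    words = sentence.split()
    occupation, pronoun = words[1], words[3]
    counts[(occupation, pronoun)] += 1

for occupation in ("doctor", "nurse"):
    total = sum(counts[(occupation, p)] for p in ("he", "she"))
    for pronoun in ("he", "she"):
        prob = counts[(occupation, pronoun)] / total
        print(f"P({pronoun!r} | {occupation!r}) = {prob:.2f}")

# The model "predicts" that doctors are 'he' with probability 0.80, not
# because anyone programmed a prejudice, but because the archive it
# aggregates already carries one.
```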
II.iii The Culture Industry Redux: Commodification and Standardization
Adorno and Horkheimer’s analysis of the culture industry prefigures the logics of AI-generated content with uncanny precision. Their concept of pseudo-individualization—the production of superficial diversity within a framework of underlying standardization—finds its algorithmic apotheosis in generative text and image models. LLMs can produce infinite linguistic variation, yet all variations are bounded by statistical regularities drawn from the same cultural archive. Apparent novelty conceals structural repetition.
This is not mere mimicry but a deepening of commodification. In the age of generative AI, culture itself becomes a derivative asset, continuously recombined for optimization. Platforms governed by engagement metrics transform communicative reason into calculative attention, measuring meaning in clicks and conversions. What once was the commodification of art has become the commodification of expression—the conversion of language itself into capital’s newest raw material.
In this sense, the LLM is the final form of the culture industry: an apparatus that not only distributes ideology but automates its production. By simulating creativity while remaining bound to the logic of statistical repetition, it enacts Adorno’s warning that under total rationalization, the difference between art and advertisement, truth and entertainment, collapses entirely.
III. Uncovering Systemic Bias: Power, Representation, and Algorithmic Discrimination
III.i Beyond Fairness Metrics: The Structural Production of Bias
Mainstream approaches to algorithmic fairness often reduce bias to a technical anomaly—something that can be corrected through statistical parity, calibrated thresholds, or adjusted loss functions. Within this framework, fairness becomes a property of the model, measurable and optimizable. Yet as Critical Theory makes clear, such procedural remedies merely manage bias rather than interrogate its conditions of production. They treat symptoms as though they were causes.
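To see what such procedural remedies actually measure, consider a minimal sketch. The groups, decisions, and base rates below are invented; the demographic-parity gap it computes is a property of a model's outputs, and equalizing it says nothing about the social process that produced the underlying data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: binary "hire" decisions for two groups, A and B,
# simulated with different base rates to mimic a skewed model.
group = rng.choice(["A", "B"], size=1000)
hired = np.where(group == "A",
                 rng.random(1000) < 0.30,
                 rng.random(1000) < 0.18)

def selection_rate(decisions, groups, g):
    mask = groups == g
    return decisions[mask].mean()

rate_a = selection_rate(hired, group, "A")
rate_b = selection_rate(hired, group, "B")

# Demographic-parity gap: the difference in selection rates between groups.
print(f"selection rate A: {rate_a:.2f}")
print(f"selection rate B: {rate_b:.2f}")
print(f"parity gap:       {rate_a - rate_b:.2f}")

# A post-processing "fix" could equalize these two numbers without ever
# asking why the underlying data encode different base rates at all.
```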
Ruha Benjamin’s Race After Technology (2019) incisively reframes algorithmic bias as a manifestation of “discriminatory design”: a systemic phenomenon through which racism, sexism, and other forms of domination are not merely reflected but re-engineered within technological infrastructures. In this view, bias is not a deviation from a neutral norm but an expression of the social order itself—a continuation of hierarchy by computational means.
Large Language Models (LLMs), trained on internet-scale corpora that mirror centuries of unequal representation, exemplify this dynamic. The linguistic record from which they learn is not a transparent archive of human knowledge but a stratified sedimentation of power. Dominant voices are overrepresented, marginalized perspectives erased or distorted, and historical violence encoded as linguistic regularity. When an LLM associates certain names with criminality, genders with professions, or languages with inferiority, it performs what Adorno might have called a “second nature” of ideology: the transformation of historically contingent prejudices into seemingly objective statistical truths.
From a Critical Theory perspective, therefore, algorithmic fairness cannot be achieved through optimization alone. The fundamental questions are political and epistemological:
Who defines the training corpus? Who exercises control over the infrastructure of computation? Whose speech is amplified, whose is excluded, and who profits from the resulting system?
To address bias requires confronting the material structures that sustain it—capital concentration, data colonialism, and the asymmetrical distribution of technological agency. The point is not simply to make LLMs “less unfair” but to challenge the conditions under which they participate in the reproduction of exploitation and domination.
III.ii Epistemic Violence and the Politics of Representation
Beyond questions of distributional fairness lies a deeper terrain: epistemic violence, or the systematic silencing and appropriation of marginalized knowledges. LLMs trained predominantly on English-language, Western, and Global North sources encode particular epistemologies as universal while relegating others to the margins. This constitutes not mere omission but an extension of colonial epistemic hierarchies into digital form.
Gayatri Spivak’s notion of epistemic violence in Can the Subaltern Speak? resonates powerfully here: the subaltern cannot speak not because they lack voice but because dominant structures render their speech unintelligible. LLMs, in reproducing the linguistic norms of the dominant, automate this process. Indigenous knowledge systems, oral traditions, and non-Western philosophies often appear in training corpora only as exoticized objects of study or through colonial mediation. The result is a computational universalism masquerading as neutrality.
The consequences are material as well as epistemic. An AI system trained on Western medical literature may fail to recognize culture-specific expressions of illness; an LLM integrated into legal analysis may normalize Anglo-American jurisprudence as a global standard. Such failures are not technical errors but expressions of what Walter Mignolo calls “epistemic coloniality”—the persistence of colonial power in the organization of knowledge itself.
Critical Theory, enriched by postcolonial and decolonial perspectives, thus demands more than inclusion. To simply “add diversity” to training data leaves intact the architectures that determine what counts as knowledge. The challenge is ontological and infrastructural: to reimagine AI development as a plural, dialogical process rather than a universalizing one. This would entail alternative data regimes, participatory governance, and recognition of data sovereignty—particularly for Indigenous and subaltern communities whose knowledge has long been expropriated.
As Boaventura de Sousa Santos writes, “There is no global social justice without global cognitive justice.” The same holds for AI: epistemic justice is a precondition for technological justice.
III.iii Intersectionality and the Complexity of Algorithmic Harm
Kimberlé Crenshaw’s theory of intersectionality provides a crucial analytic lens for understanding how algorithmic discrimination manifests in complex, compounding ways. Systems of oppression—race, gender, class, sexuality, disability—do not operate independently but through interlocking mechanisms that produce specific, historically situated forms of harm.
An LLM may encode distinct biases against women and against Black individuals, but the experiences of Black women emerge from an intersectional matrix that cannot be decomposed into separate variables. In algorithmic contexts, this translates into nonlinear discrimination: harms that arise from the interaction of multiple attributes in ways that escape detection by conventional fairness metrics.
For instance, datasets underrepresenting Black women in professional contexts produce models that render them invisible or treat them as "statistical anomalies." Similarly, minoritized, disabled, and working-class individuals—those at the nexus of multiple marginalizations—bear the heaviest burdens of algorithmic misrecognition. These are not incidental errors but the expression of deeper structural logics: the compression of social complexity into quantifiable categories that erase difference in the name of calculability.
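A deliberately simple numerical sketch illustrates the point about interaction effects. The error counts below are invented: audited along either axis alone, the hypothetical classifier shows only a modest gap, while the intersectional comparison reveals a concentration of harm that neither single-axis audit registers.

```python
# Hypothetical error counts for a classifier, broken down by two attributes.
# Each cell maps (gender, race) to (number of people, number misclassified).
cells = {
    ("woman", "Black"): (100, 40),
    ("woman", "white"): (100, 15),
    ("man",   "Black"): (100, 15),
    ("man",   "white"): (100, 10),
}

def error_rate(selector):
    total = sum(n for key, (n, _) in cells.items() if selector(key))
    errors = sum(e for key, (_, e) in cells.items() if selector(key))
    return errors / total

# Single-axis audits: each attribute taken alone shows a 15-point gap.
print("women:", error_rate(lambda k: k[0] == "woman"))   # 0.275
print("men:  ", error_rate(lambda k: k[0] == "man"))     # 0.125
print("Black:", error_rate(lambda k: k[1] == "Black"))   # 0.275
print("white:", error_rate(lambda k: k[1] == "white"))   # 0.125

# Intersectional audit: the harm concentrates on Black women, whose 40%
# error rate no single-axis comparison above makes visible.
print("Black women:", error_rate(lambda k: k == ("woman", "Black")))  # 0.40
print("white men:  ", error_rate(lambda k: k == ("man", "white")))    # 0.10
```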
Intersectionality therefore compels a shift from abstract fairness to contextual justice. It requires attentiveness to how algorithmic systems mediate lived realities differently across social positions, and how these differential impacts reinforce historical hierarchies. The intersectional framework reorients AI ethics from technocratic management to political critique—transforming “bias mitigation” into a struggle for recognition and redistribution.
IV. Transparency, Accountability, and the Black Box Problem
IV.i Opacity as Domination: The Politics of Inscrutability
The opacity of large neural networks—often described as their “black box” nature—represents not simply a technical difficulty but a new form of epistemic domination. When algorithmic systems govern access to employment, healthcare, credit, and justice, yet their internal logics remain inaccessible, the result is a profound asymmetry between those who design and those who are governed by these systems.
Critical Theory reveals that opacity functions ideologically: it transforms contingent design choices into a fetishized inevitability. The claim that neural networks are too complex for human comprehension reinforces the authority of technical elites and naturalizes algorithmic governance as beyond public scrutiny. As Langdon Winner famously argued, technologies have politics—not because of their hardware but because of the power relations they inscribe and conceal.
This inscrutability erodes the epistemic preconditions of democracy. As Jürgen Habermas observed, the legitimacy of decision-making depends on communicative transparency—the ability of citizens to participate meaningfully in rational discourse. In the algorithmic age, however, the communicative sphere is replaced by computational opacity, foreclosing the possibility of contestation. What cannot be seen cannot be resisted.
Opacity thus serves as a new mode of domination: a technocratic enclosure of reason itself. To confront it requires reclaiming interpretability not merely as a technical objective but as a political right—the right to understand, contest, and transform the systems that shape collective life.
IV.ii The Limits of Explainability: Technical Solutions to Political Problems
In response to concerns over opacity, the field of explainable AI (XAI) has emerged, offering methods to render models more interpretable. Yet from a critical-theoretical standpoint, XAI exemplifies the limits of technical reformism. Most explainability frameworks generate post-hoc rationalizations—narratives constructed for human consumption that may bear little relation to the model’s actual decision pathways. Even when accurate, explanations are situated discourses, intelligible only within particular social and institutional contexts.
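What "post hoc" means here can be shown in a few lines. In the sketch below, every function and data point is invented: an opaque scoring rule stands in for a black-box model and is "explained" by fitting a separate linear surrogate to its outputs. The explanation is a second model about the first, not an account of its actual decision pathway.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(x):
    # Stand-in for an opaque model: a nonlinear rule the "explainer" never sees.
    return (x[:, 0] * x[:, 1] > 0.5).astype(float)

# Sample inputs and query the black box for its decisions.
X = rng.uniform(0, 1, size=(500, 3))
y = black_box(X)

# Post-hoc "explanation": fit a linear surrogate to the black box's outputs.
X1 = np.column_stack([X, np.ones(len(X))])
coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)

print("surrogate weights:", np.round(coefs[:3], 2))
# The surrogate offers tidy additive per-feature weights for a decision
# that is in fact an interaction between features 0 and 1: the resulting
# narrative is readable, but structurally unlike the rule that actually
# produced the outputs.
```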
Moreover, the very demand for explainability can depoliticize structural issues. By framing opacity as a cognitive gap rather than a power relation, it risks obscuring the fact that the real problem is not ignorance but unaccountable authority. A perfectly transparent algorithm can still reproduce domination if the purposes it serves remain unjust. As Wendy Chun notes, “transparency does not guarantee democracy; it often substitutes visibility for accountability.”
Critical Theory therefore shifts the question from how we can explain AI to who controls its development, deployment, and interpretation. What is needed is substantive accountability: mechanisms through which communities can contest, reshape, or even reject algorithmic systems. In other words, the goal is not clearer explanations but redistribution of epistemic and political power.
IV.iii Toward Democratic and Participatory AI Governance
In keeping with its emancipatory project, Critical Theory envisions accountability not as a compliance procedure but as democratic participation. A just AI system cannot be built merely by making private systems more legible; it must be governed by the collective subjects whose lives it shapes. This implies a radical reconfiguration of AI governance along participatory lines:
- Participatory Design: Involve affected and marginalized communities from the earliest stages of design, allowing their knowledge and priorities to shape the purposes and parameters of AI systems, rather than being retrofitted as constraints.
- Community Auditing: Establish independent, community-led auditing bodies empowered to investigate algorithmic harms and enforce remedies, shifting oversight from corporations to civil society.
- Algorithmic Impact Assessments: Mandate comprehensive, public impact evaluations before deployment, akin to environmental assessments, ensuring that social, economic, and ethical consequences are scrutinized democratically.
- Data Sovereignty: Recognize collective and Indigenous rights over data resources, rejecting the extractivist logic that treats linguistic and cultural data as raw material for corporate accumulation.
- Public AI Infrastructures: Develop publicly governed AI systems oriented toward human development, education, and welfare rather than private profit—a commons-based alternative to corporate monopolies.
Each of these mechanisms points beyond reform to repoliticization: the recovery of public control over the means of cognition and communication. As Nancy Fraser reminds us, emancipation demands not only redistribution and recognition but representation—the democratization of decision-making itself.
Critical Theory thus calls for a transformation of AI governance from the management of risk to the practice of freedom—a collective reappropriation of reason from its algorithmic enclosure.
V. Computational Capitalism and the Restructuring of Power
V.i LLMs as Instruments of Capital Accumulation
To understand large language models (LLMs), one must situate them within the broader political economy of digital and cognitive capitalism. Contemporary AI development is dominated by a small oligopoly of corporations—Google, Microsoft, Amazon, Meta, OpenAI—whose economic power depends on extracting value from human data, automating intellectual labor, and extending commodification into ever-new domains of social life. Within this logic, LLMs perform multiple, interlocking functions: they automate content production to reduce labor costs; enable advanced user profiling and microtargeted advertising; generate new commercial products and subscription ecosystems; and consolidate market dominance through proprietary control over data, infrastructure, and compute resources.
From a Critical Theory perspective, these technologies are not neutral instruments but are deeply inscribed within capitalist social relations. Their design trajectories reflect not disinterested pursuits of technical excellence, but strategic imperatives of profitability, competitive advantage, and shareholder value. The persistent corporate reluctance to share training data, architectures, or research findings thus expresses not only intellectual property concerns but a structural contradiction within capitalist knowledge production—a mode that simultaneously depends on openness and systematically resists it. Knowledge must circulate to generate innovation, yet remain enclosed to preserve rent-seeking monopolies.
The extraordinary capital intensity of LLM development—entailing tens of millions of dollars in compute costs and immense energy consumption—further entrenches this concentration of power. Such material and ecological barriers to entry create what Jathan Sadowski calls structural dependency: societies become reliant on AI systems controlled by corporations whose interests diverge sharply from the public good. This dependency is reinforced by state-corporate alliances, in which governments rely on private models for administrative, military, or surveillance purposes, thereby deepening asymmetrical relations of control and dependency. The result is an emerging digital neo-feudalism, in which cognitive infrastructures are privately owned yet publicly indispensable.
V.ii Automating Cultural Production: The Transformation of Intellectual Labor
LLMs mark a qualitative leap in the automation of intellectual and creative labor. Whereas earlier waves of automation displaced manual and routine cognitive work, generative AI extends mechanization into domains once considered uniquely human: writing, translation, artistic composition, strategic reasoning, even scientific hypothesis generation. This development raises not only distributive questions about who benefits from automation, but existential questions concerning creativity, meaning, and human flourishing.
Critical Theory’s concept of alienation provides a crucial interpretive lens. As creative practices are absorbed into algorithmic production pipelines, human labor becomes estranged from both its process and its products. Writers, artists, and researchers find their intellectual signatures reproduced by machines trained on their work—often without credit, compensation, or consent. Capital thus appropriates the cultural commons of human creativity, transforming it into a data resource for rent extraction. The promise of productivity becomes, paradoxically, a new form of dispossession.
Moreover, the capitalist valorization of efficiency and scalability threatens to marginalize forms of expression that resist commodification—experimental, politically subversive, or unprofitable works. As synthetic content proliferates, cultural production risks becoming recursive: future LLMs trained on earlier synthetic outputs will generate homogenized feedback loops, attenuating novelty, criticality, and authenticity. This process mirrors Adorno and Horkheimer’s “culture industry”, in which the logic of mass production standardizes aesthetic experience and subordinates art to exchange value.
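The recursion worry can be made concrete with a crude simulation. Everything below is invented and grossly simplified: a toy "model" is repeatedly re-fitted to the output its predecessor favours, and the measured spread of what it can generate narrows within a few generations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" text, idealized here as a wide distribution of styles.
data = rng.normal(loc=0.0, scale=1.0, size=5000)

for generation in range(7):
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation}: spread of expression = {sigma:.3f}")

    # Train the next "model" only on the previous model's output, and let it
    # favour high-probability, typical samples over rare, atypical ones --
    # a crude stand-in for likelihood-maximizing generation.
    samples = rng.normal(loc=mu, scale=sigma, size=5000)
    lo, hi = np.quantile(samples, [0.10, 0.90])
    data = samples[(samples >= lo) & (samples <= hi)]

# The measured spread collapses toward zero within a few generations,
# a toy analogue of the homogenized feedback loop described above.
```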
Beyond the economic displacement of labor lies a deeper crisis of meaning. If machines can replicate our creative outputs, what remains as the basis of human self-realization? Critical Theory’s insistence on non-alienated labor—work as a mode of self-expression and social cooperation rather than domination—implies that emancipation cannot be reduced to redistributing AI-generated wealth. It requires a reorientation of technological development toward human autonomy and collective flourishing, reclaiming creativity as a practice of freedom rather than a data source for capital.
V.iii Platform Power and Algorithmic Governance
LLMs are now integral to platform infrastructures that function as quasi-sovereign entities, governing speech, visibility, and participation in the digital public sphere. Platforms such as YouTube, X, Facebook, and TikTok deploy algorithmic systems to moderate content, curate recommendations, and enforce norms—effectively exercising regulatory authority without democratic mandate. Their decisions shape political discourse, affect reputations, and define the boundaries of public reason. Yet these immense powers remain largely insulated from public scrutiny, constrained only by market competition or reactive regulation rather than proactive democratic oversight.
Adorno’s notion of the “administered society” captures this condition precisely. Algorithmic governance embodies a bureaucratic rationality that substitutes procedural control for political deliberation. The rhetoric of neutrality—claims that “the algorithm decides”—masks the normative assumptions embedded within AI systems and the economic interests they serve. Content moderation algorithms, for example, routinely encode culturally specific standards of speech, marginalizing non-Western idioms and minority modes of expression, while affording protection to dominant actors and commercial partners.
Resisting this algorithmic domination thus requires more than technocratic reform. It demands redistribution of communicative power. This could include dismantling platform monopolies, mandating interoperability to dilute network dependencies, establishing cooperative or public alternatives, and creating democratic oversight bodies empowered to shape platform policies. The ultimate aim, in Habermasian terms, is to restore communicative rationality—the capacity of citizens to deliberate freely—against the colonizing tendencies of algorithmic and corporate rationality.
VI. The Crisis of Authenticity: Synthetic Media and Epistemological Destabilization
VI.i From Mechanical Reproduction to Algorithmic Generation
Walter Benjamin’s classic essay “The Work of Art in the Age of Mechanical Reproduction” analyzed how photography and film shattered the aura of the unique artwork, democratizing access while transforming aesthetic experience. Generative AI radicalizes this dynamic, extending it from mechanical reproduction to algorithmic generation. LLMs and diffusion models no longer copy existing works; they synthesize new ones that have no original—creating artifacts that simulate, rather than reproduce, reality itself.
This transformation destabilizes long-standing epistemic anchors. When text, image, and voice can be generated at scale, authenticity, authorship, and authority all become precarious. How can provenance be verified when synthetic content is indistinguishable from the real? How can trust persist in communicative interactions when interlocutors may be algorithmic constructs? How can accountability survive when responsibility for synthetic speech is diffused across datasets, developers, and users? These are not merely technical puzzles but ontological and political crises, striking at the foundations of public knowledge and democratic deliberation.
VI.ii Deepfakes and the Weaponization of Synthetic Media
The emergence of deepfakes and synthetic media intensifies these crises by introducing new forms of harm. Non-consensual pornographic deepfakes violate dignity and autonomy, disproportionately targeting women and reinforcing patriarchal structures of objectification. Political deepfakes threaten democratic legitimacy by fabricating evidence, manipulating perception, and eroding epistemic trust. Synthetic impersonations enable fraud, surveillance, and psychological operations on a scale previously unimaginable.
Critical Theory reveals that these harms are not evenly distributed. Power determines who wields synthetic media and whose reality is discredited by it. Wealthy states, corporations, and political actors can deploy generative technologies to manufacture consensus, while marginalized voices face epistemic erasure: their genuine testimonies dismissed as fabrications. This inversion—where truth is treated as falsehood and simulation as truth—embodies the dialectic of enlightenment in its contemporary form: the rationalization of deception as an instrument of domination.
VI.iii Toward an Epistemology of the Synthetic
The challenge, then, is not to nostalgically restore pre-digital notions of authenticity but to forge new epistemic frameworks capable of navigating a synthetic world. This requires institutional, infrastructural, and cultural responses that preserve democratic reason within a landscape of algorithmic simulation.
- Infrastructural Authentication: Implement cryptographic watermarking and provenance-tracking systems for synthetic media, combined with enforceable legal standards for disclosure and liability (a minimal code sketch follows at the end of this subsection).
- Critical Media Literacy: Cultivate public capacities to evaluate information through source verification, cross-corroboration, and contextual reasoning rather than naive visual or textual trust.
- Institutional Adaptation: Reform journalistic, academic, and legal institutions to recognize the ontological instability of digital evidence, developing new protocols for validation and accountability.
- Democratic Governance of Generative Systems: License or restrict the deployment of high-capacity generative models, ensuring that their use aligns with human rights and democratic principles.
However, such measures must avoid reproducing existing hierarchies. Verification systems must not become new instruments of epistemic gatekeeping that privilege elites or established institutions. Media literacy must be universally accessible, and regulation must target malicious uses rather than criminalizing creative experimentation. The goal is not prohibition but public stewardship: a democratic negotiation over how synthetic media can coexist with truth, justice, and pluralism.
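As a minimal illustration of the provenance idea flagged in the list above, the sketch below attaches a keyed signature to a content record and verifies it later, using only Python's standard library. Key management, disclosure law, and real provenance standards such as C2PA involve far more than this toy suggests; the key, field names, and model label are all hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical: a secret key held by the publishing institution.
SIGNING_KEY = b"replace-with-a-real-secret-key"

def make_provenance_record(content: str, generator: str) -> dict:
    """Attach a verifiable provenance tag to a piece of (possibly synthetic) text."""
    record = {
        "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "generator": generator,   # e.g. which model or newsroom produced it
        "synthetic": True,        # disclosure, not detection
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(content: str, record: dict) -> bool:
    """Check that the content matches the hash and the record is untampered."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode("utf-8")
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and claimed["sha256"] == hashlib.sha256(content.encode("utf-8")).hexdigest()
    )

text = "An entirely synthetic paragraph."
record = make_provenance_record(text, generator="example-model-v1")
print(verify(text, record))                        # True
print(verify(text + " (quietly edited)", record))  # False
```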
VII. Toward an Emancipatory AI: Critical Theory as Constructive Framework
VII.i Negative Dialectics and Technological Development
Adorno’s notion of negative dialectics—a form of critique that resists reconciliation and insists upon the persistence of contradiction, non-identity, and historical particularity—offers a methodological compass for rethinking the trajectory of artificial intelligence. Against the technological and managerial impulse toward synthesis, closure, and optimization, negative dialectics sustains a vigilant awareness of what is excluded or repressed by systems that claim universality. It urges thought to remain “unreconciled” with domination in its rationalized forms, and thereby serves as an antidote to the ideological naturalization of algorithmic power.
Applied to AI governance, negative dialectics counsels a stance of critical tension rather than premature synthesis. It means refusing the illusion that ethical or technical frameworks can definitively resolve conflicts between values such as privacy and transparency, innovation and precaution, efficiency and justice. Instead, it affirms the productive dissonance among these principles as the site of democratic deliberation itself. As Andrew Feenberg has argued, technology is not a neutral instrument but a social battleground; its forms embody historical struggles over meaning and control. Negative dialectics, in this sense, becomes a method of keeping those struggles visible.
In practical terms, this entails remaining skeptical of the technocratic fantasy that algorithmic systems can solve inherently political problems. It requires engagement with technical possibilities—machine learning interpretability, decentralized architectures, data cooperatives—without succumbing to the ideology of technological salvation. It also demands sensitivity to context: the recognition that the harms of automation, bias, or surveillance are not abstract but historically situated, differently distributed across race, class, and geography. A genuinely critical engagement with AI must therefore resist the universalization of "AI ethics" as an abstract, acontextual domain and instead treat every system as a concretely embedded social formation.
VII.ii Reimagining AI Development: Alternative Trajectories
Critical Theory’s emancipatory horizon invites the question: What would AI systems look like if their design were oriented toward human flourishing rather than profit, toward emancipation rather than domination? To imagine such trajectories is not to indulge utopian speculation but to reclaim the political agency that neoliberal governance seeks to foreclose. Several alternative paradigms, though embryonic, illuminate possible paths forward:
Community-Controlled AI: Systems developed by and for specific communities, grounded in local epistemologies and priorities. Indigenous-led AI projects for language revitalization or data sovereignty exemplify how technology can serve cultural preservation rather than extraction. Similarly, disability justice organizations designing assistive algorithms articulate a model of co-determination that reclaims technical agency from corporate monopoly.
Degrowth AI: A deliberate scaling down of compute-intensive frontier models that exacerbate ecological degradation and resource inequality. Drawing on degrowth theory, this approach values sufficiency, efficiency, and sustainability over relentless expansion. It privileges small-scale, adaptive systems—model compression, few-shot learning, modular computation—that align technical design with planetary limits.
Open and Collaborative Development: Moving beyond proprietary enclosures toward open-source research ecosystems that foster scrutiny, transparency, and democratic participation. This vision reclaims the Enlightenment ideal of public reason in the digital age, where collective inquiry replaces corporate secrecy as the motor of innovation. While openness introduces risks of misuse, these are outweighed by the emancipatory potential of collective stewardship over knowledge infrastructures.
AI for the Commons: Publicly governed AI infrastructures oriented toward producing social goods—education, scientific research, healthcare, environmental restoration—rather than capital accumulation. Such systems could reconfigure the digital economy around principles of solidarity and redistribution, transforming data from a commodity into a shared resource.
Labor-Centric AI: Technologies designed to augment human skill rather than replace it, governed through collective bargaining and workplace democracy. This model, resonant with Marx’s vision of the free association of producers, reconceives automation as a tool of liberation rather than dispossession. It invites unions, cooperatives, and workers themselves to participate in shaping the design, deployment, and oversight of algorithmic systems.
These alternative trajectories are not yet realities but interventions in the imagination—conceptual openings that challenge the supposed inevitability of capitalist AI. As Bernard Stiegler reminds us, every technology is a pharmakon—both poison and remedy. The task of an emancipatory AI politics is to cultivate the therapeutic potential of technical systems through collective governance and critical reflection.
VII.iii The Role of Critique in Shaping Technological Futures
Critical Theory’s role in the age of large language models is not to prescribe final solutions but to sustain the capacity for critique itself—to preserve spaces for reflection, dissent, and democratic deliberation amid accelerating automation. By denaturalizing AI—revealing it as the sedimentation of political and economic power rather than neutral progress—it reopens the horizon of possibility.
This critical engagement unfolds across multiple, interconnected domains:
- Academic research that situates AI within the longue durée of capitalism, surveillance, and labor relations;
- Activist mobilization that contests harmful deployments and demands reparative justice;
- Policy advocacy that translates critical insights into institutional mechanisms of accountability;
- Alternative design practices that demonstrate the feasibility of non-exploitative technologies; and
- Public education that cultivates critical AI literacy as a condition of democratic participation.
The relationship between Critical Theory and AI is, by necessity, asymmetrical: large language models cannot interpret Adorno, but human actors can employ Adornian critique to interpret, contest, and redirect them. In this dialectical encounter, critique functions not as negation alone but as praxis—the active transformation of technological structures through collective agency. Against the fatalism of algorithmic inevitability, Critical Theory asserts the right to imagine otherwise: technologies ordered toward emancipation rather than control, toward justice rather than accumulation, toward human flourishing rather than the efficient reproduction of capital.
VIII. Conclusion: The Unfinished Project of Critical AI
The application of Critical Theory to large language models is neither a closed discourse nor a technical subfield—it is an evolving and inherently unfinished project. As AI systems proliferate across the infrastructures of governance, healthcare, and culture, the stakes of critique intensify. Yet this task faces mounting obstacles: the corporate capture of ethical discourse, the disciplinary silos separating humanistic and technical inquiry, and the sheer velocity of technological innovation that outpaces deliberative institutions.
Nevertheless, Critical Theory offers indispensable intellectual resources for confronting this new epoch. Its insistence that technology be understood within structures of domination, its commitment to emancipation through reason, and its analytic attentiveness to ideology and reification together provide the conceptual scaffolding for a genuinely democratic AI politics. It calls for neither rejection of technology nor naive optimism but for an engaged transformation—an insistence that reason itself remain a human and collective enterprise.
The fundamental question is no longer whether AI will transform society—that transformation is underway—but what forms of life and power it will reproduce. Will it deepen surveillance, inequality, and epistemic colonialism, or can it be redirected toward participation, solidarity, and autonomy? Critical Theory cannot answer these questions conclusively, but it provides the tools to pose them with rigor, and the normative imagination to guide their resolution.
As Horkheimer wrote in 1937, Critical Theory “is not merely a hypothesis in the business of men; it is an element in the historical effort to create a world which satisfies the needs and powers of men.” In this spirit, the project of critical AI remains open, perpetually incomplete—a refusal of closure that is itself emancipatory. The aim is not to perfect technological reason, but to humanize it; not to escape contradiction, but to inhabit it critically, transforming it into the engine of freedom.
References
Adorno, T. W., & Horkheimer, M. (1947/2002). Dialectic of Enlightenment: Philosophical Fragments. Stanford University Press.
Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press.
Benjamin, W. (1935/2008). "The Work of Art in the Age of Mechanical Reproduction." In The Work of Art in the Age of Its Technological Reproducibility, and Other Writings on Media. Harvard University Press.
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
Crenshaw, K. (1989). "Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics." University of Chicago Legal Forum, 1989(1), 139-167.
Horkheimer, M. (1937/1972). "Traditional and Critical Theory." In Critical Theory: Selected Essays. Continuum.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Sadowski, J. (2020). "The Internet of Landlords: Digital Platforms and New Mechanisms of Rentier Capitalism." Antipode, 52(2), 562-580.
Winner, L. (1980). "Do Artifacts Have Politics?" Daedalus, 109(1), 121-136.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.