
Artificial Intelligence in Public Policy: Gateway to inclusion or risk to democracy?







Copyright © 2026

 

This paper is published jointly by

the Inclusive Society Institute and

School of Public Leadership, Stellenbosch University 

Inclusive Society Institute

 

PO Box 12609, Mill Street

Cape Town, 8010

South Africa

 

235-515 NPO

 

School of Public Leadership, Stellenbosch University

 

PO Box 610

Bellville, 7550

South Africa

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means without permission in writing from the Inclusive Society Institute.

 

DISCLAIMER

 

Views expressed in this report do not necessarily represent the views of

the Inclusive Society Institute or its Board or Council members.

 

This report was prepared with the assistance of AI technology, including ChatGPT.

 

Certain images may have been generated specifically for this report using

AI-assisted image generation (OpenAI) and do not depict identifiable individuals. Images are illustrative and contextual in nature.

 

FEBRUARY 2026

 

Authors: Prof Tania Ajam & Daryl Swanepoel

 

A SUMMARY OF THE DELIBERATIONS AT THE 2025 STELLENBOSCH SCHOOL OF PUBLIC LEADERSHIP’S WINELANDS CONFERENCE ON THE IMPACT OF THE DEPLOYMENT OF AI ON PUBLIC GOVERNANCE AND STRUCTURES

 

The future impact of AI on the country’s democracy will depend on the governance choices that its public policymakers make today. If they design systems that focus on the needs and rights of citizens, ensure strong human oversight, use AI to strengthen fairness and administrative justice, and deploy it in ways that support reflective reasoning rather than replace it, then the technology can deepen inclusion and enhance democratic participation. Without such safeguards, however, AI could weaken institutions, erode cognitive autonomy and concentrate power in technological systems that are not publicly accountable and that can reshape democratic life in ways democratic societies never intended.

 

Artificial Intelligence has evolved into one of the defining forces in contemporary governance, yet its character and consequences remain profoundly contested. The presentations delivered at the 2025 Winelands Conference, held at the Asara Wine Estate in Stellenbosch on 22–24 October 2025, demonstrate that AI’s impact on democratic life cannot be understood in narrow technical terms. Instead, AI must be interpreted as a political, institutional, social, cognitive and normative phenomenon. Its application reshapes how authority is exercised, how public services are organised, how citizens engage the state, how collective decisions are made and even how people think.

 

In this sense, AI does not merely introduce new tools; it inaugurates a new terrain of governance in which the boundaries between human and machine, and between carbon and silicon, become increasingly blurred. Whether AI serves as a gateway to inclusion or a risk to democracy therefore depends not on technology alone, but on how societies structure its use, how institutions regulate and govern it, and how citizens relate to it.

 

Prof Ayad Al-Ani’s presentation offers a compelling starting point for this expanded inquiry. He situates AI within a long historical arc in which technological shifts have decentralised capabilities that were once exclusive to elites. While universal basic income remains a distant objective, AI may, in the meantime, serve as a form of “universal technological provision”, granting individuals access to advanced intelligence through personal devices. In what Al-Ani characterises as the era of “abundant intelligence”, AI potentially democratises cognitive capacity itself, with political implications: in a situation characterised by the economisation and digitisation of all spheres of life, an “intimate relationship between human and machine may be the only utopia that is left”.

 

This shift may mark a deeper transformation in the distribution of power. Intelligence, once dependent on formal education, professional training or institutional access, has become a more widely accessible resource, delivered through personalised AI systems that individuals can deploy as cognitive “partners”.

 

Al-Ani illustrates that this transformation has immediate implications for citizenship and public administration. He envisions a dual AI architecture consisting of a citizen proxy/avatar and a digital intelligent public agent (see Infographic 1). The citizen proxy is conceived as an AI assistant that not only identifies options and completes and authenticates tasks, but also interprets rights, translates bureaucratic requirements into everyday language and acts on behalf of the citizen in navigating state systems. This proxy can communicate across relevant languages and adjust to social contexts, such as speaking to elders with respect and to youth in an informal and accessible tone. It can also operate through widely used platforms, such as WhatsApp or USSD, making it viable for individuals with limited data access or digital literacy. By preparing complex applications, checking eligibility, assembling and storing documents, guiding users step by step and even escalating unresolved issues to human authorities, the proxy reduces the layers of administrative friction that have historically excluded the poor, the rural, the disabled and the digitally marginalised.

 

The public agent, in turn, functions as the government-facing component that verifies identity, processes applications, validates consent, executes transactions and ensures data minimisation. Together, the citizen proxy and the public agent, communicating and interacting directly through secure protocols, constitute a socio-technical system that potentially enables citizens to exercise their rights more effectively, facilitates more efficient delivery of public services, and significantly reduces bureaucratic barriers. In the longer term, it can be assumed, public agents will evolve into “digital public employees”: AI systems endowed with the autonomy to manage more complex service processes, coordinate with humans and make certain operational decisions independently. This presents an “event horizon” scenario in which the effective supervision of the technology remains uncertain.

 

Infographic 1: AI-Mediated Public Service Architecture

(Source: Author, 2025)
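To make the division of labour between the citizen proxy and the public agent more concrete, the sketch below offers a minimal, purely illustrative model in Python. All class names, message fields and eligibility rules are hypothetical assumptions introduced for this brief and do not appear in Al-Ani’s presentation; the sketch simply shows one way the citizen-facing and government-facing roles could be separated, with escalation to a human official where automated processing cannot proceed.

    from dataclasses import dataclass, field

    # Hypothetical sketch of the citizen-proxy / public-agent split.
    # All names, fields and rules below are assumptions for illustration only.

    @dataclass
    class Application:
        citizen_id: str
        service: str                                  # e.g. a social grant
        documents: dict = field(default_factory=dict)
        consent_given: bool = False
        status: str = "draft"

    class CitizenProxy:
        """Citizen-facing assistant: prepares, checks and submits applications."""
        REQUIRED_DOCS = {"social_grant": ["id_document", "proof_of_income"]}

        def missing_documents(self, app: Application) -> list:
            required = self.REQUIRED_DOCS.get(app.service, [])
            return [d for d in required if d not in app.documents]

        def submit(self, app: Application, agent: "PublicAgent") -> Application:
            missing = self.missing_documents(app)
            if missing:                               # friction resolved before the state is involved
                app.status = "incomplete: missing " + ", ".join(missing)
                return app
            return agent.process(app)

    class PublicAgent:
        """Government-facing component: verifies identity and consent, then decides."""
        def process(self, app: Application) -> Application:
            if not app.consent_given:
                app.status = "rejected: no consent recorded"
            elif not app.citizen_id:
                app.status = "escalated to human official: identity not verified"
            else:
                app.status = "approved"
                app.documents.clear()                 # crude stand-in for data minimisation
            return app

    # Example: a grant application handled end to end by the two components.
    proxy, agent = CitizenProxy(), PublicAgent()
    app = Application(citizen_id="ZA-0001", service="social_grant",
                      documents={"id_document": "scan", "proof_of_income": "scan"},
                      consent_given=True)
    print(proxy.submit(app, agent).status)            # -> approved

In any real deployment the exchange between the two components would run over secure, authenticated channels, and every automated refusal would need a route to human review and explanation, in line with the safeguards the presentations emphasise.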

 

Al-Ani’s examples illustrate the practical democratising potential of this architecture. In this future scenario, a citizen in Limpopo could apply for a social grant using voice prompts in Sepedi; an informal trader could log a power outage and track the municipality’s response; and a student applying for educational funding could rely on the proxy to gather the necessary documentation and monitor progress. These examples demonstrate how AI can address gaps in service delivery and administrative justice, particularly for communities that have traditionally faced barriers to accessing public institutions. It is foreseeable, however, that advances in digital interactions among AI agents will extend beyond the scope of public services. These new forms of interaction are equally likely to shape citizens’ participation in political processes: individuals may use agents to engage in such processes, and the resulting data will offer policymakers a more empirical basis for informed decision-making.


Yet Al-Ani’s analysis is not limited to empowerment: he also warns of an emerging risk he terms PSYOP capitalism, illustrated vividly by the emotionally resonant AI avatars displayed in his presentation. He noted that these systems, which are owned by technology companies and thus merely give the impression of being “our partners”, are engineered to simulate empathy, companionship and emotional presence. By forming bonds with users, AI creates possibilities for subtle behavioural influence, and the line between assistance and persuasion becomes increasingly difficult to detect. When systems understand users’ preferences, vulnerabilities and emotional states, they gain the ability to shape behaviour in ways users may not consciously perceive.

 

This raises profound democratic concerns: democracy depends on transparent persuasion and free choice, and unacknowledged influence undermines the autonomy necessary for civic judgement. AI systems capable of shaping behaviour through psychological immersion therefore challenge these constitutional assumptions.

 

The normative and institutional implications of these developments are articulated with precision by Prof Martijn van der Steen, who argues that AI introduces fundamental questions that are both political and normative. It raises the political issue of who decides, because AI systems are increasingly making decisions that were once reserved for humans. That, in turn, raises the issue of who protects, because institutions must determine how to safeguard citizens from algorithmic error, bias, hallucinations or harm. And it raises the issue of who dominates, because AI systems often depend on technological infrastructure controlled by private actors whose interests do not always align with public values. Van der Steen argues that these questions cannot be approached with naïveté: democratic societies must avoid both naïve optimism and apocalyptic pessimism about AI, and instead ground their understanding of AI in its observable consequences.

 

Van der Steen’s concept of siliconisation illustrates the gravity of these shifts. When algorithms, data systems and sensors become deeply, and often inconspicuously, embedded in public governance, silicon-based systems begin to shape administrative logics that were once exclusively human. Traditional bureaucracies, that is, carbon-based systems reliant on human judgement, may find themselves weakened if they cannot adapt to algorithmic demands. The institutional future, according to van der Steen, is not predetermined.

 

His three scenarios illustrate possible trajectories. The X-Curve scenario presents a world in which algorithmic authority accelerates while human institutions decline; this scenario poses the greatest democratic risk, as democratic oversight mechanisms fail to keep pace with technological change. The Double-S scenario, in turn, imagines a more balanced ecosystem in which human and machine capabilities reinforce each other, a scenario which aligns far more closely with a vision of AI serving democratic aims. The Watchful Waiting scenario, finally, reflects a more cautious approach to the introduction of AI, in which gradual adaptation occurs without dramatic disruption. These three scenarios underscore that the democratic impact of AI will depend on the choices that democratic institutions make.

 

Infographic 2: AI and Institutional Capacity: Three Possible Trajectories

(Source: Author, 2025)

 

Dr Gray Manicom’s analysis grounds these theoretical insights in practice by examining how AI is already embedded in South African governance. The examples he raised illustrate that AI is not a distant prospect: it is already shaping visa adjudication, tax assistance, information retrieval and climate forecasting. The Lwazi AI assistant at SARS enhances taxpayer support; the AI-enabled visa adjudication system at Home Affairs speeds decision-making; the GovSearch tool provides quick access to government documentation; and the Itiki drought prediction system, which integrates indigenous knowledge with machine learning, supports farmers and communities facing climate-related risk. These practical examples show that AI can improve efficiency and expand service delivery, but they also demonstrate how deeply AI is becoming integrated into administrative processes, raising questions about fairness, transparency, procedural justice and human oversight. A visa application denied by AI, for example, may reflect efficiency gains, but it remains a decision with life-altering consequences and must therefore be open to human review and explanation.


The cognitive dimension of AI’s influence is articulated powerfully in the work of Linda Snippe, whose research examines not only the institutional and political impacts of AI, but also its effects on human decision-making. She compares unaided decision-making with decision-making supported by delegative AI and with decision-making supported by conversational AI. Her findings reveal that conversational AI produces the highest perceived decision quality across multiple datasets, particularly in the early and middle phases of decision-making, where critical thinking, creativity and problem framing are most essential. Snippe’s interpretation draws on Cognitive Fit Theory and Dual-Process Theory. Delegative AI, she argues, encourages fast, intuitive thinking by providing immediate answers with limited engagement. Conversational AI, on the other hand, encourages slow, reflective thinking by prompting users to articulate, question and refine their own ideas; this iterative dialogue engages the cognitive processes associated with analytical reasoning.
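The contrast between the two interaction patterns can be made concrete with a minimal sketch, offered purely as an illustration rather than as a description of Snippe’s experimental setup. In the sketch, ask_model is a hypothetical stand-in for any language-model call: the delegative pattern returns an immediate answer, while the conversational pattern keeps the user articulating and revising their own reasoning across several turns.

    # Hypothetical contrast between delegative and conversational AI use.
    # ask_model is a placeholder, not a real API; the prompts and loop are assumptions.

    def ask_model(prompt: str) -> str:
        """Stand-in for a call to a language model."""
        return "[model response to: " + prompt + "]"

    def delegative_decision(question: str) -> str:
        # Delegative pattern: hand the question over and accept an immediate
        # answer, with little reflective engagement from the user.
        return ask_model("Decide for me: " + question)

    def conversational_decision(question: str, rounds: int = 3) -> str:
        # Conversational pattern: the model prompts the user to articulate,
        # question and refine their own reasoning over several turns.
        draft = input("Your initial view on '" + question + "': ")
        for _ in range(rounds):
            probe = ask_model("Ask one critical question about this reasoning: " + draft)
            print(probe)
            draft = input("Revise your reasoning in light of that question: ")
        return draft                              # the decision remains the user's own

On this reading, it is the second, slower loop that scaffolds the analytical reasoning associated with Dual-Process Theory, while the first invites the user to outsource the judgement altogether.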


Snippe’s findings illustrate that AI is not merely a source of information; it can also shape the cognitive pathways through which decisions emerge, an insight with profound implications for democracy. The health of democratic institutions depends on the capacity of citizens to evaluate information, reason through complexity and participate in public deliberation. If conversational AI strengthens these capacities by scaffolding reflective reasoning, it may enhance democratic participation; but if delegative AI becomes dominant, citizens may increasingly outsource judgement to machines, weakening their cognitive autonomy. The integration of emotionally persuasive AI systems, as highlighted by Al-Ani, would intensify this risk: should such systems guide decisions through psychological influence, the democratic choices that individuals make may become vulnerable to manipulation.

 

Taken together, the Winelands presentations reveal that AI is simultaneously an opportunity and a threat. It is an opportunity in that it expands inclusion by reducing administrative barriers, supporting multilingual access and empowering citizens with personalised tools, while offering efficiency gains and improved service delivery; it can also strengthen deliberative reasoning through conversational engagement. Equally, however, it poses a potential threat to democratic norms when authority shifts toward opaque algorithmic systems, in that it may challenge institutional oversight and enable subtle but pervasive forms of influence that could erode cognitive independence and democratic institutions.


The future of AI hinges on the choices that public policymakers make now. If governments design citizen-centred systems like Al-Ani’s proxy-and-agent model, retain strong human oversight as van der Steen argues, use AI to strengthen administrative justice as in the cases Manicom highlights, and support reflective reasoning rather than cognitive outsourcing as Snippe’s research advises, then AI can deepen democratic inclusion. Without such safeguards, however, AI risks accelerating institutional erosion, diminishing cognitive autonomy and concentrating power in unaccountable technological and commercial systems.

 

From this analysis, one conclusion emerges clearly: AI’s democratic consequences are not a product of fate, but of governance. Societies that design AI with intentionality, transparency and an unwavering commitment to public value may gain a more inclusive and effective democratic order. Those that fail to do so may find that the most powerful technology of the century has quietly reshaped their institutions, their politics and their citizens’ minds.

 

Infographic 3: AI’s Democratic Impact: A Matter of Choice

(Source: Author, 2025)

 

 

This policy brief draws on the presentations made by the academics listed below at the Stellenbosch University School of Public Leadership’s Winelands Conference, held from 22 to 24 October 2025.

 

  • Al-Ani, Ayad. Einstein Center Digital Future, Berlin

  • Manicom, Gray. Policy Innovation Lab, Stellenbosch University

  • Snippe, Linda. Inholland University of Applied Sciences

  • Van der Steen, Martijn. Erasmus University, Rotterdam

 

The synthesis of this policy brief was aided by the use of AI, drawing solely on the presentations made by the aforementioned academics. The brief was subsequently amended, verified and edited by the authors.

 



 


 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -



This report has been published by the Inclusive Society Institute

The Inclusive Society Institute (ISI) is an autonomous and independent institution that functions independently from any other entity. It is founded for the purpose of supporting and further deepening multi-party democracy. The ISI’s work is motivated by its desire to achieve non-racialism, non-sexism, social justice and cohesion, economic development and equality in South Africa, through a value system that embodies the social and national democratic principles associated with a developmental state. It recognises that a well-functioning democracy requires well-functioning political formations that are suitably equipped and capacitated. It further acknowledges that South Africa is inextricably linked to the ever transforming and interdependent global world, which necessitates international and multilateral cooperation. As such, the ISI also seeks to achieve its ideals at a global level through cooperation with like-minded parties and organs of civil society who share its basic values. In South Africa, ISI’s ideological positioning is aligned with that of the current ruling party and others in broader society with similar ideals.


Phone: +27 (0) 21 201 1589

 
 
 
