CfA: Governing Artificial Intelligence

Mancept Workshops 7-10 September 2021 (online)


Convenor: Markus Furendal, Postdoc, The Global Governance of AI

Department of Political Science, Stockholm University


Political philosophers have recently started to address the moral and social implications of increasingly complex Artificial Intelligence (AI), including the way that the biases of AI systems might entrench existing injustices, and whether decisions made by opaque AI systems can be legitimate. Philosophers and engineers also debate how best to rein in a hypothetical future “superintelligent” AI system, and how to make sure that its values align with ours. Yet countless other kinds of AI technology have already begun to be implemented throughout society, and even though many of them have potentially disruptive effects, there is still no systematic work addressing how the development and deployment of AI, more generally, ought to happen. The advent of AI technology is not merely a technical issue, and moral questions arise not only with regard to its applications. How AI is developed and deployed is also a political question, calling for collective decisions about what society ought to be like.


For instance, the current AI boom has in part been enabled by publicly funded research and the collection of data about people’s daily behavior, yet most AI technology is developed and owned by relatively few large corporations. Since AI technology promises to be tremendously profitable for those who own it, this raises the question of whether this state of affairs is just, or whether the value and productivity growth created by the technology ought to be shared more widely. Similarly, AI technology might provide us with greater convenience and increased human capacities, at the cost of values like privacy or community. Current decisions about what kinds of AI technology to develop are mostly made by private companies, based on a market logic, but there may be fairer or more democratic alternatives. Who ought to have a say in this, and what kinds of institutions would have to be developed to ensure that it happens? Serious discussion of questions like these arguably needs to be informed by theories of justice, democracy, freedom, and power.


The purpose of the panel is to gather academics from different career stages to address this relatively neglected part of the rapidly evolving literature on the social and ethical impact of AI. Papers concerning moral issues involving the application of AI technology will be considered, but the aim is to interrogate issues at a higher level of abstraction, concerning what the social and ethical impact of AI technology will be, and how it ought to be shaped politically. This includes, but is not limited to, addressing questions like what the societal implications of rapidly developing AI technology are and what they ought to be, how to conceptualize ownership of AI technology and who gets to profit from it, how legitimate decisions about the development of AI ought to be made, and how to design (global) institutions to govern AI technology.


Submission guidelines:

Please send an abstract of no more than 500 words to by May 10. The abstract should be prepared for blind review. Please include a separate document with the title of your paper, your name, e-mail address, and institutional affiliation (if applicable). Acceptance/rejection decisions will be communicated within two weeks. Speakers will be asked to submit complete papers before the workshop, which will be circulated among the panel participants.

Information about Mancept Workshops:

Due to uncertainty about the pandemic, this year’s Mancept workshops will be fully online. Please note that all participants at this panel need to register for the Mancept workshops. Registration opens in May. This year’s fees are:


Full price (employed academics, e.g. lecturers, professors): £45

Discounted price (PhD/Master’s students, unaffiliated academics, etc.): £20

Non-presenting attendee: £15


For more information about the Mancept workshops, see and please direct general queries to


Questions regarding the specific panel and the submission should be directed to
