SoBigData Articles

Responsible AI: from principles to action

From 6 to 9 October 2020, the 12th International Conference on Social Informatics (SocInfo 2020) took place in a digital format.

SocInfo is an interdisciplinary venue for researchers from Computer Science, Informatics, Social Sciences and Management Sciences to share ideas and opinions, and to present original research work about the interplay between socially-centric platforms and social phenomena. The ultimate goal of Social Informatics is to create a better understanding of socially-centric platforms not just as a technology, but also as a set of social phenomena. In this edition, presenters addressed a variety of topics, from human migration to gender equality, from misinformation to fact-checking, from polarization to ideological bias of opinions, and from unemployment to health.

The Conference also featured four high-profile invited speakers. In particular, Virginia Dignum, Professor of Responsible Artificial Intelligence at Umeå University in Sweden and associated with TU Delft in the Netherlands, gave a talk on Responsible AI: from principles to action.

In her presentation [1], Professor Dignum started with a definition of the AI systems that we need to care about; she then explained why and how we must take responsibility for such systems, what the dimensions of responsible AI are, and how we can operationalize the guidelines.

Indeed, she pointed out that the main concerns, and the main focus of attention, regard systems that, depending on the technique used to develop them, are capable of some autonomous behavior (i.e., proactive behavior without being directed by a user), are able to adapt their behavior somehow (depending on the environment and the physical context), and do so in interaction with us. Thus, AI systems are not just algorithms, machine learning, or statistics.

We care about taking responsibility because AI systems make decisions that can have ethical grounds and consequences. For this reason, we need to make sure that the purpose put into the machine is the purpose we really want, i.e., what we actually meant.

 

Professor Dignum also discussed the decisions that we want AI systems to take. First of all, what are these decisions? (In a provocative example, she imagined a self-driving car that drives you to the gym instead of to the fast food you asked for.) Secondly, for whom are the decisions taken? Here, the scientific community agrees that a decision must be optimal for a whole group or, even better, for society as a whole. The trickiest part regards how AI systems should make decisions: we need to distinguish the stakes involved (there is a profound difference between suggesting a movie for the night and denying a mortgage), the legitimacy of a decision (i.e., who is controlling the system we are using), and how we can aggregate different opinions or interpretations (even in democratic systems, there are differences among the various voting systems). A decision should ideally align three dimensions that measure its quality: it should be legally allowed, morally acceptable, and socially accepted. An additional difficulty is that all three dimensions also change over time and across different cultures or subgroups. We need to fully understand the context in which a decision is made in order to be aligned with these dimensions.

In her talk, Professor Dignum also outlined three (not orthogonal) ways of taking responsibility:

- In design: ensuring that development processes take into account the ethical and societal implications of AI and its role in socio-technical environments. This corresponds to the classical engineering approach "do things right, and do the right things." Of course, the problem is deciding what "right" is. Professor Dignum presented the approach developed in her research group at Umeå University, the "glass box", which relies on formal verification methods. The glass box approach mainly aims at monitoring existing AI systems (in terms of both functionalities and results) and checking whether they are aligned with ethical principles (a purely illustrative sketch of such a monitor is given after this list).

- By design: building systems that integrate ethical reasoning abilities as part of the behavior of artificial autonomous systems. This implies that AI systems should reason about the decisions they make (i.e., AI agents should discriminate between right and wrong).

- For design(ers): verifying the integrity of stakeholders (researchers, developers, manufacturers,...) and of institutions to ensure regulation and certification mechanisms.
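To make the monitoring idea behind the "in design" approach more concrete, here is a minimal, purely illustrative sketch in Python: it observes the logged decisions of an opaque system and checks them against explicitly declared norms. It is not Professor Dignum's actual framework (which relies on formal verification methods), and all names (Norm, Monitor, loan_decisions) and the mortgage scenario are hypothetical.

# Illustrative only: a toy "glass box"-style monitor that observes an AI system's
# decisions from the outside and checks them against explicitly declared norms.
# All names and the example norms are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Norm:
    """An ethical or legal requirement expressed as a predicate over one decision."""
    name: str
    holds: Callable[[Dict], bool]


class Monitor:
    """Audits a log of decisions and reports which declared norms each one violates."""

    def __init__(self, norms: List[Norm]):
        self.norms = norms

    def audit(self, decisions: List[Dict]) -> List[str]:
        violations = []
        for i, decision in enumerate(decisions):
            for norm in self.norms:
                if not norm.holds(decision):
                    violations.append(f"decision {i} violates '{norm.name}'")
        return violations


# Hypothetical norms for a mortgage-approval system (the example domain mentioned above).
norms = [
    Norm("no use of protected attributes",
         lambda d: not d["used_features"] & {"gender", "ethnicity"}),
    Norm("explanation provided for denials",
         lambda d: d["approved"] or bool(d["explanation"])),
]

loan_decisions = [
    {"approved": False, "explanation": "", "used_features": {"income", "gender"}},
    {"approved": True, "explanation": "n/a", "used_features": {"income"}},
]

for issue in Monitor(norms).audit(loan_decisions):
    print(issue)

In this toy setup, the first logged decision would be flagged twice (it uses a protected attribute and denies the mortgage without an explanation), while the second passes; the point is only that the norms are stated explicitly and checked against the system's observable behavior, not baked implicitly into the system itself.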

Finally, Professor Dignum went into more depth about the aim of ethical reasoning: it is not about finding the answer (usually there is no single right answer) but about recognizing the issue. Indeed, AI systems should support all users (everybody who uses, or is subject to a decision of, an AI system must be able to use it). Moreover, AI principles are principles for us. Trustworthy AI outcomes (the main principles, values, and methodologies behind an AI system) should give us (all of us) the tools to make decisions that are aligned with our own principles. It is like eggs: we do not need to be able to tell a regular egg from an organic one ourselves (there are experts for that), but through the existing (trusted) certifications we have enough information to choose between the different options.

 

For further information, please see [1,2].

Written by: Francesca Pratesi

References:

[1] https://vimeo.com/event/354779/videos/466543084/

[2] Dignum, Virginia. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer Nature, 2019.