BuzzFeed popularized Face2Face technology with its fake Obama video.

The dilemma of ethical technology

Countless initiatives are keenly searching for solutions to ensure the fair and equitable development of technology, especially of artificial intelligence. The at-times questionable commitment of their leadership, the hurdles along the road, and the need to face contradictions and make tough decisions all increase the risk that they will end up as just another piece of paper lost in a CSR department.

27 February 2019, 10:00

Who wants technology to be ethical? Sorry to begin this article with a question, but it is a fundamental starting point for which there is no clear answer. Years ago, merely asking such a question was considered bizarre. Now, the boom in artificial intelligence (AI) applications, their imperfections and their illicit derivative uses have led to a flurry of initiatives: declarations, manifestos, principles, guides, analyses and so on, focused on measuring the social impact of such systems and developing an ethical AI.

It looks like a tipping point has been reached. Institutions, governments, technology companies and consultants, think tanks and all kinds of organisations have jumped on the bandwagon. Technological giants such as Google and Facebook have created their own ethical codes and principles. Microsoft has created the AI Now Institute "to ensure that AI systems are sensitive and responsive to the complex social domains in which they are applied". The European Union has several related projects, and so do various European governments. In the United Kingdom, the House of Lords has declared its intention that the UK should lead the way; the country has created a Centre for Data Ethics and Innovation (CDEI) and, unlike Spain, has a plan to address the ethics of AI.

As a statement of intent, the concept of a code sounds good, but are such principles and plans ever realised? Putting these kinds of ethical guidelines into practice is often tricky, especially for those who have no intention of doing so. "They tell you 'dear user, don’t worry, we will never share your data without your permission' while, in fact, they are doing it," said Jonathan Penn, a technologist and writer affiliated with the Berkman Klein Center at Harvard University, during his participation in the HUMAINT winter school on the ethical, legal, social and economic impacts of AI, organized by the Joint Research Centre of the European Commission in Seville.

As an example, Penn cited the analysis of 1,000 Android applications undertaken by the non-profit organisation Privacy International, which found that 61% of them share information with Facebook the moment a user opens the app, without asking for permission and regardless of whether or not the user even has a Facebook account. It has also been revealed that Google shares highly sensitive user data categorised under tags such as "mental health", "cancer", "impotence" or "drug abuse". This was uncovered in a complaint filed in Europe by the private web browser Brave, the Open Rights Group and University College London, which accuses the search engine of violating the General Data Protection Regulation (GDPR) through a "massive leakage of highly intimate data".

A few months ago, Google's CEO, Sundar Pichai, published a list of principles for the development of AI in which he assured that the company would incorporate its privacy principles into the development and use of its artificial intelligence technologies. Is he talking about the same principles that led the company to share its users' data (from hundreds of Android apps) with Facebook without the users' permission, or about the principles that have allowed Google to sell users' most intimate searches to third parties?

Beyond data privacy, there are many other risks associated with the design and implementation of AI systems; one of them is algorithmic bias. "All non-trivial decisions are biased. There is a lack of understanding about the context of use, and there is no rigorous mapping of the decision criteria used by these systems, which also lack explicit justification for the chosen criteria", claims Ansgar Koene, researcher at the Horizon Digital Economy Research Institute of the University of Nottingham and founder of the UnBias project against "often unintended and always unacceptable" algorithmic biases.

These biases can have serious effects. Among the cases uncovered are gender discrimination in hiring, racial discrimination in court decisions, and health-related discrimination in the setting of insurance terms. Employment, dignity, freedom, income and even health are at stake. The examples are numerous, not only of biases but also of failures and unexpected outcomes. The Verge revealed in 2018 how a coding error in an algorithm caused an unjustified reduction in coverage within the health system of Arkansas (USA) for patients with cerebral palsy, who were assigned fewer hours of care than they were entitled to.
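
To make one of these checks concrete, the short sketch below (in Python, with invented numbers rather than data from any of the cases above) computes the selection rate per group in a hypothetical screening process and the ratio between the lowest and highest rates, a simple signal sometimes used to flag possible disparate impact before digging deeper.

    # Toy example with invented data: compare selection rates across two
    # hypothetical groups and report the ratio between them.
    from collections import defaultdict

    # (group, was_selected) pairs from a hypothetical screening process
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected

    rates = {g: selected[g] / totals[g] for g in totals}
    print("Selection rates:", rates)

    # A ratio far below 1.0 flags a disparity worth investigating;
    # it does not, by itself, prove that the system is biased.
    ratio = min(rates.values()) / max(rates.values())
    print(f"Disparate impact ratio: {ratio:.2f}")

A low ratio in a check like this is only a symptom; as Koene notes, understanding the context of use and the decision criteria behind the system is what turns such a signal into a diagnosis.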

How can appropriate algorithms be developed? Koene, also a participant in the HUMAINT winter school, points out that this is not just a technical problem: it also requires a socially defined construct. In a very different context, but with full application to this case, the director of UNESCO's Division of Creativity in Culture, Jyoti Hosagrahar, says that "making culture a central element of development policies is the only means of guaranteeing that such policies will be human-centred, inclusive and equitable". And this includes AI.

It is evident that the ethical development of technology goes beyond algorithms, as Virginia Dignum, member of the European Commission's High-Level Expert Group on Artificial Intelligence and professor in the Department of Computer Science at the University of Umeå (Sweden), also points out. Dignum, co-organizer of the HUMAINT winter school together with the Spanish researcher Emilia Gómez, describes various levels of responsibility in the development of 'ethical by design' systems, intended to ensure that those processes take full account of the ethical and social implications of AI (the greater the autonomy, the greater the responsibility). As one potential solution, she mentions setting up a certification scheme that fosters people's acceptance of these systems and increases public confidence in them.

Dignum also highlights concomitant challenges, such as identifying the relevant human values that can underpin ethical frameworks. Are there universal values? Which cultural norms, ethical theories, codes and laws should be considered? How should the differences between what is socially accepted, what is ethically correct and what is legally allowed be weighed? Who has a say in this, and why? Designers, users, owners, manufacturers... all of them? And how can systems be implemented on the basis of the answers?

These questions are addressed by the EC expert group, to which Dignum belongs, in the first draft of its AI Ethics Guidelines. The researcher recognises that the text is still very vague and rather unspecific. Maybe that is why it has received an enormous number of comments, which the experts do not yet know how to handle.

Explainable AI

Another challenge in the development of ethical AI systems is to make their results understandable and interpretable. "AI is a sort of oracle: its answer may be right, but we have no idea where it comes from," Dignum says. That is why companies, universities, research centres and others are trying to find formulas that give more precise knowledge of how these systems reach a specific result. This is known as 'explainable AI'. But what does that mean in practice? "It must be adapted to the user and explain what the system can and cannot do. It has to be correct but not too complex; understandable, comprehensive and timely", Dignum explains.

For Dignum, making a system interpretable does not always mean making the black box that AI is today entirely transparent. "It is an illusion to believe that each system will be transparent. Organisations are not transparent", she says. For this reason, she believes an intermediate solution is to provide specific parameters that make it possible to know how a result was obtained, without disclosing details that would compromise patents. An alternative is to opt for certification systems that do not expose such details.
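
As a rough illustration of what 'providing specific parameters' might look like, the sketch below (again in Python, with invented feature names and weights rather than any real scoring system) breaks a simple linear score for one decision into per-feature contributions, the kind of summary that could be reported to a user without exposing the full model.

    # Toy example: explain one decision of a hypothetical linear scoring model
    # by listing how much each feature contributed to the final score.
    weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}   # invented
    applicant = {"income": 3.0, "debt_ratio": 1.5, "years_employed": 4.0}  # invented

    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())

    print(f"Score: {score:.2f}")
    # Report features in order of how strongly they pushed the score up or down.
    for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {value:+.2f}")

Real systems are rarely this simple, which is precisely why Dignum frames explanations as something adapted to the user: correct, but not overwhelming in detail.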

Either way, the researcher points out that all of these are essential elements but not the only ones to consider: "This is not only about AI but about the whole system: how AI is used, who uses it, for what purpose, in what context, what the business model is…". Penn, instead, takes a step back and considers it worth exploring whether we really need to quantify every part of our lives, and which human needs justify the development of specific applications based on data and AI.

The technologist also believes that, in order to generate genuine trust in these technologies, it is essential that their value is distributed. Whilst Penn recognises that AI is augmenting human capabilities, he believes that, for the time being, there are no signs of AI benefiting the common good. "Nothing today gives us grounds to expect that anything will change. The promising future that was sold to us is not quite as bright as promised; not, at least, until we have social protection and benefits", he adds.

Penn also believes that the name "artificial intelligence" is a hoax, since the technology is not intelligent. He thinks that a more appropriate name would be 'complex information processing', although, of course, that name is not as appealing. As a historian of science, he recalls that, at the very beginning, AI was focused on solving problems, not on intelligence. The technology was centred on human behaviour, something closer to Hosagrahar's vision of people-centred development.

The dilemma of ethical technology is not a single dilemma: there are many. In the business arena, there are contradictions between moral duty, the desire to maximise profits, and obligations to shareholders. In the field of ethics, there are dilemmas over shared values, ethical frameworks and theories. In governance, the dilemma comes from the struggle between the forces that strive to perpetuate the status quo and those that advocate a change in the system and its development model. In the sphere of research, the dilemma lies between pursuing efficiency and genuinely trying to imitate human intelligence. In terms of suitability, the dilemma is whether or not to create a technology if it does not address a human need and instead represents a threat to humanity. There is a dilemma even in the naming of AI: does it deserve to be called 'intelligence'?

Nobody said it would be easy, but being aware of the obstacles on the road is the first step to overcoming them. Let us take advantage, while we still have the chance.