Blog articles

Europeans for Safe Connections call for stronger regulation of Artificial Intelligence in Decision Making

Europeans for Safe Connections is a coalition of national and international organisations that are aware of the adverse consequences of modern communication technologies. We emphasise that we are not against technology, but in favour of safe technology and safe connections.

During the one year of our campaign we have learnt lessons that can be useful for future organisers. First, if you hear that true voice in your heart telling you to change the world for the better, for everything and everyone: follow that voice, fight and never give up hope!

If you go with the flow, people will easily go along with you and you will not face resistance. But if you have to go against it, your message may be needed the way parched soil that repels water needs soft, persistent rain. Although the EU requires a million signatures, it is the quality of your ideas that counts.

In our European Citizens' Initiative (ECI) "Stop (((5G))) Stay Connected but Protected" we have 23 proposals. Among them we call for better regulation of data privacy and of automated decision-making by artificial intelligence. We propose to launch an impact assessment of the effects of 5G on personal data protection (proposal 19), we wish to see an active fight against discrimination and digital rights violations (proposal 21), and we think that citizens should be informed whether their data are processed by an automated procedure (proposal 22).


How it all began

Artificial Intelligence (AI) has been around for quite some time. Already in the early 1950s, expectations were high about the endless possibilities intelligent technology would bring to our society. Now, more than half a century later, AI-infused technology has slowly crept into our daily lives. Although humanoid robots are not yet walking the globe, we do rely on multiple complex technologies in our infrastructure management, work processes and spare time.

The current ‘smart’ technologies might differ from what earlier scientists would call human-like intelligent machines. Whereas Alan Turing defined intelligence as thinking and acting like humans, nowadays smart systems vacuum our houses with limited thought. Defining what exactly AI is and what it entails is difficult. Nevertheless, it has allowed us to live more efficiently, more smoothly and, perhaps, even more enjoyably.

But the drawbacks of endless automation and robotisation are also becoming more and more evident. Take, for example, the female applicants at Amazon: rejected because the algorithm had learned to favour men over women. Or Microsoft’s chatbot Tay on Twitter, which had to be taken offline because it had inferred some extremely racist ‘truths’ from fellow tweeters. Or the fact that mainly pictures of men appear for the search term ‘CEO’ on Google.

We may think that AI brings out the worst in people and deepens existing inequalities. Yet that conclusion would be somewhat simplistic. AI systems and the underlying algorithms often rely on data, lots of data, to learn about our world. Machine learning techniques, such as neural networks and decision trees, attempt to infer trends, links between concepts and important parameters that help them choose the right option in future requests. This data was not made up for the sake of machine learning. No, most of it was generated by us, humans, while clicking around on the internet and sharing our preferences. By learning from our data, AI systems thus pick up and reproduce systematic biases that were already present, to some extent, in our society.

This makes the implementation of smart technologies not just a technological but also a societal and ethical matter. For these reasons some researchers argue that engineers have long been hiding behind the technological aspects of AI, focusing on improving the calculations while neglecting the effects their innovations might have on the end users. Technology places itself between a developer and the outside world. This article describes three issues: discrimination, accountability and black box logic.
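How quickly a model absorbs such a bias can be shown with a few lines of code. The sketch below uses entirely synthetic ‘hiring’ data in which past decisions penalised women; a standard model trained on those decisions learns to do the same, even though nobody programmed it to. All names and numbers are made up for illustration.

```python
# Synthetic illustration: a model trained on historically biased hiring
# decisions reproduces that bias without ever being told to.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)                 # 0 = male, 1 = female (made up)
skill = rng.normal(0.0, 1.0, n)                # the attribute that should matter
# Past decisions favoured skill but applied a penalty to women:
hired = (skill - 0.8 * gender + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two applicants with identical skill, differing only in gender:
candidates = np.array([[0, 1.0], [1, 1.0]])
print(model.predict_proba(candidates)[:, 1])   # the second (female) scores lower
```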

Winter country with 20 telecommunication masts among firs and spruces
Credit: Public Domain
Source: https://labs.openai.com/

Discrimination and bias handling

Just like the female applicants at Amazon, people from minority groups fall outside the effective scope of many AI systems. The reason is evident from the name: these are people from a minority. Their representation in the data will be limited and the algorithm will not learn the specific features representing these individuals. Like humans, systems perform worse with limited knowledge. The result: black individuals are labelled as apes by Google’s intelligent image-reading software, or as more dangerous by an automatic risk assessment system for recidivism. Simply because the software was trained on pictures of white individuals (and perhaps gorillas).

Data scientists have been aware of this problem and there are already techniques to improve performance: for example, by adjusting the data set so that minority groups are better represented, or by adding an extra step to the machine learning process to fine-tune the model.
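As a minimal sketch of the first idea, the hypothetical helper below duplicates rows of an under-represented group until it is as large as the majority group, so the model sees it equally often during training. The function and variable names are invented for illustration; reweighting samples instead of duplicating them would have a similar effect.

```python
# Minimal sketch (invented names): oversample an under-represented group so it
# appears as often as the majority group in the training data.
import numpy as np

def oversample_minority(X, y, group, minority_value, seed=0):
    """Return X, y with minority-group rows duplicated at random until the two
    groups are equal in size. Assumes the group labelled `minority_value` is
    the smaller one."""
    rng = np.random.default_rng(seed)
    minority = np.where(group == minority_value)[0]
    majority = np.where(group != minority_value)[0]
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    keep = np.concatenate([majority, minority, extra])
    return X[keep], y[keep]
```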

And to make the discussion even more complicated: what if our system predicts outcomes very well? Assume we develop two algorithms. One correctly detects a disease 80% of the time in white individuals but only 60% of the time in individuals of colour. A second one correctly detects the disease only 60% of the time, no matter the background. Should we then strive for equality and take the worse algorithm, even though the discriminating one could potentially save more white individuals? This is where ethical considerations come into play.
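To make the trade-off concrete, here is a back-of-the-envelope calculation. The population split (1,000 patients in each group, all of whom have the disease) is an assumption made purely for illustration; only the detection rates come from the example above.

```python
# Hypothetical population: 1,000 white patients and 1,000 patients of colour,
# all of whom actually have the disease (numbers chosen only for illustration).
white, colour = 1_000, 1_000

# Algorithm 1: unequal accuracy (80% / 60%); Algorithm 2: equal accuracy (60% / 60%).
detected_1 = 0.80 * white + 0.60 * colour   # 1,400 cases detected
detected_2 = 0.60 * white + 0.60 * colour   # 1,200 cases detected

print(detected_1, detected_2)
# The "discriminating" algorithm finds 200 more cases overall, but only because
# it works better for one group. Which one to deploy is an ethical question,
# not a purely technical one.
```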

Our data scientist has just become a person shaping the fate of a million others and must suddenly make difficult ethical choices, dilemmas that the public debate has not yet resolved. We cannot expect engineers to make those decisions, nor should we want them to. Regulations are necessary to guide software design.

Artificial Intelligence is a good servant but a bad master.
Credit: Stop 5G Team 

Accountability and responsibility

In our society, individuals are held responsible for their deeds. With intelligent systems, it is difficult to identify the culprit, especially when the systems are complex and self-learning. Engineers cannot always predict what the system will learn or how it will behave. Amazon probably did not intend to disadvantage female applicants, nor did Google consciously put males at the top of the search results. Only after the systems were put out into the world did these consequences appear. But whom should we blame? The company that used the system, even though it had no reasonable grounds to doubt its quality beforehand? Or the company that built the system and sold a product that turned out to discriminate?

Innovations have always been disruptive and not without risk. They demand adaptations in our society and justice system. Take the car. In its early days, a car could move freely around our cities without seatbelts, airbags or road signs, until the number of casualties grew rapidly and streets became unwalkable. Novel guidelines and regulations were necessary to fit the new technology into the existing infrastructure. Few had predicted that the car would become so dangerous for pedestrians. By regulating its use, we were able to increase safety while also reaping the benefits of this new type of transport. Nowadays, we can hardly imagine a world without motor-driven transport.

Just as with cars, banning AI systems because of their initial dangerous implications would be short-sighted. AI systems can make, and in some places already are making, a positive impact on our society. Yet, at this point, AI systems are developed and dropped into our daily lives without any ‘seatbelts’ or other safeguards. It is important to think critically about how we want AI to exist in our society, and to open the conversation on how we can increase the safety of these systems or decrease the damage in case of unexpected outcomes.

The red flag 
Credit: Stop 5G Team

Black Box

The GDPR states that people have the right to see the grounds on which decisions about them were made, what data is collected and how this data will be used. This relatively novel law has been a step in the right direction, but it is far from a proper solution for establishing privacy or upholding civil rights. When visiting a website, users are often confronted with large amounts of text vaguely explaining what personal data is collected. And most of the time it is hard to reject cookies, or you must click through several pop-ups. Companies follow the bare minimum of the GDPR and do not make it easy for individuals to oversee their own data. We therefore believe that the GDPR is a naive initiative that mainly exposes online companies' hunger for data.

But even if companies were more willing to share the true collection and usage of personal data, they are not always fully able to. Many intelligent systems function like black boxes: feed in lots of data and the system will give a certain output depending on the features of that data. In recent years, engineers have favoured these black box systems because they have a high potential for learning more complex concepts such as language or images. Famous examples of black box systems are neural networks, face recognition software and natural language processing software (e.g. Google Translate). Engineers do have control over some parameters, but they have no insight into the kind of information these systems are learning or inferring from the data. Only by checking performance on novel data can an engineer estimate whether the system has learned what it was supposed to. An engineer could, for example, feed in a set of new images to see if the system is able to interpret them. But as we saw earlier, if the engineer has not tested the system thoroughly enough, photos of people of colour might be interpreted as photos of apes. Could the engineers at Google have known about this error? Well, if they had tested the software on a set of photos of people of colour, they might have. But photos can contain just about anything, and it would be very hard to verify the system on everything.
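Because the reasoning inside such a system stays hidden, the checks have to happen at the outputs. A minimal sketch of what that might look like, assuming a scikit-learn-style model and made-up variable names: rather than reporting one average score, the held-out results are broken down per group, the kind of test that would have flagged the problems described above.

```python
# Minimal sketch (hypothetical variable names): audit a trained model on data
# it has never seen, reporting accuracy separately for each group instead of
# one overall average that can hide poor performance on a small group.
import numpy as np
from sklearn.metrics import accuracy_score

def per_group_accuracy(model, X_test, y_test, groups):
    """Print held-out accuracy for each group present in `groups`."""
    for g in np.unique(groups):
        mask = (groups == g)
        acc = accuracy_score(y_test[mask], model.predict(X_test[mask]))
        print(f"group {g}: accuracy {acc:.2f} on {mask.sum()} samples")
```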

It would be more efficient to verify what the software has actually learned. If the Google algorithm could tell us what steps it takes to reach an interpretation, the engineers could verify this reasoning and anticipate probable exceptions or error cases. That is why members of the scientific community have been calling for more understandable approaches to machine learning. Black box algorithms have not yet lived up to their potential and are not necessarily better than more understandable algorithms.

The interpretability advantage of these understandable algorithms is bigger than the expected performance advantage of black box algorithms. Only if we know what is going on can we intervene or adjust accordingly.
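As a contrast, the sketch below trains a small decision tree, one example of such an understandable algorithm, on a public scikit-learn dataset and prints the rules it has learned. The point is only to illustrate that the learned reasoning can be read and questioned by a human; it is not a claim about any specific system mentioned above.

```python
# Minimal sketch: a shallow decision tree can simply be printed, so a human
# can read the learned decision rules and spot dubious ones.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))
```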

Watching the black box 
Credit: Stop 5G Team 

Conclusion

AI and intelligent software are omnipresent in modern life, influencing decision-making processes in companies and exhibiting biases against minority groups. All the while, we do not fully understand how artificial intelligence works, how it is affecting us or what the long-term effects will be.

EU citizens have not been asked whether they accept the pervasive societal consequences of AI-guided decision-making tools, introduced in the name of technological progress and digitalisation.

Therefore, in the ECI "Stop (((5G))) – Stay Connected but Protected" we are calling for stronger regulation to protect citizens from privacy violations and discrimination resulting from uncontrolled use of AI systems in decision-making, in proposal 19, proposal 21 and proposal 22 of the ECI.

And we are not alone:

  • The European Data Protection Board is critical of new legislative proposals by the European Commission that would facilitate the use and sharing of (personal) data between more public and private parties. According to the European Data Protection Board, this will “significantly impact the protection of the fundamental rights to privacy and the protection of personal data”.
  • The European Council stresses the importance of a human-centred approach to AI policy, pointing to issues such as biased and opaque decision-making affecting citizens’ fundamental human rights.
  • The UNESCO Preliminary Study on the Ethics of Artificial Intelligence states on page 10 that "It is most important to ... educate future engineers and computer scientists for ethically aligned design of AI systems."
  • Another European Citizens’ Initiative (ECI), named Reclaim Your Face, demanded a ban on harmful uses of AI such as biometric mass surveillance and face recognition.
  • Even back in 1942, Isaac Asimov foresaw the problems and formulated the Three Laws of Robotics. The first law is that a robot shall not harm a human. As shown in this article, we are far from that.

Contributor

Amar van Uden

Amar van Uden is a writer for the European Citizens' Initiative (ECI) "Stop (((5G))) – Stay Connected but Protected". Amar is from the Netherlands and studies Artificial Intelligence.

European Citizens' Initiative Forum is a platform operated by the European Citizen Action Service (ECAS) on behalf of and under contract to the European Commission.
This article was first published there on: 24/01/2023

https://europa.eu/citizens-initiative-forum/blog/europeans-safe-connections-call-stronger-regulation-artificial-intelligence-decision-making_en

© 2024 Europeans for Safe Connections.