Algorithms are our bread and butter. Algorithms make decisions about our salary, our access to credit, or whether we will be seen at a hospital or not. In most cases, we do not even realise it. Thanks to Ashoka Spain for putting us in contact with Gemma Galdón, founder of Eticas Consulting, who told us about the impact algorithms have on our lives, the biases and discrimination that occur in many cases, and what must be done to create truly intelligent artificial intelligence: an AI that puts people in the limelight.
Even just breathing, we generate data. Almost all phones, even if we do not have a fitness app, have accelerometers and can track the steps we take, how we move, when we wake up and when we go to sleep. We generate data when we chat online, when we send an email, when we speak in a videoconference, when we go out and a security camera or someone else's device captures our image, and when we shop online.
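To make the step-counting claim concrete, here is a minimal, hypothetical sketch of how a phone might infer steps from raw accelerometer readings: it counts upward crossings of the acceleration magnitude over a threshold. The readings and the threshold are invented for illustration; real pedometers use far more robust filtering.

```python
import math

# Hypothetical illustration: inferring steps from accelerometer readings.
# Each reading is (x, y, z) acceleration in m/s^2; the values are invented.
readings = [
    (0.2, 9.8, 0.1), (1.5, 11.2, 0.4), (0.3, 9.7, 0.2),
    (1.8, 11.5, 0.5), (0.1, 9.9, 0.1), (1.6, 11.0, 0.3),
]

THRESHOLD = 10.5  # m/s^2; an assumed cutoff above resting gravity (~9.8)

def count_steps(readings):
    steps = 0
    above = False
    for x, y, z in readings:
        magnitude = math.sqrt(x * x + y * y + z * z)
        # Count one step each time the magnitude crosses the threshold upward.
        if magnitude > THRESHOLD and not above:
            steps += 1
            above = True
        elif magnitude <= THRESHOLD:
            above = False
    return steps

print(count_steps(readings))  # 3 peaks -> 3 steps
```

The point is that a sensor nobody thinks of as sensitive can already reveal when you sleep, wake and move.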
To live is to give off data. This is great, because for the first time we have a lot of information on social dynamics that until now were exceedingly difficult to quantify, but it also brings many risks, and we must handle this data responsibly.
The data economy is a tight-fisted economy. It does not create ecosystems. In the world of data, only the best wins: it is a "winner takes all" economy that entrenches monopoly, and the underdogs do not stand a chance. There are a lot of people trying to become the next big data company, or waiting for one to buy them. Then there is also the state, for good or for bad. This has not changed much, because the state has always held a lot of information on its citizens. Look at what happens today: data is collected without an end goal; more data is collected than can be justified; permission to collect it is not asked for; people are not told why their data is being collected; algorithms that make discriminatory decisions are deployed without any kind of transparency or explanation; and algorithms are deployed without any mechanism that lets the people affected complain and say that they do not agree with the algorithm's diagnosis.
This is not only irresponsible, but illegal. Where the protection of rights in a digital world is concerned, there are business models based on data collection that are no longer legal under European law. If people are to be protected, you cannot simply do what you want with their data; you cannot market personal information. What has happened is that most companies have ignored this. That is why we are starting to see courts condemning algorithms. The problem is the time lapse between when a law is introduced and when it is enforced: today, the norm is non-compliance with data protection law.
In Spain there are algorithms deciding the risk of incidents for someone in jail, or the risk faced by abused women and their need for protection. If these algorithms fail, people should have mechanisms to raise their voice and defend themselves. In the offline world, mechanisms are established for us to defend ourselves, but in the online world this ecosystem for protecting people has not been established yet. Right now, the technological world is a wild west of rights: it is the survival of the fittest.
Technology always reinforces the power of those who already have power.
In the world of work this is more evident than in any other environment. Technology is always for controlling the worker. There is no technology for the worker to prove that they have worked extra hours, nor technology to demonstrate that they have an ailment caused by their job.
Technology is never neutral. It is always biased, because it is always developed at someone's request or to be put at the service of somebody. In the world of work, we see a deep dysfunction in whom the technology serves.
Cities: the famous smart cities received a ton of investment to make real the promise of putting the city at the service of its citizens. But we started to see that the actual practice had nothing to do with that; we were creating a "big brother". Since we were founded, we have been warning that everything that is intelligent is watching.
When you see the adjective “intelligent”: be suspicious.
For example, the manuals for smart TVs recommend not having sensitive conversations in front of the device, because its microphone could be constantly switched on; these devices record everything that happens around them. In a domestic environment it is one thing, because you have decided whether to make that purchase or not. In the context of a city it is another, because you do not get to decide.
If the smart city is really for the people, we have to think about technology for the people, and not use people as a cheap resource, which is how we end up feeling in many instances.
On the topic of migration, we have worked on evaluating the effect of smart borders on immigration processes, and we have seen terrible things. Many technological advances have been applied at borders. For example: borders are no longer only policed, they also have biometric kiosks where a person has their fingerprints or their face profiled, so that there are various ways to validate identity beyond a passport, which can easily be forged. The application of AI was seen as a step forward in the context of migration, but in the end what we are left with is negative impacts.
If your identity is not on a document but on your body (your fingerprints or your features), this is devastating for people who live under repression and have a legitimate need to emigrate. We have seen cases in which people who want to change their lives by living in another country choose to self-harm. Why? Because in the end your body gives away who you are, and a database gives you away.
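To see why "a database gives you away", here is a minimal, hypothetical sketch of how a biometric system identifies a person: a face or fingerprint is reduced to a numeric template and compared against every template stored at enrolment. All templates, identifiers and the threshold below are invented for illustration.

```python
import math

# Illustrative sketch: biometric identification against an enrolment database.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

database = {
    "person_0412": [0.11, 0.83, 0.42, 0.35],  # template enrolled at a border kiosk
    "person_0977": [0.90, 0.08, 0.33, 0.21],
}

MATCH_THRESHOLD = 0.98  # assumed decision cutoff

def identify(live_template):
    # A fresh scan is compared against every stored template; a high enough
    # similarity identifies you regardless of the papers you carry.
    for person_id, stored in database.items():
        if cosine_similarity(live_template, stored) >= MATCH_THRESHOLD:
            return person_id
    return None

print(identify([0.12, 0.82, 0.41, 0.36]))  # matches person_0412
```

Changing your passport changes nothing in this scheme: the template derived from your body is the identifier.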
The thing about technological innovation, and algorithms in particular, is that nobody has thought about the consequences and negative impacts they could have, and even less thought has gone into how to solve these problems. Advances are being made, albeit few and far between. We still live in a reality in which bad-quality technology is sold, and sold fast, just like everything else. We know that what really creates value is customised technology: technology designed for the needs of a client who understands the context in which they operate and can therefore help bring that context into the development of the algorithm, incorporating mechanisms to mitigate negative impacts along the way. Good technology is much less profitable.
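As one example of what a negative-impact mitigation mechanism can look like in practice, here is a sketch of a basic disparate-impact check: comparing an algorithm's favourable-decision rates across groups using the well-known four-fifths (80%) rule from employment auditing. This is an assumption-laden illustration, not Eticas' actual methodology, and the decisions and group labels are invented.

```python
# A minimal, illustrative bias audit: compare favourable-outcome rates across
# groups and flag disparate impact using the four-fifths (80%) rule.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def favourable_rates(decisions):
    totals, favourable = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + (1 if outcome else 0)
    return {g: favourable[g] / totals[g] for g in totals}

rates = favourable_rates(decisions)
best = max(rates.values())
for group, rate in rates.items():
    # Flag any group treated below 80% of the best-treated group's rate.
    if rate < 0.8 * best:
        print(f"{group}: rate {rate:.2f} vs best {best:.2f} -> possible disparate impact")
```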
There are advances in decentralised technology. For example, the COVID tracking apps in Europe guarantee privacy. In the world of technology there were two alternatives. One was total tracking of the population (GPS control that follows you even when you go to the bathroom) and the violation of fundamental human rights. The other was deciding not to use the pandemic as an excuse to control the public, but simply to inform people that they have been close to somebody who has tested positive for COVID. The first alternative takes advantage of the public in order to harvest their data, and it is the habitual dynamic of technology. However, "pro-privacy" technologies have started to sprout up that do not use a problem as an excuse to collect data, but instead treat technology as a tool for solving problems.
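As an illustration of the decentralised, "pro-privacy" design the European apps adopted, here is a minimal sketch loosely inspired by protocols such as DP-3T: phones broadcast rotating random tokens, remember the tokens they hear, and only ever compare them locally against tokens published by users who tested positive. Class and method names are illustrative, not any real app's API.

```python
import secrets

class Phone:
    def __init__(self):
        self.my_tokens = []        # random tokens this phone has broadcast
        self.heard_tokens = set()  # tokens heard from nearby phones

    def broadcast_token(self):
        # Broadcast a fresh random token; it reveals nothing about its owner
        # and is rotated frequently, so it cannot be used to track anyone.
        token = secrets.token_hex(16)
        self.my_tokens.append(token)
        return token

    def hear(self, token):
        # Store tokens from nearby devices locally; nothing leaves the phone.
        self.heard_tokens.add(token)

    def check_exposure(self, published_positive_tokens):
        # The server only publishes tokens of users who tested positive.
        # Matching happens on the phone, so no one learns who met whom.
        return bool(self.heard_tokens & set(published_positive_tokens))

# Usage: alice and bob meet; bob later tests positive and uploads his tokens.
alice, bob, carol = Phone(), Phone(), Phone()
alice.hear(bob.broadcast_token())
bob.hear(alice.broadcast_token())

published = bob.my_tokens  # bob consents to publish after a positive test
print(alice.check_exposure(published))  # True: alice is notified locally
print(carol.check_exposure(published))  # False: carol never met bob
```

The server never learns who met whom or where; the only thing that leaves a phone, and only with consent after a positive test, is a list of meaningless random tokens.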