Image credit: Laura G. De Rivera

Author: Cristina J. Orgaz
Role: BBC News Mundo, Hay Festival Arequipa
It's evening and you decide to go out for dinner. Your partner may not know what you want to eat, but artificial intelligence does: it caught you watching a taco video this afternoon and is now convinced you can't stop thinking about tacos.
"If we don't decide, others will decide for us," writes Spanish journalist and author Laura G. de Rivera in her book *Slaves of the Algorithm: A Resistance Manual in the Age of Artificial Intelligence* (title translated freely), the result of years of research.
"We live our lives immersed in thoughts, desires, and emotions that are imposed on us from outside, because we humans are quite predictable. Just by applying statistics to our past actions, it's as if someone were reading our minds," she continues.
Algorithms' accuracy in predicting our needs and wants is so high that Michal Kosinski, a psychologist and professor at Stanford University in the US, has demonstrated in experiments that a well-trained algorithm with enough digital data can predict what you want and like better than your own mother can.
The idea that artificial intelligence could predict people's interests very accurately sounds good in principle. But it comes at a price, de Rivera says: "We lose our freedom, we lose our ability to be ourselves, we lose our imagination."
"When we upload photos, we are working for Instagram for free. Thanks to that, the social network exists and earns millions. We need to recognize the benefits of these platforms and take advantage of them without ignoring the risks," she said.
BBC News Mundo (the BBC's Spanish-language service) spoke to de Rivera at the Hay Festival, held in the Peruvian city of Arequipa from November 6 to 9. The event brings together 130 participants from 15 countries.
Image credit: Penguin Random House
What is the solution to not becoming a slave to algorithms?
In my opinion, the solution is very simple, within everyone's reach, free, and has no environmental impact. It is simply thinking. In other words, using your head. It is a human ability we have lost through disuse.
We pick up our phones and get distracted by screens at every moment, whether we are at work or with other people. We no longer think in the doctor's waiting room or when we are bored at home.
These spaces that were once used for thinking are now completely occupied by constant distractions. Through our smartphones, we are bombarded with stimuli that prevent us from reflecting.
There are other things you can do, but for me this is the most basic and easiest. Only critical thinking can protect individual freedom in the face of algorithmic control and the will of others.
It is almost impossible not to hand over data when signing up to a platform. It's even harder to read all the fine print of a service, or to refuse cookies every time we visit a website. Have we become lazy?
We are a little lazy, a little like puppets, and we are also short on information.
Many people don’t realize that when they spend hours on TikTok, they’re working for the platform for free. They provide all their online behavioral data to the platform, and this data has economic value.
That is why education is fundamental: explaining how the business models of these large platforms work.
How can Google be one of the richest companies in the world if it doesn’t charge us for its services? It’s really important to reflect on this to help people understand how valuable all the information we provide about ourselves is.
Image credit: Getty Images
What are the dangers of artificial intelligence?
In reality, the real danger is human stupidity, because artificial intelligence by itself doesn't do anything. It consists only of zeros and ones.
The problem is that we are so lazy that we think it would be even better if things were done for us. This leaves us in a position where we are more easily manipulated.
We live in a state of general paralysis of the will. We have become complacent with digitizing our health systems, mass surveillance, and educating our children online. We accept injustice, abuse, and ignorance as inevitable facts, but we do not rebel against them out of sheer laziness.
What are the potential consequences of relying entirely on the automatic predictions of algorithmic systems?
The stakes are high when delegating important decisions, even potentially life-or-death decisions. Especially since research shows that humans tend to believe that if a computer tells us something, it must be true, even if we think differently.
So who are you going to let decide for you? Your mother, your teacher, your boss, or artificial intelligence?
This is a very old problem for humanity. I really like the books of Erich Fromm, a psychoanalyst, sociologist, and member of the Frankfurt School. His book *The Fear of Freedom* deals with exactly this.
Fromm argues that humans prefer to be told what to do because they fear making decisions for themselves. We are afraid of deciding and like to be ordered around like robots. And Fromm was already saying this in the first half of the 20th century.
Image credit: Getty Images
Is there a way to avoid disclosing data online?
Of course. There are ways to hand over no data, or only the bare minimum necessary. But the most important thing is to understand how the platforms work. That is the only way to take action, even if it just makes life a little harder for the people who profit from your data and your life. You can adopt small habits, such as refusing cookies when you enter a website.
What else can I do?
We can also talk about the need for regulations to protect us and the evolution of ethics on the part of companies using artificial intelligence.
Are you referring to the Edward Snowden scandal, which exposed the mass surveillance systems used by American intelligence agencies?
Yes. For me, Snowden is one of the heroes of this century, but there are others. His case is simply the best known.
There's also Sophie Zhang, a data scientist at Facebook. She was fired after raising awareness within the company about the systematic use of fake accounts and bots by governments and political parties to manipulate public opinion and incite hatred.
Zhang noticed that in many parts of the world, such as Latin America, Asia, and even some parts of Europe, there are politicians who use fake accounts, non-existent followers, and constant likes and shares to fool the public into believing they have public support and acceptance, which is not true.
Image credit: Getty Images
When Zhang reported the problem to her bosses, she was surprised to find that no one did anything to fix it.
For example, it took Facebook a year to remove a network of fake followers of then Honduran President Juan Orlando Hernández, who was convicted in New York federal district court on charges of conspiracy to import cocaine into the United States and possession of a machine gun.
In her book, she also cites the case of computer scientist Timnit Gebru, co-lead of Google's AI ethics team, who was also fired.
Yes. She accused algorithms of promoting racism and sexism, and warned that large language models pose risks: people believe they are human and can be manipulated by them. Despite a letter of protest signed by more than 1,400 employees, Gebru was ultimately fired.
Another whistleblower is Guillaume Chaslot, a former YouTube employee who discovered that recommendation algorithms systematically steered users toward sensationalist, conspiratorial, and polarizing content.
What hope do we have left?
I am convinced that, no matter how hard they try, software programs cannot muster even the slightest bit of creativity to invent new options, options that are not based on statistics drawn from past data.
Nor can they offer solutions based on empathy, on putting ourselves in someone else's shoes, on seeking our own happiness in the happiness of others, or on a sense of solidarity.
These three qualities are, by definition, uniquely human.