A recent example: Air Canada was ordered to compensate a passenger after its chatbot gave him false information about a fare. The company tried to argue that the bot was an "independent entity," but the tribunal didn't buy it: if a bot is posted on the company's official site, the company is responsible for what it says. The episode raises a broader question: enthusiasm for artificial intelligence often collides with reality once it is released to the public.
The distance between promise and practice is becoming increasingly noticeable. According to research from the Massachusetts Institute of Technology (MIT), 95% of generative artificial intelligence projects in companies have no measurable impact on financial performance. In other words, enthusiasm grows faster than results.
The causes are known, yet they keep repeating. First, a disconnect from everyday reality. Many companies deploy systems that look impressive in internal demos but fail in the real world. The Air Canada case sums it up well: chatbots perform efficiently in presentations but get confused by real questions.
AI risks: 4 important questions for responsible use
Second, data bias. Amazon discontinued its automated hiring system after it was found to discriminate against women. The model learned from historical hiring data and reproduced its biases. Rather than correcting human error, the technology amplified it.
Third, the illusion of precision. CNET's digital newsroom published dozens of articles written with AI, and more than half had to be corrected for errors. AI doesn't lie maliciously; it simply doesn't understand what it's saying.
Fourth, failed predictions. Real estate platform Zillow shut down its home buying and selling business after losing millions of dollars. Its pricing model, which was supposed to predict market values, proved unable to read actual trends in U.S. housing.
Finally, a lack of institutional preparation. Companies often deploy systems without monitoring, remediation, or shutdown mechanisms. When something fails, there is no protocol. The problem isn't the artificial intelligence; it's the vacuum around it.
The scenario for the coming years combines regulation, realism, and a certain technological maturity. The EU's AI Act will be fully implemented between this year and 2027. It requires companies to document their models, assess risks, and maintain traceability of the data they use. It will be a reality check: compliance takes time and money, but it reduces reputational and legal damage.
At the same time, experts predict a shift in focus. AI will no longer be sold as a magic wand; instead, it will be applied to discrete tasks such as summarizing text, categorizing emails, and improving customer service. Grand speeches will give way to small, concrete achievements.
And enthusiasm may stagnate. After the first years of hype and costly failures, the market is starting to demand proof. Consulting firm EY estimates that large companies have already incurred more than $4.4 billion in operating losses and penalties due to AI project setbacks.
AI is here to stay, but we need to be careful about how we live with it. First, ask for proof, not promises. When someone claims that AI will improve something, ask by how much, how it will be measured, and over what period of time.
Second, always verify. If a chatbot provides sensitive information, such as pricing, policies, or health advice, check it against official sources.
Third, don't hand over data without thinking. Personal documents, medical records, and financial information should not be shared with tools whose use and storage practices are not transparent.
Fourth, accept the margin of error. Systems learn, but they also make mistakes. They therefore need human oversight, not human replacement.
Rather than creating new problems, artificial intelligence accelerates existing ones. If your organization is in disarray, AI will add to that disarray. With clear purpose, transparency, and control, you can turn information into value.
The famous "95% failure" figure is not a final verdict. It is a reminder that the most advanced technology still relies on the oldest human faculty: judgment.