
AI Conundrums

Anup Ayadi | Kathmandu

(Photo Courtesy: Forbes)

When the San Francisco-based OpenAI rolled out ChatGPT in late November 2022, hardly anyone in the company expected it to blow up. It was meant, instead, to be a sort of prologue to the more powerful GPT-4 the company had been working on. The company was in for a surprise, however, when the release shook the world in the ensuing weeks, reigniting public and media attention on AI. By January, ChatGPT had reached 100 million monthly active users. Soon, Microsoft's rival Google was scrambling to release its own AI chatbot, Bard, so as not to fall behind in the "AI race".

Within weeks, AI was the talk of the town. Suddenly, talk of artificial general intelligence taking over humankind abounded. AI startups proliferated, while entities like the US, the EU and China found themselves in a regulatory race, trying to work out how to control this rapidly evolving technology. Prima facie, the development of machine learning and data-driven artificial intelligence may appear purely beneficial. Certainly, it has proven useful in many applications, such as disease diagnosis, recommendation systems and natural language processing. Yet, beneath the surface, this novel technology and its uses also present several problems for society. This article highlights two of these quandaries and suggests why we should be more discerning and vigilant about the use and spread of AI.

AI biases and overreliance
Machine learning and deep learning algorithms are often called "black boxes". Instead of being told explicitly what to do by a programmer, these algorithms learn by crunching data and refining their internal parameters, which makes them vulnerable to inadvertent bias and error.

Take Tay, for example. A Twitter AI chatbot released by Microsoft in 2016, Tay was designed to mimic the casual, humorous language of a teenager by interacting with human users on Twitter. Within hours of its launch, the bot began posting inflammatory and offensive tweets, and it had to be shut down in less than 24 hours. Much as IBM's Watson started using derogatory language after reading the Urban Dictionary website, Tay learnt from the data it had been fed. Clearly, the developers did not program Tay to be offensive; it learnt that by itself.

Sometimes these errors take the form of bias, which can be even more subtle and serious. A 2019 study published in Science found that a major health care risk-prediction algorithm used in many US hospitals discriminated against black patients. The system used health care costs as a proxy for medical need, but in the data used to develop the algorithm, black patients had spent substantially less on health care than white patients. The algorithm consequently perpetuated this very bias, often classifying black patients as less sick than equally sick white patients.
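To see the mechanism at work, consider a minimal, synthetic sketch (in Python, with entirely made-up numbers; this is not the study's actual model or data) of how training on cost as a proxy for need can turn a spending disparity into an apparent difference in sickness:

```python
# Toy simulation of proxy-label bias: the model is trained to predict
# health care *cost*, which systematically understates the *need* of one
# group. All numbers are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)    # 0 = group A, 1 = group B
need = rng.normal(50, 10, n)     # true sickness: identical for both groups

# Group B spends ~20% less for the same level of need (e.g. due to
# unequal access to care), so cost is a biased proxy for need.
spend_factor = np.where(group == 1, 0.8, 1.0)
cost = need * spend_factor + rng.normal(0, 2, n)

# Features a real system might see: noisy health measurements plus prior
# utilisation, which itself carries the spending disparity.
prior_cost = need * spend_factor + rng.normal(0, 2, n)
X = np.column_stack([need + rng.normal(0, 5, n), prior_cost])

model = LinearRegression().fit(X, cost)  # trained on the proxy, never on need
risk = model.predict(X)

# Among patients flagged as "high risk", group B members are sicker on
# average: they had to be sicker to spend as much as group A patients.
flagged = risk >= np.quantile(risk, 0.9)
print("True need of flagged group A:", need[flagged & (group == 0)].mean())
print("True need of flagged group B:", need[flagged & (group == 1)].mean())
```

In this toy setup both groups are equally sick by construction, yet a patient in the lower-spending group must be considerably sicker to receive the same risk score, mirroring the disparity the study reported.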

As more and more decision-making responsibility shifts to AI, such accidental errors can become fatal, especially in critical settings like health care and the military. Even in the recommendation systems of social media platforms, algorithms could, albeit inadvertently, favor certain types of content while unjustly suppressing or even censoring others. Hence, it becomes ever more important to keep "humans in the loop": AI is error prone, and it is a knotty problem to determine whom to hold accountable for an algorithm's seemingly untraceable decision.

In connection with the aforementioned errors, it is also important to treat AI as a supporting tool rather than a decision-making one. This matters because AI systems can project a false sense of infallibility that leads people to trust them, sometimes over their own judgment.

This overreliance can take various forms, such as automation bias, where users overestimate the capabilities of automated systems like AI, or confirmation bias, where users favor information provided by the AI when it aligns with their presumptions. Whereas AI support can complement human decision making, overdependence on AI can dull our intellectual curiosity and undermine human autonomy. In schools, students using chatbots may trust the bots' opinions instead of forming their own, assuming expertise from the bots' extensive training data. At present, however, even state-of-the-art AI systems can only shuffle, regurgitate and sometimes confabulate information. Human minds, on the other hand, have far more potential.

Misinformation, disinformation and privacy
Deepfakes are highly realistic fake videos, audio clips, images and even text that leverage machine learning and deep learning techniques to create digitally manipulated media that can easily deceive. As AI systems improve, deepfakes can exacerbate the quagmire of unchecked misinformation in today's connected world.

Slovakia's 2023 parliamentary election, for example, could be a foreshadowing of the grave risks politically oriented deepfakes pose for democracy. In the days preceding the highly competitive election, digitally synthesized audio recordings made to damage the reputation of Michal Šimečka, leader of the liberal Progressive Slovakia party, spread rapidly on social media. Experts later concluded that the audio had been digitally manipulated, but it is difficult to quantify the extent to which it affected the result. It is even scarier to think how deepfakes could alter the dynamics of elections in the future.

AI-generated fake content can further be used maliciously to damage individuals' reputations, impersonate and misrepresent politicians or famous personalities, and execute social engineering scams. For instance, a fake video of Ukrainian President Volodymyr Zelenskyy urging Ukrainians to surrender to Russia circulated in 2022, in the midst of the war. Although the low-quality video was quickly debunked and likely had little impact, it raises concerns about future mishaps as AI blurs the line between credible and fraudulent content more than ever. It is not implausible to imagine a future of rampant misinformation with no way to distinguish the authentic from the fake. Spreading awareness about deepfake technology is therefore imperative.

Along with systems that create fake content, we must be wary of AI systems that can suppress legitimate content and amplify content designed to indoctrinate. AI could empower authoritarian governments to become digital leviathans, building repression systems that censor news, videos and information critical of the state, undermining internet freedom and the right to free expression. These concerns are not wild speculation. As far back as 2017, speaking to students during a national "open lesson", Russian President Vladimir Putin suggested that whoever leads in AI will become the "ruler of the world".

These are just two of the many novel challenges the development of AI raises. It would be wrong, however, to end this article without admitting that, although it does not explore them, machine learning has brought myriad benefits to our society; AI systems have indeed been a boon to a variety of sectors. Cherry-picking the negative aspects of AI to hypothesize a doom-and-gloom takeover scenario might make for a catchy headline, but it does not change the fact that AI holds both positive potential and risks. It is necessary to acknowledge the good AI has brought to our society, helping humans in everything from making new medicines to exploring outer space.

Still, as mentioned at the outset, it is also necessary to apply proper caution as companies, governments and individuals race to implement and integrate AI systems into various facets of our society. Just as "to err is human", machines are not infallible. More than that, they are tools that can be used for both benevolence and malevolence. It is therefore essential to set guardrails, spread AI literacy and develop consensus on how to use these tools as we go along developing and integrating them. Even if it is unlikely that AI systems will develop consciousness and go astray in the near future, in malicious hands these very tools could become a Frankenstein's monster.

(Anup is a student currently pursuing A-levels at Budhanilkantha School)

(Nepalkhabar encourages students to send in their articles on any issues of their interest. The article should be around 500 to 700 words in English and sent via [email protected]. We will select, edit and duly publish them in our blog section.)

 


