Morality and AI – The Real Struggle!

23 Feb 2022
7 min read


AI is not just another computer-based technology that changes how we work. Like any tool, it is neither good nor bad at its core; it mirrors the character of the people behind it. AI is being used for both good and bad, and here we discuss the dilemma surrounding the morality of AI. #TWN

Morality is a term we humans use a lot these days. A moral sense, thought to be unique to the human race, can be defined as a set of values by which we judge actions as good or bad, and the urge to make such judgments is what we call morality. Yet morality has flaws that we humans have never resolved, and somehow things keep going south. So how can we expect to fix those flaws in our AI machines and tools? This article is devoted to the dilemma that ethics creates in the field of AI.

AI And Morality

When we talk about AI, we forget that it is just a tool: whatever it is taught becomes the driving force behind its real-world applications. Building a layer of morality into AI has become a pressing problem (a pertinent one, I might say) as it spreads into more sectors. Companies use AI to recommend the next product we should buy, and AI has also been deployed in risk-sensitive areas. This expanding usage has prompted demands for ethical constraints on machine learning used in safety-critical applications. Morality and AI is a big issue that needs to be addressed. Our reliance on AI for decisions has grown enormously; in 1976, no machine or piece of software made decisions for us, and no bots decided whether you should get a loan.

Have you heard of the famous Trolley Dilemma? The philosopher Philippa Foot posed it almost six decades ago, and to this day we humans have not solved it. In the dilemma, you hold a lever as a runaway tram approaches, and you must decide which track to send it down. One track has five innocent strangers; the other has one person you love. Whom do you save? If we humans cannot settle on an answer, how can we expect machines to understand and solve it?
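To see why this is so hard to automate, consider a deliberately naive sketch. Any program that resolves the dilemma must encode an explicit moral weighting of lives; the function and weights below are our own illustrative assumptions, not an established method.

```python
# A deliberately naive illustration, not a real method: any program that
# resolves the dilemma has to hard-code moral weights. The weights here
# are arbitrary assumptions, which is exactly the problem.

def choose_track(strangers_on_a: int, loved_ones_on_b: int,
                 weight_stranger: float = 1.0,
                 weight_loved_one: float = 5.0) -> str:
    """Divert the tram to whichever track minimizes weighted harm."""
    harm_a = strangers_on_a * weight_stranger    # harm if tram takes track A
    harm_b = loved_ones_on_b * weight_loved_one  # harm if tram takes track B
    # With the default weights, 5 * 1.0 equals 1 * 5.0: the harms tie,
    # and the "decision" falls to a tie-breaking rule that nobody chose
    # on moral grounds.
    return "A" if harm_a < harm_b else "B"

print(choose_track(strangers_on_a=5, loved_ones_on_b=1))  # prints "B"
```

Change the weights and the decision flips; the machine does not resolve the dilemma, it merely executes whatever moral stance we smuggled into its parameters.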

The CTO and Co-Founder of Simbo.AI, Praveen Prakash, says that AI will mature with time and affect more and more of our lives.

Today, artificial intelligence is among the most feared technologies, and machine learning is viewed with similar suspicion. You might ask about the difference between AI and machine learning. In short, machine learning is a subset of AI, and for our purposes they share the same struggle: both work exclusively on the data we feed their algorithms, and both are distrusted by the general public. When that data is biased, the results we get are biased. Bias is the first problem AI struggles with when it comes to morality, and it needs to be addressed. Recent studies found facial recognition software to be racist and sexist: it performed far worse on women and on people of other races, because the training data contained far more white people and more males than females. How can we expect a machine to be unbiased when the data itself is biased?
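As a rough illustration of how such bias can be surfaced, here is a minimal sketch that compares a classifier's accuracy across demographic groups. The records and group names are made up for illustration; real audits use held-out test sets with demographic labels.

```python
# Minimal sketch: measuring per-group accuracy disparity in predictions.
# The data below is hypothetical; the pattern (one group scored far less
# accurately than another) is what the facial-recognition audits reported.
from collections import defaultdict

# (true_label, predicted_label, group) triples -- illustrative audit data
records = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 1, "group_a"), (0, 1, "group_a"),
    (1, 0, "group_b"), (0, 0, "group_b"), (1, 0, "group_b"), (0, 1, "group_b"),
]

correct = defaultdict(int)
total = defaultdict(int)
for true, pred, group in records:
    total[group] += 1
    correct[group] += int(true == pred)

for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.2f}")
# group_a: accuracy 0.75
# group_b: accuracy 0.25  <- the kind of gap biased training data produces
```

A gap this large between groups is a red flag that the model learned the skew in its training data rather than the task itself.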

Artificial intelligence and machine learning are data-driven and data-heavy applications, which matters greatly when we talk about morality, and more specifically about bias. To address this, Vineeth N Balasubramanian suggests that instead of relying solely on data, we should also put knowledge into AI and ML models, using a mix of both. For example, a model may predict a heart attack based only on certain patterns in its data, while a cardiologist draws on domain knowledge developed over years of experience; at a given moment, the model may say it is not a heart attack when the cardiologist would say it is. By infusing that domain knowledge, we can make a model predict more accurately, and thus make it more morally sound. This matters because AI is now making decisions in life-and-death situations.
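One simple way to read this suggestion is as a hybrid decision layer in which encoded clinical rules can override a purely statistical score. The sketch below is our own illustration of that idea, with placeholder features, thresholds, and rules; it is not Balasubramanian's actual method, and certainly not medical guidance.

```python
# Sketch of a data + knowledge hybrid: a statistical model's score is
# combined with a hand-coded clinical rule. Features, thresholds, and
# the rule itself are illustrative placeholders.

def model_risk_score(features: dict) -> float:
    """Stand-in for a trained ML model's heart-attack probability."""
    # Pretend the model learned mostly from chest-pain patterns.
    return 0.9 if features.get("chest_pain") else 0.1

def domain_rule(features: dict) -> bool:
    """Encoded expert knowledge: some presentations are atypical, e.g.
    a worrying biomarker without chest pain can still signal high risk."""
    return features.get("biomarker_elevated", False)

def hybrid_decision(features: dict, threshold: float = 0.5) -> bool:
    """Flag high risk if either the data-driven score or the knowledge
    rule fires -- the rule can catch cases the data under-represents."""
    return model_risk_score(features) >= threshold or domain_rule(features)

patient = {"chest_pain": False, "biomarker_elevated": True}
print(model_risk_score(patient))  # 0.1 -- the data-only model says "no"
print(hybrid_decision(patient))   # True -- domain knowledge overrides it
```

The design point is that the rule encodes what the cardiologist knows and the data may not show, so the combined system fails less often on cases the training data under-represents.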

Ethical AI is a dilemma we are still working through, and making AI morally sound poses challenges that are far from easy. Our main aim is to build AI and ML models people can trust; otherwise, AI will keep struggling with morality and find little acceptance among the general public. AI and ML models already bridge gaps across several sectors, yet we can hardly trust them enough to give them power over decisions of life and death. That is how much AI is struggling with its morality.

According to Vineeth N Balasubramanian, AI and ML have been used widely across many sectors over the last two years amid the pandemic, yet we still fail to trust them with matters of life and death. It is up to organizations to design and deploy AI models that are unbiased and trustworthy; these are the only factors that can save the morality of AI. Otherwise, AI will struggle with morality just as we humans do. The conversation about responsible AI and morality is still in its early phases, and nobody yet knows how to move forward. Some predict that by 2029 AI will no longer be general AI but a super AI that outsmarts human beings; if AI still struggles with morality by then, bringing it under control will be nearly impossible. In the movie I, Robot, the AI made its own decisions that were in no way moral, and chaos ensued. If we cannot make our AI models moral, things could go south and get out of hand fast. A sense of morality in machines is the only way to ensure our AI models run smoothly in the real world without getting caught up in endless debate.

Conclusion

Morality is unique to humans, and it is our job to introduce it to our machines. No one wants a machine making the wrong decisions for them. To safeguard ourselves from such situations, we have to instill a sense of righteous decision-making in our AI and ML models. We have to end the struggle between AI and morality.
