
The Ethical Dilemma of Autonomous AI: A Call for Legislation and Accountability in a World Governed by Machines

AI Ethics and Governance
05/16/2023
By Nathan Garza

As the world grapples with the rapid rise of artificial intelligence, an open letter calling for a pause on giant AI experiments has stirred the tech world. The signatories, many of whom are central figures in the AI field, advocate a six-month halt in the development of AI systems more powerful than GPT-4. The irony, however, is palpable. Some see the letter as little more than a 'CYA' move for its participants, particularly those who continue to forge ahead with AI advancements even after putting their names to the document. When the inevitable unfolds, they can conveniently claim they issued a warning.

To be clear, I don't fault the participants. Given the opportunity, many of us working in the field would likely have done the same: signed the letter and continued with our projects. The only reason I personally declined to sign is that the letter calls for 'regulating organizations' access to computational power', a stance I don't support. I'll delve into my reasons in a future post.

The truth is, we need to slam the brakes on bringing autonomous decision-making AI platforms to market until we understand the implications more deeply and can put accountability and auditing legislation in place. To illustrate why, let's walk through a thought experiment that often haunts me.

Imagine an AI-driven car heading northbound on an icy bridge along a steep, snowy mountain cliff with near-perpendicular bends. Around the corner, two children lie across the road, occupying both lanes. Inside the car are three passengers and no human driver. With no median, shoulder, or guardrails, just a sheer 3,700 ft drop, the AI rapidly calculates that the available braking distance is insufficient to prevent a collision. The odds of inflicting fatal injuries on the children are pegged at 100%. However, the AI identifies two evasive actions that could save the two young lives:

  1. Veer right and drive the car, along with its three occupants, off the cliff. The survival rate for all three passengers is estimated at 10%.
  2. Veer left and drive the car, along with its three occupants, into the mountainside. The survival rate for all three passengers is estimated at 15%.

The question now is: what decision does the AI make? Does it prioritize pedestrian lives at all costs? Does it prioritize passenger lives at all costs? Or does it prioritize saving the maximum number of lives?
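To make the trade-off concrete, here is a minimal sketch in Python comparing the outcomes. The survival figures come straight from the thought experiment; everything else (the 'Option' structure, the assumption that braking into the collision is survivable for the occupants, and the expected-survivors arithmetic) is my own illustrative invention, not how any real driving stack works.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p_passenger: float  # survival probability for each of the 3 passengers
    p_child: float      # survival probability for each of the 2 children

# The courses of action from the thought experiment (illustrative numbers).
# "Brake only" assumes, for the sake of the sketch, that the occupants
# survive the collision that the children do not.
options = [
    Option("brake only",           p_passenger=1.00, p_child=0.00),
    Option("veer right (cliff)",   p_passenger=0.10, p_child=1.00),
    Option("veer left (mountain)", p_passenger=0.15, p_child=1.00),
]

def expected_survivors(o: Option) -> float:
    # Three passengers and two children: expected number of lives saved.
    return 3 * o.p_passenger + 2 * o.p_child

for o in options:
    print(f"{o.name}: {expected_survivors(o):.2f} expected survivors")

print("'maximize lives' picks:", max(options, key=expected_survivors).name)
```

Run the numbers and something uncomfortable falls out: under these figures, a pure 'maximize expected survivors' policy brakes and hits the children (3.00 expected survivors versus 2.45 for the mountainside). That is precisely why the choice of objective can never be left implicit.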

Some food for thought: would you buy an AI car knowing that your family's lives might not be the priority? Or would you buy one knowing that your family's lives were prioritized to the point that the vehicle could decide to take someone else's life?

Where does the responsibility for the AI's actions lie? With the manufacturer? The owner?

There was a time when I found solace in the first of Asimov's Three Laws of Robotics: 'A robot may not injure a human being or, through inaction, allow a human being to come to harm.' But given that the autonomous decision-making AI driving the car in our scenario must make a choice, can we genuinely assert that AI can be programmed to be incapable of taking a human life?

Even if the above scenario seems far-fetched, the harsh reality is that more than 40,000 people die each year in the U.S. in vehicular accidents. It's not a question of if, but when, an AI will face the critical decision of whom to sacrifice.

Unlike a human, who might instinctively jerk the wheel to avoid an impact, an AI has the processing speed to logically analyze the situation and act according to its programmed parameters. In essence, every decision it makes is deliberate. That is a chilling thought when extended to other fields like healthcare, where some studies claim medical errors cause more than 250,000 deaths every year in the U.S. alone.

But let's not forget: AI has the potential to save MORE lives than it will ever take. I firmly believe that we will witness a decrease in deaths related to vehicular accidents and medical errors.

However, the notion that our creations could one day be compelled to take a life weighs heavily on us as engineers and scientists who build AI. This burden feels distinct from that borne by today's non-AI engineers, who may design mission-critical systems like pacemakers where a bug could inadvertently cause a fatality. Any life taken by AI, on the other hand, would be a direct outcome of the priorities and weights explicitly defined by the engineer. Should the AI prioritize the occupants or the pedestrians? To use darker terminology, any life claimed would be a feature of the program, not a system failure.

Unlike the pacemaker scenario, where the bug behind a failure can be patched, the AI system would have been acting exactly as designed; there is no patch to make. Countless product managers and lawyers would have signed off on the priorities fed into the AI, one hopes with a full understanding of the potential consequences.
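To see what such a sign-off might actually cover, imagine the artifact under review. The following is a purely hypothetical sketch (no manufacturer publishes anything like this, and every name and constant here is invented), but some encoding of these weights has to exist in any system that makes the choice described above.

```python
# Hypothetical, for illustration only: the kind of explicit value judgment
# that would have to live somewhere in an autonomous vehicle's codebase.
# Every constant below is a policy decision, not an engineering detail.
COLLISION_PRIORITY_WEIGHTS = {
    "occupant_life":   1.0,   # weight per expected occupant survivor
    "pedestrian_life": 1.0,   # weight per expected pedestrian survivor
    "occupant_injury": 0.2,   # discount for non-fatal occupant harm
    "property_damage": 0.01,  # near-negligible next to any life
}

def score(action: dict, weights: dict = COLLISION_PRIORITY_WEIGHTS) -> float:
    """Rank an evasive action by its weighted predicted outcomes.

    'action' is assumed to carry expected-outcome estimates. Changing any
    weight above changes who the car protects: the engineer-defined
    'feature, not failure' described in the text.
    """
    return (weights["occupant_life"] * action["expected_occupant_survivors"]
            + weights["pedestrian_life"] * action["expected_pedestrian_survivors"]
            - weights["occupant_injury"] * action["expected_occupant_injuries"]
            - weights["property_damage"] * action["expected_damage_usd"] / 1e6)
```

A reviewer signing off on a file like this is not approving code quality; they are approving a moral ranking.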

These sobering considerations are precisely why those of us who grasp the full potential of AI advocate for a pause.

And yet, on the other side of the coin, the awe-inspiring potential for AI to save lives and improve the human condition is what drives us to continue our work, despite these ethical quandaries.

However, decisions about life priorities should not rest solely with AI engineers, major automakers, or tech conglomerates. They should be determined by society at large and codified into law. We also need strict control and monitoring mechanisms. We need a governmental regulator with whom companies can share their AI source code for auditing purposes.

What do I mean by strict control and monitoring? Consider a scenario where a country imports an AI-embedded product, like a car or medical device, from a country with differing life-priority legislation, or worse, no regulations at all. The host country will need a governmental regulator responsible for scrutinizing the AI source code for its life-priority parameters.
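Concretely, such an audit could be as simple as checking a manufacturer's declared parameters against legislated bounds. The sketch below is hypothetical from end to end (the parameter names, the bounds, and the very idea of a machine-readable declaration are all my assumptions), but it shows what 'scrutinizing the AI source code for its life-priority parameters' could mean in practice.

```python
# Hypothetical regulator-side conformance check for an imported AI product.
# Lawmakers, not engineers, would set these ranges.
LEGISLATED_BOUNDS = {
    "pedestrian_life_weight": (1.0, 1.0),  # must equal the occupant weight
    "occupant_life_weight":   (1.0, 1.0),
    "liability_cost_weight":  (0.0, 0.0),  # liability may not enter the objective
}

def audit(declared: dict) -> list[str]:
    """Return a list of violations; an empty list means the product conforms."""
    violations = []
    for name, (low, high) in LEGISLATED_BOUNDS.items():
        value = declared.get(name)
        if value is None:
            violations.append(f"{name}: not declared")
        elif not low <= value <= high:
            violations.append(f"{name}={value} outside legislated range [{low}, {high}]")
    return violations

print(audit({"pedestrian_life_weight": 1.0,
             "occupant_life_weight": 1.0,
             "liability_cost_weight": 0.3}))
# -> ['liability_cost_weight=0.3 outside legislated range [0.0, 0.0]']
```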

Moreover, we might consider adopting strategies from gun-control advocates, who push to make firearm manufacturers liable for crimes committed with their products. In our free-market society, holding the companies that develop AI accountable for harms caused by their systems may be the most effective way to encourage conscientious action.

The real concern is not a Skynet or Terminator future (unless you're speaking with a singularity proponent); rather, it's the ordinary AI being developed today that should give us pause. For example, last year saw airlines pushing for single-pilot cockpits, placing passengers' lives in the hands of an AI pilot if the sole human operator is incapacitated.

Let's consider a new thought experiment. An airplane fuel leak forces the AI pilot to make an immediate landing, but the plane is currently flying over a densely populated city nestled within a mountain range.

Should the AI pilot attempt a risky landing on a mountain cliff to minimize potential casualties in the city below, or try to land on a crowded street, hoping most people manage to get out of the way in time?

Many of us trust a human pilot to make the best possible decision in such a scenario, but can we extend the same trust to a machine-learning engineer and a project-management team with no cockpit experience? Have they trained the AI on enough scenarios to handle any given situation in the most humane way? Or was it simply programmed to "minimize loss of life at all costs to minimize financial liability for the airline"?

As you can probably discern by now, a phrase like "prioritize all life" cannot simply be fed to an AI system without understanding the possible ramifications.
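One way to see the problem: the same vague instruction admits multiple defensible readings, and the readings can disagree. In the sketch below, every casualty estimate is invented for illustration; the point is only that "minimize loss of life" selects one landing site when everyone counts equally and a different one when the objective is quietly filtered through the airline's liability.

```python
# Invented casualty estimates for the two landing options.
scenarios = {
    "mountain cliff": {"passenger_deaths": 120, "ground_deaths": 0},
    "crowded street": {"passenger_deaths": 20,  "ground_deaths": 130},
}

def total_deaths(outcome: dict) -> int:
    # Reading 1: "minimize loss of life" counts everyone equally.
    return outcome["passenger_deaths"] + outcome["ground_deaths"]

def liability_deaths(outcome: dict) -> int:
    # Reading 2: the same phrase filtered through the airline's liability,
    # which attaches mostly to its own passengers.
    return outcome["passenger_deaths"]

for objective in (total_deaths, liability_deaths):
    choice = min(scenarios, key=lambda site: objective(scenarios[site]))
    print(f"{objective.__name__}: land on the {choice}")
# total_deaths     -> mountain cliff (120 deaths vs. 150)
# liability_deaths -> crowded street (20 passenger deaths vs. 120)
```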

Given the frenetic pace of AI innovation as of 2023, we need to address these ethical dilemmas immediately.

To my readers, I encourage your participation in this conversation. If you identify any fallacy in my argument, I would love to hear from you. I look forward to the day when these worries no longer keep me awake at night.

If you concur, I urge you to play your part in bringing this crucial dialogue to the forefront of discussions within your organizations and amongst your peers. Let's ensure that the rapid advancements in AI technology don't leave our ethical compass behind. We all have a stake in the future that artificial intelligence is rapidly shaping. Whether you are a tech expert or a concerned citizen, your voice matters.

The questions of AI ethics are not abstract problems for a distant future. They are immediate, pressing, and they concern us all. They touch on the very essence of what it means to be human and how we interact with the world. How we address these challenges will shape our societies, our laws, and our individual lives. The power of AI holds great promise for humanity, but we must navigate this frontier with wisdom and foresight.

It is easy to be blinded by the potential of AI and forget that these systems are a product of human design and therefore inherit our own biases and limitations. We must always remember that behind each line of code, there are people making choices - about what to prioritize, what to value, and ultimately, who to protect.

This call to pause is not about stifling innovation or halting progress. It's about ensuring that as we build our future, we do so on a foundation of respect for human life, dignity, and freedom. This is a call for thoughtful legislation, rigorous accountability, and a collective determination to prioritize ethics alongside advancement.

Artificial intelligence has the power to reshape our world. Let's make sure it's a world we want to live in.

