The Moral Dilemma of AI

By bedigital on May 12th, 2023

As you probably already know by now, AI has the potential to unlock a whole world of possibilities. However, as we push the boundaries of AI innovation, we must also confront the moral dilemmas that come with its use. As AI users, we have a responsibility to weigh its benefits against its risks. Let’s take a deep dive into the underbelly of AI to find out what we can do to make our use of this revolutionary bit of tech as safe as can be.

Battling bias

We all have a duty to ensure that our AI systems are unbiased. You might be thinking, “how can a machine (which doesn’t have a brain) have the capacity to think in a biased way?”, which, to be fair, is understandable. However, it’s important to remember that AI learns directly from the information we give it. So, by ensuring our data is diverse, representative, and free from bias, we can make sure our AI is too. Unfortunately, though, this isn’t always as easy as it seems.

  • Risk assessment algorithms – AI is used to power risk assessment algorithms that predict the likelihood of certain scenarios based on the data they are given. The problem with this approach is that predictions are based solely on the past, so any biases already present in the historical data are likely to be replicated in the predictions.
  • Business recruitment algorithms – By automating parts of the hiring process, such as resume screening, identifying qualified candidates, and selecting “best matches”, AI can be an incredibly useful recruitment tool. However, for these algorithms to operate fairly, it is crucial to eliminate bias. Let’s say a company’s training data consists of profiles of its current staff. In that case, the algorithm will tend to select candidates who most resemble existing employees, even if the data includes no explicit mention of race, gender, or sexual orientation. Proxy features can still encode those characteristics, invisibly to recruiters, and end up working against diversity and intersectionality (see the sketch after this list).
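
To make that proxy problem concrete, here’s a minimal, hypothetical sketch in Python (using scikit-learn) of a screening model trained only on profiles of current staff. The feature names (years_experience, attended_university_X), the data, and the candidates are all invented for illustration and don’t describe any real recruitment system.

```python
# Toy illustration of proxy bias: no protected attribute appears in the data,
# yet the model still learns to favour candidates who resemble past hires.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [years_experience, attended_university_X]
# Historical hires were mostly drawn from University X, so "hired" correlates
# with that proxy feature even though nothing explicit about race, gender,
# or background is recorded.
X_train = np.array([
    [5, 1], [3, 1], [6, 1], [4, 1], [6, 0],  # past hires
    [5, 0], [2, 0], [4, 0],                  # past non-hires
])
y_train = np.array([1, 1, 1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# Two new candidates with identical experience; only the proxy differs.
candidates = np.array([[4, 1], [4, 0]])
print(model.predict_proba(candidates)[:, 1])
# The University X candidate scores higher despite equal experience:
# the bias lives in the historical data, not in any explicit rule.
```

The point of the sketch is simply that the unfairness is inherited from the training data; nothing in the code mentions a protected characteristic, yet the outcome still skews towards whoever the company already employs.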

Taking accountability

As a result of these concerns, many experts are calling for increased transparency in all areas of the development and deployment of AI. It’s up to us, both as individuals and as a collective, to ensure that we use AI responsibly and account for all the data we expose it to. As machines become more autonomous and make decisions without human intervention, it sometimes becomes difficult to assign responsibility for their actions; with transparency, this becomes clearer. If a self-driving car causes an accident, who is to blame? Is it the car manufacturer, the software developer, or the owner? Admittedly, this is a ‘chicken-and-egg’ type of rhetorical question, but having visible, data-driven facts will definitely help the quest for an answer.

While AI has the potential to create new jobs and industries and increase the value of certain skills, it also has the potential to displace workers and exacerbate income inequality (and no one needs a new type of pay-gap). We need to ensure that the benefits of AI are shared equitably and that we invest in reskilling our workforce, not replacing them.

We should also consider how our AI innovation requires institutional changes to policy and systems to accommodate such fast-changing technology. If we, as a society, are going to embrace AI with open arms, we need to ensure that teachers are prepared for students to submit essays written with help from a ChatGPT ghostwriter, and that the courts of law have some sort of handbook on who’s to blame for driverless car accidents (is it me or Elon Musk?).

By being open to scrutiny, we’re ultimately being open to safety. If we’re to be AI allies after all, we humans need to put in the groundwork to ensure that we can use artificial intelligence in a sensible and safe way.

Written by

bedigital