By Dr Mohd Zafri Baharuddin

New Straits Times, 26 May 2023

SCIENTIFIC journal Science Advances recently published an article by MIT researchers titled "AI models fail to reproduce human judgments about rule violations".

At first glance, it appears to show that AI systems make harsher judgments than humans because AI cannot grasp the context of a situation.

The article suggests that if a model is not trained with the right data, it is likely to arrive at a different and harsher analysis than the average human.

Remember, AI feeds on nothing but data. This data-driven approach in AI involves training algorithms on vast amounts of digital information to learn patterns and make intelligent decisions.

However, the data that AI consumes can be biased, because the data is created by humans. AI will only be as good as the data that is fed to it.

If the data annotators (the humans who prepare the data) are biased or lean toward certain biases in their input, the resulting AI model will display the same biases.

Bias in AI refers to the systematic errors or unfairness that can occur when AI systems reflect the skews in the data used for training.

Since AI algorithms learn from historical data, they may inadvertently inherit societal biases related to race, gender or other sensitive attributes, leading to discriminatory outcomes or reinforcing those biases when making decisions.

Hence, it is the responsibility of AI engineers to ensure their models are balanced.

This can be achieved by rigorous auditing of the dataset, engaging a diverse and inclusive team of co-auditors, employing ethical guidelines for data input, and performing continuous evaluation to ensure the credibility and integrity of the dataset.

Geoffrey Hinton, a Turing Award laureate, quit his role at Google to highlight the importance of ethics in AI.

It is important for AI researchers and practitioners to acknowledge that data and labels in AI are naturally biased because they rely on already-biased human judgments.

After all, the data AIs are trained on has flaws, and those flaws are only human.