OpenAI has awarded professors at Duke University in North Carolina a $1 million grant, spread over three years, to research and develop AI that can make moral judgments.
The project, titled “Research AI Morality,” is led by ethics professors Walter Sinnott-Armstrong and Jana Schaich Borg. The pair have already published research arguing that AI could serve as a “moral GPS” that helps people make better, morally sound decisions, and they have built an algorithm designed to help doctors decide which of their patients should receive a kidney transplant first.
Developing an AI model that can understand and make moral judgments is a challenge many have attempted.
In 2021, for example, the Allen Institute for AI created “Ask Delphi,” an AI chatbot built to give ethical answers to questions.
Although Ask Delphi could judge simple, black-and-white moral questions (it knew cheating on exams was wrong, for instance), it didn't take much for it to start delivering unethical, biased, and inappropriate answers. That is because it was trained on data scraped from the web and does not understand reasoning or emotion: it bases its answers on patterns it finds in that data.

Moreover, morals and ethics are highly subjective and personal to each individual, so building an AI model that can make consistently sound moral judgments will be a significant challenge for the professors.