Google’s ‘Generative AI Prohibited Use’ policy previously stated that its AI couldn’t be used to make decisions that could affect an individual's rights and well-being in high-stakes environments such as healthcare, employment, housing, insurance, and social welfare. Google has now updated the policy: customers may use its AI systems to make decisions in these areas, as long as a human supervises them.
Google's position is that human supervision of these decisions will mitigate bias, as AI systems are notorious for picking up bias from historical training data and producing discriminatory outputs as a result. It also keeps humans accountable for the final decision, rather than leaving the AI as the sole decision-maker.
Google's approach is more flexible than that of many of its rivals. OpenAI prohibits the use of its AI in all high-stakes areas, while Anthropic allows AI to be used in these areas but mandates that it be supervised by industry-qualified people and that its use in these situations be disclosed.
Although Google’s new, relaxed policy aims to unlock the benefits of AI without compromising ethical standards on bias and discrimination, regulators are bound to remain concerned. In the EU, under the newly passed AI Act, AI systems in high-risk domains must be registered, pass quality and risk assessments, and operate under human oversight. And in states such as New York and Colorado, laws have been passed requiring AI developers to disclose the capabilities and limitations of high-risk AI systems.