In the rush to adopt Generative AI applications, many users are unaware of their responsibilities regarding the ethical use of AI. AI technology is new and exciting, and it has the potential to benefit both businesses and humanity as a whole, but it also gives rise to many unique ethical challenges.
In one of the better-known of these cases, Amazon used an AI hiring tool that discriminated against women. The software was designed to screen candidates' resumes and select those who were most qualified for the position. However, because the AI had been trained on a biased data set composed primarily of men's resumes, it was far less likely to select female candidates. Amazon eventually stopped using the tool.
In another example, a widely used algorithm for assessing health care needs was systematically rating Black patients' need for care as lower than white patients'. That was problematic because hospitals and insurance companies were using this risk assessment to decide which patients would get access to a special high-risk care management program. In this case, the problem arose because the AI model used health care costs as a proxy for health care need, without accounting for disparities in how white and Black populations access health care: because less money has historically been spent on Black patients' care, the model underestimated how much care they actually needed.
But discrimination isn’t the only potential problem with AI systems. In one of the earliest examples of problematic AI, Microsoft released a Twitter chatbot called Tay that began sending racist tweets in less than 24 hours.
And a host of other, less widely publicized stories have raised concerns about AI projects that seemed transphobic, that violated individuals' privacy, or that, in the case of autonomous vehicles and weapons research, put human lives at risk.
EU AI Act
The European Union is leading the way in AI legislation, with the European Commission having proposed the Artificial Intelligence Act in 2021. This regulation would be the first comprehensive legal framework for AI governance anywhere in the world. The EU AI Act is expected to be adopted in early 2024, ahead of the June 2024 European Parliament elections.
It makes sense to follow these guidelines now, as they are likely to be adopted as de facto global standards.
Despite the many news stories highlighting concerns about AI ethics, most organizations haven't yet gotten the message that they need to be considering these issues. The NewVantage Partners 2022 Data and AI Leadership Executive Survey found that although 91 percent of organizations were investing in AI initiatives, fewer than half (44 percent) said they had well-established ethics policies and practices in place. In addition, only 22 percent said that industry had done enough to address data and AI ethics.