For small tech startups and quite a few larger companies, the objective is straightforward: leverage new advances in computing, especially artificial intelligence (AI), to disrupt industries from social networking to medical research.
For researchers at New York University’s AI Now Institute, it’s that disruption itself that’s under scrutiny. Many experts are working to ensure that, as corporations, entrepreneurs and governments roll out new AI applications, they do so in a way that’s ethically sound. AI now affects many parts of everyday life, from healthcare to criminal justice to education to hiring, and it’s doing so all at once. That raises serious questions about how people will be affected.
AI has plenty of success stories, with positive outcomes in fields from healthcare to education to urban planning. But there have also been unexpected pitfalls. AI software has been abused as part of disinformation campaigns, accused of perpetuating racial and socioeconomic biases, and criticized for overstepping privacy bounds.
To help ensure future AI is developed in humanity’s best interest, AI Now’s researchers have divided the challenges into four categories: rights and liberties; labor and automation; bias and inclusion; and safety and critical infrastructure. Rights and liberties pertains to the potential for AI to infringe on people’s civil liberties, as when facial recognition technology is deployed in public spaces. Labor and automation encompasses how workers are affected by automated management and hiring systems, as at Amazon. Bias and inclusion concerns the potential for AI systems to exacerbate historical discrimination against marginalized groups. Finally, safety and critical infrastructure looks at the risks of incorporating AI into important systems like the energy grid.
In recent years, AI has taken on an increasingly powerful role, sometimes with unintended consequences, and each of those issues is gaining more government attention. In late June, AI experts testified on the societal and ethical implications of AI before the House Committee on Science, Space, and Technology, and before the Senate Subcommittee on Communications, Technology, Innovation and the Internet. Tech workers are taking action as well. In 2018, some Google employees organized in opposition to Project Maven, a Pentagon contract to design AI image recognition software for military drones. Also that year, Marriott workers went on strike over, among other grievances, the introduction of AI systems that they feared would automate their jobs.
At Stanford University, the Institute for Human-Centered Artificial Intelligence has put ethical and societal implications at the core of its thinking on AI development, while the University of Michigan’s new Center for Ethics, Society, and Computing (ESC) focuses on addressing technology’s potential to replicate and exacerbate inequality and discrimination. Harvard’s Berkman Klein Center for Internet and Society concentrates in part on the challenges of ethics and governance in AI.
Researchers say they are blocked from investigating many systems by trade-secrecy protections and laws like the Computer Fraud and Abuse Act. As interpreted by the courts, that law criminalizes breaking a website or platform’s terms of service, an often necessary step for researchers trying to audit online AI systems for unfair biases.
Researchers in AI ethics tend to agree that more must be done to ensure AI works for our benefit, and that regulation would help. Researchers like those at AI Now are racing to answer urgent questions about how AI can be used ethically, without threatening people’s privacy or reinforcing systemic biases.
In recent years, the tech world has begun a deep transformation. Thousands of workers don’t want to be complicit in building things that do harm, benefit only a few and extract more and more from the many. That shift is significant, and its importance can’t be overstated.