Ethics Lessons from AI
Civic Science Fellow Christine Custis explores the ethical design, development, and deployment of emerging technology
With the explosion of artificial intelligence tools and capabilities in the last year, many find themselves wondering about the technology’s future, from its applications to its consequences. But researchers like Christine Custis have been thinking about it for years. As a Civic Science Fellow supported by The Kavli Foundation and the Rita Allen Foundation, she works at the Institute for Advanced Study (IAS) in Princeton, N.J., where she’s a member of the Science, Technology, and Social Values Lab under the leadership of Alondra Nelson.
“I'm focusing on work related to the ethical design, development, and deployment of technology. And it's specifically in the areas of artificial intelligence and quantum science,” Custis explains.
This work started for her when she was the Director of Programs and Research at the Partnership on AI, a nonprofit organization that encourages technology companies to create AI responsibly. “We wanted them to think about how, when it gets out into the world, it impacts society, and some of the ethical implications of those types of technologies,” says Custis, “specifically on marginalized, underrepresented communities of either users or impacted non-users.”
Society is playing catch-up with generative AI, but Custis has seen how grassroots entities, nonprofits, academic institutions, and civil society organizations are rallying to push toward regulation, monitoring, and community standards, especially to protect the most vulnerable. “We just keep doing good things every day and hope that there are measurable positive impacts.”
When it comes to ethical development within quantum science, Custis hopes to take a more proactive approach to downstream ethical considerations. “How can we take lessons learned from this generative AI rollercoaster and apply them proactively to quantum science and some of the different areas of quantum technology and information science?” asks Custis.
But there lies a challenge: being proactive also means convincing some researchers that there are ethical considerations to begin with. “With any type of deep, complex technology or mathematics or physics, I hear, ‘I don’t see how that relates to ethics or society.’ And I say, ‘Really? You don’t? You don’t see how people are being impacted?’”
“Dr. Christine Custis brings extraordinary experience across academia, civil society, and government to one of the great challenges of our time: how to govern new and emerging technologies for the public good,” said Nelson, the Harold F. Linder Professor of Social Science at IAS. “Getting this right will require new paradigms and partnerships in research, policy, and science communication. We are fortunate to have Christine as IAS’s first Civic Science Fellow and for this collaboration with the Rita Allen and Kavli foundations.”
All of this leads back to an underlying, unanswered question about science and ethics: where in the process does the responsibility to think about consequences begin?
With quantum computing, for instance, a future in which today’s encryption could be broken raises myriad privacy and security questions, adding to what is already a constant struggle. Who will lead the charge to think about the effects on various publics today, rather than much later?
These sorts of thorny issues are at the heart of what Custis is most proud to be a part of: the AI Policy and Governance Working Group. Led by Nelson, this vital group brings together individuals from a wide cross-section of society, including researchers, industry, policymakers, civil society, and the public, to discuss, debate, and develop ideas for how AI systems and tools can be developed and used responsibly, grounded in research and expertise, and to disseminate those ideas to key decision-makers.
“As we know from The Kavli Foundation’s crucial work on the intersections of science and society,” said Custis, “including publics in science is all part of this. It's not complete until we've had those types of conversations.”