Funding for AI and Other Discoveries

by Brooke Smith

Investing in engaging publics in scientific discoveries


I am frequently asked if The Kavli Foundation provides funding in AI. My answer is “not exactly, but yes.” Here’s why.

The Kavli Foundation’s funding is focused on basic research. As part of our Science and Society program, we think a lot about discoveries and breakthroughs, and about what happens next. Who bears responsibility for the broad ethical considerations of scientific discoveries? Once a discovery has been made, when is the optimal time to consider its implications, benefits, and risks? How can different publics be empowered to participate in these discussions?

Like most areas of scientific progress – including AI, but also gene editing, neuroscience, and even, historically, fission – public and policy attention gains momentum after discoveries carry through to application and/or technology deployment. The Kavli Foundation supported the Danish Board of Technology, an international leader in deliberative dialogue, to examine at what points publics have historically engaged in science and technology issues. Their report found that publics and communities became engaged at the point of application (or later), including when something became controversial. This is not surprising: times of application or controversy are also when something may be immediately relevant (whether a benefit or a threat), or may present urgency for a person or community.

Hindsight is 20/20. Looking back in history, it is easy to see that science, technology, communities, and regulation might have benefitted from thinking through ethical and societal implications, and engaging affected publics, more proactively than was done. AI is no different. In “Experts alone can’t handle AI – social scientists explain why the public needs a seat at the table,” Dietram Scheufele and his colleagues at the University of Wisconsin-Madison point to the need for society to be engaged in conversations about such discoveries, to discuss the promises and the pitfalls. They note that AI experts are uneasy about how unprepared societies are for moving forward with the technology. What if publics had been included along the way, decades ago? When should they have been included, and what would be different now?

Philanthropic organizations are rising to this moment of AI relevance and urgency, focusing on important areas to address the impact of AI. Two articles, “Foundations Seek to Advance AI for Good — and Also Protect the World From Its Threats” and “AI Is Suddenly Everywhere, but Philanthropy Has Been Involved for Years,” cover who is doing what and why. In summary, most philanthropy is being directed in one of two ways: supporting productive, equitable, and positive uses of AI, or mitigating its potential dangers. Both are necessary, and both are largely (necessarily) reactive.

What if there were also investment in mechanisms, structures, and incentives to move these deliberations and discussions earlier in the timeline? What impact would that have on the science, the technology, communities, public policies, and society writ large? These are questions The Kavli Foundation is asking. Given our focus on discovery, or basic science, long before application, our efforts center on engaging publics more proactively in the ethical and societal implications born from scientific discovery.

I say “not exactly, but yes” when asked whether we fund AI because that answer reflects our commitment to funding efforts that empower scientists making discoveries to collaborate with social scientists, philosophers, civic leaders, publics, and more – and for them, collectively, to consider and discuss potential applications and implications closer to the point of discovery rather than the point of application. We invested in the creation of two Kavli Centers for Ethics, Science, and the Public, one at UC Berkeley and one at Cambridge University, dedicated to this purpose. We’ve invested in Civic Science Fellows considering these issues at Johns Hopkins’ Berman Institute and the Institute for Advanced Studies. AI is part of these efforts, of course, but not exclusively, as they are focused on broader cultural shifts to think and act differently, and more proactively, about these issues.

The advent of AI dates back more than 50 years; we have had a long runway to discuss its evolution. (Read about the history and future of AI from Stuart Russell, a pioneer in AI and a leader in the Kavli Center for Ethics, Science, and the Public at UC Berkeley.) What would today’s world, and today’s funding landscape, look like if there had been investments 50 years ago in efforts to more intentionally and proactively consider ethical implications and engage interested and affected communities?