Resources
AI2050: Working List of Hard Problems in AI
Drawing on previous work in AI and numerous conversations with other experts, the initiative has developed an initial working list of hard problems for AI2050 to take on. The list is aimed at realizing AI's opportunities for society and addressing the risks and challenges that could result from it. [Some of these problems are listed on our site.] While the problems described in the working list are multidisciplinary, they generally target hard scientific and technical problems and societal challenges that represent both opportunities and risks, organized into relatively distinct categories. This working list makes no claim to being comprehensive, final, or fixed in time. We fully expect it to evolve as we learn more, as AI's capabilities progress, and as our use of AI continues to develop. We plan to update the list over time, revising current categories, adding subcategories, and potentially introducing new categories of hard problems, guided by the motivating question.
Stanford’s Institute for Human-Centered AI (HAI)
AI has the potential to affect every aspect of our lives and our civilization, from social bonds and ethics to the economy and healthcare, education and government. The faculty and staff of HAI are engaging not only with leading-edge scientists, but also with scholars trying to make sense of social movements, educators enhancing pedagogy, lawyers and legislators working to protect rights and improve institutions, and artists trying to bring a humanistic sensibility to the world in which we live. Together we’re helping build the future of AI.
Harvard’s Berkman Klein Center
The rapidly growing capabilities and increasing presence of AI-based systems in our lives raise pressing questions about the impact, governance, ethics, and accountability of these technologies around the world. How can we narrow the knowledge gap between AI ‘experts’ and the variety of people who use, interact with, and are impacted by these technologies? How do we harness the potential of AI systems while ensuring that they do not exacerbate existing inequalities and biases, or even create new ones?
Oxford’s Institute for Ethics in AI
Philosophers made a major contribution to the development of medical ethics 40 years ago, and we are now at a tipping point where a similar ethical intervention is needed to cope with the questions raised by the rise of AI. Every day brings more examples of the ethical challenges posed by AI, from face recognition to voter profiling, brain-machine interfaces to weaponized drones, and the ongoing discourse about how AI will affect employment on a global scale. This is urgent and important work that we intend to promote internationally, as well as embed in our own research and teaching here at Oxford.
Technology in the Public Interest
Central to this work is supporting research, policy development, and practice that aims to uphold public interest considerations in the development and governance of artificial intelligence (AI). [...]
AI is being deployed across sectors with too little oversight and accountability, including high-stakes areas such as healthcare, finance, law enforcement, and education. While often touted as neutral, a growing body of interdisciplinary and intersectional research demonstrates that AI systems can replicate and amplify existing biases in society that uphold racism, sexism, White supremacy, and other forms of structural oppression.
Moreover, AI-related technologies play a major role in determining what we read, see, watch, and listen to on digital platforms and search engines, but increasingly powerful technology companies use them to optimize clicks and views to maximize their profits.
Beneath the veneer of new and emerging technology is an old story about power and how it operates. Too often, the changes driven by AI and other technologies create and augment existing power asymmetries in society. Addressing these challenges requires supporting and expanding a collaborative and diverse ecosystem of people, organizations, and networks advancing a different vision for technology, one rooted in equity, justice, and other public interest considerations. Technology in the Public Interest grantmaking is a response to these dynamics.