112 Pages
    by CRC Press

    The role of artificial intelligence in war is widely recognized, but is there also a role for AI in fostering peace and preventing conflict? AI for Peace provides a new perspective on AI as a potential force for good in conflict-affected countries through its uses for early warning, combating hate speech, human rights investigations, and analyzing the effects of climate change on conflict.

    This book acts as an essential primer, introducing people working on peacebuilding and conflict prevention to the latest advancements in emerging AI technologies, and as a guide for ethical future practice. It also aims to inspire data scientists to engage in the peacebuilding and prevention fields and to better understand the challenges of applying data science in conflict-affected and fragile settings.

    Table of Contents

    Introduction
    1. AI and conflict prediction: our reach exceeding our grasp
    2. AI and hate speech: the role of algorithms in inciting violence and fighting against it
    3. AI, human rights, and peace: machines as enablers of rights work
    4. AI, climate, and conflict: the role of data science in climate change and sustaining peace
    5. AI, peace, and ethics: from principles to practice

    Biography

    Branka Panic is the Founding Director of AI for Peace, a think tank ensuring that AI benefits peace, security, and sustainable development. Panic is a Fellow at the NYU Center on International Cooperation, a Stimson-Microsoft Responsible AI Fellow, a member of UNESCO Women4Ethical AI, and a member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. She is a Senior Adviser on AI and Innovation to the German Federal Foreign Office and a Professor of Practice at the University of North Carolina. She holds an MA in International Development Policy from Duke University’s Sanford School of Public Policy and an MS in International Security from the University of Belgrade, Serbia.

    Dr Paige Arthur is the Director of Global Programming at Columbia Global (Columbia University, New York). Arthur was formerly a Senior Fellow and Deputy Director of the NYU Center on International Cooperation, where she led the Center’s work on conflict prevention and peacebuilding, including a three-year initiative on data-driven approaches to prevention. She is the author and editor of several books on transitional justice (Cambridge University Press) and decolonization (Verso Books), and she holds a PhD from the University of California, Berkeley.

    “In a world where the integration of AI is advancing rapidly, AI for Peace provides a timely and insightful exploration of the untapped potential of AI for promoting peace, countering hate speech, protecting human rights, and addressing climate change. Paige and Branka underscore the imperative of developing AI responsibly, highlighting the ethical considerations and challenges that must accompany this challenging, interdisciplinary, but fascinating journey. As AI evolves, so must our awareness and commitment to harnessing it for the greater good. This book serves as a compelling call to action for individuals, data scientists, policymakers, and communities alike to embrace AI's positive role in creating a more peaceful world. I urge all readers to delve into its pages and reflect on the profound implications AI holds for the future of peace.” - Dr. Eduard Fosch-Villaronga, PhD, LLM, MA, Associate Professor and Director of Research at the eLaw Center for Law and Digital Technologies at Leiden University (NL).

    “AI for Peace provides a nuanced exploration of the intricate relationship between artificial intelligence and global peace, making a compelling case for the responsible use of AI in conflict prediction, human rights, and climate action. Its rigorous analysis not only identifies the pitfalls and ethical considerations but also offers a roadmap for leveraging AI as a tool for sustaining peace and promoting social good.” - Dr. Roman V. Yampolskiy, Department of Computer Science and Engineering at the University of Louisville.

    “Artificial intelligence is transforming every aspect of our politics, economics, and social affairs. Much like nuclear technologies during the post-Second World War era, the emergence of AI has the potential to generate monumental benefits and catastrophic harms. Predictably, concerns are growing over the misuse and weaponization of generative and general AI - from the spread of disinformation and deepfakes to the deployment of drone swarms and chimeric viruses. Yet considerably less attention is devoted to the burgeoning field of “peace tech” and the ways in which machine learning, natural language processing, and spatial processing can prevent armed conflict and build more resilient societies. Paige Arthur and Branka Panic’s timely, compact, and highly readable volume - AI for Peace - fills this knowledge gap. Arthur and Panic have produced a fascinating overview that expertly navigates the intersections of data science, technology, and peacebuilding.

    At the center of AI for Peace is an appeal for ethical AI governance. AI frameworks should ensure that technologies are designed and deployed to maximize fairness, inclusivity, transparency, security, privacy, and accountability, as well as “peacefulness”. The authors also recommend that humanitarian, development, and peacebuilding organizations adopt an “ethics in crisis” approach that enables rapid and responsible AI deployment to mitigate harms and save lives. AI for Peace not only builds the theoretical scaffolding for an innovative new discipline but also offers a powerful roadmap to guide peacebuilding practice on the front lines. Arthur and Panic show that while AI can amplify hate speech, subvert privacy, and fight wars, it can also improve conflict forecasting, monitor ceasefires, counter disinformation, and wage peace.” - Robert Muggah, Founder of SecDev Group and Igarapé Institute.

    “The authors of this book wrote their introduction in the summer of 2023, and examples from the Russo-Ukrainian war are found throughout the book. As I write this review in autumn 2023, the Israeli–Palestinian conflict has once again flared into open warfare, with the Hamas atrocities leading to massive Israeli retaliation. Of course, as is often the case, the origins of both conflicts run far deeper: in Israel, including the partition of 1947, the Holocaust, early 20th-century Zionism, and the post-Roman diaspora; and in Ukraine, memories of Stalinism, Second World War Nazi-nationalist alliances, and Russification in the 18th century. When wounds can take generations to heal, is there any hope for peace?

    As a mathematician by training, I was struck by the authors’ analogy between predicting conflict and predicting cloud formation. Both are positive-feedback situations where each small action may result in ever-larger consequences, the ‘butterfly-wing’ effect, which inevitably leads to extremes and ultimately chaos. Most often, engineers do not try to predict or manage chaos, but instead to avoid it. It is thus encouraging that this book is focused on ‘peace’ not as the mere absence of violence, but as a positive state to seek.

    The chapters on hate speech, human rights, and climate get to the heart of some of the key factors that drive violent conflict. Indeed, it has been argued that the long civil war in Syria was triggered by a combination of deep religious and ethnic divisions, human-rights abuses, and the 2006–2010 drought. Similarly, behind the current Ukrainian conflict lie years of accumulating distrust between eastern- and western-leaning factions, and also, in part, a water war, as the impact of cutting off Crimea’s principal water supply in 2014 was intensified by climate-change-exacerbated drought in 2020.

    At its worst, AI can intensify the cycles of mutual antagonism, accelerating the chaotic descent into violence. However, this book paints an alternative picture, in which AI can be a force to de-escalate the language of division, to help those promoting fair societies, to bring information that can help us deal with environmental conflicts, and ultimately, as we are invited to consider in the final chapter, to ensure that a fundamental ethical principle of all AI is “sustaining peace”.” - Alan Dix, Professorial Fellow, Cardiff Metropolitan University, and Director of the Computational Foundry, Swansea University.