How will AI-related techniques impact international security and stability, and what needs to be done to avoid unintended consequences?
Tech4GS set out in 2017 to understand the potential risks posed by novel AI techniques to international security. We were convinced then, as we are now, that there needs to be a much broader policy - and public - discussion about the role of AI-related techniques in military decision making, including but not confined to the ongoing debate over the role of autonomous weapons. The number of publications on AI and warfare has grown rapidly; well-resourced centers have been established; national strategies are proliferating; policymakers are increasingly well informed; more specialists are turning their expertise toward related issues; and the field of AI safety is expanding with greater resources and rigor.
We remain convinced, however, that much of the analysis available for critical decision making remains top-level, almost superficial. Much work remains to be done. An array of further research, both theoretical and practical, is needed to begin to understand what may be one of the greatest challenges to international stability in the coming decades - and what needs to be done today in anticipation of those changes. The closest parallel, in our minds, may well be the massive effort undertaken by the RAND Corporation and others at the dawn of the nuclear era: in that light, we likely find ourselves today as they did in the late 1940s and '50s - at the very beginning of understanding the implications of the technologies we are building - except now everything is moving much, much more rapidly. Tech4GS's work in the AI domain going forward will focus on the following initiatives:
- AI and human decision making, specifically the vulnerabilities and risks created by an increasing reliance on AI-fueled systems of systems
- The unintended consequences of integrating AI into command and control systems
- AI safety concerns and the implications for international security, and
- The impact of AI-related techniques on strategic stability and deterrence calculations
We kicked off our investigatory process in late February 2017 with an event hosted by Cooley LLC at their HQ in Palo Alto, featuring a panel discussion with Paul Saffo, John Markoff, and Randy Sabett. The panel set out to define the state of the art: How much of the noise surrounding AI-related technologies is just hyperbole? What can we really expect to see change in the coming years? What are the real risks?
Building on that foundation, a second workshop was hosted by Andreessen Horowitz in May 2017, focused more intently on establishing a baseline understanding of the broad potential societal implications of AI-related technologies - placing the international security piece in a much wider context.
Our next set of workshops took place in June 2018 in Silicon Valley, as part of a joint effort with the Center for Global Security Research at Lawrence Livermore National Labs, described in reports released in early 2019. We investigated how policymakers should anticipate AI-related technologies impacting international security - from planning and budgeting to understanding how these technologies will shape the information that informs national security decision making.