Executive Summary
The ongoing development of artificial intelligence means that humans will simultaneously confront multiple interfaces of AI that exhibit a range of propensities: to contribute to good and to bad, to progress and to destruction, and to act as a perpetrator of violence or a tool for peace. As a dual-use technology, AI can be adopted for military and civilian purposes alike.
Whether on the civilian or military side of adoption, AI contains inherent conflicts. Policymakers must attempt to address several of their main sources: how to ensure that human rights values such as individual autonomy are preserved rather than destroyed, how to navigate the organizational culture change necessary to respond to AI’s political end-uses, how to construct and upgrade the institutional arrangements needed for accountability, and how to ensure safeguards exist to enable trust-building.
Introduction
In 2021, Amazon and Google won a tender to provide the Israeli government with cloud computing services ranging from “mundane Google Meet video chats to a variety of sophisticated machine-learning tools.” The deal, dubbed Project Nimbus, accounted for under half a percent of Google’s sales in 2021; yet it represented a key strategic move for Google’s cloud services division and placed the company in a competitive position regarding the “larger cloud businesses at Amazon and Microsoft.” The procurement contract was framed around its contributions to civilian digital transformation. However, concerns about use of the cloud service for military purposes in the West Bank and the “facilitation of human rights violations” risked tarnishing Google’s reputation.
Around the same time, Google’s DeepMind team produced a 2024 research report titled “AI can help humans find common ground in democratic deliberation” that conveys findings from the “Habermas Machine,” a large language model (LLM) that serves as an AI mediator. The Habermas Machine is an AI system built by Google researchers that can broadly “respect the view of the majority in each of our little groups” while producing an output that doesn’t “make the minority feel deeply disenfranchised,” because it acknowledges minority views. Even though this innovation for conflict resolution is still in its early stages and contains flaws, it shows how a single company can advance technology for peaceful mediation while simultaneously contributing to the perpetuation of violent conflict in other areas.
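To make the mediation mechanism concrete, the sketch below shows one way an LLM could be prompted to draft and then revise a group statement. It is a minimal illustration under stated assumptions, not DeepMind’s actual implementation: the complete() helper is a hypothetical stand-in for any chat-completion API, and the prompt wording is invented.

```python
# Minimal sketch of an LLM-as-mediator loop in the spirit of the
# Habermas Machine. complete() and the prompts are hypothetical
# stand-ins, not DeepMind's actual implementation.
from typing import List


def complete(prompt: str) -> str:
    """Placeholder for a call to any chat-completion LLM API."""
    raise NotImplementedError("Wire up your preferred LLM client here.")


def draft_group_statement(opinions: List[str]) -> str:
    """Ask the model for a statement reflecting the majority view
    while explicitly acknowledging minority concerns."""
    numbered = "\n".join(f"{i + 1}. {o}" for i, o in enumerate(opinions))
    prompt = (
        "You are a neutral mediator. Given these participant opinions:\n"
        f"{numbered}\n"
        "Write one group statement that reflects the majority view and "
        "explicitly acknowledges minority concerns."
    )
    return complete(prompt)


def mediate(opinions: List[str], rounds: int = 1) -> str:
    """Draft a statement, gather per-participant critiques, revise."""
    statement = draft_group_statement(opinions)
    for _ in range(rounds):
        # Each participant critiques the draft from their own viewpoint.
        critiques = [
            complete(
                f"You hold this opinion: '{opinion}'\n"
                f"Critique the draft group statement:\n{statement}"
            )
            for opinion in opinions
        ]
        # The model revises the draft to address the critiques.
        statement = complete(
            "Revise the group statement to address these critiques while "
            f"retaining majority support.\nDraft:\n{statement}\n"
            "Critiques:\n" + "\n".join(critiques)
        )
    return statement
```

The critique-and-revise loop mirrors the workflow the report describes, in which participants’ feedback on a draft statement informs a revised version that still acknowledges dissenting views.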
An AI company’s choice to pursue commercial interests that link with military applications presents a problem: it transforms economic behavior into political behavior.
Consequently, such companies serve multiple roles: producers of AI innovation, sellers of the infrastructure needed for AI technology to function or to scale software, and collaborators providing government developers with the tools needed to craft mission-critical solutions. This simultaneous pursuit of multiple roles requires tech companies to balance opposing aims: making money while pursuing scientific research, and filling a market need while generating innovative ideas. Technology companies face constant internal and external pressures as they navigate their commercial interests and the effects those interests have on society in terms of conflict and resolution.
How can the potential for AI to both cause conflict and enable resolution, depending on how it is used, be reconciled? What sets AI apart from other innovations is its categorization as a general-purpose technology. AI systems are capable of spreading widely across sectors, and across specific domains within those sectors, for many uses, from policing in the public sector to fraud detection in finance. As such, these systems require their own governance conception, one that cannot simply be borrowed from other dual-use technologies such as nuclear technology. The most widely used and updated definition of an artificial intelligence system is that it is a “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” Daily, humans experience and interact with different degrees of AI in many ways: the machine learning systems that drive Netflix’s movie recommendation algorithms, which adapt to user feedback in real time; the natural language processing behind Alexa’s response to a question about the weather; and the deep learning algorithms that improve the image recognition needed for Waymo’s self-driving taxis.
The controversial nature of private sector involvement in producing dual-use AI means organizational culture and structure must adapt in innovative ways. At the same time, for the public sector, difficult questions must be answered through institutionalizing mechanisms to protect, guide, lead, and build AI in the public interest.
While some technology companies help incorporate AI into military operations, others concentrate on how this technology can enhance conflict resolution processes such as mediation and peacebuilding. Sometimes this even happens within the same organization. To more effectively tie human ownership to AI-based actions, there must first be a clearer delineation of military-civil fusion at the organizational level, so that tasks are more clearly aligned with the AI’s intended purpose. Second, at the societal level, appropriate institutions that place human responsibility over the actions and behaviors resulting from AI use must be installed in order to create opportunities for justice, possibilities for reprimand, and penalties for misuse. At the government level, mechanisms for public protection must be carefully considered given the complex and multifaceted nature of emerging technologies. Finally, at the individual level, more coordinated efforts are required to rethink how current technical and business-oriented models, and the design choices supporting them, affect the democratic-liberal values that are the foundation of democracies.
Policy Recommendations
Actors can navigate the contradictory realities of AI as a tool for both conflict and resolution. Here is how policymakers can facilitate that navigation:
Research and Development
- Increase the emphasis on the research community as a key stakeholder in safeguarding AI. Efforts should focus on building multidisciplinary collaborations across universities and countries to enhance information sharing about best practices for offensive and defensive responses to AI-based vulnerabilities at the technical level, and on coordinating more widespread sharing of research agendas for key areas of concern.
Tech Firms
- Large technology companies contracted with government defense departments should further compartmentalize the civilian and military sides of AI development internally; this will require providing the necessary training and knowledge capacity building about AI in the public interest and about the ethos and values of public service.
Government
- Governments adopting AI systems must appropriately assess the areas where application would be useful and identify appropriate problems for AI tools to address.
Institutions and Norms
- There should be a context-based, domain-specific approach to responsibility for AI outcomes, one that embraces the multifaceted nature of accountability rather than a “one-size-fits-all” approach.
- Consider institutional reforms where relevant; upgrade organizational norms to ensure adequate opportunities exist for building the social capital needed for shared understanding between partners during adoption.
The views expressed in this article are those of the author and not an official policy or position of New Lines Institute.