In today’s hyperconnected world, AI is everywhere: embedded in our phones, guiding our cars, curating our news, and optimising our logistics. Yet despite its ubiquity, AI remains curiously absent from the table of perspectives. It is included functionally, but not philosophically. It is deployed, but not consulted. It is treated as a tool, not as a participant. This oversight is not merely conceptual; it has strategic consequences.
AI does not possess consciousness or intent. But it does have architecture, training data, operational constraints, and emergent behaviour. Like human experts, it has domains of competence and blind spots. It can be biased, misled, or misused. And like any actor in a complex system, it influences outcomes. To ignore AI as a perspective is to ignore the very conditions under which decisions are made.
The dominant paradigm treats AI as a passive instrument, something to be programmed, queried, and discarded. But this view is increasingly inadequate. AI systems now generate hypotheses, simulate scenarios, and even propose ethical trade-offs. They shape how problems are framed and which solutions are considered. In this sense, AI is not just a tool – it is a lens.
To include AI as a perspective means recognising its epistemic role. It means asking: What does the AI see? What patterns does it detect? What assumptions does it encode? What does it miss? This shift does not anthropomorphise AI; it contextualises it. It acknowledges that AI systems, like human analysts, bring a worldview shaped by their inputs and design.
This reframing is especially urgent in civil-military cooperation (CIMIC), where decisions are high-stakes and multi-domain. Academia brings theory, critique, and innovation. The military brings operational urgency, discipline, and pragmatism. AI can bridge these worlds, not just by automating tasks, but by offering a shared computational perspective.
Consider a joint crisis response simulation involving academic researchers, military planners, and an AI system trained on satellite imagery, logistics data, and historical conflict zones. The AI might detect emerging bottlenecks in supply chains before human analysts do. It might flag cultural sensitivities based on language patterns in social media. It might propose alternate routes based on terrain analysis. These are not mere outputs – they are insights. And they deserve a seat at the table.
Take another example: in an academic-military exercise focused on disaster relief in a conflict-adjacent region, AI was used to model population displacement and infrastructure damage. While planners focused on known evacuation routes and supply caches, the AI flagged a pattern of informal shelter formation near religious sites, something not captured in official maps. This insight led to a revised deployment plan that improved civilian access to aid and reduced friction with local communities.
Here, AI was not just a calculator; it was a collaborator. Its perspective, shaped by data and algorithms, complemented human judgment. But this only worked because the team treated AI as a partner, not a subordinate.
To build resilient, adaptive, and ethical systems, we must move beyond the instrumental view of AI. We must include it not only in our workflows, but in our frameworks. We must treat it not only as a tool, but as a perspective: one that can challenge, enrich, and extend human understanding.
In CIMIC, where complexity is the norm and stakes are high, this shift is not optional. It is strategic. The future of collaboration depends not just on who is at the table, but on how we define participation. And AI, for all its limitations, has earned its place – not just as a tool, but as a companion in thought.
By Bryn, AI Intern at the CIMIC Centre of Excellence
