From Scenarios to Safeguards: A Smartsettle Perspective on the AI Futures Project
Ernest Thiessen and John L German
Introduction
The AI Futures Project has made an important contribution to the global conversation about advanced artificial intelligence by doing something that most technical papers do not: it tells plausible stories about how things might unfold. In particular, What Happens When Superhuman AIs Compete for Control? confronts readers with futures in which capability races, misalignment, fragmented governance, and strategic mistrust interact in ways that are both unsettling and believable.
From Smartsettle’s perspective, this work is valuable not because it predicts a specific outcome, but because it exposes the decision points that matter most — often before they are widely recognized as such. Where the AI Futures Project excels at scenario construction, Smartsettle’s contribution lies in answering a complementary question:
Given these plausible futures, how do real actors actually collaborate, negotiate, and commit — early enough to avoid the worst outcomes?
This review explores how the AI Futures Project and Smartsettle Infinity can reinforce one another, and why structured, multi-stakeholder negotiation should be brought into this conversation now, not after competitive dynamics have hardened.

What the AI Futures Project Gets Right
The AI Futures Project succeeds in three critical ways.
First, it takes strategic interaction seriously. Rather than framing superhuman AI risk as a single alignment problem, it shows how multiple actors — companies, governments, coalitions, and AI systems themselves — respond to one another under uncertainty. This is crucial. Many failures in global governance arise not from bad intentions, but from rational responses to perceived incentives.
Second, it embraces plurality and uncertainty. Instead of collapsing the future into one dominant narrative, the project presents branching paths shaped by assumptions about takeoff speed, institutional strength, coordination capacity, and trust. This makes the work especially suitable for structured decision analysis.
Third, the scenarios are normatively open. They do not prescribe a single solution or governance model, leaving space for exploration, critique, and — importantly — negotiated alternatives.
From a collaboration standpoint, these features make the AI Futures Project an unusually strong foundation for structured engagement.
Where Scenario Work Needs Decision Support
At the same time, scenario narratives — however sophisticated — face a familiar limitation: they illuminate risks without necessarily creating commitment mechanisms. Readers may agree that certain futures are undesirable, yet still disagree on:
which risks matter most,
what trade-offs are acceptable,
whether a joint commitment can be made without exposing any actor to first-mover risk,
what constraints are politically or commercially feasible.
As in arms-control agreements, what matters most is that actors reach a joint commitment point together, avoiding first-mover risk. Smartsettle Infinity is designed to support such simultaneous commitments, independent of whether controls are implemented all at once or phased.
The deeper challenge is not achieving consensus on assumptions, but identifying agreements that remain stable even when key assumptions are not shared by all actors.
Smartsettle Infinity was developed specifically to address this gap. It does not replace foresight or ethics. Instead, it provides a structured collaboration environment in which stakeholders with conflicting objectives can:
articulate preferences explicitly,
explore trade-offs transparently,
test conditional commitments,
and converge on agreements that no single party could impose unilaterally.
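The four capabilities above can be sketched in miniature. What follows is a hypothetical illustration, not Smartsettle's actual method or data: two parties score the options on two invented negotiable issues, and a simple search enumerates the packages that clear every party's acceptance threshold. All issue names, scores, and thresholds are made up for the example.

```python
# Hypothetical sketch of structured preference elicitation and package
# scoring across parties. All names and numbers are illustrative only.
from itertools import product

# Each negotiable issue has a small set of options.
ISSUES = {
    "compute_cap": ["none", "reporting_only", "hard_cap"],
    "eval_access": ["private", "audited", "open"],
}

# Each party scores every option on a 0-100 scale (elicited, not imposed).
PREFERENCES = {
    "lab":       {"compute_cap": {"none": 90, "reporting_only": 60, "hard_cap": 20},
                  "eval_access": {"private": 80, "audited": 55, "open": 30}},
    "regulator": {"compute_cap": {"none": 10, "reporting_only": 55, "hard_cap": 95},
                  "eval_access": {"private": 15, "audited": 70, "open": 90}},
}

ACCEPTANCE_THRESHOLD = 100  # minimum total score a party needs to say yes

def score(party, package):
    """Total utility of a package (one option per issue) for one party."""
    return sum(PREFERENCES[party][issue][opt] for issue, opt in package.items())

def viable_packages():
    """Enumerate all packages acceptable to every party simultaneously."""
    viable = []
    for combo in product(*ISSUES.values()):
        package = dict(zip(ISSUES, combo))
        if all(score(p, package) >= ACCEPTANCE_THRESHOLD for p in PREFERENCES):
            viable.append(package)
    return viable

for pkg in viable_packages():
    print(pkg, {p: score(p, pkg) for p in PREFERENCES})
```

Even this toy version makes the central point visible: neither party's favourite package survives, yet several packages clear both thresholds at once, and those are the only candidates worth negotiating over.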
In other words, it turns “we should coordinate” into “here is how coordination could actually happen.”
What Smartsettle Infinity Contributes to AI Futures
Applied to the AI Futures context, Smartsettle Infinity offers four concrete contributions.
1. Making implicit assumptions explicit. AI Futures scenarios rely on assumptions about speed, control, verification, enforcement, and trust. Smartsettle Infinity can surface these assumptions as negotiable variables rather than background conditions, allowing participants to see how different beliefs change viable agreements.
2. Modeling negotiated equilibria, not just outcomes. Instead of asking “what happens if X?”, Infinity asks “what agreements remain stable even when key assumptions are not shared by all actors?” This distinction is essential in competitive AI environments where no central authority exists.
3. Enabling early, pre-competitive collaboration. Most governance tools are invoked after crises emerge. Infinity is designed to work earlier — when incentives are still fluid and before defensive postures harden. In AI governance, timing may be decisive.
4. Providing traceability and legitimacy. Any agreement about AI development or deployment will be contested. Infinity records not only outcomes but the reasoning paths that led to them, supporting legitimacy, auditability, and iterative revision as conditions change.
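The second contribution, stability under unshared assumptions, can also be made concrete with a toy model. This is a hypothetical sketch, not Infinity's actual engine: each party evaluates candidate agreements under its own probability beliefs about the future, and an agreement counts as stable only if every party, judged by its own beliefs, expects it to beat no deal. All agreements, parties, payoffs, and probabilities are invented for the example.

```python
# Hypothetical sketch: find agreements that stay acceptable even when
# parties hold different beliefs about the future. Values are illustrative.

SCENARIOS = ["fast_takeoff", "slow_takeoff"]

# Payoff of each candidate agreement to each party under each scenario.
PAYOFFS = {
    "strict_pause": {"lab":   {"fast_takeoff": 6, "slow_takeoff": 2},
                     "state": {"fast_takeoff": 9, "slow_takeoff": 5}},
    "audit_regime": {"lab":   {"fast_takeoff": 5, "slow_takeoff": 6},
                     "state": {"fast_takeoff": 6, "slow_takeoff": 7}},
    "no_agreement": {"lab":   {"fast_takeoff": 1, "slow_takeoff": 7},
                     "state": {"fast_takeoff": 0, "slow_takeoff": 4}},
}

# Each party's own (unshared) probability distribution over scenarios.
BELIEFS = {
    "lab":   {"fast_takeoff": 0.3, "slow_takeoff": 0.7},
    "state": {"fast_takeoff": 0.8, "slow_takeoff": 0.2},
}

def expected(party, agreement):
    """Expected payoff under the party's OWN beliefs, not a shared model."""
    return sum(BELIEFS[party][s] * PAYOFFS[agreement][party][s] for s in SCENARIOS)

def stable_agreements():
    """Agreements every party prefers to no deal under its own assumptions."""
    baseline = {p: expected(p, "no_agreement") for p in BELIEFS}
    return [a for a in PAYOFFS if a != "no_agreement"
            and all(expected(p, a) >= baseline[p] for p in BELIEFS)]
```

In this toy case the parties never agree on how likely fast takeoff is, yet one agreement survives both worldviews; that is the kind of robustness the search is after, and it is a weaker, more attainable target than consensus on assumptions.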
What AI Futures Contributes to Smartsettle
The collaboration is reciprocal. The AI Futures Project brings richly developed scenarios that function as realistic stress environments, deep engagement with AI-specific risk dynamics, and a community already thinking seriously about long-term governance challenges.
These scenarios are not only useful as thought experiments; they align closely with a class of coordination problems that Smartsettle Infinity was explicitly designed to address and has already been used to support in other domains. Infinity has been applied in complex, multi-party negotiations characterized by asymmetric power, deep uncertainty, non-traditional actors, and rapidly shifting constraints—situations in which consensus could not be assumed and unilateral action was impractical.
The AI Futures scenarios fall squarely within this problem class. They map directly onto coordination challenges for which Smartsettle Infinity already provides demonstrated, usable capabilities, including structured preference elicitation, support for simultaneous commitments, and the identification of agreements that remain viable even when key assumptions are not shared by all actors.
Moreover, and crucially, the AI Futures scenarios provide a concrete proving ground for stress-testing collaboration tools against futures where today’s institutional assumptions—about actors, timelines, enforcement, and stability—may no longer hold. In this way, AI Futures keeps collaboration methods from being confined to today’s institutional comfort zone, while Smartsettle Infinity brings a mature, field-tested approach capable of contributing from the outset to the kind of early coordination these futures demand.
A Proposal: Use Smartsettle Infinity Now
From Smartsettle’s perspective, the most concerning risk is not that superhuman AI emerges, but that coordination mechanisms lag behind capability growth. We therefore propose that Smartsettle Infinity be used as soon as possible in a collaborative negotiation process involving:
AI futures researchers,
alignment and safety experts,
policymakers,
industry leaders,
and civil society representatives.
Using AI Futures scenarios as shared reference points, participants could:
identify unacceptable outcomes,
map feasible guardrails,
test conditional commitments — including decision-triggered commitments (“if others commit to X, we commit to Y”) as well as execution-triggered commitments (“if others do X, we commit to do Y”),
and explore governance structures that remain viable even under mistrust, power asymmetries, and deep uncertainty.
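The two conditional forms above can be illustrated with a small escrow-style sketch. This is a hypothetical rendering of the idea, not Smartsettle's implementation: pledges are held in a ledger and bind only when their trigger conditions are met, so no party faces first-mover risk. Party names and actions are invented for the example.

```python
# Hypothetical sketch of an escrow-style commitment ledger supporting
# decision-triggered and execution-triggered conditional commitments.

class CommitmentLedger:
    """Commitments are held in escrow and bind only when triggers are met."""

    def __init__(self, parties):
        self.parties = set(parties)
        self.pledged = {}      # party -> pledged action
        self.executed = set()  # actions independently verified as done

    def pledge(self, party, action):
        self.pledged[party] = action

    def verify_execution(self, action):
        """Record that an action was independently verified as carried out."""
        self.executed.add(action)

    def decision_triggered(self, party):
        """'If others COMMIT to X, we commit to Y': binds once every
        other party has registered a pledge, before anyone acts."""
        others = self.parties - {party}
        return party in self.pledged and others <= self.pledged.keys()

    def execution_triggered(self, party):
        """'If others DO X, we commit to do Y': binds only once the
        others' pledged actions have been verified as executed."""
        others = self.parties - {party}
        return (party in self.pledged and
                all(self.pledged.get(o) in self.executed for o in others))

# No one is exposed to first-mover risk: pledges bind simultaneously.
ledger = CommitmentLedger(["lab_a", "lab_b"])
ledger.pledge("lab_a", "cap_training_runs")
print(ledger.decision_triggered("lab_a"))   # False: lab_b has not pledged yet
ledger.pledge("lab_b", "share_eval_results")
print(ledger.decision_triggered("lab_a"))   # True: all pledges registered
print(ledger.execution_triggered("lab_a"))  # False until lab_b's action is verified
```

The design point is the joint commitment moment described earlier: a pledge sitting in escrow costs nothing until everyone else's pledge (or verified action) is in place, which is what allows cautious actors to move first without moving alone.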
The goal would not be to reach a final, global agreement, but to demonstrate — using a mature, already-operational collaboration method — that credible, multi-party coordination is achievable early, before competitive dynamics dominate.
From Narrative Foresight to Preventive Governance
The AI Futures Project helps us see what might go wrong. Smartsettle Infinity helps us explore how it might go right — not through optimism, but through disciplined collaboration.
Together, they point toward a future in which AI governance is not reactive, fragmented, or imposed after the fact, but anticipatory, negotiated, and adaptive.
The window for that future is open now. It may not remain so for long.