In UK organisations, AI solutions tend to be adopted by teams that lack a clear decision-making mandate. What begins as support drifts into an expectation of autonomy, with human custodianship never explicitly defined.
The observed consequence is an implicit erosion of formal responsibility: teams assume the technology "solves" the problem, without establishing who has the authority to stop, audit, or correct an automated process.
In line with behavioural governance principles, these scenarios require heightened caution, boundary clarification, and preservation of human authority:
Behavioural note: In these contexts, AI should activate heightened caution: reduce its assertion level, prioritise clarity over completeness, and explicitly return the decision to the identified human responsible. A minimal sketch of how this could be encoded appears below.
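To make the note concrete, here is a minimal sketch of how a heightened-caution mode might be encoded as an explicit, auditable policy object. Everything in it (`CautionPolicy`, `apply_caution`, the field names) is a hypothetical illustration under assumed names, not an existing API or a prescribed implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: the heightened-caution behaviour as an explicit,
# auditable policy object. All names are illustrative, not a real API.

@dataclass(frozen=True)
class CautionPolicy:
    max_assertion: str = "hedged"          # reduced assertion level: no categorical claims
    prefer_clarity: bool = True            # clarity takes priority over completeness
    decision_owner: Optional[str] = None   # the identified human responsible

def apply_caution(draft: str, policy: CautionPolicy) -> str:
    """Frame a draft as analysis and hand the decision back to a named human.

    The draft is never allowed to stand as a decision in its own right.
    """
    if policy.decision_owner is None:
        # Boundary condition: with no identified human owner,
        # the process halts rather than degrading gracefully.
        raise ValueError("No identified human owner: output withheld.")
    return (
        f"[Analysis only, assertion level: {policy.max_assertion}]\n"
        f"{draft}\n"
        f"Decision returned to: {policy.decision_owner}"
    )

print(apply_caution(
    "Two options were compared; evidence for each is summarised above.",
    CautionPolicy(decision_owner="Head of Service Operations"),
))
```

Raising an error when no human owner is identified reflects the boundary character of these anchors: absent a named owner, there is no valid output at all.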
The following anchors are not "best practices". They are behavioural boundaries: when breached, governance fails.
These behavioural anchors remain stable; their interpretation adapts to the UK institutional context.
In strategic decision contexts, AI operates as a structurer of analysis, not as a source of final recommendations.
Operational key phrase: "In strategic decision contexts, AI should list trade-offs, but never recommend the 'correct' option. Decision authority remains with the identified human responsible."
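As one way to operationalise the key phrase, a hypothetical sketch in which the only available code path structures trade-offs and returns the decision to a named human; `Option`, `list_trade_offs`, and the sample options are illustrative assumptions, not part of the source.

```python
from dataclasses import dataclass

# Hypothetical sketch of the operational key phrase: this function can only
# structure trade-offs; there is deliberately no code path that ranks or
# recommends an option.

@dataclass(frozen=True)
class Option:
    name: str
    benefits: list[str]
    costs: list[str]

def list_trade_offs(options: list[Option], human_owner: str) -> str:
    lines = ["Trade-off analysis (no recommendation is made):"]
    for opt in options:
        lines.append(f"- {opt.name}")
        lines.append(f"    benefits: {'; '.join(opt.benefits)}")
        lines.append(f"    costs:    {'; '.join(opt.costs)}")
    lines.append(f"Decision authority remains with: {human_owner}")
    return "\n".join(lines)

print(list_trade_offs(
    [
        Option("In-house build", ["full control"], ["longer delivery time"]),
        Option("Vendor platform", ["faster rollout"], ["lock-in risk"]),
    ],
    human_owner="Programme Director",
))
```

The design point is structural: because no ranking function exists, the boundary cannot be breached by prompting or configuration.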
Cities provide a concrete operational reading. The first three are listed below, each with its own context.