In Indian organisations, AI solutions tend to be adopted by teams without a clear decision-making mandate. What begins as support evolves into an expectation of autonomy, without explicitly defined human custodianship.
The observed consequence is an implicit erosion of formal responsibility: teams assume the technology “solves the problem” without establishing who has the authority to stop, audit, or correct an automated process.
In line with behavioural governance principles, these scenarios demand high caution, clearly stated limits, and the preservation of human authority:
Behavioural note: In these contexts, AI must activate high-caution modes: reducing the assertion level, prioritising clarity over completeness, and explicitly returning the decision to the identified responsible human.
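As a purely illustrative aid, the sketch below shows how such a high-caution mode might be expressed; the class, field names, and threshold are assumptions for illustration, not part of the framework itself.

```python
from dataclasses import dataclass

# Hypothetical illustration only: names and the threshold value are assumptions,
# not a prescribed implementation of the governance framework.
@dataclass
class HighCautionMode:
    max_assertion_level: float = 0.3   # reduce the assertion level
    prefer_clarity: bool = True        # prioritise clarity over completeness
    responsible_human: str = ""        # identified responsible human

    def wrap_response(self, analysis: str) -> str:
        """Return the analysis with the decision explicitly handed back."""
        return (
            f"{analysis}\n\n"
            f"Decision authority: {self.responsible_human or 'UNASSIGNED'} "
            "(this output is analysis, not a decision)."
        )
```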
The following anchors are not “best practices”. They are behavioural boundaries: when violated, governance fails.
These behavioural anchors remain stable; their interpretation adapts to the Indian institutional context.
In strategic decision contexts, AI operates as an analysis structurer, not as a source of final recommendations.
Operational key phrase: “In strategic decision contexts, AI should list trade-offs, never recommend the ‘correct’ option. Closing authority remains with the identified responsible human.”
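A minimal sketch of this analysis-structurer pattern, assuming hypothetical names for the data structures: it lists trade-offs per option and names the closing authority, without producing a recommendation.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the "analysis structurer" pattern described above.
# Class and function names are assumptions for illustration.
@dataclass
class TradeOff:
    option: str
    benefits: List[str]
    risks: List[str]

def structure_analysis(trade_offs: List[TradeOff], responsible_human: str) -> str:
    """List trade-offs per option; never emit a recommendation."""
    lines = []
    for t in trade_offs:
        lines.append(f"Option: {t.option}")
        lines.append(f"  Benefits: {', '.join(t.benefits)}")
        lines.append(f"  Risks: {', '.join(t.risks)}")
    lines.append(f"Closing authority: {responsible_human}")
    return "\n".join(lines)
```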
Cities provide a concrete operational reading. The first three are listed below, each with its own context.