The shift from passive digital tools to active, agentic AI systems is already underway. We’ve moved beyond asking AI to “set an alarm”: AI now negotiates supplier contracts for companies like Walmart, and at Stanford Health Care, AI agents autonomously coordinate complex tumour board preparations, analysing imaging, pathology, and clinical records to cut preparation time.
Yet the impact goes deeper than mere efficiency. As Tony Seale and Stuart Winter-Tear highlight, modern AI agents do more than act within a static environment; they actively shape it. AI is becoming both “architect and inhabitant”, dynamically creating and continuously refining the very environment in which it operates, co-creating that environment rather than merely navigating pre-built structures.
This mirrors a core idea from neuroscience: intelligent systems aren’t passive recipients of data; they actively interpret, predict, and reshape their world (just as we do ourselves).
However, as we increasingly delegate critical tasks to AI, we also confront its biases and limitations. Before criticising AI for biases inherited from training data, let’s remember that humans are equally shaped by their own “training data”: our cultures, experiences, and education. Both humans and AI systems require clearly defined provenance and guardrails. The real challenge isn’t avoiding biases altogether but carefully managing them by building robust frameworks and boundaries around silicon brains and human minds alike.
As Jensen Huang provocatively put it, perhaps IT is becoming the HR department for our new AI colleagues.
Autonomy is a double-edged sword: as Yaakov Belch states, we need to put humans not only in the loop, but in control.
Jesper Lowgren outlines seven agentic laws that set the score to keep our AI orchestras in tune (a rough sketch of how a few of them might look in code follows the list):
- Non-maleficence: Ensure agents do no harm.
- Provenance & legitimacy: Maintain clarity around data sources and their legitimacy.
- Purpose alignment: Ensure agents’ goals and values match ours.
- Bounded autonomy: Clearly define limits to agent autonomy.
- Embedded governance: Integrate oversight into the agents’ operational environment.
- Transparent accountability: Make agent decisions understandable and traceable.
- Emergent coordination: Enable agents to self-organise and collaborate safely.
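
To make a few of these laws less abstract, here is a minimal, hypothetical sketch of how bounded autonomy, non-maleficence, embedded governance, and transparent accountability could be wired around an agent’s proposed actions. Everything here (the `GuardedAgent` and `AgentPolicy` names, the spend limit, the example actions) is an illustrative assumption of mine, not a reference implementation of Lowgren’s framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentPolicy:
    """Hypothetical policy: hard limits the agent may never exceed (bounded autonomy)
    and actions that are always refused (non-maleficence)."""
    allowed_actions: set[str]
    max_spend_per_action: float
    blocked_actions: set[str] = field(default_factory=lambda: {"delete_records"})


@dataclass
class ActionRequest:
    action: str
    amount: float
    rationale: str


class GuardedAgent:
    """Embedded governance: every proposed action passes through policy checks."""

    def __init__(self, policy: AgentPolicy):
        self.policy = policy
        # Transparent accountability: every decision is recorded and traceable.
        self.audit_log: list[dict] = []

    def review(self, request: ActionRequest) -> str:
        if request.action in self.policy.blocked_actions:
            decision = "rejected: non-maleficence"
        elif request.action not in self.policy.allowed_actions:
            decision = "escalated: outside bounded autonomy, human decides"
        elif request.amount > self.policy.max_spend_per_action:
            decision = "escalated: over spend limit, human decides"
        else:
            decision = "approved"
        # Log what was asked, why, and what was decided.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": request.action,
            "amount": request.amount,
            "rationale": request.rationale,
            "decision": decision,
        })
        return decision


# Usage: the agent proposes, the guardrails dispose; humans stay in control of escalations.
agent = GuardedAgent(AgentPolicy(allowed_actions={"negotiate_contract"},
                                 max_spend_per_action=50_000))
print(agent.review(ActionRequest("negotiate_contract", 30_000, "renew supplier terms")))  # approved
print(agent.review(ActionRequest("negotiate_contract", 250_000, "large one-off order")))  # escalated
print(agent.review(ActionRequest("delete_records", 0, "cleanup")))                        # rejected
```

The point of the sketch is the shape, not the specifics: limits and refusals live outside the agent’s own reasoning, and anything beyond its bounds goes to a human rather than being silently executed.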
Another view comes from Mohit Sewak, whose six types of guardrails each encompass more detailed guardrails:
- Ethical
- Security
- Compliance
- Technical
- Contextual
- Adaptive
Question: Of the laws/guardrail types listed, which one do you see as the biggest challenge to implement?
In part 3, I’ll write about how knowledge graphs become the sheet music for enterprise AI.