What is an AI agent?
An artificial intelligence agent (AI agent) isn’t a chatbot, nor a simple machine learning model. It is a programmatic actor that operates independently or as part of an orchestrated process. The agent observes its situation, makes inferences and initiates actions, either autonomously or with the support of a professional. It typically combines several technical components: decision logic, NLP, rule engines and integrations. An agent won’t work unless it is connected to an operational environment: it needs access rights, access to data and the ability to operate across system interfaces.
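As a rough sketch of this observe-infer-act loop, the following illustrates what such a programmatic actor could look like in code. The environment client and its methods (fetch_patient_summary, submit_proposal) are purely hypothetical placeholders for the access rights, data access and interfaces mentioned above, not references to any existing system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    patient_id: str
    has_follow_up: bool

@dataclass
class Action:
    name: str
    payload: dict

class FollowUpAgent:
    """Illustrative agent: observes, infers and initiates an action."""

    def __init__(self, environment):
        # The environment wraps access rights, data access and system interfaces.
        self.environment = environment

    def observe(self, patient_id: str) -> Observation:
        record = self.environment.fetch_patient_summary(patient_id)  # hypothetical call
        return Observation(patient_id=patient_id,
                           has_follow_up=record.get("follow_up_booked", False))

    def infer(self, obs: Observation) -> Optional[Action]:
        # Decision logic: no follow-up booked -> propose one for a professional to confirm.
        if not obs.has_follow_up:
            return Action(name="propose_follow_up_call",
                          payload={"patient_id": obs.patient_id})
        return None

    def act(self, action: Action) -> None:
        self.environment.submit_proposal(action.name, action.payload)  # hypothetical call
```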
In healthcare, an AI agent can suggest a treatment appointment, check medication compatibility, route customer feedback or ensure that no patient is left without a follow-up call. But only if the systems allow it and if someone has designed the AI agent correctly.
The current situation
The information system architecture of Finnish social and healthcare services is still highly fragmented. Several different appointment booking systems, patient information systems, social care customer registers and national services are in use without a common level of orchestration. The interfaces are partly closed, partly poorly documented, and access rights are decentralised across different organisations. Structured documentation is expanding, but the vast majority of data is still free-form text, or at worst PDF attachments that cannot yet be processed by an agent without heavy NLP structuring and OCR.
In this context, the use of an AI agent is only realistic when the quality, access and context of the data are under control. For example, an appointment booking agent can suggest a suitable time based on the patient’s previous appointment history and urgency information, if the system provides an open and documented API, and if the calendar information has been modelled to be technically consistent. In addition to integration, authentication, transaction management and access control are required. Without these, the AI agent cannot operate securely and reliably.
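A minimal sketch of such an appointment booking agent, assuming an open and documented REST API with token-based authentication, could look like the following. The base URL, endpoint paths and field names are illustrative assumptions, not a real booking system.

```python
import requests

API_BASE = "https://booking.example.org/api/v1"  # hypothetical, documented API

def suggest_appointment(patient_id: str, urgency: str, token: str) -> dict | None:
    """Propose the earliest suitable free slot based on history and urgency."""
    headers = {"Authorization": f"Bearer {token}"}  # authentication is mandatory

    history = requests.get(f"{API_BASE}/patients/{patient_id}/appointments",
                           headers=headers, timeout=10).json()
    free_slots = requests.get(f"{API_BASE}/slots", params={"status": "free"},
                              headers=headers, timeout=10).json()

    # Simple decision logic: urgent cases get the earliest slot,
    # others are aligned with the clinic used on previous visits.
    preferred_clinic = history[-1]["clinic_id"] if history else None
    for slot in sorted(free_slots, key=lambda s: s["start_time"]):
        if urgency == "urgent" or slot["clinic_id"] == preferred_clinic:
            return {"patient_id": patient_id, "slot_id": slot["id"],
                    "requires_confirmation": True}  # a human still confirms the booking
    return None
```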
For example, an AI agent supporting a professional may detect a missing diagnosis in relation to laboratory results, suggest documentation based on national treatment guidelines, or highlight a discrepancy between medication and the patient’s recorded symptoms. To make this possible, the system must provide access to patient data in a structured format (at least at the level of ICPC, ICD and ATC codes) and enable the linking of a drug database and treatment guidelines (e.g. Current Care) to decision making. Without a valid semantic connection, the agent’s proposals remain guesswork.
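The discrepancy check can be sketched roughly as below. The mapping between ATC classes and expected ICD-10 diagnoses is a hard-coded placeholder; a real implementation would draw on a drug database and treatment guidelines rather than a hand-written dictionary.

```python
# Placeholder mapping: which ICD-10 diagnosis categories are expected to accompany
# a given ATC medication class. Illustrative only, not clinical knowledge.
EXPECTED_DIAGNOSES = {
    "A10": {"E10", "E11"},   # antidiabetics -> diabetes mellitus, type 1/2
    "C03": {"I10", "I50"},   # diuretics -> hypertension, heart failure
}

def find_discrepancies(medications: list[str], diagnoses: list[str]) -> list[str]:
    """Flag medications whose expected diagnosis is missing from the record."""
    findings = []
    recorded = {code.split(".")[0] for code in diagnoses}  # compare at category level
    for atc in medications:
        expected = EXPECTED_DIAGNOSES.get(atc[:3])
        if expected and not expected & recorded:
            findings.append(
                f"Medication {atc} has no matching recorded diagnosis "
                f"(expected one of {sorted(expected)})"
            )
    return findings

# Example: a patient on an antidiabetic with no diabetes diagnosis recorded
print(find_discrepancies(["A10BA02"], ["I10"]))
```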
One useful application is patient appointment management. An AI agent can identify if a patient hasn’t responded to an appointment, check for cancellation notes or missing information, and suggest a new appointment or an automatic reminder. This can be implemented if the appointment booking system provides status information (e.g. booked, cancelled, unconfirmed, free), and if messaging (SMS, email, Suomi.fi) can be integrated into the agent’s operations. In addition, a link to the patient or customer identifier is required to communicate properly. Transaction error handling and an audit trail are also required so that every automated action can be traced.
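A sketch of this reminder flow, under the assumption that the booking system exposes the status values named above and that messaging and booking interfaces exist, might look like this. The appointment data model and the messaging and booking clients are hypothetical.

```python
import logging
from datetime import datetime, timezone
from enum import Enum

audit_log = logging.getLogger("agent.audit")

class AppointmentStatus(Enum):
    BOOKED = "booked"
    CANCELLED = "cancelled"
    UNCONFIRMED = "unconfirmed"
    FREE = "free"

def handle_appointment(appointment: dict, messaging, booking) -> None:
    """Send a reminder or propose a new slot, and leave an audit trail."""
    status = AppointmentStatus(appointment["status"])
    patient_id = appointment["patient_id"]  # link to the patient or customer identifier

    if status is AppointmentStatus.UNCONFIRMED:
        messaging.send(patient_id, channel="sms",
                       template="appointment_reminder")   # hypothetical interface
        outcome = "reminder_sent"
    elif status is AppointmentStatus.CANCELLED:
        booking.propose_new_slot(patient_id)               # hypothetical interface
        outcome = "new_slot_proposed"
    else:
        outcome = "no_action"

    # Every automated action must be traceable.
    audit_log.info("appointment=%s patient=%s action=%s at=%s",
                   appointment["id"], patient_id, outcome,
                   datetime.now(timezone.utc).isoformat())
```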
In contrast, complex care pathways – such as patient follow-up involving X-rays, lab tests, multiple appointments and specialist opinions – cannot currently be automated with an AI agent in Finland. The reason is the system architecture: scheduling and rescheduling appointments across different information systems is challenging, and the responsibility for coordinating the care process remains with the human being. Similarly, a multi-professional assessment, such as processing of a child welfare notification based on information from several different authorities, cannot be carried out by an agent, as the merging of registers isn’t allowed by law without an extensive legal basis and consent management.
To summarise: an AI agent can operate where the data is structured, responsibilities are defined, and systems can be integrated. Every use case where the agent acts autonomously (e.g. suggests a time, produces a document or triggers an action) must be taken into production in a controlled manner. This requires a development environment, a validation pipeline, technical documentation, monitoring and a feedback channel. An agent isn’t an “add-on” but a programmatic actor that must be produced like any other critical component: tested, versioned and responsibly orchestrated.
An AI agent is a process, not a feature
Agent design requires clear modelling. The scope of each agent’s activities, its access to information, its monitoring and its exception handling must be defined in advance. An agent cannot be a black box; every decision and action it takes must be auditable. This means state management, event-based architecture and explicit reasoning logic.
In practice, an orchestrating layer is needed over each agent, managing when the agent acts, what it does, and with what information. This can be implemented with a workflow engine, a rules engine or a state machine, but it must be visible in both the technical documentation and practical monitoring. Agents must function technically in the same way as other critical software components: they must be tested, validated and versioned. This requires a DevOps-based management model and quality assurance.
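As an illustration of the previous two paragraphs, the orchestrating layer can be expressed as an explicit state machine that records every transition, so the agent never drifts into an unaudited state. The states and transitions below are illustrative assumptions, not a specific product or framework.

```python
from enum import Enum, auto

class AgentState(Enum):
    IDLE = auto()
    OBSERVING = auto()
    REASONING = auto()
    AWAITING_APPROVAL = auto()
    DONE = auto()
    FAILED = auto()

# Explicitly allowed transitions: anything else is rejected,
# which keeps the agent's behaviour auditable by construction.
ALLOWED = {
    AgentState.IDLE: {AgentState.OBSERVING},
    AgentState.OBSERVING: {AgentState.REASONING, AgentState.FAILED},
    AgentState.REASONING: {AgentState.AWAITING_APPROVAL, AgentState.FAILED},
    AgentState.AWAITING_APPROVAL: {AgentState.DONE, AgentState.FAILED},
}

class AgentRun:
    """One versioned, auditable execution of an agent."""

    def __init__(self, agent_version: str):
        self.agent_version = agent_version   # versioned like any other critical component
        self.state = AgentState.IDLE
        self.history: list[tuple[AgentState, AgentState]] = []

    def transition(self, new_state: AgentState) -> None:
        if new_state not in ALLOWED.get(self.state, set()):
            raise RuntimeError(f"Illegal transition {self.state} -> {new_state}")
        self.history.append((self.state, new_state))   # audit trail of every step
        self.state = new_state
```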
AI agent orchestration is a critical architectural layer
A single AI agent can perform a task assigned to it, such as retrieving data, suggesting an action, or producing a draft. However, individual agents aren’t enough to achieve practical effectiveness in everyday social and healthcare services. A system-level capability is needed to manage, integrate and monitor the collaborative work of agents. This is called agent orchestration.
Orchestration isn’t an abstract term, but a practical architectural requirement. It means that the system must have a component, often a workflow engine or event-driven architecture, that ensures that the right agent is activated at the right time, with the right data and in the right order. In the context of social and healthcare, this means, for example, that the agent performing the risk classification first identifies the patient, after which the agent responsible for proposing intervention is triggered, and finally, the agent handling the appointment checks the resources and proposes a time for the visit.
Orchestration is also needed in error handling. An agent shouldn’t continue its activity if the data it requires is missing or inconsistent. The orchestration environment must be able to abort, redirect to exception processes and provide an event log for subsequent analysis of the agent’s activity. This is a technical necessity and is also important from a regulatory and accountability perspective.
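A minimal sketch that combines the sequence described above (risk classification, intervention proposal, appointment handling) with this kind of error handling might look as follows. The agent interfaces, the exception queue and the step names are illustrative assumptions, not an existing orchestration framework.

```python
import logging

event_log = logging.getLogger("orchestrator.events")

class MissingDataError(Exception):
    """Raised by an agent when the data it requires is missing or inconsistent."""

def orchestrate(patient_id: str, risk_agent, intervention_agent, scheduling_agent,
                exception_queue) -> None:
    """Run the agents in order; abort and route to an exception process on failure."""
    context = {"patient_id": patient_id}
    pipeline = [
        ("risk_classification", risk_agent),
        ("intervention_proposal", intervention_agent),
        ("appointment_scheduling", scheduling_agent),
    ]
    for step_name, agent in pipeline:
        try:
            result = agent.run(context)            # hypothetical agent interface
            context[step_name] = result
            event_log.info("patient=%s step=%s status=ok", patient_id, step_name)
        except MissingDataError as exc:
            # Don't let a later agent act on incomplete input: stop here,
            # route to the exception process and leave an event log entry.
            event_log.warning("patient=%s step=%s status=aborted reason=%s",
                              patient_id, step_name, exc)
            exception_queue.put({"patient_id": patient_id,
                                 "failed_step": step_name, "reason": str(exc)})
            return
```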
Most organisations don’t yet have a clear orchestration layer to which AI agents can be connected. Building such an environment isn’t straightforward, as it requires process modelling, interface harmonisation and technical capability to manage parallel, partly asynchronous activities.
Without orchestration, AI agents will remain isolated solutions that may have value in individual use cases but cannot be scaled up or kept under control. Orchestration isn’t an optional extra but a key prerequisite for the controlled and secure expansion of agent use.
Regulations don’t prevent AI, but guide its use
The EU’s AI Act and Medical Device Regulation (MDR) clearly define that if an agent influences patient care or makes clinical decisions, it is a high-risk application. In such cases, the agent must be documented, validated, and CE marked. This doesn’t mean that AI cannot be used, but that it must be built correctly. The responsibility of the agent, the role of the user and the transparency of the operation must be demonstrable. An audit trail isn’t a recommendation, it is mandatory.
From the perspective of the MDR in particular, it is essential that classification as a medical device isn’t based on technical functionality alone, but on the intended use as defined by the manufacturer. If the intended purpose of an AI agent is declared to be clinical decision support, diagnosis or guidance on treatment, the software is considered a medical device, regardless of the technical means by which the tasks are performed. This entails device requirements, CE marking and clinical evaluation.
The AI Act, on the other hand, requires that high-risk AI systems have clear quality management, documentation, and the possibility for humans to monitor or interrupt the agent’s actions. The AI Act doesn’t itself determine whether a system is a medical device, but refers to the MDR when the AI is part of medical software.
GDPR adds its own layer of regulation. An agent that processes personal and sensitive health data needs a legal basis for processing. If the agent operates without human intervention, the risk increases, and a Data Protection Impact Assessment (DPIA) is a practical necessity. In addition, the agent must be able to fulfil the data subject’s rights, such as transparency, accountability and the right to rectification.
The technical design of an agent alone doesn’t determine its regulatory status. What matters is the purpose of use, the impact on the health of the individual, the degree of automation and the level of transparency. In social and healthcare, there is no room for interpretation. If an agent influences a care decision, it must be treated accordingly, not as light automation but as a medical entity.
In practice, all this means that an autonomous agent may not independently transfer a patient to another care pathway, order further examinations, or modify data in the register without the approval of a professional. Every decision must be traceable, and the handling of errors must be documented. When the agent acts as a support to the human, for example, by suggesting an action that the professional approves, the regulatory requirement is less stringent, but supervision is still necessary.
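One way to read this requirement technically is an approval gate: the agent only produces proposals, a professional authorises them, and only then is the register touched. The sketch below is illustrative; the proposal model and the register interface are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentProposal:
    """An agent never writes to the register directly: it only proposes."""
    patient_id: str
    description: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    approved_by: str | None = None
    approved_at: datetime | None = None

    def approve(self, professional_id: str) -> None:
        # The professional's decision authorises the action, and the
        # approval itself becomes part of the audit trail.
        self.approved_by = professional_id
        self.approved_at = datetime.now(timezone.utc)

def apply_to_register(proposal: AgentProposal, register) -> None:
    """Write to the register only after a professional has approved the proposal."""
    if proposal.approved_by is None:
        raise PermissionError("Proposal has not been approved by a professional")
    register.write(proposal)   # hypothetical register interface
```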
AI agent access and control require a dedicated system layer
One of the most critical technical challenges in deploying AI agents is related to access control. In the future, solutions will be needed that also allow access rights to be granted to programmatic actors in the same way as user IDs are today. This will require role-based access control (RBAC), possibly attribute-based access control (ABAC), and the technical ability to control which data sources an agent is allowed to connect to and what actions it is allowed to take.
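An illustrative policy check for a programmatic actor could combine both models: a role grants a coarse permission, and contextual attributes constrain when it may be used. The roles, resources and context attributes below are made up for illustration.

```python
# Role-based permissions for programmatic actors (RBAC): which agent role
# may perform which action on which resource. Illustrative values only.
ROLE_PERMISSIONS = {
    "appointment-agent": {("booking-api", "read"), ("booking-api", "propose")},
    "documentation-agent": {("ehr", "read")},
}

def is_allowed(agent_role: str, resource: str, action: str, context: dict) -> bool:
    """RBAC check first, then attribute-based (ABAC-style) context conditions."""
    if (resource, action) not in ROLE_PERMISSIONS.get(agent_role, set()):
        return False
    # Contextual conditions, e.g. only within an active care relationship
    # and only during service hours.
    return (context.get("care_relationship_active", False)
            and context.get("within_service_hours", False))

# Example: the appointment agent may propose a booking only in an active care relationship
print(is_allowed("appointment-agent", "booking-api", "propose",
                 {"care_relationship_active": True, "within_service_hours": True}))
```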
At the same time, a logging and monitoring layer is needed that not only records the agent’s actions but also monitors their use in real time and reacts to anomalies. In practice, most social and healthcare organisations don’t yet have this in place, but the deployment of agents will require such an architectural layer, especially if their activities extend to the processing of register data.
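A simple sketch of such a layer records every agent action and raises an alert when the action rate deviates from a baseline. The sliding-window approach and the rate threshold are illustrative assumptions; a production monitor would feed alerts into the organisation’s existing operations tooling.

```python
import logging
from collections import deque
from datetime import datetime, timedelta, timezone

monitor_log = logging.getLogger("agent.monitor")

class ActionMonitor:
    """Records agent actions and reacts when the rate exceeds a baseline."""

    def __init__(self, max_actions_per_minute: int = 30):
        self.max_actions = max_actions_per_minute   # illustrative threshold
        self.recent: deque[datetime] = deque()

    def record(self, agent_id: str, action: str) -> None:
        now = datetime.now(timezone.utc)
        monitor_log.info("agent=%s action=%s at=%s", agent_id, action, now.isoformat())
        self.recent.append(now)
        # Keep only the last minute's actions in the sliding window.
        while self.recent and now - self.recent[0] > timedelta(minutes=1):
            self.recent.popleft()
        if len(self.recent) > self.max_actions:
            self.raise_alert(agent_id)

    def raise_alert(self, agent_id: str) -> None:
        # In a real deployment this would notify an operator or suspend the agent.
        monitor_log.warning("anomaly: agent=%s exceeded action rate limit", agent_id)
```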
What is already happening and what can be learned?
There are already some examples of AI agent-like solutions in the social and healthcare sector. Most of these operate within individual organisations, in a strictly defined data environment. For example, agents can help classify customer feedback, suggest next steps based on symptom assessment or support customer service representatives in their search for information. In these cases, activities are limited, data flows are controlled, and responsibilities are clearly defined.
Two lessons can be learned from these implementations: first, agents only deliver value when their operating environment is in order. Second, agents do nothing without a clear process to support their use. The technical implementation isn’t complex in itself, but to succeed, it requires planning, integration expertise and a layer of control.
AI agents aren’t a magic solution, but a technical entity
The utilisation of agents doesn’t start with “implementing AI”. It starts with modelling: what do you want to automate, under what conditions, with what responsibilities and within what limits? An agent isn’t a standalone piece of AI, but part of a software architecture. Its design, development and deployment require technical expertise, system-level understanding and knowledge of legislation.
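In practice, the answers to these modelling questions can be captured in a machine-readable specification that lives alongside the agent itself and is versioned with it. The fields below are illustrative, not a standard.

```python
# An illustrative, machine-readable answer to the modelling questions:
# what is automated, under what conditions, with what responsibilities and limits.
AGENT_SPEC = {
    "name": "follow-up-reminder-agent",
    "version": "1.3.0",
    "automates": "reminders for unconfirmed appointments",
    "conditions": {
        "requires_active_care_relationship": True,
        "operating_hours": "08:00-18:00",
    },
    "responsibilities": {
        "owner": "outpatient clinic, named service manager",
        "approval_required_for": ["new appointment proposals"],
    },
    "limits": {
        "may_write_to_register": False,
        "max_messages_per_patient_per_week": 2,
    },
}
```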
Once these are in place, the agent can operate safely and productively. In the context of social and healthcare, every mistake has a cost, and that’s why an agent must be finely tuned for the job. Not for hype, but for use.