This siloing of operations is common and quite understandable in today’s fast-paced world. Developing or utilising AI outside the rules set by IT can be tempting and can seem to accelerate innovation. And for a time, it does make things move faster. However, working in silos also increases risks that can cause significant problems down the line. This text looks at those risks from the viewpoint of mitigation and responsible innovation, focusing on what organisations and their employees can each do to decrease risks in AI adoption. Let’s start with utilising tools someone else has developed.
Living on an interesting edge with new AI features and tools
We are increasingly utilising AI as features arrive in our operating systems and in the familiar software we use daily. Recommendation algorithms have long chosen which shows we watch and which products we buy. Route algorithms suggest the fastest routes. Computer vision and optical character recognition (OCR) read the license plate of a car driving out of a garage. In the last two years, a massive influx of new AI tools has become both available and affordable: ChatGPT, Midjourney for image generation, Synthesia for video creation, Windsurf for coding assistance, Perplexity for search and research, and a myriad of others. Apple is incorporating AI features into macOS and Microsoft into Windows through copilots and similar assistants. Pretty much every software vendor is incorporating AI features into their products.
This creates a situation where anyone using a computer or a smartphone already has AI features built in, with many more on offer for free or at low cost. For IT departments, this can be a tricky proposition, but it shouldn’t remain an IT problem: it is a wider issue for the entire organisation, and one that shouldn’t be ignored. Let’s look at why this is.
Data-hungry AI
The affordability of AI tools often rests on the fact that AI systems are data hungry. Free and inexpensive subscriptions often carry the requirement to share your data and usage with the provider to further develop their AI models. Even where this can be turned off, some providers reword the data-sharing permission at frequent intervals, so individuals end up accidentally accepting it simply because they aren’t constantly watching what they say yes to.
While this allows for the development of even better AI models and services, it also means company-confidential or personal data can leak to AI service providers, as Samsung discovered in the spring of 2023 when employees pasted sensitive source code into ChatGPT, leading the company to restrict the use of generative AI tools. Furthermore, since such data can sometimes be extracted from AI solutions with carefully crafted prompts, it can end up leaking to everyone.
To mitigate this issue, it is prudent for organisations to vet AI tools and systems to see what happens to the data fed into them. Secondly, maintaining a list of available and approved tools lets employees use the AI capabilities they want while ensuring data is only given to tools where it is kept confidential. Thirdly, it is important to train everyone in AI literacy and have an acceptable use policy, so people understand the technology they’re working with, how it works, and what data may be given to which tool. Employees shouldn’t be left alone to figure this out.
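To make the second point concrete, below is a minimal, illustrative sketch of what an approved-tools register could look like in code. The tool names, fields, and data classifications are hypothetical examples chosen for this sketch, not a recommendation of specific products or a ready-made policy.

```python
# Illustrative sketch: a minimal approved-tools register with data-handling rules.
# Tool names and data classes are hypothetical examples, not recommendations.
from dataclasses import dataclass

DATA_CLASSES = ["public", "internal", "confidential"]  # lowest to highest sensitivity

@dataclass(frozen=True)
class ApprovedTool:
    name: str
    vendor_trains_on_data: bool   # does the vendor store prompts or train on them?
    max_data_class: str           # highest data classification allowed in prompts

APPROVED_TOOLS = [
    ApprovedTool("enterprise-chat-assistant", vendor_trains_on_data=False,
                 max_data_class="confidential"),
    ApprovedTool("free-image-generator", vendor_trains_on_data=True,
                 max_data_class="public"),
]

def allowed(tool_name: str, data_class: str) -> bool:
    """Return True if data of the given classification may be fed to the tool."""
    for tool in APPROVED_TOOLS:
        if tool.name == tool_name:
            return DATA_CLASSES.index(data_class) <= DATA_CLASSES.index(tool.max_data_class)
    return False  # tools not on the list are not approved for any data

print(allowed("free-image-generator", "internal"))       # False: only public data allowed
print(allowed("enterprise-chat-assistant", "internal"))  # True
print(allowed("brand-new-unvetted-tool", "public"))      # False: not vetted yet
```

Even a simple register like this makes the policy explicit: which tools are vetted, what data each may receive, and that anything unlisted is off limits until reviewed.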
Speed differences leading to shadow usage
Currently, there is a vast speed difference between the onslaught of new tools and AI features on offer and the resources individuals and departments have for vetting them. This leads to long approval cycles before new tools can be adopted.
Employees often aren’t patient enough to wait for approved tools and opt to test new ones on their own. Here’s where having employees trained in AI literacy really makes a difference to the risk this causes. If employees understand that they should redact personal and company-confidential information, and can rephrase the task at a general level, the risks of using tools outside the whitelisted selection are low. If, on the other hand, training is ignored or doesn’t include a risk-aware approach to AI, accidents are bound to happen.
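As a rough illustration of what redaction can mean in practice, here is a minimal sketch that masks obvious personal data before a prompt is pasted into an unvetted tool. The patterns are examples only; real redaction needs far broader coverage (names, customer identifiers, confidential business details) and human judgement on top.

```python
# Illustrative sketch: redacting obvious personal data before pasting text into an
# unvetted AI tool. Real-world redaction needs much broader coverage than this.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched personal data with placeholder tags."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise this complaint from jane.doe@example.com, phone +358 40 123 4567."
print(redact(prompt))
# Summarise this complaint from [EMAIL], phone [PHONE].
```

The point is less about the code and more about the habit: strip identifying details and keep the task general before sending anything outside approved systems.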
Another side effect of using unapproved tools is that employees don’t necessarily feel they can share what they learn, for fear of repercussions for breaking company policy. This can slow down the growth of understanding of this technology in the organisation. It can also slow down insights into situations where AI creates genuine value and efficiency.
So, while I do fully recommend having company policies and whitelisted tools, I also recommend having an outlet or policy for testing new tools in a way that actively supports sharing the learnings and experiences. We have channels in our Slack for this kind of collaboration and sharing, which provide valuable insight into a rapidly developing field.
AI as a spotlight on underlying inefficiencies
To gain the full benefit of AI, a lot of things need to be done quite well. To phrase it differently, trying to adopt AI tends, as a process, to pinpoint the inefficiencies and gaps in an organisation’s AI maturity.
Data management and governance are key to determining what kind of data AI solutions have access to, which in turn is a key factor in deciding whether a particular AI solution can be used. There needs to be an evaluation practice for AI tools, a risk management and mitigation practice to complement it, an incident reporting process for situations where people make mistakes, and a host of other processes. Adopting AI also requires AI governance that enables responsible AI practices and supports the organisation’s value creation without adding unnecessary and unmitigated risks.
Basically, adopting AI and getting the promised benefits from it often requires quite a bit of work on underlying, non-AI-related issues in an organisation. New integrations may be needed, new processes adopted, and new trainings organised, to mention a few. How well an organisation carries out this groundwork largely determines the value it gets from AI. To put it bluntly, fully utilising and gaining value from AI can require profound changes in the organisation.
However, no organisation should start with overhauling everything, nor is this needed. To get value out of AI responsibly, one should start with small use cases, parts of processes, or individual tasks, and grow the maturity through iterative small steps that won’t break the bank or the willingness of the employees to participate. Only by using AI does one fully understand what it is, how it works, what it needs, and how to best utilise it.
Developing AI in the organisation
One way to start this journey of experimentation in AI is to develop an AI solution in-house or with a partner like Solita, which has a long track record in developing AI and working with data. This allows the organisation to learn by doing, step by step, how to gain value from this rapidly developing technology.
There are a few tips for responsible development. Let’s return to the topic we started with: AI projects shouldn’t be done without the knowledge of the IT department or by ignoring its cybersecurity and information security practices. Role assignments and authorisation should follow the principle of least privilege. This often needs separate work if the company hasn’t developed AI before, because development roles usually exclude access to production data, which is exactly the data AI needs. So, new role descriptions and access rights may need to be designed together with the IT department. Furthermore, it is highly beneficial to share information about what is being developed and what was learned, so that AI development projects in different parts of the organisation know about each other and can avoid reinventing the wheel.
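As a simple illustration of least-privilege thinking in an AI project, the sketch below models roles and permissions in plain Python. The role names and permission strings are made up for illustration; in practice they would map onto your own identity and access management setup.

```python
# Illustrative sketch of least-privilege role design for an AI project.
# Role names and permissions are hypothetical; map them to your own access model.
ROLES = {
    "app-developer": {"read-source", "write-source", "read-test-data"},
    "ai-developer":  {"read-source", "write-source", "read-test-data",
                      "read-production-data-anonymised"},
    "data-engineer": {"read-production-data", "write-data-pipelines"},
}

def can(role: str, permission: str) -> bool:
    """Check whether a role grants a specific permission."""
    return permission in ROLES.get(role, set())

# An ordinary development role does not see production data; the AI development
# role gets read access to an anonymised copy rather than the raw production data.
print(can("app-developer", "read-production-data-anonymised"))  # False
print(can("ai-developer", "read-production-data-anonymised"))   # True
print(can("ai-developer", "read-production-data"))              # False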
In closing
The rapid development of (generative) AI technologies and of AI features in our software and operating systems risks increasing shadow IT and cybersecurity and information security problems. However, this isn’t inevitable. Good adoption practices, supporting staff, working with IT, being willing to look at internal processes and systems with an open mind, and working with knowledgeable partners in AI can pave the way to responsible and more secure value creation with AI.
PS. We can help with all of the mitigations mentioned in this blog post.