Solita’s ethical principles for AI and machine learning

These ethical principles guide our work when designing and building machine learning applications. Solita’s experts design and implement applications and services that use machine learning or other methods of machine intelligence as part of their form or function. We are committed to revising these guidelines as technology moves forward and as we learn from putting them into practice.


Better life objective

We commit to using machine learning to make life better for our customers, end users, employees, stakeholders, society and the environment, and we actively seek to prevent potential negative effects on these parties. While recognising that the idea of a ‘better life’ is partly a cultural and political question, entangled with ideologies and struggles, we also remember that there is broad agreement on certain ethical baselines, such as the Universal Declaration of Human Rights. We seek to negotiate between these widely accepted principles and the diverse beliefs of the various stakeholders affected by our work.


Human-centric approach

We commit to a human-centric view, embracing empathy and cultural and societal understanding in our ML approach. We value human diversity in its many forms. We take into account and value end users and their views, seeking to serve them with our solutions.


Ethics community

When faced with ethical dilemmas, we encourage our project teams to consult our ethics community.



Competence

We understand that the quality of our work ultimately depends on the competence of our employees. We seek to ensure that our employees receive the necessary training and that they comply with our company-wide ethical practices.


Multidisciplinary collaboration

We acknowledge that the complex nature of ethics in ML implementation requires a deep understanding of technology as well as of human behaviour and societies. We strive to build teams of people from diverse backgrounds to design solutions that use artificial intelligence.



Openness

As our solutions have the potential to impact societies in whole or in part, we acknowledge the need to be open to criticism. We encourage and support collaboration with governmental and non-governmental organisations, as well as research bodies, in order to improve our work. We also acknowledge the possible tension between this openness and our customers’ privacy, and seek to balance the two.


Our responsibility

While ML enables automated decision making, we acknowledge that responsibility for those decisions lies with us and our clients. We understand that machines serve us and society at large.


Continuous development

We evaluate our ML solutions throughout their lifespan, understanding that their performance may vary over time. We seek to continuously evaluate and improve our models.


Regulatory adherence

We acknowledge that ML solutions must comply with applicable national and international regulations.