Best Practices for Trustworthy AI Solutions

How to Employ Modern Software Engineering Practices and Testing for Ethical Enterprise AI

AI and Machine Learning are no longer just elements of esoteric science fiction.

AI and Machine Learning power software and devices we use every day, and a large amount of high-quality AI and ML software is freely available under permissive open-source licenses.

With the growth in the power and accessibility of data science solutions, concerns about the potential for undesired and unexpected behavior have also grown, in no small part due to scandals like the Facebook-Cambridge Analytica affair, in which advanced data analytics powered a highly effective social media campaign around the 2016 US presidential election. The scandal ultimately resulted in a massive 5 billion USD fine by the FTC in the USA as well as a significant negative impact on Facebook's market capitalization.

In addition to ethical and overall societal considerations, as the power of data science solutions becomes clearer to all parties, producers of such software have a strong motivation to avoid deploying AI and ML solutions that could be later characterized by the public and regulatory authorities as misuse of powerful technologies. To avoid this, software producers need to define the operational parameters of their solutions carefully and to take explicit measures to ensure that they behave correctly. 

To support these goals, the EU has published seven key requirements for Trustworthy AI. Several of these (highlighted below) can be directly supported by testing, good enterprise software engineering practices, and a development framework that supports transparency and data security applied to data science solution development: 

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Societal and environmental well-being
  7. Accountability

Ref: Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment, 17 July 2020.

https://ec.europa.eu/digital-single-market/en/news/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment.

The application of standard enterprise software engineering practices supporting Continuous Integration, Continuous Delivery, and DevOps, combined with an implementation framework that supports transparency and data security and with AI and IT building blocks such as Qorus Integration Engine®, can not only provide a technical framework for developing scalable data science solutions but can also directly support five of the trust requirements, as follows: 

  1. Human agency and oversight

By delivering a testing framework and supporting CI/CD, an AI platform enables users to ensure that AI / data science solutions perform within predefined parameters (more on this below). 

  2. Technical robustness and safety 

Automated testing infrastructure, CI/CD tools, and support for the automatic creation of self-installing release packages power robust enterprise Quality Assurance. 

An integration platform should provide native support for the underlying technologies and programming languages used for enterprise AI, ML, and data science (including low-level integration with both Python and Java), as well as testing, release packaging, and automatic release installation frameworks. It should also extend modern enterprise software engineering practices, allowing them to be applied to AI, ML, and data science to ensure technical robustness and safety from conception through to production. 

Automated testing must also be applied to any automatic refinements to AI models (made through AutoML for example); by having appropriate tests in place to maintain the behavior of the model within an acceptable range, AI developers can ensure that if ML tries to take the model in a harmful direction, the changes will not be applied to production, as they will not pass the quality gate in the CI/CD pipeline. 
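The quality gate described above can be sketched as a simple metric check in the CI/CD pipeline. This is a minimal illustration, not a Qorus API; the metric names and thresholds are assumptions chosen for the example.

```python
# Hypothetical CI/CD quality gate for automatically refined (AutoML) models:
# a candidate model is promoted only if every gated metric stays within
# its predefined acceptable range.

CANDIDATE_METRICS = {"accuracy": 0.93, "false_positive_rate": 0.04}

QUALITY_GATE = {
    "accuracy": lambda v: v >= 0.90,             # must not regress below baseline
    "false_positive_rate": lambda v: v <= 0.05,  # must stay within safe bounds
}

def passes_quality_gate(metrics: dict) -> bool:
    """Return True only if every gated metric is within its acceptable range."""
    return all(check(metrics[name]) for name, check in QUALITY_GATE.items())

if passes_quality_gate(CANDIDATE_METRICS):
    print("promote candidate model to production")
else:
    print("reject candidate model: quality gate failed")
```

A check like this would run as a pipeline stage after each automatic model refinement, so a harmful change simply fails the build instead of reaching production.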

  3. Privacy and data governance

Modern integration platforms and AI implementation frameworks should support the segregation of sensitive data and provide special access controls and/or encryption to support privacy regulations and data governance. Qorus, for instance, supports both sensitive data segregation and special access controls and encryption in orchestration solutions. When implementing process automation driven by AI- or data-science-derived insights, sensitive data used for business process orchestration is subject to stringent data protection and access controls and is also managed by automatic mechanisms and APIs that specifically support sensitive-data-processing requirements such as reporting and the "right to be forgotten". 
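The segregation pattern described above can be illustrated with a minimal sketch, assuming an in-memory store standing in for an encrypted, access-controlled one; the function names and fields are illustrative, not part of any real platform API.

```python
# Illustrative sketch of sensitive-data segregation: personal data is
# stored separately under an opaque key, so orchestration logic only
# handles the key, and erasure ("right to be forgotten") means deleting
# a single keyed record.

import uuid

sensitive_store = {}  # would be an encrypted, access-controlled store in practice

def split_record(record: dict, sensitive_fields: set) -> dict:
    """Replace sensitive fields with an opaque reference key."""
    key = str(uuid.uuid4())
    sensitive_store[key] = {f: record[f] for f in sensitive_fields if f in record}
    safe = {f: v for f, v in record.items() if f not in sensitive_fields}
    safe["sensitive_ref"] = key
    return safe

def forget(key: str) -> None:
    """Erase the sensitive payload; process data keeps only a dangling key."""
    sensitive_store.pop(key, None)

order = split_record(
    {"order_id": 42, "amount": 99.0, "email": "a@example.com"},
    sensitive_fields={"email"},
)
```

The design choice here is that the orchestration layer never sees the sensitive payload directly, which makes both access control and erasure single-point operations.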

  4. Transparency

The AI implementation framework you use should enable a considerable degree of transparency that supports understanding the basis of a particular AI decision. Qorus provides a very high degree of operational transparency and control - and is an ideal platform for DevOps as well as AIOps, empowering operational teams and providing governance, control, and fault-tolerance for operations. 
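One common way to support this kind of decision transparency is to record the inputs, model version, and output of each AI decision. The sketch below is a generic illustration under that assumption; the field names are hypothetical and not tied to any particular platform.

```python
# Minimal decision-transparency sketch: every AI decision is logged with
# enough context (inputs, model version, timestamp) to explain its basis later.

import datetime

audit_log = []

def log_decision(model_version: str, features: dict, decision: str) -> dict:
    """Append an auditable record of one AI decision and return it."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "decision": decision,
    }
    audit_log.append(entry)
    return entry

entry = log_decision(
    "credit-model-1.4.2", {"income": 52000, "tenure_years": 3}, "approve"
)
```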

  7. Accountability

AI solutions should be created and operated to provide an unambiguous rationale for the decisions made. Component versioning and CI/CD can help by providing a versioned component architecture for modular releases, where individual components can be upgraded or decommissioned at any time, releases can be tracked and introspected through an API, and author information (as well as other metadata) is maintained for each component. Audit trails should also support accountability. 
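The versioned-component idea above can be sketched as a simple registry that records name, version, and author for each component in a release. This is a hypothetical illustration of the accountability pattern, not an actual platform data model.

```python
# Hypothetical versioned-component registry: each release records its
# components with version and author metadata, so any decision can be
# traced back to the exact component and owner involved.

from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    version: str
    author: str

release = [
    Component("churn-model", "2.1.0", "data-science-team"),
    Component("crm-connector", "1.7.3", "integration-team"),
]

def who_owns(name: str) -> str:
    """Look up the author accountable for a component in this release."""
    return next(c.author for c in release if c.name == name)
```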

As an example, Qorus components can combine AI / ML / data science functionality with enterprise IT integration and automation functionality, so that accountability can be supported directly by the enterprise AI platform. The architecture also enables the reuse of components, supporting the development of libraries of "building blocks" while ensuring accountability and traceability at the same time. 

Conclusion

Applying DevOps and CI/CD to Enterprise AI Projects Enables Trustworthy AI Solutions at Scale

Using modern software engineering practices for AI / ML / data science development, on a platform with a component architecture designed to enable AI / ML / data science in enterprises, allows an automated DevOps pipeline to be created that ensures solutions perform within predefined parameters. 

CI/CD applied to AutoML, for example, can ensure that undesired changes to the model are caught in a quality gate and are not propagated automatically to production. 

Qorus Integration Engine

Qorus Integration Engine®: designed for developing and operating AI / ML / data science solutions at enterprise scale

Qorus Integration Engine® provides an ideal platform for enabling scalable AI / ML / data science solutions in real enterprises. It enables the application of modern software engineering practices to AI / ML / data science solutions to ensure that ethical guidelines and legal requirements are respected, and it provides a documented audit trail covering aspects critical to the success of all serious enterprise AI projects.

To see how the ACT Framework can boost your team's testing success at all levels, download the whitepaper.

About Author

David Nichols

David Nichols is the CEO of Qore Technologies s.r.o., with more than twenty years of proven management and technical experience in the high-technology and telecommunications industries, applying technology solutions to meet business goals: improving time-to-market, reducing overhead, and increasing competitiveness.
