AI Governance in 2019 – A Year in Review

At the frontiers of AI applications, industry leaders and investors are paying close attention to the influence of AI governance on the future of innovation.

In the competitive rush to design, develop, and deploy AI-driven applications, many organizations fail to give the challenge of AI governance due consideration. Increasingly, however, organizations are realizing that failing to adequately address AI governance can have far-reaching ramifications, much in the same way that failing to deal with data security issues or personal privacy mandates can cause serious financial harm, damage a company's reputation, and cross ethical boundaries.

Josh Elliot, head of operations at Modzy, writing for WashingtonExec, noted, “For some, ‘governance’ is viewed as bureaucracy or an obstacle. To the employee jazzed about tinkering with new technologies, it may be discouraging. However, seasoned AI practitioners know that proactive governance not only protects them from decisions made—but when done right—it spurs innovation, optimizes resources and helps realize project or organization benefits.”

The Concept of AI Governance

In a nutshell, the term AI governance refers to the concept that AI systems and machine learning applications need an underlying legal basis and policy framework, and that their practices and possible outcomes must be researched thoroughly and implemented fairly. Individuals should be informed if they are tracked by an AI system or if their personal data is collected or used for analysis (other than anonymized data that can't be connected to an individual). As AI systems continue to sweep across industry sectors—spanning education, public safety, healthcare, transportation, economics, and business—AI governance can establish the standards that unify accountability and ethics as the technology evolves.

Engineering AI Governance Processes into an AI System

As is the case with many areas of system design, including data and content security and personal privacy protections, the most effective designs for handling AI governance are those that incorporate the processes and protections early on, integrating them into the system at the beginning stages of development. After-the-fact AI governance add-ons rarely succeed in meeting necessary requirements.

In developing the AI framework for Nuxeo Insight, engineers integrated features and capabilities that strongly support AI governance.

For example, Insight leverages the existing management architecture of the Nuxeo Platform. Insight retains a copy of every training set it uses. The models behind the individual content bots are versioned as they change, so that it is always possible to roll back to a prior version of an AI model.
This capability can be useful in instances where it is necessary to distinguish between human-generated and machine-generated values. Insight tracks the sources of information applied to assets and content and can identify values that are machine generated. If bias or data corruption is later detected in a machine-learning model, rolling back to a validated version can eliminate the problem. Results and outcomes can be viewed in a visual dashboard that simplifies the discovery of degradation or corruption issues and streamlines the return to a trusted version.
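To picture how such versioning and source tracking fit together, the sketch below shows one possible shape for them in Python. It is a minimal illustration under stated assumptions, not Nuxeo's implementation: the ModelVersion, ValueProvenance, and ModelRegistry names are hypothetical, invented here for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ModelVersion:
    """An immutable snapshot of a model plus the training set it was built from."""
    version: int
    model_ref: str           # pointer to the stored model artifact
    training_set_ref: str    # pointer to the retained copy of the training set
    created_at: datetime
    validated: bool = False  # set once the version has been reviewed and trusted

@dataclass(frozen=True)
class ValueProvenance:
    """Records whether a metadata value came from a person or a model."""
    field_name: str
    value: str
    source: str                       # "human" or "machine"
    model_version: int | None = None  # set only for machine-generated values

class ModelRegistry:
    """Retains every version of a model so that a prior, validated
    version can be restored if bias or corruption is found later."""

    def __init__(self) -> None:
        self._versions: list[ModelVersion] = []

    def publish(self, model_ref: str, training_set_ref: str,
                validated: bool = False) -> ModelVersion:
        v = ModelVersion(
            version=len(self._versions) + 1,
            model_ref=model_ref,
            training_set_ref=training_set_ref,
            created_at=datetime.now(timezone.utc),
            validated=validated,
        )
        self._versions.append(v)
        return v

    def roll_back_to_last_validated(self) -> ModelVersion:
        """Drop every version newer than the most recent validated snapshot."""
        for i in range(len(self._versions) - 1, -1, -1):
            if self._versions[i].validated:
                self._versions = self._versions[: i + 1]
                return self._versions[-1]
        raise LookupError("no validated version to roll back to")
```

The design point worth noting is that the training-set reference travels with each model version; restoring a model without the data it was trained on would leave its behavior impossible to reproduce or explain.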

Anticipating Future Audit Requirements

The design of Nuxeo Insight, our AI in Content Management service, was also crafted in anticipation of a future requirement by government regulators or standards bodies to trace and track any particular value generated by a bot. This concern arose in past cases where artificial intelligence was used for processes such as granting loan approvals: biases were discovered in the AI models, and the AI tools had to be discontinued.

Insight makes it possible to determine directly, if requested by an auditor or regulator, which values were created by a machine-learning model or by artificial intelligence. Insight can also determine how a model was trained and then recreate the circumstances under which the model was created and trained. Answers to questions that may carry greater weight in the future are built into the current version of Insight: What information was used to train the model? How was the model defined? What happens when a model becomes corrupted? How do we administer these operations on a day-to-day basis? By addressing these kinds of questions up front, Insight is well equipped to contend with any regulatory review, with all the corresponding information used to define and train the model readily available.
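To make the record-keeping behind those answers concrete, here is a minimal sketch of how such an audit trail could be structured. This is an illustration only, not Nuxeo's actual API: the TrainingLineage and GeneratedValue types and the audit_report function are hypothetical names invented for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingLineage:
    """What an auditor would need to recreate how a model version
    was defined and trained."""
    model_version: int
    training_set_ref: str   # pointer to the retained copy of the training data
    model_definition: dict  # architecture and hyperparameter description
    trained_at: str         # ISO-8601 timestamp of the training run

@dataclass(frozen=True)
class GeneratedValue:
    """A single machine-generated value and the model version that produced it."""
    asset_id: str
    field_name: str
    value: str
    model_version: int

def audit_report(values: list[GeneratedValue],
                 lineage: dict[int, TrainingLineage],
                 field_name: str) -> list[tuple[GeneratedValue, TrainingLineage]]:
    """Answer the auditor's core question for one field: which values
    were machine generated, and how was each producing model trained?"""
    return [(v, lineage[v.model_version])
            for v in values
            if v.field_name == field_name and v.model_version in lineage]
```

Because each generated value carries the version of the model that produced it, the question "where did this value come from?" reduces to a lookup rather than a forensic investigation.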

Download this report to venture into the world of AI for enriching data and adding intelligence and automation to a Content Services Platform.
