January 10, 2023

Time for XAI: How Businesses Can Have More Intelligence That’s Less Artificial

Traditional discussions about outcomes – in both business and life – often involve the classic question: “Does the end justify the means?”

AI’s rising adoption has turned that philosophical question on its head, at least for businesses using or considering AI for their data governance. The focus is now very much on the “means” of arriving at the “end” decision or desired result.

The drivers and use cases go well beyond personalized suggestions when watching Netflix or listening to Spotify. They extend to how AI arrives at decisions with major, long-lasting impacts on society: authorizing loan agreements, accepting or refusing credit applications, assessing individual behaviors using PII, or predicting company revenues based on internal performance data.

With all that data flowing through all those pipelines and workflows, legislators and auditors need to be able to see evidence of appropriate data governance. So does the general public: transparency and visibility are the tools to remedy the “fear of the unknown”.

The business case is strong. Companies using AI responsibly have been found to generate 50% more revenue growth. However, these organizations remain a minority. 

Many organizations don’t have the necessary alignment to ensure holistic application of AI. Engineers are focused on creating the infrastructure for AI. Data scientists are focused on what they can extract using AI. Governance teams are tasked with making sure AI is meeting compliance standards.

“Most companies currently lack effective mechanisms for developing, deploying, and operating AI that is ethical and trustworthy.”

Deloitte

These silos can be bridged by building in responsible AI methods from the start. This also goes toward addressing one of the biggest challenges with AI success: the “black box”.

How to solve the ‘black box’ AI question

Now that more organizations are using AI to crunch their data, it’s business-critical to avoid processes that obscure data-driven visibility. Having only the outputs means businesses lose the continuous learning and insights they could otherwise replicate and expand on.

Removing the opaqueness isn’t simply a business benefit. It’s a regulatory requirement in a growing number of regions. For example, the US Algorithmic Accountability Act of 2022, reintroduced in Congress in early 2022, would require companies “to conduct impact assessments for bias, effectiveness and other factors, when using automated decision systems to make critical decisions.”

When it comes to explaining algorithmic decisions, medium and message should also be adapted to each audience, use case, and industry. That might be a ride-sharing app showing users a map of available cars and estimated arrival times, a DPO explaining data privacy rights to a customer, or a Head of Governance providing documentation to an auditor.

Examples like these are why delivering AI explainability is a prerequisite, both from a governance perspective and as a competitive advantage.

Democratize Your Data

See how Velotix uses AI to grant data access, remove risk, and automate policy management at scale.

What is explainable AI (XAI)?

Adding explainability to AI means producing machine learning outputs that people can understand and explain, where processes and methods can be described, identified, and clarified.

XAI is the opposite of black box AI: human users gain transparency, visibility, and reassurance that outcomes are fair, accurate, and free of baked-in bias.

This includes being able to show visualizations and summaries explaining how the AI is operating. Naturally, this helps to build trust, improving the experience of both the data subject and the user responsible for safeguarding their sensitive and confidential data. The resulting increased confidence helps support a virtuous cycle, where AI is deployed more often and leads to better outcomes.

Alongside the ability to express how the outcome was reached, explainability also involves understanding and explaining the data involved in the processes. That means making model behaviors more easily understood, ensuring end-to-end governance throughout the data lifecycle, and providing a clear understanding of the logic behind recommendations.
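For instance, here is a minimal sketch (assuming a scikit-learn style tabular model; the feature names and data are hypothetical) of the kind of feature-importance summary that can back up such an explanation:

```python
# Minimal sketch: a global feature-importance summary for a hypothetical
# credit-decision model, using a scikit-learn style workflow.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical feature names and synthetic data standing in for real records.
feature_names = ["income", "credit_history_len", "existing_debt", "age"]
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature degrade the model's accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>20}: {score:.3f}")
```

A summary like this is the raw material for the plain-language explanations each audience ultimately receives.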

“By 2026, Gartner anticipates organizations that develop trustworthy purpose-driven AI will see over 75% of AI innovations succeed, compared to 40% among those that don’t.”

Gartner

These trends are why traditional forms of governance – the more linear or laborious methods such as ABAC or RBAC – are no longer suitable. Approaches based on roles or attributes can soon become too complex to manage at scale, particularly when you consider the highly complex nature of the neural networks and deep learning employed by many organizations today and in the future.
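A rough back-of-the-envelope illustration (the roles, attributes, and dataset count below are assumptions, not real figures) shows how quickly explicit role-and-attribute rules multiply:

```python
# Rough illustration of why static role/attribute rules become hard to
# manage at scale. All values below are hypothetical.
roles = ["analyst", "engineer", "dpo", "auditor", "contractor"]
regions = ["eu", "us", "apac"]
sensitivity_levels = ["public", "internal", "pii", "phi"]
dataset_count = 200  # order-of-magnitude assumption for a mid-size organization

# One explicit allow/deny rule per combination quickly becomes unmanageable.
rule_count = len(roles) * len(regions) * len(sensitivity_levels) * dataset_count
print(rule_count)  # 12,000 rules to author, review, and keep current
```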

The associated explanations for outputs can’t be linear either. However, they still need to be understood by those who are either subjects of the data or responsible for governing it. After all, humans – not algorithms – are accountable for business processes, automated and manual.

What’s more, demands are magnified for organizations operating in highly regulated, or literal life-and-death, industries such as healthcare, utilities, and financial services.

“In 2030, consumers will hold the rights to their health data, which will increasingly be generated by the consumer and set the terms on provisioning and revoking access to their health data”

Forrester

Of course, there may be thousands of nodes, inputs, and variables involved. Here’s where the Contrastive Explanations Method can be employed.

How to explain AI using the Contrastive Explanations Method (CEM)

Developed by IBM Research, CEM explains decisions by contrast: identifying the features that should be minimally present to justify an outcome (pertinent positives) and the features that should be absent to keep it from being a different outcome (pertinent negatives). It meets the human need of wanting to know why a certain outcome happened and why another outcome didn’t.

CEM helps highlight the difference between the two, rather than trying to give a complete – and highly complex – causal explanation of every contributing factor. Comprehensive and exhaustive explanations might be necessary if you’re legally required to divulge related information. However, humans prefer shorter explanations when wanting to know:

  • “Why did I get refused when I applied to increase my checking account’s overdraft?”
  • “Why are my insurance premiums higher than my partner’s?”
  • “What activities should I do or avoid to give myself the best chance of recovering from this disease?”

Organizations and those responsible for data supervision and governance can therefore answer, “Why this outcome, and not that outcome?” This method of explainability means:

  • Advisers can cite the positive or negative factors that have influenced the outcomes 
  • Data owners can ensure validity, learn from models, and demonstrate appropriate data governance
  • End users can understand the reasons for the outcome, while also discovering what they should do if they want to change the outcome
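To make the contrastive idea concrete, here is a rough sketch in which a toy decision rule stands in for a trained model; the feature names, thresholds, and brute-force search are illustrative assumptions, not IBM’s CEM implementation:

```python
# Conceptual sketch of a contrastive ("why this and not that?") explanation
# for a hypothetical loan decision. A toy stand-in, not IBM's CEM code.
from copy import deepcopy

def approve(applicant: dict) -> bool:
    """Toy decision rule standing in for a trained model."""
    return applicant["income"] >= 40_000 and applicant["existing_debt"] <= 20_000

def minimal_change_to_flip(applicant: dict, feature: str, step: int, limit: int = 200):
    """Find the smallest change to one feature that flips the decision."""
    original = approve(applicant)
    candidate = deepcopy(applicant)
    for _ in range(limit):
        candidate[feature] += step
        if approve(candidate) != original:
            return feature, candidate[feature] - applicant[feature]
    return None  # no flip found within the search limit

applicant = {"income": 35_000, "existing_debt": 15_000}
print(approve(applicant))                                  # False: application refused
print(minimal_change_to_flip(applicant, "income", 1_000))  # ('income', 5000)
```

In this toy example, the contrastive answer stays short: the application was refused because income fell below a threshold, and an increase of roughly 5,000 would have flipped the outcome.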

“Effective governance of AI manages for impact by considering shifting regulatory requirements and emerging organizational approaches. Responsible technology practices require effective and agile governance — both within an organization and across the regulatory and public policy landscape.”

PwC

How interpretability leads to better policies and protection of data

A good explanation doesn’t just set out the facts. It also requires an element of cognitive psychology: offering an explanation that fits with a person’s beliefs and how they will interpret the outcome.

Most humans don’t want to be given all the “reasons why” something has happened. A couple of selective “headline” reasons are sufficient, as long as they match the audience’s concerns, environment, and worldview. Your economist friend might want to know all the reasons why a country’s GDP is falling; your unemployed friend simply wants to know how they can get an interview.

While an AI model can generate the insights, only a human can make the judgment call on how to conduct the explanations.

 

Human-centered approaches to AI explainability

The human-in-the-loop is cited in McKinsey’s State of AI report, which finds that AI high performers include a human-in-the-loop verification phase in model deployment that expressly tests and controls for model bias. Humans provide feedback and “real-world” insights, making manual interventions that compensate for the machine’s context-free “artificial-world” results.

Businesses that adopt this approach of bringing humans and machines together gain the best of both worlds. Routine data access decisions can be reduced to minutes with automated approvals or denials.

For the more complex edge cases, humans can step in. Freed from routine manual tasks, they also have more time for the strategic elements of AI applications: calibrating decision-making, monitoring processes, and explaining outcomes to stakeholders.

This continuous interaction means policy management can support the speed, complexity, and intelligence of modern business and regulatory demands, maintaining control and transparency while increasing confidence in the modeling and outcomes.
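As a minimal sketch of what that routing might look like (the model, threshold, and names below are hypothetical, not a Velotix API):

```python
# Sketch of confidence-threshold routing for data access requests:
# high-confidence decisions are automated, edge cases go to a human.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    dataset: str

def model_score(request: AccessRequest) -> tuple[str, float]:
    """Stand-in for a trained policy model returning (decision, confidence)."""
    return ("approve", 0.97) if request.dataset == "marketing_kpis" else ("deny", 0.62)

def route(request: AccessRequest, threshold: float = 0.90) -> str:
    decision, confidence = model_score(request)
    if confidence >= threshold:
        return f"auto-{decision}"           # handled in minutes, fully logged
    return "escalate-to-human-reviewer"     # complex edge case: human-in-the-loop

print(route(AccessRequest("analyst_01", "marketing_kpis")))   # auto-approve
print(route(AccessRequest("analyst_01", "patient_records")))  # escalate-to-human-reviewer
```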

Velotix: Your data access platform for success

The path to this explainable reality relies on finding the right platform. 

After all, data governance and management should be as adaptable and personalized as the subjects and regulators of that data. That’s what you get with Velotix.

The symbolic AI, only available with Velotix, learns how and when to apply your data access policies. Over time, these actions become more accurate as the AI improves – even when you build complex rules. Aggregating data catalog metadata means you find information you didn’t even know you could search for and surface.

You automatically maintain policies across all your datasets, with options for self-service or manual workflows, even when your organization changes or new regulations mean updating policies and processes. Configurations can be defined in real time, at different levels of granularity and discoverability.

Data access is made available to the right people at the right time. Enforced masking and anonymization gives your users peace of mind, greater understanding, and more freedom to focus on their work. Data lineage ensures sharing stays compliant, with interpretable audit logs and reports for usage and exceptions.

It’s time to move away from manual and artificial data management to a dynamic, intelligent data governance platform. Contact us to explore how your business can succeed with Velotix’s AI-based policy automation engine.