Develop key explainable AI examples for industry

AI systems have enormous potential, but the average user has little visibility into how machines make their decisions. Explainable AI can build confidence and further drive the capabilities and adoption of the technology.

When humans make a decision, they can usually explain how they came to their choice. But many AI algorithms provide an answer without any indication of how it was reached. That is a problem.

The explainability of AI is a big topic in the tech world right now, and experts have been working to create ways for machines to explain what they do. They have also identified key explainable AI examples and methods that help create transparent AI models.

What is the explainability of AI?

Determining how a deep learning or machine learning model works isn’t as easy as lifting the hood and examining the programming. For most AI algorithms and models, especially those using deep learning neural networks, it is not immediately obvious how the model made its decision.

AI models can occupy positions of great responsibility, such as when they are used in autonomous vehicles or to assist in the recruitment process. As a result, users demand clear explanations and information about how these models make decisions.

Explainable AI, also called XAI, is an emerging field of machine learning that aims to determine how AI systems make their decisions. This area inspects and tries to understand the steps involved in an AI model's decision. Many members of the research community, and in particular the U.S. Defense Advanced Research Projects Agency (DARPA), have worked to improve understanding of these models. DARPA continues its efforts to produce explainable AI through numerous funded research initiatives and companies helping to bring explainability to AI.

Why explainability matters

For some explainable AI examples and use cases, it is not urgent to describe the decision-making process the AI system has gone through. But when it comes to autonomous vehicles and decisions that could save or threaten a person's life, the need to understand the AI's rationale is heightened.

In addition to knowing that the logic used is sound, it is also important to know that the AI performs its tasks safely and in compliance with laws and regulations. This is especially important in heavily regulated industries such as insurance, banking and healthcare. If an incident does occur, the humans involved must understand why and how it happened.

Behind the desire for a better understanding of AI lies the need for people to trust these systems. For artificial intelligence and machine learning to be useful, there must be trust. To gain that trust, there has to be a way to understand how these intelligent machines make decisions. The challenge is that some of the technologies adopted for AI are not transparent and therefore make it difficult to have complete confidence in their decisions, especially when humans are operating in only a limited capacity or are removed from the loop entirely.

We also want to make sure that the AI makes fair and impartial decisions. There have been many examples where AI systems have made the news for biased decision-making. In one example, an AI created to determine the likelihood of recidivism was biased against people of color. Identifying this type of bias in the data and the AI model is key to creating models that work as expected.
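As a minimal, hypothetical sketch of such a bias check (the data, column names and group labels below are invented for illustration and are not drawn from any real system), one common approach is to compare false positive rates across demographic groups:

```python
import pandas as pd

# Hypothetical predictions with a sensitive attribute; all values are invented.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "actual":    [0,   0,   1,   1,   0,   0,   1,   0],   # 1 = person re-offended
    "predicted": [1,   0,   1,   0,   0,   0,   1,   1],   # 1 = model flagged as high risk
})

# False positive rate per group: flagged as high risk despite not re-offending.
for group, rows in df.groupby("group"):
    negatives = rows[rows["actual"] == 0]
    fpr = (negatives["predicted"] == 1).mean()
    print(f"group {group}: false positive rate = {fpr:.2f}")

# A large gap between groups is one warning sign that the model treats them differently.
```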


The difficulty of building an explainable AI

Today, many AI algorithms lack explainability and transparency. Some algorithms, such as decision trees, can be examined by humans and understood. However, the most sophisticated and powerful neural network algorithms, such as deep learning, are much more opaque and more difficult to interpret.

These popular and successful algorithms have resulted in powerful capabilities for AI and machine learning; however, the result is systems that are not easy to understand. Relying on black box technology can be dangerous.

But explainability is not as easy as it seems. The more complicated a system becomes, the more connections that system establishes between different data elements. For example, when a system performs facial recognition, it matches an image to a person. But the system cannot explain how the bits in the image are mapped to that person, because the set of connections is so complex.

Data transparency is important for explainable AI, and vice versa

How to create an explainable AI

There are two main ways to deliver explainable AI. The first is to use machine learning approaches that are inherently explainable, such as decision trees, Bayesian classifiers and other transparent approaches. These offer some traceability and transparency in their decision making, which can provide the visibility needed by critical AI systems without sacrificing performance or accuracy.
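As a brief sketch of this first route (assuming Python with scikit-learn and its bundled Iris dataset, neither of which the article specifies), a decision tree can be trained and its full decision logic printed as human-readable rules:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a small, inherently interpretable model.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# Every decision the model can make is visible as a set of if/else rules.
print(export_text(tree, feature_names=list(iris.feature_names)))
```

Because the printed rules are the model, a reviewer can trace any individual prediction back through the exact thresholds that produced it.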

The second is to develop new approaches for explaining more complex and sophisticated neural networks. Researchers and institutions such as DARPA are currently working to create methods to explain these more complex machine learning techniques. However, progress in this area has been slow.
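One example of such a post-hoc, model-agnostic technique is permutation importance, sketched below with scikit-learn (the dataset, network size and other details are assumptions chosen only for illustration, not a method named in the article):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A small neural network standing in for the opaque "black box" model.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0))
model.fit(X_train, y_train)

# Shuffle each input feature in turn and measure how much the accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: importance {result.importances_mean[i]:.3f}")
```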

When considering explainable AI, organizations need to consider who needs to understand their deep learning or machine learning models. Do feature engineers, data scientists, and programmers need to understand models, or do business users need to understand them too?

By deciding who should understand an AI model, organizations can decide the language used to explain a model's decisions. For example, should the explanation be expressed in a programming language or in plain English? If it is expressed in a programming language, a business may need a process to translate the results of its explainability methods into explanations that a business user or other non-technical user can understand.
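As a hedged sketch of that translation step (the feature names, scores and wording are invented purely for illustration), a thin layer of code can turn numeric explainability output into sentences a non-technical user can read:

```python
# Hypothetical output of an explainability method: feature name -> importance score.
importances = {
    "income": 0.42,
    "credit_history_length": 0.31,
    "open_loan_count": 0.08,
}

def to_plain_english(importances, top_n=2):
    """Turn numeric importance scores into sentences for business users."""
    ranked = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)
    sentences = []
    for feature, score in ranked[:top_n]:
        readable = feature.replace("_", " ")
        sentences.append(f"The decision was driven mainly by the applicant's {readable} "
                         f"(relative importance {score:.0%}).")
    return " ".join(sentences)

print(to_plain_english(importances))
```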

The more AI is part of our daily lives, the more we need these black box algorithms to be transparent. Having trustworthy, reliable and explainable AI without sacrificing performance or sophistication is a must. There are several good examples of tools that help explain AI, including vendor offerings and open source options.

Some organizations, such as the Advanced Technology Academic Research Center, are working on transparency assessments. This self-assessed, multifactor score takes into account the explainability of the algorithm, identification of the data sources used for training, and the methods used for data collection.
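A minimal sketch of what such a self-assessment might look like in code (the factor names, rating scale and equal weighting below are assumptions, not the center's actual rubric):

```python
# Hypothetical transparency self-assessment: each factor rated 1 (poor) to 5 (strong).
factors = {
    "algorithm_explainability": 4,
    "training_data_sources_identified": 3,
    "data_collection_methods_documented": 2,
}

# Equal weighting is assumed here; a real rubric could weight factors differently.
score = sum(factors.values()) / (5 * len(factors))
print(f"Self-assessed transparency score: {score:.0%}")
for name, rating in factors.items():
    print(f"  {name}: {rating}/5")
```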

By taking these factors into account, people can self-assess their models. While not perfect, it’s a necessary starting point for others to get a glimpse of what’s going on behind the scenes. Above all, it’s essential to make AI trustworthy and explainable.
