Explainable AI

Note: This feature is currently in beta.

Why?

Explainable AI for optimization is a field of research that investigates how a solver makes decisions. This is important for a number of reasons:

  • Transparency: It's important to understand how a solver makes its decisions and how a solution is constructed.
  • Debugging: Understanding how a solver makes decisions can help integrators and developers debug issues with the parametrization.
  • Trust: Understanding how a solver makes decisions can help users trust the solver more.

The most common way to explain a decision is to provide a list of constraints that are violated in the solution.
Additionally, we provide a list of the actual planning decisions that caused the violation.
This is a good start, but it's not enough.

Over the years, customers have frequently asked us questions like:

  • Why did the solver decide to assign this shift to this employee and not this other one that I had in mind?
  • Why did the solver come up with this route? It seems to be much longer than the one I had in mind.
  • This shift is not assigned to anyone, but I don't see any constraints that are violated. Why is that?

These questions prompted us to build the first version of the explainable AI feature.

How?

Today, we are introducing a new feature that provides a more detailed explanation of the decisions made by the solver. It is available in beta for the FILL API and the VRP API.

To make sure we can answer the questions above, we extend the search with a Hyper-local Discovery phase that runs after the best solution has been found within the defined stopping criterion.
The goal of this phase is to find all possible alternative assignments for every decision made by the solver. This is quite an expensive operation, so we only run it when the user requests an explanation.
The result is a score evaluation for, in most cases, n^n alternatives, where n is the number of possible assignments in a solution.
This score can then be used to explain not only the constraint violations in a solution, but also to give feedback on possible quick changes to that solution.
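
Conceptually, the phase behaves like the Python sketch below. This is an illustration of the idea, not the solver's internals: it only evaluates single-decision changes (on the order of n * n evaluations), whereas the actual phase explores combinations of changes, which is where the up-to-n^n alternatives come from. The score function and data shapes are assumptions for illustration.

    # Illustrative sketch of the Hyper-local Discovery idea, not solver internals.
    # `solution` maps a decision (e.g. a shift) to its current assignment,
    # `candidates` maps a decision to its feasible alternatives, and
    # `score` stands in for the solver's constraint-based evaluation.

    def discover_alternatives(solution, candidates, score):
        base = score(solution)
        explanations = {}
        for decision, current in solution.items():
            alternatives = []
            for candidate in candidates[decision]:
                if candidate == current:
                    continue
                trial = dict(solution)        # re-evaluate with one decision changed
                trial[decision] = candidate
                alternatives.append((candidate, score(trial) - base))
            # Rank the best score change first (assuming higher scores are better).
            explanations[decision] = sorted(alternatives, key=lambda a: a[1], reverse=True)
        return explanations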

Example: shift scheduling (FILL API)

Let's take a look at an example of how this feature works in the context of shift scheduling.
Given an optimal roster, the solver can provide an explanation of why any shift is assigned to any employee.
This is done by providing a list of alternative assignments for each shift and the score of the solution if that assignment was made.
[Animation: fill-explainable.gif]
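
As a rough illustration, such an explanation could be consumed as in the snippet below. The field names here are hypothetical, not the actual FILL API schema; the real response format is documented in the FILL API reference.

    # Hypothetical explanation structure, for illustration only;
    # the actual schema is documented in the FILL API reference.
    explanation = {
        "shift-42": {
            "assigned": "alice",
            "alternatives": [
                {"employee": "bob", "score": -12},   # roster score if bob took the shift
                {"employee": "carol", "score": -35},
            ],
        },
    }

    for shift, detail in explanation.items():
        print(f"{shift} -> {detail['assigned']}")
        for alt in detail["alternatives"]:
            print(f"  alternative: {alt['employee']} (score {alt['score']})")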

For more information on how to use this feature, please refer to the FILL API reference, as well as the explanation of this feature in the FILL API guide.

Activating it in the FILL solver is as easy as setting options.explanation.enabled to true in the request payload.
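
For example, using Python and the requests library (the endpoint URL, the authentication header, and every payload field other than options.explanation.enabled are placeholders; consult the FILL API reference for the real values):

    import requests

    # Endpoint, auth header, and the shifts/employees fields are placeholders.
    payload = {
        "shifts": [],     # your shifts go here
        "employees": [],  # your employees go here
        "options": {
            "explanation": {
                "enabled": True,  # the documented flag: options.explanation.enabled
            },
        },
    }

    response = requests.post(
        "https://api.example.com/fill/solve",  # placeholder URL
        json=payload,
        headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    )
    print(response.json())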

Example: vehicle routing (VRP API)

Similarly to the FILL API, the VRP API can provide an explanation of why any job is assigned to a given vehicle, or placed after another job in a route.
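
As with the FILL example above, the structure below is hypothetical (the field names are assumptions; the VRP API reference documents the actual schema). The idea is that each job carries a ranked list of alternative placements, each with the score the routes would have after that move.

    # Hypothetical VRP explanation, for illustration only.
    explanation = {
        "job-17": {
            "assigned": {"vehicle": "van-2", "after": "job-16"},
            "alternatives": [
                {"vehicle": "van-2", "after": "job-11", "score": -8},
                {"vehicle": "van-1", "after": "job-03", "score": -40},
            ],
        },
    }

    for job, detail in explanation.items():
        placement = detail["assigned"]
        print(f"{job} -> {placement['vehicle']} after {placement['after']}")
        for alt in detail["alternatives"]:
            print(f"  alternative: {alt['vehicle']} after {alt['after']} (score {alt['score']})")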

For more information on how to use this feature, please refer to the VRP API reference.