Explainability
Why is this the best solution?
This section aims to give a high-level understanding of our solution's explainability and how it helps your business.
Our platform consists of multiple solvers. For any optimisation problem in any sector, a solver will return a solution with an associated score, which indicates how good that solution is. This score is at the core of the explainability question. The reason a solution has been picked out as 'the best' is that, in all the available calculation time, no solution was found with a better score. A practical example of a score is a total cost calculation, and a concrete example of a solution is a schedule for a fleet of trucks.
Optimisation: reducing costs under constraints
We’ll apply the underlying concepts to the total cost calculation of vehicle routing; analogous ways of reasoning exist for shift planning, picking paths and stock optimisation (all our products, really).
Back to vehicle routing: it's natural that a few questions arise, such as: how can the solver know the total cost if it doesn't know a business's cost per mile, and what about laws regarding total driving hours per day? We have that functionality covered. Since we calculate the total miles driven internally, all it takes is one extra input that tells us how important a certain factor such as total mileage is - in general, we call this a 'weight'. As for laws: if there are certain legal constraints that cannot be broken, the problem input data will also contain those (such as breaks, shift hours...). To see why the solver respects those constraints, you can think of them as having a weight (and thus, if broken, a cost) so large that it will never be interesting for the solver to consider violating them. In practice, the solver handles this more intelligently.
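As a rough sketch of how such weights feed into a total cost (the factor names and weight values below are purely illustrative, not our actual configuration):

```python
# Illustrative only: combine measured factors into one weighted total cost.
# Factor names and weight values are hypothetical, not real configuration.
WEIGHTS = {
    "total_mileage": 1.0,        # cost per mile driven
    "paid_overtime_hours": 40.0, # cost per hour of overtime
}

def weighted_cost(factors: dict[str, float]) -> float:
    """Sum each measured factor multiplied by its business-specific weight."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

# Example: a candidate schedule with 1200 miles driven and 3 hours of overtime.
print(weighted_cost({"total_mileage": 1200, "paid_overtime_hours": 3}))  # 1320.0
```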
Scoring
What we referred to as a score is actually made up of different tiers of scores.
Mileage would be an example of a lower tier or 'soft' score. That doesn't mean the solver pays no attention to it; it's just that a higher tier or 'hard' score is more important and takes precedence. Think of a law regarding the maximum allowed cargo or, even more obviously, real-world volume constraints.
There's only so much cargo you can fit inside a given volume. It makes sense that a theoretical solution using up twice the real-world volume available (even though it means less mileage) is not favoured by the solver, simply because it's impossible in reality. This means that when we refer to 'the best' while comparing solutions, the lower tier scores are irrelevant as long as the higher tier scores differ. Only when the higher tier scores are the same can the lower tier scores start competing against one another.
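A minimal sketch of that tiered comparison, assuming a simple two-tier score compared hard tier first (the real solver uses a richer representation):

```python
from typing import NamedTuple

class Score(NamedTuple):
    """Two-tier score: the hard tier dominates, the soft tier only breaks ties.
    In this illustrative convention, closer to zero is better."""
    hard: int    # e.g. overloaded cargo volume (0 means feasible)
    soft: float  # e.g. negated total mileage

# Tuples compare lexicographically: the hard tier decides, soft breaks ties.
feasible_longer_route  = Score(hard=0,  soft=-1500)  # legal, but more mileage
overloaded_short_route = Score(hard=-2, soft=-900)   # shorter, but impossible load

best = max(feasible_longer_route, overloaded_short_route)
print(best)  # Score(hard=0, soft=-1500): feasibility wins despite the extra miles
```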
Score meaning
That being said, a few numbers alone don't explain the actual solution. They are but the internal drivers of the optimisation.
That's why we don't just report these numbers as summary statistics, but also show how we arrived at that total cost and break it down into its individual factors.
For routing, this can be a combination of many factors such as paid overtime, total mileage and how well delivery windows are respected. It might happen that the solver returns a solution, such as the earlier mentioned fleet schedule, and a person with experience in the field thinks something is off. Since formulating the problem and arriving at the right weights is part of the process, we're prepared for this. What we've spoken of so far is 'solving' the problem, which internally performs a lot of changes to candidate solutions and evaluates those changes. There's also the possibility of suggesting a solution yourself and letting the solver 'evaluate' it.
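A self-contained sketch of the two modes, reusing the hypothetical factor names and weights from above: 'solving' boils down to evaluating many candidate solutions and keeping the best one, while 'evaluating' scores a single (for example human-made) solution with exactly the same logic:

```python
# Hypothetical factor names and weights, for illustration only.
WEIGHTS = {"total_mileage": 1.0, "paid_overtime_hours": 40.0}

def evaluate(factors: dict[str, float]) -> dict:
    """Score one candidate and keep the per-factor cost breakdown."""
    breakdown = {name: WEIGHTS[name] * value for name, value in factors.items()}
    return {"total_cost": sum(breakdown.values()), "breakdown": breakdown}

def solve(candidates: list[dict[str, float]]) -> dict:
    """'Solving': evaluate many candidate solutions and keep the cheapest."""
    return min((evaluate(c) for c in candidates), key=lambda e: e["total_cost"])

# A human-suggested schedule goes through the same evaluation, so its
# breakdown is directly comparable to whatever the solver found itself.
suggestion = {"total_mileage": 1100, "paid_overtime_hours": 5}
print(evaluate(suggestion))
# {'total_cost': 1300.0, 'breakdown': {'total_mileage': 1100.0, 'paid_overtime_hours': 200.0}}
```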
Suggestions based on human expertise
Suggesting a solution based on expertise will produce one of three outcomes relative to what the solver came up with by itself. First, the suggested solution's score is better, which is a strong indication that the solver needs more time - especially when the improvement is very small, since the solver spent its time on structurally larger improvements.
Swapping two routing locations or two nurse shifts is a typical example of such a small improvement, and it can be handled with an extra calculation phase. Second, the score could be the same, even though one solution is qualitatively better in the real world. In this case, the input data or parameters will have to be changed accordingly, because the solver only has the score to go by. Thus, a real-world improvement should be reflected in a score improvement. Third, the suggested solution is worse.
Here the reporting of all the causes behind the scores comes in very handy. Maybe swapping two locations along a certain route is more interesting mileage-wise, but it also violates the delivery time windows, which is seen as a more important cost. If in a certain business case it is not, the input can be changed and a new best solution will come out accordingly.
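To make that concrete, here is a hedged example (hypothetical factors and weights) of comparing a human suggestion against the solver's solution factor by factor:

```python
# Hypothetical factor names and weights, for illustration only.
WEIGHTS = {"total_mileage": 1.0, "time_window_violations": 500.0}

def cost(factors: dict[str, float]) -> float:
    return sum(WEIGHTS[name] * value for name, value in factors.items())

solver_solution  = {"total_mileage": 1200, "time_window_violations": 0}
human_suggestion = {"total_mileage": 1150, "time_window_violations": 1}

# Per-factor comparison shows exactly where the two solutions differ.
for name in WEIGHTS:
    print(name, ":", solver_solution[name], "vs", human_suggestion[name])

print("solver:", cost(solver_solution), "suggestion:", cost(human_suggestion))
# The suggestion saves 50 miles but breaks one delivery window, so it scores
# worse (1650.0 vs 1200.0); lowering the time-window weight would flip that.
```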
Summary
In conclusion, the chosen solution is one of the solutions with the best score. That score is an evaluation of the
solution based on the input data. The solver reports on the factors of this score, so that the cost drivers are
transparent.
There's also the option to send solutions to the solver, be they tweaked, self-made or copied. Just as when the solver does all the work by itself, it will report on the causes of the total score here too, which makes transparent comparison possible. This comparison forms the basis for defining actions for continuous improvement, such as choosing a different solve time or changing the input so that the model aptly reflects real-world results.
All functionality is in place to get the most out of the solver for your business: the evaluation is developer-friendly and allows expertise to be added to the solutions, steering you towards using the right features and getting the best results.