
Explainability of static analysis results

Static analysis tools perform complex reasoning to produce warnings. Explaining this reasoning to users is a known challenge for such tools. We introduce the concept of analysis automata and detail three applications that enhance explainability: (1) warning understanding, (2) warning classification, and (3) detection of bad analysis patterns.
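The following is a minimal, hypothetical sketch of the general idea, not the actual analysis-automata formalism from this work: analysis steps are modeled as labeled transitions between facts, so a warning can be explained by walking back along the chain of transitions that derived it. All names (`Transition`, `AnalysisAutomaton`, `explain`) and the null-dereference example are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Transition:
    source: str   # the fact this step starts from
    target: str   # the fact this step derives
    reason: str   # human-readable justification for this analysis step

@dataclass
class AnalysisAutomaton:
    """Toy model: states are analysis facts; transitions carry the
    reasoning step that derives one fact from another."""
    transitions: list = field(default_factory=list)

    def add_step(self, source: str, target: str, reason: str) -> None:
        self.transitions.append(Transition(source, target, reason))

    def explain(self, warning_state: str) -> list:
        """Walk backwards from the warning to reconstruct the chain
        of reasoning steps that produced it, in derivation order."""
        by_target = {t.target: t for t in self.transitions}
        chain = []
        state = warning_state
        while state in by_target:
            step = by_target[state]
            chain.append(f"{step.source} -> {step.target}: {step.reason}")
            state = step.source
        return list(reversed(chain))

# Usage: explain a (made-up) null-dereference warning step by step.
a = AnalysisAutomaton()
a.add_step("x = f()", "x may be null", "f() can return null on error")
a.add_step("x may be null", "x.length() warns",
           "x is dereferenced without a null check")
for line in a.explain("x.length() warns"):
    print(line)
```

In this sketch the explanation is just the transition labels replayed in order, which is one plausible way an automaton-based representation can make an analyzer's reasoning inspectable.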

We present MUDARRI, an IntelliJ plugin that illustrates the first use case.

(Figure: project-explainability.png)

Artifacts

Publications