Static Analysis: some UI ideas
dmalcolm at redhat.com
Mon Feb 4 20:04:36 UTC 2013
I've been experimenting with some UI ideas for reporting static analysis
results: I've linked to two different UI reports below.
My hope is that we'll have a server in the Fedora infrastructure for
browsing results, marking things as false positives etc.
However, for the purposes of simplicity during experimentation I'm
simply building static HTML reports.
My #1 requirement when I'm viewing static analysis results is that I
want to *see the code* with the report, so I've attempted to simply show
the code with warnings shown inline.
Note also that when we have a server we can do all kinds of
auto-filtering behaviors so that e.g. package maintainers only see
warnings from tests that have decent signal:noise ratio (perhaps with
other warnings greyed out, or similar).
Results of an srpm build
The first experimental report can be seen here:
It shows warnings from 4 different static analyzers when rebuilding a
particular srpm (policycoreutils-2.1.13-27.2.fc17). There's a summary
table at the top of the report showing, for each source file in the
build, which analyzers found reports (those that found any are
highlighted in red). Each row has an <a> linking you to a report on
that source file. Those source files that have issues have a table
showing the issues, with links to them. The issues are shown inline
within the syntax-colored source files.
Ideally there'd be support for using "n" and "p" to move to the
next/previous issue, rather than using "back" in the browser to
navigate through the tables.
An example of an error shown inline:
shows a true error in seunshare.c found by cppcheck ("foo =
realloc(foo, ...)" is always a mistake, since if realloc fails you get
NULL back, but still have responsibility for freeing the old foo).
The second experimental report can be seen here:
It shows a comparison of the results of two different builds of a
package (python-ethtool), again running multiple analyzers.
(specifically, a comparison between 0.7 and a snapshot of upstream).
It's similar to the first report, but instead of showing one file at a
time, it shows a side-by-side diff of old vs new file.
Any issues found in old or new source code are shown inline, so you can
see issues that are fixed, issues that are newly introduced, and issues
that are present in both old and new code.
Again, ideally there'd be support for moving to the next/previous
error. Also my CSS is ugly.
(FWIW, the code that generates these is in:
specifically make-simple-report.py and make-comparative-report.py;
they're reading the output from mock-with-analysis)