# Statistics

## Profiling

For profiling we have to compile the library/driver separately (as described here). TL;DR: both the library and the driver need to be compiled with `-fprofile-instr-generate -fcoverage-mapping`.
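As a minimal sketch of such a build (assuming a Clang toolchain; `mylib.cpp` and `driver.cpp` are placeholder names, not files from this project):

```shell
# Clang source-based coverage flags; both library and driver need them
COV_FLAGS="-fprofile-instr-generate -fcoverage-mapping"

# Build the library object and link it into a libFuzzer driver
clang++ $COV_FLAGS -c mylib.cpp -o mylib.o
clang++ $COV_FLAGS -fsanitize=fuzzer driver.cpp mylib.o -o profile
```

Running the resulting `profile` binary writes a `default.profraw` file on exit.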

To get the coverage information we have to follow these steps:

  1. Run a normal fuzzer build (without coverage instrumentation): `./fuzzer -any_other_runtime_flags=1 ./corpus`
  2. Minimize the corpus:
mkdir corpus_min
./fuzzer -merge=1 ./corpus_min ./corpus
  3. Run the coverage-instrumented build over the minimized corpus: `./profile -runs=0 ./corpus_min`
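The three steps above can be sketched as one script (the extra runtime flag from step 1 is a placeholder and is omitted here):

```shell
# 1. Run the normal (non-instrumented) fuzzer build to grow the corpus
./fuzzer ./corpus

# 2. Minimize the corpus into corpus_min
mkdir -p corpus_min
./fuzzer -merge=1 ./corpus_min ./corpus

# 3. Replay the minimized corpus with the coverage-instrumented build;
#    -runs=0 executes only the existing inputs and writes default.profraw
./profile -runs=0 ./corpus_min
```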

The last step will generate a `.profraw` file that contains the exact code coverage the fuzzer achieved in step 1. To extract this information, we first need to convert `.profraw` into `.profdata` using the `llvm-profdata` tool: `llvm-profdata merge -sparse default.profraw -o default.profdata`. In this step you can add more `.profraw` files generated by different fuzzers and merge them together into a single `.profdata` file (for a concrete example see this script).

The next step is to read the `.profdata` file using `llvm-cov`. For example, to generate the report one can run `llvm-cov report $PROFILE_BINARY -instr-profile=$PROFDATA_NAME.profdata`, or to get per-function information: `llvm-cov report -show-functions $PROFILE_BINARY -instr-profile=$PROFDATA_NAME.profdata $SOURCES`. Here `$SOURCES` is the list of all source files, which you can get using `$(find $REPO -iname '*.h' -or -iname '*.cpp' -or -iname '*.c' -or -iname '*.cc')`.
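Putting the two tools together, the reporting pipeline sketched end to end (`./profile` and `$REPO` are placeholders for the instrumented binary and the source checkout):

```shell
# Convert the raw profile into the indexed format llvm-cov understands
llvm-profdata merge -sparse default.profraw -o default.profdata

# Collect all C/C++ source files of the project
SOURCES=$(find "$REPO" -iname '*.h' -or -iname '*.cpp' -or -iname '*.c' -or -iname '*.cc')

# Summary report, then a per-function breakdown
llvm-cov report ./profile -instr-profile=default.profdata
llvm-cov report -show-functions ./profile -instr-profile=default.profdata $SOURCES
```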

For more examples, see `start_coverage.sh`.

If you followed our usual process, i.e. `./run_analysis.sh && ./run_drivergeneration.sh && ./run_fuzzing.sh`, then you can simply run `./run_coverage.sh` next and all of this profiling will work seamlessly. The final report will contain Region Coverage, Function Coverage, Line Coverage, and Branch Coverage.

## Crash Deduplication

For crash deduplication and clustering we use CASR, or `casr-libfuzzer` to be more specific. Note that CASR does not do root-cause clustering; it is a simplistic tool that just looks at the stack trace and clusters crashes according to a similarity metric. The main advantage of using CASR is that it works out of the box. For more details see their paper.

CASR should be run inside a Docker container with exactly the same layout/configuration as the Docker container where the fuzzing was done, and with the privileged flag. First of all you need to install the dependencies:

  1. `sudo apt install build-essential clang gdb lsb-release`
  2. Install Rust
  3. Install CASR from crates.io: `cargo install casr` (or build it manually: `git clone https://github.com/ispras/casr && cargo build --release`)
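The install steps above can be collected into a small helper (wrapped in a function so the script can be sourced without side effects; assumes a Debian/Ubuntu host, per the `apt` step, and the standard `rustup` installer for Rust):

```shell
install_casr() {
    # 1. System dependencies
    sudo apt install build-essential clang gdb lsb-release
    # 2. Rust toolchain via rustup
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
    # 3. CASR from crates.io
    cargo install casr
}
```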

To start the clustering process using `casr-libfuzzer`, run: `casr-libfuzzer -i path/to/crashes -o path/to/desired/output/dir -- ./fuzzer`
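A concrete invocation might look like the following (all paths and the cluster/report names are illustrative; `casr-cli`, a report viewer shipped in the same crate, is our assumption rather than something this workflow requires):

```shell
# Cluster the crashes that libFuzzer saved during fuzzing
CRASHES=./crashes      # directory with crash inputs (placeholder)
OUT=./casr_reports     # output directory for clusters (placeholder)
casr-libfuzzer -i "$CRASHES" -o "$OUT" -- ./fuzzer

# Each cluster ends up in its own subdirectory of $OUT; individual
# *.casrep reports can be inspected with casr-cli
casr-cli "$OUT"/cl1/crash-example.casrep
```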

If you followed our usual process, i.e. `./run_analysis.sh && ./run_drivergeneration.sh && ./run_fuzzing.sh`, then you can simply run `./run_clustering.sh` next and all these steps will be done for you seamlessly.