Modular performance benchmarking framework for neural network simulations
beNNch is a modular benchmarking framework for neural network simulations. This accompanying webpage illustrates the benefit of the standardized presentation of benchmark results that beNNch implements.
Click here to see an interactive flip-book style presentation of the performance results published in Albers et al. [citation needed], generated by beNNch.
Either press the on-screen arrows in the bottom-right corner or use the arrow keys of your keyboard to navigate between results. Notice how differences are immediately visible due to the consistent axis scaling and graph positioning.
The title of each page gives the unique identifier of the corresponding benchmark. As all human-readable metadata is attached to the result files, this identifier does not need to (and, in fact, does not) carry any information beyond identification.
The graphs show performance results displayed using the default plot styles of beNNch-plot. These can readily be modified but should remain consistent across the experiments of a single benchmark study.
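The principle of consistent styling can be sketched as follows. This is a minimal, hypothetical example using plain matplotlib, not beNNch-plot's actual API: a single shared style and fixed axis limits are applied to every plot so that results from different benchmarks remain directly comparable.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.pyplot as plt

# Hypothetical shared style: defined once, reused for every benchmark plot
# in a study so that line widths, grids, and figure sizes stay identical.
SHARED_STYLE = {
    "figure.figsize": (6, 4),
    "axes.grid": True,
    "lines.linewidth": 2.0,
}

def plot_scaling(nodes, sim_time, label):
    """Plot wall-clock simulation time vs. number of compute nodes,
    with fixed axis limits so plots from different runs align."""
    with plt.rc_context(SHARED_STYLE):
        fig, ax = plt.subplots()
        ax.plot(nodes, sim_time, marker="o", label=label)
        ax.set_xlabel("compute nodes")
        ax.set_ylabel("wall-clock time [s]")
        ax.set_xlim(1, 64)   # identical limits across all experiments
        ax.set_ylim(0, 100)
        ax.legend()
        return fig

# Example data, purely illustrative
fig = plot_scaling([1, 2, 4, 8], [80, 45, 25, 15], "example run")
```

Fixing the axis limits in one place is what makes differences between flip-book pages immediately visible when paging through results.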
The bullet points collect human-readable metadata that gives context to the results; for example, they specify the software conditions under which the simulations were performed or the model used.
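As an illustration of such metadata, the sketch below shows the kind of key-value pairs one might attach to a result file and render as a bullet list. The keys and values here are invented for the example; they are not beNNch's actual schema.

```python
# Hypothetical metadata attached to a benchmark result file;
# keys and values are illustrative, not beNNch's actual schema.
metadata = {
    "machine": "example-hpc-cluster",  # assumed cluster name
    "simulator": "NEST",               # simulation engine used
    "simulator_version": "3.0",
    "model": "example network model",
    "nodes": 4,
    "tasks_per_node": 8,
}

def format_metadata(md):
    """Render metadata as the kind of bullet list shown under each plot."""
    return "\n".join(f"- {key}: {value}" for key, value in md.items())

print(format_metadata(metadata))
```

Keeping this information with the result files, rather than encoding it in filenames or titles, is what allows the page identifiers to stay purely identifying.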
This guide provides an exemplary walk-through of a typical beNNch use case, designed to help with setting up beNNch.