Reading#

Frameworks, libraries, tools#

  • FINN

    • neural network inference on FPGAs

    • quantized neural networks

    • FINN examples

  • even though the project is active, newer boards such as the PYNQ-Z2 or Alveo U50 are not tested.

  • QONNX

    • introduces quantized operators for ONNX

  • brevitas

    • quantized implementations of most common PyTorch layers, e.g., QuantConv1d

    • enables low-precision arithmetic (8-bit, 4-bit, etc.), which reduces DSP and memory usage
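
      As a sketch of why low-precision arithmetic shrinks the footprint: a uniform symmetric quantizer (the generic scheme, not Brevitas's actual implementation) maps float weights to n-bit signed integers with one shared scale factor, so a 4-bit weight needs an eighth of the storage of a float32 one.

      ```python
      import numpy as np

      def quantize_symmetric(w, n_bits):
          """Uniform symmetric quantization of a float array to n-bit signed ints."""
          q_max = 2 ** (n_bits - 1) - 1        # e.g. 127 for 8-bit, 7 for 4-bit
          scale = np.max(np.abs(w)) / q_max    # one scale factor per tensor
          q = np.clip(np.round(w / scale), -q_max, q_max).astype(np.int8)
          return q, scale

      w = np.array([-1.0, -0.5, 0.0, 0.25, 1.0])
      q8, s8 = quantize_symmetric(w, 8)
      q4, s4 = quantize_symmetric(w, 4)
      # dequantized values (q * scale) approximate the originals; 4-bit is
      # coarser, but each weight now occupies 4 bits instead of 32
      print(q8, np.round(q8 * s8, 3))
      print(q4, np.round(q4 * s4, 3))
      ```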

  • QKeras

    • quantized implementations of Keras layers

    • e.g., smooth_sigmoid(x), hard_sigmoid(x)

    • includes an energy consumption estimator
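
      These activations are piecewise-linear approximations of the sigmoid, which are cheap to realize in fixed-point hardware. A pure-NumPy sketch, with slopes (0.5 and 0.1875) taken from my reading of the QKeras source, so verify against the library before relying on them:

      ```python
      import numpy as np

      def hard_sigmoid(x):
          # linear segment with slope 0.5 around the origin, clipped to [0, 1]
          return np.clip(0.5 * x + 0.5, 0.0, 1.0)

      def smooth_sigmoid(x):
          # shallower slope (0.1875 = 3/16, realizable with shifts and adds)
          return np.clip(0.1875 * x + 0.5, 0.0, 1.0)

      x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
      print(hard_sigmoid(x))    # saturates sooner than smooth_sigmoid
      print(smooth_sigmoid(x))
      ```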

  • Netron

    • ONNX model visualization

  • rule4ml

    • resource and latency estimation for ML on FPGA

  • hls4ml

    • converts neural network models to FPGA firmware

  • HLSFactory

    • framework for running HLS on many configurations of a design and comparing the results

  • Vitis Libraries

    • Vitis libraries for HLS

    • docs

  • Vitis AI

    • offers more flexible AI inference than Brevitas & FINN

    • compiled code runs on a micro-coded DPU (Deep Learning Processing Unit)

    • docs

  • Ramulator

    • cycle-accurate RAM simulator including HBM

Neural network implementations#

Math libraries#

Reinforcement learning implementations#