Speaker: James Conroy, Arm
Increasingly, devices perform AI at the furthest point in the system: on edge and endpoint devices. As the industry-leading foundation for intelligent computing, Arm's ML technologies give developers a comprehensive platform of hardware IP, software, and ecosystem support.
Arm NN is an accelerated inference engine for Arm CPUs, GPUs, and NPUs. It executes ML algorithms on-device to make predictions based on input data. Arm NN translates models from existing neural network frameworks, such as TensorFlow Lite, TensorFlow, ONNX, and Caffe, allowing them to run efficiently and without modification across Arm Cortex-A CPUs, Mali GPUs, and Ethos-N NPUs.
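As a concrete illustration of that flow, the sketch below parses a TensorFlow Lite model, optimizes it for a backend preference list, and runs one inference. It follows Arm NN's public C++ API and samples; the model path and the tensor names "input" and "output" are placeholders, and exact headers and signatures vary somewhat across Arm NN releases.

```cpp
// Minimal sketch: run a .tflite model through Arm NN on the NEON CPU
// backend ("CpuAcc"), falling back to the reference backend ("CpuRef").
// Model path and tensor names are placeholders for your own model.
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>
#include <vector>

int main()
{
    // Parse the .tflite file into an Arm NN network graph.
    auto parser = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr network =
        parser->CreateNetworkFromBinaryFile("model.tflite");

    // Create a runtime and optimize the graph for the preferred backends.
    armnn::IRuntimePtr runtime =
        armnn::IRuntime::Create(armnn::IRuntime::CreationOptions());
    armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
        *network, {armnn::Compute::CpuAcc, armnn::Compute::CpuRef},
        runtime->GetDeviceSpec());

    // Load the optimized network and look up the I/O binding points.
    armnn::NetworkId netId;
    runtime->LoadNetwork(netId, std::move(optNet));
    auto inputBinding  = parser->GetNetworkInputBindingInfo(0, "input");
    auto outputBinding = parser->GetNetworkOutputBindingInfo(0, "output");

    // Wrap caller-owned buffers as Arm NN tensors and run inference.
    std::vector<float> inputData(inputBinding.second.GetNumElements());
    std::vector<float> outputData(outputBinding.second.GetNumElements());
    armnn::InputTensors inputs{
        {inputBinding.first,
         armnn::ConstTensor(inputBinding.second, inputData.data())}};
    armnn::OutputTensors outputs{
        {outputBinding.first,
         armnn::Tensor(outputBinding.second, outputData.data())}};
    runtime->EnqueueWorkload(netId, inputs, outputs);
    return 0;
}
```

Because the backend preference list is just data, retargeting the same model to a Mali GPU is a one-line change, e.g. prepending armnn::Compute::GpuAcc to the list.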
In this technical session, we give an overview of Arm NN with a focus on its plug-in framework. The audience will walk away with a working knowledge of using Arm NN to run ML models with accelerated performance on a mobile phone or a Raspberry Pi, and of writing plug-ins that extend support to new neural network processing units.
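For a flavor of what such a plug-in involves, here is a minimal sketch of a custom backend, modeled on the static-registration pattern Arm NN's in-tree backends (such as the Neon backend) use. The class name and BackendId "MyNpu" are hypothetical, the factory and layer-support objects are left as null placeholders, and the header paths and exact set of virtual methods differ across Arm NN releases.

```cpp
// Hypothetical NPU backend skeleton for Arm NN's plug-in framework.
// A real backend would return its own IWorkloadFactory and ILayerSupport
// implementations instead of the null placeholders below.
#include <armnn/BackendRegistry.hpp>
#include <armnn/backends/IBackendInternal.hpp>

class MyNpuBackend : public armnn::IBackendInternal
{
public:
    // The BackendId ("MyNpu" is a made-up name) is how callers select this
    // backend in the preference list passed to armnn::Optimize().
    static const armnn::BackendId& GetIdStatic()
    {
        static const armnn::BackendId id{"MyNpu"};
        return id;
    }
    const armnn::BackendId& GetId() const override { return GetIdStatic(); }

    // Creates the factory that builds per-layer workloads for the NPU.
    IWorkloadFactoryPtr CreateWorkloadFactory(
        const IMemoryManagerSharedPtr& /*memoryManager*/) const override
    {
        return IWorkloadFactoryPtr{}; // placeholder: return your factory
    }

    // Reports which layers and data types the NPU can execute, so the
    // optimizer knows which parts of a graph it may assign to this backend.
    ILayerSupportSharedPtr GetLayerSupport() const override
    {
        return ILayerSupportSharedPtr{}; // placeholder: return your checker
    }
};

// Static registration: loading this translation unit makes the backend
// discoverable by its BackendId through the global BackendRegistry.
static armnn::BackendRegistry::StaticRegistryInitializer g_RegisterMyNpu{
    armnn::BackendRegistryInstance(),
    MyNpuBackend::GetIdStatic(),
    []() { return armnn::IBackendInternalUniquePtr(new MyNpuBackend); }
};
```

Once registered, the new backend participates in optimization like any built-in one: the optimizer consults its ILayerSupport to decide which layers to place on it, and falls back to the next backend in the preference list for everything else.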