LIVE WEBINAR: How to develop a real-time human detection application on an FPGA edge device using deep learning (US)

Presenter: Adam Taylor, Embedded Systems Consultant, Adiuvo Engineering & Training LTD

Thursday, April 23, 2020

11:00 AM – 12:00 PM PDT

Abstract: 

The ability to process at the edge is critical to many modern applications such as smart traffic cameras, assisted/autonomous driving, and vision-guided robotics. While these use cases may implement different high-level algorithms, all of them require a common foundation:

  • Interfacing with one or more high-resolution, high-bandwidth image sensors to capture images
  • Processing the captured image to convert from RAW pixel data to RGB, and onward into a format suitable for further processing by the specific algorithm
  • Implementing Convolutional Neural Network (CNN) machine-learning inference algorithms to classify objects of interest for the use case
  • Recording data to non-volatile memory to provide an event log/buffer of the application history for diagnosis or later analysis
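The RAW-to-RGB conversion step above can be sketched in host software. The following is a minimal nearest-neighbour demosaic of an RGGB Bayer mosaic; the function name, pattern layout, and sample values are illustrative assumptions, not taken from the webinar material (a real pipeline would run this in FPGA fabric with interpolation):

```python
import numpy as np

def demosaic_rggb(raw: np.ndarray) -> np.ndarray:
    """Nearest-neighbour demosaic of an RGGB Bayer mosaic.

    Each 2x2 Bayer cell [[R, G], [G, B]] becomes one RGB pixel,
    averaging the two green samples. Output is half-resolution.
    """
    r = raw[0::2, 0::2].astype(np.float32)          # red sample sites
    g = (raw[0::2, 1::2].astype(np.float32) +
         raw[1::2, 0::2].astype(np.float32)) / 2.0  # average the two greens
    b = raw[1::2, 1::2].astype(np.float32)          # blue sample sites
    return np.stack([r, g, b], axis=-1)

# One 2x2 RAW Bayer cell: R=200, G=100 and 120, B=50
raw = np.array([[200, 100],
                [120,  50]], dtype=np.uint8)
rgb = demosaic_rggb(raw)
# rgb[0, 0] holds the reconstructed pixel: R=200.0, G=110.0, B=50.0
```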

[TySOM-3A chart]

This webinar will demonstrate an example implementation using the Aldec TySOM-3A-ZU19EG board with the FMC-ADAS and FMC-NVMe daughter cards, showing how such applications can be easily prototyped on a Xilinx Zynq UltraScale+ MPSoC.

In this webinar you’ll learn:

  • Benefits of using FPGAs for deep-learning object-detection inference at the edge
  • How to capture video into the FPGA using an automotive HSD camera with a 192-degree wide-angle lens
  • How to implement a deep-learning human detection algorithm inside the FPGA and accelerate it with the Xilinx Deep Learning Processor Unit (DPU)
  • How to record/buffer the processed data at high speed over PCIe to an NVMe SSD
  • How to display the output of the system on an HDMI monitor
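The record/buffer step in the list above can be illustrated at the software level. This is a minimal length-prefix-free, fixed-size binary event log; the file name and record layout are illustrative assumptions (on the target, the log file would live on a filesystem mounted on the NVMe SSD reached over PCIe):

```python
import struct
import time

RECORD_FMT = "<dII"  # timestamp (float64), frame id, detection count
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def append_event(fh, frame_id: int, detections: int) -> None:
    """Append one fixed-size event record to the open log file."""
    fh.write(struct.pack(RECORD_FMT, time.time(), frame_id, detections))

def read_events(path: str):
    """Read back all records as (timestamp, frame_id, detections) tuples."""
    with open(path, "rb") as fh:
        data = fh.read()
    return [struct.unpack_from(RECORD_FMT, data, off)
            for off in range(0, len(data), RECORD_SIZE)]

# On the target this path would be on the NVMe mount, e.g. /mnt/nvme/
with open("events.bin", "wb") as log:
    append_event(log, frame_id=1, detections=2)
    append_event(log, frame_id=2, detections=0)

events = read_events("events.bin")
print(len(events), events[0][1], events[0][2])  # 2 1 2
```

Fixed-size records keep writes sequential and block-aligned, which is what lets an NVMe device sustain high logging throughput.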

This webinar will explore each stage of the implementation, explaining in detail the design decisions and engineering trade-offs required to create such a solution, along with demonstrating the benefits of this approach for edge processing systems.

The webinar will wrap up with a live demonstration of the system and a Q&A session.

Agenda: 

  • Introduction to edge processing for modern deep learning applications
  • Benefits of using FPGAs as an edge processing system for deep learning applications
  • Developing a real-time human detection application using a Xilinx Zynq MPSoC FPGA
  • Live demo
  • Conclusion
  • Q&A

Presenter Bio:

Adam Taylor

Adam Taylor is an expert in the design and development of embedded systems and FPGAs for a range of end applications. Throughout his career, Adam has used FPGAs to implement a wide variety of solutions, from RADAR to safety-critical control systems (SIL4) and satellite systems, with interesting stops in image processing and cryptography along the way. He has held executive positions leading large developments for several major multinational companies. For many years Adam held significant roles in the space industry: he was a Design Authority in the Astrium Satellites payload processing group for six years, and for three years he was the Chief Engineer of a space imaging company, responsible for several game-changing projects.

FPGAs are Adam's first love. He is the author of numerous articles and papers on electronic and FPGA design, including over 330 blogs for Xilinx on how to use the Zynq and Zynq MPSoC, with more than 25 million views.

Adam is a Chartered Engineer, a Fellow of the Institution of Engineering and Technology, a Visiting Professor of Embedded Systems at the University of Lincoln, and an Arm Innovator. He is also the owner of the engineering and consultancy company Adiuvo Engineering and Training.
