Smart Image Sensor

Bio-Inspired Reconfigurable Neuromorphic Image Sensor

Cameras are pervasively used for applications such as surveillance, traffic monitoring, and precision agriculture. Most camera systems, however, act only as data collection and relay units, while the processing happens at backend servers. This project brings the processing units close to the image sensor and introduces parallelism into the design with three types of processors: pixel processors, region processors, and a sequential processor. These processors are distributed across three logical layers of the architecture. Pixel processors occupy the first logical layer of the digital pixel sensor (DPS), with one processor dedicated to each pixel. They work in parallel, handle low-level image processing operations, remove temporal redundancy in the image, and provide pixel-level parallelism. The second logical layer comprises a number of region processors: a group of pixel processors forms a region, and the design assigns a region processor to every region. Like the pixel processors, they work in parallel, take input from their corresponding region to perform mid-/high-level image processing, remove spatial redundancy, and provide region-level parallelism. Together, these two layers provide massive parallelism in the design. Finally, a sequential processor in the last layer receives the extracted information from the region processors through a bus and completes the remaining high-level reasoning operations of the machine vision application. The processors maintain hierarchical connections among the computational layers and reduce the data volume through hierarchical processing. Moreover, the processors are reconfigurable in the ASIC paradigm to handle different machine vision applications. This flexible design emulates some concepts of the biological vision system. Simulation results show that the processing achieves high acceleration in vision applications and saves a significant amount of power through hierarchical processing.
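To make the division of labor between the layers concrete, the sketch below passes a pair of frames through pixel-level event detection, region-level summarization, and a final sequential step. It is a minimal, software-only Python illustration of the idea, not the RTL or ASIC implementation; the 4x4 region size, the intensity threshold, and the descriptor format are assumptions made for the example.

```python
import numpy as np

REGION = 4          # assumed region size: a 4x4 block of pixels per region-processor
THRESHOLD = 15      # assumed intensity-change threshold for raising a pixel event

def pixel_layer(prev_frame, curr_frame):
    """Layer 1: one logical processor per pixel removes temporal redundancy by
    flagging only the pixels whose intensity changed significantly."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return diff > THRESHOLD

def region_layer(event_map, curr_frame):
    """Layer 2: each region-processor sees only its own block of pixels, removes
    spatial redundancy, and forwards a compact descriptor for active regions."""
    descriptors = []
    h, w = event_map.shape
    for r in range(0, h, REGION):
        for c in range(0, w, REGION):
            block = event_map[r:r + REGION, c:c + REGION]
            if block.any():                          # inactive regions send nothing
                patch = curr_frame[r:r + REGION, c:c + REGION]
                descriptors.append(((r, c), int(block.sum()), float(patch.mean())))
    return descriptors                               # much smaller than a full frame

def sequential_layer(descriptors):
    """Layer 3: a single sequential processor completes the high-level task;
    here it simply ranks the active regions by their event count."""
    return sorted(descriptors, key=lambda d: d[1], reverse=True)

# Toy example: only the region containing motion generates work for the upper layers.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[0:2, 0:2] = 200                                 # simulated moving object
active = region_layer(pixel_layer(prev, curr), curr)
print(sequential_layer(active))                      # [((0, 0), 4, 50.0)]
```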

Block Diagram of the Computational Units in the Image Sensor

 

Keywords

Inter-pixel processing, image sensor, image processing, VLSI, FPGA, ASIC, bio-inspired processing, neuromorphic computing.

Evaluation Platform

We implement the full RTL-to-GDSII Application-Specific Integrated Circuit (ASIC) flow for the image sensor at the block level, using a 1.1 V supply voltage and an 800 MHz clock frequency in a 45 nm technology. We use Synopsys VCS and Design Compiler to convert the RTL into a gate-level netlist, Cadence Innovus for place and route of the synthesized netlist, Calibre for design rule checking (DRC), and finally Synopsys PrimeTime for static timing analysis, with the Nangate library as the process design kit (PDK). To evaluate performance, we also implement the design on a Xilinx FPGA board (Kintex UltraScale+ evaluation board) using the Vivado Design Suite 18.2. On the FPGA, we concentrate on the RTL design, which is also implementable on the ASIC platform.

Goal of the Hierarchical Processing

The ultimate goal of the project is to eliminate redundant information and thereby accelerate the machine vision application. The figure below illustrates how the relevant information is extracted from an image, which enables event-driven processing.

 

The figure above compares frame-based and event-based processing for a scene with three objects. Object 1 is a bird sitting on a branch in the top-right corner of the image. Object 2 consists of a few insignificant, scattered moving objects. Object 3 is a bird flying along a serpentine path. In the frame-based processing on the left, every pixel is processed and a full frame is produced at a fixed time interval. Conversely, our event-driven processing system, on the right, responds only when there is a significant event. The processing for object 1 is discarded because there is no temporal change. Object 2 does exhibit temporal change, but it carries no relevant information. Only object 3 contains non-redundant information, so the system performs processing only along the serpentine path of object 3, as shown in the figure.
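The behavior in the figure can be captured in a short, hypothetical Python sketch: a static object raises no pixel events, scattered noise raises too few events in any region to be significant, and only the moving object activates regions for further processing. The frame size, the thresholds, and the per-region significance test are illustrative assumptions rather than the project's actual parameters.

```python
import numpy as np

EVENT_THRESHOLD = 20    # per-pixel intensity change needed to raise an event
MIN_EVENTS = 4          # per-region event count needed to count as "significant"

def event_driven_step(prev_frame, curr_frame, region=4):
    """Return the (row, col) origins of regions with enough pixel events to matter."""
    events = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > EVENT_THRESHOLD
    active = []
    h, w = events.shape
    for r in range(0, h, region):
        for c in range(0, w, region):
            if events[r:r + region, c:c + region].sum() >= MIN_EVENTS:
                active.append((r, c))
    return active

# Two consecutive 16x16 frames containing the three objects from the figure.
prev = np.zeros((16, 16), dtype=np.uint8)
curr = np.zeros((16, 16), dtype=np.uint8)
prev[0:3, 12:15] = 180      # object 1: static bird, identical in both frames ...
curr[0:3, 12:15] = 180      # ... so it raises no events at all
curr[10, 2] = 90            # object 2: scattered single-pixel noise raises events,
curr[13, 7] = 90            #           but too few per region to be significant
prev[6:9, 2:5] = 150        # object 3: bird at its old position ...
curr[6:9, 4:7] = 150        # ... shifted to its new position in the next frame

print(event_driven_step(prev, curr))   # prints [(4, 0), (4, 4)]: only object 3's regions
```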

Overall Benefit of the Project

  • The integration of several computational layers in the sensor provides in-sensor processing and brings the computational units close to the image sensor.
  • The pixel-parallel design enables parallel processing and delivers high acceleration of the low- and mid-level stages of machine vision applications.
  • The bio-inspired computing removes temporal and spatial redundancy and saves significant power and energy in the hierarchical layers. At the same time, it reduces the data volume in each layer, which lightens the burden on the external sequential processor and accelerates the sequential operations of the vision application.
  • The processors in each layer are reconfigurable to different applications in the ASIC. This allows flexibility in the design and lets the sensor be applied to different applications.

Simulator

The source code for the Python simulator of our region-based event camera can be found at this link:

Publications

  • Pankaj Bhowmik, Md Jubaer Hossain Pantho, Marjan Asadinia, and Christophe Bobda. “Design of a Reconfigurable 3D Pixel-Parallel Neuromorphic Architecture for Smart Image Sensor.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 673-681. 2018.
  • Md Jubaer Hossain Pantho, Pankaj Bhowmik, and Christophe Bobda. “Pixel-Parallel Architecture for Neuromorphic Smart Image Sensor with Visual Attention.” In 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), pp. 245-250. IEEE, 2018.
  • Pankaj Bhowmik, Md Jubaer Hossain Pantho, and Christophe Bobda. “Visual Cortex Inspired Pixel-Level Re-configurable Processors for Smart Image Sensors.” In Proceedings of the Design Automation Conference (DAC), 2019.
  • Pankaj Bhowmik, Md Jubaer Hossain Pantho, Sujan Saha, and Christophe Bobda. “A Reconfigurable Layered-Based Bio-Inspired Smart Image Sensor.” In 2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). [Accepted at DAC 2019 as a work-in-progress paper]
  • Md Jubaer Hossain Pantho, Pankaj Bhowmik, and Christophe Bobda. “Neuromorphic Image Sensor Design with Region-Aware Processing.” In 2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI).
  • “Event-Based Re-configurable Hierarchical Processors for Smart Image Sensors.” In Proceedings of the IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP), 2019.

Patent

  • Pankaj Bhowmik, Md Jubaer Hossain Pantho, Marjan Asadinia, and Christophe Bobda. US Patent, “Reconfigurable 3D Pixel-Parallel Neuromorphic Architecture for Smart Image Sensor”.

Awards

  • Best Poster Presentation Award for presenting the paper titled “Design of a Reconfigurable 3D Pixel-Parallel Neuromorphic Architecture for Smart Image Sensor” at the Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Salt Lake City, Utah, USA.

Acknowledgment

The smart image sensor project is supported by the National Science Foundation (NSF) under Grant No. 1618606.