1. Smart Cameras on the Edge
As increasingly sophisticated systems-on-chip (SoCs) appear in smart cameras, their range of applications in fields such as surveillance, transport, and assisted living broadens substantially. SoCs are a significant technology driver: the available hardware units gain computational power while consuming less energy, characteristics that open up new opportunities in mobile vision, augmented reality, and multi-sensor data analysis. Moreover, emerging hardware-supported functionalities such as real-time face detection or dense motion fields on embedded devices enable new applications such as privacy protection or enhancing the appearance of detected objects through super-resolution. Despite these technological advances, many challenges remain, including dynamic and changing environments, ever-increasing image resolution, and coordinated camera networks. One promising way to address these issues is to combine an event-driven architecture with the service-oriented processing paradigm. In an event-driven architecture, arbitrary (vision) algorithms create new events, which are transmitted to all subscribed recipients. In this way, information from multiple sensors and video-analytics algorithms can be shared and processed, leading to highly flexible distributed systems. In the service-oriented paradigm, applications and algorithms are not bound to specific hardware but instead request computational resources in order to carry out their computations. Hence, the ability to reconfigure hardware units to the requirements of algorithms increases the number of applications that can be solved with smart cameras.
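The event-driven idea described above can be illustrated with a minimal publish/subscribe sketch. This is a hypothetical example, not part of the tutorial material; the `Event` and `EventBus` names and their interfaces are assumptions made here for illustration.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal sketch of an event-driven architecture (hypothetical names):
// any vision algorithm publishes an Event, and the bus delivers it to
// every handler subscribed to that event type.
struct Event {
    std::string type;    // e.g. "face_detected"
    std::string source;  // e.g. the camera or algorithm that produced it
};

class EventBus {
public:
    using Handler = std::function<void(const Event&)>;

    // Register a recipient for one event type.
    void subscribe(const std::string& type, Handler h) {
        handlers_[type].push_back(std::move(h));
    }

    // Transmit the event to all recipients subscribed to its type.
    void publish(const Event& e) {
        auto it = handlers_.find(e.type);
        if (it == handlers_.end()) return;
        for (auto& h : it->second) h(e);
    }

private:
    std::map<std::string, std::vector<Handler>> handlers_;
};
```

For example, a privacy-protection module could subscribe to "face_detected" events and blur the reported regions, regardless of which camera or analytics algorithm produced them; this decoupling is what makes the resulting distributed system flexible.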
This tutorial will focus on three relevant aspects of smart camera networks: (i) SoC-driven hardware designs of smart cameras, (ii) the service-oriented processing paradigm for visual data processing, and (iii) a detailed presentation of various task-oriented analytics algorithms that exploit today's event-driven smart-camera capabilities.
Dr. Axel Weissenfeld, Austrian Institute of Technology (AIT)
Axel Weissenfeld received his Dipl.-Ing. and Dr.-Ing. degrees in electrical engineering from the Leibniz University of Hannover, Germany, in 2003 and 2010, respectively. His research focused on image processing, human-machine interfaces, multiple-view geometry, and computer graphics. From 2008 to 2012 he worked on several real-time hardware platforms for Bosch Security Systems, concentrating on intelligent video analysis and forensic search. This video analysis was deployed on a wide range of products, including PTZ dome cameras, IP cameras, and thermal cameras. Since 2012 he has been with the AIT Austrian Institute of Technology, working on the business development of video and security technology. One major focus of his work is the research and development of smart cameras for video surveillance, which comprise several components such as a video-analytics engine, rule engine, event engine, and action engine. His work ranges from developing production-ready cameras to researching novel event-driven and service-oriented architectures for smart cameras.
2. System-Level Design of System-on-Chip-Based Embedded Smart Cameras
Cameras have become a promising alternative to conventional range sensors owing to their advantages in size, cost, and accuracy. In many applications, particularly in camera sensor networks, the requirements for high speed, low power, small size, and low cost are best served by modern system-on-chip devices. These devices provide a hardware/software structure in which low-level, time-consuming, data-intensive, and repetitive functions are computed in hardware, while high-level reasoning is done in software. The hardware parts of the system are usually designed by experienced hardware engineers using hardware description languages such as VHDL. The integration of hardware and software is classically done at the end of the design process. This approach is error-prone, owing to either incorrect specifications or misunderstandings during the translation from high-level specifications to low-level implementation. There is a need for a seamless design environment that allows experts in artificial intelligence and image processing to focus on developing intelligent applications that can automatically be mapped to efficient computing platforms.
In this tutorial, we present an integrated environment that helps solve this problem and provides designers of image-processing systems with the proper tools to implement, verify, and evaluate their systems in a real environment. Our focus is on building a generic embedded hardware/software architecture and providing a symbolic representation that allows programmability at a very high level of abstraction. We propose a four-level flow, starting from a specification in C/C++ using the OpenCV library. Applications are then partitioned at the transaction level and captured by a combination of OpenCV and SystemC representations. Subsequent refinements with a hardware description language produce a hardware implementation at the register-transfer level, which is then simulated, verified, synthesized, and emulated on RazorCam, an FPGA-based computing infrastructure.
Presenters: Michael Mefenza, Christophe Bobda, Franck Yonga, Kevin Gunn
CSCE Department, University of Arkansas, Fayetteville
Dr. Bobda has been with the University of Arkansas in Fayetteville, AR, as an Associate Professor since August 2010. He received the Licence in mathematics from the University of Yaounde, Cameroon, in 1992, and the diploma in computer science and the Ph.D. degree (with honors) in computer science from the University of Paderborn, Germany, in 1999 and 2003, respectively (in the chair of Prof. Franz J. Rammig). In June 2003 he joined the Department of Computer Science at the University of Erlangen-Nuremberg, Germany, as a postdoctoral researcher under the direction of Prof. Jürgen Teich. Dr. Bobda received the 2003 best dissertation award from the University of Paderborn for his work on the synthesis of reconfigurable systems using temporal partitioning and temporal placement. In 2005 he was appointed Assistant Professor at the University of Kaiserslautern, where he established the Chair for Self-Organizing Embedded Systems, which he led until October 2007. From 2007 to 2010 Dr. Bobda was a Professor at the University of Potsdam, where he led the working group on Computer Engineering.
Dr. Bobda is a Senior Member of the ACM and the IEEE Computer Society. He has served on the program committees of several conferences (FPL, FPT, RAW, RSP, ERSA, RECOSOC, DRS, ICDSC) and on the DATE executive committee as proceedings chair (2004-2010). He has served as a reviewer for several journals (IEEE TC, IEEE TVLSI, IEEE TECS, ACM TODAES, IEEE TRETS, Elsevier Journal of Microprocessors and Microsystems, Integration, the VLSI Journal) and conferences (DAC, DATE, FPL, FPT, SBCCI, RAW, RSP, ERSA, ICDSC, DASIP), as guest editor of the Elsevier Journal of Microprocessors and Microsystems, and as a member of the editorial board of the Hindawi International Journal of Reconfigurable Computing. Dr. Bobda is the author of one of the first comprehensive books in the rapidly growing field of reconfigurable computing.