How computers see is the through-line running from Lazar Supic’s Bachelor’s degree in Electrical Engineering at the University of Belgrade, to his UC Berkeley PhD in Nuclear Engineering, to his current work as a postdoc in UC Berkeley’s DeepDrive Industry Consortium (BDD). His work has taken a path that began with computer vision in robots, moved to gamma-ray tracking in nuclear reactions, and now focuses on machine learning for automotive perception. Supic’s research combines the most advanced and computationally intensive forms of machine learning with real-time decision making on automated devices. “Real-time” in this case means that a driverless car might have less than one tenth of a second to determine whether a rapidly changing environment requires applying the brakes, turning the steering wheel, or maintaining the current speed and direction.
The Berkeley DeepDrive Industry Consortium is an interdisciplinary effort focusing on deep automotive perception that spans the campus’s Institute of Transportation Studies, Department of Electrical Engineering and Computer Science, the Center for Information Technology Research in the Interest of Society (CITRIS), and the Berkeley Vision and Learning Center. Supic is a member of EECS Professor Vladimir Stojanovic’s research group, alongside fellow postdocs Rawan Naous and Ranko Sredojevic; Professor Stojanovic is among the faculty leading the BDD.
About the Science
“The ultimate goal of the DeepDrive project is enabling self-driving cars ‘to see’ through advanced perception techniques based on deep neural networks (DNNs),” Supic explains. He adds, “Our current DNN work, which focuses on DNN compression, is more general than driverless cars. Our DNN compression algorithms can in fact be applied to many other areas of machine learning, such as natural language processing.”
“Our work on perception using DNNs is motivated by the fact that we now have huge chunks of data; this trend is captured by the term ‘Big Data,’” Supic explains. “However, Big Data alone is not enough; we must have algorithms that can make sense of that data. The interesting thing about using DNNs for this is that they can be trained to make sense of the very large and complex datasets really well. In these state-of-the-art DNNs, convolutional layers extract key features from data, and based on these features, they can start to recognize the differences between classes of data.” In the self-driving car case, classes of images might include cars, people, buildings, or roads, and the importance of correctly differentiating among these classes is paramount.
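To make the idea concrete, here is a minimal sketch of the kind of network Supic describes: stacked convolutional layers extract features, and a final layer assigns one of a handful of classes. The model, layer sizes, image dimensions, and class names below are illustrative assumptions for this article, not the BDD models.

```python
# Illustrative only: a toy tf.keras classifier (not a BDD model) showing how
# convolutional layers extract features before a final layer assigns one of a
# few classes (e.g. car, person, building, road).
import tensorflow as tf

def build_toy_classifier(num_classes: int = 4) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),            # small RGB frames (assumed size)
        tf.keras.layers.Conv2D(16, 3, activation="relu"),    # low-level features: edges, corners
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),    # higher-level features: shapes, parts
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),  # per-class probabilities
    ])

model = build_toy_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A production perception network would be far deeper and trained on enormous labeled datasets; the point here is only the structure: convolutional feature extraction followed by classification.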
DNN models are essentially mathematical representations of a learning process. Information about the environment is fed into the DNN model in the form of a training dataset, which teaches the model to make inferences about data it has not previously evaluated. Just two or three years ago, it might have taken days or weeks to train complex DNN models to differentiate between classes of data. However, researchers gain an enormous advantage when the computation that trains and executes models takes less time. For a driverless vehicle to sense, infer, and respond to environmental conditions in less than 100 ms, models must be compressed, and computational hardware must run faster and in a smaller physical footprint than even the most advanced commercially available processors used in high-performance computing clusters.
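The 100 ms figure is a per-frame latency budget. A rough sketch of how one might sanity-check a trained model against such a budget is shown below; the tiny stand-in model, frame size, and random input are placeholders, not the BDD setup or its measurement methodology.

```python
# Rough latency check against a ~100 ms per-frame budget.
# The model and frame size here are placeholders, not the BDD configuration.
import time
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([                       # stand-in for a trained perception model
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

frame = np.random.rand(1, 64, 64, 3).astype("float32")  # one synthetic camera frame
model(frame, training=False)                             # warm-up call so setup cost isn't timed

runs = 100
start = time.perf_counter()
for _ in range(runs):
    model(frame, training=False)
latency_ms = (time.perf_counter() - start) / runs * 1000
print(f"mean inference latency: {latency_ms:.1f} ms (budget: under 100 ms)")
```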
The need for faster training and execution of DNN models defines the research goals of Supic’s project: first, start with the uncompressed DNN models; next, compress them with novel algorithms so that they require fewer computational steps while losing minimal accuracy; then, develop hardware specially tuned for running the compressed models. In preliminary results, the project has measured inference runtimes “almost five times faster than currently available embedded hardware platforms,” Supic explains. A paper describing these results is being submitted to a peer-reviewed journal this spring.
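The article does not describe the BDD compression algorithms themselves (they are still under peer review), so the sketch below uses generic magnitude pruning purely as a stand-in to illustrate the compress-then-recheck-accuracy workflow; the helper name and the toy model are hypothetical.

```python
# Generic magnitude pruning, used only as a stand-in for the BDD compression
# algorithms: zero the smallest-magnitude weights, then re-check accuracy.
import numpy as np
import tensorflow as tf

def prune_smallest_weights(model: tf.keras.Model, sparsity: float = 0.5) -> tf.keras.Model:
    """Zero out the smallest-magnitude entries of each layer's kernel."""
    for layer in model.layers:
        weights = layer.get_weights()
        if not weights:
            continue                                    # skip layers with no parameters
        kernel = weights[0]
        cutoff = np.quantile(np.abs(kernel), sparsity)  # magnitude below which weights are dropped
        weights[0] = np.where(np.abs(kernel) < cutoff, 0.0, kernel)
        layer.set_weights(weights)
    return model

toy = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
prune_smallest_weights(toy, sparsity=0.5)
zeroed = sum(int(np.sum(layer.get_weights()[0] == 0))
             for layer in toy.layers if layer.get_weights())
print(f"zeroed weights after pruning: {zeroed}")
```

In practice, a compressed model is re-evaluated on held-out data to confirm that accuracy loss stays small, and the speedup only materializes when the hardware can exploit the reduced computation, which is exactly the pairing of algorithms and specialized hardware the project targets.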
Supporting Berkeley DeepDrive Research
Supic has been using the TensorFlow machine learning framework for his DNN work on a variety of computational resources, including local workstations and Amazon Web Services virtual machines.
When the BDD team began using Savio, the campus’s shared High Performance Computing cluster run by Berkeley Research Computing (BRC), they reached out to the BRC support team for help on several fronts. Together, the teams set up a computing environment optimized for the project’s goals, including the choice of operating system and TensorFlow version. Supic was interested in using NVIDIA Docker containers optimized to run on NVIDIA GPUs: “Using Docker and these containers,” Supic explained, “I could easily reproduce my results even when running BDD algorithms on different computational resources.” After consulting with BRC operations intern Nicolas Chan, Supic learned that his preferred Docker container could be converted to a Singularity container and then run on Savio and other HPC clusters. Singularity converts Docker containers to a form that can run under the security models typical of HPC clusters, without impacting computational performance.
Supic was thus able to take advantage of the free Faculty Computing Allowance (FCA) on Savio, pooled from several faculty who participate in Berkeley DeepDrive, saving his team hundreds of dollars per day in vendor-cloud charges. “Nicolas helped me get it working, and I really appreciated his support,” he says. “That Berkeley Research Computing was willing and able to work with me to achieve a research goal was very helpful.”
What's Next?
Supic’s current research focuses on DNN compression algorithms that operate on two-dimensional images captured with a camera. Next steps are likely to include development of compressed DNN models to more rapidly process three-dimensional data sets from LiDAR sensors. “This would let us see what the limits of our approach are when applied to even more complex datasets,” Supic says, “such as point cloud or RGB-D [Red-Green-Blue plus Depth].”
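A back-of-the-envelope comparison helps show why these 3-D datasets are more demanding than camera images. The sizes below are assumptions for illustration, not BDD sensor specifications.

```python
# Rough size comparison of per-frame data (assumed, illustrative dimensions):
# a camera image, an RGB-D frame, and a LiDAR point cloud.
import numpy as np

rgb_frame   = np.zeros((720, 1280, 3), dtype=np.uint8)    # camera: height x width x RGB
rgbd_frame  = np.zeros((720, 1280, 4), dtype=np.float32)  # RGB-D: adds a depth channel
point_cloud = np.zeros((120_000, 4), dtype=np.float32)    # LiDAR: x, y, z, intensity per point

for name, arr in [("RGB", rgb_frame), ("RGB-D", rgbd_frame), ("point cloud", point_cloud)]:
    print(f"{name:12s} shape={arr.shape}, size={arr.nbytes / 1e6:.1f} MB")
```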
Timely verification of algorithms that compress DNN models, identify interesting features in the data, and make category inferences on more complex datasets will, of course, require even more computational power. Supic is already working with Nicolas Chan and BRC Domain Consultant Oliver Mullerklein to explore whether he can test BDD’s current algorithms on a larger set of nodes than he can easily access with FCA hours on Savio, by applying for an allocation on national HPC resources provided by XSEDE. The powerful clusters offered through the XSEDE program could prove a valuable resource as the complexity of the DNN models used by Berkeley DeepDrive researchers continues to grow.
Supic noted, “At the beginning of the twentieth century, as machine-powered industrialization rapidly evolved, artists and philosophers became interested in the aesthetics of efficiency. Similarly, in my own work I really feel that the machine learning efficiency we are developing to enable car computers to see is both powerful and beautiful in its own right.”