The last decade has seen the emergence of powerful new machine learning techniques that are essential for extracting useful insights from the data generated by interconnected, sensor-enabled devices. However, these algorithms are often too computationally demanding for resource-constrained edge devices, necessitating novel algorithmic techniques that deliver high accuracy while meeting the power and memory budgets of embedded hardware. Research in SEE Lab tackles these questions across all levels of system design: developing novel algorithmic approaches for machine learning, building optimized hardware based on emerging technologies, and distributing computation and communication throughout a system to extend its lifetime.

HD Computing and Energy-Efficient Learning

SEE Lab’s algorithmic work on improving the efficiency of machine learning takes inspiration from one of the most efficient learning systems devised to date: the human brain! Our work leverages the paradigm of “hyperdimensional computing” (HD), originally developed by neuroscientists as a mathematically rigorous model of human memory, to build highly efficient alternatives to classical machine learning algorithms for classification and clustering. Our lab both develops algorithmic techniques and applies this research to a broad set of learning problems, including DNA sequencing, secure distributed learning, and low-power learning for IoT applications. Beyond these algorithmic approaches, our group also develops novel hardware platforms to run them.
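To make the paradigm concrete, the sketch below shows the core HD classification loop: encode each input into a high-dimensional bipolar vector, bundle training encodings into one prototype per class, and classify by cosine similarity. This is a minimal illustration of the general technique, not SEE Lab’s specific pipeline; the dimensionality and encoding choices are assumptions.

```python
import numpy as np

D = 10_000  # hypervector dimensionality; large D is what makes HD robust
rng = np.random.default_rng(0)

def make_basis(num_features):
    # One random bipolar "basis" hypervector per input feature.
    return rng.choice([-1, 1], size=(num_features, D))

def encode(sample, basis):
    # Superimpose (bundle) the basis vectors weighted by feature values,
    # then quantize back to a bipolar hypervector.
    return np.sign(sample @ basis)

def train(samples, labels, basis):
    # A class prototype is simply the bundled sum of its training encodings.
    protos = {}
    for x, y in zip(samples, labels):
        protos[y] = protos.get(y, np.zeros(D)) + encode(x, basis)
    return protos

def classify(sample, protos, basis):
    # Predict the class whose prototype is most similar (cosine similarity;
    # the query norm is constant across classes, so only proto norms matter).
    h = encode(sample, basis)
    return max(protos, key=lambda y: h @ protos[y] / np.linalg.norm(protos[y]))

# Smoke test on synthetic data (illustrative only).
X = rng.normal(size=(100, 16))
y = (X[:, 0] > 0).astype(int)
basis = make_basis(16)
protos = train(X, y, basis)
print(classify(X[0], protos, basis), y[0])
```

Because training and inference reduce to additions and dot products on (near-)binary vectors, this style of model maps naturally onto the low-power hardware discussed next.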

Hardware Design and Optimization

The vast strides made by the machine learning community over the past decade have been enabled by equally significant advances in hardware architectures that provide the computational speed and memory needed to run state-of-the-art learning algorithms. SEE Lab’s hardware team continues to push this frontier by developing novel hardware accelerators targeting both general-purpose machine learning and specialized algorithms based on HD computing. Research in SEE Lab focuses on three main hardware platforms: field-programmable gate arrays (FPGAs), GPUs, and processing-in-memory (PIM) architectures. Our work targets all levels of the design process, from new APIs that allow programmers to rapidly prototype ideas on FPGAs and GPUs, to novel techniques for floating-point operations in PIM. PIM is a promising approach that removes the data-transfer bottleneck between memory and a centralized processor, enabling massively parallel computation. One of our recent major achievements was the development of “FloatPIM,” which enables training a neural network directly in memory, resulting in a 302x improvement in training speed compared to a state-of-the-art GPU-based implementation.
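To see why data movement is the bottleneck that PIM targets, consider a rough back-of-the-envelope comparison. The energy constants below are order-of-magnitude assumptions (roughly in line with published 45 nm estimates), not SEE Lab measurements:

```python
# Illustrative comparison of data-movement vs. compute energy for one pass
# over a fully connected layer. All constants are assumed, not measured.
E_DRAM_ACCESS = 640e-12   # J per 32-bit DRAM read (assumed)
E_MAC         = 3e-12     # J per 32-bit multiply-accumulate (assumed)

weights = 4096 * 4096     # parameters in one hypothetical layer

move_energy    = weights * E_DRAM_ACCESS  # fetch every weight from DRAM
compute_energy = weights * E_MAC          # one MAC per weight

print(f"data movement: {move_energy * 1e3:.1f} mJ")
print(f"compute:       {compute_energy * 1e3:.2f} mJ")
print(f"ratio:         {move_energy / compute_energy:.0f}x")
```

Under these assumptions, moving the weights costs two orders of magnitude more energy than computing on them, which is exactly the cost PIM avoids by performing the operations where the data already resides.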

Sensing Networks

Internet of Things networks continue to grow in size and prevalence, giving us unprecedented insight into the world around us. As more and more sensors are deployed, a critical question is how to keep these networks functioning and processing information efficiently. At SEE Lab, we examine this question from several angles: efficient information processing across a network of heterogeneous devices, dynamic control strategies that minimize the often unforeseen and ignored costs of maintenance, and robotic platforms that maximize the information gained during environmental sensing.

As information travels from the edge nodes that collect data to devices that can act on it, multiple strategies can minimize system energy consumption. We have explored packet aggregation to optimize information transfer across energy, delay, and real-time constraints (a toy model of this tradeoff appears below). Rather than sending all information from the edge to a centralized node, we have also developed strategies for distributing machine learning computation hierarchically. Every computation and transmission consumes energy and generates heat, which can degrade the reliability of a network of devices. We have developed simulators that model the reliability of large-scale IoT networks, along with distributed control strategies and sensor-node placements that help maximize a network’s lifetime.
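The sketch below illustrates the aggregation tradeoff in its simplest form: bundling more readings per packet amortizes the fixed per-transmission overhead, but the application’s delay budget caps how long readings can wait. All constants are illustrative assumptions:

```python
# Toy model of packet aggregation: each radio transmission pays a fixed
# per-packet overhead (headers, radio wake-up), so bundling k readings
# amortizes that cost -- at the price of delaying the oldest reading.
E_OVERHEAD = 50e-6    # J fixed cost per transmission (assumed)
E_PER_BYTE = 1e-6     # J per payload byte (assumed)
READING_BYTES = 8     # bytes per sensor reading (assumed)
PERIOD = 1.0          # s between readings
DELAY_BUDGET = 10.0   # s max staleness allowed by the application

def energy_per_reading(k):
    # Average energy to deliver one reading when k readings share a packet.
    return (E_OVERHEAD + k * READING_BYTES * E_PER_BYTE) / k

def worst_case_delay(k):
    # The oldest reading waits for k-1 later readings before transmission.
    return (k - 1) * PERIOD

# Largest aggregation factor that still meets the delay budget.
best_k = max(k for k in range(1, 101) if worst_case_delay(k) <= DELAY_BUDGET)
print(f"k={best_k}: {energy_per_reading(1) * 1e6:.1f} uJ -> "
      f"{energy_per_reading(best_k) * 1e6:.1f} uJ per reading")
```

Even this toy model shows a several-fold energy reduction per delivered reading; real deployments must additionally account for channel contention, retransmissions, and heterogeneous link costs.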

Trajectories for Persistent Monitoring

Traditionally, environmental phenomena have been measured by stationary sensors organized into wireless sensor networks or through participatory sensing with user-carried devices. Since these phenomena are typically highly correlated in time and space, each reading from a stationary sensor is less informative than a reading from a similarly capable sensor on the move. User-carried sensors can take more informative readings, but we have no control over where they travel. Our work on trajectories for persistent monitoring closes this gap by optimizing the paths that robotic platforms travel to maximize the information gained from data samples. We form multi-objective plans that combine information gain with additional goals, such as responsiveness to dynamic points of interest, multi-sensor fusion, and information transfer using cognitive radios. The resulting robots can adapt to dynamic environments to rapidly detect evolving wildfires, support first responders in emergencies, and collect information that improves regional air quality models.
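The sketch below illustrates the core idea with a greedy planner: at each step, the robot moves to the reachable candidate location whose value is currently most uncertain under a simple spatial-correlation model. This is a toy illustration of information-gain-driven trajectories, not our actual planner; the kernel, length scale, and travel range are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(200, 2))  # candidate sensing locations
LENGTH_SCALE = 15.0                        # assumed spatial correlation scale
RANGE = 25.0                               # assumed max travel per step

def kernel(a, b):
    # Squared-exponential correlation between two sets of locations.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * LENGTH_SCALE ** 2))

def plan(start_idx, steps=5, noise=1e-3):
    visited = [start_idx]
    for _ in range(steps):
        K_vv = kernel(pts[visited], pts[visited]) + noise * np.eye(len(visited))
        K_cv = kernel(pts, pts[visited])
        # Posterior variance at every candidate, given the visited set
        # (unit prior variance assumed).
        var = 1.0 - np.einsum('ij,jk,ik->i', K_cv, np.linalg.inv(K_vv), K_cv)
        # Restrict to points reachable this step; pick the most uncertain.
        dist = np.linalg.norm(pts - pts[visited[-1]], axis=1)
        var[dist > RANGE] = -np.inf
        var[visited] = -np.inf
        visited.append(int(np.argmax(var)))
    return pts[visited]

print(plan(start_idx=0))
```

The greedy rule naturally spreads samples away from already-visited, highly correlated regions; multi-objective extensions add terms for responsiveness, fusion, and communication to the same score.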

Calibration Models for Environmental Sensing

Sensor nodes at the edge of the Internet of Things often require sensor-specific calibration functions that relate input features to a phenomenon of interest. For example, in air quality sensing, the calibration function transforms readings from onboard sensors into target pollutant concentrations; for application power prediction, internal performance metrics can be used to predict device power. Because edge devices are resource constrained, traditional machine learning models may not fit in the available storage, and on-device training can strain the available processing capabilities. We seek novel methods of reducing the complexity of training machine learning models on the edge: efficiently reducing training datasets, focusing calibration effort on important regions using application-specific loss functions, and improving regression methods for resource-constrained devices.
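One concrete fit for such constraints is online linear calibration via recursive least squares, which folds each new (features, reference) pair into the model and then discards it, so memory stays fixed no matter how many samples stream through. The sketch below is a generic illustration under assumed feature names, not a specific SEE Lab method:

```python
import numpy as np

class RLSCalibrator:
    """Online linear calibration via recursive least squares.

    Memory is O(d^2) for d input features regardless of how many samples
    arrive, which suits storage-constrained edge nodes.
    """
    def __init__(self, dim, lam=0.999, delta=100.0):
        self.w = np.zeros(dim)        # calibration weights
        self.P = delta * np.eye(dim)  # inverse-covariance estimate
        self.lam = lam                # forgetting factor, tracks sensor drift

    def update(self, x, y):
        # Standard RLS update: incorporate one sample, then discard it.
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        self.w += gain * (y - x @ self.w)
        self.P = (self.P - np.outer(gain, Px)) / self.lam

    def predict(self, x):
        return x @ self.w

# Hypothetical features: [1, raw_no2_mV, temperature_C, humidity_pct]
cal = RLSCalibrator(dim=4)
cal.update(np.array([1.0, 210.0, 22.5, 40.0]), 31.0)  # reference reading
print(cal.predict(np.array([1.0, 215.0, 22.0, 42.0])))
```

The forgetting factor lets the same fixed-memory update track slow sensor drift, one of the practical headaches of long-lived environmental deployments.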

The Internet of Things, Smart Cities, and Wireless Healthcare

In an increasingly connected world, generating and processing information spans computing domains from embedded systems in smart appliances to the datacenters powering the cloud. We have worked on efficient distributed data collection and aggregation that processes data in a hierarchical, context-focused manner. Hierarchical processing lets systems distill the relevant information, increase privacy, and optimize communication energy for smart city, data center, and distributed smart grid and healthcare applications.
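As a toy illustration of the bandwidth and privacy benefit, the sketch below has each gateway reduce its sensors’ raw readings to a small summary before anything travels upstream; the node counts and byte sizes are illustrative assumptions:

```python
import numpy as np

# Each edge gateway distills its sensors' raw readings into a small summary
# (count, mean, max); only summaries travel upstream, so raw readings never
# leave the gateway. All sizes below are illustrative assumptions.
rng = np.random.default_rng(2)
READING_BYTES, SUMMARY_BYTES = 8, 24

def summarize(readings):
    # The context-relevant statistics the upper tier actually needs.
    return len(readings), float(readings.mean()), float(readings.max())

gateways = [rng.normal(25, 3, size=1000) for _ in range(10)]  # raw temps
summaries = [summarize(r) for r in gateways]

raw_bytes = sum(len(r) for r in gateways) * READING_BYTES
agg_bytes = len(summaries) * SUMMARY_BYTES
print(f"centralized: {raw_bytes} B, hierarchical: {agg_bytes} B "
      f"({raw_bytes / agg_bytes:.0f}x less upstream traffic)")
```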

The Internet of Things with Applications to Smart Grid and Green Energy

The emergence of the Internet of Things has produced an abundance of data that can help researchers better understand their surroundings and create effective, automated actuation solutions. Our research on this topic targets several problems: (1) renewable energy integration and smart grid pricing in large-scale systems, (2) individual load energy reduction and automation, and (3) improved prediction mechanisms for context-aware energy management that leverage user activity modeling. We have designed and implemented tools that range from individual device predictors to a comprehensive representation of this vast environment.
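As a minimal illustration of such a predictor, the sketch below forecasts the next hour’s load from hour-of-day context and the previous hour’s usage with ordinary least squares. The data is synthetic and the feature set is an assumption; a real deployment would use measured loads and richer user-activity features:

```python
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(24 * 60) % 24  # 60 days of hourly slots
# Synthetic household load: evening-peaked sine plus noise (kW).
load = 1.0 + 0.6 * np.sin((hours - 12) * np.pi / 12) + rng.normal(0, 0.1, hours.size)

def features(hour, prev_load):
    # Encode the hour cyclically so 23:00 and 00:00 are treated as neighbors.
    angle = 2 * np.pi * hour / 24
    return [1.0, np.sin(angle), np.cos(angle), prev_load]

X = np.array([features(hours[t], load[t - 1]) for t in range(1, hours.size)])
y = load[1:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"next-hour forecast: {X[-1] @ w:.2f} kW (actual {y[-1]:.2f} kW)")
```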


