System Energy Efficiency Lab

Current Research:

Hyperdimensional Computing

Our research focuses on system-efficient, accurate learning solutions using brain-inspired hyperdimensional (HD) computing as a next-generation learning method. HD computing is a computation strategy based on a short-term human memory model, Sparse Distributed Memory, that emerged from neuroscience. It is inspired by the observation that the mathematical properties of high-dimensional spaces can explain key aspects of human memory, perception, and cognition. The well-defined set of arithmetic operations on hypervectors mimics essential brain functions, e.g., approximate data memorization via hypervector addition and inter-data similarity reasoning via vector distance computation. Our research exploits HD computing to develop learning algorithms and applications, e.g., classification of image/voice/sensory data, human activity recognition, and unsupervised learning. The new algorithm suite first maps given data into a non-linear HD space using the HD operations and then performs the learning tasks with lightweight linear combinations of hypervectors. We showed that the HD algorithms learn a given dataset faster than state-of-the-art deep learning approaches, achieving acceptable accuracy in only a few epochs. They can thus be a suitable online solution for IoT environments, eliminating the need for iterative learning at every layer of a deep learning model, which incurs significant energy and memory costs for large problems. The HD algorithm suite is also inherently parallelizable, as the HD operations are dimension-independent. These properties open opportunities for efficient distributed, federated learning in edge computing environments. We are examining the potential of HD computing on various candidate edge computing devices, including low-power devices, GPUs, FPGAs, and emerging platforms such as processing in-memory.
Our evaluations showed that HD computing-based algorithms can deliver up to three orders of magnitude higher energy efficiency than conventional learning algorithms. We are also exploring the feasibility of HD computing as an alternative computing method integrated with today’s systems, providing a complete computational paradigm that is easily applied to learning problems, and we are investigating how to utilize HD computing for a broader range of data analytics applications, such as bioinformatics, beyond classical ML problems.
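The core HD operations lend themselves to a very short software sketch. The Python toy below is our own illustration (the bipolar encoding and the 10,000-dimension choice are assumptions for the example, not code from our systems): hypervector addition provides approximate memorization, and a normalized dot product provides similarity reasoning.

```python
import random

DIM = 10_000  # hypervector dimensionality; HD operations are dimension-independent

def random_hv(rng):
    """A random bipolar hypervector; distinct random vectors are nearly orthogonal."""
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bundle(hvs):
    """Element-wise majority (hypervector addition): approximate memorization of a set."""
    return [1 if sum(col) >= 0 else -1 for col in zip(*hvs)]

def similarity(a, b):
    """Normalized dot product, used for inter-data similarity reasoning."""
    return sum(x * y for x, y in zip(a, b)) / DIM

rng = random.Random(0)
a, b, c = (random_hv(rng) for _ in range(3))
memory = bundle([a, b])
# `memory` stays similar to its members but not to an unrelated vector.
sim_member = similarity(memory, a)   # close to 0.5 for two bundled vectors
sim_other = similarity(memory, c)    # close to 0.0
```

Because every element of a hypervector is processed independently, each loop above parallelizes trivially; that dimension-independence is the property that maps well to GPUs, FPGAs, and processing in-memory.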

Processing in Memory

We live in a world where technological advances continually create more data than we can cope with. Today, many Internet of Things applications analyze raw data by running a variety of algorithms. Running such data-intensive workloads with large datasets on traditional cores results in high energy consumption and slow processing due to the large amount of data movement between memory and processing units. In SEELab, we are working on processing in memory (PIM), a solution that alleviates this data transfer bottleneck: PIM moves some computation from processor cores into the memory hierarchy, processing data where it is stored.

We exploit the properties of emerging memory technologies to find innovative ways to implement logic inside the memory. Over the past few years, the group has been actively working to answer the WHY, the WHERE, the WHEN, and the HOW of processing in memory. We are redesigning memory, all the way from systems to architecture down to the low-level circuits, to enable PIM for applications such as machine learning, bioinformatics, data analytics, and graph processing. Recently, we proposed FloatPIM, a highly parallel and flexible architecture that implements high-precision training and testing of neural networks entirely in memory. The design is flexible enough to support both fixed- and floating-point operations and provides stable training of complex neural networks. Such PIM-based architectures have shown multiple orders of magnitude of improvement in both performance and energy efficiency.
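To make the idea concrete, here is a toy software model of a memory array that computes on its own rows, so data never crosses to a processor core. This is purely illustrative Python of our own; real PIM designs such as FloatPIM realize these operations in circuitry, not software.

```python
class PIMArray:
    """Toy model of a memory array that computes on rows in place.

    Illustrative only: in actual PIM hardware these row operations are
    performed by analog/digital circuits inside the memory itself.
    """
    def __init__(self, rows):
        self.rows = rows  # each row is a list of bits

    def row_and(self, i, j, dest):
        # Bitwise AND of two stored rows, written to a third row,
        # without the operands ever leaving the array.
        self.rows[dest] = [a & b for a, b in zip(self.rows[i], self.rows[j])]

    def row_xor(self, i, j, dest):
        self.rows[dest] = [a ^ b for a, b in zip(self.rows[i], self.rows[j])]

mem = PIMArray([[1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 0, 0, 0]])
mem.row_and(0, 1, 2)   # rows[2] now holds [1, 0, 0, 1]
```

Row-parallel primitives like these are the building blocks from which larger in-memory kernels (additions, multiplications, and ultimately neural network layers) are composed.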

Bioinformatics Acceleration

Outpacing Moore’s law, genomic data is doubling every seven months and is expected to surpass YouTube and Twitter in volume by 2025. Starting with sequencing and alignment (itself one of the slowest steps), bioinformatics pipelines run a variety of algorithms, from variant calling to classification, graph-based analysis, and deep neural networks, for purposes such as understanding disease-causing mutations, personalized treatment, and protein-harvesting drug production, to name a few. The memory/storage and computation requirements of these applications range from hundreds of CPU hours and gigabytes of memory to millions of CPU hours and petabytes of storage. This tremendous amount of data calls for redesigning the entire system stack: superior architectural solutions for memory/storage systems, e.g., intelligent use of the high bandwidth granted by advances in memory hardware, as well as significantly faster computation platforms that enable clinical use of the technology in real time. To achieve this goal, we combine our experience in microbiome algorithms and datasets; processing-in-memory (PIM) acceleration of alignment (central to bioinformatics acceleration), clustering, classification, and deep neural networks; and the associated system software for managing applications mapped to PIM. The aim is a full-stack infrastructure that maps these bioinformatics applications to novel hardware (high-bandwidth FPGAs, GPUs, and near-data accelerators) in an end-to-end manner, while also rethinking the design and implementation of algorithmic alternatives driven by the new hardware infrastructure, e.g., mapping applications onto the hyperdimensional computing paradigm, whose error tolerance lets it benefit from novel technologies such as multi-level memory, which is well suited to storing DNA data.
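Alignment, the kernel at the center of these pipelines, is a small piece of code with an enormous runtime in practice. Below is a minimal sketch of the classic Levenshtein dynamic program (our own illustration; production aligners and our accelerators implement far more elaborate algorithms).

```python
def edit_distance(a, b):
    """Row-by-row Levenshtein distance, the textbook alignment kernel."""
    prev = list(range(len(b) + 1))          # DP row for the empty prefix of `a`
    for i, ca in enumerate(a, 1):
        cur = [i]                            # cost of deleting i characters
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,      # deletion
                           cur[j - 1] + 1,   # insertion
                           prev[j - 1] + (ca != cb)))  # match/substitution
        prev = cur
    return prev[-1]
```

Each anti-diagonal of the DP matrix can be computed in parallel, which is one reason this kernel is an attractive target for PIM and FPGA acceleration.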

IoT Management and Reliability

The Internet of Things is a growing network of heterogeneous devices, combining commercial, industrial, residential, and cloud-fog computing domains. These devices range from low-power sensors with limited capabilities to multi-core platforms at the high end. The common property of these devices is that they age, degrade, and eventually require maintenance in the form of repair, component replacement, or complete device replacement. In general, power dissipation makes device temperature rise, which creates thermal stress that dramatically increases the impact of reliability degradation mechanisms, leading to early failures. To analyze the effects of reliability degradation in IoT networks, we implemented a reliability framework; using it, we explore and optimize trade-offs between energy, performance, and reliability.

We work on “maintenance-preventive” dynamic control strategies for IoT devices to minimize the often unforeseen and ignored costs of maintenance. Our work has already demonstrated the importance of dynamic reliability management in mobile systems by controlling frequency, voltage, and core allocations while respecting user experience constraints. Our goal is to extend this to the whole IoT domain. Initially, we showed that the battery health of IoT devices can be improved with reliability-aware network management. We further propose optimal control strategies for diverse devices: sensor devices by adjusting their sampling and communication rates, and high-end devices by controlling their frequency and voltage levels. The solutions are distributed and work to prevent maintenance costs while keeping operational costs to a minimum and data quality within desired limits. We also develop smart path selection and workload offloading algorithms that ensure a balanced distribution of reliability across the network. The combined approach minimizes operational and expected maintenance costs in a distributed and scalable fashion while respecting the user and data quality constraints imposed by end-to-end IoT applications.

Trajectories for Persistent Monitoring

Traditionally, environmental phenomena have been measured using stationary sensors configured into wireless sensor networks or through participatory sensing by user-carried devices. Since the phenomena are typically highly correlated in time and space, each reading from a stationary sensor is less informative than that of a similar sensor on the move. User-carried sensors can take more informative readings, but we have no control over where they travel. Our work on Trajectories for Persistent Monitoring helps close this gap by optimizing the paths that robotic platforms travel to collect informative data samples. Beyond informativeness, the trajectory is also optimized for the length of time a point of interest is observed and to maintain communication networks for teams of drones.
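As a toy illustration of the underlying trade-off, the sketch below greedily picks the reachable waypoint with the best information-per-distance ratio under a travel budget. This heuristic is our own invention for this page, not the lab's planner; real planners optimize the whole trajectory, observation time, and connectivity jointly.

```python
import math

def greedy_informative_path(start, info, budget):
    """Greedy waypoint selection under a travel budget.

    `info` maps candidate points (x, y) to hypothetical information values.
    Toy heuristic for illustration only.
    """
    path, pos, remaining = [start], start, budget
    pool = dict(info)
    while True:
        reachable = {p: v for p, v in pool.items()
                     if math.dist(pos, p) <= remaining}
        if not reachable:
            return path
        # Favor high information per unit of travel distance.
        best = max(reachable,
                   key=lambda p: reachable[p] / (math.dist(pos, p) + 1e-9))
        remaining -= math.dist(pos, best)
        pos = best
        path.append(best)
        del pool[best]

# With a budget of 4, the planner prefers the high-value point at (3, 0)
# over the nearer but less informative one at (1, 0).
path = greedy_informative_path((0, 0),
                               {(1, 0): 1.0, (3, 0): 9.0, (10, 0): 100.0},
                               4.0)
```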

Calibration Models for Environmental Monitoring

Sensor nodes at the edge of the Internet of Things often require sensor-specific calibration functions that relate input features to a phenomenon of interest. For example, in air quality sensing, the calibration function transforms input data from onboard sensors into target pollutant concentrations; for application power prediction, internal performance metrics can be used to predict device power. Edge devices are typically resource-constrained, meaning that traditional machine learning models are difficult to fit into the available storage and on-device training can strain the available processing capabilities. We seek novel methods of reducing the complexity of training machine learning models on the edge by efficiently reducing training datasets, focusing calibration effort on important regions using application-specific loss functions, and improving regression methods for resource-constrained devices.
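A minimal example of such a calibration function is a two-parameter least-squares fit, cheap enough to train on-device. The raw sensor counts and reference pollutant values below are hypothetical numbers chosen for illustration.

```python
def fit_calibration(raw, reference):
    """Ordinary least squares for a two-parameter calibration
    y = gain * x + offset."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in raw)
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, reference))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# Hypothetical raw sensor counts vs. co-located reference readings.
raw = [120, 150, 180, 210]
ref = [10.0, 13.0, 16.0, 19.0]
gain, offset = fit_calibration(raw, ref)
corrected = [gain * x + offset for x in raw]
```

Reducing the training set or reweighting the loss toward the pollutant ranges that matter for an application changes only `raw`/`ref` and the error terms in the fit, which is what makes these methods attractive on constrained devices.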

Past Research:

Approximate Computing

Today’s computing systems are designed to deliver only exact solutions, at high energy cost, even though many of the algorithms run on data are statistical at heart and do not require exact answers. We are working on a framework to optimally and simultaneously trade off accuracy and efficiency across the software and hardware stacks of IoT applications. In addition, running machine learning algorithms on embedded devices is crucial, as many applications require real-time response; however, hardware implementation cost and high computation energy are the main bottlenecks of machine learning algorithms in the big data domain. We search for alternative architectures that address the compute cost and data movement issues of traditional cores.
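The accuracy-efficiency trade-off can be illustrated in a few lines of Python that emulate reduced-precision hardware by quantizing operands (a toy stand-in of our own, not the actual framework):

```python
def quantize(x, bits):
    """Round to `bits` fractional bits -- a software stand-in for
    reduced-precision hardware: fewer bits, cheaper arithmetic, more error."""
    scale = 1 << bits
    return round(x * scale) / scale

def approx_dot(a, b, bits):
    """Dot product on quantized inputs; accuracy is tunable via `bits`."""
    return sum(quantize(x, bits) * quantize(y, bits) for x, y in zip(a, b))

a, b = [0.1, 0.2, 0.3], [0.4, 0.5, 0.6]
exact = sum(x * y for x, y in zip(a, b))        # about 0.32
error_hi = abs(approx_dot(a, b, 12) - exact)    # many bits: tiny error
error_lo = abs(approx_dot(a, b, 2) - exact)     # few bits: larger error
```

A statistical kernel that tolerates `error_lo` can run on hardware that spends a fraction of the energy, which is exactly the degree of freedom an approximate computing framework optimizes.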

The Internet of Things, Smart Cities, and Wireless Healthcare

In an increasingly informed world, generating and processing information spans several computing domains, from datacenters to embedded systems. SEELab research in this area ranges from efficient distributed data collection and aggregation to processing and adapting to this data in smart cities, data centers, the distributed smart grid, and wireless healthcare applications.

The Internet of Things with Applications to Smart Grid and Green Energy

The Internet of Things creates both new opportunities and challenges in several different domains. The abundance of data helps researchers better understand their surroundings and create effective, automated actuation solutions. SEELab's research efforts on this topic aim to solve several problems, including renewable energy integration in large-scale systems, individual load energy reduction and automation, energy storage, context-aware energy management, better prediction mechanisms, user activity modeling, and smart grid pricing and load integration. To solve these problems, we design and implement multiple tools that not only model and analyze smaller individual pieces but also create a comprehensive representation of this vast environment.

Wireless Healthcare

With the proliferation of personal mobile computing via mobile phones and the advent of cheap, small sensors, we propose that a new kind of "citizen infrastructure" can be made pervasive at low cost and high value. Though challenges abound in mobile power management, data security, privacy, inference with commodity sensors, and "polite" user notification, the overriding challenge lies in integrating the parts into a seamless yet modular whole that can make the most of each piece of the solution at every point in time through dynamic adaptation. Using existing integration methodologies would cause components to hide essential information from each other, limiting optimization possibilities; emphasizing seamlessness and information sharing, on the other hand, would result in a monolithic solution that could not be modularly configured, adapted, maintained, or upgraded.

IoT System Characterization and Management: from Data Centers to Smart Devices and Sensors

The Internet of Things is a growing network of heterogeneous devices, combining commercial, industrial, residential, and cloud-fog computing domains. These devices range from low-power sensors with limited capabilities to multi-core platforms at the high end. IoT systems create both new opportunities and challenges in several different domains. The abundance of data helps researchers better understand their surroundings and create automated solutions that effectively model and manage the diverse constrained resources in IoT devices and networks, including power, performance, thermal, reliability, and variability. SEELab's research efforts on this topic aim to solve several problems, including renewable energy integration in large-scale systems, individual load energy reduction and automation, energy storage, context-aware energy management for smart devices, user activity modeling, and smart grid pricing and load integration. To solve these problems, we design and implement multiple tools that not only model and analyze smaller individual pieces but also create a comprehensive representation of this vast environment.


Long-term research requiring high-resolution sensor data needs platforms large enough to house solar panels and batteries. Leveraging a well-defined sensor appliance created using Sensor-Rocks, we develop novel context-aware power management algorithms to maximize network lifetime and provide unprecedented capability on miniaturized platforms.

Energy Efficient Routing and Scheduling For Ad-Hoc Wireless Networks

In large-scale ad hoc wireless networks, data delivery is complicated by the lack of network infrastructure and limited energy resources. We propose a novel scheduling and routing strategy for ad hoc wireless networks which achieves up to 60% power savings while delivering data efficiently. We test our ideas on a heterogeneous wireless sensor network deployed in southern California - HPWREN.


SHiMmer is a wireless platform that combines active sensing and localized processing with energy harvesting to provide long-lived structural health monitoring. Unlike other sensor networks that periodically monitor a structure and route information to a base station, our device acquires data and processes it locally before communicating with an external device, such as a remote-controlled helicopter.

Event-driven Power Management

Power management (PM) algorithms aim at reducing energy consumption at the system-level by selectively placing components into low-power states. Formerly, two classes of heuristic algorithms have been proposed for power management: timeout and predictive. Later, a category of algorithms based on stochastic control was proposed for power management. These algorithms guarantee optimal results as long as the system that is power managed can be modeled well with exponential distributions. Another advantage is that they can meet performance constraints, something that is not possible with heuristics. We show that there is a large mismatch between measurements and simulation results if the exponential distribution is used to model all user request arrivals. We develop two new approaches that better model system behavior for general user request distributions. These approaches are event driven and give optimal results verified by measurements. The first approach is based on renewal theory. This model assumes that the decision to transition to low power state can be made in only one state. Another method we developed is based on the Time-Indexed Semi-Markov Decision Process model (TISMDP). This model allows for transitions into low power states from any state, but it is also more complex than our other approach. The results obtained by renewal model are guaranteed to match results obtained by TISMDP model, as both approaches give globally optimal solutions. We implemented our power management algorithms on two different classes of devices and the measurement results show power savings ranging from a factor of 1.7 up to 5.0 with insignificant variation in performance.
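As a concrete illustration of what a timeout heuristic costs, here is a toy energy model in Python. The power numbers and idle-time trace are made up for illustration; they are not measurements from our studies.

```python
def timeout_policy_energy(idle_gaps, timeout, active_power, sleep_power,
                          wake_energy):
    """Energy of a simple timeout policy: the device stays active for
    `timeout` seconds after each request, then sleeps until the next one."""
    energy = 0.0
    for gap in idle_gaps:  # seconds of idleness between requests
        if gap <= timeout:
            energy += gap * active_power          # never went to sleep
        else:
            energy += timeout * active_power      # waiting out the timeout
            energy += (gap - timeout) * sleep_power
            energy += wake_energy                 # wake-up transition cost
    return energy

gaps = [1.0, 10.0]   # hypothetical idle-time trace (seconds)
eager = timeout_policy_energy(gaps, 0.5, 1.0, 0.1, 0.5)  # short timeout
lazy = timeout_policy_energy(gaps, 5.0, 1.0, 0.1, 0.5)   # long timeout
```

Which timeout wins depends entirely on the distribution of idle gaps, and a fixed timeout can never be optimal for all of them; stochastic approaches such as our renewal-theory and TISMDP models optimize the sleep decision against the measured request distribution instead.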

Energy-efficient software design

Time to market of embedded software has become a crucial issue. As a result, embedded software designers often use libraries that have been pre-optimized for a given processor to achieve higher code quality. Unfortunately, current software design methodology often leaves high-level arithmetic optimizations and the use of complex library elements up to the designer's ingenuity. We present a tool flow and a methodology that automate the use of complex processor instructions and pre-optimized software library routines using symbolic algebraic techniques. The flow leverages our profiler, which relates energy consumption to the source code and allows designers to quickly obtain an energy consumption breakdown by procedure.

Energy-efficient wireless communication

Today’s wireless networks are highly heterogeneous, with diverse range and QoS requirements. Since battery lifetime is limited, power management of the communication interfaces without significant performance degradation has become essential. We have developed a set of approaches that efficiently reduce power consumption across different environments and applications. When multiple wireless network interfaces (WNICs) are available, we propose a policy that decides which WNIC to employ for a given application and how to optimize its usage, leading to a large improvement in power savings. For client-server multimedia applications running on wireless portable devices, we can exploit the server's knowledge of the workload: we present client- and server-side power managers that, by exchanging power control information, achieve savings of more than 67% with no performance loss. Wireless communication is also a critical aspect of specific applications such as distributed speech recognition on portable devices. We consider quality-of-service trade-offs and overall system latency, and present a wireless LAN scheduling algorithm that minimizes the energy consumption of a distributed speech recognition front-end.