System Energy Efficiency Lab

Alternative Memory/Computing Technology

In the Alternative Memory/Computing Technology (ACMS) project, we search for alternative computer architectures that address the memory and computing bottlenecks of current processors through techniques including approximate computing, near-data computing, and in-memory processing. We also work on machine learning and neuromorphic computing to replace traditional computing systems with smart processors that work more like the human brain. To make these designs practical, we work on both the technology (emerging non-volatile memories) and the design aspects, in collaboration with groups at UC San Diego and UC Berkeley. Here is the list of our active projects:

Approximate computing: We propose a novel architecture with software and hardware support for approximate computing. Hardware components are enhanced with the ability to dynamically adapt the degree of approximation at a quantifiable and controllable cost in accuracy. Software services complement the hardware to ensure that the user's perception is not compromised while maximizing the energy savings from approximation. The hardware changes include approximation-enabled CPUs, GPUs, accelerators (DSPs), and storage. See more details here.
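A minimal software sketch of the idea is shown below, assuming a tunable approximate associative memory that stores results of recently seen operand patterns and reuses a stored result when a query falls within an adjustable tolerance. The class name, table size, tolerance value, and replacement policy are illustrative assumptions rather than the lab's actual hardware parameters.

```python
# A software model of a tunable approximate associative memory: operand
# patterns and their results are cached, and a query within an adjustable
# tolerance of a stored pattern reuses the stored result instead of
# recomputing it. Capacity, tolerance, and the FIFO replacement policy
# are illustrative assumptions, not actual hardware parameters.

class ApproxAssociativeMemory:
    def __init__(self, capacity=32, tolerance=0.05):
        self.capacity = capacity      # number of stored operand/result rows
        self.tolerance = tolerance    # accuracy knob: larger -> more reuse, more error
        self.rows = []                # list of (operands, result) pairs

    def lookup(self, operands):
        """Return a stored result whose operands are all within tolerance, else None."""
        for stored_ops, result in self.rows:
            if len(stored_ops) == len(operands) and all(
                abs(a - b) <= self.tolerance for a, b in zip(operands, stored_ops)
            ):
                return result         # approximate hit: exact computation is skipped
        return None

    def insert(self, operands, result):
        if len(self.rows) >= self.capacity:
            self.rows.pop(0)          # simple FIFO replacement
        self.rows.append((tuple(operands), result))


def approx_execute(mem, fn, *operands):
    """Try the associative memory first; fall back to the exact function on a miss."""
    hit = mem.lookup(operands)
    if hit is not None:
        return hit
    result = fn(*operands)
    mem.insert(operands, result)
    return result


if __name__ == "__main__":
    import math
    mem = ApproxAssociativeMemory(tolerance=0.01)
    print(approx_execute(mem, math.sin, 0.500))   # miss: computed exactly, then stored
    print(approx_execute(mem, math.sin, 0.505))   # hit: stored result reused within tolerance
```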

Brain-Inspired Computing: The mathematical properties of high-dimensional spaces show remarkable agreement with behaviors controlled by the brain. Brain-inspired hyperdimensional (HD) computing explores the emulation of cognition by computing with hypervectors as an alternative to computing with numbers. Hypervectors are high-dimensional (e.g., D = 10,000), holographic, and (pseudo)random with independent and identically distributed (i.i.d.) components. These features provide an opportunity for robust computing in an architecture without asymmetric memory protection. See more details here.
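The sketch below illustrates the basic hypervector operations in plain Python with D = 10,000: i.i.d. random generation, binding, bundling, and a similarity measure. It assumes a common bipolar HD formulation and is a minimal illustration, not necessarily the exact encoding used in our designs.

```python
# A minimal sketch of hyperdimensional (HD) computing with bipolar
# hypervectors of dimension D = 10,000. Binding by element-wise
# multiplication and bundling by element-wise majority are one common
# HD formulation, assumed here for illustration.

import numpy as np

D = 10_000
rng = np.random.default_rng(0)

def random_hv():
    """I.i.d. (pseudo)random bipolar hypervector with components in {-1, +1}."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Bind two hypervectors (element-wise multiply); the result is dissimilar to both."""
    return a * b

def bundle(hvs):
    """Bundle hypervectors (element-wise majority); the result is similar to each input."""
    return np.sign(np.sum(hvs, axis=0)).astype(int)

def similarity(a, b):
    """Normalized dot product: near 0 for unrelated hypervectors, larger when related."""
    return float(a @ b) / D

# Encode a record of role-filler pairs, e.g. (color = red, shape = round),
# as a single hypervector. The roles and fillers are hypothetical symbols.
color, red = random_hv(), random_hv()
shape, round_shape = random_hv(), random_hv()
record = bundle([bind(color, red), bind(shape, round_shape)])

# Unbinding with the 'color' role recovers something close to 'red'; the
# match is robust because information is spread across all D dimensions.
print(similarity(bind(record, color), red))          # well above chance (~0.5)
print(similarity(bind(record, color), random_hv()))  # near zero
```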

Machine Learning Accelerator: We design high-performance, energy-efficient query engines that accelerate different types of machine learning, multimedia, and general streaming applications inside or close to main memory. Our design leverages near-data computing (NDC) by placing processing units close to the main memory, so computation is accelerated by avoiding the memory/cache bandwidth bottleneck. We speed up many basic machine learning computations, including nearest-neighbor search, bitwise computations, neuromorphic computation, and streaming applications, inside a non-volatile memory that can be configured as both a memory and a computing unit. See more details here.
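The sketch below is a small software model of the kind of query such an in-memory engine accelerates: a row-parallel Hamming-distance nearest-neighbor search built from bitwise XOR and popcount. The 64-bit word width, dataset size, and random data are hypothetical stand-ins for actual memory rows.

```python
# A software model of a row-parallel Hamming-distance nearest-neighbor
# search: every stored row is compared against the query with bitwise
# XOR, a popcount gives the distance, and only the winning index leaves
# the (modeled) memory. The dataset below is random 64-bit patterns and
# is purely illustrative.

import numpy as np

rng = np.random.default_rng(1)

# Stored dataset: each element models one 64-bit memory row.
stored = rng.integers(0, 2**63, size=1024, dtype=np.uint64)

def hamming_nearest(query, rows):
    """Return (index, distance) of the stored row closest to query in Hamming distance."""
    diff = np.bitwise_xor(rows, np.uint64(query))                  # row-parallel XOR
    distances = np.array([bin(int(d)).count("1") for d in diff])   # popcount per row
    best = int(np.argmin(distances))
    return best, int(distances[best])

query = int(stored[17]) ^ 0b1011      # a stored pattern with three bits flipped
idx, dist = hamming_nearest(query, stored)
print(idx, dist)                      # expected: index 17 at Hamming distance 3
```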

Emerging Memory/Logic Design: Designing a memory subsystem is challenging because it involves balancing multiple goals, including fast access time, programmability, high bandwidth, low power, low cost, reliability, and security. Some of these goals contradict one another, so it is important to strike a good balance for the target market of a system. We exploit the features of emerging technologies, e.g., emerging transistors and non-volatile memories, to design ultra-efficient and stable memories for embedded applications at the circuit, architecture, and application levels. See more details here.


People

Faculty: Tajana Simunic Rosing, UCSD CSE professor

Project Leader: Mohsen Imani, UCSD CSE Ph.D. Student

Yeseong Kim, UCSD CSE Ph.D. Student

Bekhzod Soliev, UCSD CSE Ph.D. Student

Joonseop Sim, UCSD ECE Ph.D. Student

Daniel Peroni, UCSD CSE Department

Deqian Kong, UCSD CSE Department

Saransh Gupta, UCSD ECE Department

Atl Arredondo, UCSD CSE Department

John F Hwang, UCSD CSE Department

Debanjan Chatterjee, UCSD ECE Department

Yan Cheng, UCSD CSE Department

Former members:

Gaurav Dhiman, UCSD CSE Ph.D. student

Raid Ayoub, UCSD CSE Ph.D. student

Bryan S. Kim, UCSD CSE Ph.D. student

Rajib Nath, UCSD CSE Ph.D. student


Publications

1. M. Imani, A. Rahimi, D. Kong, T. Rosing, J. M. Rabaey, "Exploring Hyperdimensional Associative Memory", HPCA 2017.

2. M. Imani, D. Peroni, Y. Kim, A. Rahimi, T. Rosing, "Efficient Neural Network Acceleration on GPGPU using Content Addressable Memory", DATE 2017.

3. M. Samragh, M. Imani, F. Koushanfar, T. Rosing, "LookNN: Neural Network with No Multiplication", DATE 2017.

4. M. Imani, T. Rosing, "CAP: Configurable Resistive Associative Processor for Near-Data Computing", ISQED 2017.

5. M. Imani, D. Peroni, A. Rahimi, T. Rosing, "Resistive CAM Acceleration for Tunable Approximate Computing", IEEE Transactions on Emerging Topics in Computing (TETC), 2017.

6. M. Imani, Y. Kim, T. Rosing, "MPIM: Multi-Purpose In-Memory Processing using Configurable Resistive Memory", ASP-DAC, 2017.

7. M. Imani, D. Peroni, A. Rahimi, T. Rosing, "Resistive CAM Acceleration for Tunable Approximate Computing", ICCD, 2016. (selected as top ranked paper for publishing in IEEE TETC)

8. M. Imani, Y. Kim, A. Rahimi, T. Rosing, "ACAM: Approximate Computing Based on Adaptive Associative Memory with Online Learning", ISLPED 2016.

9. M. Imani, A. Rahimi, Y. Kim, T. Rosing, "A Low-Power Hybrid Magnetic Cache Architecture Exploiting Narrow-Width Values", NVMSA 2016.

10. M. Imani, S. Patil, T. Rosing, "Approximate Computing using Multiple-Access Single-Charge Associative Memory", IEEE Transactions on Emerging Topics in Computing (TETC).

11. M. Imani, A. Rahimi, T. Rosing, "Resistive Configurable Associative Memory for Approximate Computing", DATE 2016.

12. M. Imani, S. Patil, T. Rosing, "MASC: Ultra-Low Energy Multiple-Access Single-Charge TCAM for Approximate Computing", DATE 2016.

13. M. Imani, T. Rosing, "Processing Acceleration with Resistive Memory-based Computation", MEMSYS 2016.

14. M. Imani, P. Mercati, T. Rosing, "ReMAM: Low Energy Resistive Multi-Stage Associative Memory for Energy Efficient Computing", ISQED 2016.

15. M. Imani, S. Patil, T. Rosing, "Low Power Data-Aware STT-RAM based Hybrid Cache Architecture", ISQED 2016.

16. M. Imani, Y. Kim, A. Rahimi, T. Rosing, "Associative Memory with Online Learning for Approximate Computing", DAC 2016 (WIP).

17. M. Imani, S. Patil, T. Rosing, "DCC: Double Capacity Cache for Narrow-Width Data Values", GLSVLSI 2016.

18. M. Imani, B. Aksanli, T. Rosing, "Ultra-Efficient Content Addressable Memory for Tunable GPU Approximation", Techcon 2016.

19. Y. Kim, M. Imani, S. Patil, T. Rosing, "CAUSE: Critical Application Usage-Aware Memory System using Non-volatile Memory for Mobile Devices", ICCAD 2015.

20. M. Imani, S. Patil, T. Rosing, "Hierarchical Design of Robust and Low Data Dependent FinFET Based SRAM Array", NANOARCH 2015.

21. M. Imani, M. Jafari, B. Ebrahimi, T. Rosing, "Ultra-low power FinFET based SRAM cell employing sharing current concept", Microelectronics Reliability (Elsevier), 2015.

22. M. Imani, S. Patil, M. Jafari, T. Rosing, "Ultra-Low Read Leakage SRAM Cell Utilizing Independently-Controlled-Gate FinFET", DAC 2015 (WIP).

23. M. Imani, S. Patil, T. Rosing, "Using STT-RAM Based Buffers in Digital Circuits", NVMW 2015.

24. G. Dhiman, R. Ayoub, and T. Rosing, "PDRAM: a hybrid PRAM and DRAM main memory system," in Proceedings of DAC'09, 2009, pp. 664-669.