Sudhakar Yalamanchili
Joseph M. Pettit Professor

The School of Electrical and Computer Engineering
Georgia Institute of Technology
Mailing Address:

266 Ferst Drive, KACB 2316

Atlanta, GA 30332-0765

Phone: (404) 894-2940
Fax: (404) 385-1746
Office: KACB 2316


Semester Schedule: I am on leave this semester

Office Hours Fall 2014: See Above

Research: Computer Architecture and Systems Laboratory                    



Sudhakar Yalamanchili received the B.E. degree in Electronics from Bangalore University, India, in 1978, and the M.S. and Ph.D. degrees in Electrical and Computer Engineering from the University of Texas at Austin in 1980 and 1984, respectively.

He is currently a Joseph M. Pettit Professor of Computer Engineering in the School of Electrical and Computer Engineering at the Georgia Institute of Technology in Atlanta, GA. Prior to joining Georgia Tech in 1989, he was Senior and then Principal Research Scientist at the Honeywell Systems and Research Center in Minneapolis from 1984 to 1989. At Honeywell he was the Principal Investigator for projects in the design and analysis of multiprocessor architectures for embedded applications. During that time he served as a member of Honeywell's Program Technical Advisory Board to MCC, and was an Adjunct Faculty member teaching in the Department of Electrical Engineering at the University of Minnesota. He currently serves as a Co-Director of the NSF Industry/University Cooperative Research Center for Experimental Research in Computer Systems (CERCS).

Dr. Yalamanchili contributes professionally with regular service on editorial boards and conference and workshop program committees. Current and recent service includes the Editorial Board of Computer Architecture Letters (2011-present), Program Co-Chair of the 2014 IEEE/ACM International Symposium on Networks on Chip, and Program Committees for IEEE Micro Top Picks from Computer Architecture Conferences (2014), the IEEE/ACM International Symposium on Code Generation and Optimization (2014), the IEEE International Symposium on High Performance Computer Architecture (2014), and the IEEE/ACM International Symposium on Computer Architecture (2014). He is a member of the ACM and an IEEE Fellow.



My current research interests are organized along three major themes. The first is scalable modeling and simulation technologies for many-core architectures and systems. The Manifold project seeks to develop an open-source infrastructure for workload-driven parallel simulation of many-core architectures. The project will also provide software support for integrating existing point tools and models, for example through i) simulation kernels that provide event, synchronization, and time management services via standardized APIs, and ii) interface specifications between core, cache, network, memory, and emulator components so that models can be mixed and matched. We are particularly interested in applying this infrastructure to modeling architectures for high performance computing that employ novel packaging structures and connectivity solutions, for example 3D systems and interposer-based systems.

The second theme is the emergence of heterogeneous systems comprised of homogeneous general-purpose cores intermingled with customized heterogeneous cores and diverse memory and cache hierarchies. Our focus is on improving the productivity of software development for such architectures, and our research is coalesced around several system efforts. The first is the Ocelot infrastructure, which includes architecture emulation and dynamic compilation/translation functions across several backend targets. Lynx is an Ocelot-based just-in-time instrumentation system for GPUs, developed jointly with the HVM project. We are working with LogicBlox Inc. to develop Red Fox, a compilation environment from Datalog to large-scale multi-GPU clusters. The most recent effort, with colleagues in Computer Science, is the integration of low-power heterogeneous cores into die-stacked memory packages; this project includes a C++ parametric architecture generation environment and an OpenCL-based compiler. The target application arena is memory-intensive, large-scale data analytics.
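To make the simulation-kernel services mentioned above concrete, here is a minimal sketch of a discrete-event kernel that delivers timestamped events to components in global time order. The names (SimKernel, Core, Memory) and the latencies are illustrative assumptions for this sketch only; they are not Manifold's actual API.

```python
import heapq

class SimKernel:
    """Minimal discrete-event kernel: schedules timestamped events and
    dispatches them to components in global simulated-time order."""
    def __init__(self):
        self.now = 0
        self._queue = []   # heap of (time, seq, component, payload)
        self._seq = 0      # tie-breaker for events at the same timestamp

    def schedule(self, delay, component, payload):
        """Deliver 'payload' to 'component' after 'delay' ticks."""
        heapq.heappush(self._queue,
                       (self.now + delay, self._seq, component, payload))
        self._seq += 1

    def run(self, until):
        """Advance simulated time, dispatching events up to time 'until'."""
        while self._queue and self._queue[0][0] <= until:
            self.now, _, component, payload = heapq.heappop(self._queue)
            component.handle(self, payload)

class Core:
    """Toy 'core' component: issues one memory request, counts the reply."""
    def __init__(self, memory, latency=1):
        self.memory, self.latency, self.replies = memory, latency, 0
    def handle(self, kernel, payload):
        if payload == "start":
            kernel.schedule(self.latency, self.memory, ("read", self))
        else:
            self.replies += 1   # reply arriving back from memory

class Memory:
    """Toy 'memory' component with a fixed access latency."""
    def __init__(self, latency=5):
        self.latency = latency
    def handle(self, kernel, payload):
        _, requester = payload
        kernel.schedule(self.latency, requester, "reply")

kernel = SimKernel()
mem = Memory(latency=5)
core = Core(mem, latency=1)
kernel.schedule(0, core, "start")
kernel.run(until=100)   # reply arrives at t = 1 + 5 = 6
```

Because components interact only through the kernel's schedule/handle interface, core, cache, network, and memory models can in principle be mixed and matched behind such an interface, which is the integration idea the paragraph describes.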
The third theme investigates the coupling between the physics of the platform (e.g., thermal fields and device wear-out) and the operation of microarchitectural components in terms of delay, reliability, and energy, with the goal of creating robust microarchitectures that remain operational across a wide dynamic range. This effort utilizes the Energy Introspector library, developed in our group for the coordinated modeling of energy, performance, and reliability of many-core microarchitectures, and integrated with Manifold. The target architectures are 2.5D/3D processor-memory die stacks.
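As a toy illustration of this physics/microarchitecture coupling (not the Energy Introspector API), the following couples a first-order RC thermal model to a simple frequency-throttling policy; all parameters (thermal resistance, capacitance, power model, thresholds) are invented for the sketch.

```python
# Toy coupling of a first-order thermal model to a DVFS-style throttle.
# All constants below are illustrative, not taken from any real chip.
AMBIENT = 45.0   # ambient/package temperature (C)
R_TH = 0.7       # thermal resistance (C/W)
C_TH = 10.0      # thermal capacitance (J/C)
T_LIMIT = 80.0   # throttle threshold (C)
DT = 0.1         # integration time step (s)

def power(freq_ghz):
    # Crude dynamic-power model: power scales with clock frequency.
    return 20.0 * freq_ghz

def simulate(steps, freq=3.0):
    """Integrate dT/dt = (P - (T - T_amb)/R) / C, halving the clock
    whenever the modeled temperature exceeds the limit."""
    temp, history = AMBIENT, []
    for _ in range(steps):
        f = freq if temp < T_LIMIT else freq / 2   # microarchitectural response
        p = power(f)                               # operation -> power
        temp += DT * (p - (temp - AMBIENT) / R_TH) / C_TH  # power -> physics
        history.append((temp, f))
    return history

hist = simulate(2000)
```

In this loop the physics (temperature) feeds back into the operating point (frequency) each step, a small-scale version of the coordinated energy/performance/reliability modeling the paragraph describes.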

We gratefully acknowledge the generous support of our current and recent research efforts by the National Science Foundation, Sandia National Laboratories, SRC, LogicBlox Corporation, Samsung Corporation, HP Labs, AMD, Intel Corporation, IBM Corporation, IMPACT Technologies LLC, Qualcomm, and NVIDIA Corporation.



I have recently taught, or am currently teaching, the following classes:

ECE 3056: Architecture, Concurrency and Energy in Computations (Spring 2014)
ECE 4100/6100: Advanced Computer Architecture

ECE 8813a: Design and Analysis of Multiprocessor Interconnection Networks

I have also devoted time to the development of the following textbooks:

Interconnection Networks, J. Duato, S. Yalamanchili, L. Ni, Morgan Kaufmann, 2003.
VHDL Starter's Guide, 2nd Edition, Prentice Hall, 2004.
VHDL: From Simulation to Synthesis, Prentice Hall, 2000 (reprinted in Japanese, 2002).




Problems with this page? Please contact: Sudhakar Yalamanchili at