Memory Access Dataflow

Date
2014-03-07
Author
Sankaralingam, Karthikeyan
Kim, Sung Jin
Ho, Chen-Han
Abstract
Specialization and accelerators are an effective way to address the slowdown of Dennard scaling. For a family of accelerators like DySER, NPU, CE, and SSE that rely on a high-performance processor to interface with memory using a decoupled access/execute paradigm, the power/energy benefits of acceleration are curtailed by the host processor's power consumption. We observe that the host processor essentially performs three primitive tasks: i) computation to generate recurring address patterns and branches; ii) managing and triggering recurring events, such as the arrival of a value from the cache or from the accelerator; and iii) actions to move information from one place to another; moreover, these three tasks recur and occur concurrently. The processor's overarching role is to orchestrate memory access dataflow. A conventional out-of-order (OOO) processor is power-inefficient and over-provisioned for this role.
We observe that exposing these low-level events, actions, and computations enables an efficient dataflow microarchitecture for a memory access dataflow engine. We propose a new architecture/execution model called memory access dataflow (MAD) that is built on these primitive tasks, exposes them in the MAD ISA, and is supported by an accompanying efficient microarchitecture.
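
The report itself is not reproduced here; the following minimal C sketch is only an interpretation of the three primitive tasks listed in the abstract, assuming a simple unit-stride access pattern and a stand-in accelerator. The names fake_cache, fake_accel_result, next_address, send_to_accelerator, and write_back are hypothetical and do not come from the MAD ISA.

/*
 * Illustrative sketch only -- not the actual MAD ISA or microarchitecture.
 * It models the three primitive tasks named in the abstract (recurring
 * address-pattern computation, recurring-event handling, and data movement)
 * as a small decoupled access/execute loop around a stand-in accelerator.
 */
#include <stdio.h>

#define N 8

static int fake_cache[N] = {1, 2, 3, 4, 5, 6, 7, 8}; /* stand-in for memory/cache */
static int fake_accel_result;                        /* stand-in for accelerator output */

/* Primitive i): computation that generates a recurring address pattern (here a unit stride). */
static int next_address(int iteration) { return iteration; }

/* Primitive iii): actions that move a value from one place to another. */
static void send_to_accelerator(int value) { fake_accel_result = value * value; /* accelerator "computes" */ }
static void write_back(int address)        { fake_cache[address] = fake_accel_result; }

int main(void) {
    /*
     * Primitive ii): recurring events. In real hardware these would be
     * asynchronous triggers ("value arrived from cache", "value arrived from
     * accelerator"); here they are simulated in program order inside the loop.
     */
    for (int i = 0; i < N; ++i) {
        int addr  = next_address(i);     /* event: next address is ready    */
        int value = fake_cache[addr];    /* event: value arrived from cache */
        send_to_accelerator(value);      /* action: feed the accelerator    */
        write_back(addr);                /* event: accelerator result arrived; action: write it back */
    }

    for (int i = 0; i < N; ++i)
        printf("%d ", fake_cache[i]);
    printf("\n");
    return 0;
}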
Subject
Dataflow
Accelerators
Memory
computer architecture
Permanent Link
http://digital.library.wisc.edu/1793/68516
Type
Technical Report
Citation
TR1802