NCSU Institutional Repository > NC State Theses and Dissertations
Title: Analysis-Managed Processor (AMP): Exceeding the Complexity Limit in Safe-Real-Time Systems
Authors: Anantaraman, Aravindh Venkataseshadri
Advisors: Alexander G. Dean, Committee Member
Frank Mueller, Committee Member
Thomas M. Conte, Committee Member
Eric Rotenberg, Committee Chair
Keywords: hard real-time systems
worst-case execution time
worst-case timing analysis
Virtual Simple Architecture
Non-uniform program analysis
Issue Date: 28-Apr-2006
Discipline: Computer Engineering
Abstract: Safe-real-time systems need tasks' worst-case execution times (WCETs) to guarantee deadlines. With increasing microarchitectural complexity, the analysis required to derive WCETs is becoming complicated and, in some cases, intractable. Thus, complex microarchitectural features are discouraged in safe-real-time systems.
My thesis is that microarchitectural complexity is viable in safe-real-time systems, if control is provided over this complexity. I propose a reconfigurable processor, the Analysis-Managed Processor (AMP), that offers complete control over its complex features. The ability to dynamically manage the AMP enables novel cooperative static and run-time WCET frameworks that break the limitations of the traditional static-only WCET model, allowing complex features to be safely included.
(i) The Virtual Simple Architecture (VISA) framework avoids analyzing complex features. VISA derives tasks' WCETs assuming a simple processor. At run-time, tasks are speculatively attempted on the AMP with complex features enabled. A run-time framework dynamically confirms that WCETs are not exceeded.
(ii) The Non-Uniform Program Analysis (NUPA) framework enables efficient analysis of complex features. NUPA matches different program segments to different operating modes of the AMP. NUPA yields reduced WCETs for program segments that can be analyzed in the context of complex features, without the severe burden of requiring all program segments to be analyzed this way.
I propose that out-of-order execution is not inherently intractable; rather, its interaction with control flow is. Out-of-order processors overlap the execution of tens to hundreds of in-flight instructions, and variable control flow causes an explosion in the number of potential overlap schedules. I propose two timing analysis techniques that reduce the number of possible schedules.
(i) Repeatable Execution Constraints for Out-of-ORDER (RECORDER) eliminates variations in control flow, and the data-flow variations they imply, guaranteeing a single input-independent execution schedule that can be derived via simulation with arbitrary (random) program inputs.
(ii) Drain-and-Branch (DNB) restricts instruction overlap by insulating a branch's control-dependent region from the effects of instructions before and after the region.
RECORDER and DNB are complementary: they work well for branches with short regions and long regions, respectively. Further, in the context of a NUPA framework, different branch regions may favor RECORDER, DNB, or the AMP's in-order execution mode, for achieving a highly optimized overall WCET. Moreover, branch regions analyzed for downgraded in-order execution can still benefit from the VISA run-time framework by speculatively enabling the AMP's out-of-order mode. The flexible combination of all the above techniques multiplies benefits, yielding a powerful framework for fully and safely capitalizing on complex microarchitectures in safe-real-time systems.
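The VISA run-time check described in the abstract can be illustrated with a minimal sketch. All names, the subtask granularity, and the timing numbers below are illustrative assumptions, not taken from the dissertation (a real VISA design would also size its checkpoints with enough headroom that the overall deadline still holds after a downgrade):

```python
# Hypothetical sketch of a VISA-style run-time check: subtasks execute
# speculatively in a complex (e.g., out-of-order) mode, while checkpoint
# deadlines come from WCET analysis that assumed a simple pipeline.

def run_task(times_complex, wcet_simple):
    """Execute subtasks speculatively in 'complex' mode.

    Each subtask has a checkpoint: its cumulative deadline under the
    simple-mode WCET analysis. Missing a checkpoint downgrades the
    processor to the analyzed simple mode for the rest of the task.
    Returns (per-subtask mode log, total elapsed time).
    """
    mode = "complex"
    elapsed = 0.0
    checkpoint = 0.0
    log = []
    for t_complex, t_simple in zip(times_complex, wcet_simple):
        checkpoint += t_simple              # deadline from static analysis
        elapsed += t_complex if mode == "complex" else t_simple
        log.append(mode)                    # mode used for this subtask
        if mode == "complex" and elapsed > checkpoint:
            mode = "simple"                 # speculation failed: downgrade

    return log, elapsed

# Complex mode usually runs well ahead of the simple-mode budget:
modes, total = run_task([1.0, 1.5, 1.0], [3.0, 3.0, 3.0])
# A slow subtask misses its checkpoint and forces a downgrade:
modes2, total2 = run_task([1.0, 6.0, 1.0], [3.0, 3.0, 3.0])
```

In the common case the complex mode finishes each subtask well inside its simple-mode budget and the speculation pays off; only a missed checkpoint triggers the fallback that keeps the remaining WCET guarantees valid.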
Appears in Collections: Dissertations
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.