NCSU Institutional Repository >
NC State Theses and Dissertations >
|Title: ||Data Allocation with Real-Time Scheduling (DARTS)|
|Authors: ||Ghattas, Rony|
|Advisors: ||Alexander G. Dean, Committee Chair; Ralph C. Smith, Committee Member; Thomas M. Conte, Committee Member; Eric Rotenberg, Committee Member|
|Keywords: ||embedded systems; preemption threshold scheduling|
|Issue Date: ||21-Dec-2006|
|Discipline: ||Computer Engineering|
|Abstract: ||The problem of utilizing memory and energy efficiently is common to all computing platforms, and many studies have investigated methods to address it. Nevertheless, most of these methods do not scale well to real-time embedded systems, where resources are limited and assumptions that hold for general-purpose computing platforms no longer apply.
First, memory has long been a bottleneck for system performance. It is well known that processor performance has improved at roughly 60% per year, while memory latencies have improved at less than 10% per year, producing a growing gap between processor cycle time and memory access time. To compensate for this speed mismatch, it is common to use a memory hierarchy with a fast cache that dynamically keeps frequently used data objects close to the processor. Many embedded systems, however, cannot afford a cache, for reasons presented later. Such systems opt for a cacheless design, which is particularly popular in real-time embedded applications: data is allocated at compile time, making memory access latencies deterministic and predictable. Nevertheless, the burden of allocating data to memory now falls on the programmer/compiler.
Second, the proliferation of portable, battery-operated devices has made efficient use of the available energy budget a vital design constraint, particularly since energy storage technology is also improving at a rather slow pace. Techniques such as dynamic voltage scaling (DVS) and dynamic frequency scaling (DFS) have been proposed to address this problem. Still, the applicability of these techniques to resource-constrained real-time systems has not been thoroughly investigated.
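To make the DVS/DFS distinction above concrete, the following back-of-the-envelope sketch uses the standard dynamic-power model (energy per cycle proportional to C·V²); the capacitance, voltage, and cycle-count values are illustrative assumptions, not parameters from the dissertation.

```python
# Dynamic energy for a fixed amount of work: E = C * V^2 * cycles.
# All numeric values below are assumed for illustration.
C = 1e-9            # effective switched capacitance in farads (assumed)
cycles = 1_000_000  # cycles needed to finish the task (assumed)

def dynamic_energy(v):
    """Dynamic energy in joules to execute `cycles` cycles at voltage v."""
    return C * v**2 * cycles

e_full = dynamic_energy(3.3)   # run at full voltage
e_dvs = dynamic_energy(1.65)   # DVS: halve the voltage (frequency drops too)

# DFS alone lowers frequency but keeps the voltage fixed, so the dynamic
# energy per cycle -- and hence per task -- is unchanged.
e_dfs = dynamic_energy(3.3)

print(round(e_full / e_dvs, 2))  # quadratic savings from lowering voltage
print(e_full == e_dfs)           # frequency scaling alone saves no dynamic energy
```

The quadratic dependence on voltage is why DVS is attractive for slack-rich real-time workloads, while DFS by itself mainly trades execution time for peak power rather than total energy.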
In this work we propose techniques to deal with both of the above problems. Our main contribution, the data allocation with real-time scheduling (DARTS) framework, solves the data allocation and scheduling problems in cacheless systems with the goals of optimizing memory utilization, energy efficiency, and overall system performance. DARTS is a synergistic, optimal approach to allocating data objects and scheduling real-time tasks for embedded systems. It optimally allocates data objects to memory using an integer linear programming (ILP) formulation that minimizes the tasks' worst-case execution times (WCETs), yielding more scheduling slack. This additional slack is used by our preemption threshold scheduler (PTS) to reduce stack memory requirements while maintaining all hard real-time constraints. The memory reduction from PTS allows these steps to be repeated: the data objects now require less memory, so more of them fit into faster memory, further reducing WCETs and producing more slack, which PTS can in turn use to reduce preemptions further, until a fixed point is reached. Using a combination of synthetic and real workloads, we show that the DARTS framework leads to optimal memory utilization and increased energy efficiency.
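The iterate-to-a-fixed-point structure described above can be sketched as follows. This is a toy model, not the dissertation's actual ILP or PTS: the object sizes, WCET savings, the brute-force stand-in for the ILP solver, and the rule converting slack into freed fast memory are all assumptions made for illustration.

```python
from itertools import combinations

# Each data object: (size in bytes, WCET savings if placed in fast memory).
# Values are hypothetical.
objects = [(40, 120), (64, 200), (32, 90), (96, 260)]
slow_wcet = 1000       # WCET with every object in slow memory (assumed)

def best_allocation(capacity):
    """Brute-force stand-in for the ILP: choose the subset of objects that
    fits in fast memory and maximizes the total WCET savings."""
    best = 0
    for r in range(len(objects) + 1):
        for subset in combinations(objects, r):
            if sum(size for size, _ in subset) <= capacity:
                best = max(best, sum(gain for _, gain in subset))
    return best

fast_capacity = 100    # fast memory initially available for data (assumed)
prev_wcet = None
while True:
    wcet = slow_wcet - best_allocation(fast_capacity)
    if wcet == prev_wcet:      # fixed point: allocation no longer improves
        break
    prev_wcet = wcet
    # Stand-in for PTS: the extra slack lets tasks share stacks, freeing
    # some fast memory for data on the next iteration (assumed model).
    slack = slow_wcet - wcet
    fast_capacity += slack // 10

print(wcet)  # final WCET once the allocate/schedule loop converges
```

The essential point mirrored here is the feedback loop: a better allocation shortens WCETs, the resulting slack lets the scheduler reclaim memory, and the reclaimed memory improves the next allocation, with termination guaranteed once no further improvement is possible.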
In addition to our main contribution, the DARTS framework, we present several techniques for optimizing a system's memory utilization in the absence of a memory hierarchy using PTS, which we enhance and improve. Furthermore, advanced energy-saving techniques such as DFS and DVS are investigated, and the tradeoffs in their use are presented and analyzed.|
|Appears in Collections:||Dissertations|
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.