NCSU Institutional Repository
NC State Theses and Dissertations
Title: The Performance of Token Coherence on Scientific Workloads
Authors: Kuebel, Robert
Advisors: Dr. G. Byrd, Committee Chair
Issue Date: 20-Jul-2005
Discipline: Computer Engineering
Abstract: Broadcast snooping and directory protocols are by far the most common coherence protocols in research and commercial systems. These protocols represent two extremes of cache coherence protocol design with seemingly incompatible goals. Directory protocols produce scalable systems by reducing network bandwidth requirements at the cost of increased latency. Snooping-based systems allow low latency at the cost of increased bandwidth. Recently, a promising class of coherence protocols called Token Coherence has been shown to outperform directory and snooping protocols by combining the best characteristics of both. The concept of token counting allows the protocol to safely multicast requests on an unordered network. This avoids indirection, like a snooping system, but allows the system to scale by eliminating the need for an ordered network. Additionally, Token Coherence promises to be easier to implement, requiring nothing more than reliable message delivery from the network, and provides a simple set of rules to guarantee correctness.
Token Coherence was developed to improve the performance of multiprocessors running "commercial" applications, including web and database servers. The fact that Token Coherence was designed with a specific class of applications in mind raises questions about its ability to perform under different circumstances. Without a more thorough investigation of its performance, it is unclear whether its success on commercial applications is representative of its performance on other workloads. The goal of this thesis is to evaluate the performance of Token Coherence using a subset of the SPLASH-2 benchmark suite. Variations of Token Coherence that are described in the literature, but whose effects on performance were not published, are also examined.
This work shows that Token Coherence does not depend on the peculiarities of commercial workloads and can improve the performance of scientific applications. In fact, Token Coherence performs well even though the assumptions under which it was conceived do not necessarily hold for all applications. In addition, some optimizations made to Token Coherence specifically for commercial workloads do not provide a significant benefit for scientific workloads.
Appears in Collections: Theses
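The token-counting rule the abstract describes can be sketched as follows. This is a minimal, hypothetical simplification (the class, constants, and method names are illustrative, not taken from the thesis): each block has a fixed number of tokens; a processor may write only while holding all of them and may read while holding at least one, so permissions travel with tokens and requests can be multicast on an unordered network.

```python
# Hypothetical sketch of Token Coherence's token-counting invariant.
# Names and structure are illustrative; this is not the thesis's code.

TOTAL_TOKENS = 4  # assume a 4-processor system: one token per processor


class CacheBlock:
    """A cache's view of one memory block, tracking the tokens it holds."""

    def __init__(self, tokens=0, data=None):
        self.tokens = tokens
        self.data = data

    def can_read(self):
        # Reading requires holding at least one token.
        return self.tokens >= 1

    def can_write(self):
        # Writing requires holding all tokens, guaranteeing exclusivity:
        # no other cache can hold a token, so no other cache can read.
        return self.tokens == TOTAL_TOKENS

    def send_tokens(self, n):
        # Give up n tokens to a requester (e.g., in response to a
        # multicast read or write request on the unordered network).
        assert 0 < n <= self.tokens
        self.tokens -= n
        return n

    def receive_tokens(self, n):
        self.tokens += n


# A writer starts with all tokens; sharing even one token with a reader
# revokes its write permission until the token is collected again.
owner = CacheBlock(tokens=TOTAL_TOKENS, data=42)
reader = CacheBlock()
reader.receive_tokens(owner.send_tokens(1))   # share one token for reading
assert reader.can_read() and not reader.can_write()
assert not owner.can_write()  # owner lost exclusivity by sharing a token
```

Because correctness follows from the token count alone, no ordered interconnect is needed: however messages are reordered in flight, a write can only proceed once every token has been collected.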