Browsing by Author "Dr. Peng Ning, Committee Member"
Now showing 1 - 20 of 20
- Abstraction-Based Static Analysis of Buffer Overruns in C Programs(2003-07-07) Srinivasa, Gopal Ranganatha; Dr. Matthias Stallmann, Committee Member; Dr. Peng Ning, Committee Member; Dr. Daniel C DuVarney, Committee Member; Dr. S Purushothaman Iyer, Committee Chair
Bounds violations, or buffer overruns, have historically been a major source of defects in software systems, making bounds checking a key component of practical automatic verification methods. With the advent of the Internet, buffer overruns have also been exploited by attackers to break into secure systems. Many security violations, ranging from the 1988 Internet worm incident to the AnalogX Proxy server vulnerability, have been attributed to buffer overruns. Programs written in the C language, which comprise most of the systems software available today, are particularly vulnerable because of the lack of array bounds checking in the C compiler, the presence of pointers that can be used to write anywhere in memory, and the weak type system of the C language. Many methods have been proposed to detect these errors. Runtime methods that detect buffer overruns suffer from significant overhead and incomplete coverage, while compile-time methods can suffer from low accuracy and poor scalability. In this thesis, we propose a new technique for bounds checking based on data abstraction that is more accurate, more scalable, and incurs no runtime overhead. Enhancements have been made to C Wolf, a suite of model generation tools, to handle buffer overflow analysis. Case studies on web2c, a publicly available software package, pico server, an open-source web server, and the wu-ftpd server are presented to demonstrate the practicality of the technique.
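The abstraction idea behind such compile-time bounds checking can be sketched with a toy interval domain: each index variable is abstracted to a [lo, hi] range, and a write is flagged when the range may exceed the buffer. This is a hypothetical illustration, not the C Wolf toolchain; all names are made up.

```python
# Toy interval-domain bounds checking: a variable is abstracted to a
# (lo, hi) pair, and a write buf[i] is flagged if hi may reach past
# the end of the buffer or lo may be negative.

def join(a, b):
    """Least upper bound of two intervals (merges branches of a program)."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def check_write(index_interval, buf_size):
    """Return True if a write at this index may overrun the buffer."""
    lo, hi = index_interval
    return hi >= buf_size or lo < 0

# Abstract a loop `for (i = 0; i <= n; i++) buf[i] = ...` with n = 10:
i_range = (0, 10)                    # i takes values 0..10 inclusive
assert check_write(i_range, 10)      # buf has slots 0..9: off-by-one overrun
assert not check_write((0, 9), 10)   # the corrected loop bound is safe
assert join((0, 3), (2, 10)) == (0, 10)
```

The analysis errs on the safe side: a report means the write *may* overrun, which is the source of the accuracy/scalability trade-off the abstract mentions.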
- Adaptive Real Time Intrusion Detection Systems(2003-02-22) Thomas, Ashley; Dr. Wenke Lee, Committee Chair; Dr. Douglas Reeves, Committee Co-Chair; Dr. Peng Ning, Committee Member
A real-time intrusion detection system (IDS) has several performance objectives: good detection coverage, economy in resource usage, resilience to stress, and resistance to attacks upon itself. In this thesis, we argue that these objectives involve trade-offs that must be considered not only in IDS design and implementation, but also in deployment, and in an adaptive manner. A real-time IDS should adapt by optimizing its configuration at run time; we use classical optimization techniques to determine an optimal configuration. We describe an IDS architecture with multiple dynamically configured front-end and back-end detection modules and a monitor. The front-end performs real-time analysis and detection, while less time-critical tasks may be executed at the back-end. To support performance adaptation, the front-end is extended with two modules: performance monitoring and dynamic reconfiguration. The IDS run-time performance is measured periodically, and detection strategies and workload are dynamically redistributed among the detection modules according to resource constraints and a cost-benefit analysis. The back-end also performs scenario (or trend) analysis to recognize ongoing attack sequences, so that predictions of likely forthcoming attacks can be used to proactively and optimally configure the IDS. The adaptive IDS showed better performance when operating conditions changed and the IDS was stressed or overloaded. By reconfiguring, the adaptive IDS minimized packet drops and gave priority to critical attacks with relatively higher damage cost, thereby ensuring maximum value from the IDS. The overheads involved in monitoring as well as reconfiguration were found to be negligible.
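The cost-benefit reconfiguration step described above can be sketched as a budgeted selection problem: under a CPU budget, keep the detection rules with the best damage cost averted per unit of processing cost. The greedy-by-density heuristic, rule names, and numbers below are illustrative assumptions, not the thesis's actual algorithm.

```python
# Sketch of cost-benefit IDS reconfiguration: greedily enable rules in
# decreasing order of value density (damage averted per CPU unit) until
# the processing budget is exhausted.

def configure(rules, budget):
    """rules: list of (name, value, cpu_cost). Returns enabled rule names."""
    enabled, used = [], 0.0
    for name, value, cost in sorted(rules, key=lambda r: r[1] / r[2], reverse=True):
        if used + cost <= budget:
            enabled.append(name)
            used += cost
    return enabled

rules = [
    ("worm-sig", 90.0, 30.0),          # high-damage attack, cheap to check
    ("portscan", 20.0, 10.0),
    ("full-payload-scan", 50.0, 70.0), # expensive deep inspection
]
# Under overload (budget 40), the critical, cheap rules survive.
assert configure(rules, 40.0) == ["worm-sig", "portscan"]
# With ample resources, everything runs.
assert configure(rules, 110.0) == ["worm-sig", "portscan", "full-payload-scan"]
```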
- Bluetooth Intrusion Detection(2008-04-15) OConnor, Terrence; Dr. Douglas Reeves, Committee Chair; Dr. Peng Ning, Committee Member; Dr. Vincent Freeh, Committee Member
- A Database Level Implementation To Enforce Fine Grained Access Control(2008-05-06) Arjun, Vinod; Dr. Ting Yu, Committee Chair; Dr. Peng Ning, Committee Member; Dr. Rada Chirkova, Committee Member
As privacy protection has gained significant importance, organizations have been forced to protect individual preferences and comply with many enacted privacy laws. This has been a strong driving force for access control in relational databases. Traditional relation-level access control is insufficient to address the increasingly complex requirements of access control policies, where each cell in a relation might be governed by a separate policy. To address this demand, we need a more fine-grained access control scheme, at the row level or even the cell level. A recent research paper proposed correctness criteria for query evaluation algorithms enforcing fine-grained access control and showed that existing approaches did not satisfy the criteria. In addition, the paper proposed a query modification approach to implement a sound and secure query evaluation algorithm enforcing fine-grained access control. For queries involving moderate table sizes of 50,000 and 100,000 records, we find experimentally that this implementation takes approximately 8 and 32 seconds, respectively. This is, on average, approximately 10 times slower than query evaluation without access control, and the performance gap grows significantly with table size, rendering the approach impractical. In this thesis, we modify the query evaluation engine of PostgreSQL to enforce fine-grained access control at the database level. We address several challenges and propose optimizations to counter the inefficiencies we encounter when moving the access control scheme to the database level.
We analyze the performance of our implementation using data sets with various properties and find that it performs approximately 10 times faster than the query modification approach on moderate table sizes of 50,000 and 100,000 records. We also find that our implementation scales well with table size. Experimental results show that its performance is comparable to that of query evaluation without access control, and hence that it is practical.
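The cell-level enforcement idea can be sketched as a filter applied during row production: each protected column carries a policy predicate over the row, and a cell whose policy denies access is replaced by NULL. The table, columns, and policies below are hypothetical, and this sketch ignores the engine-level optimizations the thesis is actually about.

```python
# Sketch of cell-level access control: cells whose policy predicate
# rejects the row are masked to None (the analogue of SQL NULL).

def filter_rows(rows, policies):
    """policies: {column: predicate(row) -> bool}; unlisted columns are public."""
    out = []
    for row in rows:
        masked = {
            col: (val if policies.get(col, lambda r: True)(row) else None)
            for col, val in row.items()
        }
        out.append(masked)
    return out

patients = [
    {"name": "alice", "diagnosis": "flu", "opt_in": True},
    {"name": "bob", "diagnosis": "asthma", "opt_in": False},
]
# Policy: only patients who opted in expose their diagnosis.
policy = {"diagnosis": lambda r: r["opt_in"]}
visible = filter_rows(patients, policy)
assert visible[0]["diagnosis"] == "flu"
assert visible[1]["diagnosis"] is None   # masked, but the row still appears
```

Masking rather than dropping the row is what distinguishes cell-level from row-level control: non-sensitive columns of a partially restricted record remain queryable.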
- Detection of Denial of QoS Attacks on DiffServ Networks.(2002-08-21) Mahadik, Vinay A.; Dr. Douglas S. Reeves, Committee Chair; Dr. Peng Ning, Committee Member; Dr. Jon Doyle, Committee Member; Dr. Gregory Byrd, Committee Member
In this work, we describe a method of detecting denial of Quality of Service (QoS) attacks on Differentiated Services (DiffServ) networks. Our approach focuses on real-time, quick detection, scalability to large networks, and a negligible false alarm rate. This is the first comprehensive study of DiffServ monitoring. Our contributions to this research area are as follows. (1) We identify several potential attacks, develop or use research implementations of each on our testbed, and investigate their effects on QoS-sensitive network flows. (2) We study the effectiveness of several anomaly detection approaches, and select and adapt SRI's NIDES statistical inference algorithm and the EWMA Statistical Process Control technique for use in our anomaly detection engine. (3) We emulate a Wide Area Network on our testbed, measure the effectiveness of our anomaly detection system in detecting the attacks, and present the results obtained as a justification of our work. (4) We verify our findings through simulation of the network and the attacks in NS2 (the Network Simulator, version 2). Given the results of the tests with our implementation of the attacks and the detection system, further validated by the simulations, we believe the method is a strong candidate for QoS-intrusion detection in low-cost commercial deployments.
- A Floor Control Protocol For SIP-Based Multimedia Conferences(2003-04-01) Gupta, Prashant; Dr. Peng Ning, Committee Member; Dr. Douglas S. Reeves, Committee Chair; Dr. Mladen A. Vouk, Committee Member
The purpose of this research is to define a protocol to regulate resources among participants in SIP-based centralized multimedia conferences. Centralized conferences are typical of contemporary conferencing architectures. SIP is emerging as the signaling protocol of choice for multimedia multiparty sessions. An important problem that needs to be addressed in such sessions is that of controlling and ordering access to multimedia resources among participants. This is also known as floor control. There is, to the best of our knowledge, no standardized protocol that addresses the problem of floor control, though there is one other competing proposal in the pipeline. The work in this thesis proposes a set of primitives that solve the above problem for a variety of situations. We present a comparison of the approach taken in this thesis and the existing proposal. We have developed software that realizes a subset of the primitives as a preliminary proof of concept of the proposed protocol.
- Formalizing Computer Forensic Analysis: A Proof-Based Methodology(2004-07-18) Sremack, Joseph; Dr. Mladen A. Vouk, Committee Co-Chair; Dr. Jun Xu, Committee Co-Chair; Dr. Peng Ning, Committee Member
Computer forensics is an important subject in the field of computer security. Impenetrably secure systems are not a reality: hundreds of thousands of security breaches are reported annually. When a security breach does occur, certain steps must be taken to understand what happened and how to recover from the incident, including data collection, analysis, and recovery. These responses to an incident comprise one part of computer forensics. A successful forensic investigation of any security breach requires a sound approach. The forensics literature provides a general model for conducting an investigation that can act as a template for forensic investigations. The current literature, however, has primarily focused on two extremes of forensics: technical details and high-level procedural guidelines. By focusing on the extremes, many of the intermediate steps and logical conclusions that a forensic investigator must make are omitted. This omission leaves the burden of forming the logical structure of an investigation to the investigator. Such ad hoc approaches can lead to inefficient investigations with extraneous investigatory steps, and possibly less accurate results. This thesis explores the formalization of existing computer forensic analysis techniques so that a complete forensic investigation can be conducted in an efficient and meticulous manner. The formalization includes the use of high-level incident information to formulate a broad hypothesis about the entire incident. The hypothesis is then proven by performing a series of lower-level proofs - either by inductive or by deductive (axiomatic) means - each of which acts as a premise for the overall incident hypothesis.
The formalized analysis is then applied to actual forensic investigations to demonstrate its effectiveness. The formalized methodology and techniques presented in this thesis demonstrate how forensic investigations can be scientifically rigorous without sacrificing the creativity required for a complete investigation.
- Hybrid online/offline optimization of Application Binaries(2004-07-08) Dhoot, Anubhav Vijay; Dr. Frank Mueller, Committee Chair; Dr. Xiaosong Ma, Committee Member; Dr. Peng Ning, Committee Member
Long-running parallel applications suffer from performance limitations, particularly due to inefficiencies in accessing memory. Dynamic optimizations, i.e., optimizations performed at execution time, provide opportunities not available at compile or link time to improve performance and remove bottlenecks for the current execution. In particular, they make it possible to apply transformations that tune performance for a particular execution instance, potentially accounting for the environment and input values, and to optimize code from other sources, such as pre-compiled libraries and mixed-language code. This thesis presents the design and implementation of components of a dynamic optimization system for long-running parallel applications based on dynamic binary rewriting. The system uses a hybrid online/offline model to collect a memory profile that guides the choice of functions to be optimized. We describe the design and implementation of a module that enables optimization of a desired function directly from the executable, i.e., without relying on the source code. We also present a module that enables hot swapping of code in an executing application. Dynamic binary rewriting is used to hot-swap the bottleneck function with an optimized function while the application is still executing. Binary manipulation is used in two ways: first to collect a memory profile through instrumentation to identify bottleneck functions, and then to control hot-swapping of code using program transformation. We present experiments as a proof of concept for implementations of the remaining components of the framework and for validation of the existing modules.
- Instant Messaging Interface and Transport for the MultiAgent Referral System.(2003-04-14) Chatterjee, Subhayu; Dr. Robert St. Amant, Committee Member; Dr. Peng Ning, Committee Member; Dr. Munindar P. Singh, Committee Chair
Agent-based systems have been around for quite some time now and have been used extensively in communication systems involving human interactions. The MultiAgent Referral System (MARS) helps automate the process of expertise location using referral chains. Previously, this system used email as the transport mechanism for the various referrals and queries generated by the agents, but the asynchronous nature of email would prove restrictive in real-life scenarios. This thesis develops an infrastructure based on an Instant Messaging (IM) system that provides a user interface and transport mechanism for MARS. MARS has a distributed architecture and associates each user with an agent. This differs slightly from a traditional IM system, which involves only a client and a server; in this case, the messages from a user are routed through the user's agent to the server. Our specific approach exploits the open-source Jabber IM system, which enables us to integrate IM with MARS. In this manner, agent-to-agent communication is realized through IM, and an IM-based user interface is provided to the users.
- Interactive Assistance for Anomaly-Based Intrusion Detection(2004-04-16) Zheng, Erkang; Dr. Mladen A. Vouk, Committee Chair; Dr. Ana I. Antón, Committee Member; Dr. Peng Ning, Committee Member
Network and information security is of increasing concern as intruders utilize more advanced technologies and attacks occur much more frequently. A single intrusion can cause an enterprise financial disaster, a threat to national safety, or loss of human life. Network-based and computer-based intrusion detection systems (IDSs) started appearing some twenty years ago. Now there are various synchronous and asynchronous tools for external and internal network and host intrusion detection, using models ranging from signature scanning and pattern matching to statistical anomaly detection. Although modern IDS systems are much more advanced, they still have many limitations, shortcomings, and open issues. These include: a) the inability of some to handle high-speed network traffic; b) poor ability to detect new or first-time intrusions; c) high false alarm rates; d) deception - such systems may have problems detecting "below noise" level intrusions; e) overload - an IDS, like any other system, may be vulnerable to the same attacks it is trying to detect, including Denial of Service (DoS) attacks; f) customization and end-user integration - unless the system is open-source, customization and integration options may be limited, including how to properly combine human anomaly detection experience with tool detection capabilities; g) automation of the processes; and h) privacy issues. This work is concerned with exploring items b) and f) above, specifically the development of a prototype module for assisting human intrusion detection personnel in recognizing new threats. The work builds on a system called Resource Usage Monitor (RUM), developed at NC State, by developing its IDS assistance module.
The intrusion detection module utilizes RUM as its statistical packet capturing and basic analysis engine, uses it to cross-check its problem detection abilities, and adds to its resource risk assessment capability a facility for intrusion risk assessment using a suite of behavior description measures and intrusion threshold indicators. The RUM IDS module is an exploratory engine designed to set the stage for a more complete investigation of a) proactive anomaly detection, and b) smoother integration of human intrusion detection experience with a real-time IDS tool. The approach involves analysis of end-host databases for anomalies based on a suite of statistical change metrics. There are two principal "views" of a host and two groups of associated metrics: how it behaves with respect to a set of peers, i.e., network-relative behavior, and how it behaves with respect to itself, i.e., how its behavior changes from sample to sample. Based on its behavior during the analyses, each host accumulates an anomaly index value, where a higher number represents a higher potential for misbehavior. Currently, the prototype anomaly index is based on a linear additive model; this may change as the research continues. The idea is that this index, once properly tuned, would correlate better with the intuitive problem detection processes of network administrators than does a plain display of, for example, "high talkers". The primary goal of this work is to develop and test a RUM IDS module and its initial set of metrics. While full investigation of the assistant index idea is beyond the scope of this project, formative results indicate that a subset of the metrics under investigation does indeed provide better high-speed problem detection, when combined with a human analyst, than do some other readily available tools.
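A linear additive anomaly index of the kind described can be sketched as a weighted sum of each metric's deviation from its baseline. The metric names, weights, and normalization below are hypothetical, not the thesis's actual metric suite.

```python
# Sketch of a linear additive anomaly index: each metric's relative
# deviation from its per-host baseline is weighted and summed; a
# higher score means a higher potential for misbehavior.

def anomaly_index(sample, baseline, weights):
    """Sum of weighted relative deviations from baseline."""
    score = 0.0
    for metric, value in sample.items():
        base = baseline[metric]
        deviation = abs(value - base) / max(base, 1e-9)
        score += weights.get(metric, 1.0) * deviation
    return score

baseline = {"pkts_per_sec": 100.0, "distinct_peers": 10.0}
weights = {"pkts_per_sec": 1.0, "distinct_peers": 2.0}  # peer fan-out matters more

quiet = anomaly_index({"pkts_per_sec": 110.0, "distinct_peers": 11.0}, baseline, weights)
scan  = anomaly_index({"pkts_per_sec": 150.0, "distinct_peers": 80.0}, baseline, weights)
assert scan > quiet   # the host contacting many new peers accumulates a higher index
```

The weights are the tuning knobs the abstract alludes to: the goal is that the ranking produced by the index matches an experienced administrator's intuition better than a raw "high talkers" display.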
- On the Protection of Link State Routing and Discovery of PKI Certificate Chains in MANET(2005-10-23) Huang, He; Dr. Peng Ning, Committee Member; Dr. Shytsun Felix Wu, Committee Co-Chair; Dr. Rudra Dutta, Committee Member; Dr. Arne A. Nilsson, Committee Chair
The growing awareness of network vulnerability draws much attention to security from both the academic community and industry. Security is no longer a luxury but an independent and indispensable service of the current Internet. While various security mechanisms, such as cryptographic and intrusion detection techniques, have been proposed, designed, and even deployed in the field, newly exposed network vulnerabilities and emerging network technologies create new security challenges that make existing security solutions either inefficient or insufficient. My Ph.D. research focuses on the efficient protection of link state routing and on self-organizing, self-dependent hierarchical public key certificate management in emerging mobile ad hoc networks. The contributions of this thesis fall into two parts. In the first part, a cost-reduced secure link state routing protocol with the capability of detecting disruptive links is proposed to efficiently protect the routing control messages (e.g., LSAs) and trace faulty intermediate routers; then a confidence-extended routing mechanism enhanced with secure virtual links is designed to increase network reachability by selectively including uncertain routers in packet relaying and continuously monitoring the behavior of those selected uncertain routers. A theoretical security analysis and an experimental evaluation are conducted to prove the feasibility and advantages of this new design under various rates of false alarms.
In the second part, an approach is presented to discover the optimal PKI certificate path, even without help from centralized certificate entities, in a non-centralized and infrastructureless mobile ad hoc network. A secure, distributed certificate-chain searching protocol is developed to collect the needed certificates on the fly in the mobile ad hoc network.
- Performance Analysis of TCP over Wired and Wireless Network(2006-05-10) Wang, Xinbing; Dr. Peng Ning, Committee Member; Dr. Wenye Wang, Committee Co-Chair; Dr. Arne Nilsson, Committee Member; Dr. Robert Martin, Committee Member; Dr. Do Young Eun, Committee Chair
The Transmission Control Protocol (TCP) currently accounts for about 90% of applications and 80% of the data in network traffic, and plays the dominant role in Internet transmission. In this dissertation, we present three studies of TCP performance over wired and wireless networks. In the first study, we focus on the stability of TCP/AQM (Active Queue Management) systems in wired network environments. In particular, we study the local and global stability of TCP-NewReno/RED under a many-flows regime. Using a simple normalized discrete-time model, we analyze global stability in a very efficient manner, and the results show that by properly choosing RED parameters, we can always make the TCP-NewReno/RED system globally stable. The second study concerns TCP performance when the buffer size at a large-capacity link shared by many flows is scaled differently from the traditional rule. Specifically, we consider a buffer size on the order of O(N^alpha) (0 < alpha < 1), where N flows share a link of capacity O(N). We then develop a doubly-stochastic model for a TCP/AQM system with many flows by taking into account the packet-level dynamics over fine time scales. We show that, under our scaling, the system always performs well in the sense that the link utilization goes to 1 and the loss ratio decreases to zero as the system size N increases. We verify our results using extensive ns-2 simulations. Finally, we analyze the impact of TCP flows on wireless networks from the resource allocation point of view, and propose a TCP-AIMD-aware call admission control scheme for wireless networks.
Our scheme is based on a two-level CAC framework, which takes both call-level and packet-level dynamics into account. The interaction between the two levels is characterized through a single metric called the Quality of Service (QoS) guaranteed capacity. The performance of the proposed scheme is then analyzed, and extensive simulation results are presented under different scenarios. Our results show that the proposed scheme can improve system performance at both the call and packet levels.
- Preventing Denial of Service Attacks on Reliable Multicast Networks(2002-12-17) Shah, Nipul Jayvant; Dr. Douglas S. Reeves, Committee Chair; Dr. Peter Wurman, Committee Member; Dr. Peng Ning, Committee Member
Multicast is finding many applications in modern networks and the Internet, and various existing protocols support the wide range of requirements these applications demand. If all the receivers in a multicast group are required to get all the packets at more or less the same time (i.e., synchronized reliable receiving), then the transmission rate of the source ends up being controlled by the rate of the slowest receiver in the group. Although this is a requirement in some applications, it poses a serious threat to the group: if one or more receivers artificially create a packet loss, the source becomes busy sending repairs and consequently slows down the overall transmission rate. This leads to a Denial of Service attack on the other group members. The goal of this thesis is to suggest a mechanism to deter, if not prevent, hostile receivers from causing such an attack. We first study the problem with respect to a specific reliable multicast protocol, viz. Pragmatic General Multicast (PGM), by conducting experiments which show that PGM is also affected by the 'slowest receiver problem'. If the source can work out an optimal transmission rate, we may be able to reduce the repair requests in the network and have a more stable system. To achieve this, we examine the possibilities and advantages of using an auction-based mechanism, such as the Generalized Vickrey Auction (GVA), to compute the optimal rate based on the rate requests from the various participating receivers. We implement our mechanism in PGM and conduct experiments to compare its performance to that of the existing PGM protocol.
Our results show that, for a network with malicious members, an appropriate auction-based mechanism complemented with policing stabilizes the source transmission rate and hence prevents a Denial of Service attack on the other group members.
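The property the GVA family provides can be seen in its simplest member, the sealed-bid second-price (Vickrey) auction: the winner pays the second-highest bid, which makes truthful bidding a dominant strategy and so discourages receivers from misreporting their rate needs. The sketch below shows only this single-item special case with made-up bidder names, not the thesis's actual rate-computation mechanism.

```python
# Sketch of a sealed-bid second-price (Vickrey) auction: the highest
# bidder wins but pays the second-highest bid, so overstating or
# understating one's true value can never help.

def vickrey(bids):
    """bids: {bidder: bid}. Returns (winner, price paid). Needs >= 2 bids."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]          # winner pays the second-highest bid
    return winner, price

winner, price = vickrey({"r1": 5.0, "r2": 8.0, "r3": 3.0})
assert winner == "r2"
assert price == 5.0   # r2 wins but pays r1's bid, not its own
```

Because the price a bidder pays is independent of its own bid, a malicious receiver gains nothing by exaggerating the loss it claims to experience, which is the incentive property the auction-based rate control relies on.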
- Preventing Misbehavior in Cooperative Distributed Systems(2009-12-01) Shin, Kyuyong; Dr. Douglas S. Reeves, Committee Chair; Dr. Injong Rhee, Committee Co-Chair; Dr. George N. Rouskas, Committee Member; Dr. Peng Ning, Committee Member
Cooperative distributed systems are becoming increasingly popular as alternatives to the traditional client-server model for many applications, including file sharing, streaming, and distributed computing. In cooperative distributed systems, participants directly cooperate with each other to achieve common goals by sharing resources without the need for any central control. Therefore, in contrast to the client-server model, the system capacity potentially scales as the number of participants in a system increases, providing the participants with information or services with few resource restrictions. The information or services provided by the system can be thought of as a public good, and participants should play a part in the protection and provision of that public good. Thus, cooperation among participants to obtain mutual benefits is the fundamental premise behind the success of such a system. Despite the importance of cooperation among participants in protecting and supporting the public goods in cooperative distributed systems, a high level of informational integrity of the goods, and of behavioral integrity of participants toward the goods, is difficult to achieve due to malicious or selfish participants. Because such malicious or selfish behavior was not anticipated at the inception of cooperative distributed applications, these applications are highly vulnerable to it. To address the problem, in this dissertation we identify two major threats (pollution and free-riding) to the protection and provision of the public goods, and propose tailored solutions to those specific threats.
In addition, a general, fairness-enforcing incentive mechanism is proposed to foster cooperation among participants, which could readily be used to prevent various misbehaviors in a wide range of cooperative distributed systems. First, this dissertation investigates the pollution problem in file sharing systems and proposes a novel Distributed Hash Table (DHT)-based anti-pollution scheme called winnowing. Winnowing attempts to achieve a high level of informational integrity of the public goods (i.e., shared files in this case) through cooperation among (benign) participants. To attain this goal, publish verification and privacy-preserving object reputation are integrated into the DHT as part of the publish and look-up processes. Second, this dissertation presents a free-riding prevention mechanism for one of the most popular file sharing systems, BitTorrent, which depends on the use of secret sharing. By incorporating secret sharing into file sharing, the proposed scheme, called Treat-Before-Trick (TBeT), enforces cooperation by preventing uncooperative participants from acquiring the secrets required to complete their work. Therefore, a high level of behavioral integrity on the part of participants toward the public goods can be achieved under TBeT. Finally, this dissertation proposes a general incentive mechanism, named Triangle Chaining (T-Chain), which can be used readily and widely in many cooperative distributed systems to enforce cooperation among participants. T-Chain depends both on light-weight symmetric cryptography, to reduce the opportunity for free-riding, and on a pay-it-forward policy, to exploit the potential of multi-lateral participant compatibility.
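The secret-sharing idea behind TBeT can be illustrated with the simplest scheme of this kind, n-of-n XOR sharing: a block is split into n shares such that all n are needed to recover it, so a peer that withholds uploads never accumulates a complete set. This is a generic illustration, not the exact scheme in the dissertation.

```python
# Sketch of n-of-n XOR secret sharing: n-1 random shares plus one
# "correcting" share; XOR-ing all n recovers the secret, while any
# proper subset is statistically independent of it.
import functools
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret, n):
    """Split `secret` into n shares, all of which are needed to recover it."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = functools.reduce(xor_bytes, shares, secret)
    return shares + [last]

def combine(shares):
    return functools.reduce(xor_bytes, shares)

piece = b"block-17 payload"
shares = split(piece, 3)
assert combine(shares) == piece       # all three shares recover the block
assert combine(shares[:2]) != piece   # a partial set reveals nothing useful
```

The enforcement hook is the distribution step: shares are handed out only in exchange for cooperative behavior, so completing a download requires having cooperated with enough peers.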
- QoS Provisioning and Pricing in Multiservice Networks: Optimal and Adaptive Control over Measurement-based Scheduling(2005-08-14) Xu, Peng; Dr. Michael Devetsikiotis, Committee Chair; Dr. George Michailidis, Committee Member; Dr. Peng Ning, Committee Member; Dr. Wenye Wang, Committee Member; Dr. Ioannis Viniotis, Committee Member
To ensure efficient performance under inherently and highly variable traffic in multiservice networks, we propose a generalized adaptive and optimal control framework for resource allocation. Although this framework addresses rigid Quality of Service concerns for the deterministic delay-bound classes by reserving part of the link capacity and employing appropriate admission control and traffic shaping schemes, our research emphasizes the adaptive and optimal control of the shared resources for the flexible delay-bound classes. The resource allocation is therefore delivered by a subsystem of this generalized framework, the measurement-based optimal resource allocation (MBORA) system. Applying a simple threshold policy, we first validate the advantages of the adaptivity of the proposed framework through extensive simulation results. We then introduce a generalized profit-oriented formulation inside the decision module of the MBORA system, which supplies the network provider with profit-based criteria by weighing utility charge revenue against delay-incurred cost. The optimal resource allocation is affected by the type of pricing model together with the level of service guarantee constraints. As a case study, we investigate this generalized profit-oriented formulation under generalized service models. Combining it with a linear pricing model subject to average queue delay constraints, we propose a fast algorithm for online dynamic and optimal resource allocation under this specific scenario.
Finally, we propose a delay-sensitive nonlinear pricing model for the generalized profit-oriented formulation that realizes two-tier delay differentiation. Building on the fluid queueing model, we propose a generalized solution strategy for linear, nonlinear, or mixed pricing models that is free of the dimensionality problem and amenable to online implementation.
- Scalable authorization in role-based access control using negative permissions and remote authorization(2003-06-02) Shah, Arpan Pramod; Dr. Peng Ning, Committee Member; Dr. Douglas S. Reeves, Committee Member; Dr. Gregory T. Byrd, Committee Chair
Administration of access control is a major issue in large-scale computer systems, and many systems proposed in recent years aim at reducing the effort required to govern access. Role-based access control (RBAC) systems are a major step in this direction: they reduce the work of administrators when users take on different roles in an organization and must be assigned different access rights or privileges based on those roles. RBAC is a very expressive and flexible access control mechanism that makes it possible to have security policies based on the principle of least privilege, static and dynamic separation of duties, conflicts between roles and permissions, and more. This research proposes the use of negative permissions and remote authorization to improve the scalability of an RBAC implementation. We discuss how negative permissions fit into the proposed RBAC model, and the thesis describes a mechanism to implement such an RBAC system using negative authorizations. Our implementation is an extension of the Java 2 security architecture to support negative authorizations. We provide support for role hierarchies and for de-confliction of positive and negative authorizations using the most-specific-takes-precedence model. Future extensions to the model and optimizations to the implemented algorithm are proposed. Another aspect of this thesis is the application of the above RBAC model in a distributed environment using a remote authorization management system. A remote authorization mechanism is appropriate in many client-server systems where there is control over the resources at an intermediate communication stack, or where a middleware component enforces the access rules.
In our client-server architecture, an authorization server uses an RBAC system to control access to resources in its domain, and access rules are enforced by a security overlay on privileged resources. We show how our negative-permission and remote-authorization schemes improve RBAC scalability, and we provide the requisite abstraction through UML and architecture diagrams for implementation in other languages and systems. We compare this work to related research in the RBAC domain and discuss future work in this area.
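The most-specific-takes-precedence de-confliction described above can be sketched in a few lines. This is a hypothetical illustration, not the thesis's Java 2 implementation; the role names and data layout are invented. The idea is that a grant or denial attached closer to the user's role in the hierarchy overrides one inherited from a more general ancestor role.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    parent: "Role | None" = None               # more general role we inherit from
    grants: set = field(default_factory=set)   # positive permissions
    denials: set = field(default_factory=set)  # negative permissions

def check_access(role: Role, permission: str) -> bool:
    """Walk from the most specific role up the hierarchy; the first role
    that mentions the permission (as a grant or a denial) decides."""
    r = role
    while r is not None:
        if permission in r.denials:
            return False
        if permission in r.grants:
            return True
        r = r.parent
    return False  # default deny: no role mentions the permission

# Hypothetical example: employees may read reports, but interns (a more
# specific role) are explicitly denied, and the denial takes precedence.
employee = Role("employee", grants={"read_report"})
intern = Role("intern", parent=employee, denials={"read_report"})
```

Here `check_access(intern, "read_report")` is denied even though the ancestor role grants it, which is exactly the behavior the most-specific-takes-precedence model requires.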
- A Simulation Study of Cross Traffic on Expedited Forwarding in Differentiated Services Networks.(2003-02-16) Senapati, Mukul M; Dr. Mladen A. Vouk, Committee Chair; Dr. Mihail Sichitiu, Committee Co-Chair; Dr. Peng Ning, Committee Member. The purpose of this research was to simulate Differentiated Services enabled network topologies in order to study the impact of cross traffic on the Expedited Forwarding (EF) Per-Hop Behavior (PHB). The results help us understand the extent to which packets tagged for Expedited Forwarding can be protected, and the guarantees that can be given to EF flows. Cross traffic and EF can coexist and interact in many ways; this study analyzed two types of cross traffic. In the first case, a single EF micro-flow was considered, while all other EF micro-flows, as well as any best-effort traffic, were treated as cross traffic that could affect it. This case was analyzed under two conditions: a) a single EF micro-flow consisting of small packets, and b) a single EF micro-flow consisting of large packets. In the second case, the EF aggregate as a whole, composed of EF micro-flows with both small and large packets, was considered, and any best-effort traffic was treated as cross traffic that could affect the aggregate. Two schedulers, the Priority Queue scheduler and the Weighted Round Robin scheduler, were used to simulate the EF PHB, and a comparison was made to determine which could provide better Quality of Service guarantees. Possible metrics for quantifying the Quality of Service provided to the EF PHB were also investigated. Further, the bounds in the recent IETF redefinition of the EF PHB were studied. A variant of the EF PHB, the Delay Bound (DB) PHB, was defined by the IETF; the DB PHB was studied, and recommended values for the parameters that characterize it were presented.
Finally, the end-to-end Quality of Service obtained from the EF Per-Domain Behavior was studied across multiple autonomous Differentiated Services domains governed by common Service Level Agreements.
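The contrast between the two schedulers compared in the study can be illustrated with a toy tick-based model. This is a simplified sketch, not the thesis's simulation setup: the 2:1 WRR weighting and the one-packet-per-tick link are invented here. Strict priority shields EF packets from best-effort (BE) cross traffic entirely, while WRR yields some link slots to BE and therefore lets an EF backlog build up.

```python
from collections import deque

def simulate(pick, ef_arrivals, be_arrivals, ticks):
    """Tick-based link model: at each tick new packets arrive, then `pick`
    selects which queue transmits one packet. Returns the mean EF delay."""
    ef, be = deque(), deque()
    delays = []
    for t in range(ticks):
        if t in ef_arrivals:
            ef.append(t)          # queue stores arrival times
        if t in be_arrivals:
            be.append(t)
        q = pick(t, ef, be)
        if q is ef and ef:
            delays.append(t - ef.popleft())
        elif q is be and be:
            be.popleft()
    return sum(delays) / len(delays) if delays else 0.0

def priority(t, ef, be):
    """Strict priority: EF is always served first; BE only when EF is empty."""
    return ef if ef else be

def wrr(t, ef, be):
    """Weighted round robin, 2:1 in favour of EF; an empty queue yields its slot."""
    q = ef if t % 3 < 2 else be
    if not q:
        q = be if q is ef else ef
    return q
```

With one EF and one BE packet arriving every tick for ten ticks, strict priority gives EF zero queueing delay, whereas the 2:1 WRR schedule lets the EF backlog grow by one packet every third tick, yielding a mean EF delay of 2 ticks in this toy setting.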
- Tracing Intruders behind Stepping Stones(2005-08-06) Wang, Xinyuan; Dr. Douglas S. Reeves, Committee Chair; Dr. Peng Ning, Committee Member; Dr. George N. Rouskas, Committee Member; Dr. Gregory T. Byrd, Committee Member. Network-based intruders seldom attack directly from their own hosts; rather, they stage their attacks through intermediate 'stepping stones' to conceal their identity and origin. To track down and apprehend the perpetrators behind stepping stones, it is critically important to be able to correlate connections through them. Tracing intruders behind stepping stones and correlating intrusion connections through them are challenging due to various readily available evasive countermeasures: installing and using backdoor relays (e.g., netcat) at intermediate stepping stones to evade logging of normal logins; using different types of connections (e.g., TCP, UDP) at different portions of the connection chain to complicate connection matching; using encrypted connections (with different keys) across stepping stones to defeat any content-based comparison; and introducing timing perturbation at intermediate stepping stones to counteract timing-based correlation of encrypted connections. In this dissertation, we address these challenges in detail and design solutions to them. For unencrypted intrusion connections through stepping stones, we design and implement a novel intrusion tracing framework called Sleepy Watermark Tracing (SWT), which applies principles of steganography and active networking. SWT is "sleepy" in that it introduces no overhead when no intrusion is detected, yet "active" in that when an intrusion is detected, the host under attack injects a watermark into the backward connection of the intrusion and wakes up and collaborates with intermediate routers along the intrusion path.
Our prototype shows that SWT can trace back to the trustworthy security gateway closest to the origin of the intrusion with only a single packet from the intruder. With its unique active tracing, SWT can trace even when intrusion connections are idle. Encryption of connections through stepping stones defeats content-based correlation and makes correlating intrusion connections more difficult. Based on inter-packet timing characteristics, we develop a novel correlation scheme for both encrypted and unencrypted connections. We show that (after some filtering) the inter-packet delays (IPDs) of both encrypted and unencrypted interactive connections are preserved across many router hops and stepping stones. The effectiveness of IPD-based correlation requires that timing characteristics be distinctive enough to identify connections; we have found that normal interactive connections such as telnet, SSH, and rlogin are almost always distinctive enough to provide correct correlation across stepping stones. Timing perturbation of packet flows at intermediate stepping stones poses an additional challenge in correlating encrypted connections. Such perturbation can either make unrelated flows exhibit similar timing characteristics or make related flows exhibit different ones, which would increase the false positive rate or decrease the true positive rate of timing-based correlation. To address this challenge, we develop a novel watermark-based correlation scheme designed specifically to be robust against such timing perturbation. The idea is to actively embed a unique watermark into a flow by slightly adjusting the timing of selected packets. If the embedded watermark is unique and robust enough against the adversary's timing perturbation, the watermarked flow can be uniquely identified and thus effectively correlated.
By utilizing redundancy techniques, we develop a robust watermark correlation framework that reveals a rather surprising result on the inherent limits of independent and identically distributed (i.i.d.) random timing perturbations over sufficiently long flows. We also identify the trade-offs between the defining characteristics of the timing perturbation and the achievable correlation effectiveness. Our experiments show that our watermark-based correlation performs significantly better than existing passive timing-based correlation in the face of random timing perturbation. From this research, we draw some general lessons about tracing and correlating intrusion connections through stepping stones. Specifically, we demonstrate the significant advantages of active correlation approaches over passive ones in the presence of active countermeasures, and we demonstrate that information hiding and redundancy techniques can be used to build highly effective intrusion tracing and correlation frameworks.
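The idea of embedding a watermark by slightly adjusting packet timing can be sketched with a quantization-style scheme. This is a simplified illustration, not the dissertation's exact construction: the quantization step `s` and the parity encoding are invented here. Each selected inter-packet delay (IPD) is stretched, never shortened, to a multiple of `s/2` whose parity encodes one watermark bit; redundancy (repeating each bit over several IPDs and majority-voting) is what would give robustness against i.i.d. timing perturbation.

```python
def embed(timestamps, bits, s=0.4):
    """Delay packets (never advance them) so each consecutive IPD lands on
    a multiple of s/2 whose parity equals the corresponding watermark bit."""
    out = [timestamps[0]]
    for i, bit in enumerate(bits):
        ipd = timestamps[i + 1] - out[i]
        k = int(ipd // (s / 2)) + 1   # next half-step boundary at or past ipd
        if k % 2 != bit:              # pick a slot whose parity encodes bit
            k += 1
        out.append(out[i] + k * (s / 2))
    return out

def decode(timestamps, n, s=0.4):
    """Recover n bits from the parities of the first n IPDs."""
    return [round((timestamps[i + 1] - timestamps[i]) / (s / 2)) % 2
            for i in range(n)]
```

Because the embedder only delays packets (by less than `s` each), the flow stays causal, and the watermark survives as long as perturbation of each IPD stays under a quarter-step; larger `s` trades added delay for robustness.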
- Traffic Grooming in Translucent Optical Ring Networks.(2003-12-07) Srinivasarao, Koundinya Bangalore; Dr. Rudra Dutta, Committee Chair; Dr. George N Rouskas, Committee Member; Dr. Peng Ning, Committee Member. The exponential growth of the Internet has resulted in an ever-increasing demand for bandwidth. Carrier networks, which form the backbone of the Internet, were designed to carry only voice signals with predictable traffic patterns, anticipating slow network growth. With advances in fiber optics and wavelength division multiplexing (WDM), optical networking is the key to satisfying the data-driven bandwidth demand. These technologies enable the simultaneous transmission of signals on separate high-speed channels at different wavelengths. While the bandwidth provided by these channels is very high, individual traffic demands are at the sub-wavelength level. This mismatch can be overcome by multiplexing several lower-rate connections onto the high-speed channels in a cost-effective manner, a technique referred to as traffic grooming. Traffic grooming in WDM networks has been widely addressed in recent years; traffic grooming and its constituent subproblems have been proven NP-complete for even the most elemental network topologies. The ring topology has been the target of a large number of studies because of its practical relevance. However, most existing studies concentrate on an objective function aggregated over all network nodes, such as the total number of ADMs used or the total amount of opto-electro-optical (OEO) routing performed. From a practical point of view, it is likely that every network node would be provisioned similarly. Hence a min-max objective, which seeks to minimize the OEO equipment needed at the node requiring the most such equipment, is more appropriate.
Such min-max objectives are usually harder to optimize than aggregate objectives, which are themselves known to be computationally intractable. In this thesis, we study traffic grooming in a unidirectional ring network under different traffic patterns for the min-max objective. We define two heuristic approaches based on decomposition: one groups the nodes, and the other partitions the traffic matrix. We show that the second approach is more general but costlier in terms of computation; further, we identify traffic families for which the first approach may be expected to perform nearly as well as the more complex one. We also investigate several variations of these two main approaches and present numerical results validating the performance of the algorithms.
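The distinction between aggregate and min-max objectives can be made concrete with a toy example, assuming the simplification that each lightpath needs one ADM at each of its two endpoint nodes (the ring positions and lightpath sets below are invented). Two groomings can have identical total equipment cost yet very different worst-node cost, which is exactly what the min-max objective distinguishes.

```python
from collections import Counter

def per_node_adms(lightpaths):
    """Count ADMs per node, assuming one ADM at each lightpath endpoint."""
    c = Counter()
    for src, dst in lightpaths:
        c[src] += 1
        c[dst] += 1
    return c

def aggregate(lightpaths):
    """Aggregate objective: total ADMs over all nodes."""
    return sum(per_node_adms(lightpaths).values())

def min_max(lightpaths):
    """Min-max objective: ADMs at the most heavily provisioned node."""
    return max(per_node_adms(lightpaths).values())

# Two hypothetical groomings on a 4-node ring with equal total cost:
even = [(0, 1), (1, 2), (2, 3), (3, 0)]      # ADMs spread evenly, 2 per node
skewed = [(0, 1), (0, 2), (0, 3), (1, 2)]    # node 0 carries 3 ADMs
```

Both groomings use 8 ADMs in total, so an aggregate objective cannot tell them apart, but the min-max objective prefers the even grooming (worst node 2 vs. 3), matching the thesis's argument that nodes would be provisioned similarly in practice.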
- Visualization Search Strategies(2004-11-22) Mehta, Reshma Girish; Dr. Munindar Singh, Committee Member; Dr. Peng Ning, Committee Member; Dr. Christopher Healey, Committee Chair. Innovations in high-performance computing and high-bandwidth networks have led to an explosion of data. Along with their large size, these datasets are typically multivariate, and the need to explore them effectively has given rise to the area of multidimensional visualization. Research on the low-level human visual system has produced perceptual guidelines for constructing effective visualizations, but applying these guidelines to a dataset requires users to be experts in the visualization domain. ViA is a semi-automated visualization assistant that uses perceptual guidelines together with a heuristic search algorithm to generate perceptually salient visualizations. This thesis studies the behavior of the current hint-based search strategy and determines its efficiency. We compare hint-based search with two generic heuristic search algorithms, simulated annealing and reactive tabu search, by adapting them to ViA's search domain, using time efficiency, space efficiency, optimality, and the ability to find multiple optimal solutions as performance metrics. Further, in order to "see" the areas of the search space explored by each algorithm, we have developed a focus+context visualization system using hyperbolic geometry.
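Adapting a generic search such as simulated annealing to a ViA-like domain can be sketched as follows. The attribute names, salience ranks, and scoring function below are invented stand-ins for ViA's perceptual evaluation; a candidate solution maps each data attribute to a visual feature, and neighbors are generated by swapping two attributes' features.

```python
import math
import random

FEATURES = ["hue", "size", "orientation", "luminance"]  # most to least salient
SALIENCE = {f: len(FEATURES) - i for i, f in enumerate(FEATURES)}

def evaluate(mapping, importance):
    """Toy stand-in for perceptual scoring: reward giving salient visual
    features to important data attributes."""
    return sum(importance[a] * SALIENCE[f] for a, f in mapping.items())

def anneal(importance, steps=2000, t0=5.0, seed=0):
    """Simulated annealing over attribute-to-feature assignments."""
    rng = random.Random(seed)
    attrs = list(importance)
    current = dict(zip(attrs, rng.sample(FEATURES, len(attrs))))
    best = dict(current)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        a, b = rng.sample(attrs, 2)
        cand = dict(current)
        cand[a], cand[b] = cand[b], cand[a]  # neighbor: swap two features
        delta = evaluate(cand, importance) - evaluate(current, importance)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = cand                   # accept improving or, early on, worsening moves
        if evaluate(current, importance) > evaluate(best, importance):
            best = dict(current)
    return best
```

On this toy score, the optimum assigns features to attributes in matching rank order (by the rearrangement inequality), so the annealer's result can be checked against the known best score.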
