
Browsing by Author "Yahya Fathi, Committee Member"

Now showing 1 - 18 of 18
    Algorithms for Selecting Views and Indexes to Answer Queries
    (2009-07-22) Kormilitsin, Maxim; Christopher Healey, Committee Member; Yahya Fathi, Committee Member; Matthias Stallmann, Committee Co-Chair; Rada Chirkova, Committee Chair
    In many contexts it is beneficial to answer database queries using derived data called views. Using views in query answering is relevant in applications in information integration, data warehousing, web-site design, and query optimization. The problem of answering queries using views can be divided into a number of subproblems. The first step in the process of view selection is to identify which views can be used to answer queries from the given set. The second step is to determine possible reformulations of the workload queries. The last step is choosing views that can be maintained appropriately and that minimize the processing time of the input query workload. In our work we address the problem of selecting and precomputing indexes and materialized views in a database system, with the goal of improving the processing performance for frequent and important queries. The focus of our work is to develop a unified quality-centered view- and index-selection approach, for a range of query, view, and index classes that are typical in practical database systems. To the best of our knowledge, we are the first to adopt the solution-quality focus for this generic practical problem setting.
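The selection step described above is, at heart, a budgeted benefit/cost choice among candidate structures. As a hedged illustration of that problem shape only (a generic greedy baseline, not the dissertation's quality-centered method; every name and number below is hypothetical):

```python
# Generic greedy benefit-per-cost sketch of view/index selection under
# a storage budget. NOT the dissertation's algorithm; all names and
# numbers are made up for illustration.

def select_views(candidates, budget):
    """candidates: (name, size, savings) tuples, where savings is the
    estimated reduction in workload processing time if the view or
    index is materialized. Greedily picks by savings density."""
    chosen, used = [], 0
    for name, size, savings in sorted(
            candidates, key=lambda c: c[2] / c[1], reverse=True):
        if used + size <= budget:
            chosen.append(name)
            used += size
    return chosen

views = [("v_sales_by_region", 40, 120.0),
         ("v_daily_totals",    10,  45.0),
         ("idx_customer_id",    5,  30.0)]
print(select_views(views, budget=50))
# -> ['idx_customer_id', 'v_daily_totals'] (densities 6.0 and 4.5;
#    the 40-unit view no longer fits in the remaining budget)
```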
    Analysis of Fuel Consumption for an Aircraft Deployment with Multiple Aerial Refuelings
    (2006-06-08) Bush, Brett Alan; Yahya Fathi, Committee Member; Jeffrey R. Thompson, Committee Member; Michael G. Kay, Committee Member; Thom J. Hodgson, Committee Chair; Russell E. King, Committee Member
    The purpose of the research has been to derive an algorithm that finds optimal aerial refueling segments (non-instantaneous) for a single aircraft deployment while also accounting for atmospheric winds. There are two decision variables: (1) Where to locate the refueling segments? (2) How much fuel to offload at each refueling segment? Later in the dissertation, a third decision variable is explored: How much fuel to load onto the aerial refueling aircraft? In previous research, the problem of having a single aircraft deployment with one instantaneous aerial refueling has been explored and solved. This paper piggybacks on that research and extends it. The first step (Problem P1) is deriving an algorithm that finds the optimal aerial refueling points for a single aircraft deployment with multiple instantaneous aerial refuelings. In the next step (Problem P2), one assumes aerial refueling is not instantaneous (in an effort to make the problem and solution more realistic), but requires some time frame depending on an offload rate. In problem (P2), optimal refueling segments are found (versus optimal refueling points). In the last problem (P3), one looks at a very similar algorithm that factors the winds aloft into the minimization algorithm. Finally, this paper looks at three distinct deployment scenarios with two aerial refuelings required. All of the scenarios were first planned by the U.S. Air Force and the results given to the author. Potential fuel and cost savings associated with using the aforementioned algorithms instead of current methods are then analyzed.
    A Comparison of Screening Methods for Colorectal Cancer
    (2004-12-01) Tafazzoli Yazdi, Ali; Reid Ness, Committee Member; Stephen D. Roberts, Committee Chair; Yahya Fathi, Committee Member; James R. Wilson, Committee Member
    Colorectal cancer (CRC) is the second leading cause of cancer death in the United States. This cancer has a very long asymptomatic phase, and most patients are not aware of its presence until it grows into an advanced stage when the chance of survival is very low. Evidence from several studies suggests that screening for detecting and removing colorectal cancer and precancerous adenomatous polyps can reduce the incidence of CRC and colorectal cancer-related mortality. There are a number of screening methods designed for this purpose, which vary considerably in terms of their performance characteristics and cost. However, because of the long latency of CRC and the time needed for clinical trials, it is not practical to conduct clinical trials of all the screening strategies for CRC. Simulation models offer an alternative means to evaluate and compare screening strategies. The primary goal of this thesis was to add a screening structure onto an existing discrete-event simulation model of the natural history of CRC. The enhanced model is capable of simulating various screening interventions. It is the only model that can simulate all the screening strategies recommended by the American Gastroenterological Association clinical guidelines. In order to compare the screening strategies in these guidelines, a deterministic cost-effectiveness analysis (CEA) was first performed. CEA is considered the most appropriate method of evaluating preventive health services from an economic perspective. The model provides the discounted cost and effectiveness of each of the screening strategies, which are the key inputs for CEA. Adopting a Bayesian approach, appropriate distributions were assigned to the strategic and economic parameters of the model in order to compare the screening strategies under uncertainty. A probabilistic sensitivity analysis was performed for this purpose, and the empirical distributions of the average cost-effectiveness (CE) ratios of the screening methods, along with their 95% confidence intervals, were developed on the CE plane. Finally, cost-effectiveness acceptability curves (CEACs) were derived from the CE planes. These curves present the probability that a screening strategy is cost-effective for different ceiling ratios, where the ceiling ratio is the amount of money society is willing to pay to gain a year of life. Examining the CEACs for all the populations (average and high risk), fecal occult blood test (FOBT), combined sigmoidoscopy & FOBT, and colonoscopy were the three screening strategies showing the highest probabilities of being the most cost-effective strategy across different ceiling ratios. In general, CEACs provide a probabilistic comparison between screening strategies, which can be used by medical policy makers for guideline development and health-care resource allocation. This thesis is the first effort to compare multiple CRC screening strategies using CEACs.
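The CEAC construction described above has a standard probabilistic-sensitivity-analysis form: for each ceiling ratio, estimate the probability that a strategy attains the maximum net monetary benefit across the sampled parameter sets. A minimal sketch with made-up draws (not outputs of the thesis's simulation model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical probabilistic-sensitivity-analysis output: for each
# strategy, paired samples of (cost in $, effectiveness in life-years)
# across sampled parameter sets. Illustrative numbers only.
draws = {
    "FOBT":        (rng.normal(1500, 200, 1000), rng.normal(0.10, 0.02, 1000)),
    "colonoscopy": (rng.normal(2600, 300, 1000), rng.normal(0.13, 0.02, 1000)),
}

def ceac(draws, ceilings):
    """For each ceiling ratio lam, estimate P(strategy maximizes net
    monetary benefit NMB = lam * effectiveness - cost)."""
    names = list(draws)
    curves = {n: [] for n in names}
    for lam in ceilings:
        nmb = np.column_stack([lam * e - c for c, e in draws.values()])
        best = nmb.argmax(axis=1)
        for j, n in enumerate(names):
            curves[n].append(float((best == j).mean()))
    return curves

print(ceac(draws, ceilings=[10_000, 30_000, 50_000]))
```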
    Determining Path Flows in Networks: Quantifying the Tradeoff between Observability and Inference
    (2008-04-24) Demers, Alixandra; Billy Williams, Committee Member; William A. Wallace, Committee Member; Nagui Rouphail, Committee Member; Yahya Fathi, Committee Member; George F. List, Committee Chair
    Developing and Fitting a Clearing Function Form: An Experimental Comparison of A Clearing Function Model and Iterative Simulation-Optimization Algorithm for Production Planning of a Semiconductor Fab
    (2009-04-13) Kacar, Necip Baris; Brian Denton, Committee Member; Yahya Fathi, Committee Member; Reha Uzsoy, Committee Chair
    We address the fundamental problem of workload-dependent lead times in production planning, known as planning circularity. We focus on a clearing function model and an iterative algorithm that address planning circularity. We develop a new clearing function form that expresses output as a function of the sum of the work released within a period and any work available at the start of the period; this differs from the previously studied clearing functions based on expected WIP over the period. We implement our clearing function form in the Allocated Clearing Function (ACF) model of Asmundsson, Rardin, and Uzsoy (2002) and compare its performance to that of the Hung and Leachman (HL) procedure, an iterative algorithm that combines simulation and fixed-lead-time LP models. In our experimental comparison, we use a simulation model of a re-entrant bottleneck system built with attributes of a real-world semiconductor fabrication environment. We vary the bottleneck utilization, demand patterns, mean time to failure (MTTF), and mean time to repair (MTTR). Results indicate that the ACF model using our clearing function form performs better than the HL procedure, giving less variable production plans and lower discrepancies between planned and realized output.
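The clearing function form described above has a compact algebraic statement. As a hedged illustration (the load definition follows the abstract; the saturating Karmarkar-type curve is a common choice in this literature, not necessarily the exact form fitted in the dissertation, and the symbols are ours):

```latex
% Load available to the resource in period t: releases plus initial WIP
\ell_t = R_t + W_{t-1}
% Clearing function: expected output is a concave, saturating function
% of the load, e.g. a Karmarkar-type curve with parameters K_1, K_2
X_t \le f(\ell_t) = \frac{K_1 \, \ell_t}{K_2 + \ell_t}
% WIP balance linking consecutive planning periods
W_t = W_{t-1} + R_t - X_t
```

Embedding a constraint of this kind in the planning LP (after piecewise linearization) is what lets lead times emerge from the workload rather than being fixed in advance.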
    Digital Signal Design for Fault Detection in Linear Continuous Dynamical Systems
    (2007-04-11) Choe, Dongkyoung; Yahya Fathi, Committee Member; Negash Medhin, Committee Member; Stephen L. Campbell, Committee Chair; Robert Buche, Committee Member
    A systematic approach to detecting underlying undesirable states of a physical system is to compare its observed behaviors to several competing models and to identify the model that best describes the observation. This model selection process can be enhanced by applying a specially designed auxiliary input signal to the system. This dissertation applies the auxiliary-signal-based model selection approach to recognizing faulty behaviors of systems whose dynamic behaviors can be described by linear differential equations. Using an existing analog-signal-based algorithm, the effect of modeling error on this particular type of detection approach is examined and a geometrical explanation is provided. We also present a variation of the analog-signal-based algorithm which produces signals that are more practical for certain types of applications. In addition, an alternative auxiliary signal design algorithm is developed, producing digital signals that minimally disturb regular system operation and guarantee fault detection for a given gap between the physical system and the corresponding model. The algorithm implements the analytical solution steps, derived mostly via optimal control solution techniques, while converting a nested optimization problem into an equivalent eigenvalue problem. The algorithm provides an option to optimize the duration of each digital piece to yield even more "plant-friendly" auxiliary detection signals at the cost of a moderate increase in computational time.
    Fuzzy Relational Equations: Resolution and Optimization
    (2009-12-02) Li, Pingke; Shu-Cherng Fang, Committee Chair; Simon M. Hsiang, Committee Member; Yahya Fathi, Committee Member; James R. Wilson, Committee Member
    Fuzzy relational equations play an important role as a platform in various applications of fuzzy sets and systems. The resolution and optimization of fuzzy relational equations are of particular interest from both theoretical and applied viewpoints. In this dissertation, fuzzy relational equations are treated in a unified framework and classified according to different aspects of their composite operations. For a given finite system of fuzzy relational equations with a specific composite operation, the consistency of the system can be verified in polynomial time by constructing a potential maximum/minimum solution and a characteristic matrix. The solution set of a consistent system can be characterized by a unique maximum solution and finitely many minimal solutions, or dually, by a unique minimum solution and finitely many maximal solutions. The determination of all minimal/maximal solutions is closely related to the detection of all irredundant coverings of a set covering problem defined by the characteristic matrix, which may involve additional constraints. In particular, for fuzzy relational equations with sup-T composition, where T is a continuous triangular norm, the existence of the additional constraints depends on whether T is Archimedean. Optimization problems constrained by fuzzy relational equations are investigated in this dissertation as well. It is shown that the problem of minimizing an objective function subject to a system of fuzzy relational equations can be reduced in general to a 0-1 mixed integer programming problem. If the objective function is linear, or more generally, separable and monotone in each variable, then it can be further reduced to a set covering problem. Moreover, when the objective function is linear fractional, it can be reduced to a 0-1 linear fractional optimization problem and then solved via parameterization methods. However, if the objective function is max-separable with continuous monotone or unimodal components, then the problem can be solved efficiently, and its optimal solution set can be well characterized.
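For the simplest composite operation mentioned above, sup-min composition, the polynomial-time consistency check via a potential maximum solution is classical and easy to sketch (a generic illustration of the known result, not code from the dissertation):

```python
import numpy as np

def potential_max_solution(A, b):
    """Potential maximum solution of the sup-min system A o x = b,
    where (A o x)_i = max_j min(a_ij, x_j). Componentwise:
    xhat_j = min over i of (b_i if a_ij > b_i else 1)."""
    m, n = A.shape
    xhat = np.ones(n)
    for i in range(m):
        for j in range(n):
            if A[i, j] > b[i]:
                xhat[j] = min(xhat[j], b[i])
    return xhat

def sup_min(A, x):
    return np.minimum(A, x[None, :]).max(axis=1)

A = np.array([[0.8, 0.3],
              [0.5, 0.9]])
b = np.array([0.6, 0.5])
xhat = potential_max_solution(A, b)
# Classical consistency test: the system is solvable if and only if
# the potential maximum solution actually satisfies it.
print(xhat, bool(np.allclose(sup_min(A, xhat), b)))
```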
    Global Sensor Management: Real-Time Reallocation of Military Assets among Competing Tasks and Functions
    (2009-01-07) Dulin, Johnathon Louis; Reha Uzsoy, Committee Member; Russell E. King, Committee Member; Yahya Fathi, Committee Member; Thom J. Hodgson, Committee Chair
    The United States military maintains a network of sensor assets for the multiple purposes of detecting threats, collecting intelligence, monitoring space, and other objectives. Because of its nature, the network must achieve high probabilities of successfully completing all of its varying missions. These sensors must be assigned to tasks and functions in a manner that maximizes the capability to meet each of the objectives while remaining flexible enough to be changed in response to the dynamic nature of the environment in which the network is employed. In this environment, it is imperative to quickly obtain a good solution that defines an allocation of sensors to tasks. While it is possible to determine the best network allocation exactly through total enumeration of potential sensor assignments, this becomes intractable for large problems. Further, once an initial allocation scheme is determined, some sensors may need to be reassigned in response to certain types of events, or sensors may simply fail. This research addresses these issues through the development of a heuristic approach that finds optimal or nearly optimal solutions for representative sensor networks in only a fraction of the time required to guarantee optimality. The approach is also expanded to respond to changes in the network due to the loss of an assigned sensor. Evaluation of the heuristic's performance using several methods demonstrates its utility in determining the best possible allocation of sensors given the time limitations and in responding to the dynamic nature of the environment in which it is intended to operate.
    Human and Machine Co-Investigate Intelligence System (HM-CII) for Fault Diagnosis and Detection in Complex Systems.
    (2010-11-02) Kim, So Yeon; Simon Hsiang, Committee Chair; Yahya Fathi, Committee Member; Shu Fang, Committee Member; Wenbin Lu, Committee Member
    Long-Term Spatial Load Forecasting Using Human-Machine Co-construct Intelligence Framework
    (2008-10-28) Hong, Tao; Shu-Cherng Fang, Committee Member; Yahya Fathi, Committee Member; Simon M. Hsiang, Committee Chair
    This thesis presents a formal study of the long-term spatial load forecasting problem: given the small-area electric load history of the service territory and current and future land use information, forecast the load for the next 20 years. A hierarchical S-curve trending method is developed to conduct the basic forecast. Due to uncertainties in the electric load data, the results from the computerized program may conflict with the nature of the load growth. Sometimes the computerized program is unaware of local development because the land use data lack such information. A human-machine co-construct intelligence framework is proposed to improve the robustness and reasonableness of the purely computerized load forecasting program. The proposed algorithm has been implemented and applied at several utility companies to forecast long-term electric load growth in their service territories, with satisfactory results.
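The S-curve trending step can be illustrated with a plain logistic fit to one small area's load history (hypothetical data and a simple three-parameter logistic; the thesis's hierarchical structure and human-machine corrections are not reproduced):

```python
import numpy as np
from scipy.optimize import curve_fit

def s_curve(t, cap, rate, mid):
    """Logistic S-curve: load grows toward the saturation level `cap`."""
    return cap / (1.0 + np.exp(-rate * (t - mid)))

# Hypothetical annual peak-load history (MW) for one small area.
years = np.arange(2000, 2011)
load = np.array([1.1, 1.4, 1.9, 2.6, 3.4, 4.3, 5.0, 5.6, 5.9, 6.2, 6.3])

(cap, rate, mid), _ = curve_fit(
    s_curve, years, load, p0=(load.max() * 1.5, 0.5, years.mean()))

# Trend the fitted curve 20 years ahead.
future = np.arange(2011, 2031)
print(np.round(s_curve(future, cap, rate, mid), 2))
```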
    Metaheuristics for solving the Dial-a-Ride problem
    (2004-08-10) Chan, Sook-Yee Edna; Elmor L. Peterson, Committee Member; John R. Stone, Committee Member; John W. Baugh Jr., Committee Chair; Yahya Fathi, Committee Member
    Many transit agencies face the problem of generating routes and schedules to meet customer pickup and dropoff requests using an available fleet of vehicles. The Dial-a-Ride Problem (DARP) is a mathematical model that closely approximates the problem faced by these agencies. The problem is a generalization of the well-known Pickup and Delivery Vehicle Routing Problem and the Vehicle Routing Problem with Time Windows. However, due to the high level of service required by this type of transportation, additional operational constraints must be considered. While the DARP can be solved exactly by various techniques, exact approaches are not practical for real-world problems (typically consisting of hundreds of requests); the time required is often excessive, as the problem is NP-hard. In this thesis, we develop heuristics that find high-quality solutions in a reasonable amount of computer time for the many-to-many, advance-reservation, multi-vehicle, single-depot, static DARP. The objectives considered include the minimization of total travel time and excess ride time, and the problem is subject to maximum ride time, route duration, vehicle capacity, and wait time constraints. The cluster-first route-second approach is adopted. Clustering is performed using either Tabu Search (TS) or Scatter Search (SS), while routing is performed via insertion. The class of insertion heuristics has been extensively applied to the DARP. Earlier algorithms focused on feasible insertions, but recently heuristics that allow infeasible insertions to be considered during searches have been introduced. In this research, two insertion heuristics are considered: IRAU, which assigns requests only when they are feasible, and IRDU, which assigns all requests even if they result in infeasibilities. Comparison studies show that the benefit of using a particular algorithm depends on the statistical properties of the data sets used. Overall, the algorithms generated better solutions than previously published results for a real-world (322-request) problem and found the optimal solutions for constructed (32-request and 80-request) problems with known optimal solutions.
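The routing step's insertion move is easy to sketch in generic form: try every position pair that keeps the pickup before the dropoff, and keep the cheapest. This is illustrative only, not the IRAU/IRDU implementations, and it omits the time-window, capacity, and ride-time checks the thesis enforces:

```python
# Generic cheapest-insertion sketch for one pickup/dropoff request.
# Hypothetical stop names and distances; depot endpoints stay fixed.

def cheapest_insertion(route, dist, pickup, dropoff):
    """route: list of stop ids with the depot first and last;
    dist: dict-of-dicts travel times. Returns (added_time, new_route)."""
    def length(r):
        return sum(dist[a][b] for a, b in zip(r, r[1:]))
    base, best = length(route), (float("inf"), None)
    for i in range(1, len(route)):              # pickup before route[i]
        cand1 = route[:i] + [pickup] + route[i:]
        for k in range(i + 1, len(cand1)):      # dropoff before cand1[k]
            cand = cand1[:k] + [dropoff] + cand1[k:]
            added = length(cand) - base
            if added < best[0]:
                best = (added, cand)
    return best

dist = {a: {b: abs(ord(a) - ord(b)) for b in "DABPQ"} for a in "DABPQ"}
print(cheapest_insertion(["D", "A", "B", "D"], dist, "P", "Q"))
```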
    On Scheduling Delivery in a Military Deployment Scenario
    (2002-05-06) Melendez, Barbra Sue; Kristin A. Thoney, Committee Co-Chair; Yahya Fathi, Committee Member; Russell E. King, Committee Member; Thom J. Hodgson, Committee Co-Chair
    The ability to rapidly and accurately perform sensitivity analysis in military deployment planning is a vital tool for force deployment planners. The Deployment Scheduling Analysis Tool (DSAT), a new software tool, provides this ability. DSAT builds the deployment scenario through a graphical user interface, invokes an adaptation of the Virtual Factory to schedule the movement and delivery of the equipment, and provides meaningful output in the form of reports and graphics. The Virtual Factory is a job shop scheduling procedure developed at North Carolina State University that has been shown to rapidly provide near-optimal solutions to large problems. This research focuses on evaluating both the accuracy and effectiveness of DSAT. An existing tool, the Deployment Analysis Network Tool Enhanced (DANTE), is proven to minimize the time required to deliver the equipment (Cmax). Since DANTE is a relaxation of the original problem, it establishes a lower bound for Cmax. An extension of DANTE, COMFLOW, includes due date information and establishes a lower bound on the maximum lateness of the equipment, Lmax. DSAT's schedules, in terms of Cmax and Lmax, are compared to these lower bounds. Finally, DSAT's schedules, in terms of transportation asset utilization, are compared to accepted asset utilization planning factors. This evaluation indicates that DSAT provides near-optimal schedules for air deployments and good schedules for deployments including rail and sea movement.
    Optimization of Appointment Based Scheduling Systems.
    (2010-10-14) Erdogan, Saadet; Brian Denton, Committee Chair; Reha Uzsoy, Committee Member; Thom Hodgson, Committee Member; Yahya Fathi, Committee Member
    A Price Trajectory Algorithm for Solving Iterative Auction Problems
    (2006-12-11) Zhong, Jie; Carla D. Savage, Committee Member; Yahya Fathi, Committee Member; Shu-Cherng Fang, Committee Member; Peter R. Wurman, Committee Chair
    A variety of auctions exist in the literature, such as the English auction, the Dutch auction, and the Vickrey auction. The underlying problem in an auction is to find the winners and the corresponding payments. Proxy bidding has proven useful in solving auction problems in many real-world auction formats, most notably eBay. It has been proposed for several iterative combinatorial auctions, such as the Ascending Package auction, the Ascending k-Bundle auction, and the iBundle auction. In this dissertation, a new type of iterative auction called the Simple Combinatorial Proxy auction is proposed. The winners of the new auction are the same as those of the Ascending k-Bundle auction. Simulating the incremental bidding decisions of the agents is a popular method of solving proxy-enabled versions of auction problems. This approach has some disadvantages. First, the outcome depends upon implementation details. Second, the accuracy of the outcome relies on the bid increment. Third, the running time is sensitive to the magnitude of values, the ordering of agents, and the tie-breaking rules. In this dissertation, a new approach called the Price Trajectory Algorithm is presented to solve iterative combinatorial auctions with proxy bidding. This approach computes the agents' allocation of their attention across the bundles only at "inflection points", the points at which agents change their behavior. Inflections are caused by one of the following: (1) the introduction of a new bundle into an agent's demand set, (2) a change in the set of current competitive allocations, or (3) the withdrawal of an agent from the set of active agents. The proposed algorithm tracks the behavior of agents and the competitive allocations of items to establish a connection between the demand set and competitive allocations. With the allocation of agents' attention, one can compute the slopes of the price curves to obtain the bundle prices and speed up the computation by jumping from one inflection point to the next. The Price Trajectory Algorithm can solve the Simple Combinatorial Proxy auction and the Ascending Package auction. It has several advantages over alternatives: (1) it computes exact solutions; (2) the solutions are independent of the bid increment and tie-breaking rules; and (3) the solutions are invariant to the magnitude of the bids. To address security considerations, a cryptographic protocol is presented for the Price Trajectory Algorithm. It guarantees that only the auctioneer obtains the correct and necessary information from the agents and that no private information leaks between agents. The detection of fraud by the auctioneer is also discussed.
    Skart: A Skewness- and Autoregression-Adjusted Batch-Means Procedure for Simulation Analysis
    (2009-01-30) Tafazzoli Yazdi, Ali; Emily K. Lada, Committee Member; James R. Wilson, Committee Chair; Stephen D. Roberts, Committee Member; David A. Dickey, Committee Member; Yahya Fathi, Committee Member; Natalie M. Steiger, Committee Member
    We discuss Skart, an automated batch-means procedure for constructing a skewness- and autoregression-adjusted confidence interval (CI) for the steady-state mean of a simulation output process in either discrete time (i.e., observation-based statistics) or continuous time (i.e., time-persistent statistics). Skart is a sequential procedure designed to deliver a CI that satisfies user-specified requirements concerning not only the CI’s coverage probability but also the absolute or relative precision provided by its half-length. Skart exploits separate adjustments to the half-length of the classical batch-means CI so as to account for the effects on the distribution of the underlying Student’s t-statistic that arise from skewness (nonnormality) and autocorrelation of the batch means. The skewness adjustment is based on a modified Cornish-Fisher expansion for the classical batch-means Student’s t-ratio, and the autocorrelation adjustment is based on an autoregressive approximation to the batch-means process for sufficiently large batch sizes. Skart also delivers a point estimator for the steady-state mean that is approximately free of initialization bias. The duration of the associated warm-up period (i.e., the statistics clearing time) is based on iteratively applying von Neumann’s randomness test to spaced batch means with progressively increasing batch sizes and interbatch spacer sizes. In an experimental performance evaluation involving a wide range of test processes, Skart compared favorably with other simulation analysis methods—namely, its predecessors ASAP3, WASSP, and SBatch as well as ABATCH, LBATCH, the Heidelberger-Welch procedure, and the Law-Carson procedure. Specifically, Skart exhibited competitive sampling efficiency and substantially closer conformance to the given CI coverage probabilities than the other procedures. Also presented is a nonsequential version of Skart, called N-Skart, in which the user supplies a single simulation-generated series of arbitrary length and specifies a coverage probability for a CI based on that series. In the same set of test processes previously mentioned and for a range of data-set sizes, N-Skart also achieved close conformance to the specified CI coverage probabilities.
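The two half-length adjustments have compact forms. A simplified sketch, assuming a Willink-style Cornish-Fisher skewness correction and a first-order autoregressive variance inflation of the kind the Skart literature describes (the sequential batching, spacer, and warm-up logic are omitted, and the exact constants here are an assumption, not a transcription of the procedure):

```python
import numpy as np
from scipy import stats

def skart_style_ci(batch_means, alpha=0.05):
    """Skewness- and autocorrelation-adjusted CI from batch means.
    A simplified sketch of the two adjustments only."""
    y = np.asarray(batch_means, dtype=float)
    k = len(y)
    xbar, s = y.mean(), y.std(ddof=1)

    # Autoregression adjustment: inflate Var(mean) by (1+phi)/(1-phi),
    # with phi the lag-1 autocorrelation of the batch means.
    d = y - xbar
    phi = (d[:-1] * d[1:]).sum() / (d * d).sum()
    se = s * np.sqrt((1 + phi) / ((1 - phi) * k))

    # Skewness adjustment via a modified Cornish-Fisher expansion:
    # G(z) = (cbrt(1 + 6*beta*(z - beta)) - 1) / (2*beta).
    beta = stats.skew(y, bias=False) / (6.0 * np.sqrt(k))
    def G(z):
        if abs(beta) < 1e-12:
            return z
        return (np.cbrt(1.0 + 6.0 * beta * (z - beta)) - 1.0) / (2.0 * beta)

    t = stats.t.ppf(1.0 - alpha / 2.0, df=k - 1)
    return xbar - G(t) * se, xbar - G(-t) * se  # asymmetric interval

rng = np.random.default_rng(1)
print(skart_style_ci(rng.exponential(2.0, size=64)))  # skewed toy data
```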
    Theory and algorithms for cubic L1 splines
    (2003-02-09) Cheng, Hao; Shu-Cherng Fang, Committee Chair; Henry L.W. Nuttle, Committee Co-Chair; Yahya Fathi, Committee Member; John E. Lavery, Committee Member; Elmor L. Peterson, Committee Member; Hien T. Tran, Committee Member
    In modern geometric modeling, one of the requirements for interpolants is that they 'preserve shape well.' Shape preservation has often been associated with preservation of monotonicity and convexity/concavity. While shape preservation cannot yet be defined quantitatively, it is generally agreed that shape preservation involves eliminating extraneous non-physical oscillation. Classical splines, which exhibit extraneous oscillation, do not 'preserve shape well.' Recently, Lavery introduced a new class of cubic L1 splines. Empirical experiments have shown that cubic L1 splines are capable of providing C¹-smooth, shape-preserving, multi-scale interpolation of arbitrary data, including data with abrupt changes in spacing and magnitude, with no need for monotonicity or convexity constraints, node adjustment, or other user input. However, the shape-preserving capability of cubic L1 splines has not been proved theoretically, and the currently available algorithm provides only an approximation to the coefficients of cubic L1 splines. To lay the groundwork for theoretical analysis and the development of an exact algorithm, this dissertation treats cubic L1 spline problems in a geometric programming framework. Such a framework leads to a geometric dual problem with a linear objective function and convex quadratic constraints. It also provides a linear system for dual-to-primal conversion. We prove that cubic L1 splines preserve shape well, in particular in eliminating non-physical oscillations, without review of raw data or any human intervention. We also show that cubic L1 splines perform well for multi-scale data and preserve linearity and convexity/concavity under mild conditions. An exact algorithm based on the geometric programming model is proposed for solving cubic L1 splines. It decomposes the geometric programming problem into several independent small-sized sub-problems and applies a specialized active set algorithm to solve them. The algorithm is numerically stable, highly parallelizable, and requires only simple algebraic operations.
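The distinction from classical splines is just the norm in the smoothing functional. In Lavery's formulation, an interpolating cubic L1 spline minimizes the L1 norm of the second derivative over the C¹ piecewise cubics with nodes at the data sites (here C denotes that class; classical cubic splines minimize the squared L2 norm instead, which is the source of the extraneous oscillation):

```latex
\min_{z \in \mathcal{C}} \; \int_{x_1}^{x_n} \lvert z''(x) \rvert \, dx
\quad \text{subject to} \quad z(x_i) = y_i, \quad i = 1, \dots, n.
```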
    Topology Design of Large-Scale Optical Networks
    (2002-08-30) Xin, Yufeng; Matt Stallmann, Committee Member; Yahya Fathi, Committee Member; Harry G. Perros, Committee Co-Chair; George N. Rouskas, Committee Chair
    Optical networks consisting of optical cross-connects (OXCs) arranged in some arbitrary topology are emerging as an integral part of the Internet infrastructure. The main functionality of these networks will be to provide reliable end-to-end lightpath connections to large numbers of electronic label switched routers (LSRs). We consider two problems that arise in building such networks. The first problem is related to the topology design of optical networks that can grow to Internet scale, while the second is related to light-tree routing for the provision of optical multicast services. In the first part of the thesis, we present a set of heuristic algorithms to address the combined problem of physical topology design (i.e., determining the number of OXCs required for a given traffic demand and the fiber links among them) and logical topology design (i.e., determining the routing and wavelength assignment for the lightpaths among the LSRs). We then extend our study to take a shared path-based protection scheme into consideration, after presenting a detailed analysis and comparison of different protection strategies. In order to characterize the performance of our algorithms, we have developed lower bounds that can be computed efficiently. We present numerical results for up to 1000 LSRs and for a wide range of system parameters, such as the number of wavelengths per fiber, the number of transceivers per LSR, and the number of ports per OXC. In the second part of the thesis, we study the problem of constructing light-trees under optical-layer power budget constraints, with a focus on algorithms that can guarantee a certain level of quality for the signals received by the destination nodes. We define a new constrained light-tree routing problem by introducing a set of constraints on the source-destination paths to account for power losses at the optical layer. We investigate a number of variants of this problem, characterize their complexity, and develop a suite of corresponding routing algorithms. We find that, in order to guarantee adequate signal quality and to scale to large destination sets, light-trees must be as balanced as possible. Our algorithms are designed to construct balanced trees which, in addition to having good performance in terms of signal quality, also ensure a certain degree of fairness among destination nodes.
    Two-person Investment Game.
    (2010-04-29) Li, Lan; Shu Fang, Committee Chair; Russell King, Committee Member; Ivan Kandilov, Committee Member; Yahya Fathi, Committee Member; Simon Hsiang, Committee Member
