
Browsing by Author "David Dickey, Committee Member"

Now showing 1 - 20 of 23
  • An Accessible Cognitive Modeling Tool for Evaluation of Human-Automation Interaction in the Systems Design Process.
    (2010-11-04) Gil, Guk Ho; David Kaber, Committee Chair; Robert St. Amant, Committee Member; Michael Feary, Committee Member; David Dickey, Committee Member; Yuan-Shin Lee, Committee Member
  • Accounting for Within- and Between-Locus Dependencies in Marker Association Tests
    (2003-06-26) Czika, Wendy Ann; Dennis Boos, Committee Member; David Dickey, Committee Member; Dahlia Nielsen, Committee Member; Bruce S. Weir, Committee Chair; Russell Wolfinger, Committee Member
    The importance of marker association tests has recently been established for locating disease susceptibility genes in the human genome, attaining finer-scaled maps than the linkage variety of tests through the detection of linkage disequilibrium (LD). Many of these association tests were originally defined for biallelic markers under ideal assumptions, with multiallelic extensions often complicated by the covariance among genotype or allele proportions. The well-established allele and genotype case-control tests based on Pearson chi-square test statistics are exceptions since they adapt easily to multiallelic versions; however, each of these has its shortcomings. We demonstrate that the multiallelic trend test is an attractive alternative that lacks these limitations. A formula for marker genotype frequencies that incorporates the coefficients quantifying various disequilibria is presented, accommodating any type of disease model. This enables the simulation of samples for estimating the significance level and calculating sample sizes necessary for achieving a certain level of power. There is a similar complexity in extending the family-based tests of association to markers with more than two alleles. Fortunately, the nonparametric sibling disequilibrium test (SDT) statistic has a natural extension to a quadratic form for multiallelic markers. In the original presentation of the statistic, however, information from one of the marker alleles is needlessly discarded. This is necessary for the parametric form of the statistic due to a linear dependency among the statistics for the alleles, but the nonparametric representation eliminates this dependency. We show how a statistic making use of all the allelic information can be formed. Obstacles also arise when multiple loci affect disease susceptibility. In the presence of gene-gene interaction, single-marker tests may be unable to detect an association between individual markers and disease status. We implement and evaluate tree-based methods for the mapping of multiple susceptibility genes. Adjustments to correlated p-values from markers in LD with each other are also examined. This study of epistatic gene models reveals the importance of three-locus disequilibria, for which we discuss various statistical tests.
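    As a rough, self-contained illustration of the kinds of tests discussed above, the Python sketch below runs the Pearson genotype case-control test and a biallelic Cochran-Armitage trend test on a hypothetical 2x3 genotype table; the counts and scores are made up, and the dissertation's multiallelic extensions and disequilibrium formulas are not reproduced here.

    import numpy as np
    from scipy import stats

    # hypothetical 2x3 table; rows: cases, controls; columns: 0, 1, 2 copies of the risk allele
    table = np.array([[120, 180, 60],
                      [160, 170, 40]], dtype=float)

    # Pearson genotype case-control test (2 df)
    chi2, p_genotype, dof, _ = stats.chi2_contingency(table)

    # Cochran-Armitage trend test with additive scores w = (0, 1, 2)
    w = np.array([0.0, 1.0, 2.0])
    cases, controls = table
    n = cases + controls                    # genotype column totals
    N, R = n.sum(), cases.sum()             # total sample size, total cases
    p_hat = R / N
    U = w @ cases - p_hat * (w @ n)
    var_U = p_hat * (1 - p_hat) * (w @ (w * n) - (w @ n) ** 2 / N)
    z = U / np.sqrt(var_U)
    p_trend = 2 * stats.norm.sf(abs(z))

    print(f"genotype test: chi2={chi2:.2f} (df={dof}), p={p_genotype:.4f}")
    print(f"trend test:    z={z:.2f}, p={p_trend:.4f}")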
  • Analyzing Variances and Correlations of Quantitative Traits in two long term Randomly Mated Soybean Populations.
    (2010-05-13) Recker, Jill; Joseph Burton, Committee Chair; David Dickey, Committee Member; Andrea Cardinal, Committee Member
  • Assessing the Effects of Variability in Interest Rate Derivative Pricing
    (2007-11-22) Crotty, Michael Thomas; Denis Pelletier, Committee Member; Sujit Ghosh, Committee Member; David Dickey, Committee Member; Peter Bloomfield, Committee Chair
  • Associations Between Gaussian Markov Random Fields and Gaussian Geostatistical Models with an Application to Model the Impact of Air Pollution on Human Health
    (2006-02-19) Song, Hae-Ryoung; Sujit Ghosh, Committee Co-Chair; David Dickey, Committee Member; Jerry Davis, Committee Member; Montserrat Fuentes, Committee Co-Chair; Peter Bloomfield, Committee Member
    Gaussian geostatistical models (GGMs) and Gaussian Markov random fields (GMRFs) are two distinct approaches commonly used in modeling point-referenced and areal data, respectively. In this dissertation, the relations between GMRFs and GGMs are explored based on approximations of GMRFs by GGMs, and vice versa. The proposed framework for the comparison of GGMs and GMRFs is based on minimizing the distance between the corresponding spectral density functions. In particular, the Kullback-Leibler discrepancy of spectral densities and the chi-squared distance between spectral densities are used as the metrics for the approximation. The proposed methodology is illustrated using empirical studies. As an application, we model associations between speciated fine particulate matter (PM) and mortality. Mortality counts and PM are obtained at county and point levels, respectively. To combine the variables with different spatial resolutions, we aggregate PM to the county level. The aggregated PM are modeled using GMRFs, and associations between PM and mortality are investigated within a Bayesian hierarchical spatio-temporal framework. This model is applied to speciated PM₂.₅ and monthly mortality counts over the entire U.S. region for 1999-2000. We obtain high relative risks of mortality associated with PM₂.₅ in the Eastern and Southern California area. Particularly, NO₃ and crustal materials have greater health effects in the Western U.S., while SO₄ and NH₄ have more of an impact in the Eastern U.S. We show that the average risk associated with PM₂.₅ is approximately twice what we obtained for PM₁₀.
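    The sketch below illustrates, in one dimension and with made-up parameter values, the kind of spectral matching described above: a GMRF-type (AR(1)) spectral density is fitted to the spectral density of an exponential-covariance geostatistical model by minimizing a discretized KL-type discrepancy. It is an assumption-laden toy version, not the dissertation's two-dimensional formulation.

    import numpy as np
    from scipy.optimize import minimize_scalar

    omega = np.linspace(1e-3, np.pi, 2000)          # frequency grid on (0, pi)

    def f_exponential(omega, rng_par=3.0):
        # spectral density of a unit-variance process with covariance exp(-|h|/rng_par)
        a = 1.0 / rng_par
        return a / (np.pi * (a ** 2 + omega ** 2))

    def f_ar1(omega, phi):
        # spectral density of a unit-variance AR(1), the 1-D Markov (GMRF-type) analogue
        return (1 - phi ** 2) / (2 * np.pi * (1 - 2 * phi * np.cos(omega) + phi ** 2))

    def kl_discrepancy(phi):
        # discretized Whittle/KL-type discrepancy between the two spectral densities
        r = f_exponential(omega) / f_ar1(omega, phi)
        return np.mean(r - np.log(r) - 1) * (omega[-1] - omega[0])

    res = minimize_scalar(kl_discrepancy, bounds=(0.01, 0.99), method="bounded")
    print(f"best-fitting Markov/AR(1) coefficient: phi = {res.x:.3f}")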
  • Computational Modeling of Cell Signaling Network Using Hill Function and Markov Chain Monte Carlo Methods.
    (2010-07-06) Davis, Xin; Roger McCraw, Committee Chair; Miles See, Committee Chair; Charles Smith, Committee Member; David Dickey, Committee Member; Vytas A. Bankaitis, Committee Member; Maria Correa, Committee Member; Jason Haugh, Committee Member
  • The Effect of Obesity on Trunk Kinematics and Ground Reaction Forces during Lifting
    (2006-05-23) Xu, Xu; David Dickey, Committee Member; Simon Hsiang, Committee Co-Chair; Gary Mirka, Committee Co-Chair
    The prevalence of obesity is increasing worldwide. In fact, in the United States, obesity has been recognized as an epidemic. Obesity has been shown to increase the risk of physical injury and illness. Among these physical disorders, low back pain (LBP) is one of the most common phenomena in both obese and non-obese individuals. However, the relationship between obesity and LBP is not fully understood, and the evidence for a causal link between them is insufficient. The objective of this research was to evaluate the differences between people of normal weight and obese people in measures of trunk kinematics and ground reaction force during a lifting task. The main hypothesis of this study was that obese people would have a higher mean value and a higher variability of these measures than people of normal weight. Two subject groups of six subjects each were used in this research: Group I was the normal-weight group (BMI<25), and Group II was the obese group (BMI>30). In the experiment, the subjects were asked to perform a lifting task under two levels of load (10% and 25% of maximum lifting capacity) and two levels of starting asymmetric angle (0° and 45°). The Lumbar Motion Monitor was used to collect the trunk kinematics data and two force plates were used to collect the ground reaction force and moment data. To test variability, the modified Levene's test was employed. MANOVA and ANOVA were used to assess the effects of BMI, load, and angle on these measures. The results showed that BMI had a significant effect on the mean values of several kinematics parameters. From BMI<25 to BMI>30, the rotational velocity increased by 59.2%; the rotational acceleration increased by 57.6%; the sagittal velocity increased by 30.4%; sagittal acceleration increased by 50.5% (all statistically significant at the p<0.05 level). However, the results did not support the hypothesis that BMI would affect variability in these measures. This study provides quantitative data describing lifting task performance for obese people, and shows that obese people have higher sagittal velocity, sagittal acceleration, rotational velocity, and rotational acceleration during a lifting task than people of normal weight, which may lead to higher forces on the spine. The results indicate that standard safety evaluations of lifting jobs, based on people of normal weight, may not be appropriate for obese people, since their different lifting pattern may increase the risk of injury. The data presented in this work are particularly important as the general workforce continues to get heavier.
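    For concreteness, the sketch below runs a simplified version of the factorial analysis the abstract names (an ordinary three-factor ANOVA on one hypothetical trunk-kinematics measure); the data are simulated, the design is treated as purely between-subjects, and the repeated-measures structure and modified Levene procedure of the actual study are not reproduced.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(0)
    rows = []
    for bmi in ("normal", "obese"):
        for load in (10, 25):               # % of maximum lifting capacity
            for angle in (0, 45):           # starting asymmetry angle (degrees)
                base = 40.0 if bmi == "normal" else 60.0   # hypothetical rotational velocity (deg/s)
                rows += [{"bmi": bmi, "load": load, "angle": angle,
                          "rot_vel": base + rng.normal(scale=8.0)} for _ in range(6)]
    df = pd.DataFrame(rows)

    # three-factor ANOVA for the mean of one kinematics measure
    model = smf.ols("rot_vel ~ C(bmi) * C(load) * C(angle)", data=df).fit()
    print(anova_lm(model, typ=2))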
  • Estimating the Number of Clusters in Cluster Analysis
    (2007-03-08) Dasah, Julius Berry; David Dickey, Committee Member; Leonard Stefanski, Committee Co-Chair; Dennis Boos, Committee Chair; Jason Osborne, Committee Member
    In many applied fields of study such as medicine, psychology, ecology, taxonomy and finance one has to deal with massive amounts of noisy but structured data. A question that often arises in this context is whether or not the observations in these data fall into some "natural" groups, and if so, how many groups? This dissertation proposes a new quantity, called the maximal jump function, for assessing the number of groups in a data set. The estimated maximal jump function measures the excess transformed distortion attainable by fitting an extra cluster to a data set. By distortion, we mean the average distance between each observation and its nearest cluster center. Distortion d_g, in the above sense, is a measure of the error incurred by fitting g clusters to a data set. Three stopping rules based on the maximal jump function are proposed for determining the number of groups in a data set. A new procedure for clustering data sets with a common covariance structure is also introduced. The proposed methods are tested on a wide variety of real data including DNA microarray data sets as well as on high-dimensional simulated data possessing numerous "noisy" features/dimensions. Also, to show the effectiveness of the proposed methods, comparisons are made to some well known clustering methods.
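    The sketch below computes the distortion d_g for a range of g with k-means on made-up two-dimensional data and looks at the jumps in a transformed distortion as clusters are added. It uses one common power transformation for illustration only; the maximal jump function and stopping rules proposed in the dissertation are not reproduced here.

    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # three well-separated Gaussian groups in two dimensions (made-up data)
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2))
                   for c in ([0, 0], [5, 0], [0, 5])])

    distortion = []
    for g in range(1, 9):
        km = KMeans(n_clusters=g, n_init=10, random_state=0).fit(X)
        # distortion d_g: average distance from each point to its nearest cluster centre
        distortion.append(cdist(X, km.cluster_centers_).min(axis=1).mean())

    # one common transformation uses d_g**(-p/2), with p the data dimension
    p = X.shape[1]
    transformed = np.asarray(distortion) ** (-p / 2)
    jumps = np.diff(transformed, prepend=0.0)       # jump from adding one more cluster
    print("largest jump occurs at g =", int(np.argmax(jumps)) + 1)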
  • Implications of Elevated Atmospheric Carbon Dioxide and Tropospheric Ozone for Water Use in Stands of Trembling Aspen and Paper Birch.
    (2010-12-02) Rhea, Lee; Howard Allen, Committee Chair; John King, Committee Chair; William Winner, Committee Member; David Dickey, Committee Member
  • An Investigation on the Interactivity between Suspended-load Backpack and Human Gait.
    (2008-07-08) Xu, Xu; Simon Hsiang, Committee Chair; David Kaber, Committee Member; David Dickey, Committee Member; Gary Mirka, Committee Member
  • Latent Group-Based Interaction Effects in Unreplicated Factorial Experiments.
    (2010-09-03) Franck, Christopher; Jason Osborne, Committee Chair; Sujit Ghosh, Committee Member; Jeffrey Thompson, Committee Member; David Dickey, Committee Member
  • Multivariate Robust Estimation of DCC-GARCH Volatility Model.
    (2010-05-07) LaBarr, Aric; Peter Bloomfield, Committee Chair; Howard Bondell, Committee Member; Denis Pelletier, Committee Member; David Dickey, Committee Member
  • New Methods using Levene Type Tests for Hypotheses about Dispersion Differences
    (2006-10-25) Liu, Xiaoni; David Dickey, Committee Member; Jason Osborne, Committee Member; Cavell Brownie, Committee Co-Chair; Dennis Boos, Committee Chair
    Testing equality of scale arises in many research areas including clinical data analysis. In contrast to procedures for tests on means, tests for variances derived assuming normality of the parent populations are highly non-robust to non-normality. Levene type tests are well known to be robust tests for equality of scale for the one-way design; the current standard test uses the ANOVA F test on absolute deviations from the sample medians. We first develop a new modified version of the standard Levene test that improves its null performance and power. Applying the Box-Anderson correction to the ANOVA F test further improves the performance. We also extend the robust Levene type tests to the two-way design with one observation per cell, the randomized complete block design (RCB). Currently, the available Levene type tests for RCB designs employ either standard ANOVA F tests on the absolute values of ordinary least squares (OLS) residuals, or weighted least squares (WLS) ANOVA F tests on the OLS residuals. These two tests can be liberal, especially under non-normal distributions. Instead, we use OLS ANOVA F tests on the absolute values of residuals obtained from models fit by least absolute deviation (LAD) estimation and by Huber Proposal 2 M-estimation. We also apply bootstrap methods to these Levene type tests and compare by simulation these Levene type tests in terms of robustness and power.
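    For reference, the sketch below runs the baseline Levene-type test described above, an ANOVA F test on absolute deviations from the group medians, on simulated one-way data, both by hand and via the library routine; the dissertation's modifications (the Box-Anderson correction, RCB extensions, and LAD/Huber residuals) are not shown.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # three groups with equal means, one of which has twice the spread
    groups = [rng.normal(loc=0.0, scale=s, size=40) for s in (1.0, 1.0, 2.0)]

    # by hand: ANOVA F test on |x_ij - median_i|
    abs_dev = [np.abs(g - np.median(g)) for g in groups]
    F, p_by_hand = stats.f_oneway(*abs_dev)

    # the same test via the library routine
    W, p_levene = stats.levene(*groups, center="median")

    print(f"F on absolute deviations from medians: {F:.3f} (p = {p_by_hand:.4f})")
    print(f"scipy levene with median centering:    {W:.3f} (p = {p_levene:.4f})")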
  • Palatability of Feed Ingredients in Nursery Pigs
    (2008-12-05) Seabolt, Brynn Shea; Eric van Heugten, Committee Chair; Kimberly Ange-van Heugten, Committee Member; David Dickey, Committee Member; Sung Woo Kim, Committee Member
    The objectives of this research were: 1) To evaluate nursery pig preference for diets containing various inclusion levels of dried distillers grains with solubles (DDGS), high protein dried distillers grains (HPDDG) or corn gluten meal (CGM); 2) To evaluate the effect of different qualities of DDGS on nursery pig preference; and 3) To evaluate growth performance and feed preference for diets containing various inclusion levels of DDGS with or without flavor supplementation. For the first objective, 3 double-choice preference experiments were performed using a 2-day assay. In experiment 1, preference for diets containing DDGS (0, 10, 20, and 30%) was examined. A linear decrease (P<0.001) in preference was found with increasing inclusion levels of DDGS on day 1, day 2 and overall. In experiment 2, preference for diets containing CGM (0, 5, 10, and 15%) was examined. On day 1 and overall, a linear decrease (P<0.06) in preference was found with increasing inclusion level of CGM. Preferences for all CGM containing diets were lower (P<0.05) than 50% on day 1, day 2 and overall, indicating preference of the control diet over CGM containing diets, as no preference would result in equal consumption of both feeds (50% of the control feed and 50% of the test feed). In experiment 3, preference for diets containing HPDDG (0, 10, 20, and 30%) was examined. A linear decrease (P<0.001) in preference was found with increasing inclusion levels of HPDDG on day 1, day 2 and overall, and preference for all HPDDG containing diets was less than 50% on day 1, day 2 and overall (P<0.0001). For the second objective, 2 experiments were performed. In experiment 1, preference for diets containing 30% good or poor quality DDGS was examined. DDGS sources were obtained from mills with known good and poor quality DDGS. Color of the sources was observed to ensure poor versus good quality, with the darker source being poor and the lighter source being good quality. Preference for the control diet was not different from the 30% good or poor quality DDGS diets. However, the diet containing 30% poor quality DDGS was preferred (P<0.05) over the diet containing 30% good quality DDGS on day 1, day 2 and overall. In experiment 2, preference for diets containing good quality DDGS (0, 10, or 20%) or poor quality DDGS (0, 10, or 20%) was examined. Inclusion of good quality DDGS linearly decreased (P<0.01) preference on day 1, 2, and overall. For the poor quality DDGS, inclusion of 20% resulted in a preference lower (P<0.05) than 50%. The negative impact of good DDGS on preference was greater compared to the poor DDGS, indicating that poor DDGS may have a higher preference compared to good DDGS. For the third objective, 2 experiments were performed. In experiment 1, growth performance of nursery pigs fed diets containing various inclusion levels of DDGS (0, 10, and 20%) in the presence or absence of flavor was examined. Average daily gain (ADG) and average daily feed intake (ADFI) in the Starter 1 phase were negatively affected (P<0.06) by DDGS inclusion. No other performance parameters, such as feed efficiency and body weight, were affected by DDGS inclusion. ADFI was increased (P=0.02) by flavor in the Starter 1 phase only. No other performance parameters were affected by flavor. In experiment 2, feed preference for diets containing various inclusion levels of DDGS in the presence or absence of flavor was examined. Preference for unflavored and flavored DDGS containing diets was less than preference for the control diet. Presence of flavor decreased preference regardless of DDGS inclusion. Overall, these studies indicate that DDGS, CGM and HPDDG containing diets are not preferred over control diets with corn and soybean meal. However, poor quality DDGS may be preferred over good quality DDGS. Also, addition of the flavor used in these studies seems to exacerbate the negative effect of DDGS on palatability. Evaluation of volatile components via gas chromatography and headspace analysis in each DDGS and HPDDG sample indicated that compounds associated with rancidity are negatively correlated with palatability, and that the smoky, burnt characteristic of furfural may be palatable to pigs.
  • Relationship between Backtest and Coping Styles in Pigs.
    (2010-08-09) Spake, Jessica; Joseph Cassady, Committee Chair; David Dickey, Committee Member; Charles Whisnant, Committee Member; William Flowers, Committee Member
  • Second Order Approximations to GMM Statistics
    (2006-05-08) Kyriakoulis, Konstantinos; Denis Pelletier, Committee Member; David Dickey, Committee Member; Atsushi Inoue, Committee Member; Alastair Hall, Committee Chair
    This thesis uses second order approximations to study the finite sample behavior of statistics that are employed under the GMM setting. We present a Nagar (1959) approximation to the MSE of the IV estimates, when the disturbances are elliptically distributed. The accuracy of the approximation is illustrated through a comparison of the Nagar-type expansion with the exact finite sample MSE, which was derived in Knight (1985). The comparison suggests that second order approximations can be quite accurate, even when the sample size is 60 observations. This, along with the fact that exact results are more difficult to derive and harder to interpret, suggests that second-order approximations are powerful alternatives to the standard, first-order, asymptotic approximations. We proceed by analyzing the finite sample behavior of the LM statistic, as it is employed under the GMM setting. This is achieved through a second order expansion, known as an Edgeworth expansion, of the distribution of the LM statistic. Our analysis suggests that the passage from the finite to the limiting distribution of the LM test is based on several measures, such as the variance-covariance matrix of the moments and its first derivative, the fourth product moment of the population moment condition, the covariance between the moments and their variance, the number of parameters, and the number of moments. We conclude with a simulation study that illustrates how these measures drive the passage from the finite sample to the asymptotic distribution.
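    The sketch below is a small Monte Carlo look at the kind of quantity the approximations above target, the finite-sample MSE of a simple two-stage least squares (IV) estimator at a sample size of 60; the model and parameter values are made up, and the Nagar and Edgeworth calculations themselves are not implemented.

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps, beta = 60, 5000, 1.0

    def tsls_once():
        z = rng.normal(size=(n, 2))                           # two instruments
        u, v = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n).T
        x = 0.5 * z[:, 0] + 0.5 * z[:, 1] + v                 # endogenous regressor
        y = beta * x + u
        # 2SLS: project x on the instruments, then regress y on the fitted values
        x_hat = z @ np.linalg.lstsq(z, x, rcond=None)[0]
        return (x_hat @ y) / (x_hat @ x_hat)

    estimates = np.array([tsls_once() for _ in range(reps)])
    print(f"Monte Carlo MSE of the IV estimate at n = {n}: "
          f"{np.mean((estimates - beta) ** 2):.4f}")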
  • Short Term Electric Load Forecasting.
    (2010-09-10) Hong, Tao; Mesut Baran, Committee Chair; Simon Hsiang, Committee Chair; David Dickey, Committee Member; Shu Fang, Committee Member
  • Simulation-Based Estimation of Spatial Price Equilibrium Models and Market Integration
    (2003-09-02) Tastan, Huseyin; Atsushi Inoue, Committee Member; Nicholas Piggott, Committee Co-Chair; Paul Fackler, Committee Co-Chair; David Dickey, Committee Member
    This dissertation explores the applicability of recently developed simulation-based econometric methods to the analysis of spatial price determination and integration of markets. As such, a measure of market integration is developed within the context of the well-known point-location competitive price equilibrium model. Two markets are said to be integrated to the extent that an excess demand shock arising in one region is transmitted to the other region. This model imposes market efficiency (i.e., the law of one price) and explicitly accounts for the nonlinear relationship between prices and underlying variables. A critical distinction from most of the current literature is that market integration is defined and estimated as a degree rather than a binary outcome. The simulation-based estimation approach to spatial market integration can be outlined as follows: summarize the information contained in prices in an auxiliary (reduced) model and match the parameters of the auxiliary model obtained from observed data to the parameters of the same model obtained from simulated data. The particular estimation framework used is known as the indirect inference methodology. The auxiliary model is chosen as a finite order Gaussian vector auto-regression for prices. The underlying variables, autarky prices and transaction costs, are modelled as a low-order vector auto-regression. Transaction costs on each link are modelled as a function of a small number of common factors. After the structural parameter estimates are obtained, the measure of market integration is approximated by simple Monte Carlo integration methods. The results from a set of controlled experiments indicate that the estimation strategy works reasonably well. It is also shown that simulation-based estimation methods can be useful for two-location switching regime models with serially correlated underlying variables.
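    The sketch below illustrates the indirect-inference idea in a deliberately tiny setting: a made-up MA(1) structural model, an auxiliary summary consisting of the lag-1 autocorrelation and the variance, and a structural parameter chosen so that simulated and observed auxiliary statistics match. The spatial equilibrium model and VAR auxiliary model of the dissertation are not reproduced.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    n = 500

    def simulate_ma1(shocks, theta):
        # structural model: y_t = e_t + theta * e_{t-1}
        return shocks[1:] + theta * shocks[:-1]

    def auxiliary(y):
        # auxiliary parameters: lag-1 autocorrelation and sample variance
        return np.array([np.corrcoef(y[:-1], y[1:])[0, 1], y.var()])

    y_obs = simulate_ma1(rng.normal(size=n + 1), 0.6)   # "observed" data, true theta = 0.6
    beta_obs = auxiliary(y_obs)

    E = rng.normal(size=(20, n + 1))                    # fixed simulation shocks (common random numbers)

    def distance(theta):
        sims = np.array([auxiliary(simulate_ma1(e, theta)) for e in E])
        return np.sum((sims.mean(axis=0) - beta_obs) ** 2)

    res = minimize_scalar(distance, bounds=(-0.95, 0.95), method="bounded")
    print(f"indirect-inference estimate of theta: {res.x:.3f}")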
  • A Stochastic Volatility Model and Inference for the Term Structure of Interest Rates
    (2007-04-25) Liu, Peng; A. Ronald Gallant, Committee Member; Denis Pelletier, Committee Member; William H. Swallow, Committee Member; Peter Bloomfield, Committee Chair; David Dickey, Committee Member
    This thesis builds a stochastic volatility model for the term structure of interest rates, which is also known as the dynamics of the yield curve. The main purpose of the model is to propose a parsimonious and plausible approach to capture some characteristics that conform to empirical evidence and conventions. Eventually, the development reaches a class of multivariate stochastic volatility models, which is flexible and extensible and admits an inexpensive inference approach. The thesis points out some inconsistencies between conventions and practice. First, yield curves and related curves are conventionally smooth, but in the literature in which these curves are modeled as random functions, the co-movement of points on the curve is usually assumed to be governed by covariance structures that do not generate smooth random curves. Second, it is commonly agreed that constant volatility is not a sound assumption, but stochastic volatilities have not been commonly considered in related studies. Regarding the above problems, we propose a multiplicative factor stochastic volatility model, which has a relatively simple structure. Though it is apparently simple, the inference is not, because of the presence of stochastic volatilities. We first study the sequential-Monte-Carlo-based maximum likelihood approach, which extends the perspectives of Gaussian linear state-space modeling. We propose a systematic procedure that guides the inference based on this approach. In addition, we also propose a saddlepoint approximation approach, which integrates out the states; the state then propagates by an exact Gaussian approximation. The approximation works reasonably well for univariate models. Moreover, it works even better for the multivariate model that we propose, because we can exploit the asymptotic properties of the saddlepoint approximation.
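    As a building-block illustration of the sequential Monte Carlo approach mentioned above, the sketch below runs a bootstrap particle filter to estimate the log-likelihood of a basic univariate stochastic volatility model with made-up parameters; it is not the multiplicative factor model or the saddlepoint approximation developed in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def sv_loglik(y, mu, phi, sigma_eta, n_particles=2000):
        # bootstrap particle filter for h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t,
        # with measurement y_t ~ N(0, exp(h_t)); returns a simulation estimate of the log-likelihood
        h = rng.normal(mu, sigma_eta / np.sqrt(1 - phi ** 2), size=n_particles)
        loglik = 0.0
        for t in range(len(y)):
            h = mu + phi * (h - mu) + sigma_eta * rng.normal(size=n_particles)
            logw = -0.5 * (np.log(2 * np.pi) + h + y[t] ** 2 * np.exp(-h))
            shift = logw.max()
            w = np.exp(logw - shift)
            loglik += shift + np.log(w.mean())
            idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())   # resample
            h = h[idx]
        return loglik

    # simulate a short series from the same model and evaluate the likelihood at the truth
    mu, phi, sigma_eta, T = -1.0, 0.95, 0.2, 300
    h_path = np.empty(T)
    h_path[0] = mu
    for t in range(1, T):
        h_path[t] = mu + phi * (h_path[t - 1] - mu) + sigma_eta * rng.normal()
    y = np.exp(h_path / 2) * rng.normal(size=T)
    print(f"particle-filter log-likelihood at the true parameters: {sv_loglik(y, mu, phi, sigma_eta):.1f}")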
  • Three Essays on Trend Analysis and Misspecification in Structural Econometric Models
    (2003-09-02) Doorn, David John; David Flath, Committee Member; David Dickey, Committee Member; Alastair Hall, Committee Chair; John Seater, Committee Member
    The purpose of this research has been to look into several econometric issues of concern to researchers doing applied work in macroeconomics. The first essay looks at Bureau of Economic Analysis data on inventories and sales of finished goods often used in studies of inventory behavior. Applying recently developed methods, the series are rigorously tested to determine their stationarity properties. Results indicate that neither first differencing nor linearly detrending the data is appropriate. For most series a trend function with one or more breaks offers a better fit and also generates stationarity. The results are used to determine the impact on estimation in a simple production-smoothing model of inventory behavior. The impact of different trend specifications on univariate forecasting of inventories is also considered. The second essay considers an alternative method of detrending time series data — the Hodrick-Prescott (HP) filter. Previous research has shown that HP filtering can have serious adverse consequences when used to analyze co-movements between data series at business cycle frequencies. Despite this, the filter has also been used to induce stationarity in a data series prior to estimation of structural econometric models. Little work has been done in analyzing the possible effects this may have on parameter estimates from such models. A simulation study is conducted to assess the impact of HP filtering on parameter estimation and a comparison is made to other detrending methods. It is shown that the HP filter induces bias in the parameter estimates and also increases the root mean squared error of the estimates from the simulations. In addition, there is some adverse impact on the size of certain test statistics. The final essay looks at the impact of misspecification on estimation results from a structural econometric model when using a Generalized Method of Moments estimator. Simulated data consistent with a particular specification of the model is used to estimate two misspecified versions. It is shown that misspecification causes the probability limit of the estimator to differ from the true value. It is further shown that a popular specification test performs poorly in detecting the misspecification. An alternative method of model selection is shown to perform far better. Finally, because the use of conventional asymptotic theory is not appropriate in misspecified models, a recently proposed alternative asymptotic theory is tested to determine whether there is improvement in the ability to perform inference on the parameters from misspecified models.
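    The sketch below shows the detrending step the second essay studies, applying the Hodrick-Prescott filter (with the standard quarterly smoothing parameter) to a simulated trend-plus-cycle series; the series and its parameters are made up and do not reproduce the essay's simulation design.

    import numpy as np
    from statsmodels.tsa.filters.hp_filter import hpfilter

    rng = np.random.default_rng(0)
    T = 200
    trend_true = 0.05 * np.arange(T)                  # slow deterministic trend
    cycle_true = np.zeros(T)
    for t in range(1, T):                             # persistent AR(1) cyclical component
        cycle_true[t] = 0.8 * cycle_true[t - 1] + rng.normal(scale=0.5)
    y = trend_true + cycle_true

    cycle_hp, trend_hp = hpfilter(y, lamb=1600)       # standard quarterly smoothing parameter
    corr = np.corrcoef(cycle_hp, cycle_true)[0, 1]
    print(f"correlation of the HP cycle with the true cycle: {corr:.2f}")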