Browsing by Author "Jason A. Osborne, Committee Member"
Now showing 1 - 13 of 13
- Ammonia Emission from Stored Broiler Cake(2009-08-10) Yao, Haiyan; Dan H. Willits, Committee Member; Philip W. Westerman, Committee Member; Jason A. Osborne, Committee Member; Sanjay B. Shah, Committee Chair; Wayne P. Robarge, Committee Member; Lingjuan Wang, Committee Member. Ammonia emission from animal feeding operations has potential negative impacts on the environment and public health, and it also reduces the nutrient value of animal waste. When conditions are not suitable for land application, broiler cake (or litter) may be stored in stockpiles, which may contribute to ammonia emission. In this study, summer and winter ammonia emission factors were developed for broiler cake stockpiles stored in a naturally ventilated shed. A lab experiment measured relative ammonia emissions as affected by type of cover and depth of cake. Scrubbers were used to measure ammonia concentration in both the field and lab studies. In the field, the integrated horizontal flux (IHF) method and Fick’s law of diffusion were used to determine ammonia emissions due to forced and natural convection, respectively. The ammonia emission due to natural convection was <0.01% of total emission; however, it may be necessary to calculate emissions based on concentrations measured only during conditions of forced convection. In summer, the estimated total ammonia-N loss was 0.8% of total N. In winter, the total ammonia loss was 1.4% of total N during the first 7 d and 2.5% for the whole 15-d period. The estimated average daily ammonia emission factor in summer (7 d) was 24.5 g NH3-N/m3-d of cake or 7.0 g NH3-N/AU (500 kg LW)-d. The estimated daily ammonia emission factor for the first 7 d in winter was 35.6 g NH3-N/m3-d or 42.5 g NH3-N/AU-d. The total ammonia lost during the 15-d winter study was 33.8 g NH3-N/m3-d or 40.2 g NH3-N/AU-d. 
Ammonia losses from the tarp-covered cake were significantly lower than those from the control and double-depth treatments, by 45% and 49%, respectively, at the end of the study. Ammonia losses (g/m3) are lower from stockpiles with lower surface area per unit volume.
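The per-volume and per-animal-unit emission factors quoted above are averages of total NH3-N loss over the cake volume (or animal units) and the storage period. A minimal sketch of that arithmetic; the input values here are hypothetical illustrations, not figures from the study:

```python
def emission_factor(total_nh3_n_g, basis, days):
    """Average daily emission factor: total NH3-N loss (g) divided by
    the basis (m3 of cake, or animal units) and the storage days."""
    return total_nh3_n_g / (basis * days)

# Hypothetical inputs: 1715 g NH3-N lost from 10 m3 of cake over 7 d.
factor = emission_factor(total_nh3_n_g=1715.0, basis=10.0, days=7)
print(factor)  # 24.5 (g NH3-N/m3-d)
```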
- Collision Models for Multilane Highway Segments Incorporating the Effects of Curbs(2008-05-16) Baek, Jongdae; Jason A. Osborne, Committee Member; Billy M. Williams, Committee Member; Nagui M. Rouphail, Committee Member; Joseph E. Hummer, Committee Chair. The main objective of this study was to develop valid statistical collision models for multilane highway segments with or without curbs. For this, road geometric, traffic, and collision data for three years were collected. The data include 2,274 collisions and 885 injury collisions that occurred on 191.85 miles of 199 directional segments. A new modeling method, introducing variables into the model one by one in a multiplicative form, was applied. A nonlinear optimization algorithm estimating parameters from a negative binomial log-likelihood function was adopted for the modeling. The functional form of each variable to be introduced was determined from the relationship between the recorded number of collisions and the number of collisions predicted by the current model without that variable. The integrate-differentiate method was applied to find candidate functional forms for each variable. Model selection was based on the -2 log likelihood and BIC statistics. The cumulative residuals (CURE) plot method was adopted for checking the goodness of fit of the models. As a result of the modeling efforts, the annual average daily traffic, access point density, shoulder width, and shoulder type variables were introduced into the final model for total collisions. The same variables, except shoulder type, were introduced into the injury collision model. Overall, it appears that curbs are associated with fewer total collisions and no change in injury collisions compared with no curbs on the sampled road segments. The models developed in this study were based only on data from North Carolina and a limited number of variables. 
The developed models can be improved in the future by collecting data on more miles, by bringing more explanatory variables into the models, and by using data from other states. Additionally, the characteristics of vehicle speeds on multilane highways were analyzed and compared. The results showed that mean speeds at the non-curbed sites were about 2 to 3 mph higher than at the curbed sites.
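The fitting approach described in this abstract, maximizing a negative binomial log-likelihood over a multiplicative model with a nonlinear optimizer, can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the study's code; the variable names, exposure term, and starting values are assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
n = 200
aadt = rng.uniform(5000, 40000, n)            # annual average daily traffic
length = rng.uniform(0.2, 2.0, n)             # segment length (mi), exposure
y = rng.poisson(0.002 * aadt**0.6 * length)   # synthetic collision counts

def nb_nll(theta):
    """Negative binomial (NB2) negative log-likelihood for the
    multiplicative model mu = exp(b0) * AADT^b1 * length."""
    b0, b1, log_a = theta
    a = np.exp(log_a)                         # overdispersion parameter
    mu = np.exp(b0 + b1 * np.log(aadt)) * length
    return -np.sum(gammaln(y + 1/a) - gammaln(1/a) - gammaln(y + 1)
                   + y * np.log(a * mu / (1 + a * mu))
                   - np.log(1 + a * mu) / a)

res = minimize(nb_nll, x0=[-6.0, 0.6, -2.0], method="Nelder-Mead")
b0_hat, b1_hat, _ = res.x
```

Model comparison by -2 log likelihood and BIC, as in the abstract, follows directly: -2 log likelihood is `2 * res.fun`, and BIC adds a `k * log(n)` penalty for the k fitted parameters.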
- Contagion in Financial Markets: Two Statistical Approaches(2004-08-17) Rao, Harshavardhana; Peter Bloomfield, Committee Chair; Alastair R. Hall, Committee Member; David A. Dickey, Committee Member; Jason A. Osborne, Committee Member. Financial markets in different countries undergo crises at one time or another. These crises can have different causes, but they can affect other markets through trade relations and capital mobility. Some crises affect markets in other countries more than market fundamentals would dictate. We model this phenomenon, defined as contagion, using two approaches, a one-factor model and volatility spillover, and compare them.
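In a one-factor framework of the kind mentioned first, each market's return loads on a common factor, and contagion shows up as residual co-movement beyond what the factor explains. A minimal sketch on synthetic data (the factor, loadings, and series here are assumptions, not the study's data or its exact test):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=500)                          # common (global) factor
r1 = 1.2 * f + rng.normal(scale=0.5, size=500)    # market 1 returns
r2 = 0.8 * f + rng.normal(scale=0.5, size=500)    # market 2 returns

# Strip out the common factor by OLS, then check residual correlation.
# Under a pure one-factor model it should be near zero; a significantly
# higher value during crisis periods would suggest contagion.
beta1 = np.polyfit(f, r1, 1)[0]
beta2 = np.polyfit(f, r2, 1)[0]
resid_corr = np.corrcoef(r1 - beta1 * f, r2 - beta2 * f)[0, 1]
print(round(resid_corr, 3))
```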
- Genetic and Phenotypic Characterization of Maize Germplasm Resources: Ex-PVPA Inbreds, NCSU Inbreds, and Elite Exotic Inbreds(2008-12-19) Nelson, Paul Thomas; Major M. Goodman, Committee Chair; James B. Holland, Committee Member; J. Paul Murphy, Committee Member; Jason A. Osborne, Committee Member. Maize (Zea mays L.) germplasm resources are characterized to illuminate their usefulness and proper placement for temperate maize breeding. Three germplasm pools are examined: 1) maize inbreds with expired U.S. plant variety protection certificates (Ex-PVPA), 2) the North Carolina State University maize inbred line releases, and 3) elite unadapted tropical maize inbreds. We used single nucleotide polymorphism (SNP) markers to evaluate the relationships and population structure among 92 ex-PVPA inbred lines in relation to 17 well-known public inbreds. Based on UPGMA clustering, principal component analysis, and model-based clustering, we identified six primary genetic clusters represented by the prominent inbred lines B73, Mo17, PH207, A632, Oh43, and B37. We also determined the genetic background of ex-PVPA inbreds with conflicting, ambiguous, or undisclosed pedigrees. We assessed genetic diversity across subsets of ex-PVPA lines and concluded that the ex-PVPA lines are no more diverse than the public set evaluated here. The NCSU maize breeding germplasm represents a potentially useful resource for maize improvement and diversity in the U.S. While the NC maize inbreds can generally be classified into five germplasm pools (Lancaster, temperate-adapted all-tropical (TAAT), Lancaster × Tropical, Stiff Stalk, and Southern non-Stiff Stalk), analysis of detailed pedigree records and molecular markers reveals additional substructure within each of these pools. 
There is general agreement among the four cluster analyses performed (three on SNP data and one on pedigree-derived coefficients of coancestry) as to the organization of this substructure. We performed topcross yield trial evaluation for 128 elite tropical maize inbreds from these institutions and 15 temperate-adapted all-tropical NC maize inbreds. We report not only performance for yield and other traits of agronomic importance, but also heterotic patterns among many of these lines. We maintain, as reported in previous studies conducted at NCSU, that tropical germplasm, whether adapted or unadapted, generally combines equally well with either Stiff Stalk or non-Stiff Stalk U.S. maize germplasm.
- Information Needs of Developers for Program Comprehension during Software Maintenance Tasks(2008-12-16) Layman, Lucas Michael; Laurie A. Williams, Committee Chair; Robert St. Amant, Committee Co-Chair; Tao Xie, Committee Member; Christopher B. Mayhorn, Committee Member; Jason A. Osborne, Committee Member. Software engineers undertaking maintenance tasks often work on unfamiliar code, requiring them to search for, relate, and collect information relevant to the maintenance task. The goal of this research is to create theories that describe the nature of the information sought by developers and how that information is used during two types of maintenance tasks: debugging (corrective maintenance) and enhancement (perfective maintenance). To meet this goal, six hypotheses are investigated regarding the navigation activities developers undertake to identify, relate, and collect information during software maintenance tasks. These hypotheses were investigated using data from two empirical studies of 18 developers performing enhancement and debugging tasks on three Java programs. Video recordings were used to annotate user interaction logs, creating a history of user activities during the maintenance tasks. These data described the activities developers undertook, the source code elements they examined, and the amount of time they spent on various activities. The data were analyzed using a combination of statistical and qualitative methods to compare the different methods of searching for and collecting information relevant to the software maintenance tasks. Analysis showed that the navigation styles developers used to find information (static navigation, normal navigation, and keyword searching) differ significantly in the amount of time spent collecting information. Furthermore, static navigation episodes were significantly shorter in duration than keyword searches. 
No statistically significant differences were observed in the amount of time developers spent collecting information in debugging versus enhancement tasks. During debugging tasks, developers focused on information that controlled the state and behavior of a particular element. During enhancement tasks, developers focused on how an element used other elements, rather than how an element is used by other elements. The analysis of the code relationships motivated further study of the nature of the information gathered by developers in enhancement and debugging tasks. The information read by developers (source code, Java documentation, and web search results) was analyzed with respect to its content, how it related to the task and the code elements under investigation, and how it was used. This qualitative analysis led to the following new theories on software maintenance: Theory 1: Developers are less likely to progress toward completing a maintenance task when the correct implementation of new code or correct editing of existing code requires logical connections and/or evaluations of other code elements. Theory 2: New code that has been duplicated from another source acts as a self-reference, thereby requiring developers to make fewer logical evaluations and increasing the likelihood that the duplicated information will be successfully used in completing a task. Theory 3: Specific software behavior is often identified through analysis of a sequence of events and the control structures that propagate those events through the system, whereas a functional concept is often identified through comparisons, similarities, and references to existing functionality. These theories are new contributions to the field of software maintenance and program comprehension theory. 
These theories can be further evaluated to help guide the creation of tools and strategies for assisting developers in finding relevant information during software maintenance tasks. One such tool, the Mimec Spotlight, has been proposed and evaluated in this research.
- A Large Scale Evaluation of Commercially Available Biological Filters for Recirculating Aquaculture Systems(2008-09-25) Guerdat, Todd; John J. Classen, Committee Member; Jason A. Osborne, Committee Member; Thomas M. Losordo, Committee Chair
- Measuring the Effectiveness of Advanced Traveler Information Systems (ATIS)(2008-12-19) Hu, Hyejung; Nagui M. Rouphail, Committee Chair; Billy M. Williams, Committee Co-Chair; John R. Stone, Committee Member; Jason A. Osborne, Committee Member; Xuesong Zhou, Committee Member. The objective of this study was to develop valid methodologies for addressing several limitations of current Advanced Traveler Information Systems (ATIS) evaluation tools. The study focused mainly on three enhancements. First, the queue propagation algorithm of the selected tool (DYNASMART-P) was modified to model traffic congestion more realistically; the author proposed adding transfer flow capacity and backward gated flow constraints to calculate the transfer flow rate more accurately. Second, the study modeled the natural diversion behavior of drivers who do not receive traveler information. Lastly, statistical models of user responses to traveler information were developed using binary and multinomial logit methods to understand and model the relationship between drivers’ socio-economic characteristics and their responses to traveler information. Of these three enhancements, the first two (improved queue propagation and natural diversion behavior algorithms) were implemented in the enhanced model. The user behavior models, however, were not implemented because their predictive power was not acceptable, owing to limitations in the data set. The enhanced model was applied to two case studies: 1) verifying the capabilities of the model under a recurring bottleneck scenario on the I-40 corridor in the Triangle region of North Carolina, and 2) demonstrating the capability of the enhanced model to measure the effectiveness of U-Transportation (similar to the Vehicle Infrastructure Integration [VII] program in the USA), which has been under development in Korea. 
The first case study showed that the improved queue propagation algorithm simulated the bottleneck queue much more closely to the real data than the original model did. The simulation results also indicated that the actual diversion rate under recurring congestion in the study network was very low. The second case study demonstrated that the enhanced model can evaluate the network impact of new advanced technology in flooding situations and the effect of the market penetration of the communication technology.
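A binary logit model of the kind used for the driver-response analysis maps a linear combination of driver characteristics to a diversion probability. A minimal sketch; the coefficients and covariates below are hypothetical illustrations, not estimates from the study's data:

```python
import math

def logit_prob(x, beta):
    """P(response = 1 | x) under a binary logit model:
    1 / (1 + exp(-x . beta))."""
    z = sum(xi * bi for xi, bi in zip(x, beta))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients: intercept, route familiarity, received-info flag.
beta = [-1.0, 0.8, 0.5]
p = logit_prob([1.0, 1.0, 1.0], beta)   # a familiar driver who got information
print(round(p, 3))  # 0.574
```

The multinomial logit case generalizes this by computing one such linear score per response alternative and normalizing with a softmax over the alternatives.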
- A New Application for Brookfield Viscometers: Viscoelastic Property Determination(2006-07-29) Tanjore, Deepti; Daniel H. Willits, Committee Co-Chair; Jason A. Osborne, Committee Member; Christopher R. Daubert, Committee Chair. Viscoelastic properties are traditionally measured using sophisticated and expensive instruments. The Brookfield YR-I is an affordable instrument developed primarily to measure the yield stress of materials with a vane attachment. In recent years, vane attachments have gained popularity owing to various advantages, including elimination of wall slip, minimal disturbance to the sample, quick single-point determination, and easy fabrication and cleaning. Prior research has established that the vane can be used to measure the elastic shear moduli of materials. This research project applies the Brookfield YR-I rheometer to the measurement of viscoelastic properties. Different concentrations of gelatin and polyacrylamide gels served as model systems for viscoelastic and elastic materials, respectively. The concepts developed were applied to the torque-time response obtained for these model systems from the Brookfield YR-I. The data compared favorably with viscoelastic data obtained from oscillatory testing with a stress-controlled rheometer. The results helped establish a protocol for measuring viscoelastic properties with the Brookfield YR-I. Certain commercial products were then tested using the protocol, but disagreement between the instruments was observed for some materials. The effect of cup size and the position of the assumed Newtonian line are potential reasons for the disparity, and future work should focus on refining these aspects.
- Processing Techniques for the Improvement of Peanut Meal(2010-04-21) Seifert, Lauren Elaine; Jason A. Osborne, Committee Member; Timothy H. Sanders, Committee Member; Jack P. Davis, Committee Chair. Peanut meal is the non-food-grade material that remains after the extraction of oil from peanuts (Arachis hypogaea L.). Oil is extracted from peanuts considered unsuitable for human consumption due to discolored, broken, or aflatoxin-contaminated seed. Peanut meal is a rich source of protein (45-60%) and could be used in food products if aflatoxin, an unavoidable contaminant in the peanut crop, were eliminated. Recent trends show that plant proteins are increasingly used as a less expensive alternative to animal proteins for fulfilling basic nutritional needs. Technologies are needed to expand the applications of this commercially available material, which is currently sold at low economic value as either animal feed or fertilizer, depending on its aflatoxin concentration. The objectives of this research were to improve the value of peanut meal 1) through enzymatic hydrolysis to enhance protein functional and nutritional properties, and 2) by sequestering the aflatoxin from contaminated meal using a non-nutritive adsorbent. Defatted peanut meal dispersions (10% w/w) were hydrolyzed with commercial proteases (Alcalase, pepsin, and Flavourzyme), and the soluble fractions (hydrolysates) were collected for subsequent testing. The degree of hydrolysis from 3-240 min ranged from approximately 20-60% for Alcalase, 10-20% for pepsin, and 10-70% for Flavourzyme. Low-molecular-weight peptides (<14 kDa) were observed in all hydrolysates as determined by SDS-PAGE. Results indicated that total soluble material increased by a minimum of 30% at 240 min regardless of protease, and the antioxidant capacity of all hydrolysates was greater than that of unhydrolyzed controls. 
Specifically, Alcalase hydrolysates had the greatest antioxidant capacity and total soluble material. These results suggest that peanut meal could be made more valuable via enzymatic hydrolysis to create small peptides with improved functional and nutritional properties. A method of reducing aflatoxin within peanut meal was also investigated, as it is imperative to have <20 ppb for peanut products intended for human applications. The in vitro efficacy of a sodium bentonite clay, Astra-Ben 20™ (AB20), in sequestering aflatoxin from contaminated meal was studied. Aqueous peanut meal dispersions (10% w/w) were adjusted to pH 2 and 8 and randomly assigned to one of three treatments: control (no clay), 0.2% AB20 (w/w), or 2% AB20 (w/w). Results revealed that the addition of clay significantly lowered the aflatoxin concentration in the soluble and insoluble fractions to a level permissible by the FDA for use in food products. The pH of the soluble samples did not significantly affect the ability of the clay to bind aflatoxin. Soluble fractions subjected to the 2% AB20 treatment had significantly lower protein solubility and total soluble material than their respective pH 2 and 8 controls. This research provides an avenue not only for high-protein peanut meal soluble fractions to be used safely in human food applications, but also for the insoluble fractions to be sold as animal feed at a higher price due to the decreased level of detectable aflatoxin.
- A Rapid Assessment Tool for Determining Uniformity of Irrigation-Type Manure Application Systems(2009-08-14) Liu, Zhengzhong; Garry L. Grabow, Committee Chair; Rodney L. Huffman, Committee Co-Chair; Jason A. Osborne, Committee Member. Due to the extent of the animal industry in North Carolina, the treatment of liquid manure is of great importance. Since liquid manure contains plant nutrients, it is usually applied to agricultural land through irrigation systems as a substitute or partial substitute for commercial fertilizer. However, land application of liquid manure must follow guidelines in order to achieve economic goals as well as to protect the environment. Current guidance suggests that calibration of land application equipment be performed once every three years by the “catch can” method, a time- and labor-consuming procedure. The research goals of this project were to investigate the relationship between liquid manure application uniformity and application system hydraulic measurements, and then to produce tables of predicted application uniformity for field use. Trials were performed to test manure application uniformity for different sprinkler types, nozzle types, gun models, nozzle diameters, system types, site types, and nozzle pressures. Wind speed and direction during the trials were monitored. Different overlaps were achieved by superposition, allowing assessment of multiple sprinkler overlap extents from one trial. The overlaps used for traveling gun systems reflected sprinkler (lane) spacings of 70%, 75%, 80%, 85%, and 90% of the wetted diameter, and those used for stationary systems were 50%, 55%, 60%, 65%, and 70% of the wetted diameter. Including superpositioned data, there were 722 records. 
Regression models were constructed from the trial data through processes of main-effect selection, collinearity checking, interaction-term and quadratic-term selection, parameter estimation, and residual normality testing. The variables “sprinkler spacing in percent of wetted diameter” and “wind speed” appear as quadratic terms in most of the models. Tables were made using predictions from the selected models. The model for stationary systems performs well, with an adjusted R2 of 0.75, and its tables of application uniformity show the expected relationships between application uniformity and the predictive factors. The model for traveling gun systems does not perform as well; its adjusted R2 is only 0.14, though this is not unexpected given the sampling variance of the “catch can” method.
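The adjusted R2 figures quoted (0.75 and 0.14) penalize the ordinary R2 for the number of predictors in the model. A minimal sketch of that adjustment; the example n and p below are hypothetical, not the study's actual sample size and model:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R-squared: 1 - (1 - R^2)(n - 1)/(n - p - 1),
    for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Hypothetical: a raw R^2 of 0.76 with 100 records and 5 model terms.
print(round(adjusted_r2(0.76, 100, 5), 3))  # 0.747
```

Because the penalty grows with p, adding the interaction and quadratic terms described above only raises adjusted R2 when they explain enough additional variance to justify the extra parameters.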
- Sindbis virus interaction with cells(2008-07-11) Wang, Gongbo; Keith Weninger, Committee Co-Chair; Jason A. Osborne, Committee Member; Paul Wollenzien, Committee Member; Stuart Maxwell, Committee Member; Dennis Brown, Committee Chair
- Statistical Methods in Genetic Association Studies(2007-08-01) Gao, Xiaoyi; Dahlia M. Nielsen, Committee Co-Chair; Bruce S. Weir, Committee Chair; Philip Awadalla, Committee Member; Jason A. Osborne, Committee Member. Population structure is a serious confounding factor in genetic association studies. It may lead to false positive results or failure to detect true association. We propose a hierarchical clustering algorithm, AW-clust, for using single nucleotide polymorphism (SNP) genetic data to assign individuals to populations. We show that the algorithm assigns sample individuals highly accurately to their corresponding ethnic groups (CEU, YRI, CHB+JPT) in our tests using HapMap SNP data, and that it is also robust to admixed populations when tested on Perlegen SNP data. Moreover, it can detect fine-scale population structure as subtle as that between Chinese and Japanese by using genome-wide high-diversity SNP loci. Genotyping errors exist in most genetic data and can influence the biological conclusions of a study. A simple detection method is to conduct the Hardy-Weinberg equilibrium (HWE) test in population-based association studies. We investigated the power of the HWE test for genotyping error detection under current genotyping technologies. Multiple testing is a challenging issue in genetic studies using SNPs that are in linkage disequilibrium (LD) with each other. Failure to adjust for multiple testing appropriately may produce excess false positives or overlook true positive signals. We propose a new multiple testing correction method, CLDMeff, for association studies using SNP markers. It is shown to be simpler and more accurate than recently developed methods, and comparable to permutation-based correction on both simulated and real data. The efficiency and accuracy of the CLDMeff method make it an attractive choice for multiple testing correction when there is high intermarker LD in the SNP dataset.
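The HWE test mentioned above compares observed genotype counts at a biallelic SNP with the counts expected from the estimated allele frequencies. A minimal sketch of the one-degree-of-freedom chi-square version (illustrative only; the dissertation may use a different variant, such as an exact test):

```python
def hwe_chisq(n_aa, n_ab, n_bb):
    """Chi-square statistic (1 df) for Hardy-Weinberg equilibrium
    from observed genotype counts at a biallelic SNP."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # frequency of allele A
    q = 1.0 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(hwe_chisq(25, 50, 25))           # 0.0: counts exactly in HW proportions
print(hwe_chisq(40, 20, 40))           # 36.0: strong heterozygote deficit
```

A large statistic flags departure from HWE, which, as the abstract notes, can signal genotyping error rather than true association.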
- Variable Selection For Multivariate Smoothing Splines With Correlated Random Errors(2008-07-24) Demirhan, Eren; Daowen Zhang, Committee Member; Jason A. Osborne, Committee Member; Sujit K. Ghosh, Committee Member; Hao Helen Zhang, Committee Chair
