NC State University Libraries

Browsing by Author "Kazufumi Ito, Committee Member"

Now showing 1 - 18 of 18
    Analysis and Computation for a Fluid Mixture Model of Tissue Deformations
    (2008-06-17) Jiang, Qunlei; Xiaobiao Lin, Committee Member; Kazufumi Ito, Committee Member; Hien Tran, Committee Member; Zhilin Li, Committee Chair; Sharon R. Lubkin, Committee Co-Chair
    A fluid mixture model of tissue deformations in one and two dimensions has been studied in this dissertation. The model is a mixed system of nonlinear hyperbolic and elliptic partial differential equations with interfaces. Both theoretical and numerical analyses are presented. We found the relationship between the physical parameters and the resulting pattern of tissue deformations via linear stability analysis. Several numerical experiments support our theoretical analysis. The solution of the system exhibits non-smoothness and discontinuities at the interfaces. Conventional high-order finite difference methods (FDM), such as the WENO scheme and the TVD Runge-Kutta method for the hyperbolic equation, coupled with the central FDM for the elliptic equation, give spurious oscillations near the interfaces in our problem. By enforcing the jump conditions across the interfaces, our approach, the immersed interface method (IIM), eliminates non-physical oscillations, improves the accuracy of the solution, and maintains the sharp interface as time evolves. The IIM has been applied to solve a one-dimensional linear advection equation with discontinuous initial conditions. By building the jump conditions into a conventional finite difference method, the Lax-Wendroff method, solutions of second-order accuracy are observed. The IIM showed its robustness in solving the linear advection equation with nonhomogeneous jump conditions across the moving interface. The two-dimensional fluid mixture model has been derived asymptotically from the three-dimensional model so that the thickness of the gel is taken into account. Many numerical examples have been completed using Clawpack, and qualitatively reasonable numerical solutions have been obtained.
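For readers unfamiliar with the base scheme, here is a minimal sketch of the conventional Lax-Wendroff step for the 1-D linear advection equation u_t + a u_x = 0, with smooth data and periodic boundaries; it carries none of the IIM jump corrections, and all parameter values are illustrative rather than taken from the dissertation:

```python
import numpy as np

def lax_wendroff(u0, a, dx, dt, steps):
    """Advance u_t + a*u_x = 0 with the Lax-Wendroff scheme (periodic BCs)."""
    c = a * dt / dx          # Courant number; stability requires |c| <= 1
    u = u0.copy()
    for _ in range(steps):
        up = np.roll(u, -1)  # u_{j+1}
        um = np.roll(u, 1)   # u_{j-1}
        u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)
    return u

N = 200
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
a, c = 1.0, 0.5
dt = c * dx / a
steps = int(round(2.0 * np.pi / (a * dt)))  # one full period: u should return to u0
u0 = np.sin(x)
u = lax_wendroff(u0, a, dx, dt, steps)
err = np.max(np.abs(u - u0))
```

With a smooth profile the scheme is second-order accurate; it is precisely at discontinuous interfaces, where this stencil oscillates, that the jump-condition corrections described in the abstract become necessary.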
    Application of Perturbation Methods to Modeling Correlated Defaults in Financial Markets
    (2007-03-21) Zhou, Xianwen; Kazufumi Ito, Committee Member; Tao Pang, Committee Member; Jean-Pierre Fouque, Committee Chair; John Seater, Committee Member
    Recent years have seen a rapidly growing market for credit derivatives. Among these traded credit derivatives, a growing interest has been shown in multi-name credit derivatives, whose underlying assets are a pool of defaultable securities. For a multi-name credit derivative, the key is the default dependency structure among the underlying portfolio of reference entities, rather than the individual term structure of default probabilities for each single reference entity, as in the case of a single-name derivative. So far, however, default dependency modeling remains the most demanding open problem in the pricing of credit derivatives. The research in this dissertation models the default dependency with the aid of the perturbation method, which was first proposed by Fouque, Papanicolaou and Sircar (2000) as a powerful tool for pricing options under stochastic volatility. Specifically, after a theoretical result regarding the approximation accuracy of the perturbation method and an application of this method to pricing American options under stochastic volatility by a Monte Carlo approach, a multi-dimensional Merton model under stochastic volatility is studied first, followed by the multi-dimensional generalization of the first-passage model under stochastic volatility, and finally by a copula perturbed from the standard Gaussian copula.
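As context for the Monte Carlo pricing mentioned above, a minimal constant-volatility baseline (no stochastic-volatility perturbation; all parameter values are illustrative) simulates the risk-neutral terminal price and checks it against the Black-Scholes closed form:

```python
import numpy as np
from math import log, sqrt, exp, erf

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes price of a European call (closed form)."""
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n, seed=0):
    """Plain Monte Carlo estimate of the same price via terminal-value sampling."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

price_cf = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
price_mc = mc_call(100.0, 100.0, 0.05, 0.2, 1.0, n=200_000)
```

The stochastic-volatility models studied in the dissertation replace the constant sigma with a stochastic process and correct this baseline perturbatively.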
    Curve and Polygon Evolution Techniques for Image Processing
    (2005-07-24) Bozkurt, Gozde; Gianluca Lazzi, Committee Member; Kazufumi Ito, Committee Member; Wesley Snyder, Committee Member; Hamid Krim, Committee Chair
    In this digital era, huge amounts of digital image data are collected on a daily basis. The collected image data are stored for subsequent processing and use in a wide variety of applications. For this purpose, it is often important to extract relevant information from these data accurately and precisely. In computer vision applications, for instance, an important goal is to understand the contents of an image and automatically gain an understanding of a scene, implying the extraction and recognition of an object. This task is, however, greatly complicated because the acquired image data are often noisy and because target objects and backgrounds bear textural variations. As a result, there is a strong demand for reliable and automated image processing algorithms for image smoothing, textured image segmentation, object extraction, tracking, and recognition. The objective of this thesis is to develop image processing algorithms that are efficient, statistically robust, and sufficiently general to account for noise and textural variations in images, and that can extract and provide compact and useful descriptions of target objects in images for object recognition and tracking purposes. The main contribution of the thesis is the development of image processing algorithms based on the theory of curve evolution, with connections to information theory and probability theory. These connections form the basis for extracting a compact object description in the form of a polygonal contour. One contribution is the development of a new class of curve evolution equations designed to preserve prescribed polygonal structures in an image while removing noise.
In conjunction with these flows, a local stochastic formulation of a well-studied curve evolution equation, namely the geometric heat equation, provides an alternative microscopic as well as macroscopic view, which in turn led to our proposal of flows that vanish along pre-defined directions. Under these flows, the limiting shape of a curve is a polygon, pre-specified by the form and the parameters of the specific flow. The second contribution of the thesis is the development of a new active contour model which merges the desirable polygonal representation of an object directly into the image segmentation procedure by adapting an information-theoretic measure into an active contour framework, with an ultimately unsupervised texture segmentation goal. The polygon-propagating models we develop can capture texture boundaries more reliably than continuous active contour models because the evolution of an active polygon vertex depends on an overall speed function integrated along its two adjacent polygon edges rather than on pointwise measurements along continuous contour points. In this way, higher-order statistics, which provide more adapted information than first- and second-order ones, are captured through both the nature of the information-theoretic criterion we utilize and the nature of the polygon-evolving ordinary differential equations we propose. A supplementary contribution is a new global polygon regularizer algorithm which uses electrostatics principles. The final contribution of the thesis is the development of a simple and efficient boundary-based object tracking algorithm well adapted to polygonal objects. This extends the second contribution of the thesis; the key idea is to track relatively few vertices together with their corresponding edges, which in turn yields bookkeeping simplicity and hence efficiency.
The parsimonious set of features provided by the three methods developed in this thesis is useful for object-based description and recognition tasks and, in addition, may provide a viable solution for a parsimonious and economical representation of large data sets (e.g., a contour represented by a few landmarks).
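To make the geometric heat (curve-shortening) flow mentioned above concrete, a naive discrete analogue — not the polygon-preserving flows of the thesis — moves each sampled vertex toward the midpoint of its neighbours, which shrinks the total length of the closed curve:

```python
import numpy as np

def smooth_step(P, lam=0.1):
    """One explicit step of a discrete curve-shortening flow on a closed curve:
    each vertex moves toward the midpoint of its two neighbours."""
    return P + lam * (np.roll(P, -1, axis=0) + np.roll(P, 1, axis=0) - 2.0 * P)

def perimeter(P):
    """Total edge length of the closed polygonal curve P (rows are vertices)."""
    return np.sum(np.linalg.norm(np.roll(P, -1, axis=0) - P, axis=1))

# A wavy circle sampled at 64 vertices (illustrative shape).
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
P = np.c_[np.cos(t) + 0.2 * np.cos(5 * t), np.sin(t) + 0.2 * np.sin(5 * t)]
per0 = perimeter(P)
for _ in range(50):
    P = smooth_step(P)
per1 = perimeter(P)
```

Under the thesis's modified flows the limiting shape is a prescribed polygon rather than a point; this sketch only illustrates the smoothing behaviour of the unmodified flow.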
    An Electromagnetic Interrogation Technique Utilizing Pressure-dependent Polarization
    (2002-05-28) Raye, Julie Knowles; Kazufumi Ito, Committee Member; H. T. Banks, Committee Chair; Hien T. Tran, Committee Member; Michael Shearer, Committee Member
    This dissertation focuses on an interrogation technique that uses traveling acoustic wavefronts as a virtual reflector for an oncoming electromagnetic wave. Electromagnetic interrogation techniques in general have wide applicability in practical problems, and this technique in particular enjoys that potential. We begin by developing a viable model for pressure-dependent orientational (Debye) polarization. We then incorporate it into a one-dimensional Maxwell system to describe the electromagnetic/acoustic interaction. This system may be generalized to include a wider class of electromagnetic behavior; we establish well-posedness, enhanced regularity, and convergence results for this general system. Under the framework provided by the mathematical theory, we obtain computational results for sample forward and inverse problems relating to the interrogation technique. Our numerical algorithms for the forward problem involve finite difference approximations in time and finite element approximations with piecewise linear basis elements in space. Solving the inverse problem entails least squares minimization using a gradient-free Nelder-Mead optimization routine. Finally, as a first step in developing a model in which the pressure wave may be modulated by the electromagnetic wave (unlike the one-way coupling in the model presented here), we consider the system describing an acoustic wave propagating through a layered medium. We derive a weak formulation for this system and present computational findings.
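The inverse-problem step — gradient-free least-squares minimization with Nelder-Mead — can be sketched on a toy forward model; the damped oscillation and its parameters below are made up for illustration and merely stand in for the Maxwell forward solve:

```python
import numpy as np
from scipy.optimize import minimize

def forward(theta, t):
    """Toy forward model: damped oscillation with unknown (decay, frequency)."""
    decay, freq = theta
    return np.exp(-decay * t) * np.cos(freq * t)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)
true = np.array([0.5, 3.0])
data = forward(true, t) + 0.01 * rng.standard_normal(t.size)

# Gradient-free least-squares fit, as in a Nelder-Mead inverse problem.
cost = lambda th: np.sum((forward(th, t) - data) ** 2)
res = minimize(cost, x0=[0.2, 2.5], method="Nelder-Mead")
```

Nelder-Mead is attractive here for the same reason as in the dissertation: it needs only cost evaluations, never gradients of the forward solver.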
    A Family of Higher-Order Implicit Time Integration Methods for Unsteady Compressible Flows.
    (2011-01-14) Segawa, Hidehiro; Ashok Gopalarathnam, Committee Chair; Hong Luo, Committee Chair; Kazufumi Ito, Committee Member; Hassan Hassan, Committee Member
    Finite Element Methods for Interface Problems with Locally Modified Triangulations
    (2009-08-04) Xie, Hui; Kazufumi Ito, Committee Member; Xiao-Biao Lin, Committee Member; Sharon Lubkin, Committee Member; Zhilin Li, Committee Chair
    Interface problems arise in many applications such as heat conduction in different materials. The partial differential equations (PDEs) that describe these applications have domains that consist of different subdomains. The different subdomains can have complicated shapes or can have different properties. For instance, different subdomains can represent different phases of the same material, such as water and ice. The coefficients of the PDEs can be discontinuous across the interfaces of the subdomains, and the source terms can be singular. Due to these irregularities, the solutions to the PDEs can be nonsmooth or even discontinuous. Here we restrict ourselves to interface problems that do not depend on time and can be expressed in terms of elliptic or elasticity PDEs. We present finite element methods (FEMs) for elliptic and elasticity problems with interfaces. The FEMs are based on body-fitted meshes with a locally modified triangulation. An FEM based on a body-fitted mesh uses a triangulation that is aligned with the interfaces. However, for complicated interfaces it can be difficult and expensive to generate such triangulations. That is why we use a locally modified triangulation based on Cartesian meshes. We first form a Cartesian mesh, then move the grid points near the interfaces onto the interfaces. This leads to a locally modified triangulation. We use the standard FEM with the locally modified triangulation to solve the elliptic and elasticity problems with interfaces. By FEM theory, the method is second order accurate in the infinity norm for piecewise smooth solutions. We present some numerical examples to show the second order accuracy of the method. We also present a new second order finite difference method that does not require computing the curvature. At points away from the interface we approximate the PDE using the standard 5-point scheme.
At points where the interface crosses the 5-point stencil, we still use the 5-point scheme by introducing ghost values for the grid points on the other side of the interface. The price is that we need to find an equation for each ghost value. We use the interface conditions, the jump in either the Dirichlet or the Neumann boundary conditions, to form the equations for the ghost values and complete the linear system. We also present some numerical examples to show the second order accuracy of the method.
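Away from any interface, the standard 5-point scheme referred to above is just the discrete Laplacian. A self-contained sketch for −Δu = f on the unit square with a smooth manufactured solution (no interface and no ghost values; the grid size is illustrative) shows the expected second-order accuracy:

```python
import numpy as np

# Standard 5-point finite-difference scheme for -Laplacian(u) = f on the unit
# square, using u = sin(pi x) sin(pi y) as a smooth manufactured solution.
n = 20                       # grid spacing h = 1/n
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = 2.0 * np.pi**2 * exact   # -Laplacian of the exact solution

m = n - 1                    # interior points per direction
A = np.zeros((m * m, m * m))
b = np.zeros(m * m)
for i in range(m):
    for j in range(m):
        k = i * m + j
        A[k, k] = 4.0        # centre of the 5-point stencil
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ii, jj = i + di, j + dj
            if 0 <= ii < m and 0 <= jj < m:
                A[k, ii * m + jj] = -1.0
            # boundary neighbours contribute zero (homogeneous Dirichlet data)
        b[k] = h**2 * f[i + 1, j + 1]

u = np.linalg.solve(A, b).reshape(m, m)
err = np.max(np.abs(u - exact[1:-1, 1:-1]))
```

The interface method in the abstract keeps exactly this stencil and only amends the right-hand side through the ghost-value equations.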
    Image Segmentation/Registration: a Variational Framework for 2-D and 3-D Applications
    (2009-01-09) Chen, Ping-Feng; Hamid Krim, Committee Chair; Griff Bilbro, Committee Member; Gianluca Lazzi, Committee Member; Kazufumi Ito, Committee Member
    Segmentation occupies the middle level of a computer vision system, extracting boundary features from images. An accurate extraction of features will lead to the success of later, higher-level processes such as classification and recognition. Registration, on the other hand, is intimately intertwined with segmentation. An accurate localization of edges, which are used as feature points, may increase the performance of registration. Whenever a single modality is not sufficient for segmentation and resort to multi-spectral images is needed, perfect alignment of these multi-spectral images will also ease the segmentation task. In this thesis we propose segmentation and registration methods corresponding to different real applications. In the first, biomedical application we propose a constrained Mumford-Shah type energy functional incorporating an information-theoretic view and tuning weights. This model characterizes higher-order statistical properties of the data and gives a probabilistic flavor to our segmentation. It successfully segmented T1-maps and T1-weighted images in both 2-D and 3-D. Validation against experts' manual segmentations also shows that our method outperforms most other techniques. Moreover, we propose a joint radiofrequency (RF)-inhomogeneity calibration method to correct the non-uniformity of the RF field for accurate T1-map generation. In the second application we propose a multi-phase joint segmentation and registration technique (MPJSR) for mid-range layered imagery. Our method in particular may bring the objects of interest in a pair of layered images into perfect alignment and delineate their boundaries simultaneously. Furthermore, based on our technique, we tackle the tracking problem for layered videos. By calculating a constrained optical flow between consecutive frames, a prediction of the contour location in the next frame may be made to expedite segmentation and improve its performance.
In the third application we take a step beyond segmentation and registration. The study of canonical views of multiple range images starts from the assumption that segmentation and registration among the range images are complete. We propose two methods, minimum description length (MDL) and compressive sensing, to determine the canonical views of a 3-D object.
    MIMO Beamforming With Mutual Coupling, Limited Feedback and Coordination
    (2009-12-04) Dong, Yuhan; Kazufumi Ito, Committee Member; Mihail L. Sichitiu, Committee Member; J. Keith Townsend, Committee Member; Brian L. Hughes, Committee Chair
    Multi-input, multi-output (MIMO) techniques use multiple antennas at both the transmitter and receiver to improve the performance of wireless communications systems over multipath fading channels. In recent years, MIMO techniques that employ transmit beamforming have been adopted in several new and emerging standards for situations where channel state knowledge is available at the transmitter. Most existing studies of MIMO beamforming assume that perfect channel knowledge is available at both the transmitter and receiver, and that the antenna elements in both the transmit and receive arrays are spaced sufficiently far apart so as to be essentially uncoupled. In practice, however, constraints on the physical size of antenna arrays may require elements to be spaced close together, leading to antenna coupling and signal correlation. The capacity of the feedback link from the receiver to the transmitter may also be limited, so that channel knowledge is necessarily imperfect at the transmitter. These challenges become all the more difficult in multiuser scenarios, when efficient coordination among several transmitters is required. In this dissertation, we consider the analysis and design of MIMO beamforming techniques with antenna mutual coupling, limited feedback and multiuser coordination. We begin by introducing a circuit model of a compact wireless MIMO transceiver that incorporates the effects of antenna mutual coupling. We then use this model to derive new MIMO beamforming strategies appropriate for both single-user and multiuser systems. Through numerical examples, we illustrate the performance of the proposed beamforming techniques and their dependence on the properties of the antenna arrays, matching networks, channel estimation errors, and channel state feedback. Finally, we propose new asymmetric-rate coordinated beamforming strategies which improve both the individual rates of each user and the sum-rate subject to zero-interference constraints. 
These asymmetric-rate strategies can also be combined using time-division to create new, higher-rate symmetric beamforming strategies.
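With perfect channel knowledge — the idealized baseline that the dissertation relaxes via limited feedback and coupling — single-user transmit beamforming reduces to a singular value decomposition: the optimal unit-norm weights are the dominant right singular vector of the channel matrix, and the achieved array gain is the largest singular value squared. A sketch with an arbitrary random channel:

```python
import numpy as np

rng = np.random.default_rng(2)
Nt, Nr = 4, 2   # transmit / receive antennas (illustrative sizes)
# Rayleigh-fading channel matrix with unit-variance complex entries.
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2.0)

# Maximum-ratio transmission: beamform along the dominant right singular vector.
U, s, Vh = np.linalg.svd(H)
w = Vh[0].conj()                    # unit-norm transmit weight vector
gain = np.linalg.norm(H @ w) ** 2   # array gain equals s[0]**2
```

Mutual coupling and quantized feedback both perturb H away from this ideal, which is what motivates the circuit model and limited-feedback strategies in the abstract.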
    Model Development and Control Design for Atomic Force Microscopy
    (2004-09-09) Hatch, Andrew Graydon; Ralph C. Smith, Committee Chair; Kazufumi Ito, Committee Member; Zhilin Li, Committee Member; Hien T. Tran, Committee Member
    The development of energy-based models and model-based control designs necessary to achieve present and projected applications involving atomic force microscopy is investigated. Applications include real-time product diagnostics or monitoring of biological processes, nanoelectromechanical systems (NEMS), and employment of atomic force microscope (AFM) technology for spintronics. A crucial component in the AFM design is the piezoceramic (PZT)-based stage used to position the sample. Whereas PZT actuators provide the broadband and extremely high set-point capabilities required by the AFM stages, they also exhibit frequency-dependent hysteresis and constitutive nonlinearities. To characterize the field-polarization relation in PZT, low-order macroscopic models are constructed based on a combination of energy analysis at the mesoscopic level and stochastic homogenization techniques. To account for nonuniformity and inhomogeneities in the material, local coercive field values are assumed to be distributed. Due to interactions among the dipoles, the effective field is also assumed to be distributed. Previous work has employed specific functions to describe these distributions. However, the fact that these choices are not based on energy considerations motivates the use of general densities. The dynamics of the actuator must be incorporated as well. A rod model is suitable for a stacked actuator whose cross-section is small compared to its length. The equation of motion for the rod can be derived using force balancing, with boundary conditions determined from the fact that the rod is fixed at one end and pushes against the stage at the other. At low frequencies, the hysteresis and constitutive nonlinearities inherent in PZT can be accommodated through PID or robust control designs.
However, at the higher frequencies required by the previously outlined applications, increasing noise-to-data ratios and diminishing high-pass characteristics of control filters preclude a sole reliance on feedback laws to eliminate hysteresis. This motivates the development of control designs that incorporate and approximately compensate for hysteresis through model inverses employed as filters to linearize transducer responses for linear robust control design and PID control design. The inverse models are also tested in an open loop control experiment on a PZT stacked actuator.
    Model-Based Robust Control Designs for High Performance Magnetostrictive Transducers
    (2003-09-03) Nealis, James Matthew; Fen Wu, Committee Member; Kazufumi Ito, Committee Member; Hien Tran, Committee Member; Ralph C. Smith, Committee Chair
    The increasing employment of smart structures in industrial processes necessitates the study of materials exhibiting constitutive nonlinearities and hysteresis. The high-performance and high-speed demands of such processes can often be met by transducers utilizing piezoceramic, shape memory alloy, or magnetostrictive elements. Here, the focus is placed on magnetostrictive materials. These materials provide several benefits, such as the ability to generate large forces and strains and to provide precision placement. However, to achieve the full potential of magnetostrictive materials, models and control laws which accommodate the inherent nonlinearities and hysteresis must be employed. An emphasis has been placed on the design of models for magnetostrictive transducers and control strategies that are implementable in real time and incorporate realistic operating conditions. To this end, models of the nonlinearities and hysteresis exhibited by magnetostrictive materials are developed considering not only accuracy but also computational efficiency and the existence of an inverse or partial inverse. To attenuate the nonlinear and hysteretic behaviors, we employ the inverses of the material models as filters of the input to the transducer. The models describing the nonlinearities and hysteresis for the smart materials contain several material-dependent parameters which must be identified in order to effectively utilize the resulting inverse compensators. A nonlinear adaptive parameter estimation algorithm is developed to identify nonlinearly occurring parameters which may not be identified by physical measurements or may be slowly varying. Once an inverse filter has been developed and the material parameters identified, feedback control laws are designed to meet the performance specifications. A successful controller must provide accurate tracking of a reference signal while accommodating the hysteretic behavior and other external disturbances such as sensor noise.
Several initial feedback control methods are considered to motivate the investigation of robust control designs. Robust techniques including H₂ and H∞ optimal control as well as multiple-objective control designs are employed to control a magnetostrictive transducer, and the performance is illustrated through simulations.
    Modeling, Analysis, and Estimation of an in vitro HIV Infection Using Functional Differential Equations
    (2002-09-05) Bortz, David Matthew; H. Thomas Banks, Committee Chair; Marie Davidian, Committee Member; Kazufumi Ito, Committee Member; Hien T. Tran, Committee Member
    This dissertation focuses on developing mathematical and computational tools for use as an aid in understanding the cellular population dynamics of an in vitro HIV experiment. We carefully develop a functional differential equation model which incorporates mathematical mechanisms that account for both the biological delays and the parameter uncertainty inherent in the system. We present the theoretical foundations for our methodology which then allow us to develop a numerical approximation scheme and perform parameter identifications (even on the delay distributions) and sensitivity analyses. We summarize the results of a numerical investigation of the delays followed by the results from the nonlinear least squares inverse problem. We then present a statistical significance argument for the importance of the delay mechanism as well as the results of a sample sensitivity analysis of the system with respect to select parameters.
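The delay mechanism central to the model above can be illustrated with the simplest numerical treatment of a functional differential equation: fixed-step Euler with a lookup into the solution history for a single discrete delay. The dissertation's model uses distributed delays and is far richer; the equation and parameters below are generic stand-ins:

```python
import numpy as np

def euler_dde(f, history, tau, t_end, dt):
    """Fixed-step Euler for x'(t) = f(x(t), x(t - tau)), constant history on [-tau, 0]."""
    lag = int(round(tau / dt))       # delay measured in whole time steps
    n = int(round(t_end / dt))
    x = np.empty(n + 1)
    x[0] = history
    for k in range(n):
        delayed = history if k < lag else x[k - lag]   # x(t_k - tau)
        x[k + 1] = x[k] + dt * f(x[k], delayed)
    return x

# Delayed logistic equation x'(t) = x(t) * (1 - x(t - tau)); for tau < pi/2
# the equilibrium x = 1 is stable, so the trajectory settles near 1.
x = euler_dde(lambda xc, xd: xc * (1.0 - xd), history=0.1, tau=0.5, t_end=30.0, dt=0.01)
```

The key structural point, shared with the in vitro HIV model, is that the right-hand side depends on the state at an earlier time, so an entire history function (not just an initial value) must be supplied.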
    Optimal Control, Estimation, and Shape Design: Analysis and Applications
    (2007-11-15) David, John Andrew; Zhilin Li, Committee Member; Thomas Banks, Committee Member; Pierre Gremaud, Committee Member; Hien Tran, Committee Chair; Kazufumi Ito, Committee Member
    Optimization Problems in the Presence of Uncertainty
    (2007-09-17) Grove, Sarah Lynn; Mansoor Haider, Committee Member; Hien Tran, Committee Member; H.T. Banks, Committee Chair; Kazufumi Ito, Committee Member
    We consider optimization problems in the presence of uncertainty for three different scientific examples. The first optimization problem addressed is in the area of parameter estimation. We review the asymptotic theory for standard errors in classical ordinary least squares (OLS) inverse or parameter estimation problems involving general nonlinear dynamical systems where sensitivity matrices can be used to compute the asymptotic covariance matrices. We discuss possible pitfalls in computing standard errors in regions of low parameter sensitivity and/or near a steady state solution of the underlying dynamical system. Next we consider electromagnetic evasion-interrogation games where the evader can use ferroelectric material coatings to avoid detection while the interrogator can manipulate the interrogating frequencies and angles of incidence to enhance detection. Each player in this two-player game wishes to change the amount of reflected signal created in a way that will benefit them the most. Thus both players will attempt to optimize their chances of either remaining undetected or detecting their opponent. With the introduction of uncertainty, the resulting game is carried out over spaces of probability measures. Finally, a one player dynamical game is formulated. The premise here is for the evader to manipulate the ferroelectric material coatings to avoid detection based upon updated information about the frequencies being sent out. The uncertainty is found in the frequencies that the interrogator will employ. We incorporate both drift and diffusion into this optimization problem.
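The asymptotic-theory computation reviewed above — standard errors from the sensitivity matrix — reduces, for a model linear in its parameters, to a few lines. The straight-line data here are synthetic and illustrative; in the nonlinear dynamical-system case the columns of X would instead hold the model sensitivities dy/dtheta:

```python
import numpy as np

# OLS parameter estimation with asymptotic standard errors from the
# sensitivity matrix: cov ~ sigma^2 * (X^T X)^{-1}.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 100)
X = np.c_[np.ones_like(t), t]          # sensitivities for the model y = a + b*t
theta_true = np.array([1.0, 0.5])
y = X @ theta_true + 0.1 * rng.standard_normal(t.size)

theta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ theta
sigma2 = resid @ resid / (t.size - 2)  # unbiased noise-variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)  # asymptotic covariance of the estimates
se = np.sqrt(np.diag(cov))             # standard errors
```

The pitfalls discussed in the abstract arise exactly when the sensitivity columns become nearly dependent (low sensitivity, or near a steady state), making X^T X ill-conditioned and the standard errors unreliable.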
    Quantum Monte Carlo for Transition Metal Systems: Method Developments and Applications
    (2007-02-13) Wagner, Lucas Kyle; Dean J Lee, Committee Member; Lubos Mitas, Committee Chair; Kazufumi Ito, Committee Member; Marco Buongiorno-Nardelli, Committee Member
    Quantum Monte Carlo (QMC) is a powerful computational tool for studying correlated systems of electrons, allowing us to treat many-body interactions explicitly with favorable scaling in the number of particles. It has been regarded as a benchmark tool for condensed matter systems containing elements from the first and second rows of the periodic table. It holds particular promise for the more complicated transition metals, because QMC treats the correlations between electrons explicitly and has a computational cost that scales well with the system size. We have developed a QMC framework that is capable of simulating systems containing many electrons efficiently, through advanced algorithms and parallel operation. This framework includes a QMC program using state-of-the-art methods that make many interesting quantities available. We apply a method for finding the minimum and other properties of the potential energy surface in the presence of stochastic noise, using Bayesian inference on the total energy. We apply these developments to several transition metal systems, including the first five transition metal monoxide molecules and two interesting ABO3 perovskite solids: BaTiO3 and BiFeO3. Where experiment is available, QMC is generally in agreement, with a few exceptions that are discussed. Where experiment is unavailable, it makes predictions that can help us understand somewhat ambiguous experimental results.
    Signal Processing Tools of MRI Perfusion-weighted Imaging Data Analysis
    (2006-03-14) Wu, Yang; Brian L. Hughes, Committee Member; Kazufumi Ito, Committee Member; Griff Bilbro, Committee Member; Weili Lin, Committee Member; Hamid Krim, Committee Chair; Jeffrey Macdonald, Committee Member
    In dynamic susceptibility contrast (DSC) magnetic resonance (MR) approaches, a bolus of paramagnetic contrast agent is injected intravenously and the measured MR signal is converted to a concentration time course to estimate hemodynamic parameters such as cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT). Before estimating hemodynamic parameters, recirculation effects need to be removed by a gamma-variate fit of the concentration curve. In this dissertation, however, it has been found and demonstrated by simulation that fitting may not discern recirculation from the first pass in cases of cerebral ischemia. A new methodology using temporal independent component analysis (ICA) to remove recirculation in both normal and ischemic brain tissues while preserving the first pass is therefore proposed. This should improve hemodynamic accuracy, particularly in ischemic lesions. In DSC MR approaches, bolus delays between the arterial input function (AIF) and tissue curves may induce significant CBF quantification error. Our second contribution is using ICA to estimate the bolus arrival time for each 5x5 region of interest (ROI) throughout the brain parenchyma. A global AIF measured from a major artery can then be shifted accordingly to define a local AIF for each ROI. The bolus delay may therefore be minimized, and the general shape of the AIF is preserved. This should improve the flow quantification. Transfer functions are widely used to characterize unknown systems. In DSC MR approaches, the vascular transfer function (VTF) represents the probability density function of the vascular transit time. Our third contribution is a new tool to estimate the intracranial VTF non-invasively. This should provide an alternative means of assessing tissue perfusion status, particularly in patients with cerebrovascular diseases.
Bolus dispersion between the AIF and tissue curves may induce flow quantification error, which cannot be minimized without the knowledge of vasculature. Our final contribution is to develop an extended cerebral vascular model to minimize delay and dispersion dependence by modelling flow heterogeneity in both bulk small arteries and capillary bed. This should yield more stable flow rates less sensitive to bolus delay and dispersion.
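The gamma-variate fit mentioned above is the standard first-pass bolus model. With the arrival time fixed at zero, C(t) = K * t^alpha * exp(-t/beta) becomes linear in (ln K, alpha, 1/beta) after taking logs, so a sketch needs only ordinary least squares; the data and parameters below are synthetic and illustrative:

```python
import numpy as np

# Gamma-variate first-pass model: C(t) = K * t**alpha * exp(-t/beta).
t = np.linspace(0.5, 20.0, 80)
K, alpha, beta = 1.0, 3.0, 1.5                  # "true" synthetic parameters
rng = np.random.default_rng(4)
C = K * t**alpha * np.exp(-t / beta) * np.exp(0.01 * rng.standard_normal(t.size))

# ln C = ln K + alpha * ln t - t / beta : linear least squares in the logs.
A = np.c_[np.ones_like(t), np.log(t), -t]       # columns map to ln K, alpha, 1/beta
coef = np.linalg.lstsq(A, np.log(C), rcond=None)[0]
K_hat, alpha_hat, beta_hat = np.exp(coef[0]), coef[1], 1.0 / coef[2]
```

The dissertation's point is that when recirculation overlaps the first pass (as in ischemia), no choice of these three parameters fits the measured curve well, motivating the ICA-based separation instead.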
    Terahertz-Based Electromagnetic Interrogation Techniques for Damage Detection
    (2004-06-24) Gibson, Nathan Louis; H. Thomas Banks, Committee Chair; Hien T. Tran, Committee Member; Kazufumi Ito, Committee Member; Negash G. Medhin, Committee Member
    We apply an inverse problem formulation to determine characteristics of a defect from a perturbed electromagnetic interrogating signal. A defect (gap) inside of a dielectric material causes a disruption, via reflections and refractions at the material interfaces, of the windowed interrogating signal. We model these electromagnetic waves inside the material with Maxwell's equations. In order to resolve the dimensions and location of the defect, we use simulations as forward solves in our Newton-based, iterative scheme which optimizes an innovative cost functional appropriate for reflected waves where phase differences can produce ill-posedness in the inverse problem when one uses the usual ordinary least squares criterion. Our choice of terahertz frequency allows good resolution of desired gap widths without significant attenuation. Numerical results are given in tables and plots, standard errors are calculated, and computational issues are addressed. An inverse problem formulation is also developed for the determination of polarization parameters in heterogeneous Debye materials with multiple polarization mechanisms. For the case in which a distribution of mechanisms is present we show continuous dependence of the solutions on the probability distribution of polarization parameters in the sense of the Prohorov metric. This in turn implies well-posedness of the corresponding inverse problem, which we attempt to solve numerically for a simple uniform distribution. Lastly we address an alternate approach to modeling electromagnetic waves inside of materials with highly oscillating dielectric parameters which involves the technique of homogenization. We formulate our model in such a way that homogenization may be applied, and demonstrate the necessary equations to be solved.
    Variance Reduction for Monte Carlo Simulation of European, American or Barrier Options in a Stochastic Volatility Environment
    (2002-07-18) Tullie, Tracey Andrew; Ethelbert Chukwu, Committee Member; Kazufumi Ito, Committee Member; Peter Bloomfield, Committee Member; Jean-Pierre Fouque, Committee Chair
    In this work we develop a methodology to reduce the variance when applying Monte Carlo simulation to the pricing of a European, American or Barrier option in a stochastic volatility environment. We begin by presenting some applicable concepts in the theory of stochastic differential equations. Secondly, we develop the model for the evolution of an asset price under constant volatility. We next present the replicating portfolio and equivalent martingale measure approaches to the pricing of a European style option. Modeling an asset price with constant volatility has been shown to be inadequate [8,16]. One way to compensate for this inadequacy is the use of stochastic volatility models, in which the volatility is modeled as a function of a stochastic process [26]. A class of these models is presented and a discussion is given on how to price European options in this framework. After developing the pricing methods, we begin our discussion of Monte Carlo simulation of European options in a stochastic volatility environment. We start by describing how to run a Monte Carlo simulation for a diffusion process modeled as a stochastic differential equation. The essential element of our variance reduction technique, known as importance sampling, is then presented. Importance sampling requires a preliminary approximation to the expectation of interest, which we obtain by a fast mean-reversion expansion of the pricing partial differential equation [22,6]. A detailed discussion is given of this fast mean-reversion expansion technique, which was first presented in [10]. We compare this method of expansion with that developed in [11], known as the small-noise expansion, and demonstrate numerically the efficiency of the fast mean-reversion expansion, in particular in the presence of a skew. We next apply our variance reduction technique to the pricing of American and barrier options. A discussion is given on how to price these options under constant volatility and in the presence of stochastic volatility. Applying the importance sampling variance reduction method to a barrier option is similar to the European case, since there exists a closed-form solution for the price of this option under constant volatility [4,15]. However, in the case of an American option, Monte Carlo simulation and importance sampling are more complex. We present an algorithm to compute an American option price via Monte Carlo and describe an approximation technique to obtain a preliminary estimate of the pricing function under constant volatility. Hence, we are able to apply our variance reduction methodology to the pricing of an American option. We subsequently present numerical results for both of these options.
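The importance-sampling idea described above — sample under a shifted measure guided by a preliminary price approximation, then reweight by the likelihood ratio — can be sketched in the simplest setting, a constant-volatility European call. The parameter values are illustrative, and the drift shift below is a crude "aim at the strike" choice standing in for the asymptotic-expansion approximation the dissertation uses; nothing here is the thesis's actual scheme.

```python
import math, random

random.seed(0)

# Constant-volatility model parameters (illustrative values):
# a deep out-of-the-money European call.
S0, K, r, sigma, T = 100.0, 140.0, 0.05, 0.2, 1.0
N = 20000

def bs_call(S0, K, r, sigma, T):
    # Black-Scholes closed form, used only as a reference value.
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

def payoff(z):
    # Discounted call payoff as a function of a standard normal draw z.
    ST = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
    return math.exp(-r * T) * max(ST - K, 0.0)

# Plain Monte Carlo: most draws give a zero payoff for an OTM option.
plain = [payoff(random.gauss(0.0, 1.0)) for _ in range(N)]

# Importance sampling: draw z with mean shifted to theta and reweight by
# the Gaussian likelihood ratio exp(-theta*z + theta^2/2).  This theta
# drifts paths to the strike -- a crude stand-in for the preliminary
# approximation the thesis obtains from its expansion.
theta = (math.log(K / S0) - (r - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
shifted = [payoff(z) * math.exp(-theta * z + 0.5 * theta ** 2)
           for z in (random.gauss(theta, 1.0) for _ in range(N))]

mean = lambda xs: sum(xs) / len(xs)
std = lambda xs: (sum((x - mean(xs)) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
```

Both estimators are unbiased for the same expectation; with these numbers the shifted estimator typically reduces the sample standard deviation by an order of magnitude relative to plain sampling.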
    Vector Space Methods for Surface Reconstruction from One or More Images Acquired from the Same View with Application to Scanning Electron Microscopy Images
    (2003-08-06) Karacali, Bilge; Kazufumi Ito, Committee Member; Siamak Khorram, Committee Member; Griff Bilbro, Committee Member; Wesley Snyder, Committee Chair
    This dissertation develops novel methods to reconstruct a three-dimensional surface, together with a characterization of the surface composition, given one or more images obtained from the same viewing direction. First, a vector space method to reconstruct a surface from a given gradient field is developed, using the linear relationship between a surface and its gradient field in discrete surface domains. The gradient field representation is then generalized, relaxing the common assumption of uniform integrability of the gradient field to partial integrability and allowing adequate reconstruction of surfaces with non-integrable gradient fields. The technique is further explored for gradient field noise reduction, by embedding multiscale properties that provide sparse gradient field representations. Next, the ambiguity in possible surface gradients obtained by two-image photometric stereo analysis is resolved using a cyclic projections algorithm based on the set of possible gradient fields and the previously developed gradient field representation. An algorithm is established that provides accurate surface reconstructions and surface type characterizations given two images of an unknown composite surface. We then apply this algorithm to Scanning Electron Microscopy (SEM) images to extract specimen surface topography and material type information from a pair of Secondary Electron (SE) and Back-scattered Electron (BSE) images. We then use a similar cyclic projections algorithm to reconstruct a surface from a single image. Simulation results indicate that the developed algorithm solves this classical shape-from-shading problem in a robust and accurate manner under varying illumination conditions. Finally, we establish a unified surface reconstruction framework using the previously developed techniques on a photometric stereo image triplet containing shadows. We categorize the surface pixels as those illuminated in all three images, in only two images, or in only one image. We then establish through simulation results that the developed method uses the surface gradient information obtained from the brightness images efficiently and effectively, and provides an accurate surface reconstruction.
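The opening step of this abstract — a surface is linearly related to its forward-difference gradient field, so integration becomes a least-squares solve — can be sketched on a small grid. The test surface, grid size, and plain Gauss-Seidel sweeps below are illustrative assumptions, not the dissertation's vector-space formulation or its partial-integrability generalization.

```python
import math

# Small test surface and its exact forward-difference gradient field
# (p = horizontal differences, q = vertical differences).
H, W = 8, 8
true = [[math.sin(i / 2.0) + math.cos(j / 3.0) for j in range(W)] for i in range(H)]
p = [[true[i][j + 1] - true[i][j] for j in range(W - 1)] for i in range(H)]
q = [[true[i + 1][j] - true[i][j] for j in range(W)] for i in range(H - 1)]

# Least-squares integration of (p, q): minimize the sum of squared
# forward-difference residuals.  Gauss-Seidel sweeps on the normal
# equations (a discrete Poisson system with natural boundaries).
z = [[0.0] * W for _ in range(H)]
for _ in range(1000):
    for i in range(H):
        for j in range(W):
            terms = []
            if j > 0:
                terms.append(z[i][j - 1] + p[i][j - 1])
            if j < W - 1:
                terms.append(z[i][j + 1] - p[i][j])
            if i > 0:
                terms.append(z[i - 1][j] + q[i - 1][j])
            if i < H - 1:
                terms.append(z[i + 1][j] - q[i][j])
            z[i][j] = sum(terms) / len(terms)

# The surface is recovered only up to an additive constant, so compare
# after anchoring both surfaces at the corner pixel.
err = max(abs((z[i][j] - z[0][0]) - (true[i][j] - true[0][0]))
          for i in range(H) for j in range(W))
```

For an exactly integrable field, as here, the least-squares solution reproduces the surface; the point of the least-squares framing is that it degrades gracefully when the measured gradient field is noisy or only partially integrable.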
