Browsing by Author "Robert White, Committee Member"
Now showing 1 - 5 of 5
- Active Incipient Fault Detection With Multiple Simultaneous Faults (2010-10-20) Fair, Martene; Stephen Campbell, Committee Chair; Ernest Stitzinger, Committee Member; Negash Medhin, Committee Member; Robert White, Committee Member
- Boiling Water Reactor In-Core Fuel Management through Parallel Simulated Annealing in FORMOSA-B (2009-04-27) Hays, Ross D; Robert White, Committee Member; Paul J. Turinsky, Committee Chair; Edward Davis, Committee Member
  A commercial nuclear power plant with a boiling water reactor will utilize between 368 and 800+ individual fuel assemblies to generate steam for 18 to 24 months between refueling outages. The composition and reactivity of each fuel assembly will vary due to variations in initial enrichment, burnable poison loading, and irradiation conditions in the core. These variations pose a challenge to the engineers who must design subsequent reloads, because only one quarter to one half of the fuel will be replaced at a time. One of the challenges is to determine the optimum layout of the fuel within the core in order to get the highest value from the fuel without violating any safety or operational limits. The FORMOSA-B program was developed to automatically find a family of optimum loading patterns by combining a robust, accurate 3-D core simulator with a simulated annealing loading pattern search. Other features have been added to allow the program to rapidly compute shutdown margins and optimize control rod programming through the application of heuristic rules. One drawback of the FORMOSA-B program is that long run times, sometimes exceeding a week, are required to generate and evaluate the large number of solutions required by the simulated annealing algorithm. The rising popularity and availability of parallel computing and computational clusters provide a possible solution to the problem of long run times. To this end, a parallel simulated annealing capability has been developed for the FORMOSA-B program. The parallel simulated annealing driver utilizes standardized Message Passing Interface routines to divide the individual Markov search chains of the simulated annealing algorithm among a large number of processors.
  By evaluating multiple loading patterns concurrently, run times are significantly reduced. In testing with a 368-assembly BWR/4 model, parallel speedup factors exceeding 32 were observed with 48 processors. Parallel efficiencies are calculated to be in the range of 68% to 95% when correcting for hardware variations and the control rod programming (CRP) update frequency. Further testing investigated the effects of the CRP update frequency, of Markov chain length versus parallelization width, and of the solution downselect method on annealing performance.
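The multi-chain scheme summarized above can be sketched in miniature. This is not FORMOSA-B code: the toy objective, the serial loop standing in for MPI ranks, and every name below are illustrative assumptions; only the pattern of independent Markov chains followed by a downselect to the best solution mirrors the abstract.

```python
import math
import random

def anneal_chain(energy, neighbor, state, temps, rng):
    """Run one Markov chain of simulated annealing over a temperature schedule."""
    e = energy(state)
    for t in temps:
        cand = neighbor(state, rng)
        e_cand = energy(cand)
        # Metropolis acceptance: always take improvements, sometimes worse moves
        if e_cand <= e or rng.random() < math.exp(-(e_cand - e) / t):
            state, e = cand, e_cand
    return state, e

def multi_chain_anneal(energy, neighbor, starts, temps, seed=0):
    """Evaluate several independent chains (one per notional processor),
    then downselect to the best solution found across all chains."""
    results = []
    for i, s0 in enumerate(starts):
        rng = random.Random(seed + i)  # independent random stream per chain
        results.append(anneal_chain(energy, neighbor, s0, temps, rng))
    return min(results, key=lambda r: r[1])

# Toy objective: minimize (x - 3)^2 over the integers
energy = lambda x: (x - 3) ** 2
neighbor = lambda x, rng: x + rng.choice([-1, 1])
temps = [10.0 * 0.9 ** k for k in range(200)]  # geometric cooling schedule
best, e_best = multi_chain_anneal(energy, neighbor, [0, 10, -5, 20], temps)
```

Because the chains share nothing until the final downselect, they map naturally onto MPI ranks, which is the source of the near-linear speedups reported above.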
- Geometric and Topological Variational Methods for Imaging and Computer Vision (2004-02-01) Ben Hamza, Abdessamad; Hamid Krim, Committee Chair; Griff Bilbro, Committee Member; Gianluca Lazzi, Committee Member; Robert White, Committee Member
  The great challenge in signal/image processing is to devise computationally efficient and optimal algorithms for estimating signals/images contaminated by noise while preserving their geometrical structure. The first problem addressed in this thesis is image denoising, formulated in the calculus of variations framework. We propose robust variational models for image denoising by numerically solving partial differential equations. The core idea behind our proposed approaches is to use geometric insight to help construct regularizing functionals, avoiding a subjective choice of a prior in maximum a posteriori estimation. Using tools from robust statistics and information theory, we show that we can extend this strategy and develop two gradient descent flows for image denoising, with performance demonstrated through illustrative experimental results. The rest of the thesis is devoted to a joint exploitation of the geometry and topology of objects to obtain as parsimonious a representation as possible, and to its subsequent application in object classification and recognition problems. Attempting to extend current approaches to image registration, which have generally relied on the assumption of 2D images, we propose a novel technique for 3D object matching using a joint exploitation of geometry and topology. The key idea consists of capturing geometry along all topologically homogeneous parts of an object by way of level curves superimposed on a Reeb graph, usually extracted by way of the object's critical points. This resulting skeletal representation, however, is not rotationally invariant. We propose a new methodology called the *geodesic shape distribution* that lifts this limitation and which we apply to 3D object matching.
  The central idea is to encode a 3D shape into a 1D geodesic shape distribution. Object matching is then achieved by calculating an information-theoretic measure of dissimilarity between the resulting geodesic shape distributions in a lower dimensional space. Illustrative numerical experiments with synthetic and real data demonstrate the potential and the much improved performance of the proposed methodology in 3D object matching.
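The dissimilarity step can be illustrated with one standard information-theoretic measure. The abstract does not name the specific measure used, so the Jensen-Shannon divergence below is only a representative choice, and the two toy four-bin histograms stand in for actual geodesic shape distributions.

```python
import math

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two discrete distributions:
    a symmetric, bounded (0..1 in bits) information-theoretic dissimilarity."""
    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability bins
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]  # mixture distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two toy "shape distributions" over 4 bins (must each sum to 1)
p = [0.4, 0.3, 0.2, 0.1]
q = [0.1, 0.2, 0.3, 0.4]
d_pq = jensen_shannon(p, q)
```

A matching pipeline would compute one such distribution per 3D object and rank candidates by this dissimilarity; identical shapes give divergence zero.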
- Method of evaluating the effect of HPGe design on the sensitivity of physics experiments (2009-05-14) Kephart, Jeremy Dale; Albert Young, Committee Chair; Robert White, Committee Member; Christopher Gould, Committee Member; Paul Huffman, Committee Member
  Motivated by planned double beta decay experiments in 76Ge, I describe a computational model for the electric fields of solid state diode detectors and the subsequent charge transport. Aspects of detector performance determined by the impurity charge concentration are explored in a series of measurements of comparable "point contact" p-type germanium detectors and compared to our computational model. In particular, we measure the capacitance of the germanium detector as a function of the bias voltage to determine the free parameters in a three-parameter model of the impurity charge density, mapping out the density at all points within the detector volume. We then use our impurity charge density map to determine the sensitivity of pulse shape analysis applied to various classes of physics events detected in the crystal. When possible, the impact of our refinements on a figure of merit for double beta decay experiments is described.
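The capacitance-versus-bias fitting step can be sketched as follows. The thesis's three-parameter impurity model is not reproduced in the abstract, so the two-parameter planar-junction form C(V) = a / sqrt(V + v0), the brute-force grid search, and the synthetic data are all illustrative assumptions; only the idea of fixing free parameters by least-squares fits to C-V data follows the text.

```python
import math

def capacitance_model(v, a, v0):
    """Hypothetical planar-diode form: C(V) = a / sqrt(V + v0).
    Stands in for the thesis's three-parameter impurity-density model."""
    return a / math.sqrt(v + v0)

def fit_grid(volts, caps, a_grid, v0_grid):
    """Brute-force least squares over a parameter grid: return the
    (sse, a, v0) triple minimizing the sum of squared residuals."""
    best = None
    for a in a_grid:
        for v0 in v0_grid:
            sse = sum((c - capacitance_model(v, a, v0)) ** 2
                      for v, c in zip(volts, caps))
            if best is None or sse < best[0]:
                best = (sse, a, v0)
    return best

# Synthetic noiseless C-V data generated from a = 100, v0 = 50
volts = [100, 500, 1000, 2000, 3000]
caps = [capacitance_model(v, 100.0, 50.0) for v in volts]
sse, a_fit, v0_fit = fit_grid(volts, caps,
                              [90 + i for i in range(21)],
                              [40 + j for j in range(21)])
```

With noiseless data the search recovers the generating parameters exactly; real C-V measurements would instead yield a best-fit with nonzero residual.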
- Monte Carlo Application for the use of Detector Response Function on Scintillation Detector Spectra (2009-08-07) Speaker, Daniel P; Robin P. Gardner, Committee Chair; Hany S. Abdel-Khalik, Committee Member; Robert White, Committee Member
  The detector response function (DRF) is the pulse height distribution produced by an incident radiation. It is also a probability density function: it is always greater than or equal to zero and integrates to unity. Applying the DRF to a simulated spectrum allows the simulation results to be benchmarked against experimental results, producing the Gaussian peak shapes caused by statistical fluctuations in the energy deposition and collection efficiency of the detector. A perfect simulation of the DRF is impossible because the detector may have imperfections where electrons can become trapped and never be collected, so one must rely on empirical models of nonlinearity together with simulation data. This is what CEAR's DRF code g03 does. The time-consuming part of a code like g03 is the Monte Carlo simulation itself, in particular its electron transport. g03 couples rigorous gamma ray transport with very simple electron transport; this methodology accounts for the nonlinearity and the variable flat-continuum part of the DRF. Some problems and upgrades still needed to be addressed, for instance the discrepancy in the valley region between the photopeak and the Compton edge, and in parts of the Compton continuum. The Monte Carlo simulation also models the detector as a bare crystal, which was found to reduce the incident energy by as much as 5 percent and to distort the response function in the lower energy range. MCNP was therefore employed to simulate the difference between the bare and covered crystal.
  The MCNP simulation also included surface current tallies for electrons and photons on the interface between the can and the crystal, and on the interface between the side of the crystal and the can. The pulse height spectra from the can and no-can simulations differ, and at this point it was decided to add a patch to make the simulated detector response function more accurate. The can causes a sizeable difference in the valley region, which can be explained as many different photopeaks falling in the valley due to Compton scatters in the can. One can also distinguish between the plots and conclude that the side of the can contributes to the continuum, through the backscatter continuum that starts around 0.2 MeV. The way each contribution is added depends on where in the can it originates and on the particle type. For the electrons entering from the front and the side, the spectrum is run through a program that calculates the energy deposited, and this is added directly to the spectrum. The photons from the side are run through MCNP with an F8 tally, which is in turn added to the spectrum. The photons from the front, perhaps the most significant contribution, are handled by giving g03 a spectrum of incident photons on the crystal instead of the present monoenergetic source.
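The basic effect of a DRF on an ideal spectrum can be sketched with simple Gaussian broadening. The 21-channel spectrum, the 2-bin sigma, and the per-channel normalization below are illustrative assumptions, not the g03 algorithm, which additionally models nonlinearity and the flat-continuum component.

```python
import math

def gaussian_broaden(spectrum, sigma_bins):
    """Apply a simple Gaussian detector response to an ideal (delta-like)
    pulse height spectrum: each channel's counts spread into a Gaussian
    normalized over the spectrum, modeling statistical fluctuations in
    energy deposition and collection efficiency."""
    n = len(spectrum)
    out = [0.0] * n
    for i, counts in enumerate(spectrum):
        if counts == 0:
            continue
        weights = [math.exp(-0.5 * ((j - i) / sigma_bins) ** 2)
                   for j in range(n)]
        norm = sum(weights)  # normalize so total counts are preserved
        for j, w in enumerate(weights):
            out[j] += counts * w / norm
    return out

# Ideal monoenergetic photopeak: all counts in one channel
ideal = [0.0] * 21
ideal[10] = 1000.0
broadened = gaussian_broaden(ideal, sigma_bins=2.0)
```

The sharp line becomes a Gaussian peak centered on the same channel while the total count is preserved, which is the benchmarking behavior the abstract describes.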
