Quantifying Shared Information Value in a Supply Chain Using Decentralized Markov Decision Processes with Restricted Observations

dc.contributor.advisor: Jacqueline Hughes-Oliver, Committee Member
dc.contributor.advisor: Henry Nuttle, Committee Member
dc.contributor.advisor: Thom Hodgson, Committee Co-Chair
dc.contributor.advisor: Russell King, Committee Co-Chair
dc.contributor.author: Wei, Wenbin
dc.date.accessioned: 2010-04-02T18:48:31Z
dc.date.available: 2010-04-02T18:48:31Z
dc.date.issued: 2005-09-27
dc.degree.discipline: Industrial Engineering
dc.degree.level: dissertation
dc.degree.name: PhD
dc.description.abstract: Information sharing in two-stage and three-stage supply chains is studied. Assuming the customer demand distribution is known along the supply chain, the information to be shared is the inventory level of each supply chain member. In order to study the value of shared information, the supply chain is examined under different information sharing schemes. A Markov decision process (MDP) approach is used to model the supply chain, and the optimal policy given each scheme is determined. By comparing these schemes, the value of shared information can be quantified. Since the optimal policy maximizes the total profit within a supply chain, allocation of the profit among supply chain members, or transfer cost/price negotiation, is also discussed. The information sharing schemes include full information sharing, partial information sharing, and no information sharing. In the case of full information sharing, the supply chain problem is modeled as a single-agent Markov decision process with complete observations (a traditional MDP), which can be solved with the policy iteration method of Howard (1960). In the case of partial or no information sharing, the supply chain problem is modeled as a decentralized Markov decision process with restricted observations (DEC-ROMDP), in which each agent may have complete observation of the process or only restricted observation of it. In order to solve the DEC-ROMDP, an evolutionary coordination algorithm is introduced, which proves effective when coupled with policy perturbation and multiple-start strategies.
dc.identifier.other: etd-09252005-143536
dc.identifier.uri: http://www.lib.ncsu.edu/resolver/1840.16/4189
dc.rights: I hereby certify that, if appropriate, I have obtained and attached hereto a written permission statement from the owner(s) of each third party copyrighted matter to be included in my thesis, dissertation, or project report, allowing distribution as specified below. I certify that the version I submitted is the same as that approved by my advisory committee. I hereby grant to NC State University or its agents the non-exclusive license to archive and make accessible, under the conditions specified below, my thesis, dissertation, or project report in whole or in part in all forms of media, now or hereafter known. I retain all other ownership rights to the copyright of the thesis, dissertation or project report. I also retain the right to use in future works (such as articles or books) all or part of this thesis, dissertation, or project report.
dc.subject: information sharing
dc.subject: supply chain
dc.subject: successive approximation
dc.subject: decentralized Markov decision process with restricted observations
dc.subject: partially observable Markov decision process
dc.subject: perturbation
dc.subject: Markov decision process
dc.subject: transfer price negotiation
dc.subject: inventory policy
dc.title: Quantifying Shared Information Value in a Supply Chain Using Decentralized Markov Decision Processes with Restricted Observations
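
The abstract above notes that the full-information-sharing case reduces to a traditional MDP solvable by Howard's (1960) policy iteration. As a minimal sketch of that method on a toy problem (the 2-state, 2-action transition and reward arrays below are illustrative assumptions, not data from the dissertation):

```python
import numpy as np

# Toy MDP: P[a, s, s'] = transition probability, r[a, s] = expected reward.
# These numbers are made up for illustration only.
n_states, gamma = 2, 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.6, 0.4]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])

policy = np.zeros(n_states, dtype=int)
while True:
    # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
    P_pi = P[policy, np.arange(n_states)]
    r_pi = r[policy, np.arange(n_states)]
    v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
    # Policy improvement: act greedily with respect to v.
    q = r + gamma * P @ v
    new_policy = q.argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break  # greedy policy unchanged => optimal
    policy = new_policy
```

Policy iteration alternates exact evaluation of the current policy with greedy improvement, and terminates when the policy is stable; the decentralized (DEC-ROMDP) cases in the dissertation require the evolutionary coordination algorithm instead, which is not sketched here.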

Files

Original bundle

Name: etd.pdf
Size: 679.03 KB
Format: Adobe Portable Document Format
