Quantifying Shared Information Value in a Supply Chain Using Decentralized Markov Decision Processes with Restricted Observations
| dc.contributor.advisor | Jacqueline Hughes-Oliver, Committee Member | en_US |
| dc.contributor.advisor | Henry Nuttle, Committee Member | en_US |
| dc.contributor.advisor | Thom Hodgson, Committee Co-Chair | en_US |
| dc.contributor.advisor | Russell King, Committee Co-Chair | en_US |
| dc.contributor.author | Wei, Wenbin | en_US |
| dc.date.accessioned | 2010-04-02T18:48:31Z | |
| dc.date.available | 2010-04-02T18:48:31Z | |
| dc.date.issued | 2005-09-27 | en_US |
| dc.degree.discipline | Industrial Engineering | en_US |
| dc.degree.level | dissertation | en_US |
| dc.degree.name | PhD | en_US |
| dc.description.abstract | Information sharing in two-stage and three-stage supply chains is studied. Assuming the customer demand distribution is known along the supply chain, the information to be shared is the inventory level of each supply chain member. To study the value of shared information, the supply chain is examined under different information sharing schemes. A Markov decision process (MDP) approach is used to model the supply chain, and the optimal policy under each scheme is determined. By comparing these schemes, the value of shared information can be quantified. Since the optimal policy maximizes the total profit within a supply chain, allocation of the profit among supply chain members, or transfer cost/price negotiation, is also discussed. The information sharing schemes include full information sharing, partial information sharing, and no information sharing. In the case of full information sharing, the supply chain problem is modeled as a single-agent Markov decision process with complete observations (a traditional MDP), which can be solved using the policy iteration method of Howard (1960). In the case of partial or no information sharing, the supply chain problem is modeled as a decentralized Markov decision process with restricted observations (DEC-ROMDP), in which each agent may have complete observation of the process or only restricted observation of it. To solve the DEC-ROMDP, an evolutionary coordination algorithm is introduced, which proves effective when coupled with policy perturbation and multiple-start strategies. | en_US |
| dc.identifier.other | etd-09252005-143536 | en_US |
| dc.identifier.uri | http://www.lib.ncsu.edu/resolver/1840.16/4189 | |
| dc.rights | I hereby certify that, if appropriate, I have obtained and attached hereto a written permission statement from the owner(s) of each third party copyrighted matter to be included in my thesis, dissertation, or project report, allowing distribution as specified below. I certify that the version I submitted is the same as that approved by my advisory committee. I hereby grant to NC State University or its agents the non-exclusive license to archive and make accessible, under the conditions specified below, my thesis, dissertation, or project report in whole or in part in all forms of media, now or hereafter known. I retain all other ownership rights to the copyright of the thesis, dissertation or project report. I also retain the right to use in future works (such as articles or books) all or part of this thesis, dissertation, or project report. | en_US |
| dc.subject | information sharing | en_US |
| dc.subject | supply chain | en_US |
| dc.subject | successive approximation | en_US |
| dc.subject | decentralized Markov decision process with restricted observations | en_US |
| dc.subject | partially observable Markov decision process | en_US |
| dc.subject | perturbation | en_US |
| dc.subject | Markov decision process | en_US |
| dc.subject | transfer price negotiation | en_US |
| dc.subject | inventory policy | en_US |
| dc.title | Quantifying Shared Information Value in a Supply Chain Using Decentralized Markov Decision Processes with Restricted Observations | en_US |
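The abstract notes that the full-information-sharing case reduces to a traditional MDP solvable by Howard's (1960) policy iteration. As an illustrative sketch only (not the dissertation's implementation; the array shapes and the toy problem below are assumptions), policy iteration alternates exact policy evaluation with greedy policy improvement until the policy stabilizes:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Howard's (1960) policy iteration for a finite MDP.

    P: transition probabilities, shape (A, S, S), P[a, s, s'] = Pr(s' | s, a)
    R: immediate rewards, shape (S, A)
    Returns the optimal deterministic policy (S,) and its value function (S,).
    """
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)  # arbitrary initial policy
    while True:
        # Policy evaluation: solve the linear system (I - gamma * P_pi) v = r_pi
        P_pi = P[policy, np.arange(n_states)]   # (S, S) rows under current policy
        r_pi = R[np.arange(n_states), policy]   # (S,) rewards under current policy
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead on the evaluated values
        q = R.T + gamma * (P @ v)               # (A, S) action values
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):  # stable policy => optimal
            return policy, v
        policy = new_policy
```

For example, with two states where state 1 yields reward 1 and action 1 swaps states, the method finds the policy "move to state 1, then stay." The partial- and no-information cases in the dissertation do not admit this direct solution, which is why the evolutionary coordination algorithm is introduced for the DEC-ROMDP.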