Theses and Dissertations

Author

Yi-chi Wang

Issuing Body

Mississippi State University

Advisor

Usher, John M.

Committee Member

Smyer, William N.

Committee Member

Daniewicz, Steven R.

Committee Member

Bowden, Royce O.

Committee Member

Bullington, Stanley F.

Date of Degree

12-13-2003

Document Type

Dissertation - Open Access

Major

Industrial Engineering

Degree Name

Doctor of Philosophy

College

James Worth Bagley College of Engineering

Department

Department of Industrial Engineering

Abstract

Reinforcement learning (RL) has received attention in recent years from agent-based researchers because it can be applied to problems in which autonomous agents learn to select appropriate actions for achieving their goals through interaction with their environment. Each time an agent performs an action, the environment's response, as indicated by its new state, is used by the agent to reward or penalize that action. The agent's goal is to maximize the total reward it receives over the long run. Although several successful examples have demonstrated the usefulness of RL, its application to manufacturing systems has not been fully explored. The objective of this research is to develop a set of guidelines for applying the Q-learning algorithm to enable an individual agent to develop a decision-making policy for use in agent-based production scheduling applications such as dispatching rule selection and job routing. For the dispatching rule selection problem, a single machine agent employs the Q-learning algorithm to develop a decision-making policy for selecting the appropriate dispatching rule from among three given rules. For the job routing problem, a simulated job shop system is used to examine the implementation of the Q-learning algorithm by job agents making routing decisions in such an environment. Two factorial experimental designs are carried out to study the settings used in applying Q-learning to the single machine dispatching rule selection problem and the job routing problem. This study not only investigates the main effects of this Q-learning application but also provides recommendations for factor settings and useful guidelines for future applications of Q-learning to agent-based production scheduling.
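The abstract describes the core Q-learning loop: an agent observes a state, selects an action (here, a dispatching rule), receives a reward from the environment, and updates its value estimates. The sketch below is a minimal illustration of tabular Q-learning with epsilon-greedy action selection, not the dissertation's actual code. The rule names (SPT, EDD, FIFO), the state features, the reward signal, and the parameter values alpha, gamma, and epsilon are all assumptions made for illustration; identifying good settings for such factors is precisely what the dissertation's factorial experiments address.

import random
from collections import defaultdict

# Hypothetical dispatching rules; the abstract mentions three rules but does
# not name them, so SPT, EDD, and FIFO are assumed here for illustration.
ACTIONS = ["SPT", "EDD", "FIFO"]

alpha = 0.1    # learning rate (assumed setting)
gamma = 0.9    # discount factor (assumed setting)
epsilon = 0.1  # exploration probability (assumed setting)

# Q[(state, action)] estimates the long-run reward of choosing a rule in a state.
Q = defaultdict(float)

def choose_action(state):
    """Epsilon-greedy: usually pick the best-known rule, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """One-step Q-learning update toward reward plus discounted best future value."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Toy interaction step with invented state features and reward.
s = ("queue_len=5", "util=0.8")        # hypothetical machine-state signature
a = choose_action(s)
r = -3.0                               # e.g., negative mean tardiness (assumed)
s_next = ("queue_len=4", "util=0.7")
q_update(s, a, r, s_next)

In an agent-based scheduler of the kind the abstract describes, each machine or job agent would maintain its own Q table and apply such an update after observing the outcome of each scheduling decision.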

URI

https://hdl.handle.net/11668/19586
