Machine Learning, 8, 279-292 (1992)
© 1992 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.
Technical Note
Q-Learning
CHRISTOPHER J.C.H. WATKINS
25b Framfield Road, Highbury, London N5 1UU, England
PETER DAYAN
Centre for Cognitive Science, University of Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9EH, Scotland
Abstract. Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian
domains. It amounts to an incremental method for dynamic programming which imposes limited computational
demands. It works by successively improving its evaluations of the quality of particular actions at particular states.
This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins
(1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions
are repeatedly sampled in all states and the action-values are represented discretely.
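For orientation, the following is a minimal sketch of the tabular one-step Q-learning update the abstract describes: the agent acts, observes a reward and next state, and nudges its action-value estimate toward the observed return. The environment interface (reset, step, actions) and the parameter values are illustrative assumptions, not part of the paper.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
        """Tabular one-step Q-learning (sketch).

        `env` is assumed to expose reset() -> state,
        step(action) -> (next_state, reward, done), and a list
        `env.actions`; this interface is hypothetical.
        """
        Q = defaultdict(float)  # Q[(state, action)], initialised to 0

        for _ in range(episodes):
            s = env.reset()
            done = False
            while not done:
                # Epsilon-greedy exploration keeps every action repeatedly
                # sampled in every state, as the convergence theorem requires.
                if random.random() < epsilon:
                    a = random.choice(env.actions)
                else:
                    a = max(env.actions, key=lambda act: Q[(s, act)])

                s2, r, done = env.step(a)

                # One-step Q-learning update:
                # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
                if done:
                    target = r
                else:
                    target = r + gamma * max(Q[(s2, a2)] for a2 in env.actions)
                Q[(s, a)] += alpha * (target - Q[(s, a)])
                s = s2

        return Q

Because the estimates are stored discretely, one entry per state-action pair, and every pair keeps being visited, the update drives Q toward the optimum action-values with probability 1, which is the result proved in the paper.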