Machine Learning, 8, 279-292 (1992)
© 1992 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.
Technical Note
Q-Learning
CHRISTOPHER J.C.H. WATKINS
25b Framfield Road, Highbury, London N5 1UU, England
PETER DAYAN
Centre for Cognitive Science, University of Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9EH, Scotland
Abstract. Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian
domains. It amounts to an incremental method for dynamic programming which imposes limited computational
demands. It works by successively improving its evaluations of the quality of particular actions at particular states.
This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins
(1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions
are repeatedly sampled in all states and the action-values are represented discretely.
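The incremental action-value update the abstract describes can be sketched in code. The environment below (a toy two-state MDP), the exploration rate, and the learning-rate choice are all illustrative assumptions, not part of the paper; only the one-step update rule itself is Watkins' Q-learning.

```python
import random

random.seed(0)

states = [0, 1]
actions = [0, 1]
gamma = 0.9   # discount factor
alpha = 0.5   # learning rate (hypothetical constant choice)

def step(s, a):
    """Toy deterministic environment (an assumption for illustration):
    action 1 in state 0 moves to state 1 with reward 1; everything
    else stays put with reward 0."""
    if s == 0 and a == 1:
        return 1, 1.0
    return s, 0.0

# Discretely represented action-value table, initialised to zero,
# as in the tabular setting the convergence theorem assumes.
Q = {(s, a): 0.0 for s in states for a in actions}

for episode in range(500):
    s = 0
    for t in range(10):
        # Epsilon-greedy exploration keeps every action being sampled
        # in every visited state, matching the repeated-sampling
        # condition stated in the abstract.
        if random.random() < 0.1:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])
        s_next, r = step(s, a)
        # One-step Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_b Q(s',b) - Q(s,a))
        target = r + gamma * max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s_next
```

In this toy chain the optimal action-value Q(0, 1) is 1 + gamma * 0 = 1, and the table converges to it, illustrating (not proving) the theorem's conclusion in a case where all state-action pairs are sampled repeatedly.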

