Deep Reinforcement Learning in Practice: DQN
Note: this RL series consists of study notes on Morvan Zhou's (莫烦) tutorials; I am simply writing things down as I learn.
Today we will put the DQN theory covered in the previous article into practice. The setting is still the explorer-reaches-heaven maze game; starting from the next article we will switch to the OpenAI Gym environment library and play whatever game we like.
The algorithm
The whole algorithm looks complicated at first glance, but it is really just the Q-Learning framework with three things added:
experience replay (a replay memory)
a neural network to compute the Q values
temporarily freezing the q_target parameters
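Written out with the same quantities that appear in the code below, the target and the loss are simply the Q-Learning update with the two networks plugged in:

    q_target = r + gamma * max_a' Q_target(s', a')
    loss     = ( q_target - Q_eval(s, a) )^2

Q_target(s', a') comes from the frozen target_net, Q_eval(s, a) from the trainable eval_net, and the loss is averaged over a minibatch sampled from the replay memory.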
Next, let's implement the theory from the previous article step by step.
How DQN interacts with the environment
There is not much to explain here; we simply write the code following the algorithm's flow.
from maze_env import Maze
from DQN_modified import DeepQNetwork


def run_maze():
    step = 0  # used to control when learning starts
    for episode in range(25000):
        # initialize the environment
        observation = env.reset()

        while True:
            # refresh the environment
            env.render()

            # DQN chooses an action based on the observation
            action = RL.choose_action(observation)

            # the environment returns the next state, the reward, and whether the episode is done
            observation_, reward, done = env.step(action)

            # DQN stores the transition
            RL.store_transition(observation, action, reward, observation_)

            # control when learning starts and how often it happens
            # (wait until step > 200 and then learn once every 5 steps,
            #  so that some memories accumulate before learning starts)
            if (step > 200) and (step % 5 == 0):
                RL.learn()

            # swap observation
            observation = observation_

            # break the while loop when this episode ends
            if done:
                break
            step += 1

    # end of game
    print('game over')
    env.destroy()


if __name__ == "__main__":
    # maze game
    env = Maze()
    RL = DeepQNetwork(env.n_actions, env.n_features,
                      learning_rate=0.01,
                      reward_decay=0.9,
                      e_greedy=0.9,
                      replace_target_iter=200,  # replace target_net's parameters every 200 learning steps
                      memory_size=2000,         # memory capacity
                      # output_graph=True       # whether to write a TensorBoard file
                      )
    env.after(100, run_maze)
    env.mainloop()
    RL.plot_cost()
Building the two networks: target_net and eval_net
As mentioned in the previous article, we introduce two neural networks to reduce the correlation between the current Q value and the target Q value, which improves the stability of the algorithm, so next we build these two networks. target_net is used to predict q_target and does not update its parameters immediately, while eval_net is used to predict q_eval and always holds the newest parameters. The two networks have exactly the same structure; only their parameters differ.
The point of having two networks is to keep the parameters of one of them (target_net) fixed: target_net is a historical version of eval_net, holding a set of parameters eval_net had some time ago, and this set stays frozen for a while before being replaced by eval_net's newer parameters. eval_net, on the other hand, is improved continuously, so it is the trainable network (trainable=True), while target_net has trainable=False. In the code below the freezing shows up as tf.stop_gradient on q_target, so the loss only updates eval_net's parameters, and target_net is refreshed by explicitly copying eval_net's parameters every replace_target_iter learning steps.
import numpy as np
import tensorflow as tf

np.random.seed(1)
tf.set_random_seed(1)


# Deep Q Network off-policy
class DeepQNetwork:
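    # NOTE: the constructor is not shown in the original post; the __init__ below is a
    # reconstructed sketch based on the attributes and hyperparameters used in the rest
    # of the code (the default values here are assumptions).
    def __init__(self, n_actions, n_features, learning_rate=0.01, reward_decay=0.9,
                 e_greedy=0.9, replace_target_iter=300, memory_size=500,
                 batch_size=32, e_greedy_increment=None, output_graph=False):
        self.n_actions = n_actions
        self.n_features = n_features
        self.lr = learning_rate                  # learning rate for RMSProp
        self.gamma = reward_decay                # discount factor
        self.epsilon_max = e_greedy              # upper bound of the greedy rate
        self.replace_target_iter = replace_target_iter
        self.memory_size = memory_size
        self.batch_size = batch_size
        self.epsilon_increment = e_greedy_increment
        self.epsilon = 0 if e_greedy_increment is not None else self.epsilon_max

        # total number of learn() calls
        self.learn_step_counter = 0

        # replay memory: each row stores one transition [s, a, r, s_]
        self.memory = np.zeros((self.memory_size, n_features * 2 + 2))

        # build eval_net and target_net
        self._build_net()

        # op that copies eval_net's parameters into target_net ("hard replacement")
        t_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='target_net')
        e_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='eval_net')
        with tf.variable_scope('hard_replacement'):
            self.target_replace_op = [tf.assign(t, e) for t, e in zip(t_params, e_params)]

        self.sess = tf.Session()
        if output_graph:
            tf.summary.FileWriter("logs/", self.sess.graph)
        self.sess.run(tf.global_variables_initializer())
        self.cost_his = []                       # history of the loss, used by plot_cost()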
    def _build_net(self):
        # ------------------ all inputs ------------------------
        self.s = tf.placeholder(tf.float32, [None, self.n_features], name='s')    # input State
        self.s_ = tf.placeholder(tf.float32, [None, self.n_features], name='s_')  # input Next State
        self.r = tf.placeholder(tf.float32, [None, ], name='r')                   # input Reward
        self.a = tf.placeholder(tf.int32, [None, ], name='a')                     # input Action

        w_initializer, b_initializer = tf.random_normal_initializer(0., 0.3), tf.constant_initializer(0.1)

        # ------------------ build evaluate_net ------------------
        with tf.variable_scope('eval_net'):
            e1 = tf.layers.dense(self.s, 20, tf.nn.relu, kernel_initializer=w_initializer,
                                 bias_initializer=b_initializer, name='e1')
            self.q_eval = tf.layers.dense(e1, self.n_actions, kernel_initializer=w_initializer,
                                          bias_initializer=b_initializer, name='q')

        # ------------------ build target_net ------------------
        with tf.variable_scope('target_net'):
            t1 = tf.layers.dense(self.s_, 20, tf.nn.relu, kernel_initializer=w_initializer,
                                 bias_initializer=b_initializer, name='t1')
            self.q_next = tf.layers.dense(t1, self.n_actions, kernel_initializer=w_initializer,
                                          bias_initializer=b_initializer, name='t2')

        with tf.variable_scope('q_target'):
            q_target = self.r + self.gamma * tf.reduce_max(self.q_next, axis=1, name='Qmax_s_')  # shape=(None, )
            # cut the gradient flow through q_target with stop_gradient, which makes the loss easy to compute
            self.q_target = tf.stop_gradient(q_target)
        with tf.variable_scope('q_eval'):
            # pick out Q(s, a) for the action that was actually taken
            a_indices = tf.stack([tf.range(tf.shape(self.a)[0], dtype=tf.int32), self.a], axis=1)
            self.q_eval_wrt_a = tf.gather_nd(params=self.q_eval, indices=a_indices)  # shape=(None, )
        with tf.variable_scope('loss'):
            self.loss = tf.reduce_mean(tf.squared_difference(self.q_target, self.q_eval_wrt_a, name='TD_error'))
        with tf.variable_scope('train'):
            self._train_op = tf.train.RMSPropOptimizer(self.lr).minimize(self.loss)

    def store_transition(self, s, a, r, s_):
        if not hasattr(self, 'memory_counter'):
            self.memory_counter = 0
        # record one transition [s, a, r, s_]
        transition = np.hstack((s, [a, r], s_))
        # the total memory size is fixed; once it is full, old memories are overwritten by new ones
        index = self.memory_counter % self.memory_size
        self.memory[index, :] = transition  # the overwrite happens here
        self.memory_counter += 1

    def choose_action(self, observation):
        # make sure observation has shape (1, size_of_observation)
        observation = observation[np.newaxis, :]

        if np.random.uniform() < self.epsilon:
            # let eval_net produce the values of all actions and pick the action with the largest value
            actions_value = self.sess.run(self.q_eval, feed_dict={self.s: observation})
            action = np.argmax(actions_value)
        else:
            action = np.random.randint(0, self.n_actions)
        return action

    def learn(self):
        # check whether it is time to replace target_net's parameters
        if self.learn_step_counter % self.replace_target_iter == 0:
            self.sess.run(self.target_replace_op)
            print('\ntarget_params_replaced\n')

        # randomly sample batch_size transitions from memory
        if self.memory_counter > self.memory_size:
            sample_index = np.random.choice(self.memory_size, size=self.batch_size)
        else:
            sample_index = np.random.choice(self.memory_counter, size=self.batch_size)
        batch_memory = self.memory[sample_index, :]

        # train eval_net (each memory row is laid out as [s (n_features), a, r, s_ (n_features)])
        _, cost = self.sess.run(
            [self._train_op, self.loss],
            feed_dict={
                self.s: batch_memory[:, :self.n_features],
                self.a: batch_memory[:, self.n_features],
                self.r: batch_memory[:, self.n_features + 1],
                self.s_: batch_memory[:, -self.n_features:],
            })

        self.cost_his.append(cost)

        # gradually increase epsilon to reduce the randomness of the behaviour
        self.epsilon = self.epsilon + self.epsilon_increment if self.epsilon < self.epsilon_max else self.epsilon_max
        self.learn_step_counter += 1

    def plot_cost(self):
        import matplotlib.pyplot as plt
        plt.plot(np.arange(len(self.cost_his)), self.cost_his)
        plt.ylabel('Cost')
        plt.xlabel('training steps')
        plt.show()