• [PyTorch Deep Reinforcement Learning] TD3 (Twin Delayed Deep Deterministic Policy Gradient): Explanation and Hands-On Practice (Detailed, with Source Code)


    If you need the full source code, please like, follow, and bookmark, then leave a message in the comments~~~

    1. The Twin Delayed Deep Deterministic Policy Gradient (TD3) Algorithm

    Building on DDPG, TD3 is mainly designed to address the bias and variance problems that function approximation introduces into the actor-critic (AC) framework. On the one hand, variance leads to overestimation; to deal with the overestimation problem, TD3 applies clipped Double Q-Learning within the AC framework. On the other hand, high variance causes errors to accumulate; to deal with error accumulation, TD3 uses two further techniques: delayed policy updates and target policy smoothing with added noise.

    Addressing the overestimation problem

       As discussed for policy gradient methods, PG-based reinforcement learning also suffers from overestimation. However, the target value of DDPG's critic is not obtained by maximizing over action values, so there is no explicit max operation. One option is to apply the Double DQN idea directly to DDPG's critic and construct the following target:

    y = r + \gamma Q\big(s', \mu(s', \theta), w'\big) \qquad (bbb.11)
       In practice this does not work well: in a continuous action space the policy changes slowly and the actor is updated gently, so the predicted Q value and the target Q value differ very little, and overestimation cannot be avoided.

       Consider instead applying the Double Q-Learning idea to DDPG, with two independent critics Q_{w_1}, Q_{w_2} and two independent actors μ_{θ_1}, μ_{θ_2}. With probability 50% the action is generated using Q_1 and the estimate of Q_2 is updated, and with the remaining 50% the roles are reversed. The two target values needed for the updates are:
    y_1 = r + \gamma Q\big(s', \mu(s', \theta_1), w_2'\big), \quad y_2 = r + \gamma Q\big(s', \mu(s', \theta_2), w_1'\big) \qquad (bbb.12)
       However, because all samples come from the same replay buffer, they are not fully independent, so the samples seen by the two actors are correlated; in some cases this can even aggravate overestimation. For this situation, following the principle of "better to underestimate than to overestimate", Double Q-Learning is modified to give the target value of clipped Double Q-Learning:
    y = r + \gamma \min_{i=1,2} Q\big(s', \mu(s', \theta_1), w_i'\big) \qquad (bbb.13)
       As Eq. (bbb.13) shows, the target uses only one actor network μ_{θ_1} and takes the minimum of the two critic networks Q_{w_1} and Q_{w_2} as the value estimate.
       When updating the critic networks Q_{w_1} and Q_{w_2}, both use the target y of Eq. (bbb.13) and share the following loss:
    L(w_i) = \mathbb{E}_{s, a, r, s' \sim D}\big[y - Q(s, a, w_i)\big]^2 \qquad (bbb.14)

       Compared with the original algorithm, the only difference is an auxiliary critic Q_{w_2} that is updated in step with the original critic Q_{w_1}, with the minimum of the two used when computing the target y. One may still wonder whether two such similar critics, which differ only in their initial parameters and are afterwards updated in the same way, can effectively remove the bias introduced by the TD error.
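       The following is a minimal PyTorch sketch of how the clipped double-Q target of Eq. (bbb.13) and the shared loss of Eq. (bbb.14) can be computed for a sampled mini-batch. It is not the author's implementation (that appears in Section 4); the network stand-ins, batch tensors, and dimensions here are hypothetical and chosen only for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical stand-ins for the target actor, the twin critics, and a sampled batch
    state_dim, action_dim, gamma = 17, 6, 0.99
    actor_target = nn.Linear(state_dim, action_dim)
    q1_target, q2_target = nn.Linear(state_dim + action_dim, 1), nn.Linear(state_dim + action_dim, 1)
    q1_net, q2_net = nn.Linear(state_dim + action_dim, 1), nn.Linear(state_dim + action_dim, 1)
    state, next_state = torch.randn(8, state_dim), torch.randn(8, state_dim)
    action, reward, not_done = torch.randn(8, action_dim), torch.randn(8, 1), torch.ones(8, 1)

    with torch.no_grad():
        # A single target actor proposes the next action; both target critics score it,
        # and the element-wise minimum forms the clipped target (Eq. bbb.13)
        next_action = torch.tanh(actor_target(next_state))
        next_sa = torch.cat([next_state, next_action], dim=1)
        y = reward + not_done * gamma * torch.min(q1_target(next_sa), q2_target(next_sa))

    # Both critics are regressed toward the same target y (Eq. bbb.14)
    sa = torch.cat([state, action], dim=1)
    critic_loss = F.mse_loss(q1_net(sa), y) + F.mse_loss(q2_net(sa), y)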

    Addressing the error-accumulation problem

       With function approximation, the overestimation problem of the TD(0) algorithm is aggravated further: every update introduces some TD error δ(s, a):

    Q(s, a, w) = r + \gamma \mathbb{E}\big[Q(s', a', w)\big] - \delta(s, a) \qquad (bbb.15)
       After many iterations these errors accumulate:
    Q(S_t, A_t, w) = R_{t+1} + \gamma \mathbb{E}\big[Q(S_{t+1}, A_{t+1}, w)\big] - \delta_{t+1}
                   = R_{t+1} + \gamma \mathbb{E}\big[R_{t+2} + \gamma \mathbb{E}[Q(S_{t+2}, A_{t+2}, w)] - \delta_{t+2}\big] - \delta_{t+1}
                   = \cdots
                   = \mathbb{E}_{S_i \sim \rho^{\beta}, A_i \sim \mu}\Big[\sum_{i=t}^{T-1} \gamma^{i-t}\,(R_{i+1} - \delta_{i+1})\Big] \qquad (bbb.16)

       It follows that the variance of the estimate is proportional to the variance of future rewards and future TD errors. When the discount factor γ is large, each update can make the variance rise quickly, so TD3 usually uses a relatively small discount factor γ.

    Delayed policy updates

       TD3 updates its target networks in the same way as DDPG, using soft updates. Although soft updates are better for stability than hard updates, actor-critic algorithms can still fail, usually because the actor and critic updates interact: when the critic's value estimates are inaccurate, the actor pushes the policy in the wrong direction; a poor policy from the actor then further aggravates the critic's error accumulation, and the two effects feed each other in a vicious circle.
       To address this, TD3 delays the policy update: it reduces the actor's update frequency and, as far as possible, waits for the critic's training to converge before updating the actor. Delaying updates effectively reduces accumulated error and therefore variance, and also avoids unnecessary repeated updates, improving efficiency to some extent. In practice, TD3 updates the actor once for every d critic updates, as in the sketch below.
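       A minimal, runnable sketch of the delayed-update scheduling, under the assumption that the actor is updated once every policy_freq (= d) training steps; the three update functions are placeholders standing in for the real critic, actor, and target-network updates shown in Section 4.

    # Placeholders for the real updates (Section 4); only the scheduling logic matters here
    critic_update = lambda: print("critic update")
    actor_update = lambda: print("actor update")
    soft_update_targets = lambda: print("target soft update")

    policy_freq = 2                              # d: one actor update per d critic updates
    for total_it in range(1, 7):
        critic_update()                          # the critics are trained at every step
        if total_it % policy_freq == 0:          # the actor and the target networks
            actor_update()                       # are only updated every d-th step
            soft_update_targets()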

    Target policy smoothing

       The previous subsection reduced error accumulation by delaying the policy update; now consider the error itself. Its root cause is the bias produced by value-function approximation, and in machine learning a common way to reduce estimation bias is to regularize the parameter updates; the same idea can be applied in reinforcement learning.
       A natural intuition is that similar actions should have similar values: if the value is sufficiently smooth over a small region around the target action, errors can be reduced effectively. Concretely, TD3 adds clipped noise to the target action:

    \tilde{a} \leftarrow \mu(s', \theta') + \varepsilon, \quad \varepsilon \sim \operatorname{clip}\big(\mathcal{N}(0, \sigma), -c, c\big) \qquad (bbb.17)
       This noise is itself a form of regularization. The smoothing improves the algorithm's ability to generalize, alleviates overfitting, and reduces the interference that badly overestimated states would otherwise cause in policy learning.
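       A minimal PyTorch sketch of Eq. (bbb.17). The dimensions, the linear layer standing in for the target actor μ(·, θ'), and the hyperparameter values are hypothetical, chosen to mirror the policy_noise (σ), noise_clip (c), and max_action names used in the code of Section 4.

    import torch
    import torch.nn as nn

    policy_noise, noise_clip, max_action = 0.2, 0.5, 1.0
    actor_target = nn.Linear(17, 6)               # stand-in for the target actor μ(·, θ')
    next_state = torch.randn(4, 17)               # a batch of 4 next states

    with torch.no_grad():
        # ε ~ clip(N(0, σ), −c, c), added to the target action and clipped to the action range
        noise = (torch.randn(4, 6) * policy_noise).clamp(-noise_clip, noise_clip)
        smoothed_action = (torch.tanh(actor_target(next_state)) + noise).clamp(-max_action, max_action)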

    2. TD3 Algorithm Flow


      Algorithm bbb.2  TD3 (Fujimoto et al., 2018)


      Initialization:
         1. Initialize the prediction value networks Q_{w_1} and Q_{w_2} with parameters w_1 and w_2
         2. Initialize the target value networks Q_{w_1'} and Q_{w_2'} with parameters w_1' and w_2'
         3. Initialize the prediction policy network μ_θ and the target policy network μ_{θ'} with parameters θ and θ'
         4. Synchronize the parameters: w_1' ← w_1, w_2' ← w_2, θ' ← θ
         5. Replay buffer D with capacity N
         6. Total number of episodes M, discount factor γ, τ = 0.0001, mini-batch size n


         7. for e = 1 to M do:
         8.   Initialize the starting state S_0
         9.   repeat (for each time step t = 0, 1, 2, … of the episode):
         10.     Select an action with the current prediction policy network plus exploration noise:
              A_t = μ(S_t, θ) + ε_t, where ε_t ∼ N(0, σ)
         11.     Execute action A_t, observe reward R_{t+1} and next state S_{t+1}
         12.     Store the transition (S_t, A_t, R_{t+1}, S_{t+1}) in the replay buffer D
         13.     Sample a random mini-batch of n transitions (S_i, A_i, R_{i+1}, S_{i+1}) from D and compute:
              (1) the perturbed action ã_{i+1} ← μ(S_{i+1}, θ') + ε_i, where ε_i ∼ clip(N(0, σ̃), −c, c)
              (2) the target y_i = R_{i+1} + γ min_{j=1,2} Q(S_{i+1}, ã_{i+1}, w_j')
         14.     Update the value-network (critic) parameters w by mini-batch gradient descent on the loss:
              ∇_w L(w) ≈ (1/n) Σ_i (y_i − Q(S_i, A_i, w)) ∇_w Q(S_i, A_i, w)
         15.     if t mod d == 0 then
         16.       Update the policy-network (actor) parameters θ by mini-batch gradient ascent on the objective:
              ∇_θ Ĵ_β(θ) ≈ (1/n) Σ_i ∇_θ μ(S_i, θ) ∇_a Q(S_i, a, w)|_{a = μ(S_i, θ)}
         17.       Soft-update the target networks (see the sketch after this listing):
              w' ← τ w + (1 − τ) w',  θ' ← τ θ + (1 − τ) θ'
         18.   until t = T − 1
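       The soft-update step in line 17 of the listing can be written directly in PyTorch. The sketch below is illustrative only: the two linear layers stand in for a prediction network and its target, and τ is set to 0.005 to match the code in Section 4 (the listing above uses 0.0001).

    import torch
    import torch.nn as nn

    tau = 0.005                        # Polyak coefficient τ
    net = nn.Linear(4, 2)              # stand-in for a prediction network (actor or critic)
    target_net = nn.Linear(4, 2)       # stand-in for its target network

    for p, p_targ in zip(net.parameters(), target_net.parameters()):
        # w' ← τ·w + (1 − τ)·w', applied parameter by parameter
        p_targ.data.copy_(tau * p.data + (1 - tau) * p_targ.data)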

    3. Experimental Environment

    Experiments use the MuJoCo environments from the OpenAI Gym toolkit, on four continuous-control tasks: Ant, HalfCheetah, Walker2d, and Hopper.

    Each training run lasts 1,000,000 steps, divided into stages of 5,000 steps. At the end of each stage, the learned policy is evaluated by interacting with the environment for ten episodes and averaging the returns.
    The results are shown in the figure below.

    In the Ant and Walker2d tasks, TD3's clipped Double Q-Learning mechanism clearly alleviates the overestimation problem and reduces the harm that overestimated, poor states do to policy updates and to the rest of training. Its action-value estimates are therefore more accurate, it is less prone to getting stuck in local optima than DDPG, and the return the agent obtains from interacting with the environment is substantially higher. Overall, compared with DDPG, TD3 fluctuates less from one training stage to the next and is more stable.

     

     4. Code

    Part of the source code is shown below.

    import copy
    import os

    import gym
    import numpy as np
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


    class ReplayBuffer(object):
        """Fixed-size cyclic experience replay buffer."""
        def __init__(self, state_dim, action_dim, max_size=int(1e6)):
            self.max_size = max_size
            self.ptr = 0
            self.size = 0
            self.state = np.zeros((max_size, state_dim))
            self.action = np.zeros((max_size, action_dim))
            self.next_state = np.zeros((max_size, state_dim))
            self.reward = np.zeros((max_size, 1))
            self.not_done = np.zeros((max_size, 1))
            self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

        def add(self, state, action, next_state, reward, done):
            # Overwrite the oldest entry once the buffer is full
            self.state[self.ptr] = state
            self.action[self.ptr] = action
            self.next_state[self.ptr] = next_state
            self.reward[self.ptr] = reward
            self.not_done[self.ptr] = 1. - done
            self.ptr = (self.ptr + 1) % self.max_size
            self.size = min(self.size + 1, self.max_size)

        def sample(self, batch_size):
            # Uniformly sample a mini-batch of transitions
            ind = np.random.randint(0, self.size, size=batch_size)
            return (
                torch.FloatTensor(self.state[ind]).to(self.device),
                torch.FloatTensor(self.action[ind]).to(self.device),
                torch.FloatTensor(self.next_state[ind]).to(self.device),
                torch.FloatTensor(self.reward[ind]).to(self.device),
                torch.FloatTensor(self.not_done[ind]).to(self.device)
            )
    class Actor(nn.Module):
        """Deterministic policy μ(s, θ): maps a state to an action in [-max_action, max_action]."""
        def __init__(self, state_dim, action_dim, max_action):
            super(Actor, self).__init__()
            self.l1 = nn.Linear(state_dim, 256)
            self.l2 = nn.Linear(256, 256)
            self.l3 = nn.Linear(256, action_dim)
            self.max_action = max_action

        def forward(self, state):
            a = F.relu(self.l1(state))
            a = F.relu(self.l2(a))
            return self.max_action * torch.tanh(self.l3(a))


    class Critic(nn.Module):
        """Twin critics Q_{w1} and Q_{w2}, both taking (state, action) as input."""
        def __init__(self, state_dim, action_dim):
            super(Critic, self).__init__()
            # Q1 architecture
            self.l1 = nn.Linear(state_dim + action_dim, 256)
            self.l2 = nn.Linear(256, 256)
            self.l3 = nn.Linear(256, 1)
            # Q2 architecture
            self.l4 = nn.Linear(state_dim + action_dim, 256)
            self.l5 = nn.Linear(256, 256)
            self.l6 = nn.Linear(256, 1)

        def forward(self, state, action):
            sa = torch.cat([state, action], 1)
            q1 = F.relu(self.l1(sa))
            q1 = F.relu(self.l2(q1))
            q1 = self.l3(q1)
            q2 = F.relu(self.l4(sa))
            q2 = F.relu(self.l5(q2))
            q2 = self.l6(q2)
            return q1, q2

        def Q1(self, state, action):
            # Only Q1 is used for the actor (policy) update
            sa = torch.cat([state, action], 1)
            q1 = F.relu(self.l1(sa))
            q1 = F.relu(self.l2(q1))
            q1 = self.l3(q1)
            return q1


    # Quick sanity check: print the layers of an actor and a critic
    # (17 and 6 are the state/action dimensions of Walker2d)
    actor1 = Actor(17, 6, 1.0)
    for ch in actor1.children():
        print(ch)
    print("*********************")
    critic1 = Critic(17, 6)
    for ch in critic1.children():
        print(ch)
    class TD3(object):
        def __init__(
            self,
            state_dim,
            action_dim,
            max_action,
            discount=0.99,
            tau=0.005,
            policy_noise=0.2,
            noise_clip=0.5,
            policy_freq=2
        ):
            self.actor = Actor(state_dim, action_dim, max_action).to(device)
            self.actor_target = copy.deepcopy(self.actor)
            self.actor_optimizer = torch.optim.Adam(self.actor.parameters(), lr=3e-4)
            self.critic = Critic(state_dim, action_dim).to(device)
            self.critic_target = copy.deepcopy(self.critic)
            self.critic_optimizer = torch.optim.Adam(self.critic.parameters(), lr=3e-4)
            self.max_action = max_action
            self.discount = discount
            self.tau = tau
            self.policy_noise = policy_noise
            self.noise_clip = noise_clip
            self.policy_freq = policy_freq
            self.total_it = 0

        def select_action(self, state):
            state = torch.FloatTensor(state.reshape(1, -1)).to(device)
            return self.actor(state).cpu().data.numpy().flatten()

        def train(self, replay_buffer, batch_size=100):
            self.total_it += 1
            # Sample replay buffer
            state, action, next_state, reward, not_done = replay_buffer.sample(batch_size)
            with torch.no_grad():
                # Select action according to policy and add clipped noise (target policy smoothing)
                noise = (
                    torch.randn_like(action) * self.policy_noise
                ).clamp(-self.noise_clip, self.noise_clip)
                next_action = (
                    self.actor_target(next_state) + noise
                ).clamp(-self.max_action, self.max_action)
                # Compute the target Q value (clipped double Q-learning)
                target_Q1, target_Q2 = self.critic_target(next_state, next_action)
                target_Q = torch.min(target_Q1, target_Q2)
                target_Q = reward + not_done * self.discount * target_Q
            # Get current Q estimates
            current_Q1, current_Q2 = self.critic(state, action)
            # Compute critic loss
            critic_loss = F.mse_loss(current_Q1, target_Q) + F.mse_loss(current_Q2, target_Q)
            # Optimize the critic
            self.critic_optimizer.zero_grad()
            critic_loss.backward()
            self.critic_optimizer.step()
            # Delayed policy updates
            if self.total_it % self.policy_freq == 0:
                # Compute actor loss
                actor_loss = -self.critic.Q1(state, self.actor(state)).mean()
                # Optimize the actor
                self.actor_optimizer.zero_grad()
                actor_loss.backward()
                self.actor_optimizer.step()
                # Update the frozen target models (soft update)
                for param, target_param in zip(self.critic.parameters(), self.critic_target.parameters()):
                    target_param.data.copy_(self.tau * param.data + (1 - self.tau) * target_param.data)
                for param, target_param in zip(self.actor.parameters(), self.actor_target.parameters()):
                    target_param.data.copy_(self.tau * param.data + (1 - self.tau) * target_param.data)

        def save(self, filename):
            torch.save(self.critic.state_dict(), filename + "_critic")
            torch.save(self.critic_optimizer.state_dict(), filename + "_critic_optimizer")
            torch.save(self.actor.state_dict(), filename + "_actor")
            torch.save(self.actor_optimizer.state_dict(), filename + "_actor_optimizer")

        def load(self, filename):
            self.critic.load_state_dict(torch.load(filename + "_critic"))
            self.critic_optimizer.load_state_dict(torch.load(filename + "_critic_optimizer"))
            self.critic_target = copy.deepcopy(self.critic)
            self.actor.load_state_dict(torch.load(filename + "_actor"))
            self.actor_optimizer.load_state_dict(torch.load(filename + "_actor_optimizer"))
            self.actor_target = copy.deepcopy(self.actor)
    # Runs the policy for X episodes and returns the average reward.
    # A fixed seed is used for the eval environment.
    def eval_policy(policy, env_name, seed, eval_episodes=10):
        eval_env = gym.make(env_name)
        eval_env.seed(seed + 100)
        avg_reward = 0.
        for _ in range(eval_episodes):
            state, done = eval_env.reset(), False
            while not done:
                action = policy.select_action(np.array(state))
                state, reward, done, _ = eval_env.step(action)
                avg_reward += reward
        avg_reward /= eval_episodes
        print("---------------------------------------")
        print(f"Evaluation over {eval_episodes} episodes: {avg_reward:.3f}")
        print("---------------------------------------")
        return avg_reward


    policy = "TD3"
    env_name = "Walker2d-v4"    # OpenAI Gym environment name
    seed = 0                    # Sets Gym, PyTorch and NumPy seeds
    start_timesteps = 25e3      # Time steps during which the initial random policy is used
    eval_freq = 5e3             # How often (time steps) we evaluate
    max_timesteps = 1e6         # Max time steps to run the environment
    expl_noise = 0.1            # Std of Gaussian exploration noise
    batch_size = 256            # Batch size for both actor and critic
    discount = 0.99             # Discount factor
    tau = 0.005                 # Target network update rate
    policy_noise = 0.2          # Noise added to target policy during critic update
    noise_clip = 0.5            # Range to clip target policy noise
    policy_freq = 2             # Frequency of delayed policy updates
    save_model = True           # Save model and optimizer parameters
    load_model = ""             # Model load file name; "" doesn't load, "default" uses file_name

    file_name = f"{policy}_{env_name}_{seed}"
    print("---------------------------------------")
    print(f"Policy: {policy}, Env: {env_name}, Seed: {seed}")
    print("---------------------------------------")
    if not os.path.exists("./results"):
        os.makedirs("./results")
    if save_model and not os.path.exists("./models"):
        os.makedirs("./models")

    env = gym.make(env_name)
    # Set seeds
    env.seed(seed)
    torch.manual_seed(seed)
    np.random.seed(seed)

    state_dim = env.observation_space.shape[0]
    action_dim = env.action_space.shape[0]
    max_action = float(env.action_space.high[0])

    kwargs = {
        "state_dim": state_dim,
        "action_dim": action_dim,
        "max_action": max_action,
        "discount": discount,
        "tau": tau,
        "policy_noise": policy_noise * max_action,
        "noise_clip": noise_clip * max_action,
        "policy_freq": policy_freq
    }
    policy = TD3(**kwargs)
    if load_model != "":
        policy_file = file_name if load_model == "default" else load_model
        policy.load(f"./models/{policy_file}")

    replay_buffer = ReplayBuffer(state_dim, action_dim)
    # Evaluate the untrained policy
    evaluations = [eval_policy(policy, env_name, seed)]

    state, done = env.reset(), False
    episode_reward = 0
    episode_timesteps = 0
    episode_num = 0

    for t in range(int(max_timesteps)):
        episode_timesteps += 1
        # Select action randomly or according to policy
        if t < start_timesteps:
            action = env.action_space.sample()
        else:
            action = (
                policy.select_action(np.array(state))
                + np.random.normal(0, max_action * expl_noise, size=action_dim)
            ).clip(-max_action, max_action)
        # Perform the action
        next_state, reward, done, _ = env.step(action)
        # Treat time-limit terminations as non-terminal when storing transitions
        done_bool = float(done) if episode_timesteps < env._max_episode_steps else 0
        # Store data in replay buffer
        replay_buffer.add(state, action, next_state, reward, done_bool)
        state = next_state
        episode_reward += reward
        # Train agent after collecting sufficient data
        if t >= start_timesteps:
            policy.train(replay_buffer, batch_size)
        if done:
            print(f"Total T: {t + 1} Episode Num: {episode_num + 1} "
                  f"Episode T: {episode_timesteps} Reward: {episode_reward:.3f}")
            # Reset environment for the next episode
            state, done = env.reset(), False
            episode_reward = 0
            episode_timesteps = 0
            episode_num += 1
        # Evaluate the learned policy periodically and save the results
        if (t + 1) % eval_freq == 0:
            evaluations.append(eval_policy(policy, env_name, seed))
            np.save(f"./results/{file_name}", evaluations)
            if save_model:
                policy.save(f"./models/{file_name}")

    Creating this took real effort; if you found it helpful, please like, follow, and bookmark~~~

  • Original article: https://blog.csdn.net/jiebaoshayebuhui/article/details/128069864