• Playing Chrome Dino with Deep Reinforcement Learning


    Table of Contents

    Live Demo

    Code Implementation


    Live Demo

    Deep reinforcement learning playing the Chrome Dino run game

    Code Implementation

    src/env.py — the pygame re-implementation of the game, wrapped as an RL environment:

    import os
    import cv2
    from pygame import RLEACCEL
    from pygame.image import load
    from pygame.sprite import Sprite, Group, collide_mask
    from pygame import Rect, init, time, display, mixer, transform, Surface
    from pygame.surfarray import array3d
    import torch
    from random import randrange, choice
    import numpy as np

    mixer.pre_init(44100, -16, 2, 2048)
    init()

    scr_size = (width, height) = (600, 150)
    FPS = 60
    gravity = 0.6
    black = (0, 0, 0)
    white = (255, 255, 255)
    background_col = (235, 235, 235)
    high_score = 0

    screen = display.set_mode(scr_size)
    clock = time.Clock()
    display.set_caption("T-Rex Rush")


    def load_image(name, sizex=-1, sizey=-1, colorkey=None):
        fullname = os.path.join("assets/sprites", name)
        image = load(fullname)
        image = image.convert()
        if colorkey is not None:
            if colorkey == -1:  # use the top-left pixel as the transparent color
                colorkey = image.get_at((0, 0))
            image.set_colorkey(colorkey, RLEACCEL)
        if sizex != -1 or sizey != -1:
            image = transform.scale(image, (sizex, sizey))
        return (image, image.get_rect())


    def load_sprite_sheet(sheetname, nx, ny, scalex=-1, scaley=-1, colorkey=None):
        fullname = os.path.join("assets/sprites", sheetname)
        sheet = load(fullname)
        sheet = sheet.convert()
        sheet_rect = sheet.get_rect()
        sprites = []
        sizey = sheet_rect.height / ny
        if isinstance(nx, int):
            sizex = sheet_rect.width / nx
            for i in range(0, ny):
                for j in range(0, nx):
                    rect = Rect((j * sizex, i * sizey, sizex, sizey))
                    image = Surface(rect.size)
                    image = image.convert()
                    image.blit(sheet, (0, 0), rect)
                    if colorkey is not None:
                        if colorkey == -1:
                            colorkey = image.get_at((0, 0))
                        image.set_colorkey(colorkey, RLEACCEL)
                    if scalex != -1 or scaley != -1:
                        image = transform.scale(image, (scalex, scaley))
                    sprites.append(image)
        else:  # nx is a list: one column count (and scale) per sprite group on the sheet
            sizex_ls = [sheet_rect.width / i_nx for i_nx in nx]
            for i in range(0, ny):
                for i_nx, sizex, i_scalex in zip(nx, sizex_ls, scalex):
                    for j in range(0, i_nx):
                        rect = Rect((j * sizex, i * sizey, sizex, sizey))
                        image = Surface(rect.size)
                        image = image.convert()
                        image.blit(sheet, (0, 0), rect)
                        if colorkey is not None:
                            if colorkey == -1:
                                colorkey = image.get_at((0, 0))
                            image.set_colorkey(colorkey, RLEACCEL)
                        if i_scalex != -1 or scaley != -1:
                            image = transform.scale(image, (i_scalex, scaley))
                        sprites.append(image)
        sprite_rect = sprites[0].get_rect()
        return sprites, sprite_rect


    def extractDigits(number):
        """Return the digits of `number`, zero-padded on the left to 5 entries."""
        if number > -1:
            digits = []
            while number // 10 != 0:
                digits.append(number % 10)
                number = number // 10
            digits.append(number % 10)
            for _ in range(len(digits), 5):
                digits.append(0)
            digits.reverse()
            return digits


    def pre_processing(image, w=84, h=84):
        image = image[:300, :, :]  # keep only the left part of the frame (drops the scoreboard)
        image = cv2.cvtColor(cv2.resize(image, (w, h)), cv2.COLOR_BGR2GRAY)
        _, image = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)
        return image[None, :, :].astype(np.float32)


    class Dino:
        def __init__(self, sizex=-1, sizey=-1):
            self.images, self.rect = load_sprite_sheet("dino.png", 5, 1, sizex, sizey, -1)
            self.images1, self.rect1 = load_sprite_sheet("dino_ducking.png", 2, 1, 59, sizey, -1)
            self.rect.bottom = int(0.98 * height)
            self.rect.left = width / 15
            self.image = self.images[0]
            self.index = 0
            self.counter = 0
            self.score = 0
            self.isJumping = False
            self.isDead = False
            self.isDucking = False
            self.isBlinking = False
            self.movement = [0, 0]
            self.jumpSpeed = 11.5
            self.stand_pos_width = self.rect.width
            self.duck_pos_width = self.rect1.width

        def draw(self):
            screen.blit(self.image, self.rect)

        def checkbounds(self):
            if self.rect.bottom > int(0.98 * height):
                self.rect.bottom = int(0.98 * height)
                self.isJumping = False

        def update(self):
            if self.isJumping:
                self.movement[1] = self.movement[1] + gravity
            if self.isJumping:
                self.index = 0
            elif self.isBlinking:
                if self.index == 0:
                    if self.counter % 400 == 399:
                        self.index = (self.index + 1) % 2
                else:
                    if self.counter % 20 == 19:
                        self.index = (self.index + 1) % 2
            elif self.isDucking:
                if self.counter % 5 == 0:
                    self.index = (self.index + 1) % 2
            else:
                if self.counter % 5 == 0:
                    self.index = (self.index + 1) % 2 + 2
            if self.isDead:
                self.index = 4
            if not self.isDucking:
                self.image = self.images[self.index]
                self.rect.width = self.stand_pos_width
            else:
                self.image = self.images1[self.index % 2]
                self.rect.width = self.duck_pos_width
            self.rect = self.rect.move(self.movement)
            self.checkbounds()
            if not self.isDead and self.counter % 7 == 6 and not self.isBlinking:
                self.score += 1
            self.counter = self.counter + 1


    class Cactus(Sprite):
        def __init__(self, speed=5, sizex=-1, sizey=-1):
            Sprite.__init__(self, self.containers)
            self.images, self.rect = load_sprite_sheet("cacti-small.png", [2, 3, 6], 1, sizex, sizey, -1)
            self.rect.bottom = int(0.98 * height)
            self.rect.left = width + self.rect.width
            self.image = self.images[randrange(0, 11)]
            self.movement = [-1 * speed, 0]

        def draw(self):
            screen.blit(self.image, self.rect)

        def update(self):
            self.rect = self.rect.move(self.movement)
            if self.rect.right < 0:
                self.kill()


    class Ptera(Sprite):
        def __init__(self, speed=5, sizex=-1, sizey=-1):
            Sprite.__init__(self, self.containers)
            self.images, self.rect = load_sprite_sheet("ptera.png", 2, 1, sizex, sizey, -1)
            self.ptera_height = [height * 0.82, height * 0.75, height * 0.60, height * 0.48]
            self.rect.centery = self.ptera_height[randrange(0, 4)]
            self.rect.left = width + self.rect.width
            self.image = self.images[0]
            self.movement = [-1 * speed, 0]
            self.index = 0
            self.counter = 0

        def draw(self):
            screen.blit(self.image, self.rect)

        def update(self):
            if self.counter % 10 == 0:
                self.index = (self.index + 1) % 2
            self.image = self.images[self.index]
            self.rect = self.rect.move(self.movement)
            self.counter = self.counter + 1
            if self.rect.right < 0:
                self.kill()


    class Ground:
        def __init__(self, speed=-5):
            self.image, self.rect = load_image("ground.png", -1, -1, -1)
            self.image1, self.rect1 = load_image("ground.png", -1, -1, -1)
            self.rect.bottom = height
            self.rect1.bottom = height
            self.rect1.left = self.rect.right
            self.speed = speed

        def draw(self):
            screen.blit(self.image, self.rect)
            screen.blit(self.image1, self.rect1)

        def update(self):
            # Two copies of the ground scroll left; whichever leaves the screen wraps around
            self.rect.left += self.speed
            self.rect1.left += self.speed
            if self.rect.right < 0:
                self.rect.left = self.rect1.right
            if self.rect1.right < 0:
                self.rect1.left = self.rect.right


    class Cloud(Sprite):
        def __init__(self, x, y):
            Sprite.__init__(self, self.containers)
            self.image, self.rect = load_image("cloud.png", int(90 * 30 / 42), 30, -1)
            self.speed = 1
            self.rect.left = x
            self.rect.top = y
            self.movement = [-1 * self.speed, 0]

        def draw(self):
            screen.blit(self.image, self.rect)

        def update(self):
            self.rect = self.rect.move(self.movement)
            if self.rect.right < 0:
                self.kill()


    class Scoreboard:
        def __init__(self, x=-1, y=-1):
            self.score = 0
            self.tempimages, self.temprect = load_sprite_sheet("numbers.png", 12, 1, 11, int(11 * 6 / 5), -1)
            self.image = Surface((55, int(11 * 6 / 5)))
            self.rect = self.image.get_rect()
            if x == -1:
                self.rect.left = width * 0.89
            else:
                self.rect.left = x
            if y == -1:
                self.rect.top = height * 0.1
            else:
                self.rect.top = y

        def draw(self):
            screen.blit(self.image, self.rect)

        def update(self, score):
            score_digits = extractDigits(score)
            self.image.fill(background_col)
            if len(score_digits) == 6:
                score_digits = score_digits[1:]
            for s in score_digits:
                self.image.blit(self.tempimages[s], self.temprect)
                self.temprect.left += self.temprect.width
            self.temprect.left = 0


    class ChromeDino(object):
        def __init__(self):
            self.gamespeed = 5
            self.gameOver = False
            self.gameQuit = False
            self.playerDino = Dino(44, 47)
            self.new_ground = Ground(-1 * self.gamespeed)
            self.scb = Scoreboard()
            self.highsc = Scoreboard(width * 0.78)
            self.counter = 0
            self.cacti = Group()
            self.pteras = Group()
            self.clouds = Group()
            self.last_obstacle = Group()
            Cactus.containers = self.cacti
            Ptera.containers = self.pteras
            Cloud.containers = self.clouds
            self.retbutton_image, self.retbutton_rect = load_image("replay_button.png", 35, 31, -1)
            self.gameover_image, self.gameover_rect = load_image("game_over.png", 190, 11, -1)
            self.temp_images, self.temp_rect = load_sprite_sheet("numbers.png", 12, 1, 11, int(11 * 6 / 5), -1)
            self.HI_image = Surface((22, int(11 * 6 / 5)))
            self.HI_rect = self.HI_image.get_rect()
            self.HI_image.fill(background_col)
            self.HI_image.blit(self.temp_images[10], self.temp_rect)
            self.temp_rect.left += self.temp_rect.width
            self.HI_image.blit(self.temp_images[11], self.temp_rect)
            self.HI_rect.top = height * 0.1
            self.HI_rect.left = width * 0.73

        def step(self, action, record=False):  # 0: do nothing. 1: jump. 2: duck
            reward = 0.1  # small living reward every frame
            if action == 0:
                reward += 0.01
                self.playerDino.isDucking = False
            elif action == 1:
                self.playerDino.isDucking = False
                if self.playerDino.rect.bottom == int(0.98 * height):
                    self.playerDino.isJumping = True
                    self.playerDino.movement[1] = -1 * self.playerDino.jumpSpeed
            elif action == 2:
                if not (self.playerDino.isJumping and self.playerDino.isDead) and self.playerDino.rect.bottom == int(
                        0.98 * height):
                    self.playerDino.isDucking = True
            for c in self.cacti:
                c.movement[0] = -1 * self.gamespeed
                if collide_mask(self.playerDino, c):
                    self.playerDino.isDead = True
                    reward = -1
                    break
                else:
                    # bonus when the dino has just cleared this cactus
                    if c.rect.right < self.playerDino.rect.left < c.rect.right + self.gamespeed + 1:
                        reward = 1
                        break
            for p in self.pteras:
                p.movement[0] = -1 * self.gamespeed
                if collide_mask(self.playerDino, p):
                    self.playerDino.isDead = True
                    reward = -1
                    break
                else:
                    if p.rect.right < self.playerDino.rect.left < p.rect.right + self.gamespeed + 1:
                        reward = 1
                        break
            if len(self.cacti) < 2:
                if len(self.cacti) == 0 and len(self.pteras) == 0:
                    self.last_obstacle.empty()
                    self.last_obstacle.add(Cactus(self.gamespeed, [60, 40, 20], choice([40, 45, 50])))
                else:
                    for l in self.last_obstacle:
                        if l.rect.right < width * 0.7 and randrange(0, 50) == 10:
                            self.last_obstacle.empty()
                            self.last_obstacle.add(Cactus(self.gamespeed, [60, 40, 20], choice([40, 45, 50])))
            # if len(self.pteras) == 0 and randrange(0, 200) == 10 and self.counter > 500:
            if len(self.pteras) == 0 and len(self.cacti) < 2 and randrange(0, 50) == 10 and self.counter > 500:
                for l in self.last_obstacle:
                    if l.rect.right < width * 0.8:
                        self.last_obstacle.empty()
                        self.last_obstacle.add(Ptera(self.gamespeed, 46, 40))
            if len(self.clouds) < 5 and randrange(0, 300) == 10:
                Cloud(width, randrange(height // 5, height // 2))
            self.playerDino.update()
            self.cacti.update()
            self.pteras.update()
            self.clouds.update()
            self.new_ground.update()
            self.scb.update(self.playerDino.score)
            state = display.get_surface()
            screen.fill(background_col)
            self.new_ground.draw()
            self.clouds.draw(screen)
            self.scb.draw()
            self.cacti.draw(screen)
            self.pteras.draw(screen)
            self.playerDino.draw()
            display.update()
            clock.tick(FPS)
            if self.playerDino.isDead:
                self.gameOver = True
            self.counter = self.counter + 1
            if self.gameOver:
                self.__init__()  # reset the episode in place
            state = array3d(state)
            if record:
                return torch.from_numpy(pre_processing(state)), np.transpose(
                    cv2.cvtColor(state, cv2.COLOR_RGB2BGR), (1, 0, 2)), reward, not (reward > 0)
            else:
                return torch.from_numpy(pre_processing(state)), reward, not (reward > 0)
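The `extractDigits` helper above uses only integer arithmetic, so it can be sanity-checked in isolation, without pygame installed. A standalone copy:

```python
def extract_digits(number):
    """Return the digits of `number` as a list, zero-padded on the left to 5 entries."""
    if number > -1:
        digits = []
        while number // 10 != 0:       # peel off digits from least to most significant
            digits.append(number % 10)
            number = number // 10
        digits.append(number % 10)     # the final (most significant) digit
        for _ in range(len(digits), 5):
            digits.append(0)           # pad to 5 digits
        digits.reverse()
        return digits

print(extract_digits(123))  # → [0, 0, 1, 2, 3], i.e. "00123" on the scoreboard
print(extract_digits(0))    # → [0, 0, 0, 0, 0]
```

Note the integer-division condition `number // 10 != 0`; with float division (as in some copies of this code) a number like 100000 would yield a 7-digit list that the scoreboard cannot render.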
    src/model.py — the Q-network:

    import torch.nn as nn


    class DeepQNetwork(nn.Module):
        def __init__(self):
            super(DeepQNetwork, self).__init__()
            # Input: 4 stacked 84x84 grayscale frames
            self.conv1 = nn.Sequential(nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(inplace=True))
            self.conv2 = nn.Sequential(nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(inplace=True))
            self.conv3 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(inplace=True))
            self.fc1 = nn.Sequential(nn.Linear(7 * 7 * 64, 512), nn.ReLU(inplace=True))
            self.fc2 = nn.Linear(512, 3)  # one Q-value per action: idle, jump, duck
            self._initialize_weights()

        def _initialize_weights(self):
            for m in self.modules():
                if isinstance(m, (nn.Conv2d, nn.Linear)):
                    nn.init.uniform_(m.weight, -0.01, 0.01)
                    nn.init.constant_(m.bias, 0)

        def forward(self, input):
            output = self.conv1(input)
            output = self.conv2(output)
            output = self.conv3(output)
            output = output.view(output.size(0), -1)  # flatten to (batch, 7 * 7 * 64)
            output = self.fc1(output)
            output = self.fc2(output)
            return output
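The `7 * 7 * 64` input size of `fc1` follows from the size formula for an unpadded convolution, `out = (in - kernel) // stride + 1`, applied three times to the 84x84 input. A quick check of that arithmetic:

```python
def conv_out(size, kernel, stride):
    # Spatial output size of an unpadded (valid) convolution
    return (size - kernel) // stride + 1

s = 84
s = conv_out(s, 8, 4)  # after conv1: 20
s = conv_out(s, 4, 2)  # after conv2: 9
s = conv_out(s, 3, 1)  # after conv3: 7
print(s * s * 64)      # → 3136, matching nn.Linear(7 * 7 * 64, 512)
```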
    The evaluation script — runs the trained agent and records a video:

    import argparse
    import torch
    from src.model import DeepQNetwork
    from src.env import ChromeDino
    import cv2


    def get_args():
        parser = argparse.ArgumentParser(
            """Implementation of Deep Q Network to play Chrome Dino""")
        parser.add_argument("--saved_path", type=str, default="trained_models")
        parser.add_argument("--fps", type=int, default=60, help="frames per second")
        parser.add_argument("--output", type=str, default="output/chrome_dino.mp4", help="the path to output video")
        args = parser.parse_args()
        return args


    def q_test(opt):
        if torch.cuda.is_available():
            torch.cuda.manual_seed(123)
        else:
            torch.manual_seed(123)
        model = DeepQNetwork()
        checkpoint_path = "{}/chrome_dino.pth".format(opt.saved_path)
        checkpoint = torch.load(checkpoint_path)
        model.load_state_dict(checkpoint["model_state_dict"])
        model.eval()
        env = ChromeDino()
        state, raw_state, _, _ = env.step(0, True)
        # Bootstrap the 4-frame stack by repeating the first frame
        state = torch.cat(tuple(state for _ in range(4)))[None, :, :, :]
        if torch.cuda.is_available():
            model.cuda()
            state = state.cuda()
        out = cv2.VideoWriter(opt.output, cv2.VideoWriter_fourcc(*"MJPG"), opt.fps, (600, 150))
        done = False
        while not done:
            prediction = model(state)[0]
            action = torch.argmax(prediction).item()  # greedy action, no exploration
            next_state, raw_next_state, reward, done = env.step(action, True)
            out.write(raw_next_state)
            if torch.cuda.is_available():
                next_state = next_state.cuda()
            # Slide the window: drop the oldest frame, append the newest
            next_state = torch.cat((state[0, 1:, :, :], next_state))[None, :, :, :]
            state = next_state


    if __name__ == "__main__":
        opt = get_args()
        q_test(opt)
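Both the evaluation and training loops maintain a sliding stack of the four most recent pre-processed frames, via `torch.cat((state[0, 1:, :, :], next_state))`. The same bookkeeping can be sketched with a plain `deque`, using integers in place of frame tensors:

```python
from collections import deque

first_frame = 0                                    # stand-in for the first pre-processed frame
stack = deque([first_frame] * 4, maxlen=4)         # bootstrap: the first frame repeated 4 times
for frame in [1, 2, 3, 4, 5]:                      # hypothetical stream of new frames
    stack.append(frame)                            # maxlen=4 evicts the oldest automatically
print(list(stack))                                 # → [2, 3, 4, 5]: only the 4 newest remain
```

This is why the network's first convolution takes 4 input channels: each "state" is a short history of frames, which lets a feed-forward network infer velocities it could not recover from a single frame.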
    The training script:

    import argparse
    import os
    from random import random, randint, sample
    import pickle
    import numpy as np
    import torch
    import torch.nn as nn
    from src.model import DeepQNetwork
    from src.env import ChromeDino


    def get_args():
        parser = argparse.ArgumentParser(
            """Implementation of Deep Q Network to play Chrome Dino""")
        parser.add_argument("--batch_size", type=int, default=64, help="The number of images per batch")
        parser.add_argument("--optimizer", type=str, choices=["sgd", "adam"], default="adam")
        parser.add_argument("--lr", type=float, default=1e-4)
        parser.add_argument("--gamma", type=float, default=0.99)
        parser.add_argument("--initial_epsilon", type=float, default=0.1)
        parser.add_argument("--final_epsilon", type=float, default=1e-4)
        parser.add_argument("--num_decay_iters", type=float, default=2000000)
        parser.add_argument("--num_iters", type=int, default=2000000)
        parser.add_argument("--replay_memory_size", type=int, default=50000,
                            help="Maximum number of transitions kept in the replay buffer")
        parser.add_argument("--saved_folder", type=str, default="trained_models")
        args = parser.parse_args()
        return args


    def train(opt):
        if torch.cuda.is_available():
            torch.cuda.manual_seed(123)
        else:
            torch.manual_seed(123)
        model = DeepQNetwork()
        if torch.cuda.is_available():
            model.cuda()
        optimizer = torch.optim.Adam(model.parameters(), lr=opt.lr)
        if not os.path.isdir(opt.saved_folder):
            os.makedirs(opt.saved_folder)
        checkpoint_path = os.path.join(opt.saved_folder, "chrome_dino.pth")
        memory_path = os.path.join(opt.saved_folder, "replay_memory.pkl")
        # Resume from a previous checkpoint if one exists
        if os.path.isfile(checkpoint_path):
            checkpoint = torch.load(checkpoint_path)
            iter = checkpoint["iter"] + 1
            model.load_state_dict(checkpoint["model_state_dict"])
            optimizer.load_state_dict(checkpoint["optimizer"])
            print("Load trained model from iteration {}".format(iter))
        else:
            iter = 0
        if os.path.isfile(memory_path):
            with open(memory_path, "rb") as f:
                replay_memory = pickle.load(f)
            print("Load replay memory")
        else:
            replay_memory = []
        criterion = nn.MSELoss()
        env = ChromeDino()
        state, _, _ = env.step(0)
        state = torch.cat(tuple(state for _ in range(4)))[None, :, :, :]
        while iter < opt.num_iters:
            if torch.cuda.is_available():
                prediction = model(state.cuda())[0]
            else:
                prediction = model(state)[0]
            # Exploration or exploitation: epsilon decays linearly, then stays at the floor
            epsilon = opt.final_epsilon + (
                max(opt.num_decay_iters - iter, 0) * (opt.initial_epsilon - opt.final_epsilon) / opt.num_decay_iters)
            u = random()
            random_action = u <= epsilon
            if random_action:
                action = randint(0, 2)
            else:
                action = torch.argmax(prediction).item()
            next_state, reward, done = env.step(action)
            next_state = torch.cat((state[0, 1:, :, :], next_state))[None, :, :, :]
            replay_memory.append([state, action, reward, next_state, done])
            if len(replay_memory) > opt.replay_memory_size:
                del replay_memory[0]
            batch = sample(replay_memory, min(len(replay_memory), opt.batch_size))
            state_batch, action_batch, reward_batch, next_state_batch, done_batch = zip(*batch)
            state_batch = torch.cat(tuple(state for state in state_batch))
            # One-hot encode the chosen actions
            action_batch = torch.from_numpy(
                np.array([[1, 0, 0] if action == 0 else [0, 1, 0] if action == 1 else [0, 0, 1] for action in
                          action_batch], dtype=np.float32))
            reward_batch = torch.from_numpy(np.array(reward_batch, dtype=np.float32)[:, None])
            next_state_batch = torch.cat(tuple(state for state in next_state_batch))
            if torch.cuda.is_available():
                state_batch = state_batch.cuda()
                action_batch = action_batch.cuda()
                reward_batch = reward_batch.cuda()
                next_state_batch = next_state_batch.cuda()
            current_prediction_batch = model(state_batch)
            next_prediction_batch = model(next_state_batch)
            # Bellman targets: r for terminal transitions, r + gamma * max_a' Q(s', a') otherwise
            y_batch = torch.cat(
                tuple(reward if done else reward + opt.gamma * torch.max(prediction) for reward, done, prediction in
                      zip(reward_batch, done_batch, next_prediction_batch)))
            q_value = torch.sum(current_prediction_batch * action_batch, dim=1)
            optimizer.zero_grad()
            loss = criterion(q_value, y_batch)
            loss.backward()
            optimizer.step()
            state = next_state
            iter += 1
            print("Iteration: {}/{}, Loss: {:.5f}, Epsilon {:.5f}, Reward: {}".format(
                iter + 1, opt.num_iters, loss, epsilon, reward))
            if (iter + 1) % 50000 == 0:
                checkpoint = {"iter": iter,
                              "model_state_dict": model.state_dict(),
                              "optimizer": optimizer.state_dict()}
                torch.save(checkpoint, checkpoint_path)
                with open(memory_path, "wb") as f:
                    pickle.dump(replay_memory, f, protocol=pickle.HIGHEST_PROTOCOL)


    if __name__ == "__main__":
        opt = get_args()
        train(opt)
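The epsilon schedule inside the training loop anneals linearly from `initial_epsilon` down to `final_epsilon` over `num_decay_iters` steps, then stays at the floor. Isolated, with the script's default values:

```python
def epsilon_at(it, initial=0.1, final=1e-4, num_decay=2000000):
    # Linear annealing; max(..., 0) clamps epsilon at `final` once decay is done
    return final + max(num_decay - it, 0) * (initial - final) / num_decay

print(epsilon_at(0))        # ≈ 0.1   (10% random actions at the start)
print(epsilon_at(1000000))  # ≈ 0.05  (halfway through the decay)
print(epsilon_at(3000000))  # = 1e-4  (past num_decay_iters, stays at the floor)
```

Starting at only 10% exploration (rather than the 1.0 used in the original DQN paper) is a pragmatic choice here: random jumps are so often fatal in this game that a higher initial epsilon would rarely let episodes progress.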

  • Original post: https://blog.csdn.net/timberman666/article/details/132630661