The model and the data must be on the same device for the forward pass to run correctly:
model = Model()                      # Model and dataloader are assumed to be defined elsewhere
inputs = next(iter(dataloader))      # take one batch from the DataLoader (a DataLoader is iterated, not called)
output = model(inputs)               # only works if model and inputs are on the same device
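As a self-contained sketch of this requirement (the toy nn.Linear model, the random input, and the device-selection line are illustrative assumptions, not part of the original snippet):
import torch
import torch.nn as nn

device = 'cuda' if torch.cuda.is_available() else 'cpu'  # use the GPU only when one exists
model = nn.Linear(10, 2).to(device)          # the model's parameters now live on `device`
inputs = torch.randn(4, 10, device=device)   # create the batch directly on the same device
output = model(inputs)                       # succeeds because model and inputs share a device
print(output.device)                         # e.g. cuda:0 or cpu
# if inputs were left on the CPU while the model sits on the GPU, the forward pass
# would instead fail with a device-mismatch RuntimeError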
Define the device yourself; it can be either the CPU or the GPU. Note that PyTorch names the GPU device 'cuda', not 'gpu':
device = 'cuda'
model = model.to(device)
data = data.to(device)
# or move everything back to the CPU:
device = 'cpu'
model = model.to(device)
data = data.to(device)
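Hard-coding the string works, but a common pattern (an assumption of this write-up, reusing the model and data objects from the snippet above) is to choose the device once based on availability, so the same script runs on machines with and without a GPU:
import torch

# fall back to the CPU automatically when CUDA is not available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
data = data.to(device)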
Equivalently, the shortcut methods .cuda() and .cpu() move objects directly, without a device variable:
model = model.cuda()   # move the model's parameters to the default GPU
data = data.cuda()     # move the tensor to the GPU
model = model.cpu()    # move the model back to the CPU
data = data.cpu()      # move the tensor back to the CPU
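One caveat: .cuda() raises an error on a machine without a usable GPU, so portable code usually guards the call (a minimal sketch, assuming the same model and data as above):
import torch

if torch.cuda.is_available():   # move to the GPU only when one is actually present
    model = model.cuda()
    data = data.cuda()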
print('Is model on gpu: ', next(model.parameters()).is_cuda)
If the output is True, the model is on the GPU; if it is False, the model is on the CPU.
print('data device: ', data.device)
If the output is a CUDA device such as cuda:0, the data is on the GPU; if it is cpu, the data is on the CPU.
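The two checks can also be combined into a single assertion that fails loudly when the devices diverge (a sketch, assuming the model and data from above; the message text is illustrative):
model_device = next(model.parameters()).device   # device of the model's parameters
assert model_device == data.device, (
    f'device mismatch: model on {model_device}, data on {data.device}')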