Files saved in the instance space are not persistent: when the runtime is disconnected, all resources in the instance space are released. Model checkpoints, training logs, and other important files should therefore be saved to Google Drive rather than to the local instance space.
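For example, once Google Drive is mounted (see the commands below), persisting a file is just a copy into the mounted folder. A minimal sketch; model.ckpt and the checkpoints folder are hypothetical names:

```
import os, shutil

# Hypothetical paths: replace with your own checkpoint file and Drive folder
drive_dir = '/content/drive/MyDrive/checkpoints'
os.makedirs(drive_dir, exist_ok=True)
shutil.copy('model.ckpt', drive_dir)  # the copy survives after the runtime is recycled
```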
The number of sessions a user can connect to is limited, so once the limit is reached you must disconnect an existing session before opening a new one. Free users can run only 1 session, while Pro users can run several. Different users can each open a session on the same notebook, but only one session may execute code cells at a time. Once a cell starts executing, any other user's session connected to that notebook shows a "busy" state and must wait for the cell to finish before running other cells. Note: dropping and reconnecting, switching networks, or refreshing the page can also put the notebook into the "busy" state.

In a code cell, prefixing a command with "!" runs it as a UNIX shell command. Colab comes with most of the common deep learning libraries preinstalled, such as PyTorch and TensorFlow; any extra packages can be installed with the "!pip3 install <package>" command. Below are some common commands.

```
# Mount Google Drive
from google.colab import drive
drive.mount('/content/drive') # Google Drive is mounted at "/content/drive/MyDrive" by default
```

```
# Check which GPU was allocated
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
    print('Not connected to a GPU')
else:
    print(gpu_info)
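
# A quick in-Python double check (assumes PyTorch, which Colab preinstalls)
import torch
print('CUDA available:', torch.cuda.is_available())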
```

```
# Install a Python package
!pip3 install <package>
```

```
# Use TensorBoard
%reload_ext tensorboard
%tensorboard --logdir "path/to/logs"
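
# Sketch: produce some logs for the dashboard above with PyTorch's bundled
# TensorBoard writer (an assumption; any TensorBoard-compatible logger works)
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter('path/to/logs')  # must match --logdir above
writer.add_scalar('train/loss', 0.5, global_step=1)
writer.close()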
```

```
# Clone the repository into /content/my-git-repo
!git clone https://github.com/my-github-username/my-git-repo.git
%cd my-git-repo
!./train.py --logdir /my/log/path --data_root /my/data/root --resume
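
# Or, instead of running the script, import from the repo directly in Python: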
import sys
sys.path.append('/content/my-git-repo') # add the repo directory to Python's import path
from train import my_training_method
my_training_method(arg1, arg2, ...)
```

To use the Kaggle API from the command line, first install the Kaggle package:

```
pip install kaggle
```
To create a new token, open the "Account" tab of your Kaggle user profile and click the "Create New API Token" button. This will download a fresh authentication token onto your machine. The Kaggle CLI tool looks for this token at ~/.kaggle/kaggle.json on Linux, macOS, and other UNIX-based operating systems, and at C:\Users\<Windows-username>\.kaggle\kaggle.json on Windows.
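On Colab, one minimal way to put the token in place is the cell below (a sketch: files.upload is Colab's upload helper, and the paths match the CLI's expected location):

```
from google.colab import files
files.upload()  # select the kaggle.json you downloaded from Kaggle

!mkdir -p ~/.kaggle
!mv kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json  # keep the token unreadable to other users
```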
```
# list the currently active competitions
kaggle competitions list
# download files associated with a competition
kaggle competitions download -c [COMPETITION]
# make a competition submission
kaggle competitions submit -c [COMPETITION] -f [FILE] -m [MESSAGE]
# list all previous submissions
kaggle competitions submissions -c [COMPETITION]
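
# For example, a hypothetical end-to-end run against the public "titanic"
# getting-started competition (accept the competition rules on kaggle.com first;
# submission.csv is a placeholder for your own predictions file):
kaggle competitions download -c titanic
kaggle competitions submit -c titanic -f submission.csv -m "first attempt"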
# list datasets matching a search term
kaggle datasets list -s [KEYWORD]
# download files associated with a dataset
kaggle datasets download -d [DATASET]
# generate a metadata file
kaggle datasets init -p /path/to/dataset
# create the dataset
kaggle datasets create -p /path/to/dataset
```

Your dataset will be private by default. You can also add a -u flag to make it public when you create it.

```
# generate a metadata file
# Make sure the id field in dataset-metadata.json (or datapackage.json) points to your dataset
kaggle datasets init -p /path/to/dataset
# create a new dataset version
kaggle datasets version -p /path/to/dataset -m "Your message here"
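```

For reference, the dataset-metadata.json that `kaggle datasets init` generates looks roughly like the sketch below; the title and the username/my-dataset-slug id are placeholders to edit before you create or version the dataset:

```
{
  "title": "My Example Dataset",
  "id": "username/my-dataset-slug",
  "licenses": [{"name": "CC0-1.0"}]
}
```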
```
# list Notebooks matching a search term
kaggle kernels list -s [KEYWORD]
# create and run a Notebook on Kaggle
kaggle kernels push -k [KERNEL] -p /path/to/kernel
# download code files and metadata associated with a Notebook
kaggle kernels pull [KERNEL] -p /path/to/download -m
# generate a metadata file
kaggle kernels init -p /path/to/kernel
```

Running `kaggle kernels init` generates kernel-metadata.json. As you fill in your title and slug, be aware that Notebook titles and slugs are linked to each other: a Notebook slug is always the title lowercased, with dashes (-) replacing spaces and special characters removed.

```
# create and run the Notebook on Kaggle
kaggle kernels push -p /path/to/kernel
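```

For reference, a minimal kernel-metadata.json might look like the sketch below (placeholder values; field names follow the Kaggle API documentation):

```
{
  "id": "username/my-notebook-slug",
  "title": "My Notebook",
  "code_file": "my-notebook.ipynb",
  "language": "python",
  "kernel_type": "notebook",
  "is_private": true
}
```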