Environment:
1. mindformers guide:
https://gitee.com/mindspore/mindformers/blob/dev/docs/model_cards/llama2.md#%E6%A8%A1%E5%9E%8B%E6%9D%83%E9%87%8D%E4%B8%8B%E8%BD%BD%E4%B8%8E%E8%BD%AC%E6%8D%A2
2. Image: mindformers0.8.0_ms2.2.0-cann7.0rc1_py_3.9:dog
3. Code: mindformers-dev
Weight conversion: PyTorch must be installed, and the original Chinese weights must first be downloaded to the storage directory. [lfx-obs]
English weights: obs://lfx/weights/llama2_7b_english/
Chinese weights: obs://lfx/weights/llama2-Chines-7b-Chat
4. Inference script (single card):
Make sure the MA instance has network access, in which case inference can be run directly. If not, create a checkpoint_download folder in the current directory containing the llama/llama-7b.yaml config file.
python run_mindformer.py --config configs/llama2/run_llama2_7b.yaml --load_checkpoint /home/ma-user/work/mindformers-dev/llama2-hy.ckpt --run_mode predict --predict_data '我爱北京, 因为' --use_parallel False
python run_mindformer.py --config configs/llama2/run_llama2_7b.yaml --run_mode predict --predict_data 'i love beijing, because' --use_parallel False
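For the offline case, the checkpoint_download layout described above can be prepared by hand. A minimal sketch, assuming it is run from the mindformers-dev repo root:

```shell
# Offline fallback: create the folder that run_mindformer.py would
# otherwise populate over the network (paths relative to the
# mindformers-dev repo root -- an assumption).
mkdir -p checkpoint_download/llama
# The llama-7b.yaml config must then be placed here by hand, e.g.
# copied over from a machine that does have network access:
ls checkpoint_download/llama
```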
5. Pretraining script:
Dataset conversion:
# Use tools/dataset_preprocess/llama/llama_preprocess.py for data preprocessing and Mindrecord generation
python llama_preprocess.py \
  --dataset_type wiki \
  --input_glob /home/ma-user/work/mindformers-dev/dataset/wikitext-2/wiki.train.tokens \
  --model_file /home/ma-user/work/mindformers-dev/dataset/wikitext-2/tokenizer.model \
  --seq_length 4096 \
  --output_file /home/ma-user/work/mindformers-dev/dataset/mindrecord/wiki4096.mindrecord
6. LoRA fine-tuning script:
pip install fschat
python alpaca_converter.py \
  --data_path /home/ma-user/work/weitiao_dataset/alpaca_data.json \
  --output_path /home/ma-user/work/weitiao_dataset/alpaca-data-conversation.json
python llama_preprocess.py \
  --dataset_type qa \
  --input_glob /home/ma-user/work/weitiao_dataset/alpaca-data-conversation.json \
  --model_file /home/ma-user/work/mindformers-dev/dataset/wikitext-2/tokenizer.model \
  --seq_length 2048 \
  --output_file /home/ma-user/work/mindformers-dev/dataset/wikitext-2/alpaca-fastchat2048.mindrecord
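The conversion step above reshapes alpaca-style records into multi-turn conversations before tokenization. A sketch of the idea, assuming the FastChat conversation format; the real alpaca_converter.py may differ, so treat this as an illustration only:

```python
import json

# Sketch of the alpaca -> conversation conversion that alpaca_converter.py
# performs before llama_preprocess.py tokenizes the result. Field names
# follow the FastChat conversation format (an assumption).
def alpaca_to_conversation(records):
    out = []
    for i, rec in enumerate(records):
        prompt = rec["instruction"]
        if rec.get("input"):                      # optional context field
            prompt += "\n" + rec["input"]
        out.append({
            "id": "alpaca_{}".format(i),
            "conversations": [
                {"from": "human", "value": prompt},
                {"from": "gpt", "value": rec["output"]},
            ],
        })
    return out

sample = [{"instruction": "Translate to French.", "input": "Hello", "output": "Bonjour"}]
print(json.dumps(alpaca_to_conversation(sample), ensure_ascii=False, indent=2))
```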
Chinese weight file: /home/ma-user/work/llama2-hy.ckpt
In configs/llama2/run_llama2_7b_lora_910b.yaml, point dataset_dir at the converted mindrecord and set:
load_checkpoint: /home/ma-user/work/llama2-hy.ckpt
auto_trans_ckpt: False
use_parallel: False
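Collected in one place, the yaml edits look roughly like this; the nesting of dataset_dir under train_dataset.data_loader is an assumption based on common mindformers config layout, so verify it against your copy of run_llama2_7b_lora_910b.yaml:

```yaml
load_checkpoint: /home/ma-user/work/llama2-hy.ckpt
auto_trans_ckpt: False
use_parallel: False
train_dataset: &train_dataset
  data_loader:
    dataset_dir: "/home/ma-user/work/mindformers-dev/dataset/wikitext-2/alpaca-fastchat2048.mindrecord"
```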
Launch fine-tuning:
cd scripts
bash run_standalone.sh ../configs/llama2/run_llama2_7b_lora_910b.yaml [DEVICE_ID] finetune
# e.g. on device 0, logging to lora.txt:
./run_standalone.sh ../configs/llama2/run_llama2_7b_lora_910b.yaml 0 finetune > lora.txt 2>&1 &