Cloud Server Featured Content

  • ChatGLMv3-6B: Before training starts, the tokenizer file of the ChatGLMv3-6B model must be modified. Edit the file chatglm3-6b/tokenization_chatglm.py. Several places near the end of the file need changes; the exact locations can be found from the surrounding code context. The result after modification is shown in Figure 2 and Figure 3.
    Figure 2 Modifying the ChatGLMv3-6B tokenizer file
    Figure 3 Modifying the ChatGLMv3-6B tokenizer file
  • Yi model: When the chat version of the Yi model is used, a bug in transformers 4.38 causes a vocab_size mismatch like the following when the tokenizer file is read:

        RuntimeError: Error(s) in loading state_dict for VocabParallelEmbedding: size mismatch for weight: copying a param with shape torch.Size([64000, 4096]) from checkpoint, the shape in current model is torch.Size([63992, 4096]).

    Before training starts, edit the llm_train/AscendSpeed/yi/3_training.sh file and add the --tokenizer-not-use-fast parameter. The result after modification is shown in Figure 1.
    Figure 1 Modifying the Yi model's 3_training.sh file
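    A minimal sketch of what the relevant portion of 3_training.sh might look like after the change. Only --tokenizer-not-use-fast is prescribed by this note; the variable name and the neighboring tokenizer arguments are assumptions based on common ModelLink script conventions:

        # Illustrative excerpt of llm_train/AscendSpeed/yi/3_training.sh after the fix.
        # Only --tokenizer-not-use-fast is the required addition; the other two
        # tokenizer arguments are assumed placeholders -- keep whatever the script
        # already passes and append the new flag.
        TOKENIZER_ARGS="--tokenizer-type PretrainedFromHF \
                        --tokenizer-name-or-path ${TOKENIZER_PATH} \
                        --tokenizer-not-use-fast"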
  • ds_z1_config.json sample template

        {
            "train_batch_size": "auto",
            "train_micro_batch_size_per_gpu": "auto",
            "gradient_accumulation_steps": "auto",
            "gradient_clipping": "auto",
            "zero_allow_untested_optimizer": true,
            "fp16": {
                "enabled": "auto",
                "loss_scale": 0,
                "loss_scale_window": 1000,
                "initial_scale_power": 16,
                "hysteresis": 2,
                "min_loss_scale": 1
            },
            "bf16": {
                "enabled": "auto"
            },
            "zero_optimization": {
                "stage": 1,
                "allgather_partitions": true,
                "allgather_bucket_size": 5e8,
                "overlap_comm": true,
                "reduce_scatter": true,
                "reduce_bucket_size": 5e8,
                "contiguous_gradients": true,
                "round_robin_gradients": true
            }
        }
  • dpo_yaml sample template

        ### model
        model_name_or_path: /home/ma-user/ws/tokenizers/Qwen2-72B

        ### method
        stage: dpo
        do_train: true
        # lora
        finetuning_type: lora
        lora_target: all
        pref_beta: 0.1
        pref_loss: sigmoid
        deepspeed: examples/deepspeed/ds_z3_config.json

        ### dataset
        dataset: dpo_en_demo
        dataset_dir: /home/ma-user/ws/llm_train/LLaMAFactory/LLaMA-Factory/data
        template: qwen
        cutoff_len: 4096
        packing: true
        max_samples: 50000
        overwrite_cache: true
        preprocessing_num_workers: 16

        ### output
        output_dir: /home/ma-user/ws/saves/dpo/llama3-8b/lora
        logging_steps: 2
        save_steps: 5000
        plot_loss: true
        overwrite_output_dir: true

        ### train
        per_device_train_batch_size: 1
        gradient_accumulation_steps: 8
        learning_rate: 5.0e-6
        num_train_epochs: 3.0
        lr_scheduler_type: cosine
        warmup_ratio: 0.1
        bf16: true
        flash_attn: sdpa
        ddp_timeout: 180000000
        include_tokens_per_second: true
        include_num_input_tokens_seen: true
  • rm_yaml sample template

        ### model
        model_name_or_path: /home/ma-user/ws/tokenizers/llama3-8b

        ### method
        stage: rm
        do_train: true
        # full-parameter
        # finetuning_type: full
        # lora
        finetuning_type: lora
        lora_target: all
        deepspeed: examples/deepspeed/ds_z0_config.json

        ### dataset
        dataset: dpo_en_demo
        template: llama3
        cutoff_len: 4096
        max_samples: 50000
        overwrite_cache: true
        preprocessing_num_workers: 16
        packing: true

        ### output
        output_dir: /home/ma-user/ws/saves/rm/llama3-8b/lora
        logging_steps: 1
        save_steps: 500
        plot_loss: true
        overwrite_output_dir: true

        ### train
        per_device_train_batch_size: 1
        gradient_accumulation_steps: 8
        learning_rate: 1.0e-4
        num_train_epochs: 3.0
        lr_scheduler_type: cosine
        warmup_ratio: 0
        bf16: true
        ddp_timeout: 180000000
        include_tokens_per_second: true
        include_num_input_tokens_seen: true
  • tune_yaml sample template

        ### model
        model_name_or_path: /home/ma-user/ws/tokenizers/Qwen2-72B

        ### method
        stage: sft
        do_train: true
        # full-parameter
        finetuning_type: full
        # lora
        # finetuning_type: lora
        # lora_target: all
        deepspeed: examples/deepspeed/ds_z3_config.json

        ### dataset
        dataset: identity,alpaca_en_demo
        dataset_dir: /home/ma-user/ws/llm_train/LLaMAFactory/LLaMA-Factory/data
        template: qwen
        cutoff_len: 4096
        packing: true
        max_samples: 100000
        overwrite_cache: true
        preprocessing_num_workers: 16

        ### output
        output_dir: /home/ma-user/ws/saves/tune/Qwen2-72B/sft
        logging_steps: 2
        save_steps: 5000
        plot_loss: true
        overwrite_output_dir: true

        ### train
        per_device_train_batch_size: 1
        gradient_accumulation_steps: 8
        learning_rate: 2.0e-5
        num_train_epochs: 10.0
        lr_scheduler_type: cosine
        warmup_ratio: 0.1
        bf16: true
        flash_attn: sdpa
        ddp_timeout: 180000000
        include_tokens_per_second: true
        include_num_input_tokens_seen: true
  • ppo_yaml sample template

        ### model
        model_name_or_path: /home/ma-user/ws/tokenizers/llama3-8b
        reward_model: /home/ma-user/ws/saves/rm/llama3-8b/lora

        ### method
        stage: ppo
        do_train: true
        # full-parameter
        # finetuning_type: full
        # reward_model_type: full
        # lora
        finetuning_type: lora
        lora_target: all
        deepspeed: examples/deepspeed/ds_z0_config.json

        ### dataset
        dataset: identity,alpaca_en_demo
        template: llama3
        cutoff_len: 4096
        max_samples: 50000
        overwrite_cache: true
        preprocessing_num_workers: 16
        packing: true

        ### output
        output_dir: /home/ma-user/ws/saves/ppo/llama3-8b/lora
        logging_steps: 1
        save_steps: 500
        plot_loss: true
        overwrite_output_dir: true

        ### train
        per_device_train_batch_size: 1
        gradient_accumulation_steps: 8
        learning_rate: 1.0e-5
        num_train_epochs: 3.0
        lr_scheduler_type: cosine
        warmup_ratio: 0
        bf16: true
        ddp_timeout: 180000000
        flash_attn: sdpa
        include_tokens_per_second: true
        include_num_input_tokens_seen: true

        ### generate
        max_new_tokens: 512
        top_k: 0
        top_p: 0.9
  • lora_yaml sample template

        ### model
        model_name_or_path: /home/ma-user/ws/tokenizers/Qwen2-72B

        ### method
        stage: sft
        do_train: true
        finetuning_type: lora
        lora_target: all
        deepspeed: examples/deepspeed/ds_z3_config.json

        ### dataset
        dataset: identity,alpaca_en_demo
        template: qwen
        cutoff_len: 4096
        packing: true
        max_samples: 1000
        overwrite_cache: true
        preprocessing_num_workers: 16

        ### output
        output_dir: /home/ma-user/ws/tokenizers/Qwen2-72B/lora
        logging_steps: 2
        save_steps: 5000
        plot_loss: true
        overwrite_output_dir: true

        ### train
        per_device_train_batch_size: 1
        gradient_accumulation_steps: 8
        learning_rate: 1.0e-5
        num_train_epochs: 10.0
        lr_scheduler_type: cosine
        warmup_ratio: 0.1
        fp16: true
        ddp_timeout: 180000000
        include_tokens_per_second: true
        include_num_input_tokens_seen: true
  • sft_yaml sample template

        ### model
        model_name_or_path: /home/ma-user/ws/tokenizers/Qwen2-72B

        ### method
        stage: sft
        do_train: true
        finetuning_type: full
        deepspeed: examples/deepspeed/ds_z3_config.json

        ### dataset
        dataset: identity,alpaca_en_demo
        template: qwen
        cutoff_len: 4096
        packing: true
        max_samples: 1000
        overwrite_cache: true
        preprocessing_num_workers: 16

        ### output
        output_dir: /home/ma-user/ws/tokenizers/Qwen2-72B/sft
        logging_steps: 2
        save_steps: 5000
        plot_loss: true
        overwrite_output_dir: true

        ### train
        per_device_train_batch_size: 1
        gradient_accumulation_steps: 8
        learning_rate: 1.0e-5
        num_train_epochs: 10.0
        lr_scheduler_type: cosine
        warmup_ratio: 0.1
        fp16: true
        ddp_timeout: 180000000
        include_tokens_per_second: true
        include_num_input_tokens_seen: true
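    Once one of the templates above has been saved to a YAML file, it can be handed to the trainer. A sketch, assuming the standard LLaMA-Factory CLI entry point is available in the training image; the YAML path below is a hypothetical example:

        # Launch training from the LLaMA-Factory source directory. Every hyperparameter,
        # dataset path, and the deepspeed JSON reference is read from the YAML file.
        cd /home/ma-user/ws/llm_train/LLaMAFactory/LLaMA-Factory
        llamafactory-cli train /home/ma-user/ws/yaml/sft_qwen2_72b.yaml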
  • Recommended parameters and NPU card settings per model
    The recommended training parameters and compute requirements for each model are listed in Table 2. In the Nodes & NPUs column, 1*node & 4*Ascend means a single node with 4 cards, and so on. TP is the tensor model parallel size; PP is the pipeline model parallel size.

    Table 2 Recommended parameters and NPU card settings per model

    | No. | Model     | Model size    | Sequence length | TP | PP | Nodes & NPUs      |
    |-----|-----------|---------------|-----------------|----|----|-------------------|
    | 1   | llama2    | llama2-7b     | SEQ_LEN=4096    | 1  | 4  | 1*node & 4*Ascend |
    |     |           |               | SEQ_LEN=8192    | 2  | 4  | 1*node & 8*Ascend |
    | 2   | llama2    | llama2-13b    | SEQ_LEN=4096    | 8  | 1  | 1*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 8  | 1  | 1*node & 8*Ascend |
    | 3   | llama2    | llama2-70b    | SEQ_LEN=4096    | 8  | 4  | 4*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 8  | 8  | 8*node & 8*Ascend |
    | 4   | llama3    | llama3-8b     | SEQ_LEN=4096    | 4  | 1  | 1*node & 4*Ascend |
    |     |           |               | SEQ_LEN=8192    | 4  | 1  | 1*node & 4*Ascend |
    | 5   | llama3    | llama3-70b    | SEQ_LEN=4096    | 8  | 4  | 4*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 8  | 8  | 8*node & 8*Ascend |
    | 6   | Qwen      | qwen-7b       | SEQ_LEN=4096    | 4  | 1  | 1*node & 4*Ascend |
    |     |           |               | SEQ_LEN=8192    | 4  | 1  | 1*node & 4*Ascend |
    | 7   | Qwen      | qwen-14b      | SEQ_LEN=4096    | 8  | 1  | 1*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 8  | 1  | 1*node & 8*Ascend |
    | 8   | Qwen      | qwen-72b      | SEQ_LEN=4096    | 8  | 4  | 4*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 8  | 8  | 8*node & 8*Ascend |
    | 9   | Qwen1.5   | qwen1.5-7b    | SEQ_LEN=4096    | 4  | 1  | 1*node & 4*Ascend |
    |     |           |               | SEQ_LEN=8192    | 4  | 1  | 1*node & 4*Ascend |
    | 10  | Qwen1.5   | qwen1.5-14b   | SEQ_LEN=4096    | 8  | 1  | 1*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 8  | 1  | 1*node & 8*Ascend |
    | 11  | Qwen1.5   | qwen1.5-32b   | SEQ_LEN=4096    | 8  | 2  | 2*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 8  | 4  | 4*node & 8*Ascend |
    | 12  | Qwen1.5   | qwen1.5-72b   | SEQ_LEN=4096    | 8  | 4  | 4*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 8  | 8  | 8*node & 8*Ascend |
    | 13  | Yi        | yi-6b         | SEQ_LEN=4096    | 1  | 4  | 1*node & 4*Ascend |
    |     |           |               | SEQ_LEN=8192    | 2  | 4  | 1*node & 8*Ascend |
    | 14  | Yi        | yi-34b        | SEQ_LEN=4096    | 4  | 4  | 2*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 8  | 4  | 4*node & 8*Ascend |
    | 15  | ChatGLMv3 | glm3-6b       | SEQ_LEN=4096    | 1  | 4  | 1*node & 4*Ascend |
    |     |           |               | SEQ_LEN=8192    | 2  | 4  | 1*node & 8*Ascend |
    | 16  | Baichuan2 | baichuan2-13b | SEQ_LEN=4096    | 8  | 1  | 1*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 8  | 1  | 1*node & 8*Ascend |
    | 17  | Qwen2     | qwen2-0.5b    | SEQ_LEN=4096    | 2  | 1  | 1*node & 2*Ascend |
    |     |           |               | SEQ_LEN=8192    | 2  | 1  | 1*node & 2*Ascend |
    | 18  | Qwen2     | qwen2-1.5b    | SEQ_LEN=4096    | 2  | 1  | 1*node & 2*Ascend |
    |     |           |               | SEQ_LEN=8192    | 2  | 1  | 1*node & 2*Ascend |
    | 19  | Qwen2     | qwen2-7b      | SEQ_LEN=4096    | 4  | 1  | 1*node & 4*Ascend |
    |     |           |               | SEQ_LEN=8192    | 4  | 1  | 1*node & 4*Ascend |
    | 20  | Qwen2     | qwen2-72b     | SEQ_LEN=4096    | 8  | 4  | 4*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 8  | 8  | 8*node & 8*Ascend |
    | 21  | GLMv4     | glm4-9b       | SEQ_LEN=4096    | 2  | 4  | 1*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 2  | 4  | 1*node & 8*Ascend |
    | 22  | mistral   | mistral-7b    | SEQ_LEN=4096    | 1  | 4  | 1*node & 8*Ascend |
    | 23  | mixtral   | mixtral-8x7b  | SEQ_LEN=4096    | 2  | 8  | 2*node & 8*Ascend |
    |     |           |               | SEQ_LEN=8192    | 2  | 8  | 2*node & 8*Ascend |
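    The layouts in Table 2 follow the usual Megatron-style constraint that TP × PP must evenly divide the total NPU count, with the quotient becoming the data-parallel size. A quick shell sanity check (the variable names are illustrative), using qwen2-72b at SEQ_LEN=8192 as the example:

        # Verify a parallel layout from Table 2: world size must be divisible by TP*PP,
        # and the quotient is the data-parallel size.
        NODES=8; NPUS_PER_NODE=8; TP=8; PP=8
        WORLD_SIZE=$(( NODES * NPUS_PER_NODE ))
        if (( WORLD_SIZE % (TP * PP) == 0 )); then
            echo "data-parallel size: $(( WORLD_SIZE / (TP * PP) ))"   # prints 1 for this layout
        else
            echo "invalid layout: ${WORLD_SIZE} NPUs cannot host TP=${TP} x PP=${PP}"
        fi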
  • Parameter description for user-defined data processing scripts
    To customize the data processing script and run it on its own, again take llama2 as an example.
    Method 1: Open the scripts/llama2/1_preprocess_data.sh script, copy out the python command it executes, and modify the environment variable values. In the Notebook, change into /home/ma-user/work/llm_train/AscendSpeed/ModelLink and run the python command there.
    Method 2: Edit scripts/llama2/1_preprocess_data.sh directly in the Notebook, set the environment variable values as needed, add the command cd /home/ma-user/work/llm_train/AscendSpeed/ModelLink as the first line of the script, and then run the script in the Notebook (see the sketch after this item).
    The environment variables are described in detail below:

    Table 1 Environment variables for data preprocessing

    | Environment variable | Example | Description |
    |---|---|---|
    | RUN_TYPE | pretrain, sft, lora | Selects the preprocessing scenario: pretrain for pretraining preprocessing (default: pretrain); sft / lora for fine-tuning preprocessing. |
    | ORIGINAL_TRAIN_DATA_PATH | /home/ma-user/work/training_data/finetune/moss_LossCompare.jsonl | Path where the original dataset is stored. |
    | TOKENIZER_PATH | /home/ma-user/work/model/llama-2-13b-chat-hf | Path where the tokenizer is stored, in the same folder as the HF weights. Adjust to your actual layout. |
    | PROCESSED_DATA_PREFIX | /home/ma-user/work/llm_train/processed_for_input/llama2-13b/data/pretrain/alpaca | Save path plus dataset prefix for the processed dataset. |
    | TOKENIZER_TYPE | PretrainedFromHF | One of ['BertWordPieceLowerCase','BertWordPieceCase','GPT2BPETokenizer','PretrainedFromHF']; usually PretrainedFromHF. |
    | SEQ_LEN | 4096 | Maximum sequence length to process. The script detects samples longer than SEQ_LEN and logs them. |
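    As referenced above, a sketch of method 2: the modified head of scripts/llama2/1_preprocess_data.sh, assuming the script consumes these environment variables as-is. The values are the examples from Table 1; adjust them to your own paths:

        # Added as the first line of the script, per method 2.
        cd /home/ma-user/work/llm_train/AscendSpeed/ModelLink
        # Environment variables from Table 1 (example values).
        export RUN_TYPE=pretrain
        export ORIGINAL_TRAIN_DATA_PATH=/home/ma-user/work/training_data/finetune/moss_LossCompare.jsonl
        export TOKENIZER_PATH=/home/ma-user/work/model/llama-2-13b-chat-hf
        export PROCESSED_DATA_PREFIX=/home/ma-user/work/llm_train/processed_for_input/llama2-13b/data/pretrain/alpaca
        export TOKENIZER_TYPE=PretrainedFromHF
        export SEQ_LEN=4096
        # ...the rest of 1_preprocess_data.sh is left unchanged.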
  • Parameter description for pretraining dataset preprocessing
    The parameters in the pretraining dataset preprocessing script scripts/llama2/1_preprocess_data.sh are as follows (a command sketch follows this list):
    --input: path where the original dataset is stored.
    --output-prefix: save path plus dataset name for the processed dataset (for example, alpaca_gpt4_data).
    --tokenizer-type: tokenizer type; one of ['BertWordPieceLowerCase','BertWordPieceCase','GPT2BPETokenizer','PretrainedFromHF'], usually PretrainedFromHF.
    --tokenizer-name-or-path: path where the tokenizer is stored, in the same folder as the HF weights.
    --seq-length: maximum sequence length to process.
    --workers: number of cards used for data processing / number of worker processes to start.
    --log-interval: interval at which progress logs are emitted; when preparing data for large models, this parameter controls how much is logged.
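    A hedged command sketch assembling these flags, reusing the environment variables from Table 1. It assumes the script wraps a Megatron-style tools/preprocess_data.py entry point; copy the exact invocation out of 1_preprocess_data.sh rather than relying on this path, and the --workers and --log-interval values below are illustrative:

        cd /home/ma-user/work/llm_train/AscendSpeed/ModelLink
        # Assumed entry point; the flag names are the ones documented above.
        python tools/preprocess_data.py \
            --input ${ORIGINAL_TRAIN_DATA_PATH} \
            --output-prefix ${PROCESSED_DATA_PREFIX} \
            --tokenizer-type PretrainedFromHF \
            --tokenizer-name-or-path ${TOKENIZER_PATH} \
            --seq-length ${SEQ_LEN} \
            --workers 8 \
            --log-interval 1000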
  • Parameter description for fine-tuning dataset preprocessing
    Fine-tuning covers SFT and LoRA. The dataset preprocessing script parameters are as follows (a command sketch follows this list):
    --input: path where the original dataset is stored.
    --output-prefix: save path plus dataset name for the processed dataset (for example, alpaca_gpt4_data).
    --tokenizer-type: tokenizer type; one of ['BertWordPieceLowerCase','BertWordPieceCase','GPT2BPETokenizer','PretrainedFromHF'], usually PretrainedFromHF.
    --tokenizer-name-or-path: path where the tokenizer is stored, in the same folder as the HF weights.
    --handler-name: purpose of the generated dataset; here an instruction dataset for fine-tuning is generated.
        GeneralPretrainHandler: default; used during pretraining preprocessing to do simple key-based filtering of the dataset.
        GeneralInstructionHandler: used during SFT/LoRA preprocessing; masks the user_prompt portion of each sample's full_prompt.
    --seq-length: maximum sequence length to process.
    --workers: number of cards used for data processing / number of worker processes to start.
    --log-interval: interval at which progress logs are emitted; when preparing data for large models, this parameter controls how much is logged.
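    The SFT/LoRA variant of the command differs from the pretraining sketch above only in the added --handler-name flag; the same assumptions about the entry point and the illustrative --workers and --log-interval values apply:

        cd /home/ma-user/work/llm_train/AscendSpeed/ModelLink
        # Assumed entry point; GeneralInstructionHandler masks the user_prompt
        # inside full_prompt, as documented above.
        python tools/preprocess_data.py \
            --input ${ORIGINAL_TRAIN_DATA_PATH} \
            --output-prefix ${PROCESSED_DATA_PREFIX} \
            --tokenizer-type PretrainedFromHF \
            --tokenizer-name-or-path ${TOKENIZER_PATH} \
            --handler-name GeneralInstructionHandler \
            --seq-length ${SEQ_LEN} \
            --workers 8 \
            --log-interval 1000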