Ascend / ModelLink: evaluate_mixtral_8x7b_ptd.sh
#!/bin/bash
# The number of parameters is not aligned
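# Evaluates Mixtral 8x7B (32K sequence length) on the MMLU benchmark using a single
# node with 8 devices and 8-way tensor parallelism. Run from the directory that
# contains evaluation.py, since it is invoked by relative path below, e.g.:
#   bash evaluate_mixtral_8x7b_ptd.sh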
export HCCL_CONNECT_TIMEOUT=1200
export COMBINED_ENABLE=1
export CUDA_DEVICE_MAX_CONNECTIONS=1
export TOKENIZERS_PARALLELISM=false
MASTER_ADDR=localhost
MASTER_PORT=6000
NNODES=1
NODE_RANK=0
GPUS_PER_NODE=8
TP=8
PP=1
CHECKPOINT="Your ckpt file path"
TOKENIZER_PATH="Your vocab file path"
DATA_PATH="Your data path (such as ./mmlu/test/)"
TASK="mmlu"
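# Example values (hypothetical paths, shown only for illustration; DATA_PATH follows
# the ./mmlu/test/ layout mentioned above):
#   CHECKPOINT="./model_weights/Mixtral-8x7B-v0.1-tp8-pp1/"
#   TOKENIZER_PATH="./model_from_hf/Mixtral-8x7B-v0.1/"
#   DATA_PATH="./mmlu/test/"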
DISTRIBUTED_ARGS="
--nproc_per_node $GPUS_PER_NODE \
--nnodes $NNODES \
--node_rank $NODE_RANK \
--master_addr $MASTER_ADDR \
--master_port $MASTER_PORT
"
GPT_ARGS="
--tensor-model-parallel-size ${TP} \
--pipeline-model-parallel-size ${PP} \
--task $TASK \
--task-data-path $DATA_PATH \
--max-new-tokens 1 \
--num-layers 32 \
--hidden-size 4096 \
--ffn-hidden-size 14336 \
--num-attention-heads 32 \
--group-query-attention \
--num-query-groups 8 \
--tokenizer-type PretrainedFromHF \
--tokenizer-name-or-path ${TOKENIZER_PATH} \
--seq-length 32768 \
--max-position-embeddings 32768 \
--micro-batch-size 1 \
--make-vocab-size-divisible-by 1 \
--untie-embeddings-and-output-weights \
--disable-bias-linear \
--position-embedding-type rope \
--normalization RMSNorm \
--use-fused-rmsnorm \
--swiglu \
--no-masked-softmax-fusion \
--attention-softmax-in-fp32 \
--load ${CHECKPOINT} \
--no-load-optim \
--no-load-rng \
--bf16 \
--seed 42
"
MOE_ARGS="
--num-experts 8 \
--moe-router-topk 2 \
--moe-train-capacity-factor 8.0
"
mkdir -p logs
torchrun $DISTRIBUTED_ARGS evaluation.py \
$GPT_ARGS \
$MOE_ARGS \
--distributed-backend nccl | tee logs/evaluation_mixtral_${TASK}.log