{"id":743,"date":"2026-04-07T01:10:11","date_gmt":"2026-04-06T17:10:11","guid":{"rendered":"https:\/\/www.liaoxinghui.com\/?p=743"},"modified":"2026-04-07T01:10:11","modified_gmt":"2026-04-06T17:10:11","slug":"deepspeed-gradient-accumulation-failure","status":"publish","type":"post","link":"https:\/\/www.liaoxinghui.com\/?p=743","title":{"rendered":"LoRA\u591a\u5361\u8bad\u7ec3\u68af\u5ea6\u7d2f\u79ef\u5931\u6548\uff1a\u6709\u6548batch size\u8fdc\u5c0f\u4e8e\u9884\u671f\u5bfc\u81f4\u6a21\u578b\u6536\u655b\u5f02\u5e38"},"content":{"rendered":"<h2>\u5148\u8bf4\u7ed3\u8bba\uff0c\u7701\u5f97\u4f60\u8e29\u5751<\/h2>\n<p>gradient_accumulation_steps \u8fd9\u4e2a\u53c2\u6570\uff0c\u5982\u679c\u4f60\u5728 DeepSpeed \u7684 json \u914d\u7f6e\u6587\u4ef6\u548c\u547d\u4ee4\u884c\u90fd\u8bbe\u4e86\uff0c\u522b\u9ad8\u5174\u592a\u65e9\u2014\u2014<strong>DeepSpeed Zero \u7684\u67d0\u4e9b\u7248\u672c\u4f1a\u4ee5\u914d\u7f6e\u6587\u4ef6\u4e3a\u51c6\uff0c\u4e0d\u7ba1\u4f60\u547d\u4ee4\u884c\u4f20\u4e86\u4ec0\u4e48<\/strong>\u3002<\/p>\n<p>\u6211\u8fd9\u6b21\u7684\u95ee\u9898\u5c31\u662f\u8fd9\u6837\uff1a\u547d\u4ee4\u884c\u91cc\u4f20\u4e86 <code>--gradient_accumulation_steps 8<\/code>\uff0c\u4f46 ds_config.json \u91cc\u8fd9\u4e2a\u5b57\u6bb5\u662f 1\uff08\u6216\u8005\u5e72\u8106\u6ca1\u5199\uff0c\u7528\u4e86\u9ed8\u8ba4\u503c\uff09\uff0c\u7ed3\u679c\u5b9e\u9645\u8dd1\u7684\u8fd8\u662f 1\u3002<\/p>\n<p>\u6709\u6548 batch size \u4e0d\u662f 32\uff0c\u662f 1\u3002<\/p>\n<p>loss \u964d\u5f97\u98de\u5feb\u4e0d\u662f\u5b66\u4e60\u7387\u592a\u9ad8\uff0c\u662f batch \u592a\u5c0f\u6a21\u578b\u5728\u8fc7\u62df\u5408\u6bcf\u4e2a batch\u3002<\/p>\n<p>\u8fd9\u4e2a bug \u6211\u6392\u67e5\u4e86\u5927\u6982 40 \u4e2a\u5c0f\u65f6\uff0c\u635f\u5931\u4e86\u5dee\u4e0d\u591a 3 
\u5929\u7684\u8bad\u7ec3\u65f6\u95f4\u3002\u5199\u51fa\u6765\u8ba9\u5404\u4f4d\u5c11\u8d70\u5f2f\u8def\u3002<\/p>\n<h2>\u4e1a\u52a1\u573a\u666f<\/h2>\n<p>\u4e8b\u60c5\u662f\u8fd9\u6837\u7684\u2014\u2014\u4e0a\u5468\u4e09\uff0c\u6211\u63a5\u624b\u4e86\u4e00\u4e2a LoRA \u5fae\u8c03\u4efb\u52a1\u3002<\/p>\n<p>\u7528\u7684\u662f 4 \u5361 A100 80G \u7684\u673a\u5668\uff0c\u6a21\u578b\u662f LLaMA-3-8B\u3002\u663e\u5b58\u4e0d\u591f\u8dd1\u5927 batch\uff0c\u8fd9\u5927\u5bb6\u90fd\u61c2\uff0c\u5355\u5361\u6700\u591a\u585e\u4e2a batch_size=1 \u518d\u5e26\u4e2a LoRA \u6a21\u5757\u3002\u6211\u5f53\u65f6\u60f3\u7740\uff0c\u65e2\u7136\u663e\u5b58\u8fd9\u4e48\u7d27\u5f20\uff0c\u68af\u5ea6\u7d2f\u79ef\u603b\u5f97\u5f00\u8d77\u6765\u5427\u3002<\/p>\n<p><strong>\u6570\u636e\u8bf4\u660e<\/strong>\uff1a\u8bad\u7ec3\u6570\u636e\u662f\u516c\u53f8\u5185\u90e8\u7684\u4e2d\u6587\u5bf9\u8bdd\u6570\u636e\u96c6\uff0c\u5927\u6982 50 \u4e07\u6761\u5bf9\u8bdd\uff0c\u683c\u5f0f\u662f instruction-output \u5bf9\u3002\u6211\u7528\u8fd9\u4e2a\u6570\u636e\u96c6\u8dd1\u4e86 2 \u4e2a epoch\uff0c\u671f\u95f4\u76d1\u63a7\u4e86\u8bad\u7ec3 loss\u3001\u9a8c\u8bc1 loss\u3001\u68af\u5ea6\u8303\u6570\u548c\u663e\u5b58\u5360\u7528\u3002<\/p>\n<p>\u4e8e\u662f\u914d\u4e86\uff1a<\/p>\n<pre><code class=\"lang-bash language-bash bash\">torchrun --nproc_per_node=4 \\\n    train.py \\\n    --batch_size 1 \\\n    --gradient_accumulation_steps 8 \\\n    --learning_rate 2e-4<\/code><\/pre>\n<p>\u7406\u8bba\u6709\u6548 batch = 1 \u00d7 8 \u00d7 4\u5361 = 32\u3002<\/p>\n<p>\u8fc7\u4e86\u5927\u6982 2 \u4e2a epoch\uff0c\u6211\u4e00\u770b wandb \u7684 loss \u66f2\u7ebf\u2014\u2014<\/p>\n<p>\u597d\u5bb6\u4f19\uff0c\u8fd9 loss \u4e0b\u964d\u7684\u901f\u5ea6\u6bd4\u6211\u5f53\u5e74\u505a SGD 
\n<p>Then I looked at the validation set.<\/p>\n<p>Validation loss was barely moving while train loss was in free fall. I know that smell: <strong>overfitting<\/strong>.<\/p>\n<h2>The debugging trail: how I walked into the pit, step by step<\/h2>\n<h3>Stage one: blaming the learning rate<\/h3>\n<p>Loss dropping too fast, so the first instinct was that the learning rate was too high.<\/p>\n<p>I lowered it from 2e-4 to 5e-5 and ran for another half day. Nothing changed.<\/p>\n<p><strong>In hindsight this step was pure flailing<\/strong>: it never occurred to me that batch size could be the problem, so I spent two days tuning the learning rate with zero effect. I should have checked the GPU memory usage first. Dumbest decision of the whole episode.<\/p>\n<h3>Stage two: looking at gradient norms<\/h3>\n<p>Later I asked around on discord, and someone suggested printing the gradient norm.<\/p>\n<p>I added a bit of logging to the training script:<\/p>\n<pre><code class=\"lang-python language-python python\"># place this right before optimizer.step()\ntotal_norm = 0.0\nfor p in model.parameters():\n    if p.grad is not None:\n        param_norm = p.grad.data.norm(2)\n        total_norm += param_norm.item() ** 2\ntotal_norm = total_norm ** 0.5\nlogger.info(f&quot;Step {global_step}, grad_norm: {total_norm:.4f}&quot;)<\/code><\/pre>\n<p>The run showed grad_norm hovering around 0.01~0.05.<\/p>\n<p>Honestly that value did not look outrageous, so I shrugged it off.<\/p>\n<p><strong>The catch: this was printed before each single-step update, and every update had accumulated the gradient of only 1 batch, not 8.<\/strong><\/p>\n<h3>Stage three: finally checking the DeepSpeed config<\/h3>\n<p>Around hour 30 I started to suspect the DeepSpeed configuration.<\/p>\n<p>My ds_config.json looked like this:<\/p>\n<pre><code class=\"lang-json language-json json\">{\n  &quot;train_batch_size&quot;: 32,\n  &quot;gradient_accumulation_steps&quot;: 1,\n  &quot;fp16&quot;: {\n    &quot;enabled&quot;: true\n  },\n  &quot;zero_optimization&quot;: {\n    &quot;stage&quot;: 3,\n    &quot;offload_optimizer&quot;: {\n      &quot;device&quot;: &quot;cpu&quot;\n    }\n  }\n}<\/code><\/pre>\n<p>Wait.<\/p>\n<p>There it is.<\/p>\n<p><code>gradient_accumulation_steps<\/code> is 1 in the config file, but I passed 8 on the command line.<\/p>\n<p><strong>DeepSpeed resolves the config with the json file taking precedence over command-line arguments<\/strong>.<\/p>\n<p>At least that is how the 0.14.2 build I was using behaved.<\/p>
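This precedence is the reverse of the usual argparse habit, which is exactly why it bites. A minimal sketch of the resolution behavior as I observed it (my own illustration, not DeepSpeed source code):

```python
def resolve_grad_accum(cli_value, ds_config, default=1):
    """Sketch of the behavior I observed: if the json config contains the key,
    that value wins; otherwise DeepSpeed falls back to its own default.
    cli_value never reaches the engine; it is listed only to make the trap explicit."""
    return ds_config.get("gradient_accumulation_steps", default)

# my situation: CLI said 8, config said 1 -> the run used 1
print(resolve_grad_accum(8, {"gradient_accumulation_steps": 1}))
# config omits the key entirely -> the default 1 applies, still not 8
print(resolve_grad_accum(8, {}))
```

Both calls print 1, which is precisely the failure mode described above.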
\n<h3>Verification: adding a counter to confirm<\/h3>\n<p>To confirm it, I added a counter to the training loop:<\/p>\n<pre><code class=\"lang-python language-python python\">class AccumulationCounter:\n    def __init__(self):\n        self.count = 0\n\n    def step(self):\n        self.count += 1\n        if self.count % 8 == 0:\n            logger.info(f&quot;Effective batch boundary reached at step {self.count}&quot;)\n            # log the average loss over these 8 steps\n\ncounter = AccumulationCounter()<\/code><\/pre>\n<p>What I found was that the loss change at each &#8220;effective batch boundary&#8221; was far smaller than it should be if eight accumulated micro-steps were feeding one big update.<\/p>\n<p>Which means the whole &#8220;parameters update once every 8 steps&#8221; behavior simply was not happening.<\/p>\n<h3>Stage four: the fix and comparison<\/h3>\n<p>The corrected ds_config.json:<\/p>\n<pre><code class=\"lang-json language-json json\">{\n  &quot;train_batch_size&quot;: 32,\n  &quot;gradient_accumulation_steps&quot;: 8,\n  &quot;gradient_clipping&quot;: 1.0,\n  &quot;fp16&quot;: {\n    &quot;enabled&quot;: true\n  },\n  &quot;zero_optimization&quot;: {\n    &quot;stage&quot;: 3,\n    &quot;offload_optimizer&quot;: {\n      &quot;device&quot;: &quot;cpu&quot;\n    }\n  }\n}<\/code><\/pre>\n<p>And the command line became:<\/p>\n<pre><code class=\"lang-bash language-bash bash\">torchrun --nproc_per_node=4 \\\n    train.py \\\n    --batch_size 1 \\\n    --gradient_accumulation_steps 8 \\\n    --learning_rate 2e-4 \\\n    --deepspeed ds_config.json<\/code><\/pre>\n<p>Once it was running, grad_norm went from 0.01~0.05 to 0.08~0.3, a much more reasonable range.<\/p>
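The size of that jump lines up with the arithmetic, assuming the script follows the usual convention of dividing the loss by the accumulation count. A toy illustration with made-up gradient values (my own numbers, purely to show the scaling):

```python
import math

def l2(v):
    return math.sqrt(sum(x * x for x in v))

accum = 8
# eight made-up per-micro-batch gradients for a 2-parameter model
grads = [[0.4, -0.3], [0.35, -0.25], [0.5, -0.2], [0.45, -0.35],
         [0.38, -0.28], [0.42, -0.31], [0.47, -0.22], [0.41, -0.33]]

# broken run: loss already divided by 8, but every micro-batch triggered a step,
# so each update carried a single micro-batch gradient at 1/8 scale
broken_update = [x / accum for x in grads[0]]

# fixed run: the eight scaled micro-batch gradients are summed before stepping,
# recovering roughly the mean gradient over the whole effective batch
fixed_update = [sum(g[i] for g in grads) / accum for i in range(2)]

print(l2(broken_update))  # small norm
print(l2(fixed_update))   # several times larger
```

With similar micro-batch gradients the fixed update's norm comes out close to 8x the broken one's, the same order as the 0.01~0.05 to 0.08~0.3 shift above.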
\n<p>The loss curves also normalized: train loss and val loss now fall together, roughly in sync.<\/p>\n<h2>GPU memory comparison<\/h2>\n<p><strong>This is the most convincing piece of evidence.<\/strong><\/p>\n<p>Before the fix (gradient_accumulation_steps=1 actually in effect):<\/p>\n<pre><code class=\"lang-bash language-bash bash\">$ nvidia-smi\n|===============================+======================+======================|\n| GPU 0        |  62&deg;C |  28GiB \/ 80GiB |  8%    |\n| GPU 1        |  61&deg;C |  27GiB \/ 80GiB |  7%    |\n| GPU 2        |  63&deg;C |  28GiB \/ 80GiB |  8%    |\n| GPU 3        |  62&deg;C |  27GiB \/ 80GiB |  7%    |\n+-------------------------------+----------------------+----------------------+<\/code><\/pre>\n<p>After the fix (gradient_accumulation_steps=8 genuinely in effect):<\/p>\n<pre><code class=\"lang-bash language-bash bash\">$ nvidia-smi\n|===============================+======================+======================|\n| GPU 0        |  71&deg;C |  45GiB \/ 80GiB |  52%   |\n| GPU 1        |  70&deg;C |  44GiB \/ 80GiB |  51%   |\n| GPU 2        |  72&deg;C |  45GiB \/ 80GiB |  52%   |\n| GPU 3        |  71&deg;C |  44GiB \/ 80GiB |  51%   |\n+-------------------------------+----------------------+----------------------+<\/code><\/pre>\n<p>Memory jumped from 28GB to 45GB, a 17GB difference. That is no rounding error: if gradient_accumulation_steps=8 were really in effect, gradients build up across those 8 micro-steps, and peak memory should sit noticeably above the single-step case.<\/p>\n<p>My best explanation after the fact (I did not profile it line by line): with accumulation effectively at 1, memory holds only one step's gradients plus optimizer state, model weights, and activations, and each step's gradients can be reduced and released right away. With 8-step accumulation genuinely on, DeepSpeed has to keep gradient accumulation buffers alive across the whole window, and with ZeRO-3's partitioned state that overhead is real. As far as I can tell, that is where a gap on the order of 17GB comes from.<\/p>\n<p><strong>So next time the config silently fails, one glance at nvidia-smi will tell you.<\/strong><\/p>\n<h2>Calling DeepSpeed the right way<\/h2>\n<p>I want to dwell on this part, because while debugging I realized many people do not actually know what a correct DeepSpeed setup looks like. Here is the template; check your own script against it.<\/p>\n<h3>How to write the initialization<\/h3>\n<p>First, you wrap the model and optimizer with <code>deepspeed.initialize<\/code>; that is DeepSpeed's core entry point. The wrong way is to use plain PyTorch <code>model = model.cuda()<\/code> and <code>optimizer = torch.optim.AdamW(model.parameters())<\/code> and nothing else.<\/p>\n<pre><code class=\"lang-python language-python python\">import json\n\nimport torch\nimport deepspeed\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\n# 1. load the model and tokenizer first\ntokenizer = AutoTokenizer.from_pretrained(&quot;meta-llama\/Llama-3-8B&quot;)\nmodel = AutoModelForCausalLM.from_pretrained(\n    &quot;meta-llama\/Llama-3-8B&quot;,\n    torch_dtype=torch.float16,  # no device_map=&quot;auto&quot; here: it fights DeepSpeed's own sharding\n)\n\n# 2. wrap the model with DeepSpeed; a plain optimizer is fine, no .cuda() needed\noptimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)\nmodel, optimizer, _, _ = deepspeed.initialize(\n    model=model,\n    optimizer=optimizer,\n    config=&quot;ds_config.json&quot;,  # the config path must be passed\n)\n\n# 3. read back the values that actually take effect (important!)\nds_config = json.load(open(&quot;ds_config.json&quot;))\neffective_grad_accum = ds_config.get(&quot;gradient_accumulation_steps&quot;, 1)\nprint(f&quot;[DeepSpeed Init] gradient_accumulation_steps = {effective_grad_accum}&quot;)<\/code><\/pre>\n<p><strong>Key point<\/strong>: deepspeed.initialize reads ds_config.json, and for anything the json defines (batch sizes, accumulation steps, fp16, an optimizer block), the json values are what actually take effect. Do not assume a value you set elsewhere survives; treat the json as the single source of truth.<\/p>\n<h3>How to write the training loop<\/h3>\n<p>The most common trap in the loop is doing gradient accumulation by hand while DeepSpeed is also doing it, with the two out of sync. I have seen people write this:<\/p>\n<pre><code class=\"lang-python language-python python\"># \u274c Wrong: manual gradient accumulation without telling DeepSpeed
\nfor step, batch in enumerate(dataloader):\n    outputs = model(**batch)\n    loss = outputs.loss \/ gradient_accumulation_steps\n    loss.backward()\n\n    if (step + 1) % gradient_accumulation_steps == 0:\n        optimizer.step()\n        optimizer.zero_grad()<\/code><\/pre>\n<p>That pattern is fine in plain PyTorch training, but it goes wrong alongside DeepSpeed: the engine schedules the real optimizer step itself from the json's gradient_accumulation_steps, so your hand-written <code>if<\/code> is at best redundant and at worst out of step with the engine's own bookkeeping.<\/p>\n<p>The correct way is to let DeepSpeed drive the whole loop:<\/p>\n<pre><code class=\"lang-python language-python python\"># \u2705 Correct: let DeepSpeed take over the training loop\nfor step, batch in enumerate(dataloader):\n    # 1. Forward pass\n    outputs = model(\n        input_ids=batch[&quot;input_ids&quot;].to(model.device),\n        attention_mask=batch[&quot;attention_mask&quot;].to(model.device),\n        labels=batch[&quot;labels&quot;].to(model.device),\n    )\n    loss = outputs.loss\n\n    # 2. Backward pass - the engine scales the loss and does the accumulation bookkeeping\n    model.backward(loss)\n\n    # 3. Optimizer step - DeepSpeed decides internally, from gradient_accumulation_steps,\n    #    when the real update happens\n    model.step()\n\n# no manual optimizer.zero_grad(): DeepSpeed manages it\n# no manual step % gradient_accumulation_steps == 0 check<\/code><\/pre>\n<h3>The config-precedence problem<\/h3>\n<p>I went through the DeepSpeed source; inside deepspeed.initialize the call chain is roughly:<\/p>\n<pre><code class=\"lang-python language-python python\"># pseudocode, paraphrasing deepspeed\/runtime\/engine.py\nclass DeepSpeedEngine:\n    def __init__(self, ..., optimizer=None, config=None, ...):\n        # 1. load the json config file\n        self.config = self._load_config(config)\n\n        # 2. the json config overrides what was passed in\n        if &quot;optimizer&quot; in self.config:\n            self.optimizer = self._configure_optimizer(self.config[&quot;optimizer&quot;])\n        elif optimizer is not None:\n            self.optimizer = optimizer  # the passed-in one is used only when the json has none\n\n        # 3. gradient_accumulation_steps comes straight from the json (default 1)\n        self.gradient_accumulation_steps = self.config.get(&quot;gradient_accumulation_steps&quot;, 1)<\/code><\/pre>\n<p>So the problem lives in steps 2 and 3: <strong>values in the json file override whatever you pass in<\/strong>. That is the exact opposite of the argparse habit, which makes it an easy trap.<\/p>\n<h3>Why I switched to config-file-first<\/h3>\n<p>At first I resisted; the command line felt more direct. Two reasons eventually won me over:<\/p>\n<p>First, <strong>centralized config makes debugging easier<\/strong>. When something breaks you look at one file instead of cross-checking code against the command line. Distributed training has enough pitfalls already; the more scattered the config, the more ways to get it wrong.<\/p>\n<p>Second, <strong>versioning and reproducibility<\/strong>. A json config can be committed to git; command-line flags are forgotten the moment the run ends, and reproducing means digging through history.<\/p>\n<p>One more selfish reason I like json-managed configs: easy experiment comparison. One directory, three config files, ds_baseline.json, ds_lr5e5.json, ds_gradaccum16.json, and switching experiments is a one-line change.<\/p>\n<h2>Technical detail: unpacking the config precedence<\/h2>\n<p>I checked the official DeepSpeed documentation, and it states this plainly:<\/p>\n<blockquote>\n<p><code>gradient_accumulation_steps<\/code> must be set in the DeepSpeed JSON config file. The value set via command line arguments will be ignored.<\/p>\n<\/blockquote>\n<p>Honestly, the design is confusing at first. DeepSpeed's philosophy is to concentrate every training hyperparameter in the JSON, so debugging means reading one file and the config stays self-consistent. The problem is that this runs exactly counter to the argparse habit: most people instinctively push parameters through the command line.<\/p>\n<p><strong>Why did I end up writing the parameters in the JSON rather than on the command line?<\/strong><\/p>\n<p>Two reasons:<\/p>\n<p>First, this is what DeepSpeed officially recommends; the docs explicitly say to set it in the JSON. If the framework is designed that way, going with the grain minimizes the odds of surprises.<\/p>\n<p>Second, JSON files can be kept in git, diffed across experiments, and documented so the reasons behind each value are clear. Command-line flags run once and vanish; reproducing the experiment later means digging through bash history.<\/p>
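The several-configs-per-directory workflow stays honest if variants are derived from a baseline instead of hand-copied. A small sketch (the config names mirror mine; the `variant` helper is hypothetical):

```python
import copy
import json

# baseline mirrors the corrected ds_config.json from above
BASELINE = {
    "train_batch_size": 32,
    "gradient_accumulation_steps": 8,
    "gradient_clipping": 1.0,
    "fp16": {"enabled": True},
}

def variant(overrides):
    # derive an experiment config from the baseline rather than copying json
    # by hand, so a shared setting changes in exactly one place
    cfg = copy.deepcopy(BASELINE)
    cfg.update(overrides)
    return cfg

# e.g. a gradaccum16 experiment: double the accumulation, double the global batch
gradaccum16 = variant({"gradient_accumulation_steps": 16, "train_batch_size": 64})
print(json.dumps(gradaccum16, indent=2))
```

Each variant can then be dumped to its own file and handed to `--deepspeed` unchanged.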
\n<p><strong>As for the trade-offs of this design<\/strong>, my take:<\/p>\n<ul>\n<li>Upside: centralized config, easy to audit, well suited to comparing multiple experiments<\/li>\n<li>Downside: it breaks with PyTorch's native argparse style, so newcomers walk right into it (see: me)<\/li>\n<\/ul>\n<p>I later thought: if DeepSpeed merged config sources by priority the way Hydra does, overriding in command-line-over-JSON order, this bug would not have existed. It does not, so I adapt to it.<\/p>\n<table>\n<thead>\n<tr>\n<th>Config source<\/th>\n<th>gradient_accumulation_steps<\/th>\n<th>In effect?<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>ds_config.json<\/td>\n<td>1<\/td>\n<td>\u2705 in effect<\/td>\n<\/tr>\n<tr>\n<td>command line &#8211;gradient_accumulation_steps 8<\/td>\n<td>8<\/td>\n<td>\u274c overridden<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>So the simplest rule is: <strong>never set it in both places. Put it only in the json, or only on the command line, and if you choose the command line, make sure your launcher actually wires that value into the config DeepSpeed receives<\/strong>.<\/p>\n<p>My recommendation is the json: it is explicit, and it makes experiment comparison easy.<\/p>\n<h2>Post-mortem: the self-flagellation section<\/h2>\n<p>The thing is, this behavior is written up in the official DeepSpeed documentation.<\/p>\n<p>I was in a hurry to get the experiment running and never read the docs carefully.<\/p>\n<p><strong>Configuring first and reading documentation never: not the first time I have made that mistake.<\/strong><\/p>\n<p>Beyond that, I should have caught several signals earlier:<\/p>\n<ol>\n<li>\n<p><strong>Memory usage below expectation<\/strong>: if gradient_accumulation_steps is supposed to be 8 but is actually 1, memory usage drops a lot. I never noticed.<\/p>\n<\/li>\n<li>\n<p><strong>Wrong loss-curve shape<\/strong>: a large gap between train loss and validation loss is essentially the overfitting signal. I was busy tuning the learning rate and never thought in the batch-size direction.<\/p>\n<\/li>\n<li>\n<p><strong>No independent check of the effective batch<\/strong>: I should have added a counter mid-training to verify whether the update-every-8-steps behavior was actually happening.<\/p>\n<\/li>\n<\/ol>\n<h2>Common pitfalls: who else has stepped in something similar<\/h2>\n<p>While writing this up I remembered stepping into a similar hole with FSDP: its <code>backward_prefetch<\/code> setting also took the config over the command line in my setup. This seems to be a recurring disease of distributed training frameworks; I cannot tell whether it is a design problem or a documentation problem.<\/p>\n<p>Megatron-LM is another one: its tensor parallelism and pipeline parallelism are configured differently again, and people routinely mix up global batch size and micro batch size. Every framework has its own config style, but the common thread is: wrong config, wasted effort.<\/p>\n<h2>Post-deployment evaluation<\/h2>\n<p>With the config fixed I reran the 2 epochs, and this time the numbers looked far better:<\/p>\n<table>\n<thead>\n<tr>\n<th>Metric<\/th>\n<th>Before fix<\/th>\n<th>After fix<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>train loss (step 1000)<\/td>\n<td>0.12<\/td>\n<td>0.85<\/td>\n<\/tr>\n<tr>\n<td>val loss (step 1000)<\/td>\n<td>2.31<\/td>\n<td>0.92<\/td>\n<\/tr>\n<tr>\n<td>loss gap<\/td>\n<td>2.19<\/td>\n<td>0.07<\/td>\n<\/tr>\n<tr>\n<td>grad_norm range<\/td>\n<td>0.01~0.05<\/td>\n<td>0.08~0.30<\/td>\n<\/tr>\n<tr>\n<td>per-step GPU memory<\/td>\n<td>28GB<\/td>\n<td>45GB<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Before the fix, validation loss barely moved while train loss collapsed: textbook overfitting. After the fix the two track each other, which is what healthy convergence looks like.<\/p>\n<p>I also noticed grad_norm rising from the 0.01~0.05 band to 0.08~0.30.
 The shift is easy to explain: assuming the script follows the usual convention of dividing the loss by the accumulation count, each broken-run update carried a single micro-batch gradient at 1\/8 scale, while a genuine 8-step accumulation sums eight of those before stepping, so the per-update norm moves up a whole tier. If your grad_norm sits suspiciously low for a long time, an effective batch size smaller than you think is worth suspecting.<\/p>\n<h2>How to avoid this class of problem<\/h2>\n<p>If I could run it again, I would do these things:<\/p>\n<ol>\n<li><strong>Add a startup check to the training script<\/strong><\/li>\n<\/ol>\n<pre><code class=\"lang-python language-python python\">import torch.distributed as dist\n\ndef validate_batch_config(ds_config, micro_batch_size):\n    &quot;&quot;&quot;Print the key batch settings before training starts and sanity-check them.&quot;&quot;&quot;\n    world_size = dist.get_world_size()\n\n    # read the values actually in effect from the DeepSpeed config\n    effective_batch = ds_config.get(&quot;train_batch_size&quot;, &quot;auto&quot;)\n    grad_accum = ds_config.get(&quot;gradient_accumulation_steps&quot;, 1)\n\n    # if it is &quot;auto&quot;, derive it from the data-parallel layout\n    if effective_batch == &quot;auto&quot;:\n        effective_batch = micro_batch_size * grad_accum * world_size\n\n    print(f&quot;[Config Check] world_size = {world_size}&quot;)\n    print(f&quot;[Config Check] grad_accum_steps = {grad_accum}&quot;)\n    print(f&quot;[Config Check] effective_batch = {effective_batch}&quot;)\n\n    # a guard against misconfiguration\n    if effective_batch &lt; 16:\n        print(&quot;\u26a0\ufe0f WARNING: effective_batch is too small, likely misconfiguration&quot;)\n        print(&quot;\u26a0\ufe0f This usually means gradient_accumulation_steps is not working&quot;)<\/code><\/pre>\n<ol start=\"2\">\n<li><strong>Watch the gradient-norm cycle early in training<\/strong><\/li>\n<\/ol>\n<p>The gradient norm should build up within an accumulation cycle. If every step's norm looks about the same, with no periodic pattern, there is a good chance gradient_accumulation_steps is not in effect.<\/p>\n<ol start=\"3\">\n<li><strong>Cross-check the memory curve in nvidia-smi<\/strong><\/li>\n<\/ol>\n<p>The larger the effective batch size, the higher the memory usage. If memory sits 30% or more below what you expect, assume the config did not take effect until proven otherwise.<\/p>\n<h2>Closing<\/h2>\n<p>The root cause of this whole episode is embarrassingly simple: <strong>I never understood DeepSpeed's config precedence rules and started running anyway<\/strong>.<\/p>\n<p>The price was 3 wasted days of training time and a model overfit beyond use.<\/p>\n<p><strong>If you are using DeepSpeed too, write a startup check function and wall this pit off, or you will fall in sooner or later.<\/strong> The check is not complicated, but it lets you spot a config problem in the first minute of training instead of discovering it two days later from the loss
 curve.<\/p>\n<p>Wrong config, wasted effort.<\/p>","protected":false},"excerpt":{"rendered":"<p>I set gradient_accumulation_steps=8 for a theoretical effective batch of 32, yet the per-step loss dropped like a rocket and I blamed the learning rate. Two days of debugging later it turned out to be DeepSpeed's config-override behavior: the accumulation actually in effect was 1.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[401],"tags":[589,520,144,590,570],"class_list":["post-743","post","type-post","status-publish","format-standard","hentry","category-ai","tag-deepspeed","tag-lora","tag-pytorch","tag-590","tag-570"],"views":4,"_links":{"self":[{"href":"https:\/\/www.liaoxinghui.com\/index.php?rest_route=\/wp\/v2\/posts\/743","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.liaoxinghui.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.liaoxinghui.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.liaoxinghui.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.liaoxinghui.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=743"}],"version-history":[{"count":1,"href":"https:\/\/www.liaoxinghui.com\/index.php?rest_route=\/wp\/v2\/posts\/743\/revisions"}],"predecessor-version":[{"id":746,"href":"https:\/\/www.liaoxinghui.com\/index.php?rest_route=\/wp\/v2\/posts\/743\/revisions\/746"}],"wp:attachment":[{"href":"https:\/\/www.liaoxinghui.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=743"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.liaoxinghui.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcat
egories&post=743"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.liaoxinghui.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=743"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}