<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" 
  xmlns:content="http://purl.org/rss/1.0/modules/content/" 
  xmlns:dc="http://purl.org/dc/elements/1.1/" 
  xmlns:atom="http://www.w3.org/2005/Atom" 
  xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" 
  xmlns:media="http://search.yahoo.com/mrss/">
  <channel>
    <title>k8s on 行李の底に収めたり[YuWd]</title>
    <link>https://yuiga.dev/blog/en/tags/k8s/</link>
    <description>Recent content in k8s on 行李の底に収めたり[YuWd]</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <copyright>©2026, All Rights Reserved</copyright>
    <lastBuildDate>Sun, 17 Sep 2023 09:58:05 +0900</lastBuildDate>
    <atom:link href="https://yuiga.dev/blog/en/tags/k8s/index.xml" rel="self" type="application/rss+xml" />
      <item>
        <title>【k8s・DDP】torch.load Is Slow on a Cluster</title>
        <link>https://yuiga.dev/blog/en/ja/posts/k8sddp%E3%82%AF%E3%83%A9%E3%82%B9%E3%82%BF%E4%B8%8A%E3%81%A7%E3%81%AEtorch.load%E3%81%8C%E9%81%85%E3%81%84/</link>
        <pubDate>Sun, 17 Sep 2023 09:58:05 +0900</pubDate>
        
        <atom:updated>Sun, 17 Sep 2023 09:58:05 +0900</atom:updated>
        <guid>https://yuiga.dev/blog/en/ja/posts/k8sddp%E3%82%AF%E3%83%A9%E3%82%B9%E3%82%BF%E4%B8%8A%E3%81%A7%E3%81%AEtorch.load%E3%81%8C%E9%81%85%E3%81%84/</guid>
        <description>Overview: Huge embeddings are saved externally in chunks, and we want each GPU to load them during training with DDP (Distributed Data Parallel). In that case, the time taken by torch.load(path, map_location=f&amp;quot;cuda:{rank}&amp;quot;) can vary widely. Premise: torch.loa</description>
        
        <dc:creator>YuWd (Yuiga Wada)</dc:creator>
        <category>k8s</category>
        <category>PyTorch</category>
        <category>post</category>
      </item>
  </channel>
</rss>
