How should we view Google's release of V-MoE, its largest vision model to date, with 15 billion parameters and 90.35% on ImageNet?

    • 188****7632
      188****7632 last edited by

      Sparsely-gated Mixture of Experts networks (MoEs) have demonstrated excellent scalability in Natural Language Processing. In Computer Vision, however, almost all performant networks are “dense”, that is, every input is processed by every parameter. We present a Vision MoE (V-MoE), a sparse version of the Vision Transformer, that is scalable and competitive with the largest dense networks. When applied to image recognition, V-MoE matches the performance of state-of-the-art networks, while requiring as little as half of the compute at inference time. Further, we propose an extension to the routing algorithm that can prioritize subsets of each input across the entire batch, leading to adaptive per-image compute. This allows V-MoE to trade-off performance and compute smoothly at test-time. Finally, we demonstrate the potential of V-MoE to scale vision models, and train a 15B parameter model that attains 90.35% on ImageNet.
      https://arxiv.org/abs/2106.05974
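      For anyone unfamiliar with sparsely-gated MoE layers: a small routing network scores the experts for each token and only the top-k experts are actually run, so most of the parameters stay idle for any given input. Below is a minimal PyTorch sketch of top-k routing; it is purely illustrative, and the `Expert`/`SparseMoE` names, the layer sizes, and the plain softmax top-k router are my own assumptions, not the paper's implementation, which adds expert capacity limits and the batch-prioritized routing mentioned in the abstract.

      ```python
      # Minimal sketch of a sparsely-gated MoE layer with top-k routing.
      # Illustrative only; not the V-MoE code.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class Expert(nn.Module):
          """A small MLP; V-MoE places such experts in some ViT MLP blocks."""
          def __init__(self, dim, hidden):
              super().__init__()
              self.net = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

          def forward(self, x):
              return self.net(x)

      class SparseMoE(nn.Module):
          def __init__(self, dim, hidden, num_experts=8, k=2):
              super().__init__()
              self.experts = nn.ModuleList([Expert(dim, hidden) for _ in range(num_experts)])
              self.router = nn.Linear(dim, num_experts)  # gating network
              self.k = k

          def forward(self, tokens):                          # tokens: (num_tokens, dim)
              gates = F.softmax(self.router(tokens), dim=-1)  # routing probabilities per expert
              weights, idx = gates.topk(self.k, dim=-1)       # keep only the top-k experts per token
              out = torch.zeros_like(tokens)
              for slot in range(self.k):
                  for e, expert in enumerate(self.experts):
                      mask = idx[:, slot] == e                # tokens routed to expert e in this slot
                      if mask.any():
                          out[mask] += weights[mask, slot, None] * expert(tokens[mask])
              return out  # each token was processed by only k of the experts
      ```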
      I work in NLP; could someone explain how large the parameter counts of classic models like ResNet and ResNeXt are? 🤔 (A quick way to check is sketched below.)
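      For reference, parameter counts of the standard classification models are easy to compute from the architectures in torchvision.models by summing p.numel() over model.parameters(); a small sketch:

      ```python
      # Count parameters of a few torchvision classification models (in millions).
      import torchvision.models as models

      for name, ctor in [("resnet50", models.resnet50),
                         ("resnet152", models.resnet152),
                         ("resnext101_32x8d", models.resnext101_32x8d)]:
          m = ctor()  # untrained architecture is enough for counting parameters
          n_params = sum(p.numel() for p in m.parameters())
          print(f"{name}: {n_params / 1e6:.1f}M parameters")
      ```

      For scale, resnet50 comes out to roughly 25.6M parameters, resnet152 to about 60M, and resnext101_32x8d to just under 89M, all orders of magnitude smaller than the 15B-parameter V-MoE.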

      • Alice_恒源云

        OP, could you share a bit more about this?~
