
gm-nas's Introduction

GM-NAS

This repository contains the PyTorch implementation of the paper:
Generalizing Few-Shot NAS with Gradient Matching (ICLR 2022).

By Shoukang Hu*, Ruochen Wang*, Lanqing Hong, Zhenguo Li, Cho-Jui Hsieh, and Jiashi Feng.

For experiments on NASBench-201 and DARTS space, please refer to WS-GM/README.md

For experiments on ProxylessNAS space, please refer to ProxylessNAS-GM/README.md

For experiments on OFA space, please refer to once-for-all-GM/README.md

For evaluating searched architectures from ProxylessNAS and OFA space, please refer to Imagenet_train/README.md

Patch Note (Oct 30, 2022)

There was a logging error in NB201's architecture selection phase that caused some confusion about reproducibility. We have updated the logging. For more details on the architecture selection method, please refer to Appendix C of the paper.

Citation

If you find our code or trained models useful in your research, please consider starring our repo and citing our paper:

@inproceedings{hu2022generalizing,
  title={Generalizing Few-Shot NAS with Gradient Matching},
  author={Hu, Shoukang and Wang, Ruochen and Hong, Lanqing and Li, Zhenguo and Hsieh, Cho-Jui and Feng, Jiashi},
  booktitle={International Conference on Learning Representations},
  year={2022}
}

gm-nas's Issues

Could you please provide the source code & weights for reproducing the results of Table 7?

Thanks for the great work again!

I've successfully run WS-GM/exp_scripts/rsws-ws-201.sh, and I have now obtained four supernet checkpoints, e.g., supernets_0/1/2/3.pt.

I want to reproduce the results of Table 7, but I have some questions:

  1. What do you mean by the top 0.2%/0.5%/1% architectures in NAS-Bench-201? Are these architectures ranked according to their real test accuracy or their proxy accuracy?
  2. Table 7 shows that you obtained 16 sub-supernets after partitioning. However, I only got 4 sub-supernets.
  3. Could you please provide the source code and model weights for reproducing the Spearman score on the top 1% of models? (A sketch of the metric itself follows this list.)
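
For context, here is a minimal sketch of how such a Spearman score could be computed, assuming it is the rank correlation between supernet-estimated (proxy) accuracies and ground-truth NAS-Bench-201 test accuracies; the variable names and values below are hypothetical placeholders, not taken from the repo:

    # Minimal sketch: Spearman rank correlation between proxy and true rankings.
    # `proxy_accs` and `true_accs` are hypothetical placeholder values.
    from scipy.stats import spearmanr

    proxy_accs = [90.1, 88.7, 91.3, 89.9]  # accuracies estimated via the supernet
    true_accs = [93.2, 91.0, 93.8, 92.1]   # accuracies queried from NAS-Bench-201

    rho, pval = spearmanr(proxy_accs, true_accs)
    print(f"Spearman rho = {rho:.4f} (p = {pval:.4g})")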

Can't get the performance reported in the paper

Hello,

I'm trying to run the provided code but the results are inconsistent with the paper.

In the NASBench-201 space on CIFAR-10, after running train_search.py:
With DARTS, I get train: 86.59 and test: 90.44.
With SNAS, I get valid: 89.38 and test: 92.76.

Both experiments were conducted with seed = 0.

Could you please share the versions you used (Python, PyTorch, ...) or suggest some other possible solutions?
Thank you.

How do you train the sub-supernets before splitting the supernet by gradients?

Thanks for the great work.

I wonder how you train the sub-supernets before splitting the supernet by gradients.

Let's take NASBench201 as an example: say we have a sub-supernet with an encoding of

 tensor([[1., 0., 1., 1., 0.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 0., 1., 1., 0.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.]], device='cuda:0')

Are all operations with the value of 1 involved in the forward and backward processes? Or do you randomly sample only one operation for each edge for each training batch?
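
For reference, here is a minimal sketch of the second option (uniform single-path sampling restricted by the encoding mask). This is an illustrative assumption about the sampling scheme, not necessarily the repo's actual implementation:

    # Hypothetical sketch: sample one allowed operation per edge, uniformly
    # at random, under a sub-supernet encoding mask (1 = op still allowed).
    import torch

    encoding = torch.tensor([[1., 0., 1., 1., 0.],
                             [1., 1., 1., 1., 1.],
                             [1., 1., 1., 1., 1.],
                             [1., 0., 1., 1., 0.],
                             [1., 1., 1., 1., 1.],
                             [1., 1., 1., 1., 1.]])

    def sample_single_path(encoding):
        """For each edge, pick exactly one allowed op uniformly at random."""
        path = []
        for edge_mask in encoding:
            allowed = edge_mask.nonzero(as_tuple=True)[0]  # indices of allowed ops
            choice = allowed[torch.randint(len(allowed), (1,))].item()
            path.append(choice)
        return path  # one op index per edge, resampled every training batch

    print(sample_single_path(encoding))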

iter(dataset) in training loop for NASBench201

In WS-GM/nasbench201/train_search.py, the train function has
input, target = next(iter(train_queue))
inside the per-batch for loop. This creates a new iterator every batch, which significantly slows down training. It also means one epoch does not loop over the entire dataset: because iter(train_queue) is re-initialized every batch, each next call grabs only the first batch of a freshly reshuffled iterator. You get completely random samples from the dataset, but there is no guarantee they are disjoint. A sketch of the pattern and one possible fix follows.
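
Below is a minimal, self-contained sketch of the reported pattern and one possible fix, assuming train_queue is a shuffling PyTorch DataLoader; the toy dataset and step count are placeholders:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Toy stand-in for the CIFAR-10 train_queue used in train_search.py.
    data = TensorDataset(torch.randn(8, 3, 32, 32), torch.randint(10, (8,)))
    train_queue = DataLoader(data, batch_size=2, shuffle=True)

    # Reported pattern: a fresh iterator every batch only ever yields the
    # first batch of a reshuffled loader, so an "epoch" never covers the
    # whole dataset:
    #   input, target = next(iter(train_queue))

    # One possible fix: build the iterator once and refresh it only when the
    # dataset is exhausted, so samples within an epoch stay disjoint.
    train_iter = iter(train_queue)
    for step in range(10):
        try:
            input, target = next(train_iter)
        except StopIteration:  # epoch boundary: reshuffle and continue
            train_iter = iter(train_queue)
            input, target = next(train_iter)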
