
condensenetv2's Introduction

Hi there 👋

My name is Haojun Jiang(蒋昊峻) [My Google Scholar], a fourth-year Ph.D. student in the Department of Automation at Tsinghua University, advised by Prof. Gao Huang. Before that, I received my B.E. degree in Automation at Tsinghua University. 😄

Beginning in May 2023, I will be shifting my research focus to autonomous medical ultrasound systems. My goal is to make ultrasound examinations more intelligent and autonomous, freeing doctors' hands and improving medical efficiency.

If you are also passionate about research in this area, please feel free to reach out using the contact details below. I am always open to a fruitful exchange of ideas.

I’m currently working on vision and language.

😄 Projects

  • The World's First Autonomous Echocardiography (TTE) Robotic System (In Progress)
  • Towards Expert-level Autonomous Ultrasonography Using AI-Driven Robotic System (Under Review at Nature Machine Intelligence)
  • Cardiac Copilot MICCAI'24 Early Accept [Paper][Code]
  • Cross-Modal Adapter [Paper][Code]
  • Pseudo-Q CVPR'22 [Paper][Code]
  • CondenseNetV2 CVPR'21 [Paper][Code]
  • AdaFocus ICCV'21 [Paper][Code]

😄 Awesome Collections

  • Awesome Parameter Efficient Transfer Learning [Repo]
  • Awesome Autonomous Medical Ultrasound System [Repo]

💬 News

[2024/05]: Cardiac Copilot: Automatic Probe Guidance for Echocardiography with World Model is early accepted by MICCAI 2024!
[2024/04]: Our intelligent autonomous ultrasound robot won the Silver Award at the Tsinghua Challenge Cup Entrepreneurship Competition and was recommended to represent Tsinghua University in Beijing municipal competitions.
[2024/03]: I was selected for the Tsinghua University 'Qi Chuang' Student Entrepreneurship Talent Program (20/60000) due to our innovative prototype in the field of intelligent autonomous ultrasound robots.
[2024/01]: Our intelligent autonomous ultrasound robot won the First Prize (1/245) at the Global Artificial Intelligence and Robotics Innovation Competition organized by the Guoqiang Research Institute of Tsinghua University.
[2023/12]: Our intelligent autonomous ultrasound robot has been selected as one of the Top-10 in the Tsinghua Medical-Engineering Innovation Competition.
[2023/07]: Deep Incubation: Training Large Models by Divide-and-Conquering is accepted by ICCV 2023! Paper is available at arXiv.
[2023/05]: I will be shifting my focus to the research of autonomous medical ultrasound systems.
[2023/01]: Text4Point is now available on arXiv. This work proposes a novel Text4Point framework for constructing language-guided 3D point cloud models. The key idea is to use 2D images as a bridge connecting the point cloud and language modalities.
[2022/12]: A curated list on Parameter Efficient Transfer Learning in computer vision and multimodal learning was created.
[2022/12]: Deep Incubation: Training Large Models by Divide-and-Conquering is now available on arXiv. This work explores a novel Modular Training paradigm that divides a large model into smaller modules, trains them independently, and reassembles the trained modules to obtain the target model.
[2022/11]: Cross-Modal Adapter is now available on arXiv. This work explores adapter-based parameter-efficient transfer learning for the text-video retrieval domain. It reduces 99.6% of fine-tuned parameters without performance degradation.
[2022/09]: An introduction to Parameter Efficient Transfer Learning was given at the BAAI dynamic neural network seminar.
[2022/07]: Glance and Focus Networks for Dynamic Visual Recognition is accepted by TPAMI (IF=24.31)!
[2022/07]: AI Time invited me to give a talk about Pseudo-Q.
[2022/04]: An introduction to 3D Visual Grounding was given at the BAAI dynamic neural network seminar.
[2022/04]: A curated list on 3D Vision and Language was created.
[2022/03]: Pseudo-Q and AdaFocusV2 are accepted by CVPR 2022!
[2021/07]: AdaFocus is accepted by ICCV 2021!
[2021/03]: CondenseNetV2 is accepted by CVPR 2021!

🌱 Academic Services

  • Conference Reviewer: CVPR, ICCV, ECCV

📫 Contact

Please include a brief note about the reason for reaching out when you contact me.

  • E-mail: jhj20 at mails.tsinghua.edu.cn
  • WeChat: LebronJames5Champ

✨ GitHub Stats

[GitHub stats card]

condensenetv2's People

Contributors

  • jianghaojun


condensenetv2's Issues

issue about SFR-ShuffleNetV2

SFR-ShuffleNetV2 is mentioned in the paper, but I can't find its code. In SFR-ShuffleNetV2, is the output of the 1×1 convolution added to or concatenated with the other half of the input channels?
The paper also mentions that the channels are shuffled after the SFR module in CondenseNetV2, but I don't see channel shuffle used in the corresponding condensenetv2.py file. Does that mean channel shuffle is not necessary? Thanks!
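
For reference, the channel shuffle mentioned above is usually the standard ShuffleNet-style reshape/transpose. A minimal sketch of that common formulation (not taken from this repository's code):

import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    # Reshape (N, C, H, W) -> (N, g, C//g, H, W), swap the group and
    # per-group channel dimensions, then flatten back to (N, C, H, W).
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

# quick sanity check on a dummy feature map
x = torch.randn(1, 8, 4, 4)
assert channel_shuffle(x, groups=2).shape == x.shape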

What is the expected ImageNet dataset structure for training?

Hello, I see the source code is based on PyTorch Image Models, but the dataset and training code has been modified. What file structure and directory layout should the ImageNet dataset have for training, and where should the label files be placed?
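
For what it's worth, most PyTorch ImageNet training code derived from PyTorch Image Models expects the standard ImageFolder layout, where labels come from sub-directory names and no separate label file is needed. A minimal sketch under that assumption (this repository's loader may differ):

# Assumed directory layout (torchvision ImageFolder convention):
#
#   imagenet/
#     train/
#       n01440764/ xxx.JPEG ...
#       n01443537/ ...
#     val/
#       n01440764/ ...
#       n01443537/ ...
from torchvision import datasets, transforms

train_set = datasets.ImageFolder(
    'imagenet/train',
    transform=transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.ToTensor(),
    ]),
)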

Request for the authors' pretrained ImageNet model

Hello, may I use the authors' pretrained ImageNet model for transfer learning? I could not find such a model. I simply took the architecture and used it for image retrieval by removing the final classification layer and adding a linear layer (1024, 128), but the results were not very good, so I probably did something wrong. I also found that it converges very slowly: with the same number of epochs and the same loss function, this network reaches only about 50% after 65 epochs, while ImageNet-pretrained AlexNet reaches 70%. My hardware is limited and I would like to get results quickly, so I hope the authors can provide a suitable ImageNet-pretrained model. My e-mail is [email protected]. Many thanks. Here are my modifications:

class CondenseNetV2(nn.Module):
    def __init__(self, args, bit):
        super(CondenseNetV2, self).__init__()

        self.stages = args.stages
        self.growth = args.growth
        assert len(self.stages) == len(self.growth)
        self.args = args
        self.progress = 0.0
        if args.dataset in ['cifar10', 'cifar100']:
            self.init_stride = 1
            self.pool_size = 8
        else:
            self.init_stride = 2
            self.pool_size = 7

        self.features = nn.Sequential()
        ### Initial nChannels should be 3
        self.num_features = 2 * self.growth[0]
        ### Dense-block 1 (224x224)
        self.features.add_module('init_conv', nn.Conv2d(3, self.num_features,
                                                        kernel_size=3,
                                                        stride=self.init_stride,
                                                        padding=1,
                                                        bias=False))
        for i in range(len(self.stages)):
            activation = 'HS' if i >= args.HS_start_block else 'ReLU'
            use_se = True if i >= args.SE_start_block else False
            ### Dense-block i
            self.add_block(i, activation, use_se)

        # Only this part was modified:
        # self.fc = nn.Linear(self.num_features, args.fc_channel)
        # self.fc_act = HS()
        self.hash = nn.Linear(self.num_features, bit)
        self.fc_act = HS()
        ### Classifier layer
        # self.classifier = nn.Linear(args.fc_channel, args.num_classes)
        self._initialize()


def cdnv2_b(args, bit):  # unchanged apart from the added `bit` argument
    args.stages = '2-4-6-8-6'
    args.growth = '6-12-24-48-96'
    print('Stages: {}, Growth: {}'.format(args.stages, args.growth))
    args.stages = list(map(int, args.stages.split('-')))
    args.growth = list(map(int, args.growth.split('-')))
    args.condense_factor = 6
    args.trans_factor = 6
    args.group_1x1 = 6
    args.group_3x3 = 6
    args.group_trans = 6
    args.bottleneck = 4
    args.last_se_reduction = 16
    args.HS_start_block = 2
    args.SE_start_block = 3
    args.fc_channel = 1024
    return CondenseNetV2(args, bit)


# Only these args are passed; everything else is unchanged.
import os
import argparse
import warnings
warnings.filterwarnings("ignore")

parser = argparse.ArgumentParser(description='PyTorch Condensed Convolutional Networks')
args, unknown = parser.parse_known_args()
args.dataset = 'cifar100'
args.num_classes = 100
model = cdnv2_b(args, 64)
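
If a pretrained ImageNet checkpoint does become available, a common way to reuse it with a modified head like the one above is to load it with strict=False, so missing or renamed head weights are simply skipped. A minimal sketch, where the checkpoint filename and key layout are assumptions:

import torch

model = cdnv2_b(args, 64)
# 'cdnv2_b.pth' is a hypothetical checkpoint file; adjust if the file
# wraps the weights under a key such as 'state_dict'.
state = torch.load('cdnv2_b.pth', map_location='cpu')
missing, unexpected = model.load_state_dict(state, strict=False)
print('missing keys:', missing)        # e.g. the new hash head
print('unexpected keys:', unexpected)  # e.g. the original classifier weights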

Pruning issue

Awesome job!
Recently, I read your paper and have a question about the way you prune the weights. In the paper, you write the following:

M^g_{i,j} is set to zero for all j in g-th group for each pruned output feature map i.

Maybe it should be implemented as
self._mask[d, i:i+d_in, :, :].fill_(0)
rather than

self._mask[d, i::self.groups, :, :].fill_(0)

Is there anything wrong with my understanding?
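
To make the difference between the two indexing schemes concrete, here is a small standalone sketch (the tensor shape and indices are hypothetical, not the repository's actual _mask layout):

import torch

groups, d_in = 4, 4                      # hypothetical: 4 groups, 4 channels per group
mask_a = torch.ones(1, groups * d_in, 1, 1)
mask_b = torch.ones(1, groups * d_in, 1, 1)
i = 1

# contiguous slice: zeroes the block of channels i .. i+d_in-1
mask_a[0, i:i + d_in, :, :].fill_(0)

# strided slice: zeroes channels i, i+groups, i+2*groups, ... (one per group)
mask_b[0, i::groups, :, :].fill_(0)

print(mask_a.flatten().tolist())
print(mask_b.flatten().tolist())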

The converted model gets larger

Hi, thanks for your great contribution!
When I convert the model trained on CIFAR-10, I find that the converted model becomes larger, e.g. 10723 -> 24418. How can I convert the model correctly, or is this expected behaviour?

The training arguments on CIFAR-10 are:
--model condensenetv2 -b 64 -j 12 --data cifar10 --stages 14-14-14 --growth 8-16-32

thanks!
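
As a quick way to compare the two models on equal footing, one can count learnable parameters directly instead of relying on checkpoint file size. A minimal sketch, assuming model and converted_model are the nn.Module instances before and after conversion:

import torch.nn as nn

def count_params(model: nn.Module) -> int:
    # Count only learnable parameters; buffers and pruning masks are excluded.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# hypothetical usage once both modules are constructed/loaded:
# print(count_params(model), count_params(converted_model))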
