
weatherlearn's Introduction

WeatherLearn

PyTorch implementations of a zoo of deep learning models for weather forecasting.

Dependencies

python = "^3.11"
torch = "2.1.0"
timm = "0.9.10"
numpy = "1.23.5"

Model Zoo

Pangu-Weather

Model Architecture

[Figure: Pangu-Weather model architecture]

Example

# Pangu
import torch

from weatherlearn.models import Pangu

if __name__ == '__main__':
    B = 1  # batch size
    surface = torch.randn(B, 4, 721, 1440)  # B, C, Lat, Lon
    surface_mask = torch.randn(3, 721, 1440)  # topography mask, land-sea mask, soil-type mask
    upper_air = torch.randn(B, 5, 13, 721, 1440)  # B, C, Pl, Lat, Lon

    pangu_weather = Pangu()

    output_surface, output_upper_air = pangu_weather(surface, surface_mask, upper_air)

# Pangu_lite
import torch

from weatherlearn.models import Pangu_lite

if __name__ == '__main__':
    B = 1  # batch size
    surface = torch.randn(B, 4, 721, 1440)  # B, C, Lat, Lon
    surface_mask = torch.randn(3, 721, 1440)  # topography mask, land-sea mask, soil-type mask
    upper_air = torch.randn(B, 5, 13, 721, 1440)  # B, C, Pl, Lat, Lon

    pangu_lite = Pangu_lite()

    output_surface, output_upper_air = pangu_lite(surface, surface_mask, upper_air)

References

@article{bi2023accurate,
  title={Accurate medium-range global weather forecasting with 3D neural networks},
  author={Bi, Kaifeng and Xie, Lingxi and Zhang, Hengheng and Chen, Xin and Gu, Xiaotao and Tian, Qi},
  journal={Nature},
  volume={619},
  number={7970},
  pages={533--538},
  year={2023},
  publisher={Nature Publishing Group}
}

@article{bi2022pangu,
  title={Pangu-Weather: A 3D High-Resolution Model for Fast and Accurate Global Weather Forecast},
  author={Bi, Kaifeng and Xie, Lingxi and Zhang, Hengheng and Chen, Xin and Gu, Xiaotao and Tian, Qi},
  journal={arXiv preprint arXiv:2211.02556},
  year={2022}
}

Fuxi

Model Architecture

[Figure: FuXi model architecture]

Example

import torch

from weatherlearn.models import Fuxi

if __name__ == '__main__':
    B = 1  # batch size
    in_chans = out_chans = 70  # number of input / output channels
    input = torch.randn(B, in_chans, 2, 721, 1440)  # B, C, T, Lat, Lon

    fuxi = Fuxi()
    # patch_size : Default: (2, 4, 4)
    # embed_dim : Default: 1536
    # num_groups : Default: 32
    # num_heads : Default: 8
    # window_size : Default: 7

    output = fuxi(input)  # B, C, Lat, Lon

References

FuXi: A cascade machine learning forecasting system for 15-day global weather forecast
Lei Chen, Xiaohui Zhong, Feng Zhang, Yuan Cheng, Yinghui Xu, Yuan Qi, Hao Li
Published in npj Climate and Atmospheric Science.

License

CC BY-NC-SA 4.0 license

TODO

weatherlearn's People

Contributors

lizhuoq


weatherlearn's Issues

Data doesn't appear to be normalised?

For the Pangu scripts, there don't appear to be any modifications made to the data to normalise it. I can see that cal_mean_std.py calculates the mean and standard deviation of each variable, but as far as I can tell this isn't used anywhere else.

Is this intentional?
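
For reference, here is a minimal sketch of how per-variable statistics like those computed by cal_mean_std.py could be applied before training and inverted afterwards. The shapes of the mean/std arrays and the helper names are assumptions for illustration, not the repository's actual API.

import numpy as np

def normalize(x, mean, std, eps=1e-6):
    # x: (C, Lat, Lon) raw fields; mean/std: (C, 1, 1) per-variable statistics
    return (x - mean) / (std + eps)

def denormalize(x, mean, std, eps=1e-6):
    # invert the transform on model outputs
    return x * (std + eps) + mean

# Placeholder statistics computed on the fly; in practice they would come
# from the values written out by cal_mean_std.py.
surface = np.random.randn(4, 721, 1440).astype(np.float32)
mean = surface.mean(axis=(1, 2), keepdims=True)
std = surface.std(axis=(1, 2), keepdims=True)
surface_norm = normalize(surface, mean, std)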

Dimensions mismatch when doing a forward pass in Fuxi()

Hi,
I am trying to test the Fuxi() model on a weather dataset. However, I am hitting an error when the input passes through SwinTransformerV2Block(). The details are:

The input dimension is (B, in_chans = 1, T = 2, 46, 90)
patch_size=(2, 4, 4)
window_size = 8

While passing through the UTransformer block -> self.layer(x) -> SwinTransformerV2Block.forward() -> the window_partition function, I hit a dimension mismatch at this line.

Would you please help? It looks like there is an issue with the padding function, or the window_size is not compatible.

My code:

import torch
from weatherlearn.models import Fuxi

B = 1
in_chans = 1
out_chans = 1  # number of input / output channels
input = torch.randn(B, in_chans, 2, 46, 90)  # B, C, T, Lat, Lon
model = Fuxi(img_size=(2, 46, 90),
             patch_size=(2, 4, 4),
             in_chans=in_chans, out_chans=out_chans,
             embed_dim=128,
             num_groups=32,
             num_heads=4,
             window_size=8)

output = model(input)  # B, C, Lat, Lon


Error:

x = x.view(B, H // window_size[0], window_size[0], W // window_size[1], window_size[1], C)
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size[0], window_size[1], C)
return windows

RuntimeError: shape '[1, 1, 8, 2, 8, 128]' is invalid for input of size 18432

Your support would be very helpful.
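
For what it's worth, this kind of reshape error usually means the feature map entering window_partition is not divisible by the window size. Below is a generic sketch of the standard Swin-style padding fix, assuming a (B, H, W, C) layout; it is not the repository's actual padding code, and the shapes inside Fuxi's UTransformer may differ.

import torch
import torch.nn.functional as F

def pad_to_window(x, window_size):
    # Pad a (B, H, W, C) tensor so H and W are divisible by window_size.
    B, H, W, C = x.shape
    pad_h = (window_size - H % window_size) % window_size
    pad_w = (window_size - W % window_size) % window_size
    # F.pad pads the trailing dims first: (C_left, C_right, W_left, W_right, H_left, H_right)
    return F.pad(x, (0, 0, 0, pad_w, 0, pad_h))

x = torch.randn(1, 12, 23, 128)   # an example feature map
x = pad_to_window(x, window_size=8)
print(x.shape)                    # torch.Size([1, 16, 24, 128])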

Skip Connection

Hi, first of all, thank you, this is super useful.
I was wondering about the skip connection between layer 1 and layer 4 of the Pangu-Weather model.
In your implementation the skip is added after layer 4, just before patch recovery, following the original Pangu pseudo-code: https://github.com/198808xc/Pangu-Weather/blob/403b7b69ee66f220f78afeeafa117dd0f52c0b7d/pseudocode.py#L236

However, this does not match the Pangu-Weather figure, which seems to indicate that the skip is applied before layer 4, and intuitively, adding the skip right before patch recovery is far too late for the network to do any meaningful processing of the skip connection. I think it might be an error in their pseudo-code.

Thoughts?
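
To make the two readings concrete, here is a schematic sketch of both wirings. It ignores the down/upsampling between the layers and assumes channels-last tensors; layer1 through layer4 and patch_recovery are placeholder callables, not the modules from this repository, and in the second variant layer4 would need twice the input channels.

import torch

def forward_skip_after_layer4(x, layer1, layer2, layer3, layer4, patch_recovery):
    # Wiring from the official pseudo-code: concatenate just before patch recovery.
    skip = layer1(x)
    x = layer4(layer3(layer2(skip)))
    return patch_recovery(torch.cat([skip, x], dim=-1))

def forward_skip_before_layer4(x, layer1, layer2, layer3, layer4, patch_recovery):
    # Wiring suggested by the figure: layer 4 still processes the concatenated features.
    skip = layer1(x)
    x = layer3(layer2(skip))
    return patch_recovery(layer4(torch.cat([skip, x], dim=-1)))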

Finetuning Existing Model Rather than training completely new Pangu

Hi lizhuoq, thanks for this code; it has been really helpful. The Pangu-Weather portion is an accurate PyTorch implementation of the official Pangu-Weather repository: it takes the full 24-hour Pangu .onnx model and runs a prediction, and it also provides the code to train a pangu_lite model and run it locally.

I'm looking to change the format of Pangu-Weather slightly for downscaling and wanted to get your thoughts on taking the full .onnx model and fine-tuning its first few layers rather than training a completely new model. In examples\pangu-lite\train.py, how would I amend the script to pick up where the full Pangu .onnx model left off?

Appreciate any suggestions
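
One possible starting point, sketched under heavy assumptions: it presumes the official .onnx weights have already been converted into a PyTorch state_dict (that conversion step is not provided by this repository), and the file name and parameter-name prefixes used for freezing are illustrative, not the actual names in weatherlearn.

import torch
from weatherlearn.models import Pangu

model = Pangu()

# Hypothetical file containing the converted 24 h Pangu weights.
state_dict = torch.load("pangu_24h_converted.pth", map_location="cpu")
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing, "unexpected:", unexpected)

# Freeze everything except the blocks you want to adapt (names are illustrative).
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("patch_embed", "layer1"))

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)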

About the launch of FuXi.

Hello!
I have a few questions about running FuXi:

  1. Roughly how much GPU memory is needed to run the model? 32 GB of CUDA memory was not enough for me to do a test run of the example from your repository (could there be a memory leak?).
  2. The original model has an input parameter encoding the time shift ("temb" from "time_encoding"). Is there any way to combine the original model's weights with this implementation and retrain the network in PyTorch?

P.S. My task is fine-tuning the model on a one-hour time grid.
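
On the memory question, here is a minimal sketch of measures worth trying before concluding there is a leak: run inference without autograd and in reduced precision, then check the peak allocation. Whether 32 GB suffices still depends on embed_dim and the input resolution; the shape below matches the README example.

import torch
from weatherlearn.models import Fuxi

model = Fuxi().cuda().eval()
x = torch.randn(1, 70, 2, 721, 1440, device="cuda")

with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    out = model(x)

print(out.shape, torch.cuda.max_memory_allocated() / 2**30, "GiB")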
