
lompe's Introduction

Overview

LOcal Mapping of Polar ionospheric Electrodynamics (Lompe)

Lompe is a tool for estimating regional maps of ionospheric electrodynamics using measurements of plasma convection and magnetic field disturbances in space and on the ground.

We recommend using the examples to learn how to use Lompe, but the general workflow looks like this:

>>> # prepare datasets (as many as you have - see lompe.Data doc string for how to format)
>>> my_data1 = lompe.Data(*data1)
>>> my_data2 = lompe.Data(*data2)
>>> # set up grid (the parameters depend on your region, target resolution, etc.):
>>> grid = lompe.cs.CSgrid(lompe.cs.CSprojection(*projectionparams), *gridparams)
>>> # initialize model with grid and functions to calculate Hall and Pedersen conductance
>>> # The Hall and Pedersen functions should take (lon, lat) as parameters
>>> model = lompe.Emodel(grid, (Hall_function, Pedersen_function))
>>> # add data:
>>> model.add_data(my_data1, my_data2)
>>> # run inversion
>>> model.run_inversion()
>>> # now the model vector is ready, and we can plot plasma flows, currents, magnetic fields, ...
>>> model.lompeplot()
>>> # or calculate some quantity, like plasma velocity:
>>> ve, vn = model.v(mylon, mylat)

Install

(NB: in the commands below, if you do not have mamba, replace mamba with conda.)

Option 0: using pip directly

The package is pip-installable from GitHub directly with:

pip install "lompe[deps-from-github,extras] @ git+https://github.com/klaundal/lompe.git@main"

You can omit some of the optional packages by removing ,extras.

This could also be done within a minimal conda environment created with, e.g., mamba create -n lompe python=3.10 fortran-compiler
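
Put together, a minimal end-to-end setup could look like this (assuming mamba; otherwise substitute conda):

mamba create -n lompe python=3.10 fortran-compiler
mamba activate lompe
pip install "lompe[deps-from-github,extras] @ git+https://github.com/klaundal/lompe.git@main"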

Option 1: without development install of dipole, polplot, secsy

Get the code, create a suitable conda environment, then use pip to install the package in editable (development) mode:

git clone https://github.com/klaundal/lompe
mamba env create -f lompe/binder/environment.yml -n lompe
mamba activate lompe
pip install --editable ./lompe[extras,deps-from-github]

Editable mode (-e or --editable) means that the install is directly linked to the location where you cloned the repository, so you can edit the code.

Note that in this case, the deps-from-github option means that the dipole, polplot, secsy packages are installed directly from their source on GitHub.

Option 2: including development install of dipole, polplot, secsy

Get all the repositories, create a suitable conda environment, then use pip to install all of them in editable (development) mode:

git clone https://github.com/klaundal/dipole
git clone https://github.com/klaundal/polplot
git clone https://github.com/klaundal/secsy
git clone https://github.com/klaundal/lompe
mamba env create -f lompe/binder/environment.yml -n lompe
mamba activate lompe
pip install -e ./dipole -e ./secsy -e ./polplot -e ./lompe[local,extras]

Note that in this case, all four packages are installed in editable mode, and the local option instructs the lompe install to use those local versions.

Hint: you can use pip list | grep -E 'dipole|polplot|secsy|lompe' to identify which versions you are using.

Hint: you can use pytest ./lompe/tests to check it installed correctly.

Dependencies

You should have the following modules installed:

  • apexpy
  • matplotlib
  • numpy
  • pandas
  • ppigrf (install with pip install ppigrf)
  • scipy
  • xarray
  • astropy (if you use the AMPERE Iridium data preprocessing scripts)
  • cdflib (for running lompe paper figures example 05)
  • madrigalWeb (if you use the DMSP SSIES data preprocessing scripts)
  • netCDF4 (if you use the DMSP SSUSI data preprocessing scripts)
  • pyAMPS (for running code paper figures example 08)
  • pydarn (if you use the SuperDARN data preprocessing scripts)

You should also have git version >= 2.13

Lompe papers

Funding

The Lompe development is funded by the Trond Mohn Foundation, and by the Research Council of Norway (300844/F50)

lompe's People

Contributors

08walkersj, amalieohovland, billetd, bingmm, fasilgibdaw, jpreistad, klaundal, smithara


lompe's Issues

FAC regularization

Add the possibility of regularizing the FAC 2-norm and its first derivatives.
This should not conflict with regularizing the model 2-norm.
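
For context, a 2-norm penalty of this kind would enter the least-squares normal equations as an extra damping term. A minimal sketch, assuming a hypothetical matrix G_fac that maps the model vector to FACs on the grid (this is not the actual Lompe API):

import numpy as np

def add_fac_damping(GTG, G_fac, lam):
    # hypothetical sketch: adding lam * G_fac^T G_fac to the normal equations
    # penalizes the FAC 2-norm ||G_fac m||^2, in addition to any other damping terms
    return GTG + lam * (G_fac.T @ G_fac)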

save function crashing with multiple lompe objects of the same type

The save_model function raises an error when the model inputs include multiple data objects of the same type. In this instance I was trying to use two separate convection inputs (one for a SuperDARN data set, one for a PFISR data set) from the same experiment/event timeframe. The code in the MWE below works with just PFISR or just SuperDARN as input, but produces the error below when trying to use both simultaneously.

Error:

 Traceback (most recent call last):

  File ~/miniconda3/envs/lompe/lib/python3.11/site-packages/spyder_kernels/py3compat.py:356 in compat_exec
    exec(code, globals, locals)

  File ~/Projects/isr_3d_vefs/pfrr_runscript.py:29
    run_lompe_pfisr(start_time, end_time, time_step, Kp, x_resolution,

  File ~/Projects/isr_3d_vefs/TEST_pfrr_run_lompe.py:153 in run_lompe_pfisr
    save_model(model, file_name=savefile) # one file per time stamp

  File ~/miniconda3/envs/lompe/lib/python3.11/site-packages/lompe/utils/save_load_utils.py:181 in save_model
    if data_locs: data_vars1.update(data_locs_to_dict(model))

  File ~/miniconda3/envs/lompe/lib/python3.11/site-packages/lompe/utils/save_load_utils.py:351 in data_locs_to_dict
    coords= np.array(coords)

ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (2, 2) + inhomogeneous part. 

MWE (pseudo code - functions that collect data are not included):

    # set up grid
    position = (-147, 65) # lon, lat
    orientation = (-1, 2) # east, north
    L, W, Lres, Wres = 500e3, 500e3, x_resolution, y_resolution # dimensions and resolution of grid
    grid = lompe.cs.CSgrid(lompe.cs.CSprojection(position, orientation), L, W, Lres, Wres, R = 6481.2e3)

    # set up conductances and model
    SH = lambda lon = grid.lon, lat = grid.lat: hardy_EUV(lon, lat, Kp, time_intervals[0], 'hall'    )
    SP = lambda lon = grid.lon, lat = grid.lat: hardy_EUV(lon, lat, Kp, time_intervals[0], 'pedersen')
    model = lompe.Emodel(grid, Hall_Pedersen_conductance = (SH, SP))

    # Collect datasets here (uses outside functions to collect data)
    pfisr_data = pfisr.collect_data(pfisrfn, time_intervals)
    mag_data = mag.collect_data(pokermagfn, time_intervals)
    superdarn_ksr_data = sd.collect_data(superdarn_direc, time_intervals, 'ksr/')
        
    for i, (stime, etime) in enumerate(time_intervals):
        t = stime
        print("t: ",t)
    
        SH = lambda lon = grid.lon, lat = grid.lat: hardy_EUV(lon, lat, Kp, t, 'hall'    )
        SP = lambda lon = grid.lon, lat = grid.lat: hardy_EUV(lon, lat, Kp, t, 'pedersen')

        model.clear_model(Hall_Pedersen_conductance = (SH, SP)) # reset
    
        # add datasets for this time
        model.add_data(pfisr_data[i])
        model.add_data(mag_data[i])
        model.add_data(superdarn_ksr_data[i])

        # run model
        gtg, ltl = model.run_inversion(l1 = 2, l2 = 0.1)

        # USE FOR SAVING MODEL NCs
        savefile = "test_output" # create directory to save output as nc to read in
                
        save_model(model, file_name=savefile) # one file per time stamp

Where to get Iridium data

Hello, I am following the notebook Data Handling with Lompe and I have it working. However, when I tried a new event, I could not find where to obtain the netCDF files for Iridium. I was able to find https://ampere.jhuapl.edu/download/, which gives the stipulated 2-minute resolution, but the data are saved under a different file name, so Lompe does not seem to pull them correctly.


Decompositions

Add functionality to decompose predictions of electric currents into divergence-free/curl-free (DF/CF) and Hall/Pedersen parts.
If an argument is added to the existing functions, varcheck has to be updated.
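
For reference, the Hall/Pedersen split of the horizontal sheet current follows the standard height-integrated Ohm's law (a textbook relation, not Lompe-specific notation):

    J_perp = Sigma_P * E_perp - Sigma_H * (E_perp x b_hat)

where b_hat is the unit vector along the main magnetic field, while the DF/CF split is the usual separation of the same current into divergence-free and curl-free parts.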

Data object not initializing N

I am coming across an issue when attempting to initialize a Data object; it could be because the error field has not been specified.

Error message:

C:\Users\hzibi\miniconda3\envs\lompe\Lib\site-packages\lompe\model\data.py:146: UserWarning: 'error' keyword not set for datatype 'convection'! Using error=50
  warnings.warn(f"'error' keyword not set for datatype '{datatype}'! Using error={error}", UserWarning)
C:\Users\hzibi\miniconda3\envs\lompe\Lib\site-packages\lompe\model\data.py:150: UserWarning: 'iweight' keyword not set for datatype 'convection'! Using iweight=1.0
  warnings.warn(f"'iweight' keyword not set for datatype '{datatype}'! Using iweight={iweight}", UserWarning)
Traceback (most recent call last):

  File ~\miniconda3\envs\lompe\Lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
    exec(code, globals, locals)

  File c:\users\hzibi\documents\python scripts\untitled0.py:136
    pfisr_data = prepare_data(t - DT, t + DT)

  File c:\users\hzibi\documents\python scripts\untitled0.py:108 in prepare_data
    pfisr_data = lompe.Data(vlos, coordinates = coords, LOS = los, datatype = 'convection')

  File ~\miniconda3\envs\lompe\Lib\site-packages\lompe\model\data.py:187 in __init__
    self.error = np.full(self.N, error)

AttributeError: 'Data' object has no attribute 'N'
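
For reference, passing the keywords mentioned in the warnings explicitly would look something like this (placeholder values, and only a guess at the cause, not a confirmed fix):

    pfisr_data = lompe.Data(vlos, coordinates = coords, LOS = los, datatype = 'convection', error = 50, iweight = 1.0)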

Pandas 2.0 removing `append`

Just tagging this, as I ran into it when doing the default pip install of Lompe.

The current version of pandas installed using pip is greater than 2.0, which means append has been fully removed rather than just deprecated (https://pandas.pydata.org/docs/dev/whatsnew/v2.0.0.html). This threw an error immediately when trying to use the read_sdarn pre-processing script at L498. I got past this by installing an older version of pandas (1.5.3 in my case), but the issue can also be resolved by using the pandas concat function everywhere append was used, as sketched below.
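
A minimal sketch of that replacement (df and new_rows are placeholders, not the actual variables in read_sdarn):

import pandas as pd

df = pd.DataFrame({'a': [1, 2]})        # placeholder for the existing dataframe
new_rows = pd.DataFrame({'a': [3, 4]})  # placeholder for the rows that were append()-ed

# pandas < 2.0 (method removed in 2.0):  df = df.append(new_rows)
df = pd.concat([df, new_rows], ignore_index=True)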

Quivers not showing up on lompeplot for Efield data

Hello, I am using Swarm to calculate E:

import numpy as np
import lompe

ds = requester(
    "SW_EXPT_EFIB_TCT02",  # Mag B, high resolution, 50Hz B (Magnetic field)
    measurements_E,        # Magnetic field in NEC coordinates
    True,
    asynchronous=False,
    show_progress=False)

# satellite velocity in NEC, used to get the unit vector and then the electric field
velocity = np.array([ds["VsatN"], ds["VsatE"], ds["VsatC"]])
velocity_unit = unit_array(velocity)  # function that gives the unit vector
ENEC = np.multiply(np.array([ds["Evx"], ds["Evy"], ds["Evz"]]), velocity_unit)

Eused = np.array([ENEC[1], ENEC[0]])  # east, north
coords = np.array([ds['Longitude'].to_numpy(), ds['Latitude'].to_numpy()])
Eswarm = lompe.Data(Eused, coordinates = coords, datatype = 'Efield', error = 5e-3, iweight = 1.0)

This gets my E; the data do show up when plotted by themselves in lompe.plot, but no orange quivers are produced in the lompeplot:

    model = lompe.Emodel(grid, (cmod.hall, cmod.pedersen))
    model.add_data(Eswarm)
    model.run_inversion(l1 = 1, l2 = 10)

Here is the Jupyter notebook: https://github.com/CassandraAuri/Physics_Work/blob/Deployment-version/Aurora_Work/Data_handling_with_Lompe.ipynb

index error with 3 regularization parameters

I am receiving an error message related to the reg_E and run_inversion functions within lompe/lompe/model/model.py with the addition of the third regularization parameter. The existing version of the code I run, which calls this function, has worked previously with just l1 and l2. Any variation of FAC_reg with any variation of tuples or floats for l1, l2, and l3 produces this same error. I initially followed the documentation/descriptions for these parameters, i.e.:

    Parameters
    ----------
    l1 : float or tuple
        Damping parameter for model norm. If FAC_reg=True l1 can be a tuple where the first
        and second entry target the FAC and E norm, respectively. If it is just a float the
        E norm will be ignored.
    l2 : float
        Damping parameter for variation in the magnetic eastward direction.
        Functionality similar to l1 in regards to FAC_reg.
    l3 : float
        Damping parameter for variation in the magnetic northward direction.
        Functionality similar to l1 in regards to FAC_reg.
    FAC_reg : boolean
        Activates FAC based regularization if True (default is False). Read l1 description
        for details on mixed FAC and E 2-norm regularization.

Error message:

Traceback (most recent call last):
  File "/Users/clevenger/Projects/isr_3d_vefs/isr_3d_vefs/pfrr_runscript.py", line 59, in <module>
    run_lompe_pfisr(start_time, end_time, time_step, Kp, x_resolution,
  File "/Users/clevenger/Projects/isr_3d_vefs/isr_3d_vefs/pfrr_run_lompe.py", line 120, in run_lompe_pfisr
    gtg, ltl = model.run_inversion(l1 = 1, l2 = 0.1, l3 = 0.1, FAC_reg=False)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/clevenger/miniconda3/envs/lompe/lib/python3.11/site-packages/lompe/model/model.py", line 434, in run_inversion
    LTL += reg_E(self, l1, l2, l3)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/clevenger/miniconda3/envs/lompe/lib/python3.11/site-packages/lompe/model/model.py", line 399, in reg_E
    LTL_l1 = np.eye(self.GTG.shape[0])
                    ~~~~~~~~~~~~~~^^^
IndexError: tuple index out of range

MWE (just including where run_inversion function is called, at the end of this block):

# Loop through times and save
    for i, (stime, etime) in enumerate(time_intervals):
        t = stime
        print("t: ",t)
    
        SH = lambda lon = grid.lon, lat = grid.lat: hardy_EUV(lon, lat, Kp, t, 'hall'    )
        SP = lambda lon = grid.lon, lat = grid.lat: hardy_EUV(lon, lat, Kp, t, 'pedersen')

        model.clear_model(Hall_Pedersen_conductance = (SH, SP)) # reset
    
        # Add datasets for this time
        if pfisrfn:
            model.add_data(pfisr_data[i])
        if pokermagfn:
            model.add_data(mag_data[i])
        if superdarn_direc:
            model.add_data(superdarn_kod_data[i])
            #model.add_data(superdarn_ksr_data[i])
        if swarm_a_prime:
            model.add_data(swarm_a_mag_data[i])
        if swarm_b_prime:
            model.add_data(swarm_b_mag_data[i])
        if swarm_c_prime:
            model.add_data(swarm_c_mag_data[i])

        # Run model
        #gtg, ltl = model.run_inversion(l1 = 1, l2 = 0.1)
        gtg, ltl = model.run_inversion(l1 = 1, l2 = 0.1, l3 = 0.1, FAC_reg=False)

Any insight into properly calling run_inversion with this new parameter would be greatly appreciated. Thanks!

Pandas `to_hdf` causing crash on Macs

Found this when trying to use the AMPERE pre-processing script read_iridium(). It constantly ran into a client crash (Python quit unexpectedly) while trying to save the AMPERE dataframe to an HDF5 file on line 747. The problem isn't just there, though; it happens when trying to save any dataframe to HDF5 (even a very simple one), so it's not a Lompe problem.

This seems like an issue with pytables on Macs, possibly related to the M1+ CPU architecture (my speculation, I actually have no idea). Anyway, I managed to fix the crash and get the to_hdf() method to work everywhere by downgrading pytables to version 3.8:

pip uninstall tables
pip install tables==3.8.0
