tennetlib.jl's Introduction

TenNetLib.jl

A Tensor Network Library (TenNetLib.jl) built on top of ITensors.jl for quantum many-body problems.

The source code for TenNetLib.jl can be found on GitHub.

The documentation for TenNetLib.jl can be found here.

This library requires Julia 1.7+.

Overview

TenNetLib.jl implements widely used Tensor Network (TN) algorithms with a multi-layered abstraction, giving users varying levels of control over their computations. Currently, TenNetLib.jl provides functionality for:

  • (a) Finite-size Matrix-Product States (MPS): Different variants of Density Matrix Renormalization Group (DMRG) and Time Dependent Variational Principle (TDVP) (including subspace expansion) methods.
  • (b) Tree Tensor Network (TTN): Variational search for the ground state and first few excited states.

Installation

TenNetLib.jl is registered in the Julia General registry. To install the library (along with ITensors.jl), use the following steps:

$ julia

julia> ]

pkg> add ITensors

pkg> add TenNetLib
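
Alternatively, the same packages can be installed non-interactively through the Pkg API; a minimal sketch:

using Pkg
Pkg.add("ITensors")
Pkg.add("TenNetLib")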

Found an issue or bug?

"Beware of bugs in the above code; I have only proved it correct, not tried it." -- Donald Knuth

If you find bugs or mistakes of any kind, please let us know by adding an issue to the GitHub issue tracker. You are also welcome to submit a pull request.

Future functionality?

Here is a list of planned additions in decreasing order of priority. Any help or suggestions are welcome.

  • Augmented Tree Tensor Network (aTTN) for variational ground state search for 2D problems.
  • Infinite DMRG (iDMRG) and/or Variational Uniform Matrix Product States (VUMPS) to tackle 1D / quasi-1D problems directly at the thermodynamic limit.
  • Projected Entangled Pair States (PEPS) for 2D problems.
  • Real-time evolution method using PEPS and TTN.

Also, please feel free to request a new feature by adding an issue to the GitHub issue tracker labelled feature request. Note that submitting a pull request with the changes needed to introduce your requested feature will speed up the process.

Example: A simple DMRG code

The following code performs a simple DMRG run at the highest level of abstraction, without any additional control.

using ITensors
using TenNetLib

let
    N = 32
    sites = siteinds("S=1/2", N)

    # Spin-1/2 Heisenberg chain as an OpSum
    os = OpSum()
    for j = 1:N-1
        os += 1, "Sz", j, "Sz", j+1
        os += 0.5, "S+", j, "S-", j+1
        os += 0.5, "S-", j, "S+", j+1
    end
    H = MPO(os, sites)

    # Néel product state as the initial guess
    states = [isodd(n) ? "Up" : "Dn" for n in 1:N]
    psi0 = MPS(sites, states)

    params = DMRGParams(; nsweeps = [5, 5], maxdim = [20, 50],
                        cutoff = 1e-14, noise = 1e-3, noisedecay = 2,
                        disable_noise_after = 3)

    # dmrg2 for two-site DMRG
    en, psi = dmrg2(psi0, H, params)
end
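
The optimized MPS can then be inspected with standard ITensors.jl utilities. A minimal sketch, to be placed inside the let block above (`inner` and `expect` are ITensors.jl functions, not part of TenNetLib.jl):

    # Energy of the optimized state; should agree with `en` returned by dmrg2
    E = inner(psi', H, psi)

    # Local ⟨Sz⟩ profile along the chain
    sz = expect(psi, "Sz")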

Example: A simple TDVP code

The following code performs a simple TDVP run at the highest level of abstraction, without any additional control.

using ITensors
using TenNetLib

let
    N = 32
    sites = siteinds("S=1/2", N)

    os = OpSum()
    for j = 1:N-1
        os += 1, "Sz", j, "Sz", j+1
        os += 0.5, "S+", j, "S-", j+1
        os += 0.5, "S-", j, "S+", j+1
    end
    H = MPO(os, sites)

    states = [isodd(n) ? "Up" : "Dn" for n in 1:N]
    psi0 = MPS(sites, states)

    tau = -0.01im
    engine = TDVPEngine(psi0, H)
    for ii = 1:100
        # `nsite = "dynamic"` for dynamical selection between
        # single- and two-site variants at different bonds
        tdvpsweep!(engine, tau,
                   nsite = "dynamic";
                   maxdim = 200,
                   cutoff = 1E-12,
                   extendat = 5)

        psi = getpsi(engine)
        # DO STUFF
    end
end
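
Inside the loop, the `# DO STUFF` placeholder is where measurements on the evolved state typically go. A minimal sketch using ITensors.jl's `expect` (an illustrative assumption, not a TenNetLib.jl-specific API):

        # Measure the local ⟨Sz⟩ profile at time t = ii * abs(tau)
        sz = expect(psi, "Sz")
        println("t = ", ii * abs(tau), ", total Sz = ", sum(sz))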

Example: A simple TTN ground-state optimization code

The following code performs a simple TTN ground-state optimization run at the highest level of abstraction, without any additional control. Here we use OpStrings and CouplingModel instead of OpSum and MPO.

using ITensors
using TenNetLib

let
    N = 32
    sites = siteinds("S=1/2", N)

    # Spin-1/2 Heisenberg chain as OpStrings
    os = OpStrings()
    for j = 1:N-1
        os += 1, "Sz" => j, "Sz" => j+1
        os += 0.5, "S+" => j, "S-" => j+1
        os += 0.5, "S-" => j, "S+" => j+1
    end
    H = CouplingModel(os, sites)

    # Initial TTN with bond dimension 64 in the total Sz = 0 sector
    psi0 = TTN(sites, 64, QN("Sz", 0))

    sweeppath = default_sweeppath(psi0)

    params = OptimizeParamsTTN(; maxdim = [64, 128], nsweeps = [5, 10],
                               cutoff = 1e-14, noise = 1e-2, noisedecay = 5,
                               disable_noise_after = 5)

    en, psi = optimize(psi0, H, params, sweeppath)
end

tennetlib.jl's People

Contributors

titaschanda

tennetlib.jl's Issues


Using CUDA in TDVP algorithm

Hi,

I tried to use a GPU backend via CUDA.jl for the TDVP method; however, I get an error.
Could you please help me understand why that is? Maybe there is a quick fix?
In my experience, ITensors and ITensorsTDVP work great with CUDA.

Thanks,
Yotam

Here is a minimal code example:


using ITensors
using TenNetLib
using CUDA

let
    N = 4
    sites = siteinds("S=1/2", N)

    os = OpSum()
    for j = 1:N-1
        os += 1, "Sz", j, "Sz", j+1
        os += 0.5, "S+", j, "S-", j+1
        os += 0.5, "S-", j, "S+", j+1
    end
    H = MPO(os, sites)

    states = [isodd(n) ? "Up" : "Dn" for n in 1:N]
    psi0 = MPS(sites, states)

    tau = -0.01im
    engine = TDVPEngine(cu(psi0), cu(H))
    for ii = 1:100
        # `nsite = "dynamic"` for dynamical selection between
        # single- and two-site variants at different bonds
        tdvpsweep!(engine, tau,
                   nsite = "dynamic";
                   maxdim = 200,
                   cutoff = 1E-12,
                   extendat = 5)

        psi = getpsi(engine)
        # DO STUFF
    end
end

and here is the error message that I get:

ERROR: Setting the type parameter of the type `DenseVector` at position `NDTensors.SetParameters.Position{1}()` to `Float64` is not currently defined. Either that type parameter position doesn't exist in the type, or `set_parameter` has not been overloaded for this type.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] set_parameter(type::Type, position::NDTensors.SetParameters.Position{1}, parameter::Type)
    @ NDTensors.SetParameters ~/.julia/packages/NDTensors/pey4a/src/lib/SetParameters/src/interface.jl:10
  [3] set_parameters(type::Type, position::NDTensors.SetParameters.Position{1}, parameter::Type)
    @ NDTensors.SetParameters ~/.julia/packages/NDTensors/pey4a/src/lib/SetParameters/src/set_parameters.jl:20
  [4] set_eltype(arraytype::Type{DenseVector}, eltype::Type)
    @ NDTensors ~/.julia/packages/NDTensors/pey4a/src/abstractarray/set_types.jl:8
  [5] similartype(::Type{SimpleTraits.Not{NDTensors.Unwrap.IsWrappedArray{…}}}, arraytype::Type{DenseVector}, eltype::Type)
    @ NDTensors ~/.julia/packages/NDTensors/pey4a/src/abstractarray/similar.jl:109
  [6] similartype
    @ ~/.julia/packages/SimpleTraits/l1ZsK/src/SimpleTraits.jl:331 [inlined]
  [7] promote_rule(::Type{NDTensors.Dense{Float64, Vector{…}}}, ::Type{NDTensors.Dense{Float32, CuArray{…}}})
    @ NDTensors ~/.julia/packages/NDTensors/pey4a/src/dense/dense.jl:127
  [8] promote_type
    @ ./promotion.jl:313 [inlined]
  [9] promote_rule(::Type{NDTensors.DenseTensor{…}}, ::Type{NDTensors.DenseTensor{…}})
    @ NDTensors ~/.julia/packages/NDTensors/pey4a/src/tensor/tensor.jl:262
 [10] promote_type
    @ ./promotion.jl:313 [inlined]
 [11] permutedims!!(R::NDTensors.DenseTensor{…}, T::NDTensors.DenseTensor{…}, perm::Tuple{…}, f::Function)
    @ NDTensors ~/.julia/packages/NDTensors/pey4a/src/dense/densetensor.jl:198
 [12] _map!!(f::Function, R::NDTensors.DenseTensor{…}, T1::NDTensors.DenseTensor{…}, T2::NDTensors.DenseTensor{…})
    @ ITensors ~/.julia/packages/ITensors/Gf9aD/src/itensor.jl:1960
 [13] map!(f::Function, R::ITensor, T1::ITensor, T2::ITensor)
    @ ITensors ~/.julia/packages/ITensors/Gf9aD/src/itensor.jl:1965
 [14] copyto!
    @ ~/.julia/packages/ITensors/Gf9aD/src/broadcast.jl:330 [inlined]
 [15] materialize!
    @ ./broadcast.jl:914 [inlined]
 [16] materialize!
    @ ./broadcast.jl:911 [inlined]
 [17] -(A::ITensor, B::ITensor)
    @ ITensors ~/.julia/packages/ITensors/Gf9aD/src/itensor.jl:1882
 [18] _krylov_addbasis!(psi::MPS, phis::Vector{MPS}, extension_cutoff::Float64)
    @ TenNetLib ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:483
 [19] krylov_extend!(psi::MPS, H::MPO; kwargs::@Kwargs{…})
    @ TenNetLib ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:405
 [20] krylov_extend!
    @ ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:386 [inlined]
 [21] macro expansion
    @ ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:436 [inlined]
 [22] macro expansion
    @ ./timing.jl:395 [inlined]
 [23] krylov_extend!(sysenv::StateEnvs{…}; kwargs::@Kwargs{…})
    @ TenNetLib ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:434
 [24] krylov_extend!
    @ ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:424 [inlined]
 [25] dynamic_fullsweep!(sysenv::StateEnvs{…}, solver::Function, swdata::SweepData; eigthreshold::Float64, extendat::Int64, kwargs::@Kwargs{…})
    @ TenNetLib ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:261
 [26] dynamic_fullsweep!
    @ ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:250 [inlined]
 [27] #tdvpsweep!#175
    @ ~/.julia/packages/TenNetLib/tHJWh/src/mps/tdvp.jl:257 [inlined]
 [28] top-level scope
    @ ~/Documents/repos/gatesimulator/TDVP/TenNetLib_cuda_example.jl:26

Minor release

@JuliaRegistrator register

Release notes:

Fixed issues arising from breaking changes in ITensors.jl. Pinned ITensors.jl to an older working release.
