
peacock.jl's People

Contributors

paniash, sp94


peacock.jl's Issues

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Rename Mode and HilbertSpace to Eigenmode and Eigenspace

Mode is for storing an eigenvector that has an associated eigenvalue.

HilbertSpace(modes) takes the modes, discards the eigenvalues, and orthonormalises the eigenvectors - useful for then calculating new eigenmodes, such as those of symmetry operators or Wilson loop operators.

I think renaming Mode to Eigenmode and HilbertSpace to Eigenspace will make this intention clearer.

Clarifications for the JOSS paper

  • In the first paragraph you state that the package is for "studying photonic crystals" but not what you study about them.

  • In the statement of need you then refer to eigensolutions of a crystal, but it's not clear which object you are calculating eigenproperties of.

  • You mix "eigensolutions" and "eigenmodes".

  • At the end of the paragraph you talk about photonic bands. Thus it just seems like some rearranging is required. But perhaps more mathematical detail about the system should be included?

  • In the first reference, "hgte" should presumably be "HgTe"

  • Physical review letters -> Physical Review Letters (capitalisation)

  • Please check the details of the remaining references. Journals should have initial letters of each word capitalised.

Thanks!

Allow plot_wilson_loop_winding for arbitrary solver

Currently plot_wilson_loop_winding(solver::Solver, ks, polarisation, ...) internally calls solve(solver, k, polarisation)

This assumes the user wants to use Peacock to solve the problem.

We should allow for users who want to use their own solver, eg any mysolve(k) that wraps its solutions in Peacock.Mode objects.
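
One possible refactor is sketched below (the generalised method and its argument order are assumptions): let a general method accept any function k -> Vector{Peacock.Mode}, and make the existing Solver method a thin wrapper around it, mirroring the my_solve pattern already used by plot_band_diagram.

# Sketch only: assumes a generalised plot_wilson_loop_winding(::Function, ks, bands; ...)
# that operates on whatever Modes the supplied function returns.
function plot_wilson_loop_winding(solver::Solver, ks, polarisation, bands; kwargs...)
    my_solve(k) = solve(solver, k, polarisation)   # Peacock's built-in solver
    return plot_wilson_loop_winding(my_solve, ks, bands; kwargs...)
end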

Other solvers (eg FDFD)

While the plane-wave expansion method is efficient for modelling dielectric materials, it is inefficient at modelling materials such as metals, or at modelling finite (non-periodic) systems.

It would therefore be useful to add other methods of solution to Peacock, such as the finite-difference frequency-domain (FDFD) method. We could begin with an implementation of 2D FDFD to solve for eigenmodes of a periodic structure - these results could be compared directly against the existing plane-wave expansion method in Peacock.

A priority would be to keep the interface simple and consistent between the different solvers.
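
For reference, here is a purely illustrative 2D FDFD sketch for the TM (E_z) eigenproblem. It is not part of Peacock; the function name, grid, and normalisation are assumptions made only to show how such results could be cross-checked against the plane-wave solver.

using LinearAlgebra, SparseArrays

# Illustrative 2D FDFD for the TM eigenproblem -(1/ϵ)∇²E_z = (ω/c)²E_z
# on an N×N grid over a square unit cell with Bloch-periodic boundaries.
function fdfd_tm_frequencies(epf, k, N; a=1.0, nbands=6)
    dx = a / N
    xs = (((0:N-1) .+ 0.5) ./ N .- 0.5) .* a       # cell centred on the origin
    # 1D forward difference with a Bloch phase across the periodic boundary
    function dfwd(kcomp)
        D = spdiagm(0 => fill(-1.0 + 0im, N), 1 => fill(1.0 + 0im, N - 1))
        D[N, 1] = cis(kcomp * a)
        return D ./ dx
    end
    Dx = kron(sparse(I, N, N), dfwd(k[1]))          # x index varies fastest
    Dy = kron(dfwd(k[2]), sparse(I, N, N))
    invϵ = spdiagm(0 => [1 / epf(x, y) for y in xs for x in xs])
    A = invϵ * (Dx' * Dx + Dy' * Dy)                # Dx'Dx + Dy'Dy ≈ -∇²
    ω² = sort(real(eigvals(Matrix(A))))             # dense solve; fine for small N
    return sqrt.(abs.(ω²[1:nbands]))                # ω/c in units of 1/a
end

# e.g. compare fdfd_tm_frequencies(epf, [0.0, 0.0], 32) against solve(solver, k, TM)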

Symmetry and dimensions?

I just came across this repo while exploring some interesting projects on GitHub. To be honest, I am new to photonic crystals. I just wonder whether you have considered crystal packings with different symmetries? If so, my ongoing project may be useful for the design of new photonic crystals.

https://pyxtal.readthedocs.io/en/latest/

Currently, we support the generation of 1D/2D/3D packings based on rod/layer/space group symmetry relations. Please let me know if this could be linked to your project.

How to build a supercell model

How can we build a supercell model to calculate the edge states, like below?

----------
AAAAABBBBB
----------

e.g., A is a topologically non-trivial photonic cell and B is trivial, and "----" denotes Bloch boundary conditions.

By the way, it seems that we cannot build a rectangular lattice, such as a1=[1, 0], a2=[0, 2]; an error appears:

ERROR: AssertionError: norm(b1) ≈ norm(b2)

Thanks!

Shapes for constructing geometries

Problem

Currently the geometry is defined by specifying an epf(x,y) function and a muf(x,y) function,

geometry = Geometry(epf, muf, ...)

This is straightforward for simple geometries but it becomes cumbersome if the geometry is made of multiple shapes.

We should introduce an additional interface for creating geometries,

geometry = Geometry(shapes, ...)

where shapes is a (nested) list of AbstractShape subtypes, eg

# accept nested list so we can use list comprehensions inside
shapes = [
    Circle(r,ep=4),
    [Rectangle(Lx,Ly,ep=3,origin=[n,0]) for n in -1:1]
]

and epf(x,y) and muf(x,y) are automatically generated from the flattened shapes, working in reverse order such that the last shape is on top.
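
A minimal sketch of how the flattening and generation could work (AbstractShape, the inside predicate, and the ep field are assumptions here, not existing Peacock API):

function shapes_to_epf(shapes)
    flat = Any[]
    for s in shapes
        s isa AbstractVector ? append!(flat, s) : push!(flat, s)   # flatten one level
    end
    function epf(x, y)
        for shape in reverse(flat)         # last shape wins, i.e. is "on top"
            inside(shape, x, y) && return shape.ep
        end
        return 1                           # default background permittivity
    end
    return epf
end

muf(x,y) would be generated identically from the mu values.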

Shapes

We should include some simple shapes and transforms for defining geometry, such as Ellipse or Rectangle. For example these shapes are available in meep:
https://meep.readthedocs.io/en/latest/Python_User_Interface/#geometricobject

We should also include a Background shape if the user wants to override the default values of ep=1 and mu=1, eg

shapes = [Circle(ep=4)]  # assumes background is ep=1, mu=1
shapes = [Background(ep=2), Circle(ep=4)]  # overrides default background

Shapes should default to ep=1, mu=1, origin=[0,0], angle=0, with dimensions (eg radius) as the positional arguments.

Transforms and piping

We can make use of Julia's function composition/piping for a nice interface:
https://docs.julialang.org/en/v1/manual/functions/#Function-composition-and-piping

Desired interface:

ellipses = [Ellipse(r1,r2,ep=11.7) |> translate(R,0) |> rotate(60n) for n in 1:6]
geometry = Geometry(ellipses, 1, a1, a2, d1, d2)
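
One way the piping could work is for each transform to return a closure from shapes to shapes, so that they compose with |>. The toy Ellipse below is only an illustration (whether rotate acts about the cell origin, as assumed here, or about the shape's own centre is a design choice to settle):

struct Ellipse
    r1::Float64
    r2::Float64
    ep::Float64
    origin::Vector{Float64}
    angle::Float64
end
Ellipse(r1, r2; ep=1, origin=[0.0, 0.0], angle=0.0) = Ellipse(r1, r2, ep, origin, angle)

# each transform returns a closure Shape -> Shape, so they compose with |>
translate(dx, dy) = e -> Ellipse(e.r1, e.r2, ep=e.ep, origin=e.origin .+ [dx, dy], angle=e.angle)
rotate(θ) = e -> Ellipse(e.r1, e.r2, ep=e.ep, angle=e.angle + θ,
                         origin=[cosd(θ)*e.origin[1] - sind(θ)*e.origin[2],
                                 sind(θ)*e.origin[1] + cosd(θ)*e.origin[2]])

R = 0.35
ellipses = [Ellipse(0.1, 0.05, ep=11.7) |> translate(R, 0) |> rotate(60n) for n in 1:6]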


Transforming eigenmodes

We should provide users with a simple interface for transforming eigenmodes.

Sometimes an analysis requires us to rotate or mirror an eigenmode (eg studying symmetry eigenvalues) or to shift it by a reciprocal lattice vector in k-space (we already do this when calculating 'open' Wilson loops that begin and end at the same k-point, but in different Brillouin zones).

The user should define the transformation with a k_map(k).
For example, inversion combined with a shift by a reciprocal lattice vector would be k_map(k) = -k + b1

transform(mode, k_map) should return a new Mode where

  • new_mode.basis == old_mode.basis
  • new_mode.k0 == k_map(old_mode.k0)
  • new_mode.data[i_] = old_mode.data[i] where the data has been rearranged such that...
    • k_map([basis.kxs[i_], basis.kys[i_]]) == [basis.kxs[i], basis.kys[i]]

transform(mode, k_map) can then replace the shift_k0 function used by wilson_loop_matrix, as well as be used in other applications
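
A rough sketch of transform (the Mode field names are taken from their usage elsewhere in this repository and should be treated as assumptions; the weighting is passed through unchanged, and a tolerance is used to match the mapped wavevectors):

function transform(mode::Mode, k_map)
    basis = mode.basis
    ks = [[kx, ky] for (kx, ky) in zip(basis.kxs, basis.kys)]
    data = similar(mode.data)
    for (i_, k_) in enumerate(ks)
        target = k_map(k_)                 # spec: k_map(k[i_]) == k[i]
        i = findfirst(k -> isapprox(k, target, atol=1e-6), ks)
        isnothing(i) && error("k_map maps outside the plane-wave basis")
        data[i_] = mode.data[i]
    end
    return Mode(k_map(mode.k0), mode.frequency, data, mode.weighting, basis, mode.label)
end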

Faster method to calculate total Chern number

Currently we have Wilson loop methods, which are very general and can be used to calculate a range of topological invariants.

However, when calculating the total Chern number

  1. you must manually count the winding of the Wilson loop spectrum to find the Chern number
  2. using the Wilson loop winding method is slower than other 'plaquette' based methods

We should add a direct Chern number calculation based on plaquettes, as in eg
a. Fukui et al https://doi.org/10.1143/JPSJ.74.1674
b. Zhao et al https://doi.org/10.1364/OE.380077
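
A rough sketch of a plaquette-based (Fukui-style) calculation is below. It assumes a my_solve(k) that returns the Modes of the occupied bands, and it uses plain inner products; the correct treatment of the Brillouin-zone boundary (shifting modes by a reciprocal lattice vector, cf. shift_k0/transform above) and of the mode weighting is deliberately glossed over.

using LinearAlgebra

function chern_number(my_solve, b1, b2; N=10)
    U(k) = hcat((m.data for m in my_solve(k))...)            # columns = band eigenvectors
    link(ka, kb) = (d = det(U(ka)' * U(kb)); d / abs(d))     # normalised link variable
    C = 0.0
    for i in 0:N-1, j in 0:N-1
        k1 = (i/N)*b1     + (j/N)*b2
        k2 = ((i+1)/N)*b1 + (j/N)*b2
        k3 = ((i+1)/N)*b1 + ((j+1)/N)*b2
        k4 = (i/N)*b1     + ((j+1)/N)*b2
        # Berry flux through the plaquette, guaranteed to lie in (-π, π]
        C += angle(link(k1, k2) * link(k2, k3) * link(k3, k4) * link(k4, k1))
    end
    return round(Int, C / 2π)
end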

Generalise Mode to be any eigenmode, not just frequency eigenmode

The Mode object returned by solve(solver, k, polarisation) could be generalised

Currently it is assumed to be a frequency eigenmode

Mode(k0::Vector, frequency::Number, data::Vector...)

We should rename frequency to eigval, data to eigvec, and add a label string

Mode(k0::Vector, eigval::Number, eigvec::Vector, label::String, ...)

solve(solver, k, polarisation) could return array of Mode(k0, eigval, eigvec, "Frequency")

wilson_eigen() could return array of Mode(k0, eigval, eigvec, "Wilson loop eigenvalue")

The label field could be used by eg plot_band_diagram to automatically label the y-axis
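
A sketch of the generalised struct (other existing fields, such as the basis and weighting, are omitted here):

struct Mode
    k0::Vector{Float64}
    eigval::Number
    eigvec::Vector{ComplexF64}
    label::String
end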

Markersize not defined in plot_wilson_loop_winding function

On line 172 of wilson_loops.jl I get an error that markersize is not defined:

plot_band_diagram(my_solve, ks, dk=dk_outer, labels=labels, show_vlines=false, markersize=markersize)
Stacktrace:
 [1] plot_wilson_loop_winding(::Solver, ::Array{BrillouinZoneCoordinate,1}, ::Peacock.Polarisation, ::UnitRange{Int64}; dk_outer::Float64, dk_inner::Nothing, labels::Array{Any,1}, delta_brillouin_zone::BrillouinZoneCoordinate) at ${HOME}/.julia/dev/Peacock/src/wilson_loops.jl:172

Modelling 3D photonic crystals

Issue

Sometimes full 3D simulations (photonic crystal periodic in x, y, z directions) will be required, but Peacock.jl currently supports only 2D simulations (crystal periodic in x and y directions, and assumed homogeneous along the z direction).

Extending the solver (plane-wave expansion method) to 3D is relatively straightforward. But the 2D and 3D solvers have different inputs and outputs: in particular, the 2D solution is a scalar field (the out-of-plane component of either E or H), while the 3D solution is a vector field (all three components of either E or H, in the simplest approach).

Current interface

Only for 2D

epf(x,y) = ...
muf(x,y) = ...
geometry = Geometry(epf, muf, a1, a2, d1, d2)
solver = Solver(geometry, fourier_space_cutoff)
modes = solve(solver, k, polarisation) # returns array of Mode with out-of-plane E or H, depending on polarisation

Desired interfaces

For 3D

epf(x,y,z) = ...
muf(x,y,z) = ...
geometry = Geometry(epf, muf, a1, a2, a3, d1, d2, d3)
solver = Solver3D(geometry, fourier_space_cutoff)
modes = solve(solver, k) # returns array of Mode with all components of E and H

For 2D

epf(x,y) = ...
muf(x,y) = ...
geometry = Geometry(epf, muf, a1, a2, d1, d2)
solver = SolverTE(geometry, fourier_space_cutoff) # <-- different Solvers for different polarisations
modes = solve(solver, k) # returns array of Mode with all components of E and H

Required changes

  • Make Geometry 3D, but create two constructors

    • Geometry(epf, muf, a1, a2, a3, d1, d2, d3) should produce a 3D geometry
    • Geometry(epf, muf, a1, a2, d1, d2) should produce a 2D geometry, internally represented as a thin 3D geometry (see the sketch at the end of this list)
  • Separate Solver into SolverTE, SolverTM, and Solver3D

    • Allowing SolverTE and SolverTM to be constructed separately would reduce efficiency in cases where users want to solve for both polarisations, as their construction can share data. However, for 2D simulations this normally won't be too expensive, and if it were, we could create another function that constructs SolverTE and SolverTM simultaneously.
  • Extend Mode to hold all six components of the electric/magnetic fields

struct Mode
    k0::Vector{Float64}
    ...
    # electric field components
    Sx::Vector{ComplexF64}
    Sy::Vector{ComplexF64}
    Sz::Vector{ComplexF64}
    # magnetic field components
    Ux::Vector{ComplexF64}
    Uy::Vector{ComplexF64}
    Uz::Vector{ComplexF64}
end
  • This would require storing more data for each Mode. However, this wouldn't be too expensive as the Fourier-space representations of the fields are efficient, and solve would return the same object for 3D, TE, and TM polarised simulations.
  • Allows the same Mode object to be returned for TE, TM, and 3D simulations
  • Users would also have instant access to all components of the electric/magnetic fields.
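
As referenced above, a possible sketch of the 2D constructor as a thin 3D geometry (the 3D constructor signature is taken from the first bullet; the unit thickness along z is an assumption):

function Geometry(epf, muf, a1, a2, d1, d2)
    epf3(x, y, z) = epf(x, y)       # homogeneous along z
    muf3(x, y, z) = muf(x, y)
    a1_3, a2_3 = [a1; 0], [a2; 0]   # promote the in-plane lattice vectors to 3-vectors
    a3 = [0, 0, 1]                  # unit thickness along z
    d3 = 1.0                        # single sample along z
    return Geometry(epf3, muf3, a1_3, a2_3, a3, d1, d2, d3)
end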

GPU and CPU parallel

Issue

The computational cost will be very large if we extend Peacock.jl to 3D or need a large cutoff. Using parallel computing can significantly reduce the computing time. Can we parallelize the solver?

GPU parallel

First, we can use CUDA.jl to run the computation on an Nvidia GPU. CUSOLVER is a linear algebra library of CUDA, and it makes it easy to solve dense linear systems of the form Ax = b. We only need to convert a Julia Array to a CuArray; almost all functions of LinearAlgebra.jl can then run on the GPU:

# running on CPU
a = rand(2, 2); b = rand(2, 2)
c = a / b
# running on GPU
using CUDA
a_d = CuArray(a); b_d = CuArray(b)
c_d = a_d / b_d

Unfortunately, it may be inefficient to use CUDA to solve for the eigenvalues and eigenvectors of a dense matrix (discourse1 discourse2). I also ran benchmarks:

julia>  a= rand(100,100); b = rand(100,100);
julia> @btime CUDA.CUSOLVER.syevjBatched!('V','U',CuArray(b)/CuArray(a));
  36.093 ms (577 allocations: 11.58 KiB)

julia> @btime eigen(a,b);
  29.361 ms (21 allocations: 428.78 KiB)

It should be noted that the CUDA.eigen function of CUDA.jl can only run with a Julia Array and is just the same as LinearAlgebra.eigen of LinearAlgebra.jl. By the way, syevjBatched! only supports Float32/Float64.

But we can still speed up Peacock.jl by using CUDA.jl, since the GPU can solve Ax = b efficiently. I focus on the model of a square lattice:

using PyPlot
using Peacock
using LinearAlgebra
using SparseArrays
using BenchmarkTools
using CUDA

get_k = Peacock.get_k
DiagonalMatrix = Peacock.DiagonalMatrix

# Permittivity
function epf(x,y)
    # equation of a circle with radius 0.2a
    if x^2+y^2 <= 0.2^2
        # dielectric inside the circle
        return 8.9
    else
        # air outside the circle
        return 1
    end
end

# Permeability is unity everywhere
function muf(x,y)
    return 1
end

a1 = [1, 0]  # first lattice vector
a2 = [0, 1]  # second lattice vector
d1 = 0.01  # resolution along first lattice vector
d2 = 0.01  # resolution along second lattice vector
geometry = Geometry(epf, muf, a1, a2, d1, d2)
fourier_space_cutoff = 17  # greater than in the tutorials
solver = Solver(geometry, fourier_space_cutoff)
G = BrillouinZoneCoordinate(  0,   0, "Γ")
X = BrillouinZoneCoordinate(1/2,   0, "X")
M = BrillouinZoneCoordinate(1/2, 1/2, "M")
ks = [G,X,M,G]
ks = [typeof(x)==BrillouinZoneCoordinate ? get_k(x,solver.basis) : x for x in ks]

polarisation = TE
k = ks[1]
basis = solver.basis
epc, muc = solver.epc, solver.muc
Kx = DiagonalMatrix(basis.kxs) + k[1]*I
Ky = DiagonalMatrix(basis.kys) + k[2]*I
Kx_d = CuArray(Kx); Ky_d = CuArray(Ky);
epc_d = CuArray(epc); muc_d = CuArray(muc)

and ran benchmarks like this:

julia> @btime LHS_d = Kx_d/epc_d*Kx_d + Ky_d/epc_d*Ky_d;
  22.768 ms (20762 allocations: 337.38 KiB)

julia> @btime LHS_d = Kx_d/epc_d*Kx_d + Ky_d/epc_d*Ky_d;
  22.931 ms (18741 allocations: 303.83 KiB)

julia> @btime LHS = Kx/epc*Kx + Ky/epc*Ky;
  2.433 s (20 allocations: 6.96 MiB)

julia> @btime LHS = Kx/epc*Kx + Ky/epc*Ky;
  2.660 s (20 allocations: 6.96 MiB)

julia> size(LHS)
(225, 225)

The GPU-based computation is about 90 times faster than the CPU! The other benchmark tests eigen:

julia> @btime eigen(RHS \ LHS);
  1.362 s (29 allocations: 3.55 MiB)

julia> @btime eigen(LHS, RHS);
  542.625 ms (19 allocations: 2.46 MiB)

julia> @time eigen(Array(RHS_d \ LHS_d));
  0.209265 seconds (17.91 k allocations: 3.053 MiB)

Although eigen(LHS, RHS) is more efficient than eigen(RHS \ LHS), eigen(Array(RHS_d \ LHS_d)) is still the fastest!

Now we can change the code of solver.jl; only one function is added:

function solve(solver::Solver, k::AbstractVector{<:Real}, polarisation::Polarisation, GPU::Int; bands=:)
    basis = solver.basis
    if GPU == 1
        epc, muc = CuArray(solver.epc), CuArray(solver.muc)
        Kx = CuArray(DiagonalMatrix(basis.kxs) + k[1]*I)
        Ky = CuArray(DiagonalMatrix(basis.kys) + k[2]*I)
        if polarisation == TE
            LHS = Kx/epc*Kx + Ky/epc*Ky
            RHS = muc
            label = L"H_z"
        elseif polarisation == TM
            LHS = Kx/muc*Kx + Ky/muc*Ky
            RHS = epc
            label = L"E_z"
        end
        freqs_squared, modes_data = eigen(Array(RHS \ LHS));
    else
        epc, muc = solver.epc, solver.muc
        Kx = DiagonalMatrix(basis.kxs) + k[1]*I
        Ky = DiagonalMatrix(basis.kys) + k[2]*I
        if polarisation == TE
            LHS = Kx/epc*Kx + Ky/epc*Ky
            RHS = muc
            label = L"H_z"
        elseif polarisation == TM
            LHS = Kx/muc*Kx + Ky/muc*Ky
            RHS = epc
            label = L"E_z"
        end
        freqs_squared, modes_data = try
            eigen(LHS, RHS)
        catch
            eigen(RHS \ LHS)
        end
    end
    
    freqs = sqrt.(freqs_squared)
    # Sort by increasing frequency
    idx = sortperm(freqs, by=real)
    freqs = freqs[idx][bands]
    modes_data = modes_data[:,idx][:,bands]
    # Eigenmodes are weighted by the RHS of the generalised eigenvalue problem
    weighting = RHS
    modes = Mode[]
    for i in 1:length(freqs)
        mode = Mode(k, freqs[i], modes_data[:,i], weighting, basis, label)
        push!(modes, mode)
    end
    return modes
end

The last function of band_diagrams.jl is changed to:

function plot_band_diagram(solver::Solver, ks, polarisation::Polarisation, GPU::Int;
            dk=nothing, labels=[], bands=1:10, frequency_scale=1, color="k", markersize=nothing)
    # Convert BrillouinZoneCoordinate to labelled positions in k space
    if labels == []
        labels = [hasproperty(x,:label) ? x.label : "" for x in ks]
    end
    ks = [typeof(x)==BrillouinZoneCoordinate ? get_k(x,solver.basis) : x for x in ks]
    # Wrap all the variables into a single function of k that returns frequencies
    function my_solve(k)
        modes = solve(solver, k, polarisation, GPU)
        return [mode.frequency for mode in modes]
    end
    # Pass on to more general function
    plot_band_diagram(my_solve, ks; dk=dk, labels=labels, bands=bands, frequency_scale=frequency_scale, color=color, markersize=markersize)
end

And we can run the last benchmark, plotting a band diagram; the model is the same as before:

using PyPlot
using Peacock
using CUDA
GPU = 1 # running on GPU

function epf(x,y)
    # equation of a circle with radius 0.2a
    if x^2+y^2 <= 0.2^2
        # dielectric inside the circle
        return 8.9
    else
        # air outside the circle
        return 1
    end
end

# Permeability is unity everywhere
function muf(x,y)
    return 1
end

a1 = [1, 0]  # first lattice vector
a2 = [0, 1]  # second lattice vector
d1 = 0.01  # resolution along first lattice vector
d2 = 0.01  # resolution along second lattice vector
geometry = Geometry(epf, muf, a1, a2, d1, d2)
fourier_space_cutoff = 7
solver = Solver(geometry, fourier_space_cutoff)
G = BrillouinZoneCoordinate(  0,   0, "Γ")
X = BrillouinZoneCoordinate(1/2,   0, "X")
M = BrillouinZoneCoordinate(1/2, 1/2, "M")
ks = [G,X,M,G]

The results when running in a terminal:

julia> @time plot_band_diagram(solver, ks, TE, 1, color="red",
                   bands=1:4, dk=0.1, frequency_scale=1/2pi)
  3.792055 seconds (510.63 k allocations: 124.970 MiB, 0.71% gc time, 0.14% compilation time)
(0, 0.9109956614327619)

julia> @time plot_band_diagram(solver, ks, TM, 1, color="red",
                   bands=1:4, dk=0.1, frequency_scale=1/2pi)
  3.321212 seconds (571.62 k allocations: 125.828 MiB, 0.57% gc time)
(0, 0.9109956614327619)

julia> @time plot_band_diagram(solver, ks, TE, 0, color="red",
                   bands=1:4, dk=0.1, frequency_scale=1/2pi)
 29.343061 seconds (296.76 k allocations: 50.585 MiB, 0.03% gc time)
(0, 0.9109956614327619)

julia> @time plot_band_diagram(solver, ks, TM, 0, color="red",
                   bands=1:4, dk=0.1, frequency_scale=1/2pi)
 28.964005 seconds (294.21 k allocations: 50.534 MiB, 0.03% gc time)
(0, 0.9109956614327619)

When running in a terminal, the diagram is displayed in real time, one point at a time, which may reduce efficiency. So I also ran these benchmarks in Jupyter:

julia> @time plot_band_diagram(solver, ks, TE, 1, color="red",
                   bands=1:4, dk=0.1, frequency_scale=1/2pi)
  1.423407 seconds (549.43 k allocations: 125.620 MiB, 3.38% gc time)
(0, 0.9109956614327619)

julia> @time plot_band_diagram(solver, ks, TM, 1, color="red",
                   bands=1:4, dk=0.1, frequency_scale=1/2pi)
  0.942141 seconds (568.10 k allocations: 125.764 MiB, 2.36% gc time)
(0, 0.9109956614327619)

julia> @time plot_band_diagram(solver, ks, TE, 0, color="red",
                   bands=1:4, dk=0.1, frequency_scale=1/2pi)
 36.569779 seconds (303.74 k allocations: 51.115 MiB, 0.03% gc time, 0.16% compilation time)
(0, 0.9109956614327619)

julia> @time plot_band_diagram(solver, ks, TM, 0, color="red",
                   bands=1:4, dk=0.1, frequency_scale=1/2pi)
 30.940971 seconds (294.82 k allocations: 50.577 MiB, 0.03% gc time, 0.02% compilation time)
(0, 0.9109956614327619)

GPU-based computing is about 30 times faster than CPU-based.

My versioninfo():

Julia Version 1.6.0
Commit f9720dc2eb (2021-03-24 12:55 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-11.0.1 (ORCJIT, skylake)

CUDA.versioninfo():

CUDA toolkit 10.1.243, artifact installation
CUDA driver 10.1.0
NVIDIA driver 430.50.0

Libraries: 
- CUBLAS: 10.2.1
- CURAND: 10.1.1
- CUFFT: 10.1.1
- CUSOLVER: 10.2.0
- CUSPARSE: 10.3.0
- CUPTI: 12.0.0
- NVML: 10.0.0+430.50
- CUDNN: 8.0.4 (for CUDA 10.1.0)
- CUTENSOR: 1.2.2 (for CUDA 10.1.0)

Toolchain:
- Julia: 1.6.0
- LLVM: 11.0.1
- PTX ISA support: 3.2, 4.0, 4.1, 4.2, 4.3, 5.0, 6.0, 6.1, 6.3, 6.4
- Device support: sm_30, sm_32, sm_35, sm_37, sm_50, sm_52, sm_53, sm_60, sm_61, sm_62, sm_70, sm_72, sm_75

1 device:
  0: GeForce MX150 (sm_61, 957.250 MiB / 1.956 GiB available)

CPU Parallel (TODO)

We can also use multi-threading (@threads) or multi-processing (using Distributed; @distributed) to parallelize the for loop, like:

for (x,k) in zip(xs,ks)
    frequencies = my_solve(k)
    xs_k = [x for _ in 1:length(frequencies[bands])]
    ys = frequency_scale * frequencies[bands]
    PyPlot.plot(xs_k, real(ys), ".", color=color, alpha=0.5, markersize=markersize)
end
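
A threaded sketch of that loop is below (an assumption of how it could look): solve in parallel first, then plot serially, since the PyPlot calls should stay on one thread. Julia must be started with threads enabled, e.g. julia -t 4.

using Base.Threads

frequencies = Vector{Any}(undef, length(ks))
@threads for i in eachindex(ks)
    frequencies[i] = my_solve(ks[i])     # the expensive eigensolves run in parallel
end
for (x, freqs) in zip(xs, frequencies)   # plotting stays serial
    xs_k = fill(x, length(freqs[bands]))
    ys = frequency_scale * freqs[bands]
    PyPlot.plot(xs_k, real(ys), ".", color=color, alpha=0.5, markersize=markersize)
end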

Switch PyPlot to Plots

Currently using Peacock will fail if PyPlot isn't installed correctly

  • We should avoid plotting-related errors if the user is not doing any plotting

  • Can we switch from PyPlot to Plots.jl to let the user choose their preferred plotting backend?

documentation: add example of non-orthogonal lattice

We should add examples of non-orthogonal lattices to the documentation, with some pictures

Interface

Assuming both lattice vectors a1 and a2 should have length == 1

In this case it is only necessary to specify the angles of the lattice vectors:
geometry = Geometry(epf, muf, th1, th2, d1, d2)
where th1 and th2 are scalars

More generally

If the lattice vector lengths should not be equal to 1, then use
geometry = Geometry(epf, muf, a1, a2, d1, d2) where a1 and a2 are 2-vectors

Example

Triangular lattice example:
geometry = Geometry(epf, muf, 0, 60, d1, d2)
or equivalently
geometry = Geometry(epf, muf, [cosd(0),sind(0)], [cosd(60),sind(60)], d1, d2)

To do

Add an example from Joannopoulos' book, such as the TE and TM modes of a triangular array of air columns in a dielectric substrate
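
A sketch of such an example (the parameter values follow the air-column example in Joannopoulos et al., and the M and K coordinates assume fractions of the reciprocal lattice vectors, as in the square-lattice examples above; treat both as assumptions to verify):

using PyPlot
using Peacock

epf(x, y) = x^2 + y^2 <= 0.48^2 ? 1 : 13   # air columns of radius 0.48a in ε = 13
muf(x, y) = 1

a1 = [cosd(0),  sind(0)]    # unit-length lattice vectors at 60° to each other
a2 = [cosd(60), sind(60)]
d1 = d2 = 0.01
geometry = Geometry(epf, muf, a1, a2, d1, d2)
solver = Solver(geometry, 7)

G = BrillouinZoneCoordinate(  0,   0, "Γ")
M = BrillouinZoneCoordinate(1/2,   0, "M")
K = BrillouinZoneCoordinate(2/3, 1/3, "K")
ks = [G, M, K, G]

figure()
plot_band_diagram(solver, ks, TM, color="red",  bands=1:6, dk=0.1, frequency_scale=1/2pi)
plot_band_diagram(solver, ks, TE, color="blue", bands=1:6, dk=0.1, frequency_scale=1/2pi)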

Test with Julia LTS too

Given that the compat entry of the package lists Julia as 1.x compatible, it may be a good idea to test with both the LTS version of Julia and a more recent (1.4/1.5) version. Otherwise, updating the compat to the lowest supported version and testing with that should be sufficient.

GPU support for Peacock.Zoo

Add a GPU=true keyword option (passed on to the solver constructor Solver(..., GPU=GPU)) when loading crystals from the Peacock.Zoo.
