
instant-ngp-windows's Introduction

Instant Neural Graphics Primitives

This is a forked Windows installation tutorial; the main code will not be updated here.

Follow this YouTube tutorial to understand the installation process more easily, and if you have any questions, feel free to join my Discord and ask there.

Ever wanted to train a NeRF model of a fox in under 5 seconds? Or fly around a scene captured from photos of a factory robot? Of course you have!

Here you will find an implementation of four neural graphics primitives: neural radiance fields (NeRF), signed distance functions (SDFs), neural images, and neural volumes. In each case, we train and render an MLP with a multiresolution hash input encoding using the tiny-cuda-nn framework.

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
arXiv:2201.05989 [cs.CV], Jan 2022
[ Project page ] [ Paper ] [ Video ] [ BibTeX ]

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing

Requirements

  • An NVIDIA GPU; tensor cores increase performance when available. All shown results come from an RTX 3090.
  • Python 3.9.x
  • Visual Studio Community 2019 (the latest version is best, ~8 GB). The required install components are shown below: [screenshot]
  • CUDA v11.6. You can check your CUDA version via nvcc --version in any prompt; if it's not CUDA 11.6, refer to this to swap/install the correct version.
  • On some machines, pyexr refuses to install via pip. This can be resolved by installing OpenEXR from here; see below.
  • This installation tutorial will be using Anaconda. Download Anaconda here.
  • OptiX 7.3 or higher for faster mesh SDF training. You need to either log in or join to obtain the installer. Set the system environment variable OptiX_INSTALL_DIR to the installation directory if it is not discovered automatically. It should look like this: [screenshot]

Compilation

Copy the files from C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\visual_studio_integration\MSBuildExtensions to C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations
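If you prefer to script this step, here is a minimal Python sketch of the copy (assuming the default CUDA 11.6 and Visual Studio 2019 Community install paths shown above; run it from an elevated prompt):

import shutil
from pathlib import Path

# Default install locations; adjust these if your CUDA or Visual Studio lives elsewhere.
src = Path(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\visual_studio_integration\MSBuildExtensions")
dst = Path(r"C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations")

for f in src.iterdir():
    shutil.copy2(f, dst)  # copy each CUDA build-customization file, preserving metadata
    print("copied", f.name)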

cd into the directory where you want to download the code, e.g. cd F:\Tutorial\ngp\

Begin by cloning this repository and all its submodules using the following command (if you don't have git, download here and add to path):

$ git clone --recursive https://github.com/nvlabs/instant-ngp
$ cd instant-ngp

If your Python is not 3.9 (check with the command python --version), run the following command to get it to version 3.9.x:

conda install python=3.9

Then open the Developer Command Prompt; you can find it in your search bar.

[screenshot: Developer Command Prompt in the Windows search bar]

Then cd to where you cloned your repository so you are in its root folder /instant-ngp/:

cmake . -B build
cmake --build build --config RelWithDebInfo -j 16

If any of these build steps fail, please consult this list of possible fixes before opening an issue.

If automatic GPU architecture detection fails (as can happen if you have multiple GPUs installed), set the TCNN_CUDA_ARCHITECTURES environment variable to the value for the GPU you would like to use. The following table lists the values for common GPUs. If your GPU is not listed, consult this exhaustive list.

GPU                     TCNN_CUDA_ARCHITECTURES
RTX 30X0                86
A100                    80
RTX 20X0                75
TITAN V / V100          70
GTX 10X0 / TITAN Xp     61
GTX 9X0                 52
K80                     37
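For example, on an RTX 3090 the value is 86. Below is a minimal Python sketch (a hypothetical helper, not part of the repo) that sets the variable and reruns the build; it assumes cmake is on PATH and that you run it from the instant-ngp root in a Developer Command Prompt:

import os
import subprocess

# Pick the value for your GPU from the table above (86 = RTX 30X0).
env = dict(os.environ, TCNN_CUDA_ARCHITECTURES="86")

subprocess.run(["cmake", ".", "-B", "build"], env=env, check=True)
subprocess.run(["cmake", "--build", "build", "--config", "RelWithDebInfo", "-j", "16"],
               env=env, check=True)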

Interactive Training and Rendering on Custom Image Sets

Install COLMAP; version 3.7 was used for this tutorial.

Add it to your system PATH via Environment Variables > System variables > Path > Edit.

[screenshot: adding COLMAP to the system Path variable]
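To confirm COLMAP is actually reachable from the environment you will run the scripts in, a quick Python check (just a convenience, not required by the tutorial) is:

import shutil

# shutil.which returns the full path if colmap is on PATH, otherwise None.
print(shutil.which("colmap") or "colmap not found on PATH - re-check the environment variable")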

Open an Anaconda Prompt (if you don't have it, you can get it here) and cd into instant-ngp as the root folder:

conda create -n ngp python=3.9
conda activate ngp
pip install -r requirements.txt

If pyexr cannot be installed via pip install pyexr, download OpenEXR-1.3.2-cp39-cp39-win_amd64.whl and move it to your root folder. Then you can run:

pip install OpenEXR-1.3.2-cp39-cp39-win_amd64.whl

Place your custom image set under data/<image_set_name>

Generate transforms.json with the following command, inserting the path to your images at <image/path>:

python scripts/colmap2nerf.py --colmap_matcher exhaustive --run_colmap --aabb_scale 16 --images <image/path>

transforms.json will be generated in the root folder; drag and drop it into your data/<image_set_name> folder.

You have to reorganize the folder structure because transforms.json records image paths relative to the folder where it was generated (the repo root), while testbed resolves them relative to the folder that contains transforms.json.

For example:

File structure BEFORE generating transforms.json

📂instant-ngp/ # this is root
├── 📂data/
│	├── 📂toy_truck/
│	│	├── 📜toy_truck_001.jpg
│	│	├── 📜toy_truck_002.jpg
│	│	│...
│   │...
│...

File structure AFTER generating transforms.json

📂instant-ngp/ # this is root
├── 📂data/
│	├── 📂toy_truck/
│	│	├── 📜transforms.json
│	│	├── 📂data/
│	│	│	├── 📂toy_truck/
│	│	│	│	├── 📜toy_truck_001.jpg
│	│	│	│	├── 📜toy_truck_002.jpg
│	│	│	│	│...
│	│	│	│...
│	│	│...
│	│...
│...

Note: lowering the "aabb_scale" value inside transforms.json can reduce GPU VRAM load; the lower the value, the less memory-intensive training will be.
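The following Python sketch (assuming the AFTER layout above, with the toy_truck image set as an example) checks that the image paths in transforms.json resolve relative to the file's own folder and lowers aabb_scale to save VRAM:

import json
from pathlib import Path

tf_path = Path("data/toy_truck/transforms.json")  # adjust to your image set
with open(tf_path) as f:
    transforms = json.load(f)

# Paths are resolved relative to the folder that contains transforms.json,
# which is why the nested data/<image_set_name>/ structure above is needed.
missing = [fr["file_path"] for fr in transforms["frames"]
           if not (tf_path.parent / fr["file_path"]).exists()]
print(f"{len(missing)} of {len(transforms['frames'])} image paths do not resolve")

# Lower aabb_scale to reduce VRAM usage (it was 16 in the colmap2nerf command above).
transforms["aabb_scale"] = 4
with open(tf_path, "w") as f:
    json.dump(transforms, f, indent=2)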

Finally, to run instant-ngp:

<path_to_your_ngp>\instant-ngp\build\testbed.exe --scene data/<image_set_name>

e.g.

C:\user\user\download\instant-ngp\build\testbed.exe --scene data/toy_truck

The GUI should launch with everything amazing in it.

Rendering custom camera path

  1. You may need to install more dependencies in the conda virtual environment: pip install tqdm scipy pillow opencv-python and conda install -c conda-forge ffmpeg. Refer to the pyexr installation above in the installation section if you didn't install that yet.
  2. Train any image set as described above.
  3. Once training has reached a point you are satisfied with, save a snapshot from the GUI (it is one of the tabs; no need to edit the path or the name).
  4. Find the other GUI window called camera path; it may play hide and seek with you, but it is there, so find that window.
  5. The GUI is well made; if you know how to use any 3D engine, it works very similarly. Adding a camera point records the current camera angle as a keyframe on the path.
  6. After you have finished adding your camera points, save the camera path (no need to edit the path or the name).
  7. Render the path with the following command:
python scripts/render.py --scene <scene_path> --n_seconds <seconds> --fps <fps> --render_name <name> --width <resolution_width> --height <resolution_height>

e.g.

python scripts/render.py --scene data/toy --n_seconds 5 --fps 60 --render_name test --width 1920 --height 1080

Your video will be saved at the root folder. You might have to play around with fps and n_seconds to speed the camera motion up or slow it down; I couldn't pin this down exactly because of the lack of information, and this is the best I could come up with. To be honest, this is only a short-term solution, since the author has promised to publish an official one. So stay tuned!
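As a rough rule of thumb (my assumption about how this fork's render.py behaves, not something documented), the saved camera path is spread over n_seconds * fps rendered frames, so increasing n_seconds slows the apparent camera motion:

# Quick sanity check of the frame count for the example command above.
n_seconds, fps = 5, 60
total_frames = n_seconds * fps  # 300 frames rendered along the camera path
print(f"{total_frames} frames; doubling n_seconds doubles the frames and halves the camera speed")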

And my fork edits end here.

Interactive training and rendering

This codebase comes with an interactive testbed that includes many features beyond our academic publication:

  • Additional training features, such as extrinsics and intrinsics optimization.
  • Marching cubes for NeRF->Mesh and SDF->Mesh conversion.
  • A spline-based camera path editor to create videos.
  • Debug visualizations of the activations of every neuron input and output.
  • And many more task-specific settings.
  • See also our one minute demonstration video of the tool.

NeRF fox

One test scene is provided in this repository, using a small number of frames from a casually captured phone video:

instant-ngp$ ./build/testbed --scene data/nerf/fox

Alternatively, download any NeRF-compatible scene (e.g. from the NeRF authors' drive). Now you can run:

instant-ngp$ ./build/testbed --scene data/nerf_synthetic/lego/transforms_train.json

For more information about preparing datasets for use with our NeRF implementation, please see this document.

SDF armadillo

instant-ngp$ ./build/testbed --scene data/sdf/armadillo.obj

Image of Einstein

instant-ngp$ ./build/testbed --scene data/image/albert.exr

To reproduce the gigapixel results, download, for example, the Tokyo image and convert it to .bin using the scripts/image2bin.py script. This custom format improves compatibility and loading speed when resolution is high. Now you can run:

instant-ngp$ ./build/testbed --scene data/image/tokyo.bin

Volume Renderer

Download the nanovdb volume for the Disney cloud, which is derived from here (CC BY-SA 3.0).

instant-ngp$ ./build/testbed --mode volume --scene data/volume/wdas_cloud_quarter.nvdb

Python bindings

To conduct controlled experiments in an automated fashion, all features from the interactive testbed (and more!) have Python bindings that can be easily instrumented. For an example of how the ./build/testbed application can be implemented and extended from within Python, see ./scripts/run.py, which supports a superset of the command line arguments that ./build/testbed does.
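As a rough sketch of headless use (modeled on scripts/run.py in this repo; attribute names may differ between versions, so treat this as an approximation rather than the official API):

import sys
sys.path.append("build")  # folder where pyngp was built; adjust if you built elsewhere

import pyngp as ngp

# Create a NeRF testbed, load the fox scene and the default network config.
testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_training_data("data/nerf/fox")
testbed.reload_network_from_file("configs/nerf/base.json")
testbed.shall_train = True

# Each call to frame() advances training by one step (and renders, if a window exists).
for _ in range(1000):
    if not testbed.frame():
        break

print("final loss:", testbed.loss)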

Happy hacking!

Troubleshooting compile errors

Before investigating further, make sure all submodules are up-to-date and try compiling again.

instant-ngp$ git submodule sync --recursive
instant-ngp$ git submodule update --init --recursive

If instant-ngp still fails to compile, update CUDA as well as your compiler to the latest versions you can install on your system. It is crucial that you update both, as newer CUDA versions are not always compatible with earlier compilers and vice versa. If your problem persists, consult the following table of known issues.

Problem: CMake error: No CUDA toolset found / CUDA_ARCHITECTURES is empty for target "cmTC_0c70f"
Resolution (Windows): the Visual Studio CUDA integration was not installed correctly. Follow these instructions to fix the problem without re-installing CUDA. (#18)
Resolution (Linux): environment variables for your CUDA installation are probably incorrectly set. You may work around the issue using cmake . -B build -DCMAKE_CUDA_COMPILER=/usr/local/cuda-<your cuda version>/bin/nvcc (#28)

Problem: CMake error: No known features for CXX compiler "MSVC"
Resolution: Reinstall Visual Studio and make sure you run CMake from a developer shell. (#21)

Problem: Compile error: undefined references to "cudaGraphExecUpdate" / identifier "cublasSetWorkspace" is undefined
Resolution: Update your CUDA installation (which is likely 11.0) to 11.3 or higher. (#34 #41 #42)

Problem: Compile error: too few arguments in function call
Resolution: Update submodules with the above two git commands. (#37 #52)

Problem: Python error: No module named 'pyngp'
Resolution: It is likely that CMake did not detect your Python installation and therefore did not build pyngp. Check the CMake logs to verify this. If pyngp was built in a different folder than instant-ngp/build, Python will be unable to detect it and you have to supply the full path in the import statement. (#43)

If you cannot find your problem in the table, please feel free to open an issue and ask for help.

Thanks

Many thanks to Jonathan Tremblay and Andrew Tao for testing early versions of this codebase and to Arman Toorians and Saurabh Jain for the factory robot dataset. We also thank Andrew Webb for noticing that one of the prime numbers in the spatial hash was not actually prime; this has been fixed since.

This project makes use of a number of awesome open source libraries, including:

  • tiny-cuda-nn for fast CUDA MLP networks
  • tinyexr for EXR format support
  • tinyobjloader for OBJ format support
  • stb_image for PNG and JPEG support
  • Dear ImGui an excellent immediate mode GUI library
  • Eigen a C++ template library for linear algebra
  • pybind11 for seamless C++ / Python interop
  • and others! See the dependencies folder.

Many thanks to the authors of these brilliant projects!

License and Citation

@article{mueller2022instant,
    title = {Instant Neural Graphics Primitives with a Multiresolution Hash Encoding},
    author = {Thomas M\"uller and Alex Evans and Christoph Schied and Alexander Keller},
    journal = {arXiv:2201.05989},
    year = {2022},
    month = jan
}

Copyright © 2022, NVIDIA Corporation. All rights reserved.

This work is made available under the Nvidia Source Code License-NC. Click here to view a copy of this license.


instant-ngp-windows's Issues

Vulkan error : build\testbed --scene data/fox

When running build\testbed --scene data/fox:
• Vulkan error: loader_validate_device_extensions: Device extension VK_NVX_binary_import not supported by selected physical device or enabled layers.
• Vulkan error: vkCreateDevice: Failed to validate extensions in list
• WARNING Could not initialize Vulkan and NGX. DLSS not supported

windows 10 x64
NVIDIA GTX 1080 Ti
Python  3.9
cuda 11.6
instant-ngp  version=2022-11-16
cmake 3.22
VulkanSDK 1.3.231.1
(py39) D:\Program\Anaconda3\envs\py39\Lib\site-packages\instant-ngp>build\testbed --scene data/fox
04:51:54 INFO     Loading NeRF dataset from
04:51:54 INFO       data\fox\transforms.json
04:51:54 SUCCESS  Loaded 50 images after 0s
04:51:54 INFO       cam_aabb=[min=[1.12716,-0.673166,-0.135153], max=[2.06779,1.13592,1.24793]]
04:51:55 INFO     Loading network config from: configs\nerf\base.json
04:51:55 INFO     GridEncoding:  Nmin=16 b=1.66248 F=2 T=2^19 L=16
Warning: FullyFusedMLP is not supported for the selected architecture 61. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
Warning: FullyFusedMLP is not supported for the selected architecture 61. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
Warning: FullyFusedMLP is not supported for the selected architecture 61. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
04:51:55 INFO     Density model: 3--[HashGrid]-->32--[FullyFusedMLP(neurons=64,layers=3)]-->1
04:51:55 INFO     Color model:   3--[Composite]-->16+16--[FullyFusedMLP(neurons=64,layers=4)]-->3
04:51:55 INFO       total_encoding_params=13623184 total_network_params=10240

04:51:55 WARNING  Vulkan error: loader_validate_device_extensions: Device extension VK_NVX_binary_import not supported by selected physical device or enabled layers.

04:51:55 WARNING  Vulkan error: vkCreateDevice: Failed to validate extensions in list

04:51:55 WARNING  Could not initialize Vulkan and NGX. DLSS not supported. (D:\Program\Anaconda3\envs\py39\Lib\site-packages\instant-ngp\src\dlss.cu:358 vkCreateDevice(vk_physical_device, &device_create_info, nullptr, &vk_device) failed)

04:51:55 ERROR    Uncaught exception: D:\Program\Anaconda3\envs\py39\Lib\site-packages\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/cutlass_matmul.h:332 status failed with error Error Internal

Error-[option_manager.cc:811] Check failed: ExistsDir(*image_path)

I made it to "Get transform.json" successfully. However when I use the command: python scripts/colmap2nerf.py --colmap_matcher exhaustive --run_colmap --aabb_scale 16 --images /image/vibe I get the error below.

I am using an admin terminal, and have tried the conda prompt as well. I am in the data directory of the repo C:\stable-diffusion\instant-ngp\data. My goal is to create a directory named 'vibe' where I am going to put my photos.

instant-ngp/
└── data/
    └── image/
        └── vibe/

I'm so close after days of resolving an issue with Git and Windows ASLR!
What am I doing wrong, and what info can I give to help? I am on Windows 10x64.

C:\stable-diffusion\instant-ngp>python scripts/colmap2nerf.py --colmap_matcher exhaustive --run_colmap --aabb_scale 16 --images /image/vibe
running colmap with:
        db=colmap.db
        images="/image/vibe"
        sparse=colmap_sparse
        text=colmap_text
warning! folders 'colmap_sparse' and 'colmap_text' will be deleted/replaced. continue? (Y/n)y
==== running: colmap feature_extractor --ImageReader.camera_model OPENCV --ImageReader.camera_params "" --SiftExtraction.estimate_affine_shape=true --SiftExtraction.domain_size_pooling=true --ImageReader.single_camera 1 --database_path colmap.db --image_path "/image/vibe"
[option_manager.cc:811] Check failed: ExistsDir(*image_path)
ERROR: Invalid options provided.
FATAL: command failed

I don't know why the error is occurring. Is it a problem with how the environment variables are set?

C:\Users\user\Desktop\Tutorial\ngp\instant-ngp>cmake . -B build
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.22621.
-- The CUDA compiler identification is unknown
CMake Error at CMakeLists.txt:11 (project):
No CMAKE_CUDA_COMPILER could be found.

-- Configuring incomplete, errors occurred!
See also "C:/Users/user/Desktop/Tutorial/ngp/instant-ngp/build/CMakeFiles/CMakeOutput.log".
See also "C:/Users/user/Desktop/Tutorial/ngp/instant-ngp/build/CMakeFiles/CMakeError.log".

How can I convert transformation matrix from transforms.json to base_cam.json format?

Hi.

I'd like to render an image with exact same pose from transforms.json file using render.py.

For example, in data/nerf/fox/, the first image images/0001.jpg has the transform matrix below.

      "transform_matrix": [
    [
      0.8926439112348871,
      0.08799600283226543,
      0.4420900262071262,
      3.168359405609479
    ],
    [
      0.4464189982715247,
      -0.03675452191179031,
      -0.8940689141475064,
      -5.4794898611466945
    ],
    [
      -0.062425682580756266,
      0.995442519072023,
      -0.07209178487538156,
      -0.9791660699008925
    ],
    [
      0.0,
      0.0,
      0.0,
      1.0
    ]

Here, I have to convert it to a quaternion & translation to use it in the base_cam.json format,

for example, ...

"R":[0.8109075427055359,0.03559545800089836,0.5819807648658752,-0.04959719628095627],
"T": -2.2435450553894043,0.11715501546859741,1.4327683448791504],"dof":0.0,"fov":50.625,"scale":2.9230759143829346,"slice":0.03611110895872116

I tried to convert it using the code below, but the result does not match what I want. Am I missing something?

import numpy as np
from scipy.spatial.transform import Rotation as R

# 3x4 camera-to-world matrix taken from transforms.json (rotation | translation)
T = np.array([
    [ 0.8926439112348871,    0.08799600283226543,  0.4420900262071262,   3.168359405609479],
    [ 0.4464189982715247,   -0.03675452191179031, -0.8940689141475064,  -5.4794898611466945],
    [-0.062425682580756266,  0.995442519072023,   -0.07209178487538156, -0.9791660699008925],
])

# Extract rotation matrix (R) and translation vector (t)
R_mat = T[:3, :3]
t = T[:3, 3]

# Convert the rotation matrix to a quaternion (scipy returns x, y, z, w order)
rot = R.from_matrix(R_mat)
q = rot.as_quat()

print("R (quaternion):", q)
print("T (translation):", t)

colmap error

Encountered the error below while preparing the dataset:

==== running: colmap exhaustive_matcher --SiftMatching.guided_matching=true --database_path colmap.db

==============================================================================
Exhaustive feature matching

Matching block [1/1, 1/1] in 0.890s
Elapsed time: 0.023 [minutes]
==== running: mkdir colmap_sparse
==== running: colmap mapper --database_path colmap.db --image_path "C:\nvidia-instant-ngp\instant-ngp\data\nerf\vivid-alast" --output_path colmap_sparse

==============================================================================
Loading database

Loading cameras... 1 in 0.000s
Loading matches... 0 in 0.000s
Loading images... 12 in 0.000s (connected 0)
Building correspondence graph... in 0.000s (ignored 0)

Elapsed time: 0.000 [minutes]

WARNING: No images with matches found in the database.

ERROR: failed to create sparse model
FATAL: command failed

I think this is an issue with COLMAP... has anyone encountered the same? Any suggestions?

Visual Studio Community 2022 v17.10 Not Compatible...Visual Studio Professional 2022 v17.9 Not Supported

Visual Studio Community 2022 v17.10 throws up an error stating only VS 2017 to 2022 is supported. Upon further research I found that VS 2022 17.9 is supported, but older versions of VS Community 2022 cannot be downloaded and only the current version is supported by Microsoft. I removed VS Community 2022, downloaded VS 2022 Professional v17.9, ran cmake, and now get this error because NVIDIA has not updated this project in the past 2 years.

Thanks for wasting 2 weeks of my time troubleshooting errors.

C:\NGP\instant-ngp>cmake . -B build
CMake Error at CMakeLists.txt:11 (project):
Generator

Visual Studio 17 2022

could not find specified instance of Visual Studio:

C:/Program Files/Microsoft Visual Studio/2022/Community

-- Configuring incomplete, errors occurred!

Nerf with coral

Since it uses tiny-cuda-nn, is it possible to use a Google Coral with this project?

CMAKE_CUDA_ARCHITECTURES must be valid if set.

Hi! Thanks for your work:)
When I run cmake . -B build, I get this:

-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.17763.
CMake Error at E:/Software/cmake/share/cmake-3.23/Modules/CMakeDetermineCUDACompiler.cmake:311 (message):
  CMAKE_CUDA_ARCHITECTURES must be valid if set.
Call Stack (most recent call first):
  CMakeLists.txt:11 (project)

-- Configuring incomplete, errors occurred!
See also "D:/graduate/nerf/instant-ngp-master/build/CMakeFiles/CMakeOutput.log".
See also "D:/graduate/nerf/instant-ngp-master/build/CMakeFiles/CMakeError.log".

I searched for this online and tried the two things below:

  1. Add set(CMAKE_CUDA_ARCHITECTURES 48) in CMakeLists.txt
  2. Use cmake . -B build -DCMAKE_CUDA_COMPILER=E:\Software\cuda\bin\nvcc -DTCNN_CUDA_ARCHITECTURES=61

But they didn't work.

Which graphics card should I own for this to run fluently?

I get the dreaded out-of-memory error from CUDA. I limited the number of pictures to 40 and the resolution to 320 x 240, and I still get it.

I am running a GeForce GTX 960 with 2048 MB of dedicated video memory.

Can I make it run on this GPU or which graphics card should I own for this to run fluently?

Network config path configs\nerf\base.json does not exist.

C:\Users\me\nerf\instant-ngp\build>testbed.exe --scene ../data/fox
15:37:58 INFO Loading NeRF dataset from
15:37:58 INFO ..\data\fox\transforms.json
15:37:58 SUCCESS Loaded 50 images after 0s
15:37:58 INFO cam_aabb=[min=[1.0229,-1.33309,-0.378748], max=[2.46175,1.00721,1.41295]]
15:37:59 ERROR Network config path configs\nerf\base.json does not exist.

I think I'm almost there, but I get this error, which doesn't bring up many results on Google. Does anyone know what this could be?

The GUI doesn't start; it crashes in no time

Hello,
I followed the whole tutorial and all the steps to run the instant-ngp GUI, but it crashes and never starts when I run this line:
"""<path_to_your_ngp>\instant-ngp\build\testbed.exe --scene data/<image_set_name>"""
as shown in the attached screenshot; it stops and crashes.
I also tried decreasing the number of images in the dataset from 100 to 50, but the same error appeared.
I'm not sure, but I think it is related to a hardware (GPU) problem rather than a software one, although I thought it could run on GPU types weaker than mine.
I installed CUDA 11.7.
My GPU is an NVIDIA MX150.
Thanks in advance...
