
Use the universal VDF format for vector datasets to easily export and import data from all vector databases

License: Apache License 2.0

Python 8.41% Jupyter Notebook 91.56% Shell 0.03%
parquet vector-database vector-search-engine astradb chromadb data-backup data-export data-import datastax huggingface huggingface-datasets kdb lancedb milvus pinecone qdrant vertex-ai zilliz turbopuffer

vector-io's Introduction

Vector IO


This library uses a universal format for vector datasets to easily export and import data from all vector databases.

Request support for a VectorDB by voting/commenting on this poll

See the Contributing section to add support for your favorite vector database.

Supported Vector Databases

Fully Supported

  • Pinecone
  • Qdrant
  • Milvus
  • GCP Vertex AI Vector Search
  • KDB.AI
  • LanceDB
  • DataStax Astra DB
  • Chroma
  • Turbopuffer

Partial

In Progress

  • Azure AI Search
  • Weaviate
  • MongoDB Atlas
  • OpenSearch
  • Apache Cassandra
  • txtai
  • pgvector
  • SQLite-VSS

Not Supported
  • Vespa
  • Marqo
  • Elasticsearch
  • Redis Search
  • ClickHouse
  • USearch
  • Rockset
  • Epsilla
  • Activeloop Deep Lake
  • ApertureDB
  • CrateDB
  • Meilisearch
  • MyScale
  • Neo4j
  • Nuclia DB
  • OramaSearch
  • Typesense
  • Anari AI
  • Vald
  • Apache Solr

Installation

Using pip

pip install vdf-io

From source

git clone https://github.com/AI-Northstar-Tech/vector-io.git
cd vector-io
pip install -r requirements.txt

Universal Vector Dataset Format (VDF) specification

  1. VDF_META.json: a JSON file following the VDFMeta schema defined in src/vdf_io/meta_types.py:
from typing import Any, Dict, List, Optional

from pydantic import BaseModel


class NamespaceMeta(BaseModel):
    namespace: str
    index_name: str
    total_vector_count: int
    exported_vector_count: int
    dimensions: int
    model_name: str | None = None
    vector_columns: List[str] = ["vector"]
    data_path: str
    metric: str | None = None
    index_config: Optional[Dict[Any, Any]] = None
    schema_dict: Optional[Dict[str, Any]] = None


class VDFMeta(BaseModel):
    version: str
    file_structure: List[str]
    author: str
    exported_from: str
    indexes: Dict[str, List[NamespaceMeta]]
    exported_at: str
    id_column: Optional[str] = None
  2. Parquet files/folders for metadata and vectors.
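
A quick way to sanity-check an exported dataset is to load its VDF_META.json back through the VDFMeta model. The snippet below is an illustrative sketch (the dataset path is hypothetical), assuming vdf-io is installed:

import json

from vdf_io.meta_types import VDFMeta

# Parse and validate the metadata file of an exported VDF dataset (path is illustrative).
with open("my-vdf-dataset/VDF_META.json") as f:
    meta = VDFMeta(**json.load(f))

# Each index maps to a list of per-namespace metadata entries.
for index_name, namespaces in meta.indexes.items():
    for ns in namespaces:
        print(index_name, ns.namespace, ns.exported_vector_count, ns.data_path)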

Export Script

export_vdf --help
usage: export_vdf [-h] [-m MODEL_NAME]
                  [--max_file_size MAX_FILE_SIZE]
                  [--push_to_hub | --no-push_to_hub]
                  [--public | --no-public]
                  {pinecone,qdrant,kdbai,milvus,vertexai_vectorsearch}
                  ...

Export data from various vector databases to the VDF format for vector datasets

options:
  -h, --help            show this help message and exit
  -m MODEL_NAME, --model_name MODEL_NAME
                        Name of model used
  --max_file_size MAX_FILE_SIZE
                        Maximum file size in MB (default:
                        1024)
  --push_to_hub, --no-push_to_hub
                        Push to hub
  --public, --no-public
                        Make dataset public (default:
                        False)

Vector Databases:
  Choose the vectors database to export data from

  {pinecone,qdrant,kdbai,milvus,vertexai_vectorsearch}
    pinecone            Export data from Pinecone
    qdrant              Export data from Qdrant
    kdbai               Export data from KDB.AI
    milvus              Export data from Milvus
    vertexai_vectorsearch
                        Export data from Vertex AI Vector
                        Search

Import script

import_vdf --help
usage: import_vdf [-h] [-d DIR] [-s | --subset | --no-subset]
                  [--create_new | --no-create_new]
                  {milvus,pinecone,qdrant,vertexai_vectorsearch,kdbai}
                  ...

Import data from VDF to a vector database

options:
  -h, --help            show this help message and exit
  -d DIR, --dir DIR     Directory to import
  -s, --subset, --no-subset
                        Import a subset of data (default: False)
  --create_new, --no-create_new
                        Create a new index (default: False)

Vector Databases:
  Choose the vectors database to export data from

  {milvus,pinecone,qdrant,vertexai_vectorsearch,kdbai}
    milvus              Import data to Milvus
    pinecone            Import data to Pinecone
    qdrant              Import data to Qdrant
    vertexai_vectorsearch
                        Import data to Vertex AI Vector Search
    kdbai               Import data to KDB.AI

Re-embed script

This script re-embeds a vector dataset: it takes a directory containing a dataset in the VDF format and re-embeds it using a new model. You can also specify the name of the column containing the text to be embedded.

reembed_vdf --help
usage: reembed_vdf [-h] -d DIR [-m NEW_MODEL_NAME]
                  [-t TEXT_COLUMN]

Reembed a vector dataset

options:
  -h, --help            show this help message and exit
  -d DIR, --dir DIR     Directory of vector dataset in
                        the VDF format
  -m NEW_MODEL_NAME, --new_model_name NEW_MODEL_NAME
                        Name of new model to be used
  -t TEXT_COLUMN, --text_column TEXT_COLUMN
                        Name of the column containing
                        text to be embedded

Examples

export_vdf -m hkunlp/instructor-xl --push_to_hub pinecone --environment gcp-starter

import_vdf -d /path/to/vdf/dataset milvus

reembed_vdf -d /path/to/vdf/dataset -m sentence-transformers/all-MiniLM-L6-v2 -t title

Follow the prompts to select the index and ID range to export.

Contributing

Adding a new vector database

If you wish to add an import/export implementation for a new vector database, you must implement both directions, import and export, for that database. Please fork the repo and send a PR covering both scripts.

Steps to add a new vector database (referred to below as ABC):

  1. Add your database name in src/vdf_io/names.py in the DBNames enum class.
  2. Create new files src/vdf_io/export_vdf/export_abc.py and src/vdf_io/import_vdf/import_abc.py for the new DB.

Export:

  1. In your export file, define a class ExportABC which inherits from ExportVDF.
  2. Specify a DB_NAME_SLUG for the class.
  3. The class should implement (see the sketch after this list):
    1. a make_parser() function to add database-specific arguments to the export_vdf CLI
    2. an export_vdb() function to prompt the user for any info not provided on the CLI; it should then call get_data()
    3. a get_data() function to download points (in batches), with all their metadata, from the specified index of the vector database; the data should be stored in a series of parquet files/folders, and the metadata in a JSON file with the schema above
  4. Use the script to export data from an example index of the vector database and verify that the data is exported correctly.
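
A rough sketch of what such an export class can look like. The method names follow the descriptions above, but the exact signatures and the import of the ExportVDF base class are not shown in this README, so treat this as illustrative and check the existing exporters in src/vdf_io/export_vdf/ for the real interface:

class ExportABC(ExportVDF):  # base-class import omitted; its module path is not given here
    DB_NAME_SLUG = "abc"  # slug added to the DBNames enum in src/vdf_io/names.py

    @classmethod
    def make_parser(cls, subparsers):
        # Add database-specific arguments to the export_vdf CLI.
        parser = subparsers.add_parser(cls.DB_NAME_SLUG, help="Export data from ABC")
        parser.add_argument("--url", help="URL of the ABC instance (illustrative option)")

    @classmethod
    def export_vdb(cls, args):
        # Prompt the user for anything not supplied on the CLI, then run the export.
        export_obj = cls(args)
        export_obj.get_data()
        return export_obj

    def get_data(self):
        # Download points in batches from the chosen index, write vectors and metadata
        # to parquet files/folders, and record the per-namespace metadata for VDF_META.json.
        ...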

Import:

  1. In your import file, define a class ImportABC which inherits from ImportVDF.
  2. Specify a DB_NAME_SLUG for the class.
  3. The class should implement (see the sketch after this list):
    1. a make_parser() function to add database-specific arguments to the import_vdf CLI, such as the URL of the database, any authentication tokens, etc.
    2. an import_vdb() function to prompt the user for any info not provided on the CLI; it should then call upsert_data()
    3. an upsert_data() function to upload points from a VDF dataset (in batches), with all their metadata, to the specified index of the vector database; all metadata about the dataset should be read from the VDF_META.json file in the VDF folder
  4. Use the script to import the example VDF dataset exported in the previous step and verify that the data is imported correctly.
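
The import side mirrors the export sketch above; again, the signatures and the ImportVDF base-class import are illustrative rather than the library's definitive interface:

class ImportABC(ImportVDF):  # base-class import omitted; its module path is not given here
    DB_NAME_SLUG = "abc"

    @classmethod
    def make_parser(cls, subparsers):
        # Database-specific arguments for the import_vdf CLI (URL, auth tokens, ...).
        parser = subparsers.add_parser(cls.DB_NAME_SLUG, help="Import data to ABC")
        parser.add_argument("--url", help="URL of the ABC instance (illustrative option)")

    @classmethod
    def import_vdb(cls, args):
        # Prompt the user for anything not supplied on the CLI, then upsert the data.
        import_obj = cls(args)
        import_obj.upsert_data()
        return import_obj

    def upsert_data(self):
        # Read VDF_META.json from the dataset directory, then upload points with their
        # metadata to the specified index of the vector database in batches.
        ...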

Changing the VDF specification

If you wish to change the VDF specification, please open an issue to discuss the change before sending a PR.

Efficiency improvements

If you wish to improve the efficiency of the import/export scripts, please fork the repo and send a PR.

Telemetry

Running the scripts in the repo will send anonymous usage data to AI Northstar Tech to help improve the library.

You can opt out of this by setting the environment variable DISABLE_TELEMETRY_VECTORIO to 1.

Questions

If you have any questions, please open an issue on the repo or message Dhruv Anand on LinkedIn.


vector-io's Issues

Export from pinecone seems to be a little bit hacky

Thanks team for the great work!

When I go through Pinecone's exporter, it seems there is no guarantee that it can retrieve all the data.
Are there any experiments on how fast and how complete an export from Pinecone can be?

Sweep: Add Support for Turbopuffer

Documentation for Turbopuffer sdk: https://turbopuffer.com/docs/

  1. add turbopuffer[fast] to requirements.txt

  2. Upsert code:

import turbopuffer as tpuf

ns = tpuf.Namespace('namespace-name')
# If an error occurs, this call raises a tpuf.APIError if a retry was not successful.
ns.upsert(
  ids=[1, 2, 3, 4],
  vectors=[[0.1, 0.1], [0.2, 0.2], [0.3, 0.3], [0.4, 0.4]],
  attributes={
    'my-string': ['one', None, 'three', 'four'],
    'my-uint': [12, None, 84, 39],
    'my-string-array': [['a', 'b'], ['b', 'd'], [], ['c']]
  },
  distance_metric='cosine_distance'
)
  3. Export code:
import turbopuffer as tpuf

ns = tpuf.Namespace('namespace-name')

# Cursor paging is handled automatically by the Python client
# If an error occurs, this call raises a tpuf.APIError if a retry was not successful.
for row in ns.vectors():
  print(row)
# VectorRow(id=1, vector=[0.1, 0.1], attributes={'key1': 'one', 'key2': 'a'})
# VectorRow(id=2, vector=[0.2, 0.2], attributes={'key1': 'two', 'key2': 'b'})
# VectorRow(id=3, vector=[0.3, 0.3], attributes={'key1': 'three', 'key2': 'c'})
# VectorRow(id=4, vector=[0.4, 0.4], attributes={'key1': 'four', 'key2': 'd'})

Follow the guidelines at AI-Northstar-Tech/vector-io#adding-a-new-vector-database to implement support for Turbopuffer in Vector-io

Follow the additions in PR: #77

Checklist of features for completion

  • Add mapping of distance metric names
  • Support local and cloud instances
  • Automatically create Python classes for index being exported
  • Export
    • Get all indexes by default
    • Option to Specify index names to export
    • DB-specific command line options (make_parser)
    • Allow input on terminal for each option above (via input() in python) export_vdb
    • Handle multiple vectors per row
  • Import
    • DB-specific command line options (make_parser)
    • Handle multiple vectors per row
    • Allow input on terminal for each option above (via input() in python) export_vdb

Error: 'str' object has no attribute 'starts_with' while importing to Pinecone serverless

Hi, I exported data from a pod-based index and replaced a key and value in VDF_META.json so I could import it into a specific serverless index:

Before

    "indexes": {
        "namespace1": [
            {
                "namespace": "dev-clsix3wzg00052eeep6bdgvcl",
                "index_name": "namespace1",
                "total_vector_count": 84,
                "exported_vector_count": 84,
                "dimensions": 1536,
                "model_name": "text-embedding-ada-002",
                "vector_columns": [
                    "vector"
                ],
                "data_path": "namespace1_dev-clsix3wzg00052eeep6bdgvcl/i1.parquet",
                "metric": "Cosine"
            }
        ]
    },

After

    "indexes": {
        "namespace1-serverless": [
            {
                "namespace": "dev-clsix3wzg00052eeep6bdgvcl",
                "index_name": "namespace1-serverless",
                "total_vector_count": 84,
                "exported_vector_count": 84,
                "dimensions": 1536,
                "model_name": "text-embedding-ada-002",
                "vector_columns": [
                    "vector"
                ],
                "data_path": "namespace1_dev-clsix3wzg00052eeep6bdgvcl/i1.parquet",
                "metric": "Cosine"
            }
        ]
    },

When trying to import by running import_vdf pinecone --serverless, I ran into this error:

Error: 'str' object has no attribute 'starts_with'
Traceback (most recent call last):
  File "/home/neeraj/.local/lib/python3.10/site-packages/vdf_io/import_vdf_cli.py", line 52, in main
    run_import(span)
  File "/home/neeraj/.local/lib/python3.10/site-packages/vdf_io/import_vdf_cli.py", line 129, in run_import
    import_obj = slug_to_import_func[args["vector_database"]](args)
  File "/home/neeraj/.local/lib/python3.10/site-packages/vdf_io/import_vdf/pinecone_import.py", line 80, in import_vdb
    pinecone_import.upsert_data()
  File "/home/neeraj/.local/lib/python3.10/site-packages/vdf_io/import_vdf/pinecone_import.py", line 172, in upsert_data
    parquet_files = self.get_parquet_files(final_data_path)
  File "/home/neeraj/.local/lib/python3.10/site-packages/vdf_io/import_vdf/vdf_import_cls.py", line 77, in get_parquet_files
    return get_parquet_files(data_path, self.args)
  File "/home/neeraj/.local/lib/python3.10/site-packages/vdf_io/util.py", line 196, in get_parquet_files
    if args.get("hf_dataset", None) or data_path.starts_with("hf://"):
AttributeError: 'str' object has no attribute 'starts_with'. Did you mean: 'startswith'?

KDB insert problem

Facing this issue while trying to upsert some data to kdb.ai

@qynikos, could you have a look? It's on v0.1.101

with this command:

import_vdf \
        --id_column PMID \
        --subset \
        --max_num_rows 200 \
        --hf_dataset somewheresystems/dataclysm-pubmed \
        --vector_columns title_embedding,abstract_embedding \
        kdbai \
        --url https://cloud.kdb.ai/instance/n6qap7ddvz
Error: Error inserting chunk: Failed to insert data in table named: dataclysm_pubmed, because of: <html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty/1.19.9.1</center>
</body>
</html>
.
Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.11/site-packages/kdbai_client/api.py", line 536, in insert
    return self.session._rest_post_qipc(Session.INSERT_PATH, self.name, data, True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/kdbai_client/api.py", line 341, in _rest_post_qipc
    res = request.urlopen(req)
          ^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 525, in open
    response = meth(req, response)
               ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 634, in http_response
    response = self.parent.error(
               ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 563, in error
    return self._call_chain(*args)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 496, in _call_chain
    result = func(*args)
             ^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 643, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 502: Bad Gateway

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.11/site-packages/vdf_io/import_vdf/kdbai_import.py", line 203, in upsert_data
    table.insert(chunk)
  File "/opt/homebrew/lib/python3.11/site-packages/kdbai_client/api.py", line 538, in insert
    raise KDBAIException(f'Failed to insert data in table named: {self.name}.', e=e)
kdbai_client.api.KDBAIException: Failed to insert data in table named: dataclysm_pubmed, because of: <html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty/1.19.9.1</center>
</body>
</html>
.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.11/site-packages/vdf_io/import_vdf_cli.py", line 53, in main
    run_import(span)
  File "/opt/homebrew/lib/python3.11/site-packages/vdf_io/import_vdf_cli.py", line 131, in run_import
    import_obj = slug_to_import_func[args["vector_database"]](args)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/vdf_io/import_vdf/kdbai_import.py", line 47, in import_vdb
    kdbai_import.upsert_data()
  File "/opt/homebrew/lib/python3.11/site-packages/vdf_io/import_vdf/kdbai_import.py", line 213, in upsert_data
    raise RuntimeError(f"Error inserting chunk: {e}")
RuntimeError: Error inserting chunk: Failed to insert data in table named: dataclysm_pubmed, because of: <html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty/1.19.9.1</center>
</body>
</html>
.

Is it possible to skip namespaces with errors and move on to next ones?

Upon running the export script, I ran into this error and the script stopped. Is there any way to move on to the next namespace in case an error occurs while exporting?

Error: 1 validation error for NamespaceMeta
metric
  Field required [type=missing, input_value={'namespace': 'dev-clsika...ey96iff1y30/i1.parquet'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.6/v/missing
Traceback (most recent call last):
  File "/home/neeraj/.local/lib/python3.10/site-packages/vdf_io/export_vdf_cli.py", line 56, in main
    run_export(span)
  File "/home/neeraj/.local/lib/python3.10/site-packages/vdf_io/export_vdf_cli.py", line 123, in run_export
    export_obj = slug_to_export_func[args["vector_database"]](args)
  File "/home/neeraj/.local/lib/python3.10/site-packages/vdf_io/export_vdf/pinecone_export.py", line 118, in export_vdb
    pinecone_export.get_data()
  File "/home/neeraj/.local/lib/python3.10/site-packages/vdf_io/export_vdf/pinecone_export.py", line 443, in get_data
    index_meta = self.get_data_for_index(index_name)
  File "/home/neeraj/.local/lib/python3.10/site-packages/vdf_io/export_vdf/pinecone_export.py", line 540, in get_data_for_index
    namespace_meta = NamespaceMeta(
  File "/home/neeraj/.local/lib/python3.10/site-packages/pydantic/main.py", line 171, in __init__
    self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for NamespaceMeta
metric
  Field required [type=missing, input_value={'namespace': 'dev-clsika...ey96iff1y30/i1.parquet'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.6/v/missing
Final Step: Fetching vectors: 172it [00:02, 64.60it/s]

Add support for pgvector

Follow the guidelines at https://github.com/AI-Northstar-Tech/vector-io#adding-a-new-vector-database to implement support for PGVector in Vector-io

Join the Discord server for the library at https://discord.gg/RZbXha62Fg, and ask any questions on the #vector-io-dev channel.

Checklist of features for completion

  • Add mapping of distance metric names
  • Support local and cloud instances
  • Automatically create Python classes for index being exported
  • Export
    • Get all indexes by default
    • Option to Specify index names to export
    • DB-specific command line options (make_parser)
    • Allow input on terminal for each option above (via input() in python) export_vdb
    • Handle multiple vectors per row
  • Import
    • DB-specific command line options (make_parser)
    • Handle multiple vectors per row
    • Allow input on terminal for each option above (via input() in python) export_vdb

PGVector python client documentation: https://github.com/pgvector/pgvector-python

For any questions, contact @dhruv-anand-aintech on Linkedin (https://www.linkedin.com/in/dhruv-anand-ainorthstartech/)

Add Support for Weaviate

Follow the guidelines at https://github.com/AI-Northstar-Tech/vector-io#adding-a-new-vector-database to implement support for Weaviate in Vector-io

Join the Discord server for the library at https://discord.gg/RZbXha62Fg, and ask any questions on the #vector-io-dev channel.

Checklist of features for completion

  • Add mapping of distance metric names
  • Support local and cloud instances
  • Automatically create Python classes for index being exported
  • Export
    • Get all indexes by default
    • Option to Specify index names to export
    • DB-specific command line options (make_parser)
    • Allow input on terminal for each option above (via input() in python) export_vdb
    • Handle multiple vectors per row
  • Import
    • DB-specific command line options (make_parser)
    • Handle multiple vectors per row
    • Allow input on terminal for each option above (via input() in python) export_vdb

"keep-alive" script for each cloud VDB

Free trials and free tiers of cloud VDB offerings allow only a fixed idle period, after which the free cluster/index is taken offline. This script would perform some no-op actions on the index to keep it alive, and could be run as a daily cron job.
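
A minimal sketch of such a no-op for Pinecone, assuming the current Pinecone Python client and a hypothetical index name; any lightweight read-only call would serve the same purpose:

import os

from pinecone import Pinecone

# A read-only call that registers activity on the index without modifying any data.
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("my-index")  # hypothetical index name
print(index.describe_index_stats())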

Pinecone Import: Multiple matches for FieldRef.Name(__filename) in id: string

Attached is one of the parquet files generated from a Pinecone export. When I try to re-import I get these errors regarding duplicate fields.

Multiple matches for FieldRef.Name(__filename) in id: string vector: list<element: double> __filename: string __ingested_at: string content_id: string filename: string ingested_at: string text: string __fragment_index: int32 __batch_index: int32 __last_in_fragment: bool __filename: string

i2.parquet.zip
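
One way to confirm which fields are duplicated is to inspect the parquet schema directly. A quick sketch, assuming pyarrow is installed and using a hypothetical local path to the attached file:

from collections import Counter

import pyarrow.parquet as pq

# List any column names that appear more than once in the file's schema.
schema = pq.read_schema("i2.parquet")  # hypothetical local path to the attached file
duplicates = [name for name, count in Counter(schema.names).items() if count > 1]
print(duplicates)  # expected to include '__filename' given the error above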
