
docarray's Introduction

The data structure for multimodal data


Note The README you're currently viewing is for DocArray>0.30, which introduces some significant changes from DocArray 0.21. If you wish to continue using the older DocArray <=0.21, ensure you install it via pip install docarray==0.21. Refer to its codebase, documentation, and its hot-fixes branch for more information.

DocArray is a Python library for the representation, transmission, storage, and retrieval of multimodal data. Tailored for the development of multimodal AI applications, it is designed to integrate seamlessly with the wider Python and machine learning ecosystems. As of January 2022, DocArray is distributed under the Apache License 2.0 and is a sandbox project within the LF AI & Data Foundation.

Installation

To install DocArray from the CLI, run the following command:

pip install -U docarray
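
If you plan to use a specific Document Index backend or other optional features, you will likely need extras. The extra names below are the ones used by recent DocArray releases; treat them as an assumption if your version differs:

pip install "docarray[hnswlib]"  # HNSWLib-based Document Index
pip install "docarray[full]"     # most optional dependencies at once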


Get Started

New to DocArray? Depending on your use case and background, there are multiple ways to learn about DocArray:

Represent

DocArray empowers you to represent your data in a manner that is inherently attuned to machine learning.

This is particularly beneficial for various scenarios:

  • πŸƒ You are training a model: You're dealing with tensors of varying shapes and sizes, each signifying different elements. You desire a method to logically organize them.
  • ☁️ You are serving a model: Let's say through FastAPI, and you wish to define your API endpoints precisely.
  • πŸ—‚οΈ You are parsing data: Perhaps for future deployment in your machine learning or data science projects.

πŸ’‘ Familiar with Pydantic? You'll be pleased to learn that DocArray is not only constructed atop Pydantic but also maintains complete compatibility with it! Furthermore, we have a specific section dedicated to your needs!

In essence, DocArray facilitates data representation in a way that mirrors Python dataclasses, with machine learning being an integral component:

from docarray import BaseDoc
from docarray.typing import TorchTensor, ImageUrl
import torch


# Define your data model
class MyDocument(BaseDoc):
    description: str
    image_url: ImageUrl  # could also be VideoUrl, AudioUrl, etc.
    image_tensor: TorchTensor[1704, 2272, 3]  # you can express tensor shapes!


# Stack multiple documents in a Document Vector
from docarray import DocVec

vec = DocVec[MyDocument](
    [
        MyDocument(
            description="A cat",
            image_url="https://example.com/cat.jpg",
            image_tensor=torch.rand(1704, 2272, 3),
        ),
    ]
    * 10
)
print(vec.image_tensor.shape)  # (10, 1704, 2272, 3)
Click for more details

Let's take a closer look at how you can represent your data with DocArray:

from docarray import BaseDoc
from docarray.typing import TorchTensor, ImageUrl
from typing import Optional
import torch


# Define your data model
class MyDocument(BaseDoc):
    description: str
    image_url: ImageUrl  # could also be VideoUrl, AudioUrl, etc.
    image_tensor: Optional[
        TorchTensor[1704, 2272, 3]
    ] = None  # could also be NdArray or TensorflowTensor
    embedding: Optional[TorchTensor] = None

So not only can you define the types of your data, you can even specify the shape of your tensors!

# Create a document
doc = MyDocument(
    description="This is a photo of a mountain",
    image_url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
)

# Load image tensor from URL
doc.image_tensor = doc.image_url.load()


# Compute embedding with any model of your choice
def clip_image_encoder(image_tensor: TorchTensor) -> TorchTensor:  # dummy function
    return torch.rand(512)


doc.embedding = clip_image_encoder(doc.image_tensor)

print(doc.embedding.shape)  # torch.Size([512])

Compose nested Documents

Of course, you can compose Documents into a nested structure:

from docarray import BaseDoc
from docarray.documents import ImageDoc, TextDoc
import numpy as np


class MultiModalDocument(BaseDoc):
    image_doc: ImageDoc
    text_doc: TextDoc


doc = MultiModalDocument(
    image_doc=ImageDoc(tensor=np.zeros((3, 224, 224))), text_doc=TextDoc(text='hi!')
)

You rarely work with a single data point at a time, especially in machine learning applications. That's why you can easily collect multiple Documents:

Collect multiple Documents

When building or interacting with an ML system, usually you want to process multiple Documents (data points) at once.

DocArray offers two data structures for this:

  • DocVec: A vector of Documents. All tensors in the documents are stacked into a single tensor. Perfect for batch processing and use inside of ML models.
  • DocList: A list of Documents. All tensors in the documents are kept as-is. Perfect for streaming, re-ranking, and shuffling of data.

Let's take a look at them, starting with DocVec:

from docarray import DocVec, BaseDoc
from docarray.typing import AnyTensor, ImageUrl
import numpy as np


class Image(BaseDoc):
    url: ImageUrl
    tensor: AnyTensor  # this allows torch, numpy, and TensorFlow tensors


vec = DocVec[Image](  # the DocVec is parametrized by your personal schema!
    [
        Image(
            url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
            tensor=np.zeros((3, 224, 224)),
        )
        for _ in range(100)
    ]
)

In the code snippet above, DocVec is parametrized by the type of document you want to use with it: DocVec[Image].

This may look weird at first, but we're confident that you'll get used to it quickly! Besides, it lets us do some cool things, like having bulk access to the fields that you defined in your document:

tensor = vec.tensor  # gets all the tensors in the DocVec
print(tensor.shape)  # which are stacked up into a single tensor!
print(vec.url)  # you can bulk access any other field, too

The second data structure, DocList, works in a similar way:

from docarray import DocList

dl = DocList[Image](  # the DocList is parametrized by your personal schema!
    [
        Image(
            url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
            tensor=np.zeros((3, 224, 224)),
        )
        for _ in range(100)
    ]
)

You can still bulk access the fields of your document:

tensors = dl.tensor  # gets all the tensors in the DocList
print(type(tensors))  # as a list of tensors
print(dl.url)  # you can bulk access any other field, too

And you can insert, remove, and append documents to your DocList:

# append
dl.append(
    Image(
        url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
        tensor=np.zeros((3, 224, 224)),
    )
)
# delete
del dl[0]
# insert
dl.insert(
    0,
    Image(
        url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
        tensor=np.zeros((3, 224, 224)),
    ),
)

And you can seamlessly switch between DocVec and DocList:

vec_2 = dl.to_doc_vec()
assert isinstance(vec_2, DocVec)

dl_2 = vec_2.to_doc_list()
assert isinstance(dl_2, DocList)

Send

DocArray facilitates the transmission of your data in a manner inherently compatible with machine learning.

This includes native support for Protobuf and gRPC, along with HTTP and serialization to JSON, JSONSchema, Base64, and Bytes.

This feature proves beneficial for several scenarios:

  • ☁️ You are serving a model, perhaps through frameworks like Jina or FastAPI
  • πŸ•ΈοΈ You are distributing your model across multiple machines and need an efficient means of transmitting your data between nodes
  • βš™οΈ You are architecting a microservice environment and require a method for data transmission between microservices

πŸ’‘ Are you familiar with FastAPI? You'll be delighted to learn that DocArray maintains full compatibility with FastAPI! Plus, we have a dedicated section specifically for you!

When it comes to data transmission, serialization is a crucial step. Let's delve into how DocArray streamlines this process:

from docarray import BaseDoc
from docarray.typing import ImageTorchTensor
import torch


# model your data
class MyDocument(BaseDoc):
    description: str
    image: ImageTorchTensor[3, 224, 224]


# create a Document
doc = MyDocument(
    description="This is a description",
    image=torch.zeros((3, 224, 224)),
)

# serialize it!
proto = doc.to_protobuf()
bytes_ = doc.to_bytes()
json = doc.json()

# deserialize it!
doc_2 = MyDocument.from_protobuf(proto)
doc_3 = MyDocument.from_bytes(bytes_)
doc_4 = MyDocument.parse_raw(json)
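
Collections serialize the same way. Here is a minimal sketch, assuming the MyDocument schema above (DocList exposes analogous helpers such as to_bytes/from_bytes in recent releases):

from docarray import DocList

docs = DocList[MyDocument](
    [
        MyDocument(description="a description", image=torch.zeros((3, 224, 224)))
        for _ in range(4)
    ]
)

bytes_docs = docs.to_bytes()  # serialize the whole collection
docs_2 = DocList[MyDocument].from_bytes(bytes_docs)  # and bring it back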

Of course, serialization is not all you need. So check out how DocArray integrates with Jina and FastAPI.

Store

After modeling and possibly distributing your data, you'll typically want to store it somewhere. That's where DocArray steps in!

Document Stores provide a seamless way to, as the name suggests, store your Documents. Be it locally or remotely, you can do it all through the same user interface:

  • πŸ’Ώ On disk, as a file in your local filesystem
  • πŸͺ£ On AWS S3
  • ☁️ On Jina AI Cloud

The Document Store interface lets you push and pull Documents to and from multiple data sources, all with the same user interface.

For example, let's see how that works with on-disk storage:

from docarray import BaseDoc, DocList


class SimpleDoc(BaseDoc):
    text: str


docs = DocList[SimpleDoc]([SimpleDoc(text=f'doc {i}') for i in range(8)])
docs.push('file://simple_docs')

docs_pull = DocList[SimpleDoc].pull('file://simple_docs')
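
The same API works for remote stores by switching the URI scheme. Here is a minimal sketch for S3 (the bucket name is a placeholder; this assumes AWS credentials are configured and the relevant extras are installed):

docs.push('s3://my-bucket/simple_docs')
docs_pull = DocList[SimpleDoc].pull('s3://my-bucket/simple_docs')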

Retrieve

Document Indexes let you index your Documents in a vector database for efficient similarity-based retrieval.

This is useful for:

  • πŸ—¨οΈ Augmenting LLMs and Chatbots with domain knowledge (Retrieval Augmented Generation)
  • πŸ” Neural search applications
  • πŸ’‘ Recommender systems

Currently, Document Indexes support Weaviate, Qdrant, ElasticSearch, Redis, and HNSWLib, with more to come!

The Document Index interface lets you index and retrieve Documents from multiple vector databases, all with the same user interface.

It supports ANN vector search, text search, filtering, and hybrid search.

from docarray import DocList, BaseDoc
from docarray.index import HnswDocumentIndex
import numpy as np

from docarray.typing import ImageUrl, ImageTensor, NdArray


class ImageDoc(BaseDoc):
    url: ImageUrl
    tensor: ImageTensor
    embedding: NdArray[128]


# create some data
dl = DocList[ImageDoc](
    [
        ImageDoc(
            url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
            tensor=np.zeros((3, 224, 224)),
            embedding=np.random.random((128,)),
        )
        for _ in range(100)
    ]
)

# create a Document Index
index = HnswDocumentIndex[ImageDoc](work_dir='/tmp/test_index')


# index your data
index.index(dl)

# find similar Documents
query = dl[0]
results, scores = index.find(query, limit=10, search_field='embedding')
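
You can also run several queries in one call. Here is a minimal sketch using find_batched, which recent Document Index versions expose alongside find:

queries = dl[:3]  # a DocList of query documents
matches, scores = index.find_batched(queries, limit=5, search_field='embedding')
# matches and scores are returned per query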

Learn DocArray

Depending on your background and use case, there are different ways for you to understand DocArray.

Coming from DocArray <=0.21

Click to expand

If you are using DocArray version 0.21 or lower, you will be familiar with its dataclass API.

DocArray >=0.30 is that idea, taken seriously. Every document is created through a dataclass-like interface, courtesy of Pydantic.

This gives the following advantages:

  • Flexibility: No need to conform to a fixed set of fields -- your data defines the schema
  • Language agnostic: At their core, documents are just dictionaries. This makes it easy to create and send them from any language, not just Python.

You may also be familiar with our old Document Stores for vector DB integration. They are now called Document Indexes and offer the following improvements (see here for the new API):

  • Hybrid search: You can now combine vector search with text search, and even filter by arbitrary fields
  • Production-ready: The new Document Indexes are a much thinner wrapper around the various vector DB libraries, making them more robust and easier to maintain
  • Increased flexibility: We strive to support any configuration or setting that you could perform through the DB's first-party client

For now, Document Indexes support Weaviate, Qdrant, ElasticSearch, Redis, Exact Nearest Neighbour search and HNSWLib, with more to come.

Coming from Pydantic

Click to expand

If you come from Pydantic, you can see DocArray documents as juiced up Pydantic models, and DocArray as a collection of goodies around them.

More specifically, we set out to make Pydantic fit for the ML world - not by replacing it, but by building on top of it!

This means you get the following benefits:

  • ML-focused types: Tensor, TorchTensor, Embedding, ..., including tensor shape validation
  • Full compatibility with FastAPI
  • DocList and DocVec generalize the idea of a model to a sequence or batch of models. Perfect for use in ML models and other batch processing tasks.
  • Types that are alive: ImageUrl can .load() a URL to image tensor, TextUrl can load and tokenize text documents, etc.
  • Cloud-ready: Serialization to Protobuf for use with microservices and gRPC
  • Pre-built multimodal documents for different data modalities: Image, Text, 3DMesh, Video, Audio and more. Note that all of these are valid Pydantic models!
  • Document Stores and Document Indexes let you store your data and retrieve it using vector search

The most obvious advantage here is first-class support for ML centric data, such as {Torch, TF, ...}Tensor, Embedding, etc.

This includes handy features such as validating the shape of a tensor:

from docarray import BaseDoc
from docarray.typing import TorchTensor
import torch


class MyDoc(BaseDoc):
    tensor: TorchTensor[3, 224, 224]


doc = MyDoc(tensor=torch.zeros(3, 224, 224))  # works
doc = MyDoc(tensor=torch.zeros(224, 224, 3))  # works by reshaping

try:
    doc = MyDoc(tensor=torch.zeros(224))  # fails validation
except Exception as e:
    print(e)
    # tensor
    # Cannot reshape tensor of shape (224,) to shape (3, 224, 224) (type=value_error)


class Image(BaseDoc):
    tensor: TorchTensor[3, 'x', 'x']


Image(tensor=torch.zeros(3, 224, 224))  # works

try:
    Image(
        tensor=torch.zeros(3, 64, 128)
    )  # fails validation because the second and third dimensions don't match
except Exception as e:
    print(e)
    # Tensor shape mismatch. Expected (3, 'x', 'x'), got (3, 64, 128) (type=value_error)


try:
    Image(
        tensor=torch.zeros(4, 224, 224)
    )  # fails validation because of the first dimension
except Exception as e:
    print(e)
    # Tensor shape mismatch. Expected (3, 'x', 'x'), got (4, 224, 224) (type=value_error)

try:
    Image(
        tensor=torch.zeros(3, 64)
    )  # fails validation because it does not have enough dimensions
except Exception as e:
    print(e)
    # Tensor shape mismatch. Expected (3, 'x', 'x'), got (3, 64) (type=value_error)

Coming from PyTorch

Click to expand

If you come from PyTorch, you can see DocArray mainly as a way of organizing your data as it flows through your model.

It offers you several advantages:

  • Express tensor shapes in type hints
  • Group tensors that belong to the same object, e.g. an audio track and an image
  • Go directly to deployment, by re-using your data model as a FastAPI or Jina API schema
  • Connect model components between microservices, using Protobuf and gRPC

DocArray can be used directly inside ML models to handle and represent multimodal data. This allows you to reason about your data using DocArray's abstractions deep inside of nn.Module, and provides a FastAPI-compatible schema that eases the transition between model training and model serving.

To see the effect of this, let's first observe a vanilla PyTorch implementation of a tri-modal ML model:

import torch
from torch import nn


def encoder(x):
    return torch.rand(512)


class MyMultiModalModel(nn.Module):
    def __init__(self):
        super().__init__()
        # assign the (dummy) encoder functions; in practice these would be nn.Modules
        self.audio_encoder = encoder
        self.image_encoder = encoder
        self.text_encoder = encoder

    def forward(self, text_1, text_2, image_1, image_2, audio_1, audio_2):
        embedding_text_1 = self.text_encoder(text_1)
        embedding_text_2 = self.text_encoder(text_2)

        embedding_image_1 = self.image_encoder(image_1)
        embedding_image_2 = self.image_encoder(image_2)

        embedding_audio_1 = self.audio_encoder(audio_1)
        embedding_audio_2 = self.audio_encoder(audio_2)

        return (
            embedding_text_1,
            embedding_text_2,
            embedding_image_1,
            embedding_image_2,
            embedding_audio_1,
            embedding_audio_2,
        )

Not very easy on the eyes if you ask us. And even worse, if you need to add one more modality you have to touch every part of your code base, changing the forward() return type and making a whole lot of changes downstream from that.

So, now let's see what the same code looks like with DocArray:

from docarray import DocList, BaseDoc
from docarray.documents import ImageDoc, TextDoc, AudioDoc
from docarray.typing import TorchTensor
from torch import nn
import torch


def encoder(x):
    return torch.rand(512)


class Podcast(BaseDoc):
    text: TextDoc
    image: ImageDoc
    audio: AudioDoc


class PairPodcast(BaseDoc):
    left: Podcast
    right: Podcast


class MyPodcastModel(nn.Module):
    def __init__(self):
        super().__init__()
        # assign the (dummy) encoder functions; in practice these would be nn.Modules
        self.audio_encoder = encoder
        self.image_encoder = encoder
        self.text_encoder = encoder

    def forward_podcast(self, docs: DocList[Podcast]) -> DocList[Podcast]:
        docs.audio.embedding = self.audio_encoder(docs.audio.tensor)
        docs.text.embedding = self.text_encoder(docs.text.tensor)
        docs.image.embedding = self.image_encoder(docs.image.tensor)

        return docs

    def forward(self, docs: DocList[PairPodcast]) -> DocList[PairPodcast]:
        docs.left = self.forward_podcast(docs.left)
        docs.right = self.forward_podcast(docs.right)

        return docs

Looks much better, doesn't it? You instantly win in code readability and maintainability. And for the same price you can turn your PyTorch model into a FastAPI app and reuse your Document schema definition (see below). Everything is handled in a pythonic manner by relying on type hints.

Coming from TensorFlow

Click to expand

Like the PyTorch approach, you can also use DocArray with TensorFlow to handle and represent multimodal data inside your ML model.

First off, to use DocArray with TensorFlow, you need to install it as follows:

pip install tensorflow==2.12.0
pip install protobuf==3.19.0

Compared to using DocArray with PyTorch, there is one main difference when using it with TensorFlow: While DocArray's TorchTensor is a subclass of torch.Tensor, this is not the case for the TensorFlowTensor: Due to some technical limitations of tf.Tensor, DocArray's TensorFlowTensor is not a subclass of tf.Tensor but rather stores a tf.Tensor in its .tensor attribute.

How does this affect you? Whenever you want to access the tensor data to, let's say, do operations with it or hand it to your ML model, instead of handing over your TensorFlowTensor instance, you need to access its .tensor attribute.

This would look like the following:

from typing import Optional

from docarray import DocList, BaseDoc
from docarray.typing import AudioTensorFlowTensor

import tensorflow as tf


class Podcast(BaseDoc):
    audio_tensor: Optional[AudioTensorFlowTensor] = None
    embedding: Optional[AudioTensorFlowTensor] = None


class MyPodcastModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.audio_encoder = AudioEncoder()  # placeholder for your audio encoder

    def call(self, inputs: DocList[Podcast]) -> DocList[Podcast]:
        inputs.embedding = self.audio_encoder(
            inputs.audio_tensor.tensor
        )  # access audio_tensor's .tensor attribute
        return inputs
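
The same unwrap step applies outside the model. Here is a minimal sketch, assuming the imports and Podcast schema above:

doc = Podcast(audio_tensor=tf.zeros((1, 16000)))
raw = doc.audio_tensor.tensor  # the underlying tf.Tensor
louder = tf.math.multiply(raw, 2.0)  # now usable like any tf.Tensor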

Coming from FastAPI

Click to expand

Documents are Pydantic Models (with a twist), and as such they are fully compatible with FastAPI!

But why should you use them, and not the Pydantic models you already know and love? Good question!

  • Because of the ML-first features, types, and validations described above
  • Because DocArray can act as an ORM for vector databases, similar to what SQLModel does for SQL databases

And to seal the deal, let us show you how easily documents slot into your FastAPI app:

import numpy as np
from fastapi import FastAPI
from httpx import AsyncClient  # used in the test snippet below
from docarray.base_doc import DocArrayResponse
from docarray import BaseDoc
from docarray.documents import ImageDoc
from docarray.typing import NdArray, ImageTensor


class InputDoc(BaseDoc):
    img: ImageDoc
    text: str


class OutputDoc(BaseDoc):
    embedding_clip: NdArray
    embedding_bert: NdArray


app = FastAPI()


def model_img(img: ImageTensor) -> NdArray:
    return np.zeros((100, 1))


def model_text(text: str) -> NdArray:
    return np.zeros((100, 1))


@app.post("/embed/", response_model=OutputDoc, response_class=DocArrayResponse)
async def create_item(doc: InputDoc) -> OutputDoc:
    doc = OutputDoc(
        embedding_clip=model_img(doc.img.tensor), embedding_bert=model_text(doc.text)
    )
    return doc


input_doc = InputDoc(text='', img=ImageDoc(tensor=np.random.random((3, 224, 224))))

# run inside an async context, e.g. an async test function:
async with AsyncClient(app=app, base_url="http://test") as ac:
    response = await ac.post("/embed/", data=input_doc.json())

Just like a vanilla Pydantic model!

Coming from Jina

Click to expand

Jina has adopted DocArray as its library for representing and serializing Documents.

Jina lets you serve and scale models and services built with DocArray, making full use of DocArray's serialization capabilities.

import numpy as np
from jina import Deployment, Executor, requests
from docarray import BaseDoc, DocList
from docarray.documents import ImageDoc
from docarray.typing import NdArray, ImageTensor


class InputDoc(BaseDoc):
    img: ImageDoc
    text: str


class OutputDoc(BaseDoc):
    embedding_clip: NdArray
    embedding_bert: NdArray


def model_img(img: ImageTensor) -> NdArray:
    return np.zeros((100, 1))


def model_text(text: str) -> NdArray:
    return np.zeros((100, 1))


class MyEmbeddingExecutor(Executor):
    @requests(on='/embed')
    def encode(self, docs: DocList[InputDoc], **kwargs) -> DocList[OutputDoc]:
        ret = DocList[OutputDoc]()
        for doc in docs:
            output = OutputDoc(
                embedding_clip=model_img(doc.img.tensor),
                embedding_bert=model_text(doc.text),
            )
            ret.append(output)
        return ret


with Deployment(
    protocols=['grpc', 'http'], ports=[12345, 12346], uses=MyEmbeddingExecutor
) as dep:
    resp = dep.post(
        on='/embed',
        inputs=DocList[InputDoc](
            [InputDoc(text='', img=ImageDoc(tensor=np.random.random((3, 224, 224))))]
        ),
        return_type=DocList[OutputDoc],
    )
    print(resp)

Coming from a vector database

Click to expand

If you came across DocArray as a universal vector database client, you can best think of it as a new kind of ORM for vector databases. DocArray's job is to take multimodal, nested and domain-specific data and to map it to a vector database, store it there, and thus make it searchable:

from docarray import DocList, BaseDoc
from docarray.index import HnswDocumentIndex
import numpy as np

from docarray.typing import ImageUrl, ImageTensor, NdArray


class ImageDoc(BaseDoc):
    url: ImageUrl
    tensor: ImageTensor
    embedding: NdArray[128]


# create some data
dl = DocList[ImageDoc](
    [
        ImageDoc(
            url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
            tensor=np.zeros((3, 224, 224)),
            embedding=np.random.random((128,)),
        )
        for _ in range(100)
    ]
)

# create a Document Index
index = HnswDocumentIndex[ImageDoc](work_dir='/tmp/test_index2')


# index your data
index.index(dl)

# find similar Documents
query = dl[0]
results, scores = index.find(query, limit=10, search_field='embedding')

Currently, DocArray supports the following vector databases: Weaviate, Qdrant, Elasticsearch, Redis, and HNSWLib, plus in-memory exact nearest-neighbour search.

An integration of OpenSearch is currently in progress.

Of course this is only one of the things that DocArray can do, so we encourage you to check out the rest of this readme!

Coming from Langchain

Click to expand

With DocArray, you can connect external data to LLMs through Langchain. DocArray gives you the freedom to establish flexible document schemas and choose from different backends for document storage. After creating your document index, you can connect it to your Langchain app using DocArrayRetriever.

Install Langchain via:

pip install langchain

1. Define a schema and create documents:

from docarray import BaseDoc, DocList
from docarray.typing import NdArray
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Define a document schema
class MovieDoc(BaseDoc):
    title: str
    description: str
    year: int
    embedding: NdArray[1536]


movies = [
    {"title": "#1 title", "description": "#1 description", "year": 1999},
    {"title": "#2 title", "description": "#2 description", "year": 2001},
]

# Embed `description` and create documents
docs = DocList[MovieDoc](
    MovieDoc(embedding=embeddings.embed_query(movie["description"]), **movie)
    for movie in movies
)

2. Initialize a document index using any supported backend:

from docarray.index import (
    InMemoryExactNNIndex,
    HnswDocumentIndex,
    WeaviateDocumentIndex,
    QdrantDocumentIndex,
    ElasticDocIndex,
    RedisDocumentIndex,
)

# Select a suitable backend and initialize it with data
db = InMemoryExactNNIndex[MovieDoc](docs)

3. Finally, initialize a retriever and integrate it into your chain!

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.retrievers import DocArrayRetriever


# Create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="embedding",
    content_field="description",
)

# Use the retriever in your chain
model = ChatOpenAI()
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)

Alternatively, you can use built-in vector stores. Langchain supports two vector stores: DocArrayInMemorySearch and DocArrayHnswSearch. Both are user-friendly and are best suited to small to medium-sized datasets.
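
For example, here is a minimal sketch with DocArrayInMemorySearch. It uses the standard VectorStore constructor and assumes an OpenAI API key is configured for the embeddings:

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch

texts = ["DocArray handles multimodal data", "LangChain chains LLM calls"]
db = DocArrayInMemorySearch.from_texts(texts, OpenAIEmbeddings())
docs = db.similarity_search("What is DocArray?", k=1)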


DocArray is a trademark of LF AI Projects, LLC

docarray's People

Contributors

agaraman0, alaeddine-13, alexcg1, alphinside, anna-charlotte, anneyang720, azayz, bwanglzu, cristianmtr, davidbp, delgermurun, dependabot[bot], dongxiang123, generall, guenthermi, hanxiao, hsm207, jackmin801, jina-bot, joanfm, johannesmessner, jupyterjazz, makram93, mapleeit, maxwelljin, nan-wang, numb3r3, punndcoder28, samsja, winstonww


docarray's Issues

Bug: Unexpected `da[0]` not present in elastic backend after re-connecting

Bug

import numpy as np

from docarray import DocumentArray, Document

storage = 'elastic'
da = DocumentArray(
    storage=storage,
    config={'index_name': 'old_data0', 'n_dim': 128},
    persist=False
)

da.extend([Document(embedding=np.random.random(128)) for _ in range(1000)])
print(da[0].embedding)


da2 = DocumentArray(
    storage=storage,
    config={'index_name': 'old_data0', 'n_dim': 128},
)

da2.summary()
print(da2[0].embedding)

`match` overwrites scores across queries

The example from the documentation does not work as expected. It creates wrong results:

query emb = [1 1 1 1 0]
match emb = [1.  1.2 1.  1.  0. ] score = 1.2806248474865694
match emb = [1.  2.2 2.  1.  0. ] score = 0.19999999999999787
match emb = [1.  0.1 0.  0.  0. ] score = 2.9342801502242417

The reason is that if only_id is set to False (which is the default), the same in-memory objects are reused when a Document happens to be a match for two queries, so scores get overwritten.
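
For illustration, a minimal sketch of the failure pattern described (docarray <=0.21 API; the embeddings are arbitrary):

import numpy as np
from docarray import Document, DocumentArray

index = DocumentArray([Document(embedding=np.random.rand(5)) for _ in range(3)])
queries = DocumentArray([Document(embedding=np.random.rand(5)) for _ in range(2)])

queries.match(index, limit=3)
# with only_id=False, both queries' match lists reference the same Document
# objects from `index`, so a match shared between queries keeps only the
# score assigned last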

Consider xarray and/or built-in "array metadata" support?

Hi, this library is very close to what I've been thinking about for quite a while now: how to standardize data types for ML use. I haven't dug too much into it, besides the docs, but I already love it (especially Pydantic support).

However, one thing that I feel is missing in mainstream ML is a "named tensor" type, which is actually just an ndarray/tensor with semantic metadata. The minimum metadata is dimensions of the data, e.g. "feature", "height/width/RGB", "date (time series)", "MCMC draw" and so on. On top of this, coordinates let you assign semantic meaning to each index of a dimension (RGB or BRG or HSV? what dates? what latitude/longitude?). xarray is an ndarray that wraps Numpy (or other arrays, including zarr, I believe) with Pandas indices (and more...) and supports this kind of rich metadata.

Since DocArray is "early in life", would you consider having this sort of metadata in the library, or would it be way outside of scope? One way would be to just support xarray as a possible field type, though the serialization would need adjustment. The second way would be to natively support such metadata (which is quite a chore, but would work with Torch/TF/etc tensor types). In the native support version, it seems it could be done by having structured hierarchies of Document with just additional checking... I'll have to play around with that.

FWIW, I briefly thought the name meant "documented array" and got excited (still am, but for a different reason πŸ˜‹).
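
For illustration, a minimal xarray sketch of the kind of named-dimension metadata described above:

import numpy as np
import xarray as xr

# dimensions and coordinates carry semantic meaning alongside the raw values
image = xr.DataArray(
    np.zeros((3, 224, 224)),
    dims=("channel", "height", "width"),
    coords={"channel": ["R", "G", "B"]},
)
print(image.sel(channel="R").shape)  # (224, 224)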

🚏 Roadmap

Before 1.0, all new features and breaking changes will only bump the minor version. That also means the list below only mentions the features/breaking changes planned for each minor version. Patch releases are triggered whenever a bug is fixed, so they are not tracked or planned here in advance; bugs solved in patch releases can be tracked in the CHANGELOG.

0.13:

on 20.04.2022

  • an example and documentation about using ES backend for traditional text search, e.g. works on .text and .tags
  • Benchmark 6 ops: C,R,U,D, find by NN, find by conditions on all backends we have. No CICD, no integration, just
    • a table of 6 rows and 6 columns in QPS,
    • details about the benchmark machine setup, CPU, core, Python version
    • a reproducible script in the repo that can run as-is.
    • put the benchmark table in https://docarray.jina.ai/advanced/document-store/ and link to the script

0.12

on 07.04.2022

  • Documents the dataclass

0.11

on 01.04.2022

0.10

on 28.03.2022

  • Dataclass to represent multimodal document with free schema. (@alaeddine-13 ) (#188 ) (intentionally undocumented!)
  • ES support (@davidbp @winstonww) (#207 )
  • Publish .find interface by documenting it: refactoring if needed and adding docs (@numb3r3 ) #197

0.9:

on 08.03.2022

  • Solve the poll-out column such as embedding and vector at DocumentArray level (#161 )
    • Update all DocumentStore backend to the latest. P,Q,W,S (depends on the last one)
  • #156
  • #157 (decided to drop this feature after discussion)
  • PQlite -> ANNLite renaming (#151 )
  • adding lookuppy-based QL to the .find interface (#150)

0.8

on 25.02.2022

  • find,match interface #93
  • add & support serialization protocol via filename suffix #128
  • Documentations for the above

Support JPG for convert_image_tensor_to_uri

convert_image_tensor_to_uri is useful when you return Documents to a frontend. The data URI can be used directly to visualize the document.
However, sending PNGs is not acceptable since it leads to serious network issues. Moreover, it is much more memory-efficient to store JPEGs in an indexer than PNGs.
Please let convert_image_tensor_to_uri support JPEG as well.
Introduce two parameters:
format (default=png) and quality (default=0.9)

Problem with scores, evaluations and serializations

I have found a weird problem, look at this behavior:

from docarray import Document, DocumentArray
d = Document()
d.evaluations['cosine'] = 10
d.to_protobuf()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/joan/.local/lib/python3.7/site-packages/docarray/document/mixins/protobuf.py", line 18, in to_protobuf
    return flush_proto(self)
  File "/home/joan/.local/lib/python3.7/site-packages/docarray/proto/io/__init__.py", line 55, in flush_proto
    for ff in vv.non_empty_fields:
AttributeError: ('Field `evaluations` is problematic', "'int' object has no attribute 'non_empty_fields'")
from docarray import Document, DocumentArray
d2 = Document()
d2.evaluations['cosine'].value = 10
d2.to_protobuf()
evaluations {
  key: "cosine"
  value {
    value: 10.0
  }
}

Creating a document with blob data "drops" text data

As far as I can tell, the behaviour below is not intended. If you initialise a Document with both blob and text data, then the text data is dropped.

d = Document(text="test")
d.text

>>> "test"
d = Document(text="test", blob=[1, 2, 3])
d.text

>>> ""

deleting elements using attribute indexing yields counterintuitive results

If you initialise an array with the below code

da = DocumentArray()

for test_tag in ["a", "a", "b"]:
    d = Document(tags={"test_tag": test_tag})
    da.append(d)

Then running

del da[0]
da[:, "tags__test_tag"]

Gives ["a", "b"] as expected

But if you instead do

del da[np.array(da[:, "tags__test_tag"]) == "a"]
da[:, "tags__test_tag"]

Then you get ["a", "a"] but one would expect to get ["b"]

speed up setting docs by traversal paths

The current implementation of setting docs by traversal paths is not efficient because we have to retrieve the root documents from the storage backend.
A better implementation would be to traverse the documents from the storage backend and return the root document each time along with its child. That way, the child is already referenced by the parent and we only need to modify the child and persist the parent.
New, non-debugged code:

def _gen_children_parent_pairs(docs_gen, parent):
    for children in docs_gen:
        yield children, parent

def _gen_children_from_pairs(pair_gen):
    if isinstance(pair_gen, Generator):
        for children, _ in pair_gen:
            yield _gen_children_from_pairs(children)
    elif isinstance(pair_gen, tuple):
        children, _ = pair_gen
        yield children

# the following method would live on TraverseMixin:
    @staticmethod
    def _traverse(
        docs: 'T',
        path: str,
        filter_fn: Optional[Callable[['Document'], bool]] = None,
        parent: Optional['Document'] = None
    ):
        path = re.sub(r'\s+', '', path)
        if path:
            cur_loc, cur_slice, _left = _parse_path_string(path)
            if cur_loc == 'r':
                yield from _gen_children_parent_pairs(TraverseMixin._traverse(
                    docs[cur_slice], _left, filter_fn=filter_fn,
                ), parent)
            elif cur_loc == 'm':
                for d in docs:
                    yield from _gen_children_parent_pairs(TraverseMixin._traverse(
                        d.matches[cur_slice], _left, filter_fn=filter_fn, parent=d
                    ), parent)
            elif cur_loc == 'c':
                for d in docs:
                    yield from _gen_children_parent_pairs(TraverseMixin._traverse(
                        d.chunks[cur_slice], _left, filter_fn=filter_fn, parent=d
                    ), parent)
            else:
                raise ValueError(
                    f'`path`:{path} is invalid, please refer to https://docarray.jina.ai/fundamentals/documentarray/access-elements/#index-by-nested-structure'
                )
        elif filter_fn is None:
            yield docs, parent
        else:
            from .. import DocumentArray

            yield DocumentArray(list(filter(filter_fn, docs))), parent

Bug in to_pydantic_model/from_pydantic_model

Problem description:

The class methods to_pydantic_model() / from_pydantic_model() do not appear to be inverses of each other.

How to reproduce:

>>> Document.from_pydantic_model(Document(blob=b'hello').to_pydantic_model())
binascii.Error: Invalid base64-encoded string: number of data characters (5) cannot be 1 more than a multiple of 4

Seems like this part of the conversion (in the from_pydantic_model() method) might be based on an assumption that does not always hold:

elif f_name == 'blob':
    # here is a dirty fishy itchy trick
    # the original bytes will be encoded two times:
    # first time is real during `to_dict/to_json`, it converts into base64 string
    # second time is at `from_dict/from_json`, it is unnecessary yet inevitable, the result string get
    # converted into a binary string and encoded again.
    # consequently, we need to decode two times here!
    fields[f_name] = base64.b64decode(base64.b64decode(value))

My guess is that the blob is actually only encoded once, but it tries to decode twice, giving the error above.

Related Issue

The .dict() method of PydanticDocument does not implement any decoding of blob, but I think it should:

>>> d = Document(blob=b'hello')
>>> d.blob
b'hello'
>>> d_pyd = d.to_pydantic_model()  # encodes the blob
>>> d_pyd.blob
'aGVsbG8='
>>> d_pyd.dict()['blob']  # I think this should decode the blob and return b'hello'
'aGVsbG8='

@JoanFM I believe that if this behaviour of .dict() is fixed then the trick in the core of creating a DocumentArray becomes unnecessary.

Add protocol and compression info to DocumentArray filename

When storing a DocumentArray, a protocol and a compression algorithm are used. This means users need to remember how they serialized a DocumentArray to be able to load it back from binary.

We can add this information to the filename of the serialized DocumentArray and use it when loading from binary.
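
A hedged sketch of what this could look like. save_binary/load_binary and their protocol/compress parameters exist today; inferring them from the filename suffix is the proposal, not current behavior:

from docarray import DocumentArray

da = DocumentArray.empty(10)

# today the user must remember both settings to load the file back:
da.save_binary('docs.bin', protocol='protobuf', compress='gzip')
da2 = DocumentArray.load_binary('docs.bin', protocol='protobuf', compress='gzip')

# proposed: encode them in the filename, e.g. 'docs.protobuf.gzip',
# and infer them automatically when loading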

cannot import name 'schema_json_of' from 'pydantic'

We need to explicitly add the dependency pydantic>=1.9.0 in setup.py. With pydantic==1.8.2, the following error is thrown:

$ pytest tests/unit/test_pydantic.py
cls = <class 'docarray.array.document.DocumentArray'>, indent = 2

    @classmethod
    def get_json_schema(cls, indent: int = 2) -> str:
        """Return a JSON Schema of DocumentArray class."""
>       from pydantic import schema_json_of
E       ImportError: cannot import name 'schema_json_of' from 'pydantic' (/Users/fengwang/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pydantic/__init__.cpython-38-darwin.so)

docarray/array/mixins/pydantic.py:13: ImportError

problem with to_dict when matches have scores

Reproduce

>>> import numpy as np; from docarray import DocumentArray, Document; d = Document(); d.embedding = np.random.random([2, 2])
>>> matches = DocumentArray()
>>> matches.append(Document())
>>> matches[0].scores['relevance'] = 10
>>> matches
<DocumentArray (length=1) at 140351942587856>
>>> matches[0]
<Document ('id', 'scores') at 10d13f947a0011ec80695076afa7ec3e>
>>> d.matches = matches
>>> d.to_dict()

Got a TypeError when trying to use DocumentArray.find

Describe the bug

Hi, I tried to use the DocumentArray.find method to search for tags (like below, from the official documentation):

Code in official document:
r = da.find({'tags__h': {'$gt': 10}})

My code:
result = total_doc.find({'tags__asin': {'$eq': 11300000.0}})

However, I got

raise TypeError(f'can not determine the array type: {module_tags}.{class_name}')
TypeError: can not determine the array type: ['builtins'].dict

Is it because of my usage of DocumentArray stored in sqlite? If so, is there any other approach suitable for me to find the tag in my DocumentArray?

The detailed information is attached below.

Thank you!

My Code
process_data.py

import os
import json
from docarray import Document, DocumentArray
import gzip
from tqdm import tqdm

data = []
with gzip.open('meta_Electronics.json.gz') as f:
    for l in f:
        data.append(json.loads(l.strip()))

docarr = DocumentArray.empty(0)
for product in tqdm(data):
    d = Document(tags={
        'asin': product['asin'],
        'title': product['title'] if 'title' in product else '',
        'feature': product['feature'] if 'feature' in product else '',
        'description': product['description'] if 'description' in product else '',
        'price': product['price'] if 'price' in product else '',
        'imageURL': product['imageURL'] if 'imageURL' in product else '',
        'imageURLHighRes': product['imageURLHighRes'] if 'imageURLHighRes' in product else '',
        'also_buy': product['also_buy'] if 'also_buy' in product else '',
        'also_viewed': product['also_viewed'] if 'also_viewed' in product else '',
        'salesRank': product['salesRank'] if 'salesRank' in product else '',
        'brand': product['brand'] if 'brand' in product else '',
        'categories': product['categories'] if 'categories' in product else '',
    }
    )
    docarr.append(d)

total_doc = DocumentArray(
    storage='sqlite',
    config={
        'connection': './generate_data/meta_data.db',
        'table_name': 'meta_data',
    },
)
total_doc.extend(docarr)
print(total_doc[0].to_dict(exclude_none=True))
result = total_doc.find({'tags__asin': {'$eq': 11300000.0}})
print(result[0].to_dict(exclude_none=True))


Environment

- protobuf 3.19.1
- proto-backend cpp
- grpcio 1.38.1
- pyyaml 5.4.1
- python 3.7.11
- platform Linux
- platform-release 5.13.0-27-generic
- platform-version #29~20.04.1-Ubuntu SMP Fri Jan 14 00:32:30 UTC 2022
- architecture x86_64
- processor x86_64
- uid 112144060752112
- session-id d8c37ff9-a673-11ec-aa9c-65fe92e7a0f0
- uptime 2022-03-18T13:28:22.927735
- ci-vendor (unset)
* JINA_ARRAY_QUANT (unset)
* JINA_CONTROL_PORT (unset)
* JINA_DEFAULT_HOST (unset)
* JINA_DEFAULT_TIMEOUT_CTRL (unset)
* JINA_DISABLE_UVLOOP (unset)
* JINA_FULL_CLI (unset)
* JINA_HUBBLE_REGISTRY (unset)
* JINA_HUB_CACHE_DIR (unset) 
* JINA_HUB_ROOT (unset)
* JINA_K8S_USE_TEST_PIP (unset)
* JINA_LOG_CONFIG (unset)
* JINA_LOG_LEVEL (unset)
* JINA_LOG_NO_COLOR (unset)
* JINA_LOG_WORKSPACE (unset) 
* JINA_MP_START_METHOD (unset)
* JINA_OPTIMIZER_TRIAL_WORKSPACE(unset)
* JINA_DEPLOYMENT_NAME (unset)
* JINA_RANDOM_PORT_MAX (unset)
* JINA_RANDOM_PORT_MIN (unset)
* JINA_VCS_VERSION (unset)

Screenshots

python process_data.py 
{'id': '5d8cacb6a66911ecaa9865fe92e7a0f0', 'tags': {'asin': 11300000.0, 'title': 'Genuine Geovision 1 Channel 3rd Party NVR IP Software with USB Dongle Onvif PSIA', 'feature': ['Genuine Geovision 1 Channel NVR IP Software', 'Support 3rd Party IP Camera', 'USB Dongle'], 'description': ['The following camera brands and models have been tested for compatibility with GV-Software.\nGeoVision \tACTi \tArecont Vision \tAXIS \tBosch \tCanon\nCNB \tD-Link \tEtroVision \tHikVision \tHUNT \tIQEye\nJVC \tLG \tMOBOTIX \tPanasonic \tPelco \tSamsung\nSanyo \tSony \tUDP \tVerint \tVIVOTEK \t \n \nCompatible Standard and Protocol\nGV-System also allows for integration with all other IP video devices compatible with ONVIF(V2.0), PSIA (V1.1) standards, or RTSP protocol.\nONVIF \tPSIA \tRTSP \t  \t  \t \nNote: Specifications are subject to change without notice. Every effort has been made to ensure that the information on this Web site is accurate. No liability is assumed for incidental or consequential damages arising from the use of the information or products contained herein.'], 'price': '$65.00', 'imageURL': ['https://images-na.ssl-images-amazon.com/images/I/411uoWa89KL._SS40_.jpg'], 'imageURLHighRes': ['https://images-na.ssl-images-amazon.com/images/I/411uoWa89KL.jpg'], 'also_buy': [], 'also_viewed': '', 'salesRank': '', 'brand': 'GeoVision', 'categories': ''}}
Traceback (most recent call last):
  File "process_data.py", line 42, in <module>
    result = total_doc.find({'tags__asin': {'$eq': 11300000.0}})
  File "/home/yubo/anaconda3/envs/jina/lib/python3.7/site-packages/docarray/array/mixins/find.py", line 82, in find
    _, _ = ndarray.get_array_type(_query)
  File "/home/yubo/anaconda3/envs/jina/lib/python3.7/site-packages/docarray/math/ndarray.py", line 124, in get_array_type
    raise TypeError(f'can not determine the array type: {module_tags}.{class_name}')
TypeError: can not determine the array type: ['builtins'].dict

cannot update docs via slices at some cases

The slice-based updating API has some bugs and cannot update the value in the special case shown below:

from docarray import DocumentArray, Document

da = DocumentArray([Document(id=f'id_{i}', text=f'doc {i}') for i in range(5)])

da.append(Document(id=f'id_4', text='doc 4_1'))

assert da['id_4'].text == 'doc 4_1'

da[4:5] = DocumentArray(Document(id=f'id_4', text='doc 4_2'))

print(da.get_attributes('text'))

assert da['id_4'].text == 'doc 4_2'

yields

Traceback (most recent call last):
  File "/Users/fengwang/Jina/workspace/docarray/toy1.py", line 7, in <module>
    assert da['id_4'].text == 'doc 4_2'
AssertionError
['doc 0', 'doc 1', 'doc 2', 'doc 3', 'doc 4_2', 'doc 4_1']

Note: this issue is related to the implementation/usage of _rebuild_id2offset

switch to list container for DocumentArrayInMemory

Revert to a list container instead of a dict for DocumentArrayInMemory, while keeping the offset2ids implementation for storage backends.
This should fix the issue with the @ selector on matches and allow conflicting IDs within the same DocumentArrayInMemory.

Feature hashing fails with float strings `inf` and `nan`

To reproduce:

>>> from docarray import Document
>>> x = Document(text="to infinity and beyond")
>>> x.embed_feature_hashing()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "~/docarray-env/docarray/docarray/document/mixins/featurehash.py", line 40, in embed_feature_hashing
    _hash_column(f_id, val, n_dim, max_value, idxs, data, table)
  File "~/docarray-env/docarray/docarray/document/mixins/featurehash.py", line 63, in _hash_column
    table[col] += np.sign(h) * col_val
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices

PydanticDocument cannot work with document tags

The following code illustrates the problem:

from docarray import Document

doc = Document(tags={'a': [{'b': 1}]})
doc.to_pydantic_model()

gives

Traceback (most recent call last):
  File "/Users/fengwang/Jina/workspace/docarray/toy13.py", line 4, in <module>
    doc.to_pydantic_model()
  File "/Users/fengwang/Jina/workspace/docarray/docarray/document/mixins/pydantic.py", line 38, in to_pydantic_model
    return DP(**_p_dict)
  File "pydantic/main.py", line 406, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 7 validation errors for PydanticDocument
tags -> a
  value could not be parsed to a boolean (type=type_error.bool)
tags -> a
  value is not a valid float (type=type_error.float)
tags -> a
  str type expected (type=type_error.str)
tags -> a -> 0
  value could not be parsed to a boolean (type=type_error.bool)
tags -> a -> 0
  value is not a valid float (type=type_error.float)
tags -> a -> 0
  str type expected (type=type_error.str)
tags -> a
  value is not a valid dict (type=type_error.dict)

Bug Report: DocumentArray Embedding Projector visualization FileNotFoundError

I'm not entirely sure what is causing this, but I keep getting the error `FileNotFoundError: [Errno 2] No such file or directory: {some long install path}/docarray/resources/embedding-projector/index.html.gz` when trying to run the minimal example from https://docarray.jina.ai/fundamentals/documentarray/visualization/#embedding-projector.

I first ran into it on Google Colab, after installing DocArray with pip install "docarray[full]" (Version 0.1.4). I then tried installing from source with pip install git+https://github.com/jina-ai/docarray.git (Version 0.1.5). This seemed to address that index.html.gz FileNotFound error, but then I got the message You should see a webpage opened in your browser, if not, you may open http://localhost:35967/static/index.html?config=config.json manually, which made me think it won't be possible in Colab (without additional tweaking).

I then tried to install on my local MacBook, using pip install "docarray[full]", pip install git+https://github.com/jina-ai/docarray.git, and conda install, but simply couldn't get around the index.html.gz FileNotFoundError.

I was so excited to see the Tensorboard Embedding Projector working so seamlessly in the documentation, so hopefully this bug report helps get that working more universally.

embed(model) not working if input shape is dictionary

When using a TensorFlow model with a dictionary output signature, it crashes:

File ".../env/lib/python3.7/site-packages/docarray/array/mixins/content.py", line 158, in _get_len
    return len(value) if isinstance(value, (list, tuple)) else value.shape[0]
AttributeError: 'dict' object has no attribute 'shape'

value was:


{'output': <tf.Tensor: shape=(256, 256), dtype=float32, numpy=
array([[  8.158718 ,   2.4887109,  -2.5281172, ...,   8.878306 ,
         -8.251455 ,  10.672461 ],
       [ 10.874803 ,   6.623814 ,   1.6076566, ...,  12.355374 ,
          1.0049605,   7.5029907],
       [ 12.8260975,   5.9833894,   2.475936 , ...,   4.0352244,
        -10.895865 ,   9.048189 ],
       ...,
       [ 10.0770645, -19.8818   ,  -1.3060216, ..., -13.77472  ,
        -17.711151 ,  19.755842 ],
       [  3.8991792,  -9.776771 ,  -3.6401577, ..., -14.964616 ,
        -23.071358 ,   3.897675 ],
       [  7.3335924, -23.23055  , -10.826139 , ...,   3.6922274,
         -2.501924 ,   8.707939 ]], dtype=float32)>}

Embedding of a document

As per my understanding, a chunk of a document is basically a sub-document, but when I try to see the vocabulary or the embeddings of the document, I don't see the chunks being utilized for creating embeddings. Is there any other parameter to include chunks while creating a feature embedding of a document? Or am I wrong in my understanding?

P.S : First time using docarray and I am in awe of it

`plot_embeddings` with `image_sprites` from uri

To reproduce:

from docarray import DocumentArray, Document
import torchvision

def preproc(d: Document):
    return (d.load_uri_to_image_tensor()  
             .set_image_tensor_shape((200, 200))  
             .set_image_tensor_normalization() 
             .set_image_tensor_channel_axis(-1, 0)) 

left_da = DocumentArray.from_files('left/*.jpg')

left_da.apply(preproc)

model = torchvision.models.resnet50(pretrained=True)  
left_da.embed(model)  

left_da.plot_embeddings(image_sprites=True)

Error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [1], in <module>
     14 model = torchvision.models.resnet50(pretrained=True)  
     15 left_da.embed(model)  
---> 17 left_da.plot_embeddings(image_sprites=True)

File ~/workplaces/docarray-env/docarray/docarray/array/mixins/plot.py:152, in PlotMixin.plot_embeddings(self, title, path, image_sprites, min_image_size, channel_axis, start_server, host, port)
    140     if len(self) > max_docs:
    141         warnings.warn(
    142             f'''
    143             {self!r} has more than {max_docs} elements, which is the maximum number of image sprites can support. 
   (...)
    149             '''
    150         )
--> 152     self.plot_image_sprites(
    153         os.path.join(path, sprite_fn),
    154         canvas_size=canvas_size,
    155         min_size=min_image_size,
    156         channel_axis=channel_axis,
    157     )
    159 self.save_embeddings_csv(os.path.join(path, emb_fn), delimiter='\t')
    161 _exclude_fields = ('embedding', 'tensor', 'scores')

File ~/workplaces/docarray-env/docarray/docarray/array/mixins/plot.py:331, in PlotMixin.plot_image_sprites(self, output, canvas_size, min_size, channel_axis)
    329 row_id = floor(img_id / img_per_row)
    330 col_id = img_id % img_per_row
--> 331 sprite_img[
    332     (row_id * img_size) : ((row_id + 1) * img_size),
    333     (col_id * img_size) : ((col_id + 1) * img_size),
    334 ] = _d.tensor
    336 img_id += 1
    337 if img_id >= max_num_img:

ValueError: could not broadcast input array from shape (16,16,200) into shape (16,16,3)

Suggested fix:
passing use_uri flag to plot_embeddings to allow visualizing the dots using uri attribute. I would be happy to raise a PR for this.

Final results are not displayed

The following lines of code (as also shown in the documentation) do not display any result matrix.
The code runs through without throwing any error, but it does not display anything.

(DocumentArray(left_da[8].matches, copy=True)
.apply(lambda d: d.set_image_blob_channel_axis(0, -1)
.set_image_blob_inv_normalization())
.plot_image_sprites('result.png'))

traverse_flat not the same as [] operator

from jina import DocumentArray, Document

da = DocumentArray([Document(text='Hello. World! Go? Back')])
da['@r']
da.traverse_flat('@r')

this fails with

Traceback (most recent call last):
  File "/home/cristian/.config/JetBrains/PyCharm2021.3/scratches/scratch_16.py", line 5, in <module>
    da.traverse_flat('@r')
  File "/home/cristian/.virtualenvs/executors30/lib/python3.7/site-packages/docarray/array/mixins/traverse.py", line 120, in traverse_flat
    return self._flatten(leaves)
  File "/home/cristian/.virtualenvs/executors30/lib/python3.7/site-packages/docarray/array/mixins/traverse.py", line 159, in _flatten
    return DocumentArray(list(itertools.chain.from_iterable(sequence)))
  File "/home/cristian/.virtualenvs/executors30/lib/python3.7/site-packages/docarray/array/mixins/traverse.py", line 46, in traverse
    yield from self._traverse(self, p, filter_fn=filter_fn)
  File "/home/cristian/.virtualenvs/executors30/lib/python3.7/site-packages/docarray/array/mixins/traverse.py", line 56, in _traverse
    cur_loc, cur_slice, _left = _parse_path_string(path)
  File "/home/cristian/.virtualenvs/executors30/lib/python3.7/site-packages/docarray/array/mixins/traverse.py", line 164, in _parse_path_string
    _this = g.group(1)
AttributeError: 'NoneType' object has no attribute 'group'

docarray==0.7.2

feat: add `field_resolver` to `from_dataframe`

Proposal

Currently, our from_csv takes field_resolver as input, which maps the column name of the data to the value specified in field_resolver.

The from_dataframe currently does not seem to have this feature yet. It would be great if we can add the requested feature as discussed here.

inconsistent none behavior?

I would expect #1 and #2 to behave the same, but #2 works and #1 doesn't.

from docarray import DocumentArray

da = DocumentArray.empty(10)
da.embeddings = [[1, 2, 3]] * 10
da.tensors = [[4, 5, 6]] * 10

# 1
da[...][:, ('embeddings', 'tensors')] = None, None

# 2
# da[...].embeddings = None
# da[...].tensors = None

print(da.embeddings)
print(da.tensors)

Chunks get different parent_id depending on argument order, when creating a Document

The sequence of arguments affects whether or not parent_id is assigned correctly. This is illustrated in the example below:

If the id argument is before the chunks argument then everything works as expected

from docarray import Document

d = Document(
    id='d0',  
    chunks=[Document(id='d1')],
)
print(d.id, d.chunks[0].parent_id)

>>> d0 d0

If the id argument is after the chunks argument then the chunks are assigned a UUID as parent_id instead of the correct d0

d = Document(  
    chunks=[Document(id='d1')],
    id='d0',
)
print(d.id, d.chunks[0].parent_id)

>>> d0 50202e269c7911ec85291221a3f8eaea
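Until this is fixed, a possible workaround is to assign the chunks after construction, so the id already exists when the chunks are attached (a sketch, assuming post-hoc chunk assignment propagates the parent id, as the first snippet suggests):

from docarray import Document

d = Document(id='d0')
d.chunks = [Document(id='d1')]  # parent_id is derived from the already-set id
print(d.id, d.chunks[0].parent_id)  # expected: d0 d0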

bug: `_ipython_display_` stacking tensors/embeddings

When printing DocumentArray information in a Jupyter notebook, _ipython_display_ is called, which in turn calls summary; the codebase currently stacks the embeddings/tensors there.

This does not work for ragged arrays and raises ValueError: all input arrays must have the same shape

from docarray import DocumentArray, Document
import numpy as np
da = DocumentArray([Document(tensor=np.zeros(3)), Document(tensor=np.zeros(4))])
da._ipython_display_()

but this works as expected

In [4]: from docarray import DocumentArray,Document
   ...: import numpy as np
   ...: da = DocumentArray([Document(tensor=np.zeros(3)), Document(tensor=np.zeros(3))])
   ...: da._ipython_display_()
   ...: 
   ...: 
             Documents Summary             
                                           
  Length                 2                 
  Homogenous Documents   True              
  Common Attributes      ('id', 'tensor')  
                                           
                      Attributes Summary                       
                                                               
  Attribute   Data type      #Unique values   Has empty value  
 ───────────────────────────────────────────────────────────── 
  id          ('str',)       2                False            
  tensor      ('ndarray',)   2                False            
                                                               
          Storage Summary          
                                   
  Class     DocumentArrayInMemory  
  Backend   In Memory      

Why this happens

When plotting to a Jupyter notebook, _ipython_display_ is called, which calls summary, which calls
all_attrs_values = self._get_attributes(*all_attrs_names). If there are tensor or embedding fields, then all_attrs_names contains them. This means .tensors or .embeddings can end up being called, which breaks because the data can't be stacked.

Workaround

Avoid calling .tensors or .embeddings; this is actually quite dangerous for big datasets anyway, because it allocates memory for all the vectors.
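To see the failure mode in isolation, here is a minimal numpy reproduction together with a cheap check a summary could perform instead of stacking (a sketch, not the actual fix):

import numpy as np

tensors = [np.zeros(3), np.zeros(4)]

# Stacking is effectively what accessing `.tensors` does; it requires
# identical shapes and fails on ragged input:
try:
    np.stack(tensors)
except ValueError as e:
    print(e)  # all input arrays must have the same shape

# A summary could instead check shapes lazily, without allocating:
print(len({t.shape for t in tensors}) == 1)  # False -> not stackable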

fix tensor serialization

Maybe this problem is due to the serialization of the DocumentArray?

from jina import DocumentArray, Document, Flow
import numpy as np

docs = DocumentArray([Document(tensor=np.random.rand(224, 224))] * 100)
print(docs.summary())

with Flow(protocol='http').add() as flow:
    flow.post('/index', inputs=docs, show_progress=True)

with Flow(protocol='grpc').add() as flow:
    flow.post('/index', inputs=docs, show_progress=True)

yields

          Flow@96082[I]:πŸŽ‰ Flow is ready to use!
	πŸ”— Protocol: 		HTTP
	🏠 Local access:	0.0.0.0:64451
	πŸ”’ Private network:	192.168.31.35:64451
	πŸ’¬ Swagger UI:		http://localhost:64451/docs
	πŸ“š Redoc:		http://localhost:64451/redoc
β Ό       DONE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╸ 0:00:48 100% ETA: 0 seconds 80 steps done in 48 seconds
           Flow@96082[I]:πŸŽ‰ Flow is ready to use!
	πŸ”— Protocol: 		GRPC
	🏠 Local access:	0.0.0.0:49305
	πŸ”’ Private network:	192.168.31.35:49305
β ‹       DONE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╸ 0:00:00 100% ETA: 0 seconds 80 steps done in 0 seconds

We can observe that the HTTP Flow takes much more time than the gRPC one.
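One way to test that hypothesis is to time the serialization paths in isolation; a rough sketch, assuming the HTTP protocol has to build a JSON payload while gRPC uses the binary path:

import time

import numpy as np
from docarray import DocumentArray, Document

docs = DocumentArray([Document(tensor=np.random.rand(224, 224))] * 100)

t0 = time.perf_counter()
docs.to_json()  # text payload, roughly what the HTTP protocol must build
t1 = time.perf_counter()
docs.to_bytes()  # binary serialization, closer to the gRPC path
t2 = time.perf_counter()

print(f'json:  {t1 - t0:.3f}s')
print(f'bytes: {t2 - t1:.3f}s')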

Weaviate Backend: Create objects in batch instead of one at a time

The current setup creates objects sequentially by calling the _setitem method, which also checks whether the object already exists. This code flow is fine when the creation requests to the backend are asynchronous and do not wait for each object to be created individually. Since the current version of the Python client (v3.3.0) does not allow asynchronous calls, we can achieve the same effect by creating objects in batches, as sketched below.

The drawback of creating objects sequentially is that every time we make a request to create an object, we wait until the Weaviate server has created it and sent back a response. While we wait for that response we could already be sending more creation requests to Weaviate, since objects do not depend on each other (I assume they don't). This can be achieved by collecting objects into a batch and then sending them to the Weaviate server in one go. This workflow reduces the waiting time of the main Python thread.

Another thing to notice is that creating objects in batch replaces an object if its UUID already exists, which means _setitem can be reduced to one line and one batch request.
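A rough sketch of batched creation with the v3 Python client (the class name and payload shape are assumptions; how documents map to data objects is up to the backend implementation):

import weaviate

client = weaviate.Client('http://localhost:8080')

# Collect objects locally and send them in one request instead of one
# existence check plus one create round trip per object. Batched
# creation upserts: an existing UUID is overwritten, not duplicated.
for doc in docs:  # `docs`: the documents the backend needs to persist
    client.batch.add_data_object(
        data_object={'_serialized': doc.to_base64()},  # hypothetical payload
        class_name='Document',
        uuid=doc.id,
    )
client.batch.create_objects()  # one round trip for the whole batch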

The misuse of smart union in pydantic

The string value "1" in doc.tags is automatically converted to the bool True when using .to_pydantic_model. This does not make sense.

The reproduce script:

from docarray import Document

doc = Document(tags={'a': '1', 'b': 1, 'c': '2'})
print('origin:')
print(doc.tags)

pd_doc_tags = doc.to_pydantic_model().tags
print('pydantic: ')
print(pd_doc_tags)

proto_tags = doc.to_dict(protocol='protobuf')['tags']
print('protobuf: ')
print(proto_tags)

json_tags = doc.to_dict(protocol='jsonschema')['tags']
print('jsonschema')
print(proto_tags)  # note: re-prints proto_tags, so the jsonschema output below mirrors the protobuf one

yields the following output:

origin:
{'a': '1', 'b': 1, 'c': '2'}
pydantic: 
{'a': True, 'b': True, 'c': 2.0}
protobuf: 
{'a': '1', 'c': '2', 'b': 1.0}
jsonschema
{'a': '1', 'c': '2', 'b': 1.0}
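The coercion is standard pydantic v1 behavior: Union members are tried left to right, and the bool validator accepts both '1' and 1. A minimal reproduction outside DocArray (the exact Union ordering inside DocArray's pydantic model is an assumption):

from typing import Dict, Union

from pydantic import BaseModel


class TagsModel(BaseModel):
    # assumed ordering; pydantic v1 tries Union members left to right
    tags: Dict[str, Union[bool, float, str]]


print(TagsModel(tags={'a': '1', 'b': 1, 'c': '2'}).tags)
# {'a': True, 'b': True, 'c': 2.0}

With Config.smart_union = True (pydantic >= 1.9), exact type matches win first, so the string '1' would survive as a string.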

Docarray `copy=True`

@slettner commented that the current behaviour of DocumentArray does not work as expected when copy=True is used.

MWE

from docarray import DocumentArray, Document


record = DocumentArray([
    Document(tags={'track_id': 'a'}),
    Document(tags={'track_id': 'b'}),
])

all_tracks_a = DocumentArray(record.find({'tags__track_id': {'$eq': 'a'}}), copy=True)
all_tracks_b = DocumentArray(record.find({'tags__track_id': {'$in': ['a', 'b']}}), copy=True)

result = DocumentArray()

for doc in all_tracks_a:
    doc.tags['label'] = 'l_a'
result.extend(all_tracks_a)
print(result[0].tags['label'])  # >>> 'l_a'

for doc in all_tracks_b:
    doc.tags['label'] = 'l_b'
result.extend(all_tracks_b)
print(result[0].tags['label'])  # >>> 'l_b'  (expected 'l_a', since all_tracks_a was created with copy=True)

πŸ™Œ Are you using DocArray in your company/software?

Are you using DocArray in your software/library/company? Share it with us πŸ™Œ and we will link to you in our README & docs.

Name: the company/software/package name
Logo: your company/software/package logo, preferably in SVG/PNG
URL: your company/software/package URL, ideally pointing to the page where DocArray is used

e.g.

Name: Weaviate
Logo: https://www.semi.technology/playbooks/design/style-graphic-solutions.html
URL: https://weaviate.io/developers/weaviate/current/client-libraries/python.html#neural-search-frameworks
