mongo-rust-driver's Introduction

MongoDB Rust Driver

This is the officially supported MongoDB Rust driver, a client-side library that can be used to interact with MongoDB deployments in Rust applications. It uses the bson crate for BSON support. The driver contains a fully async API that requires tokio. The driver also has a sync API that may be enabled via feature flags.

For more details, including features, runnable examples, troubleshooting resources, and more, please see the official documentation.

Installation

Requirements

  • Rust 1.64+ (See the MSRV policy for more information)
  • MongoDB 3.6+

Supported Platforms

The driver is tested against Linux, macOS, and Windows in CI.

Importing

The driver is available on crates.io. To use the driver in your application, simply add it to your project's Cargo.toml.

[dependencies]
mongodb = "2.8.0"

Version 1 of this crate has reached end of life and will no longer be receiving any updates or bug fixes, so all users are recommended to always depend on the latest 2.x release. See the 2.0.0 release notes for migration information if upgrading from a 1.x version.
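
For reference, a minimal async usage sketch (assuming tokio with the macros feature is also a dependency; the URI, database, and collection names are placeholders):

use mongodb::{bson::doc, Client};

#[tokio::main]
async fn main() -> mongodb::error::Result<()> {
    // Connect to a local deployment; replace the URI as needed.
    let client = Client::with_uri_str("mongodb://localhost:27017").await?;
    let collection = client.database("example").collection("books");

    // Insert a single BSON document.
    collection
        .insert_one(doc! { "title": "1984", "author": "George Orwell" }, None)
        .await?;
    Ok(())
}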

Enabling the sync API

The driver also provides a blocking sync API. To enable this, add the "sync" feature to your Cargo.toml:

[dependencies.mongodb]
version = "2.8.0"
features = ["sync"]

Note: The sync-specific types can be imported from mongodb::sync (e.g. mongodb::sync::Client).
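
For reference, a minimal sketch using the sync API (URI and names are placeholders):

use mongodb::{
    bson::{doc, Document},
    sync::Client,
};

fn main() -> mongodb::error::Result<()> {
    // The sync Client blocks internally on the async implementation.
    let client = Client::with_uri_str("mongodb://localhost:27017")?;
    let collection = client.database("example").collection::<Document>("books");

    // find_one returns Result<Option<Document>>.
    if let Some(book) = collection.find_one(doc! { "author": "George Orwell" }, None)? {
        println!("{}", book);
    }
    Ok(())
}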

All Feature Flags

Feature Description
dns-resolver Enable DNS resolution to allow mongodb+srv URI handling. Enabled by default.
rustls-tls Use rustls for TLS connection handling. Enabled by default.
openssl-tls Use openssl for TLS connection handling.
sync Expose the synchronous API (mongodb::sync).
aws-auth Enable support for the MONGODB-AWS authentication mechanism.
zlib-compression Enable support for compressing messages with zlib.
zstd-compression Enable support for compressing messages with zstd.
snappy-compression Enable support for compressing messages with snappy.
in-use-encryption-unstable Enable support for client-side field level encryption and queryable encryption. This API is unstable and may be subject to breaking changes in minor releases.
tracing-unstable Enable support for emitting tracing events. This API is unstable and may be subject to breaking changes in minor releases.

Web Framework Examples

Actix

The driver can be used easily with the Actix web framework by storing a Client in Actix application data. A full example application for using MongoDB with Actix can be found here.
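
For illustration, a minimal sketch of that pattern (assuming actix-web 4 and the default async API; the route and names are placeholders, not taken from the linked example):

use actix_web::{get, web, App, HttpServer, Responder};
use mongodb::Client;

#[get("/databases")]
async fn list_databases(client: web::Data<Client>) -> impl Responder {
    // The handler borrows the shared Client from application data.
    match client.list_database_names(None, None).await {
        Ok(names) => names.join("\n"),
        Err(e) => format!("error: {}", e),
    }
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let client = Client::with_uri_str("mongodb://localhost:27017")
        .await
        .expect("failed to connect");

    HttpServer::new(move || {
        // Client clones are cheap and share the same connection pool.
        App::new()
            .app_data(web::Data::new(client.clone()))
            .service(list_databases)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}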

Rocket

The Rocket web framework provides built-in support for MongoDB via the Rust driver. The documentation for the rocket_db_pools crate contains instructions for using MongoDB with your Rocket application.

Note on connecting to Atlas deployments

In order to connect to a pre-4.2 Atlas instance that's M2 or bigger, the openssl-tls feature flag must be enabled. The flag is not required for clusters smaller than M2 or running server versions 4.2 or newer.
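
For example, enabling the flag in Cargo.toml might look like this (a sketch; the version matches the snippet above):

[dependencies.mongodb]
version = "2.8.0"
features = ["openssl-tls"]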

Windows DNS note

On Windows, there is a known issue in the trust-dns-resolver crate, which the driver uses to perform DNS lookups, that causes severe performance degradation in resolvers that use the system configuration. Since the driver uses the system configuration by default, users are recommended to specify an alternate resolver configuration on Windows (e.g. ResolverConfig::cloudflare()) until that issue is resolved. This only has an effect when connecting to deployments using a mongodb+srv connection string.
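
For illustration, a hedged sketch of overriding the resolver configuration (the SRV URI is a placeholder):

use mongodb::{
    options::{ClientOptions, ResolverConfig},
    Client,
};

async fn connect() -> mongodb::error::Result<Client> {
    // Use Cloudflare's public resolver instead of the system configuration.
    let options = ClientOptions::parse_with_resolver_config(
        "mongodb+srv://example.mongodb.net",
        ResolverConfig::cloudflare(),
    )
    .await?;
    Client::with_options(options)
}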

Warning about timeouts / cancellation

In async Rust, it is common to implement cancellation and timeouts by dropping a future after a certain period of time instead of polling it to completion. This is how tokio::time::timeout works, for example. However, doing this with futures returned by the driver can leave the driver's internals in an inconsistent state, which may lead to unpredictable or incorrect behavior (see RUST-937 for more details). As such, it is highly recommended to poll all futures returned from the driver to completion. In order to still use timeout mechanisms like tokio::time::timeout with the driver, one option is to spawn tasks and time out on their JoinHandle futures instead of on the driver's futures directly. This will ensure the driver's futures will always be completely polled while also allowing the application to continue in the event of a timeout.
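
For illustration, a sketch of that pattern assuming a tokio runtime (the collection, filter, and five-second budget are arbitrary):

use std::time::Duration;

use mongodb::{
    bson::{doc, Document},
    Collection,
};

async fn insert_with_timeout(coll: Collection<Document>) -> Result<(), String> {
    // Spawn the driver operation so it is always polled to completion,
    // and apply the timeout to the JoinHandle instead.
    let handle = tokio::spawn(async move { coll.insert_one(doc! { "x": 1 }, None).await });

    match tokio::time::timeout(Duration::from_secs(5), handle).await {
        Ok(join_result) => {
            join_result.map_err(|e| e.to_string())?.map_err(|e| e.to_string())?;
            Ok(())
        }
        // On timeout, the spawned task keeps running to completion in the background.
        Err(_elapsed) => Err("operation timed out".to_string()),
    }
}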

Bug Reporting / Feature Requests

To file a bug report or submit a feature request, please open a ticket on our Jira project:

  • Create an account and login at jira.mongodb.org
  • Navigate to the RUST project at jira.mongodb.org/browse/RUST
  • Click Create Issue - If the ticket you are filing is a bug report, please include as much detail as possible about the issue and how to reproduce it.

Before filing a ticket, please use the search functionality of Jira to see if a similar issue has already been filed.

Contributing

We encourage and would happily accept contributions in the form of GitHub pull requests. Before opening one, be sure to run the tests locally; check out the testing section for information on how to do that. Once you open a pull request, your branch will be run against the same testing matrix that we use for our continuous integration system, so it is usually sufficient to only run the integration tests locally against a standalone. Remember to always run the linter tests before opening a pull request.

Running the tests

Integration and unit tests

In order to run the tests (which are mostly integration tests), you must have access to a MongoDB deployment. You may specify a MongoDB connection string in the MONGODB_URI environment variable, and the tests will use it to connect to the deployment. If MONGODB_URI is unset, the tests will attempt to connect to a local deployment on port 27017.

Note: The integration tests will clear out the databases/collections they need to use, but they do not clean up after themselves.

To actually run the tests, you can use cargo like you would in any other crate:

cargo test --verbose # runs against localhost:27017
export MONGODB_URI="mongodb://localhost:123"
cargo test --verbose # runs against localhost:123

Auth tests

The authentication tests will only be included in the test run if certain requirements are met:

  • The deployment must have --auth enabled
  • Credentials must be specified in MONGODB_URI
  • The credentials specified in MONGODB_URI must be valid and have root privileges on the deployment

export MONGODB_URI="mongodb://user:pass@localhost:27017"
cargo test --verbose # auth tests included

Topology-specific tests

Certain tests will only be run against certain topologies. To ensure that the entire test suite is run, make sure to run the tests separately against standalone, replicated, and sharded deployments.

export MONGODB_URI="mongodb://my-standalone-host:27017" # mongod running on 27017
cargo test --verbose
export MONGODB_URI="mongodb://localhost:27018,localhost:27019,localhost:27020/?replicaSet=repl" # replicaset running on ports 27018, 27019, 27020 with name repl
cargo test --verbose
export MONGODB_URI="mongodb://localhost:27021" # mongos running on 27021
cargo test --verbose

Run the tests with TLS/SSL

To run the tests with TLS/SSL enabled, you must enable it on the deployment and in MONGODB_URI.

export MONGODB_URI="mongodb://localhost:27017/?tls=true&tlsCertificateKeyFile=cert.pem&tlsCAFile=ca.pem"
cargo test --verbose

Note: When you open a pull request, your code will be run against a comprehensive testing matrix, so it is usually not necessary to run the integration tests against all combinations of topology/auth/TLS locally.

Linter Tests

Our linter tests use the nightly version of rustfmt to verify that the source is formatted properly and the stable version of clippy to statically detect any common mistakes. You can use rustup to install them both:

rustup component add clippy --toolchain stable
rustup component add rustfmt --toolchain nightly

Our linter tests also use rustdoc to verify that all necessary documentation is present and properly formatted. rustdoc is included in the standard Rust distribution.

To run the linter tests, run the check-clippy.sh, check-rustfmt.sh, and check-rustdoc.sh scripts in the .evergreen directory. To run all three, use the check-all.sh script.

bash .evergreen/check-all.sh

Continuous Integration

Commits to main are run automatically on evergreen.

Minimum supported Rust version (MSRV) policy

The MSRV for this crate is currently 1.64.0. This will rarely be increased, and if it ever is, it will only happen in a minor or major version release.

License

This project is licensed under the Apache License 2.0.

This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/).

mongo-rust-driver's Issues

Possible issue with generic parameterization of collections

A simple application of find_one_and_update might be to update an item in an array within a document and return the old item (given that our document may be large and we don't want to transfer all of it).

You might do this like:

let update_result = user_collection.find_one_and_update(
    doc! { "_id": doc_id, "items.item_id": &item_id },
    mongodb::options::UpdateModifications::Document(
        doc! { "$set": { "items.$.data": 12 } },
    ),
    mongodb::options::FindOneAndUpdateOptions::builder()
        .projection(doc! {
            "item": {
                "$arrayElemAt": ["$items", {
                    "$indexOfArray": ["$items.item_id",&item_id]
                } ]
            }
        })
        .build(),
).await;

This may work if we have Collection<Document>, as update_result would be Result<Option<Document>>, which would allow for projection. But as soon as we use a specific model, as recommended:

It is recommended to define types that model your data which you can parameterize your Collections with instead of Document

We can no longer use projection in any of the operations on Collection which return a type parameterised with T (e.g. find_one_and_update, find_one, find_one_and_replace, etc.).

If I'm missing something here please let me know but this seems like a fairly significant design issue.

A fairly easy solution might be to simply return impl From<Document> or impl TryFrom<Document> or impl Deserialize for find_one_and_update etc. Notably this same approach could be applied to aggregate to return Cursor<K> instead of Cursor<Document>.

Change streams support

Hi! Thank you, folks, for implementing the MongoDB driver for Rust. Though I'm concerned, is it really possible to use change streams. If not, when do you plan to implement this feature?

Mongodb Rust - view all records inside a collection (table)

Main.rs...
extern crate actix_web;
use bson::Bson;
use bson::oid::ObjectId;
use mongodb::bson::{doc};
use serde::{self, Deserialize, Serialize};
use mongodb::{Client};

#[derive(Deserialize, Serialize)]
#[derive(Debug)]
struct Product {
    #[serde(rename = "_id")]
    id: ObjectId,
    item: String,
}

#[actix_rt::main]
async fn main() -> mongodb::error::Result<()> {
    let client = Client::with_uri_str("mongodb+srv://sm8082:*******@cluster0.zemkn.mongodb.net/myFirstDatabase?retryWrites=true&w=majority").await.expect("Mongo Error");
    let db = client.database("example");
    let collection = db.collection::<Product>("products");
    println!("Adding record...");

    let mut cursor = collection.find(None, None).await;

    while let Some(doc) = cursor.next().await {
        let prod: Product = bson::from_bson(Bson::Document(doc?))?;
        println!("{}: {}", prod.id, prod.item);
    }

    Ok(())
}


Cargo.toml dependencies:

[dependencies]
actix-web = "3.0.0"
actix-rt = "1.0.0"
bson = "0.14.1"
serde_json = "1.0"
tokio = {version="1.11.0", features=["full"]}
serde = { version = "1.0", features = ["derive"] }
futures = "0.1.29"
#futures = "0.3.12"
[dependencies.mongodb]
version = "2.0.0"
default-features = false
features = ["async-std-runtime"]


Error screen as below:

error[E0599]: no method named next found for enum Result in the current scope
--> src/main.rs:26:34
|
26 | while let Some(doc) = cursor.next().await {
| ^^^^ method not found in Result<mongodb::Cursor<Product>, mongodb::error::Error>

error[E0277]: ? couldn't convert the error to mongodb::error::ErrorKind
--> src/main.rs:27:66
|
27 | let prod: Product = bson::from_bson(Bson::Document(doc?))?;
| ^ the trait From<DecoderError> is not implemented for mongodb::error::ErrorKind
|
= note: the question mark operation (?) implicitly performs a conversion on the error value using the From trait
= help: the following implementations were found:
<mongodb::error::ErrorKind as From<mongodb::bson::de::Error>>
<mongodb::error::ErrorKind as From<mongodb::bson::ser::Error>>
<mongodb::error::ErrorKind as From<std::io::Error>>
<mongodb::error::ErrorKind as From<std::io::ErrorKind>>
= note: required because of the requirements on the impl of From<DecoderError> for mongodb::error::Error
= note: required because of the requirements on the impl of FromResidual<Result<Infallible, DecoderError>> for Result<(), mongodb::error::Error>
note: required by from_residual
--> /home/sm8082/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/try_trait.rs:339:5
|
339 | fn from_residual(residual: R) -> Self;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Some errors have detailed explanations: E0277, E0599.
For more information about an error, try rustc --explain E0277.
error: could not compile Ch13_Mongodb_BSON_display_data due to 2 previous errors
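
For reference, a hedged sketch of how the loop could be adjusted (reusing the Product struct above; note that futures 0.3 rather than 0.1 is needed for StreamExt): find returns a Result that must be unwrapped before iterating, and since the collection is parameterized with Product the cursor already yields deserialized values.

use futures::stream::StreamExt;
use mongodb::Collection;

async fn print_products(collection: Collection<Product>) -> mongodb::error::Result<()> {
    // find returns Result<Cursor<Product>>; unwrap it (here with ?) before iterating.
    let mut cursor = collection.find(None, None).await?;

    // Cursor<Product> yields Result<Product>, so no manual from_bson call is needed.
    while let Some(product) = cursor.next().await {
        let prod = product?;
        println!("{}: {}", prod.id, prod.item);
    }
    Ok(())
}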

Async timeouts, multiple async calls

So I am slightly confused by the recommendation to place the async calls in a tokio::task::spawn() task.
My question is: if I have a function that makes a lot of collection calls, do you recommend placing all the async calls in one tokio spawn(), or placing every single async call in its own tokio task?

Constant background epoll() syscall when using tokio async runtime

Versions/Environment

  1. What version of Rust are you using? rustc 1.57.0 (f1edd0429 2021-11-29)

  2. What operating system are you using? Arch Linux (latest)

  3. What versions of the driver and its dependencies are you using? (Run
    cargo pkgid mongodb & cargo pkgid bson)
    https://github.com/rust-lang/crates.io-index#mongodb:2.1.0
    https://github.com/rust-lang/crates.io-index#bson:2.1.0

  4. What version of MongoDB are you using? (Check with the MongoDB shell using db.version()) 5.0.5

  5. What is your MongoDB topology (standalone, replica set, sharded cluster, serverless)? standalone

Describe the bug

In an actix-web main function, I created the mongodb client with
let client = db::connect("mongodb://localhost/27017").await.expect("unable to connect to Database");

then ran cargo build --release to build it once, then ran strace cargo run --release.
In the output I can see that the program calls epoll_wait() and then write() a little more than once a second. This issue does not happen when the client is not created, and it also does not happen when we use the async-std-runtime by writing:
mongodb = { version = "2.1.0", default-features = false, features = ["async-std-runtime"]}
in the Cargo.toml file

With the latter runtime, there is only one epoll_wait() syscall and then nothing more.
With the default tokio runtime, for some reason the program calls epoll_wait() and write() many times per second.

The code used in the main.rs file was:

use actix_web::{web, App, HttpResponse, HttpServer, Responder};

use mongodb::{bson::doc, options::ClientOptions, Client};

async fn hey() -> impl Responder {
        HttpResponse::Ok().body("Hello world!")
}

async fn connect(uri: &str) -> mongodb::error::Result<Client> {
    // Parse your connection string into an options struct
    let mut client_options =
        ClientOptions::parse(uri).await?;
    //
    // Manually set an option
    client_options.app_name = Some("demo".to_string());
    // Get a handle to the cluster
    let client = Client::with_options(client_options)?;
    // Ping the server to see if you can connect to the cluster
    client
        .database("test")
        .run_command(doc! {"ping": 1}, None)
        .await?;
    println!("connected successfully !");
    Ok(client)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {

    let client = connect("mongodb://localhost/27017").await.expect("unable to connect to Database");
    // let db = client.database("test"); // unecessary to reproduce bug

    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(hey))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}

The Cargo.toml file was:

[package]
name = "rustback"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
actix-web = "4.0.0-beta.19"
mongodb = "2.1.0"
# mongodb = { version = "2.1.0", default-features = false, features = ["async-std-runtime"]} # fixes the bug when used insted

RUST-1048 support something like get_default_database from client?

Let's say that I have the following code:

use mongodb::sync::Client;
let cli = Client::with_uri_str("mongodb://localhost:21017/test_db").unwrap();

It would be good if we could fetch the default database like this:

assert_eq!(cli.get_default_database(), Some("test_db"));

If I have the following code:

use mongodb::sync::Client;
let cli = Client::with_uri_str("mongodb://localhost:21017/").unwrap();

then cli.get_default_database could return None.

Connection string reference: https://docs.mongodb.com/manual/reference/connection-string/

Can't run the documentation examples

Hi,

i'm trying to use the package and I'm facing some issues. The code below doesn't compile

use mongodb::{options::ClientOptions, Client};
#[tokio::main]
async fn main() {
    let mut client_options = ClientOptions::parse("mongodb://localhost:27017").await?;
    let client = Client::with_options(client_options)?;

    for db_name in client.list_database_names(None, None).await? {
        println!("{}", db_name);
    }
    println!("Done");
}

I get this error:

error: the `?` operator can only be used in an async block that returns `Result` or `Option` (or another type that implements `FromResidual`)
label: this function should return `Result` or `Option` to accept `?`
note: required by `from_residual`
label: this function should return `Result` or `Option` to accept `?`

My environment:

Cargo:

[dependencies]
mongodb = "2.0.0"
futures = "0.3.0"
tokio = { version = "1", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }

rustc 1.55.0 (c8dfcfe04 2021-09-06)

What am I doing wrong?
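
As the error message indicates, the ? operator needs the enclosing function to return a Result (or Option). A sketch of a version that should compile, unchanged except for the return type and the final Ok(()):

use mongodb::{options::ClientOptions, Client};

#[tokio::main]
async fn main() -> mongodb::error::Result<()> {
    let client_options = ClientOptions::parse("mongodb://localhost:27017").await?;
    let client = Client::with_options(client_options)?;

    for db_name in client.list_database_names(None, None).await? {
        println!("{}", db_name);
    }
    println!("Done");
    Ok(())
}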

Support for mongodb+consul?

Hi there, I'd like to know if this driver supports mongodb+consul scheme now? If not, do we have a plan for it?

Compiling mongodb v2.1.0 error

rustc 1.60.0-nightly (17d29dcdc 2022-01-21) running on x86_64-pc-windows-msvc. I don't know how to solve this; I stripped main down to the minimal program below and it still reports an error.

main.rs:

use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    println!("Hello, world!");
    Ok(())
}
   Compiling mongodb v2.1.0
error: internal compiler error: compiler\rustc_mir_transform\src\generator.rs:755:13: Broken MIR: generator contains type ClientOptionsParser in MIR, but typeck only knows about {ResumeTy, impl AsRef<str>, std::option::Option<resolver_config::ResolverConfig>, bool, client::options::ClientOptions, [closure@C:\Users\BORBER\.cargo\registry\src\mirrors.tuna.tsinghua.edu.cn-df7c3c540f42cdbd\mongodb-2.1.0\src\client\options\mod.rs:1100:69: 1100:90], impl futures_util::Future<Output = std::result::Result<SrvResolver, error::Error>>, (), SrvResolver, &Vec<client::options::ServerAddress>, Vec<client::options::ServerAddress>, usize, &client::options::ServerAddress, client::options::ServerAddress, &str, impl futures_util::Future<Output = std::result::Result<ResolvedConfig, error::Error>>} and [impl AsRef<str>, std::option::Option<client::options::resolver_config::ResolverConfig>]
    --> C:\Users\BORBER\.cargo\registry\src\mirrors.tuna.tsinghua.edu.cn-df7c3c540f42cdbd\mongodb-2.1.0\src\client\options\mod.rs:1092:23
     |
1092 |       ) -> Result<Self> {
     |  _______________________^
1093 | |         let parser = ClientOptionsParser::parse(uri.as_ref())?;
1094 | |         let srv = parser.srv;
1095 | |         let auth_source_present = parser.auth_source.is_some();
...    |
1145 | |         Ok(options)
1146 | |     }
     | |_____^

thread 'rustc' panicked at 'Box<dyn Any>', /rustc/17d29dcdce9b9e838635eb0adefd9b8b1588410b\compiler\rustc_errors\src\lib.rs:1115:9
stack backtrace:
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.60.0-nightly (17d29dcdc 2022-01-21) running on x86_64-pc-windows-msvc
note: compiler flags: -C embed-bitcode=no -C debuginfo=2 --crate-type lib
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [optimized_mir] optimizing MIR for `client::options::<impl at C:\Users\BORBER\.cargo\registry\src\mirrors.tuna.tsinghua.edu.cn-df7c3c540f42cdbd\mongodb-2.1.0\src\client\options\mod.rs:973:1: 1261:2>::parse_uri::{closure#0}`
#1 [layout_of] computing layout of `[static generator@C:\Users\BORBER\.cargo\registry\src\mirrors.tuna.tsinghua.edu.cn-df7c3c540f42cdbd\mongodb-2.1.0\src\client\options\mod.rs:1092:23: 1146:6]`
#2 [layout_of] computing layout of `core::future::from_generator::GenFuture<[static generator@C:\Users\BORBER\.cargo\registry\src\mirrors.tuna.tsinghua.edu.cn-df7c3c540f42cdbd\mongodb-2.1.0\src\client\options\mod.rs:1092:23: 1146:6]>`
#3 [layout_of] computing layout of `impl core::future::future::Future<Output = [async output]>`
#4 [optimized_mir] optimizing MIR for `client::options::<impl at C:\Users\BORBER\.cargo\registry\src\mirrors.tuna.tsinghua.edu.cn-df7c3c540f42cdbd\mongodb-2.1.0\src\client\options\mod.rs:973:1: 1261:2>::parse_uri`
end of query stack
error: aborting due to previous error

error: could not compile `mongodb` due to previous error

Get element by '_id' deprecated?

I would like to know whether, in the current version 2.1, the possibility to get an element by id is deprecated.
I cannot find a hint in the documentation on how to get an element by the _id field.

match collection
        .find_one(doc! { "_id": mongodb::oid::ObjectId::with_uri_str(&id)  }, None)
        .await
        { 
           [...]
        }

(id here is a String)
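
For what it's worth, looking an element up by _id is not deprecated; the constructor shown above is not part of the bson 2.x API, though. A hedged sketch using ObjectId::parse_str (the find_by_id helper is hypothetical):

use mongodb::{
    bson::{doc, oid::ObjectId, Document},
    Collection,
};

async fn find_by_id(
    collection: &Collection<Document>,
    id: &str,
) -> Result<Option<Document>, Box<dyn std::error::Error>> {
    // Parse the hex string into an ObjectId, then match on the _id field.
    let object_id = ObjectId::parse_str(id)?;
    Ok(collection.find_one(doc! { "_id": object_id }, None).await?)
}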

Documentation: Updating example

I just had a lot of trouble finding how to update a document in my collection, which has a value that isn't primitive (a structure here in my example).

I was trying to set the "orders" field.
And I had to look for some time to find that you can use bson::to_bson to serialize the structure.

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Turtle {
    name: String,
    pub orders: Vec<Command>,
    pub infos: HashMap<String, String>,
    pos: Position,
    facing: Facing,
}


let result = turtles
        .update_one(
            doc! { "name": &name },
            doc! { "$set": { "orders": bson::to_bson(&orders).expect("Unable to convert orders to bson") } },
            None,
        )
        .await
        .expect("Unable to update orders of the turtle");

Could we add an example about bson::to_bson, and perhaps also about updating, to the README.md usage section?
I can do it and open a pull request if it helps 👍

No Error on DNS/network failure

I am getting started with this driver, however I had a difficult time trying to figure out why I was getting no results from a particular query. I realized later I wasn't connected to the VPN, so the connection should have failed. This is a minimal reproducible example:

use mongodb::{Client, Collection};
use mongodb::bson::{doc, Document};
use futures::stream::StreamExt;

#[tokio::main]
async fn main() {
    let client = Client::with_uri_str("mongodb://obvious_garbage").await.unwrap();
    let db = client.database("");
    let collection: Collection<Document> = db.collection("");
    for mut cursor in collection.find(doc!{}, None).await {
        let next = cursor.next().await.unwrap().unwrap();
        println!("{:?}", next);
    }
}

My expectation is that this code should panic, either at the creation of the client, or the unwrap() on the result, because there is no such database at mongodb://obvious_garbage. Instead this code simply runs silently and produces no output.

thread 'main' panicked at 'there is no timer running

Using mongodb = 1.2.2, the example code from the docs:

// get error
let options = ClientOptions::builder()
    .hosts(vec![StreamAddress {
      hostname: "localhost".into(),
      port: Some(27017),
    }])
    .build();
  let client = Client::with_options(options)?;
  let db = client.database("some_db");
  for coll_name in db.list_collection_names(None).await.unwrap() {
    println!("collection: {}", coll_name);
  }

got the error:

thread 'main' panicked at 'there is no timer running, must be called from the context of a Tokio 0.2.x runtime', /Users/xxx/.cargo/registry/src/mirrors.ustc.edu.cn-61ef6e0cd06fb9b8/tokio-0.2.25/src/time/driver/handle.rs:24:32
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

using mongodb = 0.9.2 is ok:

// works
  let client = Client::with_uri_str("mongodb://localhost:27017/").unwrap();
  let options = ClientOptions::builder()
    .hosts(vec![StreamAddress {
      hostname: "localhost".into(),
      port: Some(27017),
    }])
    .build();
  let client = Client::with_options(options)?;
  let db = client.database("main");
  for coll_name in db.list_collection_names(None).unwrap() {
    println!("collection: {}", coll_name);
  }

All with tokio = { version = "1.7.1", features = ["full","test-util"] }.

New

Versions/Environment

  1. What version of Rust are you using?
  2. What operating system are you using?
  3. What versions of the driver and its dependencies are you using? (Run
    cargo pkgid mongodb & cargo pkgid bson)
  4. What version of MongoDB are you using? (Check with the MongoDB shell using db.version())
  5. What is your MongoDB topology (standalone, replica set, sharded cluster, serverless)?

Describe the bug

A clear and concise description of what the bug is.

BE SPECIFIC:

  • What is the expected behavior and what is actually happening?
  • Do you have any particular output that demonstrates this problem?
  • Do you have any ideas on why this may be happening that could give us a
    clue in the right direction?
  • Did this issue arise out of nowhere, or after an update (of the driver,
    server, and/or Rust)?
  • Are there multiple ways of triggering this bug (perhaps more than one
    function produce a crash)?
  • If you know how to reproduce this bug, please include a code snippet here:

To Reproduce
Steps to reproduce the behavior:

  1. First, do this.
  2. Then do this.
  3. After doing that, do this.
  4. And then, finally, do this.
  5. Bug occurs.

BSON performance

Hello Everyone!
My team is currently writing a very traffic-heavy server, so our main goals are performance and security (which are Rust's lead perks). I was extremely happy with Rust's actix-web framework performance, before introducing Bson objects.
I've started reading about this issue and found those benchmarks, and also an alternative for document operations.
https://github.com/only-cliches/NoProto
I'm wondering if it's possible to replace BSON with NoProto Documents? They seem to have the same functionality, but noProto is around 160x faster for decodes, and 85x faster for updates of a single document.

I understand that Document functionality is one of the core MongoDB features, but using BSON for it is a major performance hit for the Rust driver. Changing it might raise the performance several times!

Thanks for your time and attention!

My bench results:

========= SIZE BENCHMARK =========
NoProto:     size: 308b, zlib: 198b
Flatbuffers: size: 264b, zlib: 181b
Bincode:     size: 163b, zlib: 129b
Postcard:    size: 128b, zlib: 119b
Protobuf:    size: 154b, zlib: 141b
MessagePack: size: 311b, zlib: 193b
JSON:        size: 439b, zlib: 184b
BSON:        size: 414b, zlib: 216b
Prost:       size: 154b, zlib: 142b
Avro:        size: 702b, zlib: 333b
Flexbuffers: size: 490b, zlib: 309b
Abomonation: size: 261b, zlib: 165b
Rkyv:        size: 180b, zlib: 152b
Raw BSON:    size: 414b, zlib: 216b
MessagePack: size: 296b, zlib: 187b
Serde JSON:  size: 446b, zlib: 198b

======== ENCODE BENCHMARK ========
NoProto:           739 ops/ms 1.00
Flatbuffers:      2710 ops/ms 3.66
Bincode:          9615 ops/ms 13.03
Postcard:         4505 ops/ms 6.10
Protobuf:         1484 ops/ms 2.01
MessagePack:       760 ops/ms 1.03
JSON:              700 ops/ms 0.95
BSON:              196 ops/ms 0.27
Prost:            1773 ops/ms 2.40
Avro:              235 ops/ms 0.32
Flexbuffers:       483 ops/ms 0.65
Abomonation:      5405 ops/ms 7.30
Rkyv:             3690 ops/ms 4.99
Raw BSON:          203 ops/ms 0.28
MessagePack:       284 ops/ms 0.39
Serde JSON:       1167 ops/ms 1.58

======== DECODE BENCHMARK ========
NoProto:          1085 ops/ms 1.00
Flatbuffers:     12821 ops/ms 11.81
Bincode:          6944 ops/ms 6.40
Postcard:         5682 ops/ms 5.22
Protobuf:         1727 ops/ms 1.59
MessagePack:       561 ops/ms 0.52
JSON:              564 ops/ms 0.52
BSON:              164 ops/ms 0.15
Prost:            2625 ops/ms 2.41
Avro:               72 ops/ms 0.07
Flexbuffers:       562 ops/ms 0.52
Abomonation:     83333 ops/ms 73.77
Rkyv:            58824 ops/ms 52.62
Raw BSON:          925 ops/ms 0.85
MessagePack:       376 ops/ms 0.35
Serde JSON:        377 ops/ms 0.35

====== DECODE ONE BENCHMARK ======
NoProto:         30303 ops/ms 1.00
Flatbuffers:    142857 ops/ms 4.24
Bincode:          7407 ops/ms 0.24
Postcard:         6289 ops/ms 0.21
Protobuf:         1751 ops/ms 0.06
MessagePack:       721 ops/ms 0.02
JSON:              714 ops/ms 0.02
BSON:              186 ops/ms 0.01
Prost:            2710 ops/ms 0.09
Avro:               83 ops/ms 0.00
Flexbuffers:     15385 ops/ms 0.50
Abomonation:    333333 ops/ms 10.65
Rkyv:           250000 ops/ms 7.14
Raw BSON:        15625 ops/ms 0.51
MessagePack:       404 ops/ms 0.01
Serde JSON:        375 ops/ms 0.01

====== UPDATE ONE BENCHMARK ======
NoProto:         11494 ops/ms 1.00
Flatbuffers:      2336 ops/ms 0.20
Bincode:          4367 ops/ms 0.38
Postcard:         2674 ops/ms 0.23
Protobuf:          706 ops/ms 0.06
MessagePack:       312 ops/ms 0.03
JSON:              525 ops/ms 0.05
BSON:              136 ops/ms 0.01
Prost:            1121 ops/ms 0.10
Avro:               54 ops/ms 0.00
Flexbuffers:       251 ops/ms 0.02
Abomonation:      5495 ops/ms 0.48
Rkyv:             3247 ops/ms 0.28
Raw BSON:          140 ops/ms 0.01
MessagePack:       215 ops/ms 0.02
Serde JSON:        289 ops/ms 0.03


//! | Format / Lib                                               | Encode  | Decode All | Decode 1 | Update 1 | Size (bytes) | Size (Zlib) |
//! |------------------------------------------------------------|---------|------------|----------|----------|--------------|-------------|
//! | **Runtime Libs**                                           |         |            |          |          |              |             |
//! | *NoProto*                                                  |         |            |          |          |              |             |
//! |        [no_proto](https://crates.io/crates/no_proto)       |     739 |       1085 |    30303 |    11494 |          308 |         198 |
//! | Apache Avro                                                |         |            |          |          |              |             |
//! |         [avro-rs](https://crates.io/crates/avro-rs)        |     235 |         72 |       83 |       54 |          702 |         333 |
//! | FlexBuffers                                                |         |            |          |          |              |             |
//! |     [flexbuffers](https://crates.io/crates/flexbuffers)    |     483 |        562 |    15385 |      251 |          490 |         309 |
//! | JSON                                                       |         |            |          |          |              |             |
//! |            [json](https://crates.io/crates/json)           |     700 |        564 |      714 |      525 |          439 |         184 |
//! |      [serde_json](https://crates.io/crates/serde_json)     |    1167 |        377 |      375 |      289 |          446 |         198 |
//! | BSON                                                       |         |            |          |          |              |             |
//! |            [bson](https://crates.io/crates/bson)           |     196 |        164 |      186 |      136 |          414 |         216 |
//! |         [rawbson](https://crates.io/crates/rawbson)        |     203 |        925 |    15625 |      140 |          414 |         216 |
//! | MessagePack                                                |         |            |          |          |              |             |
//! |             [rmp](https://crates.io/crates/rmp)            |     760 |        561 |      721 |      312 |          311 |         193 |
//! |  [messagepack-rs](https://crates.io/crates/messagepack-rs) |     284 |        376 |      404 |      215 |          296 |         187 |
//! | **Compiled Libs**                                          |         |            |          |          |              |             |
//! | Flatbuffers                                                |         |            |          |          |              |             |
//! |     [flatbuffers](https://crates.io/crates/flatbuffers)    |    2710 |      12821 |   142857 |     2336 |          264 |         181 |
//! | Bincode                                                    |         |            |          |          |              |             |
//! |         [bincode](https://crates.io/crates/bincode)        |    9615 |       6944 |     7407 |     4367 |          163 |         129 |
//! | Postcard                                                   |         |            |          |          |              |             |
//! |        [postcard](https://crates.io/crates/postcard)       |    4505 |       5682 |     6289 |     2674 |          128 |         119 |
//! | Protocol Buffers                                           |         |            |          |          |              |             |
//! |        [protobuf](https://crates.io/crates/protobuf)       |    1484 |       1727 |     1751 |      706 |          154 |         141 |
//! |           [prost](https://crates.io/crates/prost)          |    1773 |       2625 |     2710 |     1121 |          154 |         142 |
//! | Abomonation                                                |         |            |          |          |              |             |
//! |     [abomonation](https://crates.io/crates/abomonation)    |    5405 |      83333 |   333333 |     5495 |          261 |         165 |
//! | Rkyv                                                       |         |            |          |          |              |             |
//! |            [rkyv](https://crates.io/crates/rkyv)           |    3690 |      58824 |   250000 |     3247 |          180 |         152 |

RUST-1138 Missing bson-serde_with feature flag?

Right now, it seems we cannot use the serde_with::serde_as macro without adding the bson crate to Cargo.toml, even though the documentation mentioned it shouldn't be necessary:

Note that if you are using bson through the mongodb crate, you do not need to specify it in your Cargo.toml, since the mongodb crate already re-exports it.

I guess a bson-serde_with feature-flag should be added to the mongodb crate?

Add function for ping command

The ping command is useful to check if the database cluster is available.

While it can be easily checked with database.run_command(doc! {"ping": 1}, None).is_ok(), it would be nice to be able to simply call database.ping().is_ok().

SendError when reusing the Client

Hi, I am fairly new to the Rust language and just finished the book. Here I am trying to wrap the mongodb client for later usage, but invoking the wrapped client panics the application. My Cargo.toml is:

[dependencies]
tokio = "1.13.0"
bson = "2.0.1"
mongodb = "2.0.1"
futures = "0.3"
serde = { version = "1.0", features = ["derive"] }

And my code looks like:

use futures::stream::TryStreamExt;
use mongodb::{
    bson::doc,
    options::{ClientOptions, FindOptions},
    Client, Database,
};
use serde::{Deserialize, Serialize};

// This is the wrapper around the mongo Client
struct Backend {
    client: Client,
}

// This is the document type
#[derive(Debug, Serialize, Deserialize)]
struct Data {
    date: String,
}

impl Backend {
    #[tokio::main]
    pub async fn test(&self) {
        let mut cursor = self
            .client
            .database("a")
            .collection::<Data>("date")
            .find(doc! { "date" : "2021-01-01" }, None)
            .await
            .expect("find error");
        while let Some(s) = cursor.try_next().await.unwrap() {
            println!("{:#?}", s)
        }
    }

    #[tokio::main]
    pub async fn new(url: &str) -> Result<Backend, mongodb::error::Error> {
        let client_options = ClientOptions::parse(url).await?;
        let client = Client::with_options(client_options)?;
        client
            .database("admin")
            .run_command(doc! {"ping": 1}, None)
            .await?;
        let b = Backend { client: client };
        let mut cursor = b
            .client
            .database("a")
            .collection::<Data>("date")
            .find(doc! { "date" : "2021-01-01" }, None)
            .await
            .expect("find error");
        while let Some(s) = cursor.try_next().await.unwrap() {
            println!("{:#?}", s)
        }
        Ok(b)
    }
}

fn main() {
    let b = Backend::new("mongodb://localhost:3306").expect("cannot connect to mongo backend.");
    b.test();
}

What I am trying to do here is to create a wrapper around the client, and to reuse that wrapper in multiple places. The wrapper is created in the async fn new function, which moves the mongo client into the wrapper struct. In this function a find command is also executed to make sure the client is working.

Then when I called the test method on the wrapper, which is supposed to dispatch the same find command using the same client, the application panics with the following message:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: SendError(Sender { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) })', ~/.cargo/registry/src/rsproxy.cn-8f6827c7555bfaf8/mongodb-2.0.1/src/cmap/connection_requester.rs:43:34

I have no idea why the same find command works in the new function but not in the test function. I suspect I made some silly mistake with the async runtime, but the error message does not point me anywhere and searching it yields few results. I have spent hours but still can't figure it out.

Would appreciate your help!
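
A likely cause (an assumption, not confirmed in this thread): each #[tokio::main] attribute builds its own runtime and shuts it down when the annotated function returns, so the Client created inside new is tied to a runtime that is already gone by the time test runs. A sketch of a restructuring that keeps a single runtime, reusing the imports and struct definitions above (the URI is a placeholder):

impl Backend {
    pub async fn new(url: &str) -> Result<Backend, mongodb::error::Error> {
        let client_options = ClientOptions::parse(url).await?;
        let client = Client::with_options(client_options)?;
        Ok(Backend { client })
    }

    pub async fn test(&self) -> Result<(), mongodb::error::Error> {
        let mut cursor = self
            .client
            .database("a")
            .collection::<Data>("date")
            .find(doc! { "date": "2021-01-01" }, None)
            .await?;
        while let Some(s) = cursor.try_next().await? {
            println!("{:#?}", s);
        }
        Ok(())
    }
}

// A single #[tokio::main] on main keeps one runtime alive for both calls.
#[tokio::main]
async fn main() -> Result<(), mongodb::error::Error> {
    let b = Backend::new("mongodb://localhost:27017").await?;
    b.test().await?;
    Ok(())
}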

Unable to stop `try_for_each_concurrent`

Hello!

I'm trying to create a gRPC endpoint that streams results from a Cursor. To achieve this I'm using the .map_ok and try_for_each_concurrent methods. I'm using the try version of for-each because I would like to stop the loop if any error occurs.

I'm experiencing an issue with trying to stop try_for_each_concurrent because it expects me to return mongodb::error::Error and I'm unable to create it.

How do I create a mongodb error?

here is a snippet of code :-)

let mongo_db_collection_stream = mongo_db_collection_stream.map_ok(|doc| {
    let parse_result: Result<Asset, _> = from_bson(Bson::Document(doc));
    parse_result
});

mongo_db_collection_stream.try_for_each_concurrent(None, |vs_or_err| {
    let mut tx_copy = tx.clone();

    tokio::spawn(async move {
        tx_copy.send(vs_or_err.clone()).await.unwrap();
    });

    async {
        match vs_or_err {
            Ok(v) => Ok(()),
            Err(v) => Err((v)),
        }
    }
}).await;

output from console

error[E0271]: type mismatch resolving `<impl futures::Future as futures::Future>::Output == Result<(), mongodb::error::Error>`
   --> file.rs:101:9
    |
101 | /         mongo_db_collection_stream.try_for_each_concurrent(None, |vs_or_err| {
102 | |             let mut tx_copy = tx.clone();
103 | |
104 | |             tokio::spawn(async move {
...   |
113 | |             }
114 | |         }).await;
    | |________________^ expected struct `mongodb::error::Error`, found enum `mongodb::bson::de::Error`

I created StackOverflow question as well https://stackoverflow.com/questions/68778216/unable-to-stop-try-for-each-concurrent-for-mongodb-client

EDIT

I forgot to use the .into() function:

mongo_db_collection_stream.try_for_each_concurrent(None, |vs_or_err| async {
            let mut tx_copy = tx.clone();
            tx_copy.send(vs_or_err);
            match vs_or_err {
                Err(e) => Err(e.into()),
                Ok(_) => Ok(())
            }
        }).await;

RUST-1120 bson's is_human_readable not configurable from mongodb side

Hey, we are trying to save a struct with ipnet::Ipv4Net and the serializers are using the is_human_readable serde option to distinguish between different cases.
When inserting the struct, it serializes with is_human_readable false (this issue is referenced in bson's 2.1.0-beta changelog) but when using find commands, the deserializer is set with is_human_readable true by default.
As a result, we can't save and then use the struct in mongo.

In bson 2.1.0-beta they added options to set the is_human_readable variable.
If there were a way to set it so that the deserializer uses is_human_readable = false for find commands, we would be able to solve this issue :)

Example below

use ipnet::Ipv4Net;
use mongodb::error::Result as MongoResult;
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct Subnets {
    pub private: Ipv4Net,
    pub public: Ipv4Net,
}

impl Subnets {
    pub async fn init(subnets: Vec<Subnets>) -> MongoResult<mongodb::Cursor<Subnets>> {
        let client = mongodb::Client::with_uri_str("mongodb://localhost:27017")
            .await
            .expect("Failed to initialize mongodb client");
        let db = client.database("ipnet");
        let coll = db.collection::<Subnets>("subnets");

        coll.insert_many(subnets, None).await?; // Will insert the address like Array["10","0","0","1","16"]
        coll.find(None, None).await // Tries to find the address like "10.0.0.1/16"
    }
}

Migrations or Schema Management Recommendations?

I've found plenty of ORMs with mongo support that have migrations, and I've dealt with migrations in SQL before.
Does mongo have a recommended methodology for dealing with changing object schemas? Searching hasn't brought up many good, or modern, examples of dealing with varying objects. The best methodology I have found has been a just-in-time style, where when a piece of data is written or read, the "schema version" is tested, and if it is out of date the application updates it. But I could see that getting exceptionally messy regardless of how well it was implemented.

Does anyone have any ideas or know how they could share in this area?

WriteConflict error

While using the method "find_one_and_update_with_session", I got an error with message "Command failed (WriteConflict): WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.".
This often happens when I run stress tests with concurrency. The version of mongo-rust-driver is "2.0.2". The version of the mongodb server is "4.4.4".

Use Arc::clone in Collection::clone_with_type

Current clone_with_type implementation includes calling builder and constructing a new CollectionInner instance.
Given that CollectionInner is wrapped in an Arc, and generic T isn't included in CollectionInner, is there any possibility that we refactor clone_with_type into Self{inner: self.inner.clone(), _phantom: Default::default()}?

DBRefs support?

Hi,

I use Mongoose with Node, where populate supports DBRefs; how should I do the same thing with Rust?

Thanks a lot.

Can't connect to a mongo cluster whose address is behind a haproxy

Reproducing code:

use mongodb::sync::Client;
use bson::{Document};

fn main() {
    let cli = Client::with_uri_str("mongodb://xxx:[email protected]:37017").unwrap();
    let db = cli.database("local");
    let coll = db.collection::<Document>("oplog.rs");
    println!("{:?}", coll.find_one(None, None).unwrap());
}

When I try to find, it returns an error:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: ServerSelection { message: "Server selection timeout: No available servers. Topology: { Type: Unknown, Servers: [ { Address: 192.168.10.11:37017, Type: Unknown, Error: unexpected end of file }, ] }" }, labels: {} }', src/bin/test.rs:8:48

Using pymongo or the mongo shell doesn't have this problem (screenshot omitted).

The address in the mongodb URI is a haproxy in front of the real mongodb address. When the IP is not behind the haproxy, the code works fine.

Database version: 3.2.22

It also seems that the expected topology type is single (screenshot omitted).

Invalid server selection timeout

I have code that connects to a Mongo database using Rust 1.55. The Mongo connection function works as expected when launched from a simple Rust command line application. However, when the Mongo connection function is invoked by a tide http handler, I receive this error:

Server selection timeout: No available servers. Topology: { Type: ReplicaSetNoPrimary, Servers: [ { Address: host1.com:27017, Type: Unknown }, { Address: host2.com:27017, Type: Unknown }, { Address: host3.com27017, Type: Unknown }, ]

I have validated that the same connection URI string does not produce the error when run outside of a tide http handler but does produce it when running inside the tide http handler. I am wondering if it could be related to the use of the async-std runtime. I do not know how to debug this any further, but if given guidance I will try to assist in troubleshooting.

For reference these are the relevant dependencies in my Cargo.toml file

mongodb = { version = "2.0.1", default-features = false, features = ["async-std-runtime"]}
bson = "2.0.0"
tide = { version = "0.16.0" }
async-std = { version = "1.10.0", features = ["attributes"] }

cannot connect to mongo atlas from aws instance

I am following this doc to connect to a mongo atlas db from an AWS instance.

dependencies

[dependencies]
tokio = { version = "1.11.0", features = ["full"] }
mongodb = { version = "2.0.0", features = ["aws-auth"] }

code

let mut client_options = ClientOptions::parse("mongodb+srv://<username>:<password>@<cluster-url>/test?w=majority")
.await?;

error:

Error: Error { kind: ServerSelection { message: "Server selection timeout: No available servers. Topology: { Type: ReplicaSetNoPrimary, Servers: [ { Address: xxx.vzycd.mongodb.net:27017, Type: Unknown, Error: unexpected end of file }, { Address: xxx.vzycd.mongodb.net:27017, Type: Unknown, Error: unexpected end of file }, { Address: xxx.vzycd.mongodb.net:27017, Type: Unknown, Error: unexpected end of file }, ] }" }, labels: {} }

I tried both the SRV and non-SRV URIs; neither of them works. However, I was able to use the URI to connect via python or golang code.

  • python: both works
  • golang: non-SRV works, SRV doesn't work

Can someone give me a hint about the root cause and solution?

Error when using credentials: Server selection timeout: No available servers

let mut client_options =
            ClientOptions::parse(format!("mongodb://{}:{}", prop.host, prop.port))
                .await
                .unwrap();
let credential: Credential = Credential::builder()
    .username(prop.username)
    .password(prop.password)
    .build();
client_options.credential = Some(credential);
let client = Client::with_options(client_options).unwrap();
for db_name in client.list_database_names(None, None).await.unwrap() {
      println!("{}", db_name);
}

Concurrent session calls

As per my issue mentioned here:
https://stackoverflow.com/questions/69476422/try-join-to-make-mongodb-transactions-sent-at-the-same-time

I would like to make the concurrent transaction as in MongoDB Node.js driver, but looks like the Rust driver has some limitations in session and transaction functionality. DB.update_many is not a solution when you want to use different collections concurrently.

Wondering if it could be fixed or if there are different approaches.
Thanks!

Update remove fields

Hey guys.

I'm trying to update my document; if I don't pass a field, it gets deleted.

my document is

{
   "_id":"27fc47a4-0730-446c-8acd-41aa6e227406",
   "user_id":"a07c8c2f-e83a-47f7-80dc-a18407f997e1",
   "pet":{
      "name":"Hello",
      "bio":"Hello",
      "gender":"Male",
      "can_live_with_other_cats":true,
      "can_live_with_other_dogs":true
   },
   "status":"Pending",
   "created_at":{
      "$date":"2021-11-06T22:30:41.977Z"
   }
}

I tried to update with

"pet":{
      "name":"Hello",
      "bio":"Hello",
      "gender":"Male",
      "can_live_with_other_cats":true,
   },
   "status":"Pending",
   "created_at":{
      "$date":"2021-11-06T22:30:41.977Z"
   }

It is deleting the "can_live_with_other_dogs": true field.
How can I update without deleting a field?

let update = doc!{"$set": {"pet": doc} };
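
One possible approach (a sketch, not an official recommendation): use dot notation inside $set so that only the listed subfields are modified and the rest of the embedded pet document is left untouched. The values are just the ones from the example above.

use mongodb::bson::{doc, Document};

// Builds a $set update that touches individual subfields of `pet`
// without replacing the whole embedded document.
fn pet_update() -> Document {
    doc! {
        "$set": {
            "pet.name": "Hello",
            "pet.bio": "Hello",
            "pet.can_live_with_other_cats": true,
            "status": "Pending",
        }
    }
}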

`aggregate_with_session` fails when collection size > batchSize

Overview

Instantiate a MongoDB DB with a collection of N documents. Run a simple aggregation against that collection with a batchSize that is lower than N and that's not a multiple of N.

Examples:

  • N = 100, batchSize = 40
  • N = 102, batchSize = 101 (default value)

For all those cases, cursor.next(&mut session) will return the following error:

Error: Command failed (CursorNotFound): cursor id XXXXXXXXX not found)

Reproduction

Below is a minimal reproduction example:

use anyhow::Result;
use mongodb::{
    bson::{doc, Document},
    options::{AggregateOptions, ClientOptions},
    Client, Collection,
};
use tokio;

#[tokio::main]
async fn main() -> Result<()> {
    let client_options = ClientOptions::parse("MONGO_URL").await?;
    let client = Client::with_options(client_options)?;
    let mut session = client.start_session(None).await?;
    let db = client.database("DATABASE_NAME");
    let coll: Collection<Document> = db.collection("COLLECTION_NAME");
    // Default batchSize is 101. Have at least 102 documents for the error to happen
    // or set a `batchSize` according to the rules above depending on the number of documents in your collection
    let opts = AggregateOptions::builder().build();
    let mut res = coll
        .aggregate_with_session(
            vec![doc! {
                "$project": { "_id": 1 }
            }],
            opts,
            &mut session,
        )
        .await?;
    let mut docs = vec![];

    while let Some(result) = res.next(&mut session).await {
        match result {
            Ok(document) => docs.push(document),
            Err(e) => return Err(e.into()),
        }
    }

    println!("Number of documents: {}", docs.len());

    Ok(())
}

Poor performance in benchmark

I introduced Axum, one of the most promising Rust web frameworks, to the TechEmpower benchmark with tests against PostgreSQL (using sqlx) and MongoDB (using the official driver). I was expecting the performance to be similar, but it is 100 times worse for MongoDB, so I am not sure what I am doing wrong.

https://www.techempower.com/benchmarks/#section=test&runid=939169a8-6a56-44ae-9e50-0e714753c445&hw=ph&test=db&a=2

axum [sqlx]    | 101,844 requests
axum [mongodb] |     736 requests

There are other benchmarks (in Go and other languages) that have reasonable performance, so I do not think there is something wrong with the MongoDb setup.

The code I am using is at https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Rust/axum
More specifically, the main is this one: https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/frameworks/Rust/axum/src/main_mongo.rs

How can I add document fields conditionally?

Hello!

I am building a website using rust and mongodb.

I have a problem: how can I add document fields conditionally?

I can solve this problem when using JavaScript:

        let changedFields = {};
        const { name, about } = JSON.parse(body);
        if (typeof (name) != "undefined") {
            changedField.name = name;
        }
        if (typeof (about) != "undefined") {
            changedField.about = about;
        }
        
         await db.collection("exercises").updateOne({ _id: new ObjectId(bookId) }, {
            $set: {
                ...changedFields,
                updated_at: new Date()
            }
        });

In the Rust case:

let name = Some("mongodb");
let about = None;

let update_doc=update! {
    "name": name,
    "about": about,
}

The about field will not be included in update_doc, hence it will not be updated.

But I have no idea how to finish this partial update.

Can anyone help me?
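
One possible sketch, mirroring the JavaScript approach above: build the $set document imperatively and insert only the fields that are present (build_update is a hypothetical helper; name and about stand in for the parsed request fields):

use mongodb::bson::{doc, DateTime, Document};

fn build_update(name: Option<&str>, about: Option<&str>) -> Document {
    let mut changed_fields = Document::new();
    // Only fields that were actually provided end up in the update.
    if let Some(name) = name {
        changed_fields.insert("name", name);
    }
    if let Some(about) = about {
        changed_fields.insert("about", about);
    }
    changed_fields.insert("updated_at", DateTime::now());
    doc! { "$set": changed_fields }
}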

RUST-488 Allow sync and async to coexist

If you use a crate that uses the sync API and another crate that uses the async API you won't be able to compile. However, the async part is always compiled even when the sync feature is active, it's just not exposed as public.

This seems like an artificial and unnecessary limitation. It should be easy to expose both APIs.

Also, despite the docs saying it's not possible, you can in fact enable both sync and async-std-runtime as features without compiler errors. Yet, in this case only the sync API will work. Having sync and tokio-runtime enabled does what the docs say and throws an error, but very late, so it's hard to detect.

Can't connect to a mongo cluster (by IPs) whose address is behind a VPN

Hi,

I'm trying to connect to a replica set by IPs which is behind VPN.

    let client = Client::with_uri_str("mongodb://name:[email protected]:27017,xx.xx.xx.xxx:27017,xx.xx.xx.xxx:27017/?tlsallowinvalidcertificates=true&replicaSet=repl-set-name").expect("db client created");

    for db_name in client
        .list_database_names(None, None)
        .expect("database names fetched")
    {
        println!("{}", db_name);
    }

I receive the following error:

Error { kind: ServerSelection { message: "Server selection timeout: No available servers. Topology: { Type: ReplicaSetNoPrimary, Servers: [ { Address: xx.xx.xx.xxx:27017, Type: Unknown, Error: An error occurred during DNS resolution: InvalidDNSNameError }, { Address: xx.xx.xx.xxx:27017, Type: Unknown, Error: An error occurred during DNS resolution: InvalidDNSNameError }, { Address: xx.xx.xx.xxx:27017, Type: Unknown, Error: An error occurred during DNS resolution: InvalidDNSNameError }, ] }" }, labels: {} }

mongo-rust-driver version: 2.0.0
MongoDB version: 3.6.20

Using the mongo shell with the following invocation I can successfully connect (a self-signed cert is used):

mongo --sslAllowInvalidHostnames --ssl --authenticationDatabase 'admin' --host repl-set-name/xx.xx.xx.xxx:27017,xx.xx.xx.xxx:27017,xx.xx.xx.xxx:27017 -u user -p passwd

What am I doing wrong?

Sync reads and async writes

Thank you for providing a Rust driver for MongoDB!

Since the async API is privatized when you enable the sync feature, how would I go about making async writes? I'd really like to make sync reads, but since the async API is limited in that case, what would I do in my scenario?

Thank you so much in advance :)
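
For reference, a minimal sketch of one possible setup, assuming a 2.x driver where enabling the sync feature still leaves the async API available at the crate root, with tokio as an explicit dependency and placeholder URI/database/collection names: keep blocking reads on the sync client and drive async writes on an explicitly created tokio runtime.

use mongodb::{
    bson::{doc, Document},
    sync,
};

fn main() -> mongodb::error::Result<()> {
    // Blocking read through the sync API.
    let sync_client = sync::Client::with_uri_str("mongodb://localhost:27017")?;
    let read_coll = sync_client
        .database("test_db")
        .collection::<Document>("test_coll");
    let first = read_coll.find_one(doc! {}, None)?;
    println!("sync read: {:?}", first);

    // Async write, driven by an explicitly created tokio runtime.
    let rt = tokio::runtime::Runtime::new().expect("failed to build runtime");
    rt.block_on(async {
        let async_client = mongodb::Client::with_uri_str("mongodb://localhost:27017").await?;
        let write_coll = async_client
            .database("test_db")
            .collection::<Document>("test_coll");
        write_coll
            .insert_one(doc! { "written": "asynchronously" }, None)
            .await?;
        Ok::<(), mongodb::error::Error>(())
    })?;

    Ok(())
}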

Cursor iterator returns Error with InvalidResponse "invalid type: unit value, expected a string"

use mongodb::bson::Document;
use mongodb::{bson::doc, sync::Client};
use serde::{Deserialize, Serialize};
use serde_json::Value;

#[derive(Debug, Serialize, Deserialize)]
struct ASNEntry {
    asn: i32,
    mode: String,
    name: String,
    description: String,
    prefixes: Vec<PrefixEntry>,
}

#[derive(Debug, Serialize, Deserialize)]
struct PrefixEntry {
    prefix: String,
    name: String,
    countryCode: Option<String>,
    description: String,
}

fn main() {
    let client = Client::with_uri_str("mongodb://mongodb.intern.ninjahub.net:27017").unwrap();
    let database = client.database("censored");
    let collection = database.collection::<ASNEntry>("censored");
    let cursor = collection.find(doc! {}, None).unwrap();
    for result in cursor {
        dbg!(result);
    }
}

When using the integrated deserialization feature, the 102nd document returns this:

[src/main.rs:29] result = Err(
    Error {
        kind: InvalidResponse {
            message: "invalid type: unit value, expected a string",
        },
        labels: {},
    },
)
[src/main.rs:29] result = Err(
    Error {
        kind: Command(
            CommandError {
                code: 43,
                code_name: "CursorNotFound",
                message: "cursor id 7287796362393293625 not found",
            },
        ),
        labels: {},
    },
)
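
One likely explanation (an assumption, since the offending document isn't shown) is that the 102nd document contains a null or missing value for a field the struct declares as a plain String; BSON null deserializes as a unit value, which produces exactly "invalid type: unit value, expected a string". A sketch of a more tolerant definition, where the choice of which fields to make optional is a guess and would depend on the actual data:

use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct ASNEntry {
    asn: i32,
    mode: String,
    name: String,
    // Fields that may be null or absent in some documents become Options.
    description: Option<String>,
    #[serde(default)]
    prefixes: Vec<PrefixEntry>,
}

#[derive(Debug, Serialize, Deserialize)]
struct PrefixEntry {
    prefix: String,
    name: Option<String>,
    // Explicit rename instead of a non-snake-case field identifier.
    #[serde(rename = "countryCode")]
    country_code: Option<String>,
    description: Option<String>,
}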

No longer installs, dependency version pin was yanked

The driver no longer downloads and compiles when added as a dependency, starting today. A crate version that is pinned for [email protected] is a very old version, which itself has a dependency on a crate version that no longer exists. When adding it to a project, this is the result.

Execution failed (exit code 101).
/home/dallin/.cargo/bin/cargo metadata --verbose --format-version 1 --all-features
stdout :     Updating crates.io index
error: failed to select a version for the requirement `crypto-mac = "^0.7"`
candidate versions found which didn't match: 0.11.1, 0.11.0, 0.10.1, ...
location searched: crates.io index
required by package `hmac v0.7.1`
    ... which is depended on by `mongodb v1.2.2`

Here is the screenshot of [email protected] (image not reproduced here).

I will add, for whoever comes after me looking: switching to the latest beta release (for me, mongodb = "2.0.0-beta.3") solves the issue, as the dependency versions are upgraded there.

TimeseriesOptions is impossible to use

I was trying to create a timeseries collection via the driver and found that it's not possible, since TimeseriesOptions is marked as non-exhaustive, so there's no way to instantiate that struct outside the mongodb crate. Please either add derive(Default) or provide a function to instantiate the TimeseriesOptions struct.
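
For illustration, a sketch of what usage could look like if TimeseriesOptions gained a builder() like the driver's other options types; the exact builder API, the setter shapes, and the database/collection names here are assumptions, not the current crate API.

use mongodb::{
    options::{CreateCollectionOptions, TimeseriesGranularity, TimeseriesOptions},
    Client,
};

// Assumes TimeseriesOptions::builder() exists, mirroring CreateCollectionOptions::builder().
async fn create_ts_collection(client: &Client) -> mongodb::error::Result<()> {
    let ts_options = TimeseriesOptions::builder()
        .time_field("timestamp".to_string())
        .meta_field("sensor_id".to_string())
        .granularity(TimeseriesGranularity::Seconds)
        .build();

    let options = CreateCollectionOptions::builder()
        .timeseries(ts_options)
        .build();

    client
        .database("metrics")
        .create_collection("readings", options)
        .await
}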

Multithreaded access to DB handle panics

I'm trying to share a connection handle across threads. It will usually (not always) panic eventually. Here's a sample application where somewhere between 0 and 9 of the 10 tests will fail:

# Cargo.toml
[package]
name = "mongodb_test"
version = "0.1.0"
edition = "2018"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
futures = "=0.3.17"
lazy_static = "=1.4.0"
mongodb = "=2.0.1"
tokio = "=1.12.0"

//! main.rs
use std::sync::Arc;

use futures::executor::block_on;
use lazy_static::lazy_static;
use mongodb::{
    bson::{doc, Document},
    error::Error,
    options::ClientOptions,
    results::InsertOneResult,
    Client, Collection,
};

lazy_static! {
    static ref COLL: Arc<Collection<Document>> = Arc::new(collection());
}

fn main() {
    println!("Hello, world!");
}

fn collection() -> Collection<Document> {
    let mongo_db_client_options =
        block_on(ClientOptions::parse("mongodb://localhost:27017")).unwrap();
    let client = Client::with_options(mongo_db_client_options).unwrap();
    let database = client.database("test_database");
    database.collection("test_collection")
}

async fn write_data() -> Result<InsertOneResult, Error> {
    COLL.clone().insert_one(doc! { "test": "test" }, None).await
}

#[cfg(test)]
mod tests {
    use super::write_data;

    #[tokio::test]
    async fn insert_1() {
        assert!(write_data().await.is_ok());
    }

    #[tokio::test]
    async fn insert_2() {
        assert!(write_data().await.is_ok());
    }

    #[tokio::test]
    async fn insert_3() {
        assert!(write_data().await.is_ok());
    }

    #[tokio::test]
    async fn insert_4() {
        assert!(write_data().await.is_ok());
    }

    #[tokio::test]
    async fn insert_5() {
        assert!(write_data().await.is_ok());
    }

    #[tokio::test]
    async fn insert_6() {
        assert!(write_data().await.is_ok());
    }

    #[tokio::test]
    async fn insert_7() {
        assert!(write_data().await.is_ok());
    }

    #[tokio::test]
    async fn insert_8() {
        assert!(write_data().await.is_ok());
    }

    #[tokio::test]
    async fn insert_9() {
        assert!(write_data().await.is_ok());
    }

    #[tokio::test]
    async fn insert_10() {
        assert!(write_data().await.is_ok());
    }
}

And one example output:

cargo test
   Compiling mongodb_test v0.1.0 (/Users/mharkins/projects/mongodb_test)
    Finished test [unoptimized + debuginfo] target(s) in 3.15s
     Running unittests (target/debug/deps/mongodb_test-8331f298acdfcc6f)

running 10 tests
test tests::insert_6 ... ok
test tests::insert_3 ... ok
test tests::insert_8 ... ok
test tests::insert_7 ... ok
test tests::insert_4 ... ok
test tests::insert_9 ... FAILED
test tests::insert_2 ... FAILED
test tests::insert_10 ... FAILED
test tests::insert_1 ... FAILED
test tests::insert_5 ... FAILED

failures:

---- tests::insert_9 stdout ----
thread 'tests::insert_9' panicked at 'called `Result::unwrap()` on an `Err` value: JoinError::Cancelled', /Users/mharkins/.cargo/registry/src/github.com-1ecc6299db9ec823/mongodb-2.0.1/src/runtime/join_handle.rs:34:90

---- tests::insert_2 stdout ----
thread 'tests::insert_2' panicked at 'called `Result::unwrap()` on an `Err` value: RecvError(())', /Users/mharkins/.cargo/registry/src/github.com-1ecc6299db9ec823/mongodb-2.0.1/src/cmap/connection_requester.rs:47:24
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

---- tests::insert_10 stdout ----
thread 'tests::insert_10' panicked at 'called `Result::unwrap()` on an `Err` value: JoinError::Cancelled', /Users/mharkins/.cargo/registry/src/github.com-1ecc6299db9ec823/mongodb-2.0.1/src/runtime/join_handle.rs:34:90

---- tests::insert_1 stdout ----
thread 'tests::insert_1' panicked at 'called `Result::unwrap()` on an `Err` value: RecvError(())', /Users/mharkins/.cargo/registry/src/github.com-1ecc6299db9ec823/mongodb-2.0.1/src/cmap/connection_requester.rs:47:24

---- tests::insert_5 stdout ----
thread 'tests::insert_5' panicked at 'assertion failed: write_data().await.is_ok()', src/main.rs:52:9


failures:
    tests::insert_1
    tests::insert_10
    tests::insert_2
    tests::insert_5
    tests::insert_9

test result: FAILED. 5 passed; 5 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.59s

error: test failed, to rerun pass '--bin mongodb_test'

Is there some other way I should be making the connection available to all tests/threads?
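
One plausible reading of the panic messages (an assumption on my part) is that each #[tokio::test] creates its own runtime, while the lazily initialized Client spawns its background tasks on whichever test's runtime touches it first; when that runtime shuts down, the tasks are cancelled and the remaining tests see JoinError::Cancelled or RecvError. Under that assumption, a sketch of one workaround is to keep a single shared tokio runtime, create the Client on it, and run every operation through it (names and URI are placeholders):

use lazy_static::lazy_static;
use mongodb::{
    bson::{doc, Document},
    Client, Collection,
};
use tokio::runtime::Runtime;

lazy_static! {
    // One runtime shared by all tests; the client and its background tasks live
    // on this runtime, so no individual test tears it down.
    static ref RT: Runtime = Runtime::new().expect("failed to build runtime");
    static ref COLL: Collection<Document> = RT.block_on(async {
        let client = Client::with_uri_str("mongodb://localhost:27017")
            .await
            .expect("failed to create client");
        client.database("test_database").collection("test_collection")
    });
}

#[test]
fn insert_shared_runtime() {
    let result = RT.block_on(COLL.insert_one(doc! { "test": "test" }, None));
    assert!(result.is_ok());
}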
