
deadpool's Introduction

Deadpool (Rust 1.75+, unsafe code forbidden)

Deadpool is a dead simple async pool for connections and objects of any type.

This crate provides two implementations:

  • Managed pool (deadpool::managed::Pool)

    • Creates and recycles objects as needed
    • Useful for database connection pools
    • Enabled via the managed feature in your Cargo.toml
  • Unmanaged pool (deadpool::unmanaged::Pool)

    • Objects are created by the user and added to the pool manually; it is also possible to create a pool from an existing collection of objects
    • Enabled via the unmanaged feature in your Cargo.toml

Features

| Feature | Description | Extra dependencies | Default |
| --- | --- | --- | --- |
| managed | Enable managed pool implementation | - | yes |
| unmanaged | Enable unmanaged pool implementation | - | yes |
| rt_tokio_1 | Enable support for tokio crate | tokio/time | no |
| rt_async-std_1 | Enable support for async-std crate | async-std | no |
| serde | Enable support for deserializing pool config | serde/derive | no |

The runtime features (rt_*) are only needed if you need support for timeouts. If you try to use timeouts without specifying a runtime at pool creation, the pool's get methods will return a PoolError::NoRuntimeSpecified error.
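For illustration, here is a minimal sketch of opting into a runtime so that timeouts work. It assumes the rt_tokio_1 feature and the runtime/wait_timeout builder methods of recent deadpool versions, and reuses the Manager type from the example below:

use std::time::Duration;
use deadpool::managed::Pool;
use deadpool::Runtime;

fn build_pool(mgr: Manager) -> Pool<Manager> {
    Pool::builder(mgr)
        // Without a runtime, configuring a timeout would make the
        // get methods fail with PoolError::NoRuntimeSpecified.
        .runtime(Runtime::Tokio1)
        .wait_timeout(Some(Duration::from_secs(5)))
        .build()
        .unwrap()
}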

Managed pool (aka. connection pool)

This is the obvious choice for connection pools of any kind. Deadpool already comes with a couple of database connection pools which work out of the box.

Example

use deadpool::managed;

#[derive(Debug)]
enum Error { Fail }

struct Computer {}

impl Computer {
    async fn get_answer(&self) -> i32 {
        42
    }
}

struct Manager {}

impl managed::Manager for Manager {
    type Type = Computer;
    type Error = Error;
    
    async fn create(&self) -> Result<Computer, Error> {
        Ok(Computer {})
    }
    
    async fn recycle(&self, _: &mut Computer, _: &managed::Metrics) -> managed::RecycleResult<Error> {
        Ok(())
    }
}

type Pool = managed::Pool<Manager>;

#[tokio::main]
async fn main() {
    let mgr = Manager {};
    let pool = Pool::builder(mgr).build().unwrap();
    let mut conn = pool.get().await.unwrap();
    let answer = conn.get_answer().await;
    assert_eq!(answer, 42);
}

Database connection pools

Deadpool supports various database backends by implementing the deadpool::managed::Manager trait. The following backends are currently supported:

| Backend | Crate |
| --- | --- |
| bolt-client | deadpool-bolt |
| tokio-postgres | deadpool-postgres |
| lapin (AMQP) | deadpool-lapin |
| redis | deadpool-redis |
| async-memcached | deadpool-memcached |
| rusqlite | deadpool-sqlite |
| diesel | deadpool-diesel |
| tiberius | deadpool-tiberius |
| r2d2 | deadpool-r2d2 |
| rbatis | rbatis |

Reasons for yet another connection pool

Deadpool is by no means the only pool implementation available. It does things a little differently, and that is the main reason it exists:

  • Deadpool is compatible with any executor. Objects are returned to the pool using the Drop trait. The health of those objects is checked upon next retrieval and not when they are returned. Deadpool never performs any actions in the background. This is the reason why deadpool does not need to spawn futures and does not rely on a background thread or task of any type.

  • Identical startup and runtime behaviour. When writing a long-running application, there usually should be no difference between startup and runtime if a database connection is temporarily unavailable. Nobody would expect an application to crash if the database becomes unavailable at runtime, so it should not crash on startup either. Creating the pool never fails and errors are only ever returned when calling Pool::get().

    If you really want your application to crash on startup when objects cannot be created, simply call pool.get().await.expect("DB connection failed") right after creating the pool.

  • Deadpool is fast. Locking primitives are held for the shortest duration possible. When returning an object to the pool a single Mutex is locked, and when retrieving objects from the pool a Semaphore is used to keep that Mutex as uncontended as possible.

  • Deadpool is simple. Dead simple. There is very little API surface. The actual code is barely 100 lines of code and lives in the two functions Pool::get and Object::drop.

  • Deadpool is extensible. By using the post_create, pre_recycle and post_recycle hooks you can customize object creation and recycling to fit your needs (see the sketch after this list).

  • Deadpool provides insights. All objects track Metrics and the pool provides a status method that can be used to find out details about the inner workings.

  • Deadpool is resizable. You can grow and shrink the pool at runtime without requiring an application restart.
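The last three points can be illustrated with a minimal sketch. The post_create hook, status and resize calls shown here are assumptions based on recent deadpool versions (their exact signatures may vary between releases) and reuse the Manager type from the example above:

use deadpool::managed::{Hook, Pool};

async fn demo(mgr: Manager) {
    let pool: Pool<Manager> = Pool::builder(mgr)
        .max_size(16)
        // Hook that runs right after a new object has been created.
        .post_create(Hook::sync_fn(|_obj, _metrics| Ok(())))
        .build()
        .unwrap();

    // Inspect the inner workings of the pool.
    let status = pool.status();
    println!("size: {}, available: {}", status.size, status.available);

    // Grow (or shrink) the pool at runtime.
    pool.resize(32);
}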

Unmanaged pool

An unmanaged pool is useful when you can't write a manager for the objects you want to pool or simply don't want to. This pool implementation is slightly faster than the managed pool because it does not use a Manager trait to create and recycle objects but leaves it up to the user.

Unmanaged pool example

use deadpool::unmanaged::Pool;

struct Computer {}

impl Computer {
    async fn get_answer(&self) -> i32 {
        42
    }
}

#[tokio::main]
async fn main() {
    let pool = Pool::from(vec![
        Computer {},
        Computer {},
    ]);
    let s = pool.get().await.unwrap();
    assert_eq!(s.get_answer().await, 42);
}

FAQ

Why does deadpool depend on tokio? I thought it was runtime agnostic...

Deadpool depends on tokio::sync::Semaphore. This does not mean that the tokio runtime or anything else from tokio is used or becomes part of your build. You can easily check this by running the following command in your own code base:

cargo tree --format "{p} {f}"

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

deadpool's People

Contributors

0xwof, aarashy, attila-lin, aumetra, benesch, bikeshedder, brocaar, conradludgate, dchenk, elpiel, gyfis, kitsuneninetails, mcheshkov, nathanflurry, paolobarbolini, randers00, rofrol, rossdylan, shadaj, slashnick, srijs, tobz, turbo87, tyranron, walfie, weiznich, wokket, xfbs, yotamofek, younessbird


deadpool's Issues

Re-export deadpool PoolConfig from deadpool-redis (et co)

I'm probably missing something obvious, but I don't seem to be able to construct a PoolConfig struct manually, like this:

let config = Config { url: Some(redis_uri()), pool: PoolConfig { max_size: 32 } };

I think this would work if deadpool-redis re-exported PoolConfig from deadpool. Would you be open to this addition?

Thanks!

bump deadpool-postgres version

I added deadpool-postgres 0.5.6, which should depend on tokio-postgres 0.6.0;
however, the crate depends on tokio-postgres = "0.5.1".

Maybe bump deadpool-postgres to 0.6.0 to keep on par with tokio-postgres and publish a new crate?

Cargo run 'postgres-actix-web' example error

Cargo run 'postgres-actix-web' example error:

error[E0308]: mismatched types
  --> src/main.rs:50:5
   |
48 | fn create_pool() -> Result<Pool, ConfigError> {
   |                     ------------------------- expected `std::result::Result<deadpool::managed::Pool<deadpool_postgres::ClientWrapper, tokio_postgres::error::Error>, config::error::ConfigError>` because of return type
49 |     let cfg = Config::from_env("PG")?;
50 |     cfg.create_pool(tokio_postgres::NoTls)
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected enum `config::error::ConfigError`, found enum `deadpool_postgres::config::ConfigError`
   |
   = note: expected type `std::result::Result<_, config::error::ConfigError>`
              found type `std::result::Result<_, deadpool_postgres::config::ConfigError>`

How to commit a transaction?

Transaction#commit moves the transaction, but an owned reference to the transaction doesn't seem to be available. Am I missing something, or are transactions unusable with this pool?
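For reference, a minimal sketch of how a transaction is committed with later deadpool-postgres versions, which provide a Transaction wrapper exposing commit just like tokio_postgres does; the exact types and the statement are assumptions for illustration:

use deadpool_postgres::Pool;

async fn run(pool: &Pool) -> Result<(), Box<dyn std::error::Error>> {
    let mut client = pool.get().await?;
    let transaction = client.transaction().await?;
    // Illustrative statement only.
    transaction.execute("UPDATE foo SET bar = 1", &[]).await?;
    transaction.commit().await?;
    Ok(())
}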

Get a problem when using pipe()

use deadpool_redis::pipe;
let mut data: Vec<String> = pipe().cmd("lrange").arg("ck_cache".to_string()).arg(0).arg(-1).query_async(&mut connection).await.unwrap();

I want to get the data from ck_cache, but no data is returned.

When I just use cmd(), the data is returned. I don't know why. I want to execute lrange and ltrim, so I chose the pipe() function.

The whole command is:

let mut data: Vec<String> = pipe().cmd("lrange").arg("ck_cache".to_string()).arg(0).arg(-1).cmd("ltrim").arg("ck_cache").arg(length).arg(-1).ignore()
                        .query_async(&mut connection).await.unwrap();

It deletes the data successfully but returns no data.

Error when updating from 0.4 to 0.5?

Hi, I'm updating my app from 0.4 to 0.5 and I'm getting these errors when building:

error[E0599]: no method named `host_path` found for type `tokio_postgres::config::Config` in the current scope
   --> C:\Users\Admin\.cargo\registry\src\github.com-1ecc6299db9ec823\deadpool-postgres-0.5.0\src\config.rs:202:17
    |
202 |             cfg.host_path("/run/postgresql");
    |                 ^^^^^^^^^ method not found in `tokio_postgres::config::Config`

error[E0599]: no method named `host_path` found for type `tokio_postgres::config::Config` in the current scope
   --> C:\Users\Admin\.cargo\registry\src\github.com-1ecc6299db9ec823\deadpool-postgres-0.5.0\src\config.rs:204:17
    |
204 |             cfg.host_path("/var/run/postgresql");
    |                 ^^^^^^^^^ method not found in `tokio_postgres::config::Config`

error[E0599]: no method named `host_path` found for type `tokio_postgres::config::Config` in the current scope
   --> C:\Users\Admin\.cargo\registry\src\github.com-1ecc6299db9ec823\deadpool-postgres-0.5.0\src\config.rs:206:17
    |
206 |             cfg.host_path("/tmp");
    |                 ^^^^^^^^^ method not found in `tokio_postgres::config::Config`

error: aborting due to 3 previous errors

Any advice on how to solve this?

README comparison is out of date

bb8 uses a callback based interface (See pool.run) and provides the same configuration options as r2d2. At the time of writing there is no official release which supports async/.await.

This is no longer true. There is now a guard-based interface and an official release which supports async/await.

Removal of deadpool-redis wrappers for Cmd and Pipeline

Since redis 0.15 the wrappers should no longer be needed, as the only reason for them used to be the consuming API. redis 0.15, however, introduced the ConnectionLike trait, which makes it awkward to use in conjunction with Deref and DerefMut of Object, as automatic dereferencing and trait implementations don't go well together.

Option 1: Explicit dereferencing

This would require the user to dereference Object<redis::Client> explicitly:

let conn = pool.get().await;
cmd("PING").query_async(*conn).await

While this is perfectly fine Rust code, it leaks some implementation details that I would rather hide. I feel like using deadpool-redis should work exactly the same way as working with redis directly.

Option 2: Change Object into a trait

That way deadpool_redis::Object would only implement the deadpool::Object trait. This would enable it to also implement the ConnectionLike trait and therefore be compatible with redis::Cmd and redis::Pipeline.

let conn = pool.get().await;
cmd("PING").query_async(conn).await;

Option 3: Merge all deadpool crates into one

This would enable impl ConnectionLike for Object<redis::Client>.

I'm highly against merging the crates together just to be able to do that. I'm just listing it here as it is in fact a solution to the problem. The first deadpool version 0.1 did it exactly this way and had a postgres and redis feature.


I'm tempted to choose Option 2 and make deadpool more compatible with future libraries that might also use a trait for its client object.

This is something that I need to figure out before releasing version 1.0.

deadpool-postgres: support LISTEN/NOTIFY

Feature request:

The current way that tokio_postgres supports async messaging in Postgres is through the poll_message method on a Connection (from which you can construct your own stream with stream::poll_fn). Unfortunately, deadpool-postgres only exposes Clients without access to those Clients' counterpart Connection. It would be great if deadpool-postgres could support this use-case directly in some way.

Proposed changes:

There's probably a few ways this could be done, but the ones that I thought of:

  1. Allow underlying connections to be extracted from the ClientWrapper through a method.
  2. Implement Into<Connection> for ClientWrapper.
  3. If exposing the underlying Connection isn't the right approach, then ClientWrapper could expose a stream of tokio_postgres::AsyncMessages directly through a method.

Of course, if there's another workaround, please let me know! And if any of these approaches ☝️ sound appropriate, then I'd be happy to take a crack at implementing this myself.

Timeouts are executor specific

Deadpool is meant to be 100% compatible with any executor. Since the introduction of timeout_get (see #9) there is a small amount of code that depends on the tokio executor. Whenever configuring timeouts or using timeout_get the code is only compatible with a tokio executor:

async fn apply_timeout<F, O, E>(
    future: F,
    timeout_type: TimeoutType,
    duration: Option<Duration>,
) -> Result<O, PoolError<E>>
where
    F: Future<Output = O>,
{
    match duration {
        Some(duration) => match timeout(duration, future).await {
            Ok(result) => Ok(result),
            Err(_) => Err(PoolError::Timeout(timeout_type)),
        },
        None => Ok(future.await),
    }
}

This should be fixed and I'm looking for ways to implement timeouts which are compatible with multiple executors.

I'm currently thinking about two ways to solve this:

  • Add feature flags to enable tokio, async-std, etc. support and add an executor enum in the configuration where you can specify the executor to be used. When configuring a timeout and/or calling timeout_get without an executor set, the code will return an Err(PoolError::NoRuntimeSpecified).

  • Pass in the timeout function into the config object. I'm currently thinking about a Executor/Runtime trait that provides the timeout function.

Add support for resizing the pool

It would be nice to have support for growing and shrinking the pool size during runtime.

It is currently unclear how this should be implemented. crossbeam_queue::SegQueue could be used instead of ArrayQueue at the cost of a slightly slower pool.

Disabling the config feature is not passed through from deadpool-lapin to deadpool

I have a bunch of projects that depend on one or both of deadpool-lapin and deadpool-postgres but don't depend on deadpool directly. In order to avoid the dependency on the config crate, which I don't need, I set default-features = false on deadpool-lapin and deadpool-postgres, but the removal of the feature is not passed through because default-features are enabled on deadpool.

Since people would not normally be depending on deadpool directly, it might be better to have no default features on that crate and have the implementation crates explicitly enable the features they need. Alternatively the shipped implementation crates could disable default features and only enable what they need.

Error when running example for postgres

Running the example produces the error "Value: missing field pg" - what do I set this to? Is it set up in the code correctly? A partial snippet for context is provided below after the stacktrace.

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: missing field `pg`', src/main.rs:199:15
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Panic in Arbiter thread.
#[derive(Debug, Deserialize)]
struct Config {
    pg_host: String,
    pg_port: String,
    pg_user: String,
    pg_password: String,
    pg_sslmode: String,
    pg: deadpool_postgres::Config,
}

impl Config {
    fn from_env() -> Result<Self, ConfigError> {
        let mut cfg = ::config::Config::new();
        cfg.merge(::config::Environment::new())?;
        cfg.try_into()
    }
}

#[actix_rt::main]
async fn main() -> std::result::Result<(), std::io::Error> {
    dotenv().ok();
    let cfg = Config::from_env().unwrap();
    let pool = cfg.pg.create_pool(
                MakeTlsConnector::new(TlsConnector::builder().danger_accept_invalid_certs(true).build().unwrap())
                ).unwrap();
    let posts = get_posts(pool.clone()).await.unwrap();

    HttpServer::new(move || {
        App::new()
        .data(pool.clone())
        .data(posts.clone())
        .service(index_handler)
        .service(page_handler)
        .service(post_handler)
        .service(search_handler)
        .service(fs::Files::new("/assets", "./static"))
        .default_service(web::to(not_found_handler))
    })
    .bind("0.0.0.0:80")?
    .run().await
}

Disable default features for lapin

The default set of features for lapin includes native-tls. Since features are additive, this means that a user of deadpool-lapin cannot remove the dependency on native-tls and thus openssl, which means this crate can't be used in situations where linking openssl isn't possible/desired.

What Rust Type should I use for postgres's timestamp?

What Rust Type should I use for postgres's timestamp?
let a: Timestamp = row.get("time_edited");
A panic happens:
panicked at 'error retrieving column time_edited: error deserializing column 2: cannot convert between the Rust type postgres_types::special::Timestamp<i64> and the Postgres type timestamp'

Add an unmanaged version of the pool

@svenknobloch proposed in #24 to add an add function to the pool, which allows using the pool where objects are not created at runtime but provided upfront. This version of the pool does not need a manager or timeouts, as no creation and/or recycling needs to happen.

Doesn't build with current Tokio?

Attempting to compile this crate throws these errors:

error[E0432]: unresolved import `tokio::sync::mpsc`
 --> src/lib.rs:6:18
  |
6 | use tokio::sync::mpsc::{channel, Receiver, Sender};
  |                  ^^^^ could not find `mpsc` in `sync`

error[E0432]: unresolved import `tokio::sync::Mutex`
 --> src/lib.rs:7:5
  |
7 | use tokio::sync::Mutex;
  |     ^^^^^^^^^^^^^^^^^^ no `Mutex` in `sync`

error[E0603]: module `sync` is private
 --> src/lib.rs:6:12
  |
6 | use tokio::sync::mpsc::{channel, Receiver, Sender};
  |            ^^^^

error[E0603]: module `sync` is private
 --> src/lib.rs:7:12
  |
7 | use tokio::sync::Mutex;
  |            ^^^^

error: aborting due to 4 previous errors

Looks like this might need a feature flag now

Don't panic when redis connection is `None`

Right now the redis connection wrapper panics when running a query after a previous call to query failed with an error. The query function should either return an Error if the connection is dead (maybe it is possible to generate a RedisError from within deadpool-redis) or create a new connection transparently.

How to test if initialization is successful?

Hi everyone!

Just a quick question: how can we test if the database pool gets initialized successfully?

The context is quite simple: imagine the .env file containing a typo for the password option (either in the name such as PASS instead of PASSWORD or just the value as in this example):

PG.USER=myuser
PG.PASSWORD=wrongpass
PG.HOST=127.0.0.1
PG.PORT=5432
PG.DBNAME=mydb

Currently, code such as the one below does not detect any issue at startup:

let pool = match config.pg.create_pool(NoTls) {
        Ok(pool) => pool,
        Err(err) => {
            println!(">>> Database connection error: '{}'", err);
            exit(1);
        }
    };

Of course, the issue appears when the Client is used:

Error getting db connection: Backend(Error { kind: Connect, cause: Some(Os { code: 2, kind: NotFound, message: "No such file or directory" }) })

Any advice, please?
Thanks

Code executes concurrently but not in parallel when awaiting tasks on .get()

I've been eating around this particular cookie for a while today.

When I rely solely on tasks calling .get(), even if I spawn 4 tasks, there's only ever one resource spawned by deadpool. This is pretty noticeable because the object pool I'm managing with deadpool is a pool of browsers created by fantoccini. You were in the thread earlier.

I believe that with #[tokio::main(core_threads = 4, max_threads = 10)] and:

tokio::spawn(async { scrape_worker(scraping1).await }),

for each worker, it should be working in parallel. To force it to create more resources I did this:

    let browser1 = scraping.client_pool.get().await;
    let browser2 = scraping.client_pool.get().await;
    let browser3 = scraping.client_pool.get().await;
    let browser4 = scraping.client_pool.get().await;
    let mut futures = vec![
        tokio::spawn(async { scrape_worker(scraping1).await }),
        tokio::spawn(async { scrape_worker(scraping2).await }),
        tokio::spawn(async { scrape_worker(scraping3).await }),
        tokio::spawn(async { scrape_worker(scraping4).await }),
    ];
    
    std::mem::drop(browser1);
    std::mem::drop(browser2);
    std::mem::drop(browser3);
    std::mem::drop(browser4);
    loop { };

It creates 4 browser windows, but the work proceeds in one browser window at a time. It does rotate in a consistent way between the 4 browser windows (counter-clockwise in my tiled wm), but they're still not parallel.

Here's the full code: https://gist.github.com/bitemyapp/f0ee741224f49be13104e5e3fc1af911

I doubt my hypothesis somewhat because the logs look like this:

[2020-01-17T02:15:08Z ERROR scraping_rs] Started scrape_page
[2020-01-17T02:15:08Z ERROR scraping_rs] Acquired client
[2020-01-17T02:15:08Z ERROR scraping_rs] Went to URL
[2020-01-17T02:15:09Z ERROR scraping_rs] Started scrape_page
[2020-01-17T02:15:09Z ERROR scraping_rs] Acquired client
[2020-01-17T02:15:09Z ERROR scraping_rs] Went to URL

If it were the case that they were concurrently blocking on deadpool's pool.get() I should see consecutive Started scrape_page.

I feel like I must be missing something obvious here. If the code is getting spuriously linearized by something else, I'm guessing that would cause deadpool to only ever instantiate one resource right?

I did some profiling as well (for whatever call graph hierarchy attribution is worth with async) and I got something like this:

(profiling screenshot omitted)

The purple/blue is the stuff that has the sub-string deadpool.

deadpool-lapin - recycling connections and channels

Per my understanding, RabbitMQ requires using a channel (lapin::Channel) within a connection (lapin::Connection) in order for a Publisher to send messages.

Unfortunately, I'm struggling to recycle deadpool_lapin::Connection objects provided by deadpool_lapin::Pool Manager, as the Manager does not handle channels, and lapin::Connection.channels is a private field.

My use case is a hyper based microservice publishing events to RabbitMQ.
Currently, the least bad performance is achieved by reusing a channel directly:

let client: deadpool_lapin::Connection = pool.get().await.expect("RabbitMQ connection failed");
let channel = client.create_channel().await.unwrap();

let service = make_service_fn(|_| {
    let channel = channel.clone();
    async {
        Ok::<_, hyper::Error>(
            service_fn(move |req: Request<Body>| microservice_handler(req, channel.clone()))
        )
    }
});

While providing the best performance so far, the overall impact on req/s is still significant, as the single connection and channel to RabbitMQ become the bottleneck.

I would prefer using code similar to deadpool-redis or deadpool-postgres, allowing several connections to RabbitMQ, each with its channel:

let service = make_service_fn(|_| {
    let pool = pool.clone();
    async {
        Ok::<_, hyper::Error>(
            service_fn(move |req: Request<Body>| microservice_handler(req, pool.clone()))
        )
    }
});

However, for the time being, as deadpool_lapin::Pool does not handle channels, and lapin::Connection.channels is a private field, channels must be created and removed with each request, resulting in an even greater performance impact than a single connection/channel bottleneck.

While I have limited experience on the matter, my intuition is that, to match the Pool pattern, it would be great if the Manager implementation managed channels on top of the connections.

try_get?

A try_get method (non-async, returning Result<Option<T>, E>) would be nice to have if you would rather shed load than queue unboundedly and risk bufferbloat.

This is related to #9, but avoids the need to ever switch contexts or even touch the async machinery, and so does benefit from being in deadpool rather than user code.

Provide a `deadpool_postgres::Client` type alias

Thanks so much for this crate!

deadpool_lapin has this handy type alias:

pub type Connection = deadpool::Object<lapin::Connection, Error>;

… but deadpool_postgres is missing an equivalent (I guess it would be deadpool_postgres::Client).

It would be useful because it means a crate using deadpool_postgres doesn't need to also depend on deadpool just to get the type, if it wants to pass around a database client that it got from the pool.
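A sketch of what the requested alias might look like (hypothetical at the time, simply mirroring the lapin alias quoted above):

// Hypothetical addition to deadpool_postgres:
pub type Client = deadpool::Object<ClientWrapper, tokio_postgres::Error>;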

Split PoolConfig and ConnectionConfig into separate structures

Right now the Config structs of the deadpool-* crates contain all the connection configuration and a pool: PoolConfig field. It would be better to move the configuration specific to the connection/manager into its own structure.

Right now the Config structs implement a create_pool method which creates a manager using the connection/manager specific fields and uses the pool field to create the actual Pool.

I think a better design would be to have a PoolConfig and a ManagerConfig structure which are both part of a new structure called Config. The new Config structure would look more or less like this:

#[derive(Clone, Debug)]
#[cfg_attr(feature = "config", derive(serde::Deserialize))]
struct Config {
    pub manager: Option<ManagerConfig>,
    pub pool: Option<PoolConfig> 
}

impl Config {
    pub fn create_pool(self) -> Pool {
        let manager = crate::Manager::new(self.manager);
        Pool::from_config(manager, self.pool)
    }
}

The create_pool method would now consume self and also enable implementing the Into/From traits:

impl From<Config> for Pool {
    fn from(config: Config) -> Self {
        config.create_pool()
    }
}

The Config struct could even be made into a generic Config<M> structure, but I'm undecided if that's actually worth it.

Add support for odbc

I have a noob question.

I would like to do something like:

use async_std::sync::{Arc, Mutex};
use async_trait::async_trait;
use failure::Error;
use odbc_safe::AutocommitOn;

use std::env;

struct Conn {
    conn: Mutex<Arc<odbc::Connection<'static, AutocommitOn>>>,
}
struct Manager {}
type Pool = deadpool::managed::Pool<Conn, Error>;

#[async_trait]
impl deadpool::managed::Manager<Conn, Error> for Manager {
    async fn create(&self) -> Result<Conn, Error> {
        let env = odbc::create_environment_v3_with_os_db_encoding("utf-8", "latin1").unwrap();

        let conn_str = env::var("SQL_STR").unwrap();

        let conn = env.connect_with_connection_string(&conn_str).unwrap();

        let conn = Mutex::new(Arc::new(conn));

        Ok(Conn { conn })
    }

    async fn recycle(&self, conn: &mut Conn) -> deadpool::managed::RecycleResult<Error> {
        Ok(())
    }
}

but the compiler complains that '*mut odbc_sys::Env' cannot be shared between threads safely.

Why aren't the Mutex and Arc enough for this?

Use tokio_postgres::Client::is_closed when recycling

After having talked to @sfackler and @dunnock I think it is safe to assume that Client::is_closed can be used instead of client.simple_query("") for most uses of the library. The simple_query just adds a little extra safety to the use of deadpool-postgres.

Therefore I'm adding a new config option called recycle_test_query which controls whether the test query is executed after checking the health using is_closed. This config option will default to true in version 0.5 and change to false in version 0.6.

Would you expect any issues when passing a reference to the pool into a handler?

I've got a weird performance issue going on and I just want to check if the way I'm doing this raises any flags.

There's basically a single handler in this Actix server, and depending on the type of request it dispatches to one of two functions. One of these functions needs a transaction, and it looks pretty much like this:

async fn problem_fn(pool: &Pool) -> Result<stuff> {
    let mut client = pool.get().await.unwrap();
    let transaction = client.transaction().await.unwrap();
    let _result = transaction.execute(statement, params).await.unwrap();
    Ok(())
}

The call chain is like: registered_actix_handler -> dispatch_fn -> problem_fn, where dispatch_fn has the Pool as part of its Actix AppState.

The other function just takes (client: &Client) as I've seen in most examples, and it doesn't seem to suffer from any issues, so I'm just wondering if passing the pool in as an argument to a handler might cause any issues with returning connections to the pool or something?

For some reason the handler that uses transactions takes a long time to process (especially when there are 0 connections in the pool when the server has just started). Subsequent requests run at an expected speed. When running on AWS EC2 instances, however, just about every request to this handler suffers from the long processing times, in case this adds any other clues.

I may be looking at the wrong problem here, but again I just want to check and see if I'm doing it wrong by passing in the pool itself like that. I have a feeling it has something to do with connections and not the SQL getting executed, because as I mentioned earlier there are times (usually after the first connection is made) that it processes quite quickly. Sorry for the super long issue and thanks in advance for any pointers.

Observing pool exhaustion?

Any recommendations on observing pool exhaustion? During load testing, curious to know if we’re exhausting available connections or approaching that threshold.

Like a warning: “Pool exhausted, requesters must wait for a connection to become available. Or provision more connections”

Add timeout function to unmanaged pool implementation or additional documentation

I think it would be an intuitive addition to add a timeout wrapper to the unmanaged Pool. A new method (like the managed one) would perhaps be preferable instead of returning a Result from the normal get()? If you don't deem such a method necessary, I'd suggest adding a snippet to the documentation for noobs like me who are not very familiar with tokio.

Based on this: #9

See your comment

Unmanaged pools don't recycle or create new objects. Thus only wait_timeout makes any sense. Right now this can already be achieved by calling tokio::time::timeout(std::time::Duration::from_secs(5), pool.get()). It might make sense to add this as a configuration option to the pool nonetheless. It will require the pool.get() method to return a Result<T, PoolError> instead of T.

I like the lib. Thank you.
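A minimal sketch of the workaround quoted in that comment, reusing the Computer type from the unmanaged example above; whether get() returns a Result directly depends on the deadpool version, so the error handling here is an assumption:

use std::time::Duration;
use deadpool::unmanaged::{Object, Pool};
use tokio::time::timeout;

async fn get_with_timeout(pool: &Pool<Computer>) -> Option<Object<Computer>> {
    // Wrap the pool's get() future in tokio's timeout instead of
    // relying on a built-in wait_timeout.
    match timeout(Duration::from_secs(5), pool.get()).await {
        Ok(Ok(obj)) => Some(obj),
        _ => None,
    }
}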

Postgres URL support

tokio_postgres::Config supports URLs, but as far as I can see deadpool doesn't, meaning one has to drop using URLs (and some systems, such as Heroku, provide URLs).
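For context, tokio_postgres accepts URL-style connection strings because its Config implements FromStr; a minimal sketch:

use tokio_postgres::Config;

fn parse_url() -> Result<Config, tokio_postgres::Error> {
    // Both key/value ("host=... user=...") and URL style strings work.
    "postgresql://user:pass@localhost:5432/mydb".parse::<Config>()
}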

Incorrect number of available objects when Manager::create returns an error

Hi!

I was experimenting with this library and found a problem with how the number of available objects in the pool is calculated.

Example:

use async_trait::async_trait;
use deadpool::*;
use futures::channel::oneshot;

#[derive(Debug)]
struct Manager {
    should_fail: bool,
}

#[derive(Debug)]
enum Error {
    Fail,
}

#[derive(Debug)]
struct Connection;

#[async_trait]
impl deadpool::Manager<Connection, Error> for Manager {
    async fn create(&self) -> Result<Connection, Error> {
        if self.should_fail {
            Err(Error::Fail)
        } else {
            Ok(Connection)
        }
    }
    async fn recycle(&self, _conn: &mut Connection) -> Result<(), Error> {
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    let manager = Manager { should_fail: true }; // when should_fail: false this program doesn't hang
    let pool = Pool::new(manager, 10);
    let pool_clone = pool.clone();
    let (tx, rx) = oneshot::channel::<()>();
    let join_handle_1 = tokio::spawn(async move {
        println!("1: {:?}", pool_clone.status());
        let _conn = pool_clone.get().await;
        tx.send(()).unwrap();
    });
    let pool_clone = pool.clone();
    let join_handle_2 = tokio::spawn(async move {
        rx.await.unwrap();
        println!("2: {:?}", pool_clone.status());
        let _conn = pool_clone.get().await;
    });
    let _ = join_handle_1.await;
    let _ = join_handle_2.await;
    Ok(())
}

This program prints

1: Status { size: 0, available: 0 }
2: Status { size: 0, available: 1 }

and hangs.

My understanding is that after the first call to get the value of available is 1 because the obj is dropped when obj.state == ObjectState::Creating:

deadpool/src/lib.rs, lines 111 to 114 at cadbf70:

ObjectState::Creating => {
pool.available.fetch_add(1, Ordering::Relaxed);
pool.size.fetch_sub(1, Ordering::Relaxed);
}

The second call to get hangs on recv because available > 0

Change the folder structure to match crate names?

The way that the workspace crates are laid out -- redis instead of deadpool-redis -- makes it impossible to patch/do Git imports of forked versions of the driver-specific crates.

Naturally, this makes it harder to test and forces users to vendor code if they want a portable way to patch these dependencies temporarily.

It seems like simply making the folder names match the crate names would be a simple enough change with no repercussions, but I'm curious if there was originally a specific reason to lay it out as it is now.

Continuous integration

Please add some CI, for example Travis: log in on travis-ci.org via GitHub, enable CI for this project, and add a .travis.yml to the project root:

language: rust
rust:
- stable

sudo: false

cache: cargo

script:
- cargo build --verbose
- cargo test --verbose

How to use #[tokio::test]

error: the async keyword is missing from the function declaration
  --> hawk_data\src\db\redis.rs:44:7
   |
44 | async fn test_managed_basic() {
   |       ^^

When I use #[tokio::test], how do I solve this error?

Can the Actix-web + deadpool-postgres example be updated?

Hi, in the actix-web + deadpool-postgres example you mention that because actix-web doesn't support tokio 0.3, you need to use deadpool 0.5.0.

I probably don't fully understand how crates + dependencies work, but when I tested the latest actix-web + deadpool-redis, everything worked fine, with Cargo.lock showing that it uses tokio = 0.2.23.

Is the incompatibility message necessary?

Thanks!
