Comments (4)
Will it swallow all memory and fail, or will it run in a kind of streaming fashion?
Hi @Smotrov, given your description and code, I would expect this query to run incrementally and not buffer all the results in memory -- that is, I would expect the query to stream.
There are some operators that potentially require buffering all their input (grouping, joins, sorts), but you don't seem to be using any of those.
I am not super familiar with exactly how the JSON writing is implemented, but I believe that should be streaming as well.
How could I limit the amount of memory?
You can limit the amount of memory using https://docs.rs/datafusion/latest/datafusion/execution/memory_pool/trait.MemoryPool.html
However, as I mentioned, I wouldn't expect your query to buffer large amounts of memory, so if it does, maybe we need to adjust the writer settings, or there is some improvement to make to DataFusion.
Let us know how it goes!
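For reference, here is a minimal sketch of wiring a memory pool into the runtime. It uses GreedyMemoryPool, the simplest limiting pool (FairSpillPool is the alternative that lets some operators spill to disk); this is only an illustration of the trait linked above, not code from this issue:
use std::sync::Arc;
use datafusion::error::Result;
use datafusion::execution::memory_pool::GreedyMemoryPool;
use datafusion::execution::runtime_env::{RuntimeConfig, RuntimeEnv};
use datafusion::prelude::{SessionConfig, SessionContext};

fn limited_context(limit_bytes: usize) -> Result<SessionContext> {
    // Operators that reserve memory will return an error once the pool
    // is exhausted, instead of letting the process grow without bound
    let rt_config = RuntimeConfig::new()
        .with_memory_pool(Arc::new(GreedyMemoryPool::new(limit_bytes)));
    let runtime_env = Arc::new(RuntimeEnv::new(rt_config)?);
    Ok(SessionContext::new_with_config_rt(SessionConfig::new(), runtime_env))
}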
Hi @Smotrov -- I agree the use of 20-30 GB seems not good. Perhaps there is something in DataFusion that is not accounting for memory correctly (perhaps it is the decoding of the ndjson / zstd stream) 🤔
Thank you @alamb
This is what I actually did.
use std::path::PathBuf;
use std::sync::Arc;
use datafusion::error::Result;
use datafusion::execution::memory_pool::FairSpillPool;
use datafusion::execution::runtime_env::{RuntimeConfig, RuntimeEnv};
use datafusion::prelude::{SessionConfig, SessionContext};

const MEMORY_LIMIT: usize = 8 * 1024 * 1024 * 1024; // 8 GB

fn create_context() -> Result<SessionContext> {
    // Create a memory pool with a limit; FairSpillPool can spill
    // sort/group state to disk when the limit is reached
    let memory_pool = Arc::new(FairSpillPool::new(MEMORY_LIMIT));
    // Configure the runtime environment to use the memory pool
    // and to place spill files under ./tmp
    let rt_config = RuntimeConfig::new()
        .with_memory_pool(memory_pool)
        .with_temp_file_path(PathBuf::from("./tmp"));
    let runtime_env = Arc::new(RuntimeEnv::new(rt_config)?);
    // Configure the session context to use the runtime environment
    let session_config = SessionConfig::new();
    let ctx = SessionContext::new_with_config_rt(session_config, runtime_env);
    Ok(ctx)
}
However, it easily takes 20-30 GB of RAM, and what is interesting, the CPU load stays relatively low, around 20-30%.
The memory consumption is that high when I set at least 4 target partitions.
// Define the partitioned listing table
let listing_options = ListingOptions::new(file_format)
    .with_table_partition_cols(part)
    .with_target_partitions(4)
    .with_file_extension(".ndjson.zst");
It would be great if it were possible to set an actual hard limit on memory; otherwise I can't use it in Docker :-(
Full utilization of all CPU cores would also be cool, if possible.
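As a point of reference, here is a minimal sketch of driving such a memory-limited context as a stream; the table name, path, and query are placeholders, not from this issue. DataFrame::execute_stream yields record batches incrementally, so peak memory is bounded by the in-flight batches rather than the full result set:
use datafusion::error::Result;
use datafusion::prelude::*;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = create_context()?; // the memory-limited context from above
    // Hypothetical NDJSON source; the path and table name are made up
    ctx.register_json("events", "./data/events.json", NdJsonReadOptions::default())
        .await?;
    let df = ctx.sql("SELECT * FROM events").await?;
    // Consume the results batch by batch instead of collecting them
    let mut stream = df.execute_stream().await?;
    while let Some(batch) = stream.next().await {
        println!("got {} rows", batch?.num_rows());
    }
    Ok(())
}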
FWIW: I was able to reproduce it while writing a single file into a partitioned table with multiple partitions (~5 million rows, 4k partitions).
The memory usage comes from having a separate instance of the ZSTD encoder per writing thread (per partition). Probably this feature could help with the memory usage issue (when implemented in zstd-rs -> async-compression).
UPD: for the single file -> non-partitioned table case, DF works just fine (~75 MB peak in total, of which ~15 MB is memory for the encoder), and it's also OK when writing multiple partitions without compression (~500 MB in total due to buffering for 4k writes), so it's only an issue in the case of dozens / hundreds of partitions + ZSTD.
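To make the scaling concrete, here is a standalone sketch of the pattern (this is not DataFusion's writer code; it assumes the async-compression crate with its tokio feature): one live ZstdEncoder per open partition file means the per-encoder compression context and window buffers are multiplied by the partition count, independent of how much data each partition actually receives:
use async_compression::tokio::write::ZstdEncoder;
use tokio::fs::{create_dir_all, File};
use tokio::io::AsyncWriteExt;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    create_dir_all("./tmp").await?;
    // 4_000 partitions -> 4_000 live encoder contexts held in memory at once
    let mut writers = Vec::new();
    for part in 0..4_000 {
        let file = File::create(format!("./tmp/part-{part}.ndjson.zst")).await?;
        writers.push(ZstdEncoder::new(file));
    }
    for (part, w) in writers.iter_mut().enumerate() {
        w.write_all(format!("{{\"partition\": {part}}}\n").as_bytes()).await?;
    }
    // shutdown() flushes each encoder and writes the ZSTD frame footer
    for mut w in writers {
        w.shutdown().await?;
    }
    Ok(())
}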
Related Issues (20)
- `BinaryExpr` evaluate lacks optimization for `Or` and `And` scenarios
- Register SQL planners in `SessionState::new()`
- Implement user defined planner for `date_part`
- Implement user defined planner for `create_struct`
- Implement user defined planner for `create_named_struct`
- Implement user defined planner for `sql_overlay_to_expr`
- BinaryOp supporting multiple parameters in Substrait
- Support `COUNT()` in addition to `COUNT(*)`
- Replace `println!` with `assert!` if possible in DataFusion examples
- Implement Substrait support for SubqueryType::Scalar
- HashJoin for nested types give wrong results
- Clean Up Data Page Statistics Tests and Fix Bug
- Implement user defined planner for sql_position_to_expr
- Implement user defined planner for `sql_compound_identifier_to_expr`
- Implement user defined planner for `sql_substring_to_expr`
- Implement user defined planner for `sql_position_to_expr`
- `where` clause incorrectly reject `NULL` literal (by SQLancer-NoREC)
- Make error message better when `bitwise_*` operator takes wrong argument type
- Improve error message for wrong argument type in operators
- Error evaluating clause `where COL_BIGINT < 1e100` (Found by SQLancer-NoREC)