vahid-sohrabloo / chconn

Low-level ClickHouse database driver for Golang
License: MIT License
Using the latest v2 version, I get this error when attempting to insert a Column[bool] into a Bool column:

mismatch column type: ClickHouse Type: Bool, column types: Int8|UInt8|Enum8

It seems that chconn validates the schema before insert, and it doesn't recognize Bool. I understand that Bool is just an alias for UInt8, but other query interfaces handle this transparently. Is there some way to insert into a column of type Bool using chconn, or are there plans to support this?
Hello, I found this nice project, but I would like to know whether it supports multi-threaded parallel access (multiple clients in parallel) against one server.
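For what it's worth, a single connection in a low-level driver is generally not safe for concurrent use; the usual pattern is one connection per goroutine, often behind a small pool. A minimal, driver-agnostic sketch (the pool type and names here are illustrative, not chconn API):

```go
package main

import (
	"fmt"
	"sync"
)

// pool is a minimal connection pool: goroutines borrow a connection
// from a buffered channel and return it when done, so many parallel
// clients can share a fixed set of connections to one server.
type pool[T any] struct{ conns chan T }

func newPool[T any](conns []T) *pool[T] {
	ch := make(chan T, len(conns))
	for _, c := range conns {
		ch <- c
	}
	return &pool[T]{conns: ch}
}

func (p *pool[T]) acquire() T  { return <-p.conns }
func (p *pool[T]) release(c T) { p.conns <- c }

func main() {
	p := newPool([]string{"conn-1", "conn-2"})
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c := p.acquire()
			defer p.release(c)
			_ = c // run queries on c here
		}()
	}
	wg.Wait()
	fmt.Println("all clients done")
}
```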
I'm doing robustness tests by randomly killing ClickHouse or ZooKeeper to check that my program correctly retries inserting data. I noticed that sometimes I get an empty insert. I think this is due to a bug in chconn v3:
func (ch *conn) InsertWithOption(
	ctx context.Context,
	query string,
	queryOptions *QueryOptions,
	columns ...column.ColumnBasic) error {
	stmt, err := ch.InsertStreamWithOption(ctx, query, queryOptions)
	if err != nil {
		return err
	}
	defer stmt.Close()
	err = stmt.Write(ctx, columns...)
	if err != nil {
		return err
	}
	err = stmt.Flush(ctx)
	if err != nil {
		return err
	}
	for _, col := range columns {
		col.Reset()
	}
	return nil
}
The problem is that the column buffers are already reset at the end of stmt.Write(), so if the error happens in stmt.Flush(), it is not possible to retry the insert.
I have a requirement that is a little bit unusual. The reason behind this is a little long to explain...
I generate data in 4 columns let's say K, V1, V2, V3. Now, I would like to transform these data before insertion so that I end up with this in database:
K  V1       (empty)  (empty)
K  (empty)  V2       (empty)
K  (empty)  (empty)  V3
To do this, I would need a way to prepend values in a column. Does it seem feasible?
Thanks.
How can load balancing and failover be supported?
for _, key := range model.Keys {
	col := column.New[types.Decimal64]()
	for _, d := range model.Values[key] {
		col.Append(types.Decimal64FromFloat64(d, 3))
	}
	keys = append(keys, key)
	columns = append(columns, col)
}
For a Decimal64 field I have to append in a loop; for performance I would like to use Append(d...).
Also, if types.Decimal64 could support add, subtract, multiply, and divide directly, I would not have to convert Decimal64 to float64 when reading from the database and convert back for writing.
The same goes for types.Date and types.DateTime: it would be better if they could be compared directly.
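Decimal arithmetic at a fixed scale is just integer arithmetic on the scaled values, which is why direct operators seem feasible: addition and subtraction work unchanged, while multiplication accumulates scale and must be renormalized. A sketch with an assumed scale of 3 (not chconn's types.Decimal64 API):

```go
package main

import "fmt"

// A decimal at scale 3 stores value*1000 in an int64. At equal scales,
// addition and subtraction are plain integer operations; multiplication
// doubles the scale, so the product is divided by the scale factor to
// renormalize.
const scaleFactor = 1000 // 10^3

func addDec(a, b int64) int64 { return a + b }
func mulDec(a, b int64) int64 { return a * b / scaleFactor }

func main() {
	a := int64(1500) // represents 1.500
	b := int64(250)  // represents 0.250
	fmt.Println(addDec(a, b), mulDec(a, b)) // 1750 375 (i.e. 1.750 and 0.375)
}
```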
I'm currently implementing a strategy which limits the size of my inserts based on memory constraints, because in some cases I have many parallel insert buffers in memory and I would like to be sure that they all together don't exceed some max memory setting.
My idea would be to define a method like this:
func (c *Base[T]) UsedMemory() uint64 {
	return uint64(cap(c.values) * int(c.rtype.Size()))
}
I'm using cap() instead of len() as it reflects the allocated memory. I'm not sure whether the function should return uint64 or another type.
From what I see in the code, it shouldn't be too difficult to implement. There are some subtleties for certain types (low cardinality, ...), but it seems doable. One point that is not very clear to me is that some structures seem to be used for written data and others for read data. My use case is to measure memory for writes only, but read data could be covered too.
What do you think?
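The cap-times-element-size idea can be checked standalone; a sketch of the proposed measurement outside the library (usedBytes is a stand-in for the proposed UsedMemory method):

```go
package main

import (
	"fmt"
	"unsafe"
)

// usedBytes reports the backing-array footprint of a slice: allocated
// capacity times element size, the same quantity the proposed
// UsedMemory method would compute from cap(c.values) and rtype.Size().
func usedBytes[T any](s []T) uint64 {
	var zero T
	return uint64(cap(s)) * uint64(unsafe.Sizeof(zero))
}

func main() {
	s := make([]int64, 2, 8) // 2 elements in use, 8 allocated
	fmt.Println(usedBytes(s)) // 64
}
```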
I'm working on large-data calculations for ClickHouse.
I found this library to be very cool and fast.
I'm reading a large number of rows and columns from ClickHouse using chconn, which takes about 850 ms. After the calculation, the results are written back to ClickHouse, but this library only accepts column types, so I must loop over each Go array and assign it to a new column type, which takes a lot of time (about 450 ms). I tried the column type's Fill method, but it does not work as expected (string, date, float64 to decimal64). Is there any fast way to convert a normal array to a chconn column?
Maybe Go 1.18 generics will solve this problem (actually, I think the conversion is totally unnecessary); I look forward to your new generic version.
Following is my current convert code; I'm using a map to store dynamic arrays.
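With Go 1.18 generics, such per-type conversion loops can indeed collapse into one helper; a sketch (mapSlice is illustrative, not a chconn function, and the float64-to-scaled-int64 step merely stands in for a Decimal64 conversion):

```go
package main

import "fmt"

// mapSlice converts a []F to a []T in one pass and one allocation,
// replacing the per-type loops needed to move plain Go slices into
// column element types.
func mapSlice[F, T any](in []F, f func(F) T) []T {
	out := make([]T, len(in))
	for i, v := range in {
		out[i] = f(v)
	}
	return out
}

func main() {
	floats := []float64{1.25, 2.5}
	// e.g. float64 to a scale-3 fixed-point int64, as a decimal would store it
	decs := mapSlice(floats, func(f float64) int64 { return int64(f * 1000) })
	fmt.Println(decs) // [1250 2500]
}
```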
I'm interested in your driver to improve insert performance (I'm currently using clickhouse-go), but I have some columns of type Array(Nullable(UInt32)) and Array(Array(Nullable(UInt32))).
Are these types supported?
Thanks.
Hello,
I'm looking at https://github.com/ClickHouse/ch-go#stream-data, and there is this nice feature of being able to stream data block by block for large inserts. Is there something similar in chconn?
ch-go seems OK, but its callback pattern is not convenient for me; I would prefer a more direct API.
thanks.
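Independent of the exact chconn API, block-by-block streaming boils down to slicing the data and flushing once per block; a driver-agnostic sketch (flush stands in for whatever per-block write the driver exposes):

```go
package main

import "fmt"

// insertInChunks sends rows in fixed-size blocks, invoking flush once
// per block. This is the loop shape of a block-by-block insert without
// a callback-driven API: the caller keeps control of the loop.
func insertInChunks[T any](rows []T, blockSize int, flush func([]T) error) error {
	for start := 0; start < len(rows); start += blockSize {
		end := start + blockSize
		if end > len(rows) {
			end = len(rows)
		}
		if err := flush(rows[start:end]); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	blocks := 0
	_ = insertInChunks(make([]int, 10), 4, func(block []int) error {
		blocks++ // write one block here
		return nil
	})
	fmt.Println(blocks) // 3
}
```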