Comments (4)
Hi, currently I'm not planning to implement this feature.
You can create a semi-sync replica or relay log and connect the library to it.
I don't see why MySqlCdc should block transactions on the master server.
from mysqlcdc.
Thanks for your reply, you're right that the master shouldn't be blocked!
When the master's event stream is very big, how can parallel replication be implemented? Looking forward to your guidance.
```csharp
private async Task ReadEventStreamAsync(Func<IBinlogEvent, Task> handleEvent, CancellationToken cancellationToken = default)
{
    var eventStreamReader = new EventStreamReader(_databaseProvider.Deserializer);
    var channel = new EventStreamChannel(eventStreamReader, _channel.Stream, cancellationToken);
    var timeout = _options.HeartbeatInterval.Add(TimeSpan.FromMilliseconds(TimeoutConstants.Delta));

    while (!cancellationToken.IsCancellationRequested)
    {
        var packet = await channel.ReadPacketAsync(cancellationToken).WithTimeout(timeout, TimeoutConstants.Message);
        if (packet is IBinlogEvent binlogEvent)
        {
            // Stop replication if client code throws an exception,
            // as a derived database may end up in an inconsistent state.
            await handleEvent(binlogEvent);

            // Commit the replication state only if no exception was thrown.
            UpdateGtidPosition(binlogEvent);
            UpdateBinlogPosition(binlogEvent);
        }
        else if (packet is ErrorPacket error)
            throw new InvalidOperationException($"Event stream error. {error}");
        else if (packet is EndOfFilePacket && !_options.Blocking)
            return;
        else
            throw new InvalidOperationException("Event stream unexpected error.");
    }
}
```
Hi, @wilsonliu78
You cannot read the replication stream in parallel. You need to sequentially read the stream from the master.
Database replication is based on the Commit Log (also known as the transaction log).
The Commit Log is just a log file that contains a sequence of transactions.
When you insert/delete/update rows in your database, new records are appended to the log.
MySQL (just like other relational databases) has a single Commit Log.
No matter how big your log(stream) is, you must replicate it sequentially (not in parallel).
You can use batch processing. For example, you read the first 1000 log records, then you process them, then you read the next 1000 log records and repeat.
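To illustrate the batching idea, here is a minimal, hypothetical sketch (the `BatchProcessor` class and its event type are stand-ins I made up, not MySqlCdc types): events are read sequentially, buffered, and processed in groups of 1000, with the replication position committed once per batch rather than per event.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: buffer sequentially-read log records into
// fixed-size batches and process each batch as a unit.
class BatchProcessor
{
    const int BatchSize = 1000;
    readonly List<string> _buffer = new List<string>();

    public int BatchesProcessed { get; private set; }

    // Called for every event read sequentially from the stream.
    public void OnEvent(string binlogEvent)
    {
        _buffer.Add(binlogEvent);
        if (_buffer.Count >= BatchSize)
            Flush();
    }

    // Process whatever is buffered, e.g. bulk-insert into a target store,
    // then commit the replication position once for the whole batch.
    public void Flush()
    {
        if (_buffer.Count == 0) return;
        BatchesProcessed++;
        _buffer.Clear();
    }
}
```

Reading stays strictly sequential; only the per-batch processing step gets cheaper, because the target store sees one bulk operation instead of a thousand small ones.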
It sounds like your architecture needs data streaming.
I would recommend you to take a look at Apache Kafka.
Kafka is a streaming platform, very similar to Commit Log.
In Kafka, you can create many partitions, which allows you to parallelize stream processing.
Kafka can store and process terabytes of data.
You can use Kafka itself, or you can capture database changes using MySqlCdc, put them in a Kafka topic, and then process them.
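The key to getting parallelism out of partitions is the partition key. A rough sketch of the idea (the `Partitioner` class is illustrative, not a real Kafka client API): if you key change events by table name, all events for one table land on the same partition, so per-table ordering survives while different partitions are consumed in parallel.

```csharp
using System;

// Hypothetical sketch of key-based partition assignment: a stable hash of
// the key, mapped into the partition count. Real Kafka clients do this for
// you when a message has a key.
static class Partitioner
{
    public static int PartitionFor(string key, int partitionCount)
    {
        unchecked
        {
            // Simple deterministic string hash (illustrative only).
            int h = 17;
            foreach (char c in key) h = h * 31 + c;
            // Mask to non-negative before taking the modulus.
            return (h & int.MaxValue) % partitionCount;
        }
    }
}
```

Because the assignment is deterministic, events for `"orders"` always go to the same partition, which is what lets consumers process partitions independently without breaking per-key ordering.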
In other words, MySqlCdc is designed for capturing database changes. But if you need event streaming, Kafka is the right choice. Every Kafka partition is similar to a Commit Log. You can use Kafka for event sourcing and real-time data streaming.