
Introduction

TiDB Community Logo

Welcome to the TiDB Community! The main objective is to help members of the TiDB community who share similar interests to learn from and collaborate with each other.

Your journey of becoming a contributor and committer starts here: improving docs, improving code, giving talks, organizing meetups, and more.

TiDB User Group

User Group

The TiDB User Groups (TUGs) are groups for facilitating communication and discovery of information related to topics that have long term relevance to large groups of TiDB users.

See TiDB documentation in English or Chinese. You can also get help in AskTUG.com (Chinese) if you encounter any problem.

TiDB Developer Group

Developer Group

Communication

The communicating.md file lists communication channels such as chat, social media, etc.

For more specific topics, join the TiDB Internals developer discussion forum and post topics, or join the TiDB Community Slack workspace and discuss with others.

Governance

TiDB has the following types of groups that are officially supported:

  • Technical Oversight Committee (TOC) serves as the main bridge and channel for coordinating and sharing information across companies and organizations. It is the coordination center for solving problems of resource mobilization and technical research and development direction in the current community and cooperative projects.

  • Teams are persistent open groups that focus on a part of the TiDB projects. A team has its own reviewers, committers, and maintainers, and owns one or more repositories. Team-level decisions are made by its maintainers.

How to contribute

Contributions are welcome and greatly appreciated.

See contributors for details.

All contributors are welcome to claim their rewards by filing this form.

Learning Resources

Learning resources are collected in the learning-resources repository, where you can find materials that help you learn about and contribute to TiDB, such as blog posts on the TiDB architecture.

Community Activities

License

TiDB Community is under the Apache 2.0 license. See the LICENSE file for details.

Acknowledgements

Thank you to the Kubernetes, Apache, and Docker community pages for providing us with inspiration.

People

Contributors

amyangfei, bb7133, bellaxiang, breezewish, caitinchen, cofyc, dcalvin, disksing, kennytm, lance6716, lilin90, lonng, mini256, overvenus, qw4990, ran-huang, rleungx, rustin170506, soline324, sunrunaway, sunxiaoguang, sykp241095, tangenta, tisonkun, winkyao, winoros, xuhuaiyu, zhangyangyu, zimulala, zz-jason


Issues

REQUEST: New membership for dragonly

GitHub Username

@dragonly

SIG you are requesting membership in

  • sig-name: sig-k8s
  • role: committer

Requirements

Sponsors

List of contributions to the SIG’s project

Incubating Program: fuzz and auto debug

Incubating Program

General fuzz and auto debug tool.

Describe the feature or project you want to incubate:

Summary

A tool that supports SQL fuzzing and general fuzzing by mutation, and then automatically detects the suspicious code blocks that lead to bugs by monitoring the application's code execution paths.
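As a rough illustration of the mutation idea (not the tool's actual design), a SQL fuzzer can start from a seed statement and randomly swap tokens for semantically different variants; the token table below is invented for this sketch:

```python
import random

# Hypothetical token-level mutator, purely illustrative of mutation-based
# SQL fuzzing; a real tool would mutate at the AST level.
MUTATIONS = {
    "=": ["<", ">", "<=", ">=", "!="],
    "AND": ["OR"],
    "ASC": ["DESC"],
}

def mutate(sql: str, rng: random.Random) -> str:
    """Randomly swap one mutable token for a different variant."""
    tokens = sql.split()
    candidates = [i for i, t in enumerate(tokens) if t in MUTATIONS]
    if not candidates:
        return sql
    i = rng.choice(candidates)
    tokens[i] = rng.choice(MUTATIONS[tokens[i]])
    return " ".join(tokens)

rng = random.Random(42)
mutated = mutate("SELECT * FROM t WHERE a = 1 AND b = 2", rng)
```

Each mutated statement would then be executed while the tool records the code paths taken, so that a crash or wrong result can be traced back to the blocks only the failing input exercised.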

Motivation

There is a lot of uncertainty in chaos and fuzz testing of complicated software like TiDB.
Developers often spend a few weeks localizing the root cause.
This tool may help with that.

Estimated Time

90 days

Your RFC/Proposal?

#125

REQUEST: New membership for jyz0309

GitHub Username

@jyz0309

SIG you are requesting membership in

  • sig-name: sig-sql-infra
  • role: reviewer

Requirements

Sponsors

List of contributions to the SIG’s project

  • PRs reviewed / authored
  • Issues responded to
  • SIG projects I am involved with

Incubating Program: Modelized Data Interface (MDI)

Project Incubating Request

Modelized Data Interface (hereinafter referred to as MDI)

Describe the project you want to incubate:

Summary

We note that there is no dedicated data service tool in the TiDB ecosystem, i.e., a tool that makes it easy to obtain high-performance APIs for data manipulation (or access) on top of existing databases, in order to simplify (or replace) back-end development work without repeating the inefficient work of developing interfaces for third-party systems.

Motivation

This project mainly addresses the following issues.

  • Centralized data management. TiDB is naturally suitable for storing large amounts of data and becoming a data center, so managing that data requires dedicated tools. The functions currently designed for MDI include the management of data models and their versions, of model relationships, and of data access rights.
  • Flexible data query. Most automatic API generation tools on the market generate traditional RESTful APIs, which are less flexible and less usable. MDI therefore also supports custom queries such as GraphQL and OData to improve the flexibility of system queries and the usability of APIs, and really reduce the back-end development workload. In addition, to support more complex queries, a SQL wrapping function can wrap a SQL query segment into an API.
  • Flexible data processing. APIs can be combined: encapsulate simple business logic into a RESTful API by orchestrating create, read, update, and delete APIs with transactional semantics.
  • Alternatives to triggers. TiDB currently does not support triggers, so implementing related functions requires users to operate at the business level, which is tedious. MDI can provide a unified URL callback function to achieve a trigger-like effect.
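The trigger-substitute idea can be sketched as a callback registry in the data service layer: after every write, MDI would notify the URLs registered for that table. The `CallbackHub` class and its API below are invented for this sketch (a real implementation would POST to a URL; a plain function stands in here):

```python
from typing import Callable, Dict, List

# Hypothetical sketch of MDI's trigger-like URL callback: since TiDB has no
# triggers, the data service layer invokes registered hooks after each write.

class CallbackHub:
    def __init__(self):
        self._hooks: Dict[str, List[Callable]] = {}

    def register(self, table: str, hook: Callable) -> None:
        """Register a hook (stand-in for a callback URL) for one table."""
        self._hooks.setdefault(table, []).append(hook)

    def after_write(self, table: str, row: dict) -> None:
        """Called by the write path after a row change is committed."""
        for hook in self._hooks.get(table, []):
            hook(row)

hub = CallbackHub()
events = []
hub.register("orders", lambda row: events.append(("orders", row)))
hub.after_write("orders", {"id": 1, "status": "paid"})
```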

Estimated Time

6 Months

Your RFC/Proposal?

A POC version has been implemented internally in our company. It already has features for the automatic generation of RESTful, GraphQL, and OData APIs; the other features mentioned above are yet to be implemented. In addition, the code style and unit testing need improvement. We are now ready to open source this project and, with the power of the community, give it more life. Here is the open source code for this project: https://github.com/DigitalChinaOpenSource/MDI-kernel

Incubating Program: Talent Plan Courses

Incubating Program

Talent Plan Courses

Describe the feature or project you want to incubate:

Summary

The proposal is for Talent Plan Courses, a series of courses about computer systems and TiDB. The purpose of the courses is to help more people enter the field of TiDB or computer infrastructure.

Motivation

Since there are plenty of people interested in TiDB who lack the background knowledge, we propose creating a series of computer system courses to help them enter the field of the TiDB ecosystem and computer system infrastructure. Course participants should have developed good knowledge and skills in system programming after consuming the course materials.

Your RFC/Proposal?

#140

REQUEST: New membership for Tjianke

GitHub Username

@Tjianke

SIG you are requesting membership in

  • sig-name: sig-execution
  • role: reviewer

Requirements

Sponsors

List of contributions to the SIG’s project

Incubating Program: TiDB built-in SQL Diagnostics

Incubating Program

TiDB built-in SQL Diagnostics

Describe the feature or project you want to incubate:

Summary

Currently, TiDB diagnostic information acquisition relies mainly on external tools (perf/iosnoop/iotop/iostat/vmstat/sar/...), monitoring systems (Prometheus/Grafana), log files, HTTP APIs, and system tables provided by TiDB. The decentralized toolchains and cumbersome acquisition methods lead to a high barrier to using TiDB clusters, difficult operation and maintenance, failure to detect problems in advance, and failure to promptly investigate, diagnose, and recover clusters.
This proposal introduces a new method of acquiring diagnostic information in TiDB and exposes that information through system tables so that users can query it with SQL.

Motivation

This proposal mainly solves the following problems in TiDB's process of obtaining diagnostic information:

  • The toolchains are scattered; users need to switch back and forth between different tools, and some Linux distributions do not ship the corresponding tools, or ship unexpected versions of them.
  • The information acquisition methods are inconsistent: SQL, HTTP, exported monitoring, logging in to each node to view logs, and so on.
  • There are many TiDB cluster components, and correlating monitoring information between different components is inefficient and cumbersome.
  • TiDB has no centralized log management component, and there is no efficient way to filter, retrieve, analyze, and aggregate logs across the entire cluster.
  • System tables only contain the current node's information and do not reflect the state of the entire cluster, for example: SLOW_QUERY, PROCESSLIST, STATEMENTS_SUMMARY.

Once the multi-dimensional cluster-level system tables and the cluster's diagnostic rule framework are provided, the efficiency of cluster-wide information query, state acquisition, log retrieval, one-click inspection, and fault diagnosis will improve, and they will provide basic data for a subsequent abnormal early-warning function.
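The core mechanism behind a cluster-level system table can be sketched simply: the node serving the SQL query fans out to every instance, unions the per-node rows, and tags each row with its source instance. The table and column names below are illustrative, not TiDB's actual schema:

```python
# Illustrative sketch of turning per-node system-table rows into one
# cluster-wide view by adding an INSTANCE column.

def cluster_table(per_node_rows: dict) -> list:
    """Merge {instance_addr: [row, ...]} into one cluster-wide row list."""
    merged = []
    for instance, rows in per_node_rows.items():
        for row in rows:
            merged.append({"INSTANCE": instance, **row})
    return merged

nodes = {
    "tidb-0:10080": [{"SQL": "select 1", "LATENCY_MS": 3}],
    "tidb-1:10080": [{"SQL": "select 2", "LATENCY_MS": 7}],
}
rows = cluster_table(nodes)
```

With such a view, an ordinary SQL query can filter and aggregate diagnostic data across the whole cluster instead of node by node.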

Estimated Time

30 days

Your RFC/Proposal?

pingcap/tidb#13481

Incubating Program: Migrate Discourse's database from PostgreSQL to TiDB

Incubating Program

Migrate Discourse's database from PostgreSQL to TiDB

Describe the feature or project you want to incubate:

Summary

Migrate Discourse's database from PostgreSQL to MySQL first, then migrate from MySQL to TiDB.

Motivation

As https://asktug.com is running on Discourse, and Discourse is running on PostgreSQL, we want to migrate Discourse from PostgreSQL to TiDB, for:

  1. A big benefit to TiDB Community;
  2. Potential interaction with Discourse Community and Ruby On Rails Community;
  3. Eat our own dog food;

Estimated Time

30 days

Your RFC/Proposal?

#109

PingCAP Special Week 2019 Q4: Dumpling / Mydumper replacement for DM integration and Lightning performance

Dumpling / Mydumper replacement for DM integration and Lightning performance

Full RFC at #123.

Abstract

We propose introducing a library to replace Mydumper, code-named Dumpling, optimized for TiDB Lightning and usable as a library/plugin inside DM and TiDB, or as an independent program.

Problem statement

Mydumper is a third-party tool that dumps MySQL databases to the local filesystem as SQL dumps. TiDB Lightning relies on the output of Mydumper for importing into TiDB, and DM embeds Mydumper to quickly extract data from upstream.

Using Mydumper in the TiDB ecosystem has the following problems:

  • as a third-party tool, it does not match our development pace
  • Mydumper is licensed under GPLv3, which is not compatible with TiDB (Apache 2.0)

Therefore, we would like to replace Mydumper with our own tool, and develop new features on top of it, like

  • create a custom output format to reduce parsing effort and speed up Lightning
  • support dumping directly to cloud storage

Success criteria

  1. Replacement. Create a Go module with a CLI front-end that supports the Mydumper features required for DM and Lightning.

    • Resulting data are sorted by primary key
    • SQL files are split into sizes close to the given configuration
    • Single tables can be dumped in parallel, if a primary key or unique btree key exists
    • Consistency: dumping a snapshot instead of live data (either acquire a read lock or ignore new updates)
    • E2E test succeeds
    • Performance matching Mydumper
  2. Extension. Implement features that further help the ecosystem

    • Support an easy-to-decode output format
    • Support directly writing to cloud storage
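The "single tables can be dumped in parallel" criterion hinges on range splitting: when an integer primary key exists, the key span can be cut into even chunks and each chunk dumped by its own worker. This is only a sketch of the idea under that assumption, not Dumpling's actual chunking code:

```python
# Illustrative sketch: split an integer primary-key span into even ranges so
# that each worker dumps one WHERE-bounded slice of the table in parallel.

def pk_chunks(min_pk: int, max_pk: int, workers: int):
    """Yield (lo, hi) half-open ranges covering [min_pk, max_pk]."""
    span = max_pk - min_pk + 1
    step = (span + workers - 1) // workers  # ceiling division
    for lo in range(min_pk, max_pk + 1, step):
        yield lo, min(lo + step, max_pk + 1)

clauses = [
    f"WHERE id >= {lo} AND id < {hi}"
    for lo, hi in pk_chunks(1, 100, 4)
]
```

Dumping each range at the same snapshot timestamp keeps the parallel slices mutually consistent.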

TODO list

Phase 1: Essentials — 8000 points total

Phase 2: Features — 2000 points total

Difficulty

  • (Mixed)

Score

  • 10000

Mentor(s)

Recommended skills

  • Go language
  • Software architecture (structuring and API design)
  • Task scheduling strategy for parallel programs

Incubating Program: make PITR production-ready

Introduce

github: https://github.com/lvleiice/Better-PITR

PITR is an ecosystem tool for TiDB Binlog. By preprocessing TiDB's incremental backup files, PITR merges the changes to the same row of data to produce a new, lighter incremental backup file, which greatly reduces the time of incremental backup recovery and realizes fast PITR (Fast Point-in-Time Recovery).

For example

There is a table t1 whose schema is: create table t1 (id int primary key, name varchar(24)). Now we execute four SQL statements in TiDB:

insert into t1 values(1, "a");
insert into t1 values(2, "b");
update t1 set name = "c" where id = 1;
delete from t1 where id = 2;

These statements generate four binlog events; restoring them with the Reparo tool executes four SQL statements in the downstream database. These binlog events can actually be merged into a single insert into t1 values(1, "c"); which reduces the file to roughly a quarter of its size and restores about four times as fast. We can think of it simply: the binlog file produced by Drainer is compressed/preprocessed by PITR.
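The merge can be sketched by replaying row events per primary key and keeping only the final state (illustrative only; the event shape here is invented, and the real tool works on binlog format):

```python
# Minimal sketch of PITR's merge idea: replay row events keyed by primary
# key so that a run of binlog events collapses to one event per row.

def merge_events(events):
    """events: list of ('insert' | 'update' | 'delete', pk, row_or_None)."""
    state, existed_before = {}, {}
    for op, pk, row in events:
        if pk not in existed_before:
            # If the first event we see is not an insert, the row predates
            # the backup window.
            existed_before[pk] = (op != "insert")
        state[pk] = None if op == "delete" else row
    out = []
    for pk, row in state.items():
        if row is None:
            if existed_before[pk]:
                out.append(("delete", pk, None))
            # insert-then-delete within the window cancels out entirely
        else:
            out.append(("update" if existed_before[pk] else "insert", pk, row))
    return out

merged = merge_events([
    ("insert", 1, {"name": "a"}),
    ("insert", 2, {"name": "b"}),
    ("update", 1, {"name": "c"}),
    ("delete", 2, None),
])
```

For the four statements above, the result is a single insert of row 1 with name "c", matching the example.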

Current Situation

PITR is a Hackathon project, so it only implements the basic functionality, has some known problems, and lacks testing, so there may be more unknown problems. We need to solve the problems below to make PITR production-ready.

Bug

  • PITR needs to maintain the table structure information (to parse the binlog data) but will report an error if there is no database information in the DDL.

Performance

  • PITR retrieves TiDB's historical DDL information to build the table structures of the initial state, executing the DDL in PITR's built-in TiDB to retrieve the table structures. If the DDL history is large, initialization can be very slow. (defective design)

Test

  • Unit test coverage was 63.8%
  • No integration testing

Usability

  • PITR does not output meta information, such as the position up to which a file has been processed, or the time period during which the binlog was processed (the binlog commit ts at the beginning and at the end).

Estimated Time

3 developers × 7 days

Incubating Program: Plan Change Capturer

Project Incubating Request

Plan Change Capturer

Describe the project you want to incubate:

Summary

User story: as a user, I want to ensure there is no performance regression caused by SQL plan changes when updating TiDB.

This tool can detect SQL plan changes between different TiDB versions according to corresponding statistical information and schemas.

This tool is almost done (repo, tutorial), and it's time to make it an incubating project for long-term maintenance.

Estimated Time

30 days

Your RFC/Proposal?

The design doc, and tutorial.

Incubating Program: Cherry Bot

Incubating Program

Cherry Bot

Describe the feature or project you want to incubate:

Summary

Cherry Bot is a bot that helps you automate some work on GitHub. It provides many features.

Features:

  • Cherry Pick: the bot can automatically cherry-pick PRs to other branches when a PR gets merged.
  • Auto Merge: for PRs that have been reviewed and are waiting for CI to complete, the bot can merge them while you are away from the keyboard.
  • Label Contributor: the bot can add specified labels for contributors, which may help your project management.
  • Notice: the bot helps you find unlabeled issues and unreviewed or stale PRs.
  • Slack: the bot is integrated with Slack, so you can get the latest status from Slack.
  • More features: some features are not used as frequently, but they will bring surprises when you need them.

Motivation

Manual cherry-picking takes a lot of time, and waiting for CI to complete makes people tired. GitHub offers a good experience for manual usage, but we need some automation. We also want our repositories managed under consistent rules, which benefits maintenance and statistics. While working with GitHub, we always want features that GitHub does not offer, which motivated us to make the bot much stronger.

Estimated Time

2 months

Your RFC/Proposal?

#164

Incubating Feature: stream bulk insert in IoT scenario

Feature Incubating Request

In the IoT industry, integrate with YoMo to reduce the pressure and the cost of storing data in the cloud.

Describe the feature you want to incubate:

We are introducing YoMo, a streaming serverless framework, to the IoT industry. In this scenario, IoT devices/sensors generate data at high frequency; when hundreds of these data sources are connected, the write pressure forces our customers to put Kafka in front of the DB. As we know, cloud Kafka is expensive, resulting in a bad ROI for our customers. With YoMo, developers can implement a bulk insert into the backend database whenever 10,000 records arrive or every 30 seconds, like yomo-sink-faunadb-example.
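The flush-on-count-or-timeout buffering described above is a standard batching pattern; a minimal sketch (class and parameter names are invented, and `flush_fn` stands in for the real bulk INSERT) might look like:

```python
import time

# Illustrative stream bulk-insert buffer: accumulate rows and flush when
# 10,000 rows are buffered or 30 seconds have passed since the first row.

class BulkBuffer:
    def __init__(self, flush_fn, max_rows=10_000, max_age_s=30.0):
        self.flush_fn = flush_fn      # e.g. executes one multi-row INSERT
        self.max_rows = max_rows
        self.max_age_s = max_age_s
        self.rows, self.started = [], None

    def add(self, row):
        if not self.rows:
            self.started = time.monotonic()
        self.rows.append(row)
        if (len(self.rows) >= self.max_rows
                or time.monotonic() - self.started >= self.max_age_s):
            self.flush()

    def flush(self):
        if self.rows:
            self.flush_fn(self.rows)
            self.rows, self.started = [], None

batches = []
buf = BulkBuffer(batches.append, max_rows=3)  # tiny threshold for the demo
for i in range(7):
    buf.add({"sensor": i})
buf.flush()  # flush the trailing partial batch
```

Batching this way trades a bounded delay (at most `max_age_s`) for far fewer round trips to the database, which is the cost saving the proposal is after.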

By the way, YoMo was built atop the IETF QUIC protocol and designed as an event-first system. As gRPC is busy implementing support for HTTP/3, someday these two products may communicate over QUIC, because in IoT, QUIC shows a lot of advantages.

Incubating Program: Follower Read With Applied Index

Follower Read With Applied Index

Describe the feature or project you want to incubate:

Summary

This RFC proposes an improvement to getting snapshots on followers: if a read
request carries an applied index, the peer can get a snapshot locally without
any communication with its leader, as long as it has applied up to the given
index. This is useful for reducing latency if the cluster is deployed across
multiple datacenters.

Motivation

For clusters deployed across multiple datacenters, the system latency mainly
depends on the network RTT between datacenters. For example, suppose the PDs
are deployed in Beijing, and TiKVs are deployed in Beijing and Xian (for high
availability). If a client near Xian wants to read a region, it needs to get a
transaction timestamp from the PDs (in Beijing), and then send requests to
TiKVs in Beijing or Xian. In the latter case, TiKVs in Xian will internally
send read index requests to their leaders (in Beijing), which still involves
an RTT crossing datacenters.

So, if we can add some proxies in the major datacenter, and let them help
TiDBs in Xian get the transaction timestamp and the applied indices of all
target regions, the read latency across datacenters will be reduced from
2 RTTs to 1 RTT.
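The follower-side decision is simple; the sketch below shows the check, with invented function names (the real logic lives in TiKV's Rust code):

```python
# Illustrative follower-read check: if the request carries an applied index
# and the local peer has applied at least that far, it can serve the read
# locally; otherwise it falls back to a read-index round trip to the leader.

def can_read_locally(request_applied_index, local_applied_index):
    return (request_applied_index is not None
            and local_applied_index >= request_applied_index)

assert can_read_locally(90, 100)        # follower caught up: serve locally
assert not can_read_locally(105, 100)   # follower behind: ask the leader
assert not can_read_locally(None, 100)  # no index attached: normal read path
```

The proxy's job in the proposal is to hand the client both the timestamp and these applied indices in one cross-datacenter round trip, which is where the 2-RTT-to-1-RTT saving comes from.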

Your RFC/Proposal?

tikv/rfcs#32
tikv/rfcs#33

Tracking list

  • Article about the improvement (done)

TiKV tasks about follower read with applied index

  • RFC about follower read with applied index: tikv/rfcs#32 (in review)
  • Support follower read with applied index in TiKV: tikv/tikv#5972 (in review)
  • DCProxy for GetTsAndAppliedIndex (unstarted)

TiDB tasks about follower read with applied index

  • Support location labels in TiDB: pingcap/tidb#13296 (in review)
  • Support getting ts from DCProxy in TiDB (unstarted)

Tasks about follower replication

  • RFC about follower replication: tikv/rfcs#33 (in review)
  • follower replication implementation: tikv/raft-rs#249 (in review)
  • Introduce follower replication into TiKV (unstarted)

Incubating Program: KeyVisualizer: The visualized hotspot profiler

Incubating Program

KeyVisualizer: The visualized hotspot profiler

Describe the feature or project you want to incubate:

Summary

Add an enabled-by-default KeyVisualizer component to PD, providing a visual webpage that shows a key heatmap of the TiKV cluster, which is useful for troubleshooting and reasoning about inefficient application usage patterns. KeyVisualizer is accessible in-browser via a URL from PD.
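Conceptually, a key heatmap is built by bucketing access statistics into a time-by-keyrange matrix whose cell values drive the colors. The sketch below is illustrative only; bucket boundaries and the event shape are invented:

```python
# Illustrative heatmap aggregation: events are (timestamp, key, qps) samples;
# buckets are sorted lower bounds for the time axis and the key axis.

def heatmap(events, time_buckets, key_buckets):
    def bucket(value, bounds):
        i = 0
        while i + 1 < len(bounds) and value >= bounds[i + 1]:
            i += 1
        return i
    grid = [[0] * len(key_buckets) for _ in time_buckets]
    for ts, key, qps in events:
        grid[bucket(ts, time_buckets)][bucket(key, key_buckets)] += qps
    return grid

grid = heatmap(
    [(0, "a", 5), (0, "m", 2), (61, "a", 7)],
    time_buckets=[0, 60],         # two one-minute columns
    key_buckets=["a", "h", "p"],  # three key ranges
)
```

A hot key range then shows up as a bright horizontal band, which is exactly the pattern that is hard to spot in per-metric Grafana panels.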

Motivation

At present, if someone wants to troubleshoot a performance issue in TiKV, they will usually use existing diagnostic tools such as pd-ctl, Prometheus, and Grafana, which are hard to learn, unintuitive, and inefficient; they cannot recognize the application's access pattern, and it is hard to make correct judgments based on them.

Google BigTable has similar problems, so they provide a heatmap profiler called BigTable KeyVisualizer (Video), which users are recommended to try first when troubleshooting performance problems. After investigation, we realized that it has good features that also suit TiKV, so we added some of those features to our design.

Estimated Time

30 days

Your RFC/Proposal?

tikv/rfcs#36

REQUEST: New membership for @TszKitLo40

GitHub Username

@TszKitLo40

SIG you are requesting membership in

  • sig-name: sig-exec
  • role: reviewer

Requirements

Sponsors

List of contributions to the SIG’s project

Incubating Program: Add Apache Pulsar to tidb-binlog

Incubating Program

Add Apache Pulsar to tidb-binlog.

Describe the feature or project you want to incubate:

Summary

tidb-binlog is a very nice and efficient Change Data Capture (CDC) tool; this proposal introduces a new component, Apache Pulsar, to further improve the processing power of tidb-binlog.

Motivation

During Change Data Capture (CDC), we need to ensure the order of messages; that is, we need to ensure that DDL arrives before DML. Currently we use Kafka to preserve message order, but as you know, Kafka can only guarantee the order of messages within a single partition. If we need to expand the downstream data processing capabilities, we sometimes want more partitions to provide more processing power, but how do we then ensure the order of messages? This is a tricky question.

Here, please allow me to introduce a new feature of Apache Pulsar: Key_Shared subscriptions. They ensure that messages with the same key are delivered in order even when consumption is spread across multiple consumers, which sounds like a very nice feature and can help tidb-binlog improve processing performance. Pulsar also has other great features; please refer to Apache Pulsar.
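The key-based routing idea can be sketched as follows: messages are spread across consumers for parallelism, but every message sharing a key (for example, one table's DDL/DML stream) lands on the same consumer, so per-key order is preserved. This is an illustration of the concept, not Pulsar's implementation:

```python
# Illustrative key-based dispatch: route each key to a fixed consumer so
# per-key message order is preserved while different keys run in parallel.

def dispatch(messages, consumers):
    """messages: [(key, payload)]; returns one ordered queue per consumer."""
    assignment = {}
    queues = [[] for _ in range(consumers)]
    for key, payload in messages:
        if key not in assignment:
            assignment[key] = hash(key) % consumers
        queues[assignment[key]].append((key, payload))
    return queues

queues = dispatch(
    [("t1", "ddl-1"), ("t2", "dml-1"), ("t1", "dml-2")],
    consumers=2,
)
```

For tidb-binlog, keying by table (or schema) would keep each table's DDL-before-DML ordering while still scaling consumption out.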

This is one of the reasons I propose introducing Apache Pulsar; the other is that we can consider making tidb-binlog support pluggable interfaces, allowing support for Kafka, Pulsar, and other message queues, which gives users more choices.

Estimated Time

30 days

REQUEST: New membership for csuzhangxc

GitHub Username

@csuzhangxc

SIG you are requesting membership in

  • sig-name: sig-k8s
  • role: committer

Requirements

Sponsors

List of contributions to the SIG’s project

Incubating Program: TiDB with WebAssembly

Incubating Program

TiDB with WebAssembly

Describe the feature or project you want to incubate:

Summary

TiDB has a community, as does WebAssembly. As WebAssembly technology matures, more and more applications will be running in browsers, and users no longer have to endure cumbersome downloads and installation processes, which greatly reduces the cost of promoting these applications. Obviously users who are using TiDB for the first time are faced with these problems, so we can use WebAssembly technology to reduce the barriers to user engagement with TiDB, and we can also expand the TiDB audience with the help of the WebAssembly community.

Motivation

This proposal is expected to achieve these goals:

  • TiDB can work properly in browsers such as Chrome/Firefox on PC (done)
  • tour.tidb.io and play.tidb.io can be used as playgrounds (done)
  • support community projects such as tidb-wasm-markdown
  • TiDB can work properly in browsers on phones
  • TiDB can run in wasmer and can be deployed to a WebAssembly shell

This proposal mainly solves the following problems:

  • Compiling TiDB to WebAssembly runs into the problem that TiDB or third-party libraries may not be compatible with Wasm, so compilation fails and the code needs to be modified accordingly.
  • Before running TiDB's WebAssembly build in a browser, we should fix some issues:
    • There are some compatibility issues in Golang's current version
    • TiDB can't listen on a specific port or access files in the browser because of its security model
  • Before running TiDB's WebAssembly build on a phone, we should fix:
    • WebKit has a limit on executable memory, typically 64 MB, while the TiDB build is 76 MB
    • Browsers on phones expose different events than those on PCs, so the SQL terminal must be compatible with both
  • Before running with wasmer, TiDB should be compiled to WASI, however:
    • Golang doesn't support WASI very well, so we must use a fork of Golang
    • The TiDB build is very large, and we should make it smaller

Estimated Time

30 days

Your RFC/Proposal?

pingcap/tidb#13570

TiDB Design Document Workflow

Feature Incubating Request

We have an entry about how to write proposals in this directory, but the content is quite out of date. As the TiDB community grows rapidly, we'd better set up a clear design document workflow that fits today's situation.

Describe the feature you want to incubate:

Propose a TiDB Design Document Workflow and update TiDB design doc content.

Correspondingly, update the contributors guide to reconcile the description, and follow up by considering updating the content of the community RFC (why do we have so much overlapping RFC content?).

Your RFC/Proposal?

Incubating Project: DB Mesh For TiDB

Summary

The high-availability and load-balanced access to TiDB recommended by PingCAP is achieved with load balancing software or hardware such as LVS, HAProxy, and F5. Although this solves the load balancing problem, it adds a layer of network overhead. LVS, one of the more widely used four-layer load balancers, has not been updated for a long time. HAProxy provides both four- and seven-layer load balancing, but with slightly lower performance than LVS. F5 is a commercial load balancing product with a high cost. Therefore, we want to introduce a new MySQL access solution, called DB Mesh, to solve the access problem of TiDB. This project was our in-house solution for MySQL access requirements, but considering that TiDB also implements the standard MySQL protocol, only small modifications are needed to connect to TiDB. We therefore decided to open source this project through project incubation, so that more users can enjoy this product and, at the same time, a better-honed product can be provided.

Motivation

DB Mesh is an access service for cloud-native databases (TiDB, MySQL), mainly targeting the current mainstream Kubernetes or other virtualized application deployment environments, and it provides data proxy services in the sidecar style. It provides a mesh layer that interacts with the database and is a zero-intrusion solution for the application program; it solves problems such as the coupling of components and applications, the complexity of operation and maintenance, and the network overhead of load balancing. The product will achieve read-write load balancing, read-write separation, hot process updates, data source management, detailed SQL capture, a SQL firewall, and other functions by taking advantage of its position at the entry point of data traffic.

Labor cost

2 person-months

Estimated time required for project delivery

The first phase will need 2-3 months, mainly focusing on code review, function development, and standardization of the existing code to achieve a high open source standard.

Architecture & Deployment

  • Architecture

Architecture

  • Deployment
    Process
    k8s

Core function (Phase I)

Feature | Status | Progress
Load balancing | To be verified | Running
Read and write separation | Achieved | Running
Data source configuration management | Need to develop | Running
Connection pool management | Need to develop | Running
Configure hot swap | Achieved | Running
Failover | Achieved | Running
Reporting of monitoring information | Need to develop | Running
Multi-process, multi-version operation on the same port | Achieved | Running
Slow SQL statistics | Achieved | Running
Daemon and management API interface | Need to develop | Running
Memory OOM management | New feature | Running

REQUEST: New membership for @qiancai

GitHub Username

@qiancai

SIG you are requesting membership in

  • sig-name: Docs SIG
  • role: active contributor

Requirements

Sponsors

List of contributions to the SIG’s project

REQUEST: New membership for zimulala

GitHub Username

@zimulala

SIG you are requesting membership in

  • sig-name: sig-exec
  • role: committer

Requirements

Sponsors

List of contributions to the SIG’s project

  • PRs 300+ authored
  • SIG projects I am involved with: sig-sql-infra

Incubating Program: TiUP graduates from pingcap-incubator

Project Incubating Request

Describe the project you want to incubate:

The project https://github.com/pingcap-incubator/tiup has been developed for almost three months and has been generally available since 2020-05-28.

This PR proposes transferring TiUP from pingcap-incubator to pingcap.

Your RFC/Proposal?

Rename pingcap-incubator to tidb-incubator

The mission of PingCAP Incubator is to make the TiDB ecosystem and community more active and let the TiDB community get more contributions from developers and users. We believe that the name "PingCAP Incubator" cannot express this mission, so we are going to rename it to "TiDB Incubator".

REQUEST: New membership for @dragonly

GitHub Username

@dragonly

SIG you are requesting membership in

Docs SIG committer

Requirements

Sponsors

List of contributions to the SIG’s project

Auto-tune RocksDB

(This is an example issue of TiDB Performance Challenge Program. Please don't open/assign this issue)

  • Description: TiKV heavily depends on RocksDB, but RocksDB has many configurations and it is hard to choose proper values in production. The goal for this section is to auto-tune RocksDB in real-time for different workloads.
  • Recommended Skills: Rust, RocksDB
  • Mentor(s): Yi Wu (@yiwu-arbug)
  • Issue: tikv/tikv#4052
  • Peer Bonus: 1000

REQUEST: New membership for @Yisaer

GitHub Username

@Yisaer

SIG you are requesting membership in

  • sig-name: sig-sql-infra
  • role: reviewer

Requirements

Sponsors

List of contributions to the SIG’s project

  • PRs reviewed / authored
  • Issues responded to

REQUEST: New membership for handlerww

GitHub Username

@handlerww

SIG you are requesting membership in

  • sig-name: sig-k8s
  • role: committer

Requirements

Sponsors

List of contributions to the SIG’s project

Incubating Program: tiup-dm

Project Incubating Request

tiup-dm

Describe the project you want to incubate:

Summary

Currently, DM is deployed through Ansible, which is less easy to use than TiUP. Deploying components with the TiUP ecosystem is simpler and is the trend after version 4.0. tiup-dm allows DM to be deployed through TiUP components.

Motivation

  • Launch an incubation project for tiup-dm.
  • Make DM deployable through TiUP.

Estimated Time

30 days

Your RFC/Proposal?

#206

REQUEST: New membership for @wshwsh12

GitHub Username

@wshwsh12

SIG you are requesting membership in

  • sig-name: sig-coprocessor
  • role: reviewer

Requirements

Sponsors

List of contributions to the SIG’s project

  • 11 PRs authored
  • SIG projects I am involved with: tikv

ti-community-bot's SIG committer promotion check is too strict

Currently, ti-community-bot requires the following roles to /lgtm on a SIG's membership.json file:

  1. modifying "activeContributor" requires one /lgtm from reviewers (or above)
  2. modifying "reviewers" requires one /lgtm from committers (or above)
  3. modifying "committers" requires two /lgtm from maintainers
  4. modifying "coLeaders" and "techLeaders" requires two /lgtm from maintainers

The permission checks for 1, 2 and 4 are mostly consistent with existing SIG promotion rules, but for 3 it is too strict for most SIGs.

Let's check the existing roles-and-organization-management.md of all SIGs:

Basically, other than SIG-Diagnosis, all SIGs only require approval from 2 existing committers or maintainers. Restricting /lgtm to maintainers is inconsistent with the existing rules. So either the bot should be changed to follow the common denominator of roles-and-organization-management.md, or all roles-and-organization-management.md files should be changed to align with the bot.
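The four rules above can be sketched as a small lookup table. This is a hypothetical reconstruction of the bot's check, not its actual code; the field names follow membership.json, but the logic is an assumption.

```python
# Hypothetical sketch of ti-community-bot's permission check described above.
# Field names mirror membership.json; everything else is illustrative.

ROLE_ORDER = ["activeContributor", "reviewers", "committers", "maintainers"]

# field -> (required /lgtm count, minimum role that may give it)
RULES = {
    "activeContributor": (1, "reviewers"),
    "reviewers": (1, "committers"),
    "committers": (2, "maintainers"),   # the rule this issue argues is too strict
    "coLeaders": (2, "maintainers"),
    "techLeaders": (2, "maintainers"),
}

def can_merge(field, lgtm_roles):
    """True if the /lgtm comments (given as role names) satisfy the rule."""
    needed, min_role = RULES[field]
    min_rank = ROLE_ORDER.index(min_role)
    qualified = [r for r in lgtm_roles if ROLE_ORDER.index(r) >= min_rank]
    return len(qualified) >= needed

# Under rule 3, two committer /lgtm are not enough to modify "committers":
print(can_merge("committers", ["committers", "committers"]))    # False
print(can_merge("committers", ["maintainers", "maintainers"]))  # True
```

Under the common SIG rules, the `min_role` for `"committers"` would be `"committers"` rather than `"maintainers"`, which is exactly the relaxation this issue proposes.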

REQUEST: New membership for @Reminiscent

GitHub Username

Reminiscent

SIG you are requesting membership in

  • sig-name: sig-planner
  • role: committer

Requirements

Sponsors

List of contributions to the SIG’s project

Incubating Program: Dynamic Configuration Change

Incubating Program

Dynamic Configuration Change

Describe the feature or project you want to incubate:

Summary

This proposal describes a unified way to manage the configuration options of TiDB, TiKV, and PD by storing them in PD, and supports dynamically changing those options in the same way across the different components, which can greatly improve usability.

Motivation

Here are some reasons why we need to do it:

  • For now, each component in a TiDB cluster has its own configuration file, which is hard to manage. We need a unified way to manage the configuration options of all components.
  • Although some configuration options support dynamic modification, operators need to learn a lot to use them properly since we have multiple entry points, e.g., pd-ctl, tikv-ctl, and SQL, resulting in poor usability. For better usability, we should provide a unified way to modify them dynamically.
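The unified approach can be sketched as a single store with change notification. This is a minimal illustration of the idea, not the proposal's design: in the real proposal the store is PD, and the class and key names below are invented.

```python
# Minimal sketch of a unified configuration store as motivated above: every
# component's options live in one place (PD in the proposal), keyed by
# component, and are changed through a single entry point. Names are made up.

class ConfigStore:
    """Stands in for the PD-backed store; purely illustrative."""
    def __init__(self):
        self._data = {}          # {(component, key): value}
        self._watchers = []      # callbacks notified on every change

    def set(self, component, key, value):
        self._data[(component, key)] = value
        for notify in self._watchers:
            notify(component, key, value)   # components pick up changes live

    def get(self, component, key):
        return self._data[(component, key)]

    def watch(self, callback):
        self._watchers.append(callback)

store = ConfigStore()
store.watch(lambda c, k, v: print(f"{c}: {k} -> {v}"))
# One entry point for every component, instead of pd-ctl / tikv-ctl / SQL:
store.set("tikv", "raftstore.sync-log", "true")
store.set("pd", "schedule.leader-schedule-limit", "8")
```

The watch mechanism is what makes the change *dynamic*: a component subscribes once and applies updates without a restart.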

Your RFC/Proposal?

pingcap/tidb#13660

Incubating Project: DB Proxy/Mesh for TiDB

Project Incubating Request

A Proxy/Mesh for TiDB that can replace Layer 4 load balancing such as SLB + HAProxy. It works in Proxy or Mesh (like Envoy) mode but understands SQL semantics. We call it Weir.

Describe the project you want to incubate:

Summary

TiDB has been adopted by PalFish (伴鱼) since 2016 and is the most important OLTP/OLAP database in our company. We previously used SLB + HAProxy as the Layer 4 load balancer, but we want more features in the L4 proxy to keep TiDB safe.

Motivation

This proposal will offer the following features:

  • Auto-detected rate limiting and circuit breaking.
  • Multi-tenancy.
  • Service discovery with a smart client.
  • Connection pooling.
  • Database Mesh for TiDB.
  • Web Application Firewall (WAF) for SQL.
  • More metrics, such as a tenant dashboard with hot/slow SQL fingerprints.
  • Routing across multiple data centers.
  • SQL audit, slow SQL statistics, session statistics, shadow databases for benchmarking, and so on.

Estimated Time

6 months

Initial Team Members

徐成选(cx3ptr)
曹东瑜(eastfisher)

Your RFC/Proposal?

Incubating Program: odin

Incubating Program

Odin

Describe the feature or project you want to incubate:

Summary

Odin is a data generation/loading tool that helps you quickly generate and load data into a TiDB cluster.

Features:

  • quickly generate TPC-C/TPC-H data in CSV format (SST format will be supported in the future)
  • quickly load data into a TiDB cluster (by wrapping Lightning)

Motivation

Right now, benchmarking is hard for many people: they have to spend hours generating data before they can run a benchmark. We want to speed this up with fast generation and fast loading into a TiDB cluster.
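As an illustration of the "fast generate" half, here is a toy CSV generator. This is not Odin's code; the table shape and column names are invented, and real TPC-C/TPC-H schemas are far richer.

```python
# Toy sketch of CSV data generation in the spirit of Odin. The schema below
# is made up for illustration, not TPC-C's real customer table.
import csv
import io
import random
import string

def random_name(rng, n=8):
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(n))

def generate_customers(rows, seed=0):
    """Write `rows` customer records as CSV, ready for a bulk loader."""
    rng = random.Random(seed)     # deterministic, so benchmarks are repeatable
    buf = io.StringIO()
    writer = csv.writer(buf)
    for cid in range(1, rows + 1):
        writer.writerow([cid, random_name(rng), rng.randint(0, 10_000)])
    return buf.getvalue()

data = generate_customers(3)
print(data)
```

In practice a generator like this writes many CSV files in parallel, and the "fast load" half hands them to Lightning for ingestion.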

Estimated Time

2 months

Incubating Program: discourse theme asktug

Project Incubating Request

discourse-theme-asktug

Describe the project you want to incubate:

Summary

discourse-theme-asktug is a custom theme used on AskTUG.

Motivation

Customizing Discourse is hard to call either simple or not simple. You can override pages with vanilla JavaScript and CSS. That is simple, but it is not very customizable and it destroys the structure of the page.

If not, you need to dive into the Discourse source code. You will need to learn such things:

That way is not simple, but it embeds code seamlessly.

The AskTUG theme uses the second way, making all overrides highly componentized.

This theme is just getting started; more features will be added in the future.

Estimated Time

1 month.

Your RFC/Proposal?

#202

Incubating Program: Talent Plan Courses - Distributed Database System

Incubating Program

Talent Plan Courses - Distributed Database System

Describe the feature or project you want to incubate:

Summary:

As a part of Talent Plan Courses, this program is mainly about creating a distributed database system course; both readings and assignments will be covered.

Motivation

There are already many open study resources for database systems and distributed systems, but the two topics are rarely strongly connected. We think a well-organized course is necessary to help more students get in touch with the TiDB ecosystem. The course will consist of readings and assignments. The readings could be derived from textbooks and notes from other courses. Even though there are many programming scaffolds for database systems and distributed systems, they are not strongly connected either, and C++ and Java are not easy for young students to master. The new framework will be built with Go, which is simpler and is also the horsepower of the TiDB ecosystem.

Your RFC/Proposal?

#129

Incubating Project: TiDE

Project Incubating Request

Describe the project you want to incubate:

Original RFC in TiDB Hackathon 2020

TiDE is a TiDB IDE project, which won second prize in TiDB Hackathon 2020.

Motivation

TiDB, as a distributed database system, is a very complicated project. Community newbies face a steep learning curve before they can contribute to any of the TiDB-related open source projects (TiDB/TiKV/PD/Tools). Contributors must hand-craft tons of customized scripts to get even the basic things done, like compiling a new tikv-server binary, distributing it to servers, restarting the cluster, and harvesting logs for debugging.

We propose a new way of solving these issues by incubating an IDE project called TiDE (IDE for TiDB, also homophonic to "tide"), which is aimed at improving the TiDB development experience.

Your RFC/Proposal?

#395

CONTRIBUTING.md is out of date

The contributing.md file here mentions Go 1.9 and links to Go 1.8 as well, while the current requirement is Go 1.12. It also has out-of-date advice about dependency management.

I think it makes sense to change the contributing.md in pingcap/tidb to be very short with just a link to this location, so there is only one copy that needs to be maintained.

This should help keep it more up-to-date in future.

REQUEST: New membership for @qiancai

GitHub Username

@qiancai

SIG you are requesting membership in

  • sig-name: Docs SIG
  • role: Reviewer

Requirements

Sponsors

List of contributions to the SIG’s project

Incubating Project: BigData connectors for TiDB

Project Incubating Request

Describe the project you want to incubate:

Implement fully featured and performant TiDB connectors for PrestoDB, PrestoSQL, Flink, and miscellaneous other big data systems on top of the TiSpark project.

Incubating Project: TIS - Solr based enterprise level, one-stop search center products

Project Incubating Request

TIS: a Solr-based, enterprise-level, one-stop search center product with high performance, high reliability, and high scalability

Describe the project you want to incubate:

Summary

Use TIS to quickly build an enterprise search service. TIS includes three components:

  • offline index building platform
    Data is exported from TiKV through full table scans, and the wide table is constructed either by a local MapReduce tool or directly by TiSpark.
  • incremental real-time channel
    Changes are transmitted downstream to Kafka through Drainer, processed by Flink real-time stream computation, and submitted to the search engine, ensuring that the data in the search engine and TiDB stay consistent in near real time.
  • search engine
    Currently based on Solr 8.

TIS integrates these components seamlessly and brings users a one-stop, out-of-the-box search product.

Motivation

This proposal mainly realizes the following steps in the execution process of search products based on TiDB:

  • Define the data source process and select the source tables in TiKV
  • Build wide-table rules based on the data tables selected in the previous step
  • Define the index schema from the wide-table structure, and select the machine nodes on which to build the index instances
  • Obtain TiDB's binlog messages via Drainer, and synchronize the incremental stream computation channel via Flink
  • Write scripts for automated platform installation based on Ansible

Estimated Time

90 days

Attach file

TIS-TiDB.pdf

Your RFC/Proposal?

Incubating Program: Official recommended topologies and configurations to use TiDB

Incubating Program

Official recommended topologies and configurations to use TiDB

Describe the feature or project you want to incubate:

Summary

Given a machine structure, type, and configuration, determine the corresponding TiDB benchmark values and give some configuration suggestions for TiDB.

Motivation

When users need to use TiDB as their database, they often face the question of how many machines to use and which machine configurations and topology to choose. At present, we do not give an official answer, which may leave users unsure what kind of configuration meets their current needs. This proposal mainly aims to solve this selection problem.

Estimated Time

60 days

REQUEST: New membership for @handlerww

GitHub Username

handlerww

SIG you are requesting membership in

  • sig-name: sig-docs
  • role: reviewer

Requirements

Sponsors

List of contributions to the SIG’s project
