TBase is an advanced enterprise-level database management system based on prior work from the Postgres-XL project. It supports an extended subset of the SQL standard, including transactions, foreign keys, and user-defined types and functions. In addition, it adds parallel computing, security, management, audit, and other capabilities.
TBase offers many language interfaces similar to PostgreSQL's; many of them are listed here:
https://www.postgresql.org/download
A TBase cluster consists of multiple CoordinateNodes, DataNodes, and GTM nodes. All user data resides in the DataNodes, the CoordinateNodes contain only metadata, and the GTM handles global transaction management. The CoordinateNodes and DataNodes share the same schema.
Users always connect to a CoordinateNode, which divides each query into fragments, executes them on the DataNodes, and collects the results.
The latest version of this software may be obtained at:
http://github.com/Tencent/TBase
For more information look at our website located at:
http://tbase.qq.com
cd ${SOURCECODE_PATH}
rm -rf ${INSTALL_PATH}/tbase_bin_v2.0
chmod +x configure*
./configure --prefix=${INSTALL_PATH}/tbase_bin_v2.0 --enable-user-switch --with-openssl --with-ossp-uuid CFLAGS=-g
make clean
make -sj
make install
chmod +x contrib/pgxc_ctl/make_signature
cd contrib
make -sj
make install
Use the PGXC_CTL tool to build a cluster, for example a cluster with one global transaction management node (GTM), one coordinator (COORDINATOR), and two data nodes (DATANODE).
Install pgxc and add the pgxc installation path to the environment variables.
PG_HOME=${INSTALL_PATH}/tbase_bin_v2.0
export PATH="$PATH:$PG_HOME/bin"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$PG_HOME/lib"
export LC_ALL=C
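As a quick sanity check after the exports above, a short shell sketch can confirm that the bin directory really ended up on PATH (the fallback path /opt/tbase is only an illustrative default for when ${INSTALL_PATH} is unset):

```shell
# Verify that the TBase bin directory is on PATH after the exports above.
# ${INSTALL_PATH} is a placeholder; default to a dummy path for illustration.
PG_HOME="${INSTALL_PATH:-/opt/tbase}/tbase_bin_v2.0"
export PATH="$PATH:$PG_HOME/bin"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$PG_HOME/lib"

# POSIX-style membership test on the colon-separated PATH.
case ":$PATH:" in
    *":$PG_HOME/bin:"*) path_ok=yes ;;
    *)                  path_ok=no  ;;
esac
echo "bin on PATH: $path_ok"
```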
Set up passwordless SSH login between the machines on which the cluster nodes will be installed; deploy and init will SSH to each node's machine. Once this is in place, no password prompt will appear.
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub destination-user@destination-server
Generate and fill in the configuration file pgxc_ctl.conf. The pgxc_ctl tool can generate a template for this file, in which you fill in the cluster node information. After the pgxc_ctl tool starts, a pgxc_ctl directory is created in the current user's home directory. After entering the "prepare config" command, a configuration file template that can be modified directly is generated in the pgxc_ctl directory.
- The pgxcInstallDir at the beginning of the configuration file refers to the pgxc installation location. The database user can set it as needed.
pgxcInstallDir=${INSTALL_PATH}/tbase_bin_v2.0
- For GTM, you need to configure the node name, IP, port and node directory.
#---- GTM ----------
gtmName=gtm
gtmMasterServer=xxx.xxx.xxx.1
gtmMasterPort=50001
gtmMasterDir=${GTM_MASTER_DATA_DIR}/data/gtm_master
- If you do not need gtmSlave, you can directly set it to 'n' in the configuration of the corresponding node.
gtmSlave=n
If you need gtmSlave, configure it according to the instructions in the configuration file.
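If you do need a standby GTM, the slave section follows the same pattern as the master entries above. A sketch (the host, port, and directory values are illustrative, and the exact key names should be checked against the generated template):

```shell
# Hypothetical gtmSlave section of pgxc_ctl.conf; values are examples only.
gtmSlave=y
gtmSlaveName=gtm_slave
gtmSlaveServer=xxx.xxx.xxx.5
gtmSlavePort=50002
gtmSlaveDir=${GTM_SLAVE_DATA_DIR}/data/gtm_slave
```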
- Coordinator node, which needs to be configured with IP, port, directory, etc.
coordNames=(cn001)
coordMasterCluster=(tbase_cluster)
coordPorts=(30004)
poolerPorts=(30014)
coordPgHbaEntries=(0.0.0.0/0)
coordMasterServers=(xxx.xxx.xxx.2)
coordMasterDirs=(${COORD_MASTER_DATA_DIR}/data/cn_master/cn001)
- Data nodes, similar to the above: IP, port, directory, etc. (Since there are two data nodes, configure as many entries as there are nodes.)
primaryDatanode=dn001
datanodeNames=(dn001 dn002)
datanodePorts=(20008 20009)
datanodePoolerPorts=(20018 20019)
datanodeMasterCluster=(tbase_cluster tbase_cluster)
datanodePgHbaEntries=(0.0.0.0/0)
datanodeMasterServers=(xxx.xxx.xxx.3 xxx.xxx.xxx.4)
datanodeMasterDirs=(${DATANODE_MASTER_DATA_DIR}/data/dn_master/dn001 ${DATANODE_MASTER_DATA_DIR}/data/dn_master/dn002)
There are coordSlave and datanodeSlave entries corresponding to the coordinator and data nodes. If you do not need them, set them to 'n'; otherwise, configure them according to the configuration file.
In addition, two kinds of ports, poolerPort and port, need to be configured for each coordinator and datanode. poolerPort is used by a node to communicate with other nodes; port is the port used to log in to the node. poolerPort and port must be configured with different values, otherwise there will be conflicts and the cluster cannot start. Each node needs its own directory; nodes cannot be created in the same directory.
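Since overlapping poolerPort and port values prevent the cluster from starting, a small shell check (using the example port values from the configuration fragments above) can catch collisions before running init:

```shell
# Collect every port from the example pgxc_ctl.conf arrays and flag duplicates.
coordPorts=(30004)
poolerPorts=(30014)
datanodePorts=(20008 20009)
datanodePoolerPorts=(20018 20019)

all_ports=("${coordPorts[@]}" "${poolerPorts[@]}" \
           "${datanodePorts[@]}" "${datanodePoolerPorts[@]}")
# uniq -d prints only values that appear more than once.
dupes=$(printf '%s\n' "${all_ports[@]}" | sort | uniq -d)
if [ -n "$dupes" ]; then
    echo "port conflict(s): $dupes"
else
    echo "no port conflicts"
fi
```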
Distribute the installation package (deploy all). After filling in the configuration file, run the pgxc_ctl tool and enter the "deploy all" command to distribute the installation package to each node's machine.
Initialize the cluster nodes (init all). After distribution completes, enter the "init all" command in the pgxc_ctl tool to initialize all the nodes listed in pgxc_ctl.conf and start the cluster. At this point, the cluster is up.
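The deploy and init steps can also be scripted. A hedged sketch (the command strings come from the text above; the wrapper only invokes pgxc_ctl when the tool is actually installed, so nothing is assumed about its behavior here):

```shell
# Run a pgxc_ctl command if the tool is on PATH, otherwise just report it.
run_step() {
    if command -v pgxc_ctl >/dev/null 2>&1; then
        pgxc_ctl "$1"            # e.g. pgxc_ctl "deploy all"
    else
        echo "pgxc_ctl not found, would run: $1"
    fi
}

run_step "deploy all"   # distribute the installation package to every node host
run_step "init all"     # initialize all nodes from pgxc_ctl.conf and start the cluster
```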
$ psql -h ${CoordinateNode_IP} -p ${CoordinateNode_PORT} -U ${pgxcOwner} -d postgres
postgres=# create default node group default_group with (dn001,dn002);
CREATE NODE GROUP
postgres=# create sharding group to group default_group;
CREATE SHARDING GROUP
postgres=# create table foo(id bigint, str text) distribute by shard(id);
https://github.com/Tencent/TBase/wiki/1%E3%80%81TBase_Quick_Start
TBase is licensed under the BSD 3-Clause License. Copyright and license information can be found in the file LICENSE.txt.
Issues
When running BenchmarkSQL 5.0, a duplicate key error pops up.
01:15:45,792 [Thread-30] ERROR jTPCCTData : Unexpected SQLException in NEW_ORDER
01:15:45,792 [Thread-30] ERROR jTPCCTData : ERROR: node:dn001, backend_pid:29859, nodename:dn001,backend_pid:29859,message:duplicate key value violates unique constraint "bmsql_oorder_pkey"
Detail: Key (o_w_id, o_d_id, o_id)=(5, 10, 3005) already exists.
org.postgresql.util.PSQLException: ERROR: node:dn001, backend_pid:29859, nodename:dn001,backend_pid:29859,message:duplicate key value violates unique constraint "bmsql_oorder_pkey"
Detail: Key (o_w_id, o_d_id, o_id)=(5, 10, 3005) already exists.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2198)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1927)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:561)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:419)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:365)
at jTPCCTData.executeNewOrder(jTPCCTData.java:410)
at jTPCCTData.execute(jTPCCTData.java:95)
at jTPCCTerminal.executeTransactions(jTPCCTerminal.java:261)
at jTPCCTerminal.run(jTPCCTerminal.java:88)
at java.lang.Thread.run(Thread.java:748)
Recursive queries do not support functions.
It used to work in PG-XL.
The small patch is in our branch, but it does not check whether the function is immutable/stable/volatile. A proper fix should probably check this.
Compilation produces unused-variable warnings
Environment:
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.4.1708 (Core)
Release: 7.4.1708
Codename: Core
gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC)
TBase: https://github.com/Tencent/TBase
Build steps:
./configure --prefix=${INSTALL_PATH}/tbase_bin_v2.0 --enable-user-switch --with-openssl --with-ossp-uuid CFLAGS=-g
make clean
make -sj
Is global_snapshot_source='coordinator' supported?
It looks like it causes xid to always be 0; whether set on a coordinator or a datanode, it renders the system unusable with messages like
ERROR: XX000: start timestamp 0 is too old to execute, recentCommitTs 1840649352
The QR code in the wiki and in #20 has expired
Could you please post a new group QR code? Thanks!
During make -sj, an error occurs:
cat: ../../src/interfaces/libpq/objfiles.txt: No such file or directory
During pgxc_ctl init all, the following messages can be seen:
2019-11-11 10:55:21.307 CST [794] LOG: could not open file "global/pg_crypt_key.map":No such file or directory for bufFile merging.
2019-11-11 10:55:21.307 CST [794] LOG: could not open file "global/pg_rel_crypt.map":No such file or directory for bufFile merging.
ALTER NODE coord1 WITH (HOST='host228', PORT=20004, CLUSTER='cluster1');
ERROR: CLUSTER could not be modified.
EXECUTE DIRECT ON (datanode1) 'ALTER NODE datanode1 WITH (TYPE=''datanode'', HOST=''host228'', PORT=20008, CLUSTER=''cluster1'', PRIMARY, PREFERRED)';
ERROR: node:datanode1, backend_pid:2870, nodename:datanode1,backend_pid:2870,message:CLUSTER could not be modified.
After installation:
postgres=# select * from pgxc_node;
node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id | node_cluster_name
-----------+-----------+-----------+-----------+----------------+------------------+-------------+-------------------
coord1 | C | 5432 | localhost | f | f | 1885696643 | tbase_cluster
gtm | G | 20001 | host227 | t | f | 428125959 | tbase_cluster
coord2 | C | 20005 | host229 | f | f | -1197102633 | cluster1
datanode1 | D | 20008 | host228 | t | t | 888802358 | cluster1
datanode2 | D | 20009 | host229 | f | f | -905831925 | cluster1
(5 rows)
postgres=# create database test;
ERROR: PGXC Node coord1: object not defined
Does TBase support UPDATE ... LIMIT statements?
Our company's TBase database is currently based on Postgres 10.0 and the business is still in development/testing, so it may not be TBase in the true sense; it does not support UPDATE LIMIT for now, but some business scenarios need UPDATE LIMIT.
Query crashing the server, -Nan..NaN cost. Works on PG-XL.
The query works well on XL, but on TBase it crashes the server:
run is replicated, rundataspec is distributed. Statistics are up to date.
select r.runid,
       r.runname,
       r.runmetadata->>'runRequesterUserName'::text username,
       r.size,
       r.state::text,
       sum(stat.processed)::bigint processed,
       sum(stat.total)::bigint total,
       r.creationdate,
       --max(stat.lastupdate)::timestamp with time zone lastupdate,
       r.lastupdatedate
from run r
right join (select status,
                   count(*) total,
                   count(*) filter ( where status = 4 ) processed,
                   rd.runid
            from rundataspec rd
            where rd.runid in (select rr.runid
                               from run rr
                               where rr.creationdate >= now() - interval '1 days'
                                 and ( rr.lastupdatedate is null
                                       or rr.lastupdatedate >= now() - interval '1 days'))
            group by status, rd.runid) stat on (stat.runid = r.runid)
group by r.runid, r.size, r.state::text, r.creationdate
order by r.creationdate desc;
and the plan
Remote Subquery Scan on all (datanode1,datanode10,datanode11,datanode12,datanode2,datanode3,datanode4,datanode5,datanode6,datanode7,datanode8,datanode9)  (cost=-nan..-nan rows=726 width=136)
  ->  Sort  (cost=-nan..-nan rows=726 width=136)
        Sort Key: r.creationdate DESC
        ->  Finalize GroupAggregate  (cost=-nan..-nan rows=726 width=136)
              Group Key: r.runid, ((r.state)::text)
              ->  Sort  (cost=-nan..-nan rows=726 width=0)
                    Sort Key: r.runid, ((r.state)::text)
                    ->  Remote Subquery Scan on all (datanode1,datanode10,datanode11,datanode12,datanode2,datanode3,datanode4,datanode5,datanode6,datanode7,datanode8,datanode9)  (cost=22768.39..-nan rows=726 width=0)
                          Distribute results by S: runid
                          ->  Partial GroupAggregate  (cost=22668.39..-nan rows=726 width=325)
                                Group Key: r.runid, ((r.state)::text)
                                ->  Sort  (cost=22668.39..22670.67 rows=1525 width=277)
                                      Sort Key: r.runid, ((r.state)::text)
                                      ->  Hash Left Join  (cost=22388.95..22620.01 rows=1525 width=277)
                                            Hash Cond: (rd.runid = r.runid)
                                            ->  Finalize HashAggregate  (cost=22195.26..22271.51 rows=1525 width=22)
                                                  Group Key: rd.status, rd.runid
                                                  ->  Remote Subquery Scan on all (datanode1,datanode10,datanode11,datanode12,datanode2,datanode3,datanode4,datanode5,datanode6,datanode7,datanode8,datanode9)  (cost=22109.86..22190.69 rows=1525 width=0)
                                                        Distribute results by S: runid
                                                        ->  Partial HashAggregate  (cost=22009.86..22086.11 rows=1525 width=22)
                                                              Group Key: rd.status, rd.runid
                                                              ->  Nested Loop  (cost=0.07..21977.67 rows=10733 width=6)
                                                                    ->  Seq Scan on run rr  (cost=0.00..162.83 rows=1 width=4)
                                                                          Filter: ((creationdate >= (now() - '1 day'::interval)) AND ((lastupdatedate IS NULL) OR (lastupdatedate >= (now() - '1 day'::interval))))
                                                                    ->  Index Scan using rundataspec_pkey on rundataspec rd  (cost=0.07..20537.43 rows=25548 width=6)
                                                                          Index Cond: (runid = rr.runid)
                                            ->  Hash  (cost=156.30..156.30 rows=726 width=237)
                                                  ->  Seq Scan on run r  (cost=0.00..156.30 rows=726 width=237)
which finishes with grave error:
WARNING: pgxc_abort_connections dn node:datanode1 invalid socket 4294967295!
WARNING: pgxc_abort_connections dn node:datanode5 invalid socket 4294967295!
WARNING: pgxc_abort_connections dn node:datanode1 invalid socket 4294967295!
WARNING: pgxc_abort_connections dn node:datanode5 invalid socket 4294967295!
ERROR: Failed to receive more data from data node 16398
Time: 159019.614 ms (02:39.020)
On the other hand, on XL it returns after a few ms, and the plan is much simpler:
Sort  (cost=23507.41..23507.85 rows=289 width=135)
  Sort Key: r.creationdate DESC
  ->  GroupAggregate  (cost=23481.54..23500.32 rows=289 width=135)
        Group Key: r.runid, ((r.state)::text)
        ->  Sort  (cost=23481.54..23481.97 rows=289 width=135)
              Sort Key: r.runid, ((r.state)::text)
              ->  Nested Loop Left Join  (cost=23036.82..23474.45 rows=289 width=135)
                    ->  GroupAggregate  (cost=23036.73..23054.64 rows=289 width=22)
                          Group Key: rd.status, rd.runid
                          ->  Remote Subquery Scan on all (datanode1,datanode2,datanode3,datanode4,datanode5,datanode6,datanode7,datanode8)  (cost=23036.73..23038.03 rows=289 width=6)
                                ->  Sort  (cost=22936.73..22937.16 rows=289 width=6)
                                      Sort Key: rd.status, rd.runid
                                      ->  Nested Loop  (cost=0.11..22929.64 rows=289 width=6)
                                            ->  Seq Scan on run rr  (cost=0.00..4423.48 rows=1 width=4)
                                                  Filter: ((creationdate >= (now() - '1 day'::interval)) AND ((lastupdatedate IS NULL) OR (lastupdatedate >= (now() - '1 day'::interval))))
                                            ->  Index Scan using rundataspec_pkey on rundataspec rd  (cost=0.11..17501.46 rows=20094 width=6)
                                                  Index Cond: (runid = rr.runid)
                    ->  Materialize  (cost=100.10..101.45 rows=1 width=95)
                          ->  Remote Subquery Scan on all (datanode2)  (cost=100.10..101.45 rows=1 width=95)
                                ->  Index Scan using run_pkey on run r  (cost=0.10..1.35 rows=1 width=95)
                                      Index Cond: (rd.runid = runid)
Is there a WeChat or QQ discussion group?
Does TBase support column storage?
PG only has heap (row-store) tables by default; can TBase support column storage?
I don't seem to see it in the code. Thanks.
Query exception: latch already owned
When executing queries, an exception is sometimes thrown, or the query hangs and the request cannot be cancelled:
ERROR: node:dn004, backend_pid:32570, nodename:dn004,backend_pid:32570,message:node:dn002, backend_pid:16921, nodename:dn002,backend_pid:16921,message:Failed to read from SQueue p_1_3c93_9_58a551eb, consumer (node 3, pid 16921, status 2) - CONSUMER_ERROR set, err_msg "node:dn004, backend_pid:32522, nodename:dn001,backend_pid:29990,message:latch already owned",,
SQL statement:
select vssd.f_stationcode,
       SUM(f_passenger) f_passenger,
       SUM(f_nodenumber) f_nodenumber
from ( select f_guid, f_stationcode, f_passenger
       from v_sendschedule
       where f_date='2018-12-01' ) vss,
     ( select f_parentid, f_stationcode, SUM(f_nodenumber) f_nodenumber
       from v_sendschedule_detail
       where f_date='2018-12-01'
       group by f_stationcode, f_parentid ) vssd
where vssd.f_parentid=vss.f_guid
  and vssd.f_stationcode=vss.f_stationcode
group by vssd.f_stationcode;
Of which:
v_sendschedule row count: 19163112
v_sendschedule_detail row count: 48369630
[Concurrency] Question about maximum concurrency
Have you done concurrency testing? What is the known maximum concurrency, and for what cluster size?
Similar to Postgres-XL?
Great to see a major domestic PG product open-sourced; keep it up!
Creating a table with DISTRIBUTE BY HASH or MODULO reports an error
As follows:
testdb=# CREATE TABLE t_student(student_id int primary key,name varchar(20) not null,gender varchar(10),age int,class_id int, comment varchar(2000)) DISTRIBUTE BY MODULO(student_id);
ERROR: Cannot support distribute type: Hash
testdb=# CREATE TABLE t_student_course(id serial8,student_id int,course_id int) DISTRIBUTE BY HASH(id);
ERROR: Cannot support distribute type: Hash
My node topology is as follows:
testdb=# SELECT * FROM pgxc_node;
 node_name | node_type | node_port |    node_host    | nodeis_primary | nodeis_preferred |  node_id   | node_cluster_name
-----------+-----------+-----------+-----------------+----------------+------------------+------------+-------------------
 gtm       | G         |     20001 | 192.168.155.109 | t              | f                |  428125959 | tbase_cluster
 coord1    | C         |     20004 | 192.168.155.109 | f              | f                | 1885696643 | tbase_cluster
 dn1       | D         |     20008 | 192.168.155.109 | t              | t                | -560021589 | tbase_cluster
 dn2       | D         |     20009 | 192.168.155.101 | f              | f                |  352366662 | tbase_cluster
Could someone please take a look at what is going on here?
Does not compile under gcc8 due to undefined Recovery_IsStandby.
It seems gcc/ld 8 got stricter about undefined functions, and the lack of a body for
Recovery_IsStandby
breaks the compilation (possibly due to -flto being enabled).
Steps to reproduce on CentOS 7 with devtoolset-8 installed:
scl enable devtoolset-8 bash
export AR="gcc-ar"
export NM="gcc-nm"
export RANLIB="gcc-ranlib"
export XHOME=$HOME/XZ_10_STABLE
./configure '--with-libxml' '--with-libxslt' '--prefix=XZ_10_STABLE' '--with-segsize=8' '--with-blocksize=16' '--with-wal-segsize=64' '--with-wal-blocksize=16' '--with-ldap' '--with-python' '--enable-debug' 'CC=gcc' 'CFLAGS=-DPGXC -DXCP -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O3 -flto -fuse-linker-plugin -fno-omit-frame-pointer -march=native' --enable-user-switch --with-openssl --with-ossp-uuid
gives:
...
gcc -D_PG_ORCL_ -DPGXC -DXCP -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -D_USER_SWITCH_ -DPGXC -DXCP -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O3 -flto -fuse-linker-plugin -fno-omit-frame-pointer -march=native -pthread -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -L../../../src/port -L../../../src/common -Wl,--as-needed -Wl,-rpath,'/home/pgxzDR3/XZ_10_STABLE/lib',--enable-new-dtags -lpgcommon -lpgport -lpthread -lxslt -lxml2 -lssl -lcrypto -lz -lreadline -lrt -lcrypt -ldl -lm main.o gtm_thread.o gtm_txn.o gtm_seq.o gtm_snap.o gtm_standby.o gtm_opt.o gtm_backup.o gtm_store.o gtm_xlog.o ../libpq/libpqcomm.a ../path/libgtmpath.a ../recovery/libgtmrecovery.a ../client/libgtmclient.a ../common/libgtm.a ../../port/libpgport.a -o gtm
/opt/rh/devtoolset-8/root/usr/libexec/gcc/x86_64-redhat-linux/8/ld: Dwarf Error: Could not find abbrev number 118.
/tmp/ccNUWJtw.ltrans0.ltrans.o: In function `CreateLockFile.constprop.107':
<artificial>:(.text+0x2e41): undefined reference to `Recovery_IsStandby'
/tmp/ccNUWJtw.ltrans0.ltrans.o: In function `ProcessStartPreparedTransactionCommand.constprop.60':
<artificial>:(.text+0x71e8): undefined reference to `Recovery_IsStandby'
<artificial>:(.text+0x7202): undefined reference to `Recovery_IsStandby'
/tmp/ccNUWJtw.ltrans0.ltrans.o: In function `ProcessSequenceInitCommand.constprop.44':
<artificial>:(.text+0x8b7b): undefined reference to `Recovery_IsStandby'
/opt/rh/devtoolset-8/root/usr/libexec/gcc/x86_64-redhat-linux/8/ld: Dwarf Error: Offset (317445) greater than or equal to .debug_str size (1374).
/opt/rh/devtoolset-8/root/usr/libexec/gcc/x86_64-redhat-linux/8/ld: Dwarf Error: Offset (2365960) greater than or equal to .debug_str size (1374).
/opt/rh/devtoolset-8/root/usr/libexec/gcc/x86_64-redhat-linux/8/ld: Dwarf Error: Offset (613888) greater than or equal to .debug_str size (1374).
/opt/rh/devtoolset-8/root/usr/libexec/gcc/x86_64-redhat-linux/8/ld: Dwarf Error: Could not find abbrev number 574.
/tmp/ccNUWJtw.ltrans1.ltrans.o: In function `RedoXLogRecord.isra.9.lto_priv.197':
<artificial>:(.text+0xc9ce): undefined reference to `Recovery_IsStandby'
/tmp/ccNUWJtw.ltrans1.ltrans.o:<artificial>:(.text+0xcafd): more undefined references to `Recovery_IsStandby' follow
collect2: error: ld returned 1 exit status
make[3]: *** [Makefile:23: gtm] Error 1
make[3]: Leaving directory '/home/pgxzDR3/git/TBase/src/gtm/main'
make[2]: *** [Makefile:22: all] Error 2
make[2]: Leaving directory '/home/pgxzDR3/git/TBase/src/gtm'
make[1]: *** [Makefile:38: all-gtm-recurse] Error 2
make[1]: Leaving directory '/home/pgxzDR3/git/TBase/src'
make: *** [GNUmakefile:11: all-src-recurse] Error 2
make -sj reports "error: redefinition"
I have been following this since pgxz first appeared; it is finally open source. Please open up the documentation as soon as possible.
We tested with pgxl for a while but never dared to use it in production. Is TBase fully viable for production (OLTP) use?
The description on tbase.qq.com seems to mention a management web console; how do we enable access to it?
Docker Test Demo?
execute direct with DML does not work anymore.
While normally not needed nor encouraged, we had some important corner cases when issuing DML directly on a datanode was the only option to get the performance we need.
In XL it was possible to issue:
execute direct on (datanodex) $$ insert into table select...
in TBase the logic of txid serving has changed, and while it is still possible to do this, the results violate MVCC, i.e. we see disappearing tuples when we do an insert and subsequent selects!
Can you shed some light if proper txid serving while in
execute direct
could be brought back, with MVCC working properly?
Thanks
Cannot create table as .. distribute by replication
When creating a table with
create table X by replication as select on sharded table with group by
we get
ERROR: XX000 - Shard tables from different groups should not be invloved in one Query, Shard tables should not be invloved in one Query with other tables, such as hash table.
There is a typo in the error message ("invloved"), but more importantly Postgres-XL supported this, so it is a regression.
Large performance gap between a slow SQL executed through TBase and the same SQL executed directly on a DN
1. The slow SQL:
EXPLAIN ANALYZE
SELECT
*
FROM
hscs_itf_imp_lines l
JOIN hscs_itf_imp_headers h ON l.HEADER_ID = h.HEADER_ID
WHERE
l.ATTRIBUTE10 IS NULL
AND h.INTERFACE_NAME IN ( 'MAL_MONPAY_INTERFACE' )
AND EXISTS ( SELECT apply_num FROM yxhscs_itf_ar_interface WHERE l.value2 = apply_num GROUP BY apply_num )
LIMIT 2000
hscs_itf_imp_lines has about 85 million rows, hscs_itf_imp_headers about 1.5 million, yxhscs_itf_ar_interface about 29 million.
Note: this SQL previously kept reporting "message:start timestamp XXXXX is too old to execute, recentCommitTs is XXXXX".
After asking the Tencent folks, we were advised to change vacuum_delta from 100 to 10000 on the DNs, after which the SQL executes correctly.
2. Executed on the TBase CN, the statement takes about 40000 seconds, with this plan:
A:Limit (cost=100.38..124.45 rows=1 width=20444) (actual time=40194212.346..40194212.346 rows=0 loops=1)
-> Remote Subquery Scan on all (dn1,dn2) (cost=100.38..124.45 rows=1 width=20444) (actual time=40194212.335..40194212.335 rows=0 loops=1)
Planning time: 332.126 ms
Execution time: 40195371.888 ms
3. Executed directly on DN1, it takes 233 seconds, with this plan:
Limit (cost=3224.17..17945.65 rows=2000 width=9936) (actual time=233250.093..233250.093 rows=0 loops=1)
-> Nested Loop Semi Join (cost=3224.17..30053846.59 rows=4082555 width=9936) (actual time=233250.089..233250.089 rows=0 loops=1)
-> Gather (cost=3223.73..5268494.06 rows=4082555 width=9936) (actual time=1984.672..219937.263 rows=86 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Hash Join (cost=2223.73..4859238.56 rows=1701065 width=9936) (actual time=5385.324..233230.741 rows=43 loops=2)
Hash Cond: (l.header_id = (h.header_id)::numeric)
-> Parallel Seq Scan on hscs_itf_imp_lines l (cost=0.00..2502785.23 rows=15628109 width=5198) (actual time=0.058..222744.610 rows=18741760 loops=2)
Filter: (attribute10 IS NULL)
Rows Removed by Filter: 1697783
-> Parallel Hash (cost=2216.35..2216.35 rows=591 width=4738) (actual time=26.879..26.879 rows=488 loops=2)
-> Parallel Seq Scan on hscs_itf_imp_headers h (cost=0.00..2216.35 rows=591 width=4738) (actual time=0.673..25.803 rows=488 loops=2)
Filter: ((interface_name)::text = 'MAL_MONPAY_INTERFACE'::text)
Rows Removed by Filter: 31903
-> Merge Append (cost=0.43..273.77 rows=97 width=8) (actual time=154.785..154.785 rows=0 loops=86)
Sort Key: yxhscs_itf_ar_interface.apply_num
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_0 on yxhscs_itf_ar_interface (partition sequence: 0, name: yxhscs_itf_ar_interface_part_0) (cost=0.43..8.56 rows=3 width=8) (actual time=0.015..0.015 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_1 on yxhscs_itf_ar_interface (partition sequence: 1, name: yxhscs_itf_ar_interface_part_1) (cost=0.43..8.56 rows=3 width=8) (actual time=0.009..0.009 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_2 on yxhscs_itf_ar_interface (partition sequence: 2, name: yxhscs_itf_ar_interface_part_2) (cost=0.43..8.56 rows=3 width=8) (actual time=0.008..0.008 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_3 on yxhscs_itf_ar_interface (partition sequence: 3, name: yxhscs_itf_ar_interface_part_3) (cost=0.43..8.56 rows=3 width=8) (actual time=0.344..0.344 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_4 on yxhscs_itf_ar_interface (partition sequence: 4, name: yxhscs_itf_ar_interface_part_4) (cost=0.43..8.56 rows=3 width=8) (actual time=0.572..0.572 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_5 on yxhscs_itf_ar_interface (partition sequence: 5, name: yxhscs_itf_ar_interface_part_5) (cost=0.43..8.56 rows=3 width=8) (actual time=1.457..1.457 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_6 on yxhscs_itf_ar_interface (partition sequence: 6, name: yxhscs_itf_ar_interface_part_6) (cost=0.43..8.56 rows=3 width=8) (actual time=2.399..2.399 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_7 on yxhscs_itf_ar_interface (partition sequence: 7, name: yxhscs_itf_ar_interface_part_7) (cost=0.43..8.56 rows=3 width=8) (actual time=3.124..3.124 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_8 on yxhscs_itf_ar_interface (partition sequence: 8, name: yxhscs_itf_ar_interface_part_8) (cost=0.43..8.56 rows=3 width=8) (actual time=3.969..3.969 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_9 on yxhscs_itf_ar_interface (partition sequence: 9, name: yxhscs_itf_ar_interface_part_9) (cost=0.43..8.56 rows=3 width=8) (actual time=3.977..3.977 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_10 on yxhscs_itf_ar_interface (partition sequence: 10, name: yxhscs_itf_ar_interface_part_10) (cost=0.43..8.56 rows=3 width=8) (actual time=3.258..3.258 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_11 on yxhscs_itf_ar_interface (partition sequence: 11, name: yxhscs_itf_ar_interface_part_11) (cost=0.43..8.56 rows=3 width=8) (actual time=3.630..3.630 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_12 on yxhscs_itf_ar_interface (partition sequence: 12, name: yxhscs_itf_ar_interface_part_12) (cost=0.43..8.56 rows=3 width=8) (actual time=9.651..9.651 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_13 on yxhscs_itf_ar_interface (partition sequence: 13, name: yxhscs_itf_ar_interface_part_13) (cost=0.43..8.56 rows=3 width=8) (actual time=5.094..5.094 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_14 on yxhscs_itf_ar_interface (partition sequence: 14, name: yxhscs_itf_ar_interface_part_14) (cost=0.43..8.56 rows=3 width=8) (actual time=9.988..9.988 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_15 on yxhscs_itf_ar_interface (partition sequence: 15, name: yxhscs_itf_ar_interface_part_15) (cost=0.43..8.56 rows=3 width=8) (actual time=8.599..8.599 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_16 on yxhscs_itf_ar_interface (partition sequence: 16, name: yxhscs_itf_ar_interface_part_16) (cost=0.43..8.56 rows=3 width=8) (actual time=6.085..6.085 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_17 on yxhscs_itf_ar_interface (partition sequence: 17, name: yxhscs_itf_ar_interface_part_17) (cost=0.43..8.56 rows=3 width=8) (actual time=5.871..5.871 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_18 on yxhscs_itf_ar_interface (partition sequence: 18, name: yxhscs_itf_ar_interface_part_18) (cost=0.43..8.56 rows=3 width=8) (actual time=6.907..6.907 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_19 on yxhscs_itf_ar_interface (partition sequence: 19, name: yxhscs_itf_ar_interface_part_19) (cost=0.43..8.56 rows=3 width=8) (actual time=5.857..5.857 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_20 on yxhscs_itf_ar_interface (partition sequence: 20, name: yxhscs_itf_ar_interface_part_20) (cost=0.43..8.56 rows=3 width=8) (actual time=9.075..9.075 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_21 on yxhscs_itf_ar_interface (partition sequence: 21, name: yxhscs_itf_ar_interface_part_21) (cost=0.43..8.56 rows=3 width=8) (actual time=5.314..5.314 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_22 on yxhscs_itf_ar_interface (partition sequence: 22, name: yxhscs_itf_ar_interface_part_22) (cost=0.43..8.56 rows=3 width=8) (actual time=5.416..5.416 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_23 on yxhscs_itf_ar_interface (partition sequence: 23, name: yxhscs_itf_ar_interface_part_23) (cost=0.43..8.56 rows=3 width=8) (actual time=5.927..5.927 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_24 on yxhscs_itf_ar_interface (partition sequence: 24, name: yxhscs_itf_ar_interface_part_24) (cost=0.43..8.56 rows=3 width=8) (actual time=4.862..4.862 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_25 on yxhscs_itf_ar_interface (partition sequence: 25, name: yxhscs_itf_ar_interface_part_25) (cost=0.43..8.56 rows=3 width=8) (actual time=6.522..6.522 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_26 on yxhscs_itf_ar_interface (partition sequence: 26, name: yxhscs_itf_ar_interface_part_26) (cost=0.43..8.56 rows=3 width=8) (actual time=9.027..9.027 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_27 on yxhscs_itf_ar_interface (partition sequence: 27, name: yxhscs_itf_ar_interface_part_27) (cost=0.43..8.56 rows=3 width=8) (actual time=7.632..7.632 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_28 on yxhscs_itf_ar_interface (partition sequence: 28, name: yxhscs_itf_ar_interface_part_28) (cost=0.43..8.56 rows=3 width=8) (actual time=5.757..5.757 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_29 on yxhscs_itf_ar_interface (partition sequence: 29, name: yxhscs_itf_ar_interface_part_29) (cost=0.43..8.56 rows=3 width=8) (actual time=6.805..6.805 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_30 on yxhscs_itf_ar_interface (partition sequence: 30, name: yxhscs_itf_ar_interface_part_30) (cost=0.43..8.56 rows=3 width=8) (actual time=3.365..3.365 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
-> Index Only Scan using idx_76128_yxhscs_itf_ar_interface_n5_part_31 on yxhscs_itf_ar_interface (partition sequence: 31, name: yxhscs_itf_ar_interface_part_31) (cost=0.43..8.56 rows=3 width=8) (actual time=4.172..4.172 rows=0 loops=86)
Index Cond: (apply_num = l.value2)
Heap Fetches: 0
Planning time: 8.501 ms
Execution time: 233303.360 ms
4. My take (pure speculation)
My initial feeling was that the statistics had not been updated. The basis for this guess:
Checking row counts on the CN:
SELECT oid, relname, relkind, reltuples, relpages
FROM pg_class
WHERE relname LIKE 'hscs_itf_imp_lines%';
Output:
143351 hscs_itf_imp_lines r 0 0
143364 hscs_itf_imp_lines_line_id_seq S 1 1
Checking the statistics:
SELECT attname, inherited, n_distinct,
array_to_string(most_common_vals, E'\n') as most_common_vals
FROM pg_stats
WHERE tablename like 'hscs_itf_imp_lines%';
The output is empty. Results of the same queries on DN1:
Row counts:
143364 hscs_itf_imp_lines r 4.09933e+07 2331980
143377 hscs_itf_imp_lines_line_id_seq S 1 1
Statistics:
value137 f 7 0,2250,0.000,1450,1250,450
value138 f 1 0
value139 f 1 0
value140 f 81 0,500,6407.45
As shown above, the row counts and statistics on the DN look reliable. I suspect the coordinator (CN) did not read the correct statistics and therefore ran the wrong execution plan?
Looking forward to an answer, thanks.
TBase discussion-group image returns 404
The discussion-group image on https://github.com/Tencent/TBase/wiki is broken (404) and needs to be fixed.
Congratulations on this new open-source effort!
A cause for celebration all around.
pgxc_ctl.cnf template
The template syntax in pgxc_ctl.cnf is unclear; please simplify its content.
FKs are not supported on replicated tables.
PG-XL, by contrast, allowed FKs on replicated tables.
Was there any particular reason why this was disabled? It seems like an easy case compared with distributed FKs.
Suggest adding more introductory material for this product
-
Product positioning: I have some doubts about the product positioning. If it targets users worldwide, the introduction on the index page (README.md) should be more professional: it should not mix English and Chinese descriptions, and it should be written more from the user's perspective.
-
Solution: The official website (tbase.qq.com) has not been updated since 2017, yet I see many customers using TBase. I hope the website can publish more customer solutions, so that more people learn what TBase can do and how to introduce it into their businesses.
Thank you
Truly commendable! Tencent's PostgreSQL family is bound to lead the enterprise database trend!
meaning of __SUPPORT_DISTRIBUTED_TRANSACTION__ and other params from pg_config_manual.h
Hello,
First, very impressive changes in comparison to Postgres-XL!
Could you explain the idea behind the TBase-specific defines in pg_config_manual.h and the code they guard?
e.g.
SUPPORT_DISTRIBUTED_TRANSACTION
COLD_HOT
STORAGE_SCALABLE
There is clearly some code activated by these flags, but we would appreciate a brief explanation.
Also, why are they disabled by default? Is the code not yet trustworthy? Thank you.
Krzysztof Nienartowicz
Error encountered while running BenchmarkSQL
SELECT max(o_id) FROM bmsql_oorder WHERE o_w_id = 4 AND o_d_id = 7 AND o_c_id = 916;
Query result: 2573
select * from bmsql_oorder where o_w_id=4 and o_d_id=7 and o_c_id=916
and o_id=(2573);
returns one record;
select * from bmsql_oorder where o_w_id=4 and o_d_id=7 and o_c_id=916
and o_id=(
select max(o_id) from bmsql_oorder where o_w_id=4 and o_d_id=7 and o_c_id=916
);
returns no rows. Why?
The error was encountered while running BenchmarkSQL:
TAL jTPCCTerminal : Last Order for W_ID=4 D_ID=7 C_ID=916 not found
java.lang.Exception: Last Order for W_ID=4 D_ID=7 C_ID=916 not found
at jTPCCTData.executeOrderStatus(jTPCCTData.java:1189)
at jTPCCTData.execute(jTPCCTData.java:103)
at jTPCCTerminal.executeTransactions(jTPCCTerminal.java:204)
at jTPCCTerminal.run(jTPCCTerminal.java:88)
at java.lang.Thread.run(Thread.java:745)
Resource leak
A stream is opened but never closed:
TBase/src/bin/confmod/confmod.c, line 236 (commit 3295393)
The opened file is also never closed at:
TBase/src/bin/confmod/confmod.c, line 255 (commit 3295393)
Could the SSH port be made configurable?
When pgxc_ctl runs ssh, it can only use the default port 22.
Could a configurable-port form such as `ssh ip:port` be supported?
Is the OSS system also open-sourced?
Asking whether the OSS (operations support) system has been open-sourced along with TBase?
Is this the only open-sourced version of TBase, and does it have a version number?
What is the version number of the currently open-sourced TBase? I see no tags.
Not sure where to start yet; just having a look first.
A very nice project. Well done, Tencent!
pgloader reports an error during table migration
While testing a mysql -> TBASE data migration with pgloader, no problems occurred on standard PG;
on TBASE, pgloader reports the following error:
create temp table reloids(oid) as
values ('public.yxhscs_itf_contract_validated'::regclass)
ERROR: Cannot support distribute type: RoundRobin
This SQL is issued by pgloader. Could support for this kind of temporary table be added? pgloader is very handy for one-shot or continuous migration of MySQL data to PG.
pgxc_clean command does not work
Like XL, TBase sometimes leaves distributed 2PC prepared transactions behind, and pgxc_clean should clean them up.
It most likely does not work because of #52 (and perhaps other issues).
make error: undefined reference to `Recovery_IsStandby'
gtm_store.c: At top level:
../../../src/include/gtm/standby_utils.h:20:13: warning: inline function ‘Recovery_IsStandby’ declared but never defined
inline bool Recovery_IsStandby(void);
^~~~~~~~~~~~~~~~~~
/usr/bin/ld: main.o: in function `GTM_SigleHandler':
/home/zmeng/Work/OpenProject/TBase/src/gtm/main/main.c:408: undefined reference to `Recovery_IsStandby'
/usr/bin/ld: main.o: in function `main':
/home/zmeng/Work/OpenProject/TBase/src/gtm/main/main.c:946: undefined reference to `Recovery_IsStandby'
/usr/bin/ld: /home/zmeng/Work/OpenProject/TBase/src/gtm/main/main.c:979: undefined reference to `Recovery_IsStandby'
/usr/bin/ld: /home/zmeng/Work/OpenProject/TBase/src/gtm/main/main.c:1013: undefined reference to `Recovery_IsStandby'
/usr/bin/ld: /home/zmeng/Work/OpenProject/TBase/src/gtm/main/main.c:1033: undefined reference to `Recovery_IsStandby'
/usr/bin/ld: main.o:/home/zmeng/Work/OpenProject/TBase/src/gtm/main/main.c:1208: more undefined references to `Recovery_IsStandby' follow
Suggestion-product positioning and solution description
-
Product positioning: I have some doubts about the product positioning. If it targets users worldwide, the introduction on the index page (README.md) should be more professional: it should not mix English and Chinese descriptions, and it should be written more from the user's perspective.
-
Solution: The official website (tbase.qq.com) has not been updated since 2017, yet I see many customers using TBase. I hope the website can showcase more customer solutions, so that more people learn what TBase can do and how to introduce it into their businesses.
Thank you
Could you publish some documentation on cluster management, such as adding and removing nodes?
Also, after a new DN node joins the cluster, is the data rebalanced automatically?
Where can I find usage documentation?
Registration at https://tbase.qq.com/ does not work; it keeps reporting a captcha error.
Documents cannot be downloaded anonymously either. After setting up the cluster, how are we supposed to use it...
pgxc_node_str() function is broken (XL regression)
It returns a raw pointer, because the function assumes the GUC being returned is a varlena.
XL has a proper implementation.
gtm unit-test build fails
The header file test_common.h does not exist.
The 'COPYRIGHT' file has been edited on Windows
Tencent is pleased to support the open source community by making TBase available. ^M
^M
Copyright (C) 2019 THL A29 Limited, a Tencent company. All rights reserved.^M
^M
TBase is licensed under the BSD 3-Clause License, except for the third-party component listed below. ^M
^M
A copy of the BSD 3-Clause License is included in this file.^M
^M
Other dependencies and licenses:^M
^M
Segfault on UPDATE with --enable-regress
--enable-regress causes code that used to work before to fail; it is trigger-related, on UPDATE.
Is it expected?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: dr3_dry_agis31f surveys 192.168.169.251(51186) UPDATE '.
Program terminated with signal SIGABRT, Aborted.
#0  0x00007fd2605ca337 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install cyrus-sasl-lib-2.1.26-23.el7.x86_64 glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64 libselinux-2.5-14.1.el7.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64 nspr-4.21.0-1.el7.x86_64 nss-3.44.0-7.el7_7.x86_64 nss-softokn-freebl-3.44.0-8.el7_7.x86_64 nss-util-3.44.0-4.el7_7.x86_64 openldap-2.4.44-21.el7_6.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x00007fd2605ca337 in raise () from /lib64/libc.so.6
#1  0x00007fd2605cba28 in abort () from /lib64/libc.so.6
#2  0x0000000000a61aff in ExceptionalCondition (conditionName=conditionName@entry=0xc49868 "!(((const void*)(fdw_trigtuple) != ((void *)0)) ^ ((bool) (((const void*)(tupleid) != ((void *)0)) && ((tupleid)->ip_posid != 0))))", errorType=errorType@entry=0xaf2109 "FailedAssertion", fileName=fileName@entry=0xc35fcd "trigger.c", lineNumber=lineNumber@entry=2878) at assert.c:54
#3  0x00000000006a62be in ExecBRUpdateTriggers (estate=estate@entry=0x1798690, epqstate=epqstate@entry=0x1798f28, relinfo=relinfo@entry=0x17988e0, tupleid=tupleid@entry=0x7ffde999bbd4, fdw_trigtuple=fdw_trigtuple@entry=0x7ffde999bc00, slot=slot@entry=0x1a139b8) at trigger.c:2878
#4  0x00000000006f6985 in ExecUpdate (mtstate=mtstate@entry=0x1798e60, tupleid=tupleid@entry=0x7ffde999bbd4, oldtuple=oldtuple@entry=0x7ffde999bc00, slot=slot@entry=0x1a139b8, planSlot=planSlot@entry=0x1799eb0, epqstate=epqstate@entry=0x1798f28, estate=0x1798690, canSetTag=1 '\001') at nodeModifyTable.c:1260
#5  0x00000000006f78a8 in ExecModifyTable (pstate=<optimized out>) at nodeModifyTable.c:2374
#6  0x00000000006cd1e2 in ExecProcNode (node=0x1798e60) at ../../../src/include/executor/executor.h:273
#7  ExecutePlan (execute_once=<optimized out>, dest=0x1021fc0 <donothingDR>, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_UPDATE, use_parallel_mode=<optimized out>, planstate=0x1798e60, estate=0x1798690) at execMain.c:1955
#8  standard_ExecutorRun (queryDesc=0x199d0b8, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:465
#9  0x000000000090d448 in ProcessQuery (plan=0x19e8590, sourceText=<optimized out>, params=<optimized out>, queryEnv=0x0, dest=0x1021fc0 <donothingDR>, completionTag=0x7ffde999c110 "") at pquery.c:209
#10 0x000000000090d8dc in PortalRunMulti (portal=portal@entry=0x19682f0, isTopLevel=isTopLevel@entry=1 '\001', setHoldSnapshot=setHoldSnapshot@entry=0 '\000', dest=0x1021fc0 <donothingDR>, dest@entry=0x187fe18, altdest=0x1021fc0 <donothingDR>, altdest@entry=0x187fe18, completionTag=completionTag@entry=0x7ffde999c110 "") at pquery.c:2050
#11 0x000000000090ffb6 in PortalRun (portal=portal@entry=0x19682f0, count=count@entry=1, isTopLevel=isTopLevel@entry=1 '\001', run_once=<optimized out>, dest=dest@entry=0x187fe18, altdest=altdest@entry=0x187fe18, completionTag=0x7ffde999c110 "") at pquery.c:1320
#12 0x000000000090ad91 in exec_execute_message (max_rows=1, portal_name=0x187fa00 "") at postgres.c:2951
#13 PostgresMain (argc=<optimized out>, argv=argv@entry=0x17b5248, dbname=<optimized out>, username=<optimized out>) at postgres.c:5498
#14 0x00000000008536ed in BackendRun (port=0x1786cd0) at postmaster.c:4979
#15 BackendStartup (port=0x1786cd0) at postmaster.c:4651
#16 ServerLoop () at postmaster.c:1956
#17 0x0000000000857a3c in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x175a7d0) at postmaster.c:1564
#18 0x0000000000498853 in main (argc=5, argv=0x175a7d0) at main.c:228
Functions defined with set search_path cause search_path reset and schema removal in subsequent sql calls
We keep a separate schema for the partitions of partitioned tables and do direct inserts into the partitions.
While inserting
insert into schema_partition.table_partition(...)
we see in the logs
00000: [SAVEPOINT] node->sql_statement:SET search_path TO schema_base, public
and then query with schema stripped from the table on the insert:
INSERT INTO table_partition (...)
Which of course cannot work, since the table lives in the other schema (schema_partition).
If relevant, we also issue
set application_name=...
before and were wondering if this could cause this bug.
This is a blocker for us as well. We hope somebody can look into this one.
Thanks
communication protocol between GTM and coordinator
Could the communication protocol between GTM and the coordinator be open-sourced as well?
MVCC not working? - timestamp `A` is too old to execute, recentCommitTs `B`
We are experiencing a basic MVCC problem that never appeared on PG-XL:
XX000: node:datanode9, backend_pid:7215, nodename:datanode7,backend_pid:38951,message:start timestamp 432575393455 is too old to execute, recentCommitTs 432678004078,
The query is read-only, but there is some insert/update activity from a massive number of clients (500+) executing the same query with different parameters on other tables.
select t.* from ts t join catalog_source cs on (t.sourceid = cs.fdatalight_sourceid and t.catalogid = cs.fdatalight_catalogid) where cs.catalog_catalogid = $1 and cs.fdatalight_sourceid between $2 and $3 and catalogid = getowningcatalogid($4) and sourceid between $5 and $6
This is a blocker for us, as it touches the fundamentals of the DB. We hope somebody can support us here.
No GTM proxies; the GTM is the source of transaction IDs.
Does the coordinator take statistics from its preferred node only?
This is currently the situation with XL.
Has TBase changed this logic so that statistics are merged from all datanodes, or is
analyze (coordinator) ..
still needed to propagate statistics to all coordinators?
Thanks,
Krzysztof
gtm_proxy segfaults on start
with a stacktrace:
Missing separate debuginfos, use: debuginfo-install glibc-2.17-292.el7.x86_64
(gdb) bt
#0  0x00007f6b0773c337 in raise () from /lib64/libc.so.6
#1  0x00007f6b0773da28 in abort () from /lib64/libc.so.6
#2  0x0000000000402545 in GTM_RWLockAcquire (lock=0x149a4b0, mode=GTM_LOCKMODE_WRITE) at gtm_lock.c:164
#3  0x000000000040c96b in AllocSetAlloc () at aset.c:578
#4  0x0000000000409dad in MemoryContextAlloc (size=269, context=<optimized out>) at mcxt.c:564
#5  MemoryContextCreate (size=256, methods=0x6203a0 <AllocSetMethods>, name=0x41cd40 "ErrorContext", parent=0x149a480) at mcxt.c:505
#6  AllocSetContextCreate.constprop.22 (parent=0x149a480, name=0x41cd40 "ErrorContext", minContextSize=8192, maxBlockSize=8192, isShared=<optimized out>, initBlockSize=8192) at aset.c:341
#7  0x0000000000402a58 in MemoryContextInit () at mcxt.c:106
#8  InitGTMProxyProcess () at proxy_main.c:303
#9  main (argc=3, argv=0x7ffd0ba1cc68) at proxy_main.c:645
Also gtm_slave fails to start.
We are using nonstandard gtm/gtm_proxy ports, but that does not look like the problem.
TBase build problem on Linux (ARM architecture)