
pg_qualstats's People

Contributors

anayrat, curlup, df7cb, rdunklau, rekgrpth, rjuju, tedyu


pg_qualstats's Issues

Support for Citus

Is there a known way of making this extension work for distributed tables on Citus? Currently it works flawlessly for regular tables, but it stops tracking immediately after enabling distribution on a table.

Thank you.

CREATE EXTENSION pg_qualstats

I have configured the extension with the following features:

. contrib + dev RPMs installed (done)
. modified postgresql.conf: added shared_preload_libraries = 'pg_stat_statements,pg_qualstats' (done)
. make install for pg_qualstats (done)
. restarted the instance with pg_ctl -D restart (done)
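For reference, a minimal sketch of what this step expects in postgresql.conf (a full server restart is required afterwards; a reload is not enough for preloaded libraries):

```
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements,pg_qualstats'
```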

Everything seems fine, but when I try to create the extension, Postgres reports this error:

postgres=# CREATE EXTENSION pg_qualstats ;
ERROR: This module can only be loaded via shared_preload_libraries
Time: 3.347 ms

Could you please help me?
Any suggestions are welcome.

Regards.

Lorenzo


Extension installed:

postgres=# \dx
List of installed extensions
Name | Version | Schema | Description
--------------------+---------+------------+-----------------------------------------------------------
adminpack | 1.0 | pg_catalog | administrative functions for PostgreSQL
pg_stat_kcache | 2.0.2 | public | Kernel cache statistics gathering
pg_stat_statements | 1.3 | public | track execution statistics of all SQL statements executed
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language

postgres=# show shared_preload_libraries;

 shared_preload_libraries
--------------------------
 pg_stat_statements
(1 row)

select name, pending_restart from pg_settings where name in ('shared_preload_libraries');

           name           | pending_restart
--------------------------+-----------------
 shared_preload_libraries | f

Segfault when qual on view using expressions

Found with sqlsmith. Minimal test case to reproduce:

create table bug1 (id1 integer, val1 integer);
create table bug2 (id2 integer, val2 integer);
create view vbug as select id1, int4smaller(val1, val2) val12 from bug1 join bug2 on id1 = id2;
create table bug3 (id3 integer);
select * from vbug join bug3 on id3 = val12;

At least, pgqs_resolve_var() doesn't handle get_tle_by_resno() returning NULL.

with parallel queries, all workers report the quals

With parallel queries on PostgreSQL 9.6, pgqs_ExecutorEnd() is executed by both the leader and the parallel workers, so the same quals are accounted for multiple times. With sampling this behaves unpredictably, because random() is evaluated independently in each process, so only a subset of the processes will report.
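A possible direction, sketched below with a stand-in for PostgreSQL's ParallelWorkerNumber rather than real backend code (the variable name follows src/include/access/parallel.h, but this block is illustrative only): gate the reporting so that only the leader accounts the quals.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for PostgreSQL's ParallelWorkerNumber: >= 0 inside a
 * parallel worker, -1 in the leader process. */
static int ParallelWorkerNumber = -1;

/* Report quals only from the leader, so each qual is counted once
 * even when the plan ran with parallel workers. */
static bool should_report_quals(void)
{
    return ParallelWorkerNumber < 0;
}
```

Whether the leader alone sees enough instrumentation to account for the workers' tuples is a separate question; this only avoids double counting.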

missing 0.0.8

Isn't pg_qualstats--0.0.8.sql missing?
Because:
capa4it=# create extension pg_qualstats;
ERROR: could not stat file "/tech/postgres/9.4.1/server/share/extension/pg_qualstats--0.0.8.sql": No such file or directory
capa4it=#

PG15 binary for Windows

The release notes for 2.0.4 state that support for PostgreSQL 15 has been added, but the release does not contain any .exe/zip for PG15 on Windows.

[42P18] ERROR: malformed array literal: "hash"

I'm running 9.6 and I'm getting the issue below:

[22P02] ERROR: malformed array literal: "hash" Detail: Array value must start with "{" or dimension information. Where: PL/pgSQL function pg_qualstats_index_advisor(integer,integer,text[]) line 27 at assignment

If I change the mentioned line to:

IF pg_catalog.current_setting('server_version_num')::bigint < 100000 THEN
        forbidden_am := forbidden_am || ARRAY['hash'];
END IF;

I get [22023] ERROR: cannot extract elements from a scalar

Any tips?

pg_qualstats can fail to group all constant values together

Hello Dalibo Team,

I think I found an issue with pg_qualstats. I attach a test script and its result at the end.

When a query uses one or more indexes, pg_qualstats does not provide qualid or uniquequalid. This absence seems to impact the results in powa-web: powa-web can't create an explain plan because the argument array is not complete (we can see it with pg_qualstats_by_query).

The problem occurs when there is an index. No problem with a primary key or without indexes.

Script to reproduce this issue:
test_pgqual_sql.txt

The result on my Postgres:
result.txt
result_pk.txt

Regards

pg_qualstats_indexes usage post 2.0

I was trying to use pg_qualstats for the first time, and naively ran select * from pg_qualstats_indexes following the instructions in the readme, which led to an error.

Could it be that the pg_qualstats_indexes view was removed in 6619c10 and not reflected in README.md?

End trimmed - pg_qualstats_example_query()

Postgres version 11.0

I have a problem with the pg_qualstats_example_query() function.
For some queries the end is trimmed, mostly for the longer queries.

I will try not to change the formatting. This should be exactly the same as seen in pg_stat_statements.
Prepared statement query -

SELECT 
			(SELECT(get_user_name(su.school_usr_id))) as full_name,
			cps.class_name,
			scs.school_classes_id,
			u.usr_id,
			cps.transfer_type,
			(
				SELECT count(scs1.school_classes_id) FROM school_classes_students scs1 WHERE
				scs.school_classes_id = scs1.school_classes_id
			) as total_students
		FROM
			users_v u
			INNER JOIN school_users su ON (u.usr_id = su.usr_id)
			INNER JOIN school_classes_students_v scs ON (su.school_usr_id = scs.school_usr_id)
			INNER JOIN close_period_students cps ON (scs.school_classes_students_id = cps.school_classes_students_id)
			INNER JOIN close_period_classes cpc ON (cpc.name = cps.class_name)
		WHERE
			now() BETWEEN u.start_date AND COALESCE(u.end_date,now()) and
			now() BETWEEN su.start_date AND COALESCE(su.end_date,now()) and
			now() BETWEEN scs.start_date AND COALESCE(scs.end_date,now()) and
			su.school_id = $1 and
			cps.transfer_type != $2 and
			cps.transfer_type != $3
			and cps.class_name in ($4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22)

Again, the formatting is exactly as returned by the function.
The result I get from the function looks like this -

		SELECT 
			(SELECT(get_user_name(su.school_usr_id))) as full_name,
			cps.class_name,
			scs.school_classes_id,
			u.usr_id,
			cps.transfer_type,
			(
				SELECT count(scs1.school_classes_id) FROM school_classes_students scs1 WHERE
				scs.school_classes_id = scs1.school_classes_id
			) as total_students
		FROM
			users_v u
			INNER JOIN school_users su ON (u.usr_id = su.usr_id)
			INNER JOIN school_classes_students_v scs ON (su.school_usr_id = scs.school_usr_id)
			INNER JOIN close_period_students cps ON (scs.school_classes_students_id = cps.school_classes_students_id)
			INNER JOIN close_period_classes cpc ON (cpc.name = cps.class_name)
		WHERE
			now() BETWEEN u.start_date AND COALESCE(u.end_date,now()) and
			now() BETWEEN su.start_date AND COALESCE(su.end_date,now()) and
			now() BETWEEN scs.start_date AND COALESCE(scs.end_date,now()) and
			su.school_id = '792' and
			cps.transfer_type != 'STUDENT_DROPPED' and
			cps.transfer_type != 'STUDENT_GRADUATED'
			and cps.class_name in ('10.a','10.b','10.c'

As we can see, the end part is missing: the IN (...) list is not complete.

Segmentation fault

Hi there 👋

After an upgrade of pg_qualstats (from 1.0.9 to 2.0.1), I've started to get segfaults which cause postgres to restart while powa is taking snapshots.

Some environment info:

  • OS: Debian 10 - buster
  • Postgres: 11.7
  • Qualstats package upgrade: from 1.0.9-1.pgdg100+1 to 2.0.1-1.pgdg100+1

Error from postgres logs:

2020-05-06 14:44:34 UTC [4833-62] LOG:  background worker "powa" (PID 19562) was terminated by signal 11: Segmentation fault
2020-05-06 14:44:34 UTC [4833-63] DETAIL:  Failed process was running: SELECT powa_take_snapshot()
2020-05-06 14:44:34 UTC [4833-64] LOG:  terminating any other active server processes
2020-05-06 14:44:34 UTC [32741-1] [email protected] WARNING:  terminating connection because of crash of another server process
2020-05-06 14:44:34 UTC [32741-2] [email protected] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-05-06 14:44:34 UTC [32741-3] [email protected] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
...
...
2020-05-06 14:44:34 UTC [4833-65] LOG:  all server processes terminated; reinitializing
2020-05-06 14:44:34 UTC [16293-1] LOG:  database system was interrupted; last known up at 2020-05-06 14:20:03 UTC
2020-05-06 14:44:34 UTC [16328-1] user2@[local] FATAL:  the database system is in recovery mode
2020-05-06 14:44:35 UTC [16337-1] user2@[local] FATAL:  the database system is in recovery mode
2020-05-06 14:44:35 UTC [16293-2] LOG:  database system was not properly shut down; automatic recovery in progress
2020-05-06 14:44:35 UTC [16293-3] LOG:  redo starts at 51/327D5540
2020-05-06 14:44:35 UTC [16293-4] LOG:  redo done at 51/32935E98
2020-05-06 14:44:35 UTC [16293-5] LOG:  last completed transaction was at log time 2020-05-06 14:39:34.381339+00
2020-05-06 14:44:35 UTC [16293-6] LOG:  checkpoint starting: end-of-recovery immediate
2020-05-06 14:44:35 UTC [16293-7] LOG:  checkpoint complete: wrote 169 buffers (0.5%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.003 s, sync=0.002 s, total=0.011 s; sync files=15, longest=0.001 s, average=0.000 s; distance=1410 kB, estimate=1410 kB
2020-05-06 14:44:35 UTC [4833-66] LOG:  database system is ready to accept connections

Meanwhile at kernel log:

[18579089.750187] traps: postgres[25395] general protection ip:5619cf6a452d sp:7ffea07b5070 error:0 in postgres[5619cf699000+465000]
[18584190.492834] postgres[19562]: segfault at 0 ip 00005619cf6a452d sp 00007ffea07b5070 error 4 in postgres[5619cf699000+465000]
[18584190.492843] Code: 43 01 49 83 c4 70 4c 39 fb 74 72 48 89 c3 41 80 3c 1e 00 75 e9 49 0f bf 04 24 48 8b 3c de 66 83 f8 ff 75 a9 41 80 7c 24 11 70 <0f> b6 17 74 1e f6 c2 03 75 19 8b 0f c1 e9 02 83 e9 04 4c 8d 41 01

(After the upgrade, I manually executed drop extension + create extension for pg_qualstats in the powa database to upgrade it to v2.0.1.)

I couldn't find anything specific I need to apply on the upgrade, any suggestions?

Thanks in advance 👍

Hanged randomly in hash_seq_search

Hello,

When I execute the following query around 10-50 times in a row, postgres
hangs randomly on the connection. New connections still work.

The CPU peaks at 100% and the process needs to be SIGKILLed; SIGTERM is
not handled.

The request is:
SELECT * FROM MEASURE_FLAT_H INNER JOIN ( SELECT max(dt) as dtmax FROM MEASURE_FLAT_H WHERE (ASSETS_NAME = ?) AND (METRIC_NAME = ?) ) d ON (D.DTMAX = DT) WHERE (ASSETS_NAME = ?) AND (METRIC_NAME = ?)
At this step, no lock is found in pg_locks.

The process can be found stuck in the hash_seq_search().

Call stack:
#0 0x000000000075e919 in hash_seq_search ()
#1 0x00007fd0af74fb2b in pgqs_entry_dealloc ()

from /tech/postgres/9.4.1/server/lib/pg_qualstats.so
#2 0x00007fd0af750a15 in pgqs_ExecutorEnd ()

from /tech/postgres/9.4.1/server/lib/pg_qualstats.so
#3 0x000000000055dade in PortalCleanup ()
#4 0x000000000077438a in PortalDrop ()
#5 0x000000000077472a in PreCommit_Portals ()
#6 0x00000000004b36ec in CommitTransaction ()
#7 0x00000000004b4ab5 in CommitTransactionCommand ()
#8 0x000000000067a269 in finish_xact_command ()
#9 0x000000000067d685 in PostgresMain ()
#10 0x000000000062c246 in PostmasterMain ()
#11 0x00000000005c4ab8 in main ()

Each time, I've seen the process looping infinitely from 75e910 to 75e927:

0x000000000075e910 <+96>: mov (%r9,%rcx,8),%rax
0x000000000075e914 <+100>: test %rax,%rax
0x000000000075e917 <+103>: jne 0x75e93c <hash_seq_search+140>
0x000000000075e919 <+105>: add $0x1,%edx
=> 0x000000000075e91c <+108>: cmp %edx,%esi
0x000000000075e91e <+110>: jb 0x75e958 <hash_seq_search+168>
0x000000000075e920 <+112>: add $0x1,%rcx
0x000000000075e924 <+116>: cmp %rcx,%r8
0x000000000075e927 <+119>: jg 0x75e910 <hash_seq_search+96>

For information, other hangs were also observed in the past with other queries in similar situations, but I don't have traces and can't confirm that the issue was also in hash_seq_search().

Best regards,

Amaury

missing call to InstrEndLoop

pgqs_ExecutorEnd gets called before the regular ExecutorEnd, which means the instrumentation may not be finalized: there may still be data from the last loop in some of the nodes. pgqs_collectNodeStats needs to do something like this:

if (planstate->instrument)
	InstrEndLoop(planstate->instrument);

similarly to ExplainNode().

Can't CREATE EXTENSION

Hi,

# CREATE EXTENSION pg_qualstats;
FATAL:  cannot create PGC_POSTMASTER variables after startup
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.

I don't really understand what's happening here. What should I do to load the extension? (note that I restarted postgresql and pg_qualstats is in shared_preload_libraries in postgresql.conf)

Build fails on Solaris 10 SPARC with Solaris Studio 12.4

I want to build pg_qualstats on Solaris 10 SPARC with Solaris Studio 12.4 and it fails with the following error:

bash-3.00$ gmake
/opt/studio11/SUNWspro/bin/cc -Xa -xO3 -xspace -Xa -xildoff -xCC -xarch=generic64 -KPIC -I. -I./ -I/appli/pgsql/9.4.6/include/64/server -I/appli/pgsql/9.4.6/include/64/internal -I/usr/include/libxml2 -I/usr/sfw/include -I/opt/csw/include -c -o pg_qualstats.o pg_qualstats.c
cc: Warning: -xarch=generic64 is deprecated, use -m64 to create 64-bit programs
"pg_qualstats.c", line 647: invalid cast expression
cc: acomp failed for pg_qualstats.c
gmake: *** [pg_qualstats.o] Error 2

I am using master. I have the same error with version 0.0.9, at line 642.
Postgres version is 9.4.6, 64-bit.

Build failure with PG12: pg_qualstats.c:47:10: fatal error: optimizer/var.h: No such file or directory

Building with 12beta1:

16:13:05 gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -fno-omit-frame-pointer -g -O2 -fdebug-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I. -I./ -I/usr/include/postgresql/12/server -I/usr/include/postgresql/internal  -Wdate-time -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include/mit-krb5  -c -o pg_qualstats.o pg_qualstats.c
16:13:05 pg_qualstats.c:47:10: fatal error: optimizer/var.h: No such file or directory
16:13:05  #include "optimizer/var.h"
16:13:05           ^~~~~~~~~~~~~~~~~
16:13:05 compilation terminated.

version 2.0.2 has no update path from version "1.0.7" and could not find function "pg_qualstats_2_0" in file

Hello
I have updated the binaries to the latest 2.0.2 but couldn't install it on my PG 12.5.

  1. Update failed:
     alter extension pg_qualstats update;
     ERROR: extension "pg_qualstats" has no update path from version "1.0.7" to version "2.0.2"
  2. Creating from scratch failed too:
     postgres=# drop extension pg_qualstats;
     DROP EXTENSION
     postgres=# create extension pg_qualstats;
     ERROR: could not find function "pg_qualstats_2_0" in file "/usr/pgsql-10/lib/pg_qualstats.so"

cppcheck finds a bug

pg_qualstats.c:741:15: style: Variable 'queryEntry' is assigned a value that is never used. [unreadVariable]
   queryEntry = (pgqsQueryStringEntry *) hash_search_with_hash_value(pgqs_query_examples_hash, &queryKey,
              ^

[Doc] ALTER EXTENSION UPDATE fails

There are no SQL scripts, so ALTER EXTENSION UPDATE fails:

powa=# ALTER EXTENSION pg_qualstats UPDATE;
ERROR: extension "pg_qualstats" has no update path from version "2.0.1" to version "2.0.2"

As in HypoPG/hypopg#54, I suppose that it just needs a line about the upgrade in the documentation.

Does not work with postgres10

Cannot install this extension on postgres10. Whenever I start postgres with "pg_qualstats" in "shared_preload_libraries" there is a core dump and postgres won't start.

multi node stat collection

Suppose there are 3 PG backends (running on 3 nodes) and the same data has been loaded into each backend.

On each backend, stats are collected by pg_qualstats via a properly configured shared_preload_libraries.
There are several clients.
Each client may issue read queries against any of the 3 backends, so each backend only has a partial view of the total workload.

This issue is to develop a tool that can give index advice based on the aggregated stats from all the backends.

pg_qualstats_example_query() and queries() don't fill in all the predicate values

There seems to be an issue where the predicate values are not filled in when either pg_qualstats_example_query() or pg_qualstats_example_queries() is executed. What conditions must be met for the values to be populated?

Example:
predicate as appears in pg_stat_statements
where mem2_.mem_id=$1 and ($2 is null or compute1_.code is null or compute1_.code=$3) and ($6=$4 and $7=mempro0_.active and $8=compute1_.active and $9=mem2_.active or $10=$5 and ($11<>mempro0_.active or $12<>compute1_.active or $13<>mem2_.active))

predicate as returned from pg_qualstats_example_query
where mem2_.mem_id=$1 and ($2 is null or compute1_.code is null or compute1_.code=$3) and (true=$4 and true=mempro0_.active and true=compute1_.active and true=mem2_.active or false=$5 and (true<>mempro0_.active or true<>compute1_.active or true<>mem2_.active))

It appears that it only gets the values 'true' or 'false'. These queries all come from Hibernate / Spring JPA. It works fine if I run the query by hand instead of the application executing it. Is there a known issue with Hibernate/Spring JPA queries?

crash on PG 9.6.5 make installcheck

Hi,

running the PostgreSQL 9.6.5 regression tests on a database with powa + pg_qualstats installed has a 30% chance of crashing for me while running the object_address test. Backtrace below.

regards,
andreas

    Program terminated with signal SIGSEGV, Segmentation fault.
    #0  pgqs_collectNodeStats (planstate=0x557e4d260cd8, context=context@entry=0x557e4d26f368, ancestors=0x0) at pg_qualstats.c:938
    938		total_filtered = planstate->instrument->nfiltered1 + planstate->instrument->nfiltered2;
    (gdb) bt
    #0  pgqs_collectNodeStats (planstate=0x557e4d260cd8, context=context@entry=0x557e4d26f368, ancestors=0x0) at pg_qualstats.c:938
    #1  0x00007f3d6b55fa0d in pgqs_ExecutorEnd (queryDesc=0x557e4d25ebe8) at pg_qualstats.c:695
    #2  0x0000557e4b3c96c3 in PortalCleanup (portal=0x557e4d154d58) at portalcmds.c:280
    #3  0x0000557e4b64bbf3 in PortalDrop (portal=0x557e4d154d58, isTopCommit=<optimized out>) at portalmem.c:510
    #4  0x0000557e4b43020c in SPI_cursor_close (portal=<optimized out>) at spi.c:1487
    #5  0x00007f3d61f9904e in exec_stmt_fors (stmt=0x557e4d22e748, estate=0x7ffc6d6fdf80) at pl_exec.c:2121
    #6  exec_stmt (stmt=0x557e4d22e748, estate=0x7ffc6d6fdf80) at pl_exec.c:1467
    #7  exec_stmts (estate=0x7ffc6d6fdf80, stmts=<optimized out>) at pl_exec.c:1398
    #8  0x00007f3d61f9c6f1 in exec_for_query (estate=estate@entry=0x7ffc6d6fdf80, stmt=stmt@entry=0x557e4d22e570, portal=0x557e4d154c40, prefetch_ok=prefetch_ok@entry=1 '\001') at pl_exec.c:5209
    #9  0x00007f3d61f9903e in exec_stmt_fors (stmt=0x557e4d22e570, estate=0x7ffc6d6fdf80) at pl_exec.c:2116
    #10 exec_stmt (stmt=0x557e4d22e570, estate=0x7ffc6d6fdf80) at pl_exec.c:1467
    #11 exec_stmts (estate=0x7ffc6d6fdf80, stmts=<optimized out>) at pl_exec.c:1398
    #12 0x00007f3d61f9c6f1 in exec_for_query (estate=estate@entry=0x7ffc6d6fdf80, stmt=stmt@entry=0x557e4d22e1c8, portal=0x557e4d154b28, prefetch_ok=prefetch_ok@entry=1 '\001') at pl_exec.c:5209
    #13 0x00007f3d61f9903e in exec_stmt_fors (stmt=0x557e4d22e1c8, estate=0x7ffc6d6fdf80) at pl_exec.c:2116
    #14 exec_stmt (stmt=0x557e4d22e1c8, estate=0x7ffc6d6fdf80) at pl_exec.c:1467
    #15 exec_stmts (estate=0x7ffc6d6fdf80, stmts=<optimized out>) at pl_exec.c:1398
    #16 0x00007f3d61f9b627 in exec_stmt_block (estate=estate@entry=0x7ffc6d6fdf80, block=0x557e4d22f678) at pl_exec.c:1336
    #17 0x00007f3d61f9b845 in plpgsql_exec_function (func=func@entry=0x557e4d20e930, fcinfo=fcinfo@entry=0x7ffc6d6fe1c0, simple_eval_estate=simple_eval_estate@entry=0x557e4d223da0) at pl_exec.c:434
    #18 0x00007f3d61f901f1 in plpgsql_inline_handler (fcinfo=<optimized out>) at pl_handler.c:329
    #19 0x0000557e4b62e262 in OidFunctionCall1Coll (functionId=<optimized out>, collation=collation@entry=0, arg1=94000947505744) at fmgr.c:1592
    #20 0x0000557e4b3bef72 in ExecuteDoStmt (stmt=stmt@entry=0x557e4d202d00) at functioncmds.c:2177
    #21 0x0000557e4b52d178 in standard_ProcessUtility (parsetree=0x557e4d202d00, queryString=0x557e4d1bfde0 [...], context=PROCESS_UTILITY_TOPLEVEL, params=0x0, dest=0x557e4d1c14f0, completionTag=0x7ffc6d6ff060 "") at utility.c:515
    #22 0x00007f3d6b768d1d in pgss_ProcessUtility (parsetree=0x557e4d202d00, queryString=0x557e4d1bfde0 [...], context=PROCESS_UTILITY_TOPLEVEL, params=0x0, dest=0x557e4d1c14f0, completionTag=0x7ffc6d6ff060 "") at pg_stat_statements.c:986
    #23 0x0000557e4b529f34 in PortalRunUtility (portal=0x557e4d154a10, utilityStmt=0x557e4d202d00, isTopLevel=<optimized out>, setHoldSnapshot=<optimized out>, dest=0x557e4d1c14f0, completionTag=0x7ffc6d6ff060 "") at pquery.c:1193
    #24 0x0000557e4b52aad2 in PortalRunMulti (portal=portal@entry=0x557e4d154a10, isTopLevel=isTopLevel@entry=1 '\001', setHoldSnapshot=setHoldSnapshot@entry=0 '\000', dest=dest@entry=0x557e4d1c14f0, altdest=altdest@entry=0x557e4d1c14f0, completionTag=completionTag@entry=0x7ffc6d6ff060 "") at pquery.c:1349
    #25 0x0000557e4b52b81e in PortalRun (portal=portal@entry=0x557e4d154a10, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0x557e4d1c14f0, altdest=altdest@entry=0x557e4d1c14f0, completionTag=completionTag@entry=0x7ffc6d6ff060 "") at pquery.c:815
    #26 0x0000557e4b528f62 in exec_simple_query (query_string=0x557e4d1bfde0 [...]) at postgres.c:1086
    #27 PostgresMain (argc=<optimized out>, argv=argv@entry=0x557e4d1636b0, dbname=<optimized out>, username=<optimized out>) at postgres.c:4074
    #28 0x0000557e4b27e13d in BackendRun (port=0x557e4d15fb50) at postmaster.c:4294
    #29 BackendStartup (port=0x557e4d15fb50) at postmaster.c:3968
    #30 ServerLoop () at postmaster.c:1719
    #31 0x0000557e4b4be2e3 in PostmasterMain (argc=3, argv=0x557e4d135520) at postmaster.c:1327
    #32 0x0000557e4b27f6bd in main (argc=3, argv=0x557e4d135520) at main.c:228
    (gdb) list
    933		if (list_length(parent) > 1)
    934		{
    935			context->uniquequalid = hashExpr((Expr *) parent, context, true);
    936			context->qualid = hashExpr((Expr *) parent, context, false);
    937		}
    938		total_filtered = planstate->instrument->nfiltered1 + planstate->instrument->nfiltered2;
    939		context->nbfiltered = planstate->instrument->nfiltered1 + planstate->instrument->nfiltered2;
    940		context->count = planstate->instrument->tuplecount + planstate->instrument->ntuples + total_filtered;
    941		/* Add the indexquals */
    942		context->evaltype = 'i';
    (gdb) p planstate->instrument
    $1 = (Instrumentation *) 0x0
    (gdb) p debug_query_string
    $2 = 0x557e4d1bfde0 "DO $$\nDECLARE\n\tobjtype text;\n\tnames\ttext[];\n\targs\ttext[];\nBEGIN\n\tFOR objtype IN VALUES\n\t\t('table'), ('index'), ('sequence'), ('view'),\n\t\t('materialized view'), ('foreign table'),\n\t\t('table column'), ('foreign table column'),\n\t\t('aggregate'), ('function'), ('type'), ('cast'),\n\t\t('table constraint'), ('domain constraint'), ('conversion'), ('default value'),\n\t\t('operator'), ('operator class'), ('operator family'), ('rule'), ('trigger'),\n\t\t('text search parser'), ('text search dictionary'),\n\t\t('text search template'), ('text search configuration'),\n\t\t('policy'), ('user mapping'), ('default acl'), ('transform'),\n\t\t('operator of access method'), ('function of access method')\n\tLOOP\n\t\tFOR names IN VALUES ('{eins}'), ('{addr_nsp, zwei}'), ('{eins, zwei, drei}')\n\t\tLOOP\n\t\t\tFOR args IN VALUES ('{}'), ('{integer}')\n\t\t\tLOOP\n\t\t\t\tBEGIN\n\t\t\t\t\tPERFORM pg_get_object_address(objtype, names, args);\n\t\t\t\tEXCEPTION WHEN OTHERS THEN\n\t\t\t\t\t\tRAISE WARNING 'error for %,%,%: %', objtype, names, args, sqlerrm;\n\t\t\t\tEND;\n\t\t\tEND LOOP;\n\t\tEND LOOP;\n\tEND LOOP;\nEND;\n$$;"
    (gdb) 

pg_qualstats_example_query truncated return

We have some large queries and are trying to develop a solution that will do automatic index recommendations based on the Percona implementation (with some variance for the 2.0.X release of pg_qualstats).

However, when we use pg_qualstats_example_query, it truncates the returned query to 2048 characters. This makes the query invalid, so it cannot be used in an EXPLAIN to compare performance with and without the recommendation.

I have been unable to locate the reason the return would be truncated. Is there something I am missing that controls the max length of the returned query?

Build error on RHEL 7 *and* PostgreSQL <= 11

Hi,

2.0.0 fails to build on RHEL 7 against PostgreSQL <= 11. This does not happen on other distros or PostgreSQL versions; RHEL 7 with PG12 builds fine.

gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC)

Here is the error:

  • /usr/bin/make USE_PGXS=1 -j4
    pg_qualstats.c: In function 'pgqs_entry_err_estim':
    pg_qualstats.c:921:2: error: 'for' loop initial declarations are only allowed in C99 mode
    for (int i = 0; i < 2; i++)
    ^
    pg_qualstats.c:921:2: note: use option -std=c99 or -std=gnu99 to compile your code
    pg_qualstats.c: In function 'pgqs_process_opexpr':
    pg_qualstats.c:1519:3: error: 'for' loop initial declarations are only allowed in C99 mode
    for (int step = 0; step < 2; step++)
    ^
    pg_qualstats.c: In function 'pg_qualstats_common':
    pg_qualstats.c:2002:3: error: 'for' loop initial declarations are only allowed in C99 mode
    for (int j = 0; j < 2; j++)
    ^
    make[1]: *** [pg_qualstats.o] Error 1
    make[1]: *** Waiting for unfinished jobs....
    error: Bad exit status from /var/tmp/rpm-tmp.CK05mX

Can you please take a look?

Regards, Devrim
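For context, a hedged sketch of the pre-C99-compatible form (the names here are illustrative, not the actual pg_qualstats code): gcc 4.8 defaults to gnu90, which rejects declarations inside a `for` initializer, so either the counters move above the loop or the build passes -std=c99 explicitly.

```c
#include <assert.h>

/* Illustrative only: a loop written so it also compiles under C89/C90,
 * where declarations inside a `for` initializer are not allowed. */
static int sum_steps(void)
{
    int step;                   /* declared up front: valid C89 */
    int total = 0;

    for (step = 0; step < 2; step++)
        total += step;
    return total;
}
```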

pg_qualstats_index_advisor index within JSON

I butchered together some SQL statements :)

CREATE TABLE json_test(body jsonb);

with data as (
select
	i,
	substr('8000012000345987243985743285723945798274359824375287435098432759284375843759824375423095208435019843509283475082437592843859432798572430985720984375098432759384750432986772437540958724385743985721098437520984375',
	i,
	1) as tel
from
	generate_series(1, length('8000012000345987243985743285723945798274359824375287435098432759284375843759824375423095208435019843509283475082437592843859432798572430985720984375098432759384750432986772437540958724385743985721098437520984375')) as g(i)
)
insert
	into
	json_test(body)
select
	('{"specCode": {"name": "Telephone Number", "text": "TEL_NUM"}, "specValue": {"code": null, "text":"' || tel || '"}}')::json
from
	data
returning *;
--explain analyze
select
	body
from
	json_test
where
	body->'specValue'->>'text' = '1';

create INDEX ON "json_test" using btree ((body -> 'specValue' ->>'text'));

This creates a table with a JSONB column, generates some random data, and queries a field within the JSON data (the explain is commented out).
At the end there is code to create a BTREE index on the field in the WHERE clause (the condition in the WHERE clause matches the index expression).
With the index in place, you can run the EXPLAIN statement from above and see that the index is used.

Such queries are not covered by the index advisor.

PG16 binary for Windows

Hi, would it be possible to include a binary for PG16 for Windows? Would be much appreciated! Thanks

Dereference after null

I have run static analyzer tests. Here is the warning:
After having been compared to a NULL value at pg_qualstats.c:1069,
the pointer 'planstate->instrument' is dereferenced at pg_qualstats.c:1121.
This can cause the program to crash, can't it?

pg_qualstats causing coredump and db crash

I have been using pg_qualstats version 0.0.7 for several months without issues, but since we upgraded to Postgres 9.5.0 in late January, one of our particularly load-intensive databases has experienced 2 crashes related to pg_qualstats. Here is what the postgres log looks like:

2016-03-31 10:09:50.216 EDT [18427]: [10-1] user=,db=,host=,sessionid=56bf3b24.47fb LOG: server process (PID 13930) was terminated by signal 11: Segmentation fault
2016-03-31 10:09:50.216 EDT [18427]: [11-1] user=,db=,host=,sessionid=56bf3b24.47fb DETAIL: Failed process was running: COMMIT
2016-03-31 10:09:50.216 EDT [18427]: [12-1] user=,db=,host=,sessionid=56bf3b24.47fb LOG: terminating any other active server processes

And here is the corresponding message in /var/log/messages:
kernel: postgres[13930]: segfault at 8 ip 00007f9fb41ca664 sp 00007ffe696f7b70 error 4 in pg_qualstats.so[7f9fb41c5000+7000]

The issue occurs even though the extension is not created in any of the databases; the above crash occurred with just the pg_qualstats library loaded via shared_preload_libraries. The crash is infrequent and I am not able to reproduce the condition causing it. I hope this is helpful in figuring out the cause of this behavior.
Thanks,
Boyan Botev

Change query ID from 32 bits to 64 bits

The query ID is now 64 bits (to avoid collisions). From commit cff440d368:

This takes advantage of the infrastructure introduced by commit
81c5e46c490e2426db243eada186995da5bb0ba7 to greatly reduce the
likelihood that two different queries will end up with the same query
ID. It's still possible, of course, but whereas before the chances
of a collision reached 25% around 50,000 queries, it will now take
more than 3 billion queries.

Backward incompatibility: Because the type exposed at the SQL level is
int8, users may now see negative query IDs in the pg_stat_statements
view (and also, query IDs more than 4 billion, which was the old
limit).
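As a small illustration of that backward-incompatibility note (the value below is hypothetical, not taken from pg_stat_statements): the query ID is computed as an unsigned 64-bit hash but exposed to SQL as int8, so any value at or above 2^63 shows up as negative.

```c
#include <assert.h>
#include <stdint.h>

/* A query ID is an unsigned 64-bit hash internally, but the SQL type
 * is int8; values >= 2^63 therefore appear negative in the view. */
static int64_t exposed_queryid(uint64_t raw)
{
    return (int64_t) raw;   /* same bits, signed interpretation */
}
```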

execution_count column does not show proper row count

Hi Team,

I was working with the pg_qualstats extension for the first time. The documentation says the 'execution_count' column of pg_qualstats shows the number of records processed.

I tested the scenario below. The original value of this column was 14. I then ran an UPDATE which updates a single record, after which the value of execution_count changed to 18, even though only one record was updated. Why this difference?

postgres=# select * from pg_qualstats_pretty;
 left_schema |    left_table    | left_column |   operator   | right_schema | right_table | right_column | occurences | execution_count | nbfiltered 
-------------+------------------+-------------+--------------+--------------+-------------+--------------+------------+-----------------+------------
 public      | pgbench_branches | bid         | pg_catalog.= |              |             |              |          2 |              14 |          0
(1 row)

postgres=# UPDATE pgbench_branches SET bbalance = bbalance + 20 where bid=3;
UPDATE 1
postgres=# select * from pg_qualstats_pretty;
 left_schema |    left_table    | left_column |   operator   | right_schema | right_table | right_column | occurences | execution_count | nbfiltered 
-------------+------------------+-------------+--------------+--------------+-------------+--------------+------------+-----------------+------------
 public      | pgbench_branches | bid         | pg_catalog.= |              |             |              |          3 |              18 |          0
(1 row)

clang reports warnings on debian stretch

Hello,

I built pg_qualstats on Debian stretch, which uses clang. clang reports several warnings:

pg_qualstats.c:1520:27: warning: equality comparison with extraneous parentheses
      [-Wparentheses-equality]
                                if ((boolexpr->boolop == AND_EXPR))
                                     ~~~~~~~~~~~~~~~~~^~~~~~~~~~~
pg_qualstats.c:1520:27: note: remove extraneous parentheses around the comparison to silence this
      warning
                                if ((boolexpr->boolop == AND_EXPR))
                                    ~                 ^          ~
pg_qualstats.c:1520:27: note: use '=' to turn this equality comparison into an assignment
                                if ((boolexpr->boolop == AND_EXPR))
                                                      ^~
pg_qualstats.c:1755:14: warning: address of array 'entry->constvalue' will always evaluate to 'true'
      [-Wpointer-bool-conversion]
                if (entry->constvalue)
                ~~  ~~~~~~~^~~~~~~~~~

#21 fixes the first message.

For the second message I don't know what to do.

pg_qualstats : segmentation fault

Hello,

Under heavy load I observe a problem, a segmentation fault. Here are the details:
This is an installation with Powa and pg_qualstats.
Installation:
OS: Centos 7 (CentOS Linux release 7.4.1708 (Core))
CPU : E5620 6core / Ram: 8 GB
PostgreSQL : 9.6.8 (from PGDG)
Powa : 3.1.1
pg_qualstats : 1.0.3

Postgresql.conf configuration :

work_mem = 16MB
maintenance_work_mem = 128MB
shared_buffers = 2GB
shared_preload_libraries='pg_stat_statements,powa,pg_qualstats'
powa.frequency='5min'
powa.retention='5d'
powa.database='powa'
powa.coalesce='100'
pg_qualstats.enabled = true
pg_qualstats.max = 10000

Powa installation :

CREATE DATABASE powa ;
\c powa
CREATE EXTENSION powa CASCADE;
CREATE EXTENSION pg_qualstats;

How to reproduce the bug:
Creating the test pgbench table :

/usr/pgsql-9.6/bin/pgbench -i -s 10 test

script4.sql :

\set aaa random(1, 100000)
\set bbb random(1, 50000)
UPDATE pgbench_accounts SET filler = :bbb where aid = :aaa ;

pg bench :

/usr/pgsql-9.6/bin/pgbench -v -c -f script4.sql -j 1 -n -t 10000000000000000000 test

  • Test with 100 connections : no error...
  • Test with 200 connections : seg fault

I added some NOTICE calls in pg_qualstats to better understand the problem, and I observe:

  1. First of all, I observe out-of-memory errors in the PostgreSQL logs:

2018-04-04 09:32:41 CEST [14337]: [1-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO hash_get_num_entries(pgqs_hash) + hash_get_num_entries(pgqs_localhash) : 1
2018-04-04 09:32:41 CEST [14337]: [2-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_ExecutorEnd, pg_qualstats.c:735
2018-04-04 09:32:41 CEST [14337]: [3-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO hash_get_num_entries(pgqs_hash) (before search) : 0
2018-04-04 09:32:41 CEST [14337]: [4-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_ExecutorEnd, pg_qualstats.c:745
2018-04-04 09:32:41 CEST [14337]: [5-1] user=postgres,db=test,client=[local],app=pgbench ERROR: 53200: out of shared memory
2018-04-04 09:32:41 CEST [14337]: [6-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: hash_search_with_hash_value, dynahash.c:1042
2018-04-04 09:32:41 CEST [14337]: [7-1] user=postgres,db=test,client=[local],app=pgbench STATEMENT: UPDATE pgbench_accounts SET filler = 34564 where aid = 43970 ;

  2. Then the number of entries in "pgqs_hash" suddenly becomes very high (much higher than "pgqs_max").

2018-04-04 09:32:41 CEST [14289]: [1-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO hash_get_num_entries(pgqs_hash) + hash_get_num_entries(pgqs_localhash) : 65537
2018-04-04 09:32:41 CEST [14289]: [2-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_ExecutorEnd, pg_qualstats.c:735

  3. This is repeated a few hundred times without the values changing...

2018-04-04 09:32:41 CEST [14289]: [1143-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO =>>>>pgqs_entry_dealloc<<<<=
2018-04-04 09:32:41 CEST [14289]: [1144-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_entry_dealloc, pg_qualstats.c:813
2018-04-04 09:32:41 CEST [14289]: [1145-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO pgqs_entry_dealloc hash_get_num_entries(pgqs_hash) : 65536
2018-04-04 09:32:41 CEST [14289]: [1146-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_entry_dealloc, pg_qualstats.c:827
2018-04-04 09:32:41 CEST [14289]: [1147-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO pgqs_entry_dealloc hash_estimate_size(pgqs_hash) : 2100680
2018-04-04 09:32:41 CEST [14289]: [1148-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_entry_dealloc, pg_qualstats.c:828

  4. Then the memory allocation requests become inconsistent:

2018-04-04 09:32:41 CEST [14289]: [12465-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO =>>>>pgqs_entry_dealloc<<<<=
2018-04-04 09:32:41 CEST [14289]: [12466-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_entry_dealloc, pg_qualstats.c:813
2018-04-04 09:32:41 CEST [14289]: [12467-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO pgqs_entry_dealloc hash_get_num_entries(pgqs_hash) : 65536
2018-04-04 09:32:41 CEST [14289]: [12468-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_entry_dealloc, pg_qualstats.c:827
2018-04-04 09:32:41 CEST [14289]: [12469-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO pgqs_entry_dealloc hash_estimate_size(pgqs_hash) : 2100680
2018-04-04 09:32:41 CEST [14289]: [12470-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_entry_dealloc, pg_qualstats.c:828
2018-04-04 09:32:41 CEST [14289]: [12471-1] user=postgres,db=test,client=[local],app=pgbench ERROR: XX000: invalid memory alloc request size 8796093546496
2018-04-04 09:32:41 CEST [14289]: [12472-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: palloc, mcxt.c:900
2018-04-04 09:32:44 CEST [14280]: [1-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO hash_get_num_entries(pgqs_hash) + hash_get_num_entries(pgqs_localhash) : 72058693549621249
2018-04-04 09:32:44 CEST [14280]: [2-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_ExecutorEnd, pg_qualstats.c:735
2018-04-04 09:32:44 CEST [14280]: [3-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO =>>>>pgqs_entry_dealloc<<<<=
2018-04-04 09:32:44 CEST [14280]: [4-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_entry_dealloc, pg_qualstats.c:813
2018-04-04 09:32:44 CEST [14280]: [5-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO pgqs_entry_dealloc hash_get_num_entries(pgqs_hash) : 72058693549621248
2018-04-04 09:32:44 CEST [14280]: [6-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_entry_dealloc, pg_qualstats.c:827
2018-04-04 09:32:44 CEST [14280]: [7-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO pgqs_entry_dealloc hash_estimate_size(pgqs_hash) : 2886833749425128424
2018-04-04 09:32:44 CEST [14280]: [8-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_entry_dealloc, pg_qualstats.c:828
2018-04-04 09:32:44 CEST [14280]: [9-1] user=postgres,db=test,client=[local],app=pgbench ERROR: XX000: invalid memory alloc request size 576469548396969984
2018-04-04 09:32:44 CEST [14280]: [10-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: palloc, mcxt.c:900
2018-04-04 09:32:55 CEST [14327]: [1-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO hash_get_num_entries(pgqs_hash) + hash_get_num_entries(pgqs_localhash) : 72340172838076674
2018-04-04 09:32:55 CEST [14327]: [2-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_ExecutorEnd, pg_qualstats.c:735
2018-04-04 09:32:55 CEST [14327]: [3-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO =>>>>pgqs_entry_dealloc<<<<=
2018-04-04 09:32:55 CEST [14327]: [4-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_entry_dealloc, pg_qualstats.c:813
2018-04-04 09:32:55 CEST [14327]: [5-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO pgqs_entry_dealloc hash_get_num_entries(pgqs_hash) : 72340172838076673
2018-04-04 09:32:55 CEST [14327]: [6-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_entry_dealloc, pg_qualstats.c:827
2018-04-04 09:32:55 CEST [14327]: [7-1] user=postgres,db=test,client=[local],app=pgbench NOTICE: 01000: DEBUG_NGO pgqs_entry_dealloc hash_estimate_size(pgqs_hash) : 2893589252348059368
2018-04-04 09:32:55 CEST [14327]: [8-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: pgqs_entry_dealloc, pg_qualstats.c:828
2018-04-04 09:32:55 CEST [14327]: [9-1] user=postgres,db=test,client=[local],app=pgbench ERROR: XX000: invalid memory alloc request size 578721382704613384
2018-04-04 09:32:55 CEST [14327]: [10-1] user=postgres,db=test,client=[local],app=pgbench LOCATION: palloc, mcxt.c:900
2018-04-04 09:32:55 CEST [14327]: [11-1] user=postgres,db=test,client=[local],app=pgbench STATEMENT: UPDATE pgbench_accounts SET filler = 28559 where aid = 30566 ;

  5. Until it crashes:

2018-04-04 09:32:56 CEST [14464]: [6-1] user=,db=,client=,app= LOG: 00000: server process (PID 14550) was terminated by signal 11: Segmentation fault
2018-04-04 09:32:56 CEST [14464]: [7-1] user=,db=,client=,app= DETAIL: Failed process was running: UPDATE pgbench_accounts SET filler = 33748 where aid = 79882 ;
2018-04-04 09:32:56 CEST [14464]: [8-1] user=,db=,client=,app= LOCATION: LogChildExit, postmaster.c:3574
2018-04-04 09:32:56 CEST [14464]: [9-1] user=,db=,client=,app= LOG: 00000: terminating any other active server processes
2018-04-04 09:32:56 CEST [14464]: [10-1] user=,db=,client=,app= LOCATION: HandleChildCrash, postmaster.c:3294

Core was generated by `postgres: postgres test [local] UPDATE '.
Program terminated with signal 11, Segmentation fault.
#0 0x00000000007d553d in get_hash_entry (freelist_idx=0, hashp=0x1f6d5e8) at dynahash.c:1283
1283 dynahash.c

the coredump :
bt full

#0  0x00000000007d553d in get_hash_entry (freelist_idx=0, hashp=0x1df6638) at dynahash.c:1283
        hctl = 0x7f72c542f380
        newElement = 0x1000000
        borrow_from_idx = <optimized out>
#1  hash_search_with_hash_value (hashp=0x1df6638, keyPtr=keyPtr@entry=0x1ed5518, hashvalue=2293496263, 
    action=action@entry=HASH_ENTER, foundPtr=foundPtr@entry=0x7ffed465557f "") at dynahash.c:1032
        hctl = <optimized out>
        keysize = 24
        bucket = <optimized out>
        segment_num = <optimized out>
        segment_ndx = <optimized out>
        segp = <optimized out>
        currBucket = <optimized out>
        prevBucketPtr = 0x7f72c5430138
        match = <optimized out>
        freelist_idx = <optimized out>
        __func__ = "hash_search_with_hash_value"
#2  0x00000000007d55bd in hash_search (hashp=<optimized out>, keyPtr=keyPtr@entry=0x1ed5518, 
    action=action@entry=HASH_ENTER, foundPtr=foundPtr@entry=0x7ffed465557f "") at dynahash.c:889
No locals.
#3  0x00007f72c5d93f64 in pgqs_ExecutorEnd (queryDesc=0x1ec7908) at pg_qualstats.c:751
        nvictims = <optimized out>
        info = {num_partitions = 0, ssize = 0, dsize = 0, max_dsize = 0, ffactor = 0, keysize = 24, 
          entrysize = 176, hash = 0x7f72c5d91860 <pgqs_hash_fn>, match = 0x0, keycopy = 0x0, 
          alloc = 0x0, hcxt = 0x0, hctl = 0x0}
        newEntry = <optimized out>
        local_hash_seq = {hashp = 0x1ecfe18, curBucket = 8, curEntry = 0x0}
        context = <optimized out>
        localentry = 0x1ed5518
        queryKey = {queryid = 2441476858}
        queryEntry = <optimized out>
        found = 0 '\000'
        __func__ = "pgqs_ExecutorEnd"
 line 751: newEntry = (pgqsEntry *) hash_search(pgqs_hash, &localentry->key, HASH_ENTER, &found);
#4  0x00000000006d4ad0 in ProcessQuery (plan=<optimized out>, 
    sourceText=0x1e8cdb8 "UPDATE pgbench_accounts SET filler = 33748 where aid = 79882 ;", params=0x0, 
    dest=0x1ec2ab0, completionTag=0x7ffed4655a90 "UPDATE 1") at pquery.c:232
        queryDesc = 0x1ec7908
        __func__ = "ProcessQuery"
#5  0x00000000006d4c7d in PortalRunMulti (portal=portal@entry=0x1e14848, 
    isTopLevel=isTopLevel@entry=1 '\001', setHoldSnapshot=setHoldSnapshot@entry=0 '\000', 
    dest=dest@entry=0x1ec2ab0, altdest=altdest@entry=0x1ec2ab0, 
    completionTag=completionTag@entry=0x7ffed4655a90 "UPDATE 1") at pquery.c:1303
        pstmt = 0x1ec13b0
        stmt = 0x1ec13b0
        active_snapshot_set = 1 '\001'
        stmtlist_item = 0x1ec2a60
#6  0x00000000006d575f in PortalRun (portal=0x1e14848, count=9223372036854775807, 
    isTopLevel=<optimized out>, dest=0x1ec2ab0, altdest=0x1ec2ab0, 
    completionTag=0x7ffed4655a90 "UPDATE 1") at pquery.c:815
        save_exception_stack = 0x7ffed46559a0
        save_context_stack = 0x0
        local_sigjmp_buf = {{__jmpbuf = {0, 3810215649922779956, 32037632, 32254640, 2, 31418472, 
              -3810834759769650380, 3810214515159468852}, __mask_was_saved = 0, __saved_mask = {
              __val = {8215492, 1, 31951320, 9795934, 31541320, 9795934, 1, 1, 88, 31541320, 32037632, 
                9678926, 8304801, 32037552, 2, 31541320}}}}
        result = <optimized out>
        nprocessed = <optimized out>
        saveTopTransactionResourceOwner = 0x1e34798
        saveTopTransactionContext = 0x1df6868
        saveActivePortal = 0x0
        saveResourceOwner = 0x1e34798
        savePortalContext = 0x0
        saveMemoryContext = 0x1df6868
        __func__ = "PortalRun"
#7  0x00000000006d34b6 in PostgresMain (argc=<optimized out>, argv=<optimized out>, 
    dbname=<optimized out>, username=<optimized out>) at postgres.c:1086
        firstchar = 32037552
        input_message = {
          data = 0x1e8cdb8 "UPDATE pgbench_accounts SET filler = 33748 where aid = 79882 ;", len = 63, 
          maxlen = 1024, cursor = 63}
        local_sigjmp_buf = {{__jmpbuf = {31626792, 3810213719076113204, 1, 0, 0, 31588352, 
              -3810834759415231692, 3810214517272742708}, __mask_was_saved = 1, __saved_mask = {
              __val = {0, 31588352, 140131109176696, 0, 206158430256, 140732461832944, 140732461832736, 
                31548584, 16, 31626928, 48, 31548584, 31626792, 0, 31626880, 31626928}}}}
        send_ready_for_query = 0 '\000'
        disable_idle_in_transaction_timeout = <optimized out>
        __func__ = "PostgresMain"
#8  0x000000000046f934 in BackendRun (port=0x1e20340) at postmaster.c:4342
        ac = 1
        secs = 576143237
        usecs = 217392
        i = 1
        av = 0x1e296e8
        maxac = <optimized out>
#9  BackendStartup (port=0x1e20340) at postmaster.c:4016
        bn = 0x1e20000
        pid = 0
#10 ServerLoop () at postmaster.c:1721
        rmask = {fds_bits = {32, 0 <repeats 15 times>}}
        selres = <optimized out>
        now = <optimized out>
        readmask = {fds_bits = {120, 0 <repeats 15 times>}}
        last_lockfile_recheck_time = 1522827994
        last_touch_time = 1522827154
        __func__ = "ServerLoop"
#11 0x00000000006764b9 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1df5280)
    at postmaster.c:1329
        opt = <optimized out>
        status = <optimized out>
        userDoption = <optimized out>
        listen_addr_saved = <optimized out>
        i = <optimized out>
        output_config_variable = <optimized out>
        __func__ = "PostmasterMain"
#12 0x000000000047059e in main (argc=3, argv=0x1df5280) at main.c:228
No locals.

When the pg_qualstats extension is disabled, the problem does not occur.

Regards,

pgqs_collectNodeStats is not updating ancestors

pgqs_collectNodeStats fails to push the current planstate onto ancestors before processing sub-plans. It needs to do

ancestors = lcons(planstate, ancestors);

before processing initPlans, and remove the planstate from the list at the end. See how ExplainNode in explain.c (around line 1593) does this, depending on the haschildren variable.

Provide context for index advice

Currently index advice from pg_qualstats_index_advisor() only provides the index statement.
e.g.

"CREATE INDEX ON public.pgqs USING btree (id)"

It would be nice to include context along with the index advice.
The context may include (not limited to):

  • query statement which would benefit by the suggested index
  • number of executions for the query statement which went through sampling by pg_qualstats (count / occurrences in pgqsEntry struct)

PostgreSQL 14 support

Hi,

I see that you committed v14 support. This is a placeholder issue for a new release which is v14 compatible.

Thanks!

Devrim

pg_qualstats_index_advisor functionality

I installed the latest revision 66b3037 on my local PG 14.5 machine.
I found that you added the "docs" for the advisor here
#31

Your code example:

SELECT v
  FROM json_array_elements(
    pg_qualstats_index_advisor(min_filter => 50)->'indexes') v
  ORDER BY v::text COLLATE "C";

fails with SQL Error [22023]: ERROR: cannot call json_array_elements on a scalar

but all I can figure out is:

SELECT pg_qualstats_index_advisor();

{
  "indexes": null,
  "unoptimised": [
    {
      "qual": "assemblies.\"orderReferenceNumber\" ~~* ?",
      "queryids": [
        -6084300706674462045,
        -5817621241163067326
      ]
    },
    {
      "qual": "orders.\"orderReferenceNumber\" ~~* ?",
      "queryids": [
        5707918505950049101,
        9082355525220716366
      ]
    }
  ]
}

Taking the first query:
select pg_qualstats_example_query(-6084300706674462045);

select
	*
from
	"assemblies"
where
	("assemblies"."orderInternalId"::text ilike $1
		or "assemblies"."orderReferenceNumber"::text ilike $2)
order by
	"assemblies"."id" desc,
	"orderType" asc
limit 10 offset 0

I have a BTREE index on each of the two fields (orderInternalId, orderReferenceNumber).

If I remove one of the indexes, rerun the query, and check again with

SELECT pg_qualstats_index_advisor();

I get the same result as before, just with one more queryId.

I have a very busy schedule and have only skimmed the code, but it seems that v_ddl holds the part shown in your code example's output, and it is not returned. As soon as I have some time, I will take a closer look at the function.
