
nginx-module-vts's People

Contributors

boyanhh, cohalz, gemfield, jongiddy, mathieu-aubin, patsevanton, robn, spacewander, superq, susuper, timgates42, tobilarscheid, u5surf, vozlt, wandenberg


nginx-module-vts's Issues

Can't use control functionality if the location has more than one segment

I'm using this module with nginx 1.8 and updated to VTS v0.1.6 to benefit from the stats reset functionality.

I was surprised that calling any URL like /my/location/status/control?args would simply return the HTML page without executing the command.

I tried again with a vanilla config, and it seems the module doesn't support control commands if the location containing "vhost_traffic_status_display" has a multi-segment path.

This configuration will work without a problem:

location /status {
     vhost_traffic_status_display;
     vhost_traffic_status_display_format html;
}

while calling http://127.0.0.1/status/control?cmd=reset&group=server&zone=_

If the location is configured with a multi-segment path:


location /a/segment/structure/status {
     vhost_traffic_status_display;
     vhost_traffic_status_display_format html;
}

calling http://127.0.0.1/a/segment/structure/status/control?cmd=reset&group=server&zone=_ will only return the HTML page.

Anything I'm doing wrong?

Got a coredump when the group does not exist

http://127.0.0.1:83/status/control?cmd=status&group=test

Program terminated with signal 11, Segmentation fault.
#0  0x00000000005851cf in ngx_http_vhost_traffic_status_node_control_range_set
    (control=0x7f40c32e61a8)
    at ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:2644
2644            if (control->zone->len == 0) {
Missing separate debuginfos, use: debuginfo-install glibc-2.17-106.el7_2.4.x86_64 libgcc-4.8.5-4.el7.x86_64 lua-5.1.4-14.el7.x86_64 nss-softokn-freebl-3.16.2.3-13.el7_1.x86_64 pcre-8.32-15.el7.x86_64 sssd-client-1.13.0-40.el7_2.2.x86_64 zlib-1.2.7-15.el7.x86_64
(gdb) bt
#0  0x00000000005851cf in ngx_http_vhost_traffic_status_node_control_range_set
    (control=0x7f40c32e61a8)
    at ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:2644
#1  0x0000000000582439 in ngx_http_vhost_traffic_status_display_handler_control
    (r=0x7f40c32e51a0)
    at ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:1516
#2  0x0000000000581f39 in ngx_http_vhost_traffic_status_display_handler (
    r=0x7f40c32e51a0)
    at ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:1386
#3  0x00000000004a06e1 in ngx_http_core_content_phase (r=0x7f40c32e51a0, 
    ph=0x7f40c33041c8) at src/http/ngx_http_core_module.c:1368
#4  0x000000000049f227 in ngx_http_core_run_phases (r=0x7f40c32e51a0)
    at src/http/ngx_http_core_module.c:845
#5  0x000000000049f195 in ngx_http_handler (r=0x7f40c32e51a0)
    at src/http/ngx_http_core_module.c:828
#6  0x00000000004aedfa in ngx_http_process_request (r=0x7f40c32e51a0)
    at src/http/ngx_http_request.c:1914
#7  0x00000000004ad75e in ngx_http_process_request_headers (rev=0x7f40c31b8310)
    at src/http/ngx_http_request.c:1346
#8  0x00000000004acb41 in ngx_http_process_request_line (rev=0x7f40c31b8310)
    at src/http/ngx_http_request.c:1026
#9  0x00000000004ab7af in ngx_http_wait_request_handler (rev=0x7f40c31b8310)
    at src/http/ngx_http_request.c:503
#10 0x000000000048db99 in ngx_epoll_process_events (cycle=0x7f40c32b5000, 
    timer=924, flags=1) at src/event/modules/ngx_epoll_module.c:907
#11 0x000000000047cc89 in ngx_process_events_and_timers (cycle=0x7f40c32b5000)
    at src/event/ngx_event.c:242
#12 0x000000000048b40b in ngx_worker_process_cycle (cycle=0x7f40c32b5000, 
    data=0x1) at src/os/unix/ngx_process_cycle.c:753
#13 0x0000000000487d07 in ngx_spawn_process (cycle=0x7f40c32b5000, 
    proc=0x48b316 <ngx_worker_process_cycle>, data=0x1, 
    name=0x71a783 "worker process", respawn=-3)
    at src/os/unix/ngx_process.c:198
#14 0x000000000048a247 in ngx_start_worker_processes (cycle=0x7f40c32b5000, 
    n=2, type=-3) at src/os/unix/ngx_process_cycle.c:358
#15 0x0000000000489864 in ngx_master_process_cycle (cycle=0x7f40c32b5000)
    at src/os/unix/ngx_process_cycle.c:130
#16 0x000000000044be59 in main (argc=3, argv=0x7ffdf3a760d8)
    at src/core/nginx.c:367
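
For reference, a minimal sketch of the kind of guard that would avoid this crash; the field names follow the backtrace above, but the exact placement inside ngx_http_vhost_traffic_status_node_control_range_set is hypothetical:

    /* control->zone is left NULL when the requested group does not
       match any known group, so test it before the len dereference
       at ngx_http_vhost_traffic_status_module.c:2644 */
    if (control->zone == NULL) {
        return;
    }

    if (control->zone->len == 0) {
        /* existing range handling continues here */
    }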

Feature Request: Cache status

First of all, great work on this module!

One thing that would make this module even more awesome is a cache status section (efficiency, hits/misses) in the output statistics.

Additionally, do you think you could tag releases for easy integration when building packages?

Reset counters after getting status, in one request

First of all, thank you for this great plugin!

It'd be nice to have one more feature, though.
I want to periodically request the status and reset the counters after each request.
Currently, this can only be done with two requests:

/status/control?cmd=status...
/status/control?cmd=reset...

But some events will inevitably fall between the two requests.
One more argument to cmd=status would solve this, for example:

/status/control?cmd=status&reset=true

or

/status/control?cmd=pop # means "pop numbers and clear"

I've tried to do it on my own, but my knowledge of C is very limited.

Upstreams not showing as down

Hello everybody. I just set this up and, while doing some testing, I turned one of my upstream servers off and hammered the load balancer with requests from "ab", but the status page still shows the server as up.

server 1 is 192.xxx.1.14:80 up 17ms
server 2 is 192.xxx.1.15:80 up 3.4s

But server 2 is turned off and there is no fallback configured yet.

My config uses round robin; is that a problem?

Please help.
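
For context, stock open-source nginx only performs passive health checking: a peer is marked down only after max_fails failed attempts within fail_timeout, and VTS reports nginx's own view of the peer. A sketch with those standard upstream parameters made explicit (the addresses are placeholders, not the reporter's real ones):

    upstream backend {
        # a peer is flagged down only after 3 requests actually
        # fail within a 10 second window (passive checking)
        server 192.0.2.14:80 max_fails=3 fail_timeout=10s;
        server 192.0.2.15:80 max_fails=3 fail_timeout=10s;
    }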

If nginx is overloaded, the module stops responding

Hi.

This module is especially useful during a DDoS attack or a traffic spike, but the problem is that if nginx is loaded at 100%, the module stops responding.

Is there any way to give the module the highest priority over other tasks, so it can keep working even when the load is at 100%?

They have changed response timings

They have changed response timings in http://hg.nginx.org/nginx/rev/59fc60585f1e

ngx_http_vhost_traffic_status_module.c(476) : error C2039: 'response_sec' : is not a member of 'ngx_http_upstream_state_t'
src/http\ngx_http_upstream.h(56) : see declaration of 'ngx_http_upstream_state_t'
ngx_http_vhost_traffic_status_module.c(476) : error C2039: 'response_msec' : is not a member of 'ngx_http_upstream_state_t'
src/http\ngx_http_upstream.h(56) : see declaration of 'ngx_http_upstream_state_t'

Maybe:
// (state[i].response_sec * 1000 + state[i].response_msec);
(state[i].response_time * 1000 + state[i].response_time);

?
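
For reference, that changeset merged response_sec/response_msec into a single response_time field that is already measured in milliseconds, so no conversion should be needed on newer versions. A hedged sketch of a version guard (the ms variable is illustrative):

    ngx_msec_t  ms;

    #if (nginx_version >= 1009001)
        /* nginx >= 1.9.1: response_time is already in milliseconds */
        ms = state[i].response_time;
    #else
        ms = (ngx_msec_t) (state[i].response_sec * 1000 + state[i].response_msec);
    #endif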

Incorrect Upstream/Server/State

Hi,

I've configured an upstream with two servers:

upstream BENJS {
    server 127.0.0.1:53000;
    server 127.0.0.1:53001;
}

On 127.0.0.1:53001 the service is stopped, but on VTS's status page the state is "up".

Is this a mistake?

Best Regards,
Zarmack

Broken nginx 1.9.7 support?

@vozlt, did the latest Nov 20th commits break nginx 1.9.7 support (https://community.centminmod.com/posts/20624/)? It compiled fine prior to those commits.

make -f objs/Makefile install
make[1]: Entering directory `/svr-setup/nginx-1.9.7'
ccache /usr/bin/clang -ferror-limit=0 -c -I/usr/local/include/luajit-2.1  -pipe  -O -Wall -Wextra -Wpointer-arith -Wconditional-uninitialized -Wno-unused-parameter -Werror -g -m64 -mtune=native -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations -Wno-unused-parameter -Wno-unused-const-variable -Wno-conditional-uninitialized -Wno-mismatched-tags -Wno-c++11-extensions -Wno-sometimes-uninitialized -Wno-parentheses-equality -Wno-tautological-compare -Wno-self-assign -Wno-deprecated-register -Wno-deprecated -Wno-invalid-source-encoding -Wno-pointer-sign -Wno-parentheses -Wno-enum-conversion  -DNDK_SET_VAR -DNDK_UPSTREAM_LIST -DNDK_SET_VAR  -I src/core -I src/event -I src/event/modules -I src/os/unix -I ../ngx_pagespeed-1.9.32.10-beta/psol/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/chromium/src -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/google-sparsehash/src -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/google-sparsehash/gen/arch/linux/x64/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/protobuf/src -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/re2/src -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/out/Release/obj/gen -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/out/Release/obj/gen/protoc_out/instaweb -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/apr/src/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/aprutil/src/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/apr/gen/arch/linux/x64/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/aprutil/gen/arch/linux/x64/include -I ../ngx_devel_kit-0.2.19/objs -I objs/addon/ndk -I /usr/local/include/luajit-2.1 -I ../lua-nginx-module-0.9.18/src/api -I ../nginx_upstream_check_module-0.3    .0 -I ../pcre-8.37 -I ../libressl-2.2.4/.openssl/include -I objs -I src/http -I src/http/modules -I src/http/v2 -I ../ngx_devel_kit-0.2.19/src -I src/mail -I src/stream \
        -o objs/addon/src/ngx_http_vhost_traffic_status_module.o \
        ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:904:22: error: adding 'unsigned int' to a string does not append to the string [-Werror,-Wstring-plus-int]
    len = ngx_strlen(ngx_vhost_traffic_status_group_to_string(type));
          ~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:341:30: note: expanded from macro 'ngx_vhost_traffic_status_group_to_string'
    : "NO\0UA\0UG\0CC\0FG\0" + 3 * n                                           \
                             ^
src/core/ngx_string.h:61:51: note: expanded from macro 'ngx_strlen'
#define ngx_strlen(s)       strlen((const char *) s)
                                                  ^
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:904:22: note: use array indexing to silence this warning
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:341:30: note: expanded from macro 'ngx_vhost_traffic_status_group_to_string'
    : "NO\0UA\0UG\0CC\0FG\0" + 3 * n                                           \
                             ^
src/core/ngx_string.h:61:51: note: expanded from macro 'ngx_strlen'
#define ngx_strlen(s)       strlen((const char *) s)
                                                  ^
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:915:23: error: adding 'unsigned int' to a string does not append to the string [-Werror,-Wstring-plus-int]
    p = ngx_cpymem(p, ngx_vhost_traffic_status_group_to_string(type), len);
        ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:341:30: note: expanded from macro 'ngx_vhost_traffic_status_group_to_string'
    : "NO\0UA\0UG\0CC\0FG\0" + 3 * n                                           \
                             ^
src/core/ngx_string.h:103:60: note: expanded from macro 'ngx_cpymem'
#define ngx_cpymem(dst, src, n)   (((u_char *) memcpy(dst, src, n)) + (n))
                                                           ^
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:915:23: note: use array indexing to silence this warning
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:341:30: note: expanded from macro 'ngx_vhost_traffic_status_group_to_string'
    : "NO\0UA\0UG\0CC\0FG\0" + 3 * n                                           \
                             ^
src/core/ngx_string.h:103:60: note: expanded from macro 'ngx_cpymem'
#define ngx_cpymem(dst, src, n)   (((u_char *) memcpy(dst, src, n)) + (n))
                                                           ^
2 errors generated.
make[1]: *** [objs/addon/src/ngx_http_vhost_traffic_status_module.o] Error 1
make[1]: Leaving directory `/svr-setup/nginx-1.9.7'
make: *** [install] Error 2

Ah, might it be due to using the clang compiler, which treats "adding 'unsigned int' to a string does not append to the string" [-Werror,-Wstring-plus-int] as an error?
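
A minimal sketch of the array-indexing form clang suggests; group_to_string is a hypothetical standalone helper (the real code is the macro quoted above), but &lit[3 * n] computes the same pointer as lit + 3 * n without tripping -Wstring-plus-int:

    static const char *
    group_to_string(unsigned n)
    {
        return &"NO\0UA\0UG\0CC\0FG\0"[3 * n];
    }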

cacheZones output combining two different caches

We have two different cache zones, search and search_live, and it seems whichever one is accessed first shows up in cacheZones, and both zones aggregate their stats into that single zone (I have confirmed on the server that we have separate caches and that they are being handled appropriately).

Potentially an issue with _ in the name?

Make the pull/push (refresh) time configurable

Can you make the refresh time configurable? 1 to 5 seconds; usually a refresh every 3 seconds is enough.
vhost_traffic_status_display_refresh 3

Maybe also add configuration options for colors? E.g. when a value is higher than xx, change the text color:
vhost_traffic_status_display_trigger_connections_waiting 1000 white on red
vhost_traffic_status_display_trigger_responses_5xx 500 white on red
(click on white text on red background to reset counter and color)
vhost_traffic_status_display_upstream_down white on red

Wildcard domain support

I'm using this module with nginx 1.8, and I plan on having hundreds of different virtualhosts proxied through this nginx installation.

My current setup allows me to use wildcard nginx virtualhosts (i.e. I'm specifying "server_name *.domain.com;"), but unfortunately this doesn't seem to work with this nginx module.

As you might've guessed, I'm seeing "*.domain.com" as the zone name in the module's output.

Is there a way to overwrite this, so I could, say, use "$host" instead of server_name as the zone name?
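
One possible direction, assuming the vhost_traffic_status_filter_by_host directive (mentioned in another issue below) behaves as the README describes, using $host as the group key when server_name is a wildcard:

    http {
        vhost_traffic_status_zone;
        # sketch: group stats by $host instead of the literal
        # wildcard server_name
        vhost_traffic_status_filter_by_host on;
    }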

worker process XXXXX exited on signal 11

Hi.

I have a problem with the current master version.

2015/05/30 04:12:10 [alert] 10757#0: worker process 12092 exited on signal 11
2015/05/30 04:12:11 [alert] 10757#0: worker process 12093 exited on signal 11
2015/05/30 04:12:12 [alert] 10757#0: worker process 12106 exited on signal 11
2015/05/30 04:16:01 [alert] 24442#0: worker process 24443 exited on signal 11
2015/05/30 04:16:02 [alert] 24442#0: worker process 24463 exited on signal 11
2015/05/30 04:16:12 [alert] 24442#0: worker process 24527 exited on signal 11
2015/05/30 04:16:17 [alert] 24442#0: worker process 24531 exited on signal 11
2015/05/30 04:16:17 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24531
2015/05/30 04:16:17 [alert] 24442#0: worker process 24534 exited on signal 11
2015/05/30 04:16:17 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24534
2015/05/30 04:16:18 [alert] 24442#0: worker process 24535 exited on signal 11
2015/05/30 04:16:20 [alert] 24442#0: worker process 24536 exited on signal 11
2015/05/30 04:16:20 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24536
2015/05/30 04:16:27 [alert] 24442#0: worker process 24539 exited on signal 11
2015/05/30 04:16:27 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24539
2015/05/30 04:16:28 [alert] 24442#0: worker process 24548 exited on signal 11
2015/05/30 04:16:31 [alert] 24442#0: worker process 24549 exited on signal 11
2015/05/30 04:16:31 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24549
2015/05/30 04:16:32 [alert] 24442#0: worker process 24551 exited on signal 11
2015/05/30 04:16:32 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24551
2015/05/30 04:16:35 [alert] 24442#0: worker process 24552 exited on signal 11
2015/05/30 04:16:35 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24552
2015/05/30 04:16:36 [alert] 24442#0: worker process 24554 exited on signal 11
2015/05/30 04:16:36 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24554
2015/05/30 04:16:37 [alert] 24442#0: worker process 24555 exited on signal 11
2015/05/30 04:16:37 [alert] 24442#0: worker process 24556 exited on signal 11
2015/05/30 04:16:38 [alert] 24442#0: worker process 24557 exited on signal 11
2015/05/30 04:16:38 [alert] 24442#0: worker process 24558 exited on signal 11
2015/05/30 04:16:40 [alert] 24442#0: worker process 24559 exited on signal 11
2015/05/30 04:16:40 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24559
2015/05/30 04:16:47 [alert] 24442#0: worker process 24562 exited on signal 11
2015/05/30 04:16:47 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24562

Nginx workers get killed, which causes a percentage of requests to fail. I've reverted to this tree and it works fine: https://github.com/vozlt/nginx-module-vts/tree/5548b2df4a3f698474c50b27a490b108c429850f
One of the subsequent commits introduced the issue.

I am using the latest Tengine master, https://github.com/alibaba/tengine (nginx 1.6.2 compatible).

Sincerely,

Alex.

Cache status causes cache lock

With the new cache status overview (which is awesome), the cache locks are not released, and upon restart the following errors occur:

2015/08/25 07:29:15 [alert] 5661#5661: ignore long locked inactive cache entry 6c7b231443395c911bbfed66866da923, count:1

This appears to be similar to an issue I found here: FRiCKLE/ngx_slowfs_cache@5800076. Because of a lock on the increment count, the files are not released/flushed.

1.9 issues and new stuff

With 1.9 I get a negative uptime value:

Version: 1.9.0, Uptime: -4293105030ms
Connections: active 1, reading 0, writing 1, waiting 0, accepted 57, handled 57
Requests: total 212, req/s 0

Any ideas? Also, can you have a look at the new 'stream' feature in 1.9?

Failure when exiting worker process

If I enable nginx-module-vts via the nginx config, I notice sudden spikes in nginx's active and waiting connection counts. They grow constantly, and the number of active/waiting connections is not correct; the output of netstat shows that nginx does not count them correctly.

The cause of this behaviour could be the following from the nginx error log:
2016/08/18 20:08:13 [notice] 29511#0: signal 17 (SIGCHLD) received
2016/08/18 20:08:13 [alert] 29511#0: worker process 1341 exited on signal 11
2016/08/18 20:08:13 [notice] 29511#0: start worker process 29713
2016/08/18 20:08:13 [notice] 29511#0: signal 29 (SIGIO) received

Usually it should say "worker process 123456 exited with code 0", but with nginx-module-vts enabled the worker process does not shut down cleanly.

nginx version: 1.11.3

In the http section I have this config:
vhost_traffic_status on;
vhost_traffic_status_zone;
vhost_traffic_status_limit off;

Later I have some filters defined in specific locations.

split by virtual hosts

First of all, thank you very much for the amazing plugin.

My question is: how do I split the traffic detail per virtual host on one main host? Please see the example:

http {
    vhost_traffic_status_zone;
    include       mime.types;
    default_type  application/octet-stream;
    server {
        listen 80;
        server_name main.com;

        location / {
            root   /home/admin/html;
            index  index.html index.htm;
        }
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }

    server {
        listen 80;
        server_name user1.com;

        location /1 {
            root   /home/user1/html/1;
            index  index.html index.htm;
        }
        location /status {  
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }

    server {
        listen 80;
        server_name user2.com;

        location /2 {
            root   /home/user2/html/2;
            index  index.html index.htm;
        }
        location /status {  
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}

The result is in the attached screenshot ("vts").

How do I split the virtual host traffic and show each host on the /status page?

nginx version 1.11.3
compiled from source
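
A hedged sketch of one way to get a per-host breakdown, assuming the vhost_traffic_status_filter_by_set_key directive documented in the module README can key stats by an arbitrary variable:

    http {
        vhost_traffic_status_zone;
        # tag every request with its host so each virtual host
        # shows up as its own filter group on the /status page
        vhost_traffic_status_filter_by_set_key $host;
    }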

Connection counts shown incorrectly

I use openresty/1.9.7.2 with nginx-module-vts. When I used cosbench (mode=s3) to test HTTP requests with 6000 workers, and the active/reading/writing/waiting count went above 12000, the numbers were not correct after the test finished: requests were in fact 0, but it still showed a big number like 600x. Please check and fix it! Thanks!

nginx -s reload FAILED

Hi,

Now we are using your nginx VTS module and I find it very powerful.

But there is a problem that I can't explain, and I can't find anything helpful about this error on GitHub: when I run nginx -s reload three or four times, I get the error nginx: [alert] kill (25315, 1) failed (3: No such process) and the nginx master process exits.

We use nginx 1.8.1 and CentOS 6.6, but I have tested the versions marked as compatible in the Readme.md, and all of them have this problem except 1.9.9.

Thanks for the excellent module.

Clustered Stats

Is there a way, or are there plans, to support aggregating the results from multiple nginx instances? I.e., one instance collects results via JSON from the other nginx instances, in addition to its own, and provides a fully aggregated results page.

On a side note, this tool is pretty awesome.

The value of total connections is wrong

Hi, I'm back

The new problem is that the value of total connections in the Server main section of the status page is too big. It's wrong: it should be less than 15k, as there aren't that many connections in this system. After I restart nginx it goes back to normal (about 10,000 connections), but after about a week it goes wrong again.

BTW, when it's in the abnormal state, the data from nginx's original stub_status module is also abnormal, and of course the two values are equal.

Screenshots were attached showing the status page, the nginx-module-vts output, and the stub_status_module output.

VTS segfault when balancer_by_lua breaks

Maybe a bit of a weird corner case but could be worth handling:

When playing around with balancer_by_lua_block (in nginx/OpenResty) with "vhost_traffic_status_zone" set, I noticed that workers would segfault when there are errors in the Lua code. While this is not something that should usually happen, it highlights the fact that there are cases where ngx_http_upstream_state_t.peer is NULL, which is not handled in VTS (looking at other parts of the nginx source, it seems they always NULL-check that variable before use).

This change adds handling for that:
https://github.com/moshisushi/nginx-module-vts/commit/592a13d00cefefa0c4028243703048330cbe3c56
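
A minimal sketch of that kind of guard; the placement inside the module's shm_add_upstream path is approximate:

    /* peer can be NULL when balancer_by_lua fails before a peer is
       chosen, so NULL-check it as the rest of the nginx source does */
    if (r->upstream->state == NULL || r->upstream->state->peer == NULL) {
        return NGX_ERROR;
    }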

Error log without fix:
2016/07/13 10:56:12 [error] 11293#11293: *1 failed to run balancer_by_lua*: balancer_by_lua:8: attempt to call global 'foo' (a nil value) stack traceback: balancer_by_lua:8: in function <balancer_by_lua:1> while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost" 2016/07/13 10:56:12 [alert] 11291#11291: worker process 11293 exited on signal 11

Error log with fix:
2016/07/13 10:53:55 [error] 11239#11239: *1 failed to run balancer_by_lua*: balancer_by_lua:8: attempt to call global 'foo' (a nil value) stack traceback: balancer_by_lua:8: in function <balancer_by_lua:1> while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost" 2016/07/13 10:53:55 [error] 11239#11239: *1 handler::shm_add_upstream() failed while logging request, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost"

Can't disable server zones by host

Hi,

How can I disable server zone statistics by server_name/$host?

vhost_traffic_status_filter off;
vhost_traffic_status_filter_by_host off;

These directives don't make any changes to the status page.

Uptime question

When I access the HTML version of the status page I get the current nginx uptime. The README says it's computed as nowMsec - loadMsec, but when I try to compute it from the JSON, the numbers differ.
How do I get the correct uptime from the JSON?

Response time on server or filter zones?

It would be incredibly useful if I could monitor the response time of server zones.

I'm happy to open a pull request, if you can tell me where I should start looking.

Add average response time per interval

It would be really nice to have a sliding window with the average (max and min too?) response time to users.
A lot of people do this today through access logs or other modules like nginx-statsd, which seems very expensive to me.

shm_add_upstream

Hi,
there are some issues with nginx 1.7.10:

  1. 2015/04/08 10:26:17 [error] 1073#0: *118 shm_add_upstream failed while logging request, client: 127.0.0.1, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:8000/favicon.ico", host: "example.com"
  2. In 'Server zones', something that looks like part of the nginx process's memory appears in the 'Zone' field.
  3. This happens if server_name is empty or there are multiple names in it.

possible traffic limit?

I'm considering using this module in a hosting environment as a replacement for mod_throttle or mod_cband from the Apache days.
Bandwidth can be controlled with other modules, but so far there seems to be no module that limits based on transfer volume.
I'm wondering whether you have any plans to develop a transfer limit feature.

Thank you for developing such a good module.
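
For reference, a hedged sketch of what transfer-based limiting can look like with the module's limit directives (vhost_traffic_status_limit appears in another issue above; the member:size syntax follows the README and should be verified against your version):

    server {
        server_name example.com;
        # reject requests once this vhost has sent more than 1 GB;
        # counters can be cleared via /status/control?cmd=reset...
        vhost_traffic_status_limit_traffic out:1G;
    }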

Building error

Mac OS X El Capitan
Nginx 1.9.15

/Users/User/Downloads/nginx-module-vts-master/src/ngx_http_vhost_traffic_status_module.c:1277:24: error: comparison of address of 'limits[i].variable' equal to a null pointer is always false [-Werror,-Wtautological-pointer-compare]
        if (&limits[i].variable == NULL) {
             ~~~~~~~~~~^~~~~~~~    ~~~~
/Users/User/Downloads/nginx-module-vts-master/src/ngx_http_vhost_traffic_status_module.c:2151:25: error: comparison of address of 'filters[i].filter_key' equal to a null pointer is always false [-Werror,-Wtautological-pointer-compare]
        if (&filters[i].filter_key == NULL || &filters[i].filter_name == NULL) {
             ~~~~~~~~~~~^~~~~~~~~~    ~~~~
/Users/User/Downloads/nginx-module-vts-master/src/ngx_http_vhost_traffic_status_module.c:2151:59: error: comparison of address of 'filters[i].filter_name' equal to a null pointer is always false [-Werror,-Wtautological-pointer-compare]
        if (&filters[i].filter_key == NULL || &filters[i].filter_name == NULL) {
                                               ~~~~~~~~~~~^~~~~~~~~~~    ~~~~
3 errors generated.
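
A hedged sketch of the likely intended test: the address of an array element is never NULL, so presumably the unset ngx_str_t members themselves should be checked:

    /* check the ngx_str_t contents, not the address of the member */
    if (filters[i].filter_key.data == NULL
        || filters[i].filter_name.data == NULL)
    {
        continue;
    }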

add prometheus display format

Prometheus (prometheus.io) is a very powerful open-source monitoring solution. Prometheus collects data by scraping metrics from applications.

Using Prometheus with nginx-module-vts would be a good way to monitor and aggregate metrics from more than one nginx server.

One way to do this would be for nginx-module-vts to support Prometheus as a display format.

Wrong cacheZones maxSize calculation

Hello,
I've seen some inconsistent data in cacheZones:

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_r620-root 70G 1.7G 69G 3% /
devtmpfs 48G 0 48G 0% /dev
tmpfs 48G 0 48G 0% /dev/shm
tmpfs 48G 137M 47G 1% /run
tmpfs 48G 0 48G 0% /sys/fs/cgroup
tmpfs 48G 155M 47G 1% /cache

"my_zone" Cache directory is /cache.

"cacheZones":{"my_zone":{"maxSize":9223372036854771712,"usedSize":161124352,"inBytes":2191759,"outBytes":1966063488,"responses":

maxSize is too good to be true :). The issue also appears on the web page. Maybe tmpfs is the problem?

I'm using master.

BRS
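
One plausible explanation: 9223372036854771712 is essentially the maximum signed 64-bit off_t value, which nginx uses for a cache zone whose max_size was never set, so the number may come from the proxy_cache_path definition rather than from tmpfs. A sketch with the ceiling made explicit (sizes and paths are placeholders):

    # give the zone an explicit ceiling so maxSize reflects reality
    proxy_cache_path /cache levels=1:2 keys_zone=my_zone:10m
                     max_size=40g inactive=60m;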

VTS segfault on nginx reload with SIGHUP

I was careful to leave enough time between reloads for any old workers to exit; reloading too quickly only produces a failure-to-bind error from nginx, not a segfault. I think maybe the init conf method isn't hardened for repeated re-cycling?

Repro steps:
service nginx start
gdb -p

signal SIGHUP
ctrl-c
signal SIGHUP
ctrl-c
signal SIGHUP
segfault
backtrace full:
#0  __memcpy_sse2_unaligned () at ../sysdeps/x86_64/multiarch/memcpy-sse2-unaligned.S:33

No locals.
#1  0x00000000004ad46b in memcpy (__len=, __src=, __dest=0x7f03996f2029) at /usr/include/x86_64-linux-gnu/bits/string3.h:51

No locals.
#2  ngx_http_vhost_traffic_status_filter_unique (pool=0x1100190, keys=keys@entry=0x1103780) at ./modules/nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:3184

        hash =
        p = 0x7f03996f2029 ""
        key = {len = 4976979, data = 0x7f03996f2010 "$server_addr:$server_port"}
        i = 0
        n = 1
        uniqs = 0x1196488
        filter_keys = 0x0
        filter =
        filters = 0x1157648
        filter_uniqs =
#3  0x00000000004ad975 in ngx_http_vhost_traffic_status_init_main_conf (cf=0x7ffe12032d40, conf=0x1103778) at ./modules/nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:4779

        ctx = 0x1103778
        rc =
        vtscf = 0x11037c8
#4  0x0000000000432e34 in ngx_http_block (cf=0x7ffe12032d40, cmd=0xa, conf=0x4bf13a) at src/http/ngx_http.c:269

        rv = 0xffff80fc6644eea7 <error: Cannot access memory at address 0xffff80fc6644eea7>
        ctx = 0x117b9e0
        s = 7487744
        clcf = 0x5
#5  0x0000000000419f33 in ngx_conf_handler (last=1, cf=0x7ffe12032d40) at src/core/ngx_conf_file.c:427

        rv =
        conf =
        i = 9
        confp =
        found = 1
        name = 0x1101220
        cmd = 0x70aae0 <ngx_http_commands>
#6  ngx_conf_parse (cf=cf@entry=0x7ffe12032d40, filename=filename@entry=0x1100390) at src/core/ngx_conf_file.c:283

        rv =
        p =
        size =
        fd = 4
        rc =
        buf = {
          pos = 0x10fafec "\n", ' ' <repeats 17 times>, "tcl tk;\n    application/x-x509-ca-cert", ' ' <repeats 12 times>, "der pem crt;\n    application/x-xpinstall", ' ' <repeats 15 times>, "xpi;\n    application/xhtml+xml", ' ' <repeats 17 times>, "xhtml;\n    application/xspf+xm"...,
          last = 0x10fafed ' ' <repeats 17 times>, "tcl tk;\n    application/x-x509-ca-cert", ' ' <repeats 12 times>, "der pem crt;\n    application/x-xpinstall", ' ' <repeats 15 times>, "xpi;\n    application/xhtml+xml", ' ' <repeats 17 times>, "xhtml;\n    application/xspf+xml"..., file_pos = 8602557481092212992, file_last = 3544386174626525807,
          start = 0x10fa6c0 "user www-data;\nworker_rlimit_core 500m;\nworking_directory /tmp/nginxcores/;\nworker_rlimit_nofile 1000000;\nworker_processes 16;\nworker_cpu_affinity\n", ' ' <repeats 20 times>, '0' <repeats 15 times>, "100000000\n        "...,
          end = 0x10fb6c0 "\020\020", tag = 0xffff8001edfcd401, file = 0x401, shadow = 0x100, temporary = 1, memory = 0, mmap = 0, recycled = 0, in_file = 1, flush = 1, sync = 0, last_buf = 0, last_in_chain = 1, last_shadow = 0, temp_file = 0, num = 0}
        tbuf =
        prev = 0x0
        conf_file = {file = {fd = 4, name = {len = 21, data = 0x1100406 "/etc/nginx/nginx.conf"}, info = {st_dev = 2049, st_ino = 133886, st_nlink = 1, st_mode = 33188, st_uid = 0, st_gid = 0, __pad0 = 0, st_rdev = 0, st_size = 2349, st_blksize = 4096, st_blocks = 8,
              st_atim = {tv_sec = 1468545661, tv_nsec = 100448752}, st_mtim = {tv_sec = 1468545433, tv_nsec = 231630832}, st_ctim = {tv_sec = 1468545433, tv_nsec = 231630832}, __glibc_reserved = {0, 0, 0}}, offset = 2349, sys_offset = 17826272, log = 0x11904b8,
            thread_handler = 0x7ffe12032c98, thread_ctx = 0x7ffe12032c98, aio = 0x4000, valid_info = 0, directio = 1}, buffer = 0x7ffe12032b50, dump = 0x0, line = 87}
        cd =
        type = parse_file
#7  0x00000000004177a1 in ngx_init_cycle (old_cycle=old_cycle@entry=0x11904a0) at src/core/ngx_cycle.c:268

        rv =
        senv = 0x7ffe12033358
        env =
        i =
        n =
        log = 0x11904b8
        conf = {name = 0x0, args = 0x1100fe8, cycle = 0x11001e0, pool = 0x1100190, temp_pool = 0x11325b0, conf_file = 0x7ffe12032ba0, log = 0x11904b8, ctx = 0x1101550, module_type = 1347703880, cmd_type = 33554432, handler = 0x0, handler_conf = 0x0}
        pool = 0x1100190
        cycle = 0x11001e0
        old =
        shm_zone =
        oshm_zone =
        part =
        opart =
        file =
        ls =
        nls =
        ccf =
        old_ccf =
        module =
hostname = "omitted.com"
#8 0x0000000000428a22 in ngx_master_process_cycle (cycle=0x11904a0, cycle@entry=0x10f4340) at src/os/unix/ngx_process_cycle.c:234

    title = <optimized out>
    p = <optimized out>
    size = <optimized out>
    i = <optimized out>
    n = <optimized out>
    sigio = 0
    set = {__val = {0 <repeats 16 times>}}
    itv = {it_interval = {tv_sec = 0, tv_usec = 0}, it_value = {tv_sec = 0, tv_usec = 0}}
    live = <optimized out>
    delay = 0
    ls = <optimized out>
    ccf = 0x11910b0

#9 0x0000000000407b9f in main (argc=, argv=) at src/core/nginx.c:359

    b = <optimized out>
    log = 0x7251c0 <ngx_log>
    i = <optimized out>
    cycle = 0x10f4340
    init_cycle = {conf_ctx = 0x0, pool = 0x10f3d90, log = 0x7251c0 <ngx_log>, new_log = {log_level = 0, file = 0x0, connection = 0, disk_full_time = 0, handler = 0x0, data = 0x0, writer = 0x0, wdata = 0x0, action = 0x0, next = 0x0}, log_use_stderr = 0, files = 0x0, 
      free_connections = 0x0, free_connection_n = 0, reusable_connections_queue = {prev = 0x0, next = 0x0}, listening = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, paths = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, config_dump = {
        elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, open_files = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, shared_memory = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, 
        pool = 0x0}, connection_n = 0, files_n = 0, connections = 0x0, read_events = 0x0, write_events = 0x0, old_cycle = 0x0, conf_file = {len = 21, data = 0x4b6749 "/etc/nginx/nginx.conf"}, conf_param = {len = 0, data = 0x0}, conf_prefix = {len = 11, 
        data = 0x4b6749 "/etc/nginx/nginx.conf"}, prefix = {len = 11, data = 0x4b673d "/etc/nginx/"}, lock_file = {len = 0, data = 0x0}, hostname = {len = 0, data = 0x0}}
    cd = <optimized out>
    ccf = <optimized out>
