vozlt / nginx-module-vts
Nginx virtual host traffic status module
License: BSD 2-Clause "Simplified" License
Hi, I noticed something very strange. I set up proxying for my NGINX statistics page and made a typo: instead of "status" I wrote "status2", and the module served me a 404 error page from the website beonpush.com.
I attach a screenshot: http://screenshot.ru/1796de63e158f7cf66458bf4fd79d0ae
Is that address hidden somewhere in your code?
I'm using this module with nginx 1.8 and updated to VTS v0.1.6 to benefit from the stats-reset functionality.
I was surprised that calling any URL of the form /my/location/status/control?args would simply return the HTML page without executing the command.
I tried again with a vanilla config, and it seems the module doesn't support control commands if the location containing "vhost_traffic_status_display" has a path composed of multiple segments.
This configuration will work without a problem:
location /status {
    vhost_traffic_status_display;
    vhost_traffic_status_display_format html;
}
while calling http://127.0.0.1/status/control?cmd=reset&group=server&zone=_
But if I configure the location with a multi-segment path:
location /a/segment/structure/status {
    vhost_traffic_status_display;
    vhost_traffic_status_display_format html;
}
calling http://127.0.0.1/a/segment/structure/status/control?cmd=reset&group=server&zone=_
will return the HTML page instead.
Anything I'm doing wrong?
Hi.
Apart from the question of correctly counting traffic statistics with proxy_cache_valid, the problem is shown here: http://forum.nginx.org/read.php?21,257971
http://127.0.0.1:83/status/control?cmd=status&group=test
Program terminated with signal 11, Segmentation fault.
#0 0x00000000005851cf in ngx_http_vhost_traffic_status_node_control_range_set
(control=0x7f40c32e61a8)
at ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:2644
2644 if (control->zone->len == 0) {
Missing separate debuginfos, use: debuginfo-install glibc-2.17-106.el7_2.4.x86_64 libgcc-4.8.5-4.el7.x86_64 lua-5.1.4-14.el7.x86_64 nss-softokn-freebl-3.16.2.3-13.el7_1.x86_64 pcre-8.32-15.el7.x86_64 sssd-client-1.13.0-40.el7_2.2.x86_64 zlib-1.2.7-15.el7.x86_64
(gdb) bt
#0 0x00000000005851cf in ngx_http_vhost_traffic_status_node_control_range_set
(control=0x7f40c32e61a8)
at ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:2644
#1 0x0000000000582439 in ngx_http_vhost_traffic_status_display_handler_control
(r=0x7f40c32e51a0)
at ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:1516
#2 0x0000000000581f39 in ngx_http_vhost_traffic_status_display_handler (
r=0x7f40c32e51a0)
at ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:1386
#3 0x00000000004a06e1 in ngx_http_core_content_phase (r=0x7f40c32e51a0,
ph=0x7f40c33041c8) at src/http/ngx_http_core_module.c:1368
#4 0x000000000049f227 in ngx_http_core_run_phases (r=0x7f40c32e51a0)
at src/http/ngx_http_core_module.c:845
#5 0x000000000049f195 in ngx_http_handler (r=0x7f40c32e51a0)
at src/http/ngx_http_core_module.c:828
#6 0x00000000004aedfa in ngx_http_process_request (r=0x7f40c32e51a0)
at src/http/ngx_http_request.c:1914
#7 0x00000000004ad75e in ngx_http_process_request_headers (rev=0x7f40c31b8310)
at src/http/ngx_http_request.c:1346
#8 0x00000000004acb41 in ngx_http_process_request_line (rev=0x7f40c31b8310)
at src/http/ngx_http_request.c:1026
#9 0x00000000004ab7af in ngx_http_wait_request_handler (rev=0x7f40c31b8310)
at src/http/ngx_http_request.c:503
#10 0x000000000048db99 in ngx_epoll_process_events (cycle=0x7f40c32b5000,
timer=924, flags=1) at src/event/modules/ngx_epoll_module.c:907
#11 0x000000000047cc89 in ngx_process_events_and_timers (cycle=0x7f40c32b5000)
at src/event/ngx_event.c:242
#12 0x000000000048b40b in ngx_worker_process_cycle (cycle=0x7f40c32b5000,
data=0x1) at src/os/unix/ngx_process_cycle.c:753
#13 0x0000000000487d07 in ngx_spawn_process (cycle=0x7f40c32b5000,
proc=0x48b316 <ngx_worker_process_cycle>, data=0x1,
name=0x71a783 "worker process", respawn=-3)
at src/os/unix/ngx_process.c:198
#14 0x000000000048a247 in ngx_start_worker_processes (cycle=0x7f40c32b5000,
n=2, type=-3) at src/os/unix/ngx_process_cycle.c:358
#15 0x0000000000489864 in ngx_master_process_cycle (cycle=0x7f40c32b5000)
at src/os/unix/ngx_process_cycle.c:130
#16 0x000000000044be59 in main (argc=3, argv=0x7ffdf3a760d8)
at src/core/nginx.c:367
First of all great work on this module!
One thing that would make this module even more awesome is a cache status section (efficiency, hits/misses) in the output statistics.
Additionally, do you think you could tag releases for easier integration when building packages?
First of all, thank you for this great plugin!
It'd be nice to have one more feature, though.
I want to periodically request status and reset counters after every one.
Currently, it can be done only in two requests:
/status/control?cmd=status...
/status/control?cmd=reset...
But some events will inevitably be counted between the two requests.
One more argument to cmd=status would solve this, for example:
/status/control?cmd=status&reset=true
or
/status/control?cmd=pop # means "pop numbers and clear"
I've tried to do it on my own, but my knowledge of C is very limited.
Hello everybody. I just set this up, and while testing I turned one of my upstream servers off and spammed the load balancer with requests from "ab", but the status page still shows the server as up.
server 1 is 192.xxx.1.14:80 up 17ms
server 2 is 192.xxx.1.15:80 up 3.4s
But server 2 is turned off, and there is no fallback configured yet.
My config uses round robin; is that a problem?
Please help.
Hi.
This module is especially useful during a DDoS attack or a traffic surge, but the problem is that if nginx is loaded at 100%, the module stops responding.
Is there any way to give the module the highest priority over other tasks, so it can keep working even at 100% load?
To calculate traffic for individual ip using
They have changed response timings in http://hg.nginx.org/nginx/rev/59fc60585f1e
ngx_http_vhost_traffic_status_module.c(476) : error C2039: 'response_sec' : is not a member of 'ngx_http_upstream_state_t'
src/http\ngx_http_upstream.h(56) : see declaration of 'ngx_http_upstream_state_t'
ngx_http_vhost_traffic_status_module.c(476) : error C2039: 'response_msec' : is not a member of 'ngx_http_upstream_state_t'
src/http\ngx_http_upstream.h(56) : see declaration of 'ngx_http_upstream_state_t'
Maybe:
// (state[i].response_sec * 1000 + state[i].response_msec);
(state[i].response_time * 1000 + state[i].response_time);
?
Hello. I want to fetch statistics for all domains once per minute and then clear them. How can I reset the statistics without restarting nginx?
Hello,
I'm getting odd behavior when passing the size parameter as a variable: the variable does not render. Is this normal?
set $member request;
set $traffic 100;
vhost_traffic_status_limit_traffic_by_set_key FG@storage::$server_name@$subdomain $member:$traffic 402;
Hi,
I've configured an upstream with two servers:
upstream BENJS {
    server 127.0.0.1:53000;
    server 127.0.0.1:53001;
}
On 127.0.0.1:53001 the service is stopped, but on VTS's status page the state is still "up".
Is this a mistake?
Best Regards,
Zarmack
Hello. How can I get the list and status of upstreams that were created by https://github.com/cubicdaiya/ngx_dynamic_upstream ?
P.S. This data is stored in the shared memory zone (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#zone).
nginx 1.9.3
Thanks.
@vozlt did the latest Nov 20th commits break nginx 1.9.7 support (https://community.centminmod.com/posts/20624/)? It compiled fine prior to those commits.
make -f objs/Makefile install
make[1]: Entering directory `/svr-setup/nginx-1.9.7'
ccache /usr/bin/clang -ferror-limit=0 -c -I/usr/local/include/luajit-2.1 -pipe -O -Wall -Wextra -Wpointer-arith -Wconditional-uninitialized -Wno-unused-parameter -Werror -g -m64 -mtune=native -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations -Wno-unused-parameter -Wno-unused-const-variable -Wno-conditional-uninitialized -Wno-mismatched-tags -Wno-c++11-extensions -Wno-sometimes-uninitialized -Wno-parentheses-equality -Wno-tautological-compare -Wno-self-assign -Wno-deprecated-register -Wno-deprecated -Wno-invalid-source-encoding -Wno-pointer-sign -Wno-parentheses -Wno-enum-conversion -DNDK_SET_VAR -DNDK_UPSTREAM_LIST -DNDK_SET_VAR -I src/core -I src/event -I src/event/modules -I src/os/unix -I ../ngx_pagespeed-1.9.32.10-beta/psol/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/chromium/src -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/google-sparsehash/src -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/google-sparsehash/gen/arch/linux/x64/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/protobuf/src -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/re2/src -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/out/Release/obj/gen -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/out/Release/obj/gen/protoc_out/instaweb -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/apr/src/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/aprutil/src/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/apr/gen/arch/linux/x64/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/aprutil/gen/arch/linux/x64/include -I ../ngx_devel_kit-0.2.19/objs -I objs/addon/ndk -I /usr/local/include/luajit-2.1 -I ../lua-nginx-module-0.9.18/src/api -I ../nginx_upstream_check_module-0.3 .0 -I ../pcre-8.37 -I ../libressl-2.2.4/.openssl/include -I objs -I src/http -I src/http/modules -I 
src/http/v2 -I ../ngx_devel_kit-0.2.19/src -I src/mail -I src/stream \
-o objs/addon/src/ngx_http_vhost_traffic_status_module.o \
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:904:22: error: adding 'unsigned int' to a string does not append to the string [-Werror,-Wstring-plus-int]
len = ngx_strlen(ngx_vhost_traffic_status_group_to_string(type));
~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:341:30: note: expanded from macro 'ngx_vhost_traffic_status_group_to_string'
: "NO\0UA\0UG\0CC\0FG\0" + 3 * n \
^
src/core/ngx_string.h:61:51: note: expanded from macro 'ngx_strlen'
#define ngx_strlen(s) strlen((const char *) s)
^
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:904:22: note: use array indexing to silence this warning
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:341:30: note: expanded from macro 'ngx_vhost_traffic_status_group_to_string'
: "NO\0UA\0UG\0CC\0FG\0" + 3 * n \
^
src/core/ngx_string.h:61:51: note: expanded from macro 'ngx_strlen'
#define ngx_strlen(s) strlen((const char *) s)
^
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:915:23: error: adding 'unsigned int' to a string does not append to the string [-Werror,-Wstring-plus-int]
p = ngx_cpymem(p, ngx_vhost_traffic_status_group_to_string(type), len);
~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:341:30: note: expanded from macro 'ngx_vhost_traffic_status_group_to_string'
: "NO\0UA\0UG\0CC\0FG\0" + 3 * n \
^
src/core/ngx_string.h:103:60: note: expanded from macro 'ngx_cpymem'
#define ngx_cpymem(dst, src, n) (((u_char *) memcpy(dst, src, n)) + (n))
^
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:915:23: note: use array indexing to silence this warning
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:341:30: note: expanded from macro 'ngx_vhost_traffic_status_group_to_string'
: "NO\0UA\0UG\0CC\0FG\0" + 3 * n \
^
src/core/ngx_string.h:103:60: note: expanded from macro 'ngx_cpymem'
#define ngx_cpymem(dst, src, n) (((u_char *) memcpy(dst, src, n)) + (n))
^
2 errors generated.
make[1]: *** [objs/addon/src/ngx_http_vhost_traffic_status_module.o] Error 1
make[1]: Leaving directory `/svr-setup/nginx-1.9.7'
make: *** [install] Error 2
Ah, it might be due to using the clang compiler and its "adding 'unsigned int' to a string does not append to the string [-Werror,-Wstring-plus-int]" diagnostic?
We have two different cache zones, search and search_live, and it seems that whichever one is accessed first shows up in cacheZones; both zones then aggregate their stats into that single zone (I have confirmed on the server that we have separate caches and that they are handled appropriately).
Could this be an issue with the underscore in the name?
I have noticed that if the nginx service is restarted, the data is lost. Is there permanent storage for the statistical data, with time-period filters, e.g. Aug 01 until Aug 31?
Can you make the refresh time configurable? 1 - 5 seconds, mostly a refresh every 3 seconds is enough.
vhost_traffic_status_display_refresh 3
Maybe also add configuration options for colors?
e.g. when a value is higher than some threshold, change the text colors:
vhost_traffic_status_display_trigger_connections_waiting 1000 white on red
vhost_traffic_status_display_trigger_responses_5xx 500 white on red
(click on white text on red background to reset counter and color)
vhost_traffic_status_display_upstream_down white on red
Hi
How can I show the IP or hostname of the server on the status page? I have many nginx servers and need to tell them apart.
Thanks,
When an upstream block uses "zone xxx size", the VTS status page does not output the upstreams;
it only outputs ::nogroups.
I'm using this module with nginx 1.8, and I plan on having hundreds of different virtualhosts proxied through this nginx installation.
My current setup allows me to use wildcard nginx virtualhosts (ie. I'm specifying "server_name *.domain.com;"), but unfortunately this doesn't seem to work with this nginx module.
As you might've guessed, I'm seeing "*.domain.com" as the zone name in the module's output.
Is there a way to overwrite this, so I could, say, use "$host" instead of server_name as the zone name?
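For reference, the module's README documents filter directives that key stats by a variable rather than by server_name; a sketch along these lines might give $host-based zones (directive names are taken from the README, so verify them against your installed version):

```
http {
    vhost_traffic_status_zone;

    server {
        listen 80;
        server_name *.domain.com;

        # Key per-request stats by the actual requested host rather
        # than the wildcard server_name. Both directives come from
        # the module README; behavior may differ between versions.
        vhost_traffic_status_filter_by_host on;
        # or, an explicit variable-based key:
        # vhost_traffic_status_filter_by_set_key $host;
    }
}
```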
Hi.
I have a problem with the current master version.
2015/05/30 04:12:10 [alert] 10757#0: worker process 12092 exited on signal 11
2015/05/30 04:12:11 [alert] 10757#0: worker process 12093 exited on signal 11
2015/05/30 04:12:12 [alert] 10757#0: worker process 12106 exited on signal 11
2015/05/30 04:16:01 [alert] 24442#0: worker process 24443 exited on signal 11
2015/05/30 04:16:02 [alert] 24442#0: worker process 24463 exited on signal 11
2015/05/30 04:16:12 [alert] 24442#0: worker process 24527 exited on signal 11
2015/05/30 04:16:17 [alert] 24442#0: worker process 24531 exited on signal 11
2015/05/30 04:16:17 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24531
2015/05/30 04:16:17 [alert] 24442#0: worker process 24534 exited on signal 11
2015/05/30 04:16:17 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24534
2015/05/30 04:16:18 [alert] 24442#0: worker process 24535 exited on signal 11
2015/05/30 04:16:20 [alert] 24442#0: worker process 24536 exited on signal 11
2015/05/30 04:16:20 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24536
2015/05/30 04:16:27 [alert] 24442#0: worker process 24539 exited on signal 11
2015/05/30 04:16:27 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24539
2015/05/30 04:16:28 [alert] 24442#0: worker process 24548 exited on signal 11
2015/05/30 04:16:31 [alert] 24442#0: worker process 24549 exited on signal 11
2015/05/30 04:16:31 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24549
2015/05/30 04:16:32 [alert] 24442#0: worker process 24551 exited on signal 11
2015/05/30 04:16:32 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24551
2015/05/30 04:16:35 [alert] 24442#0: worker process 24552 exited on signal 11
2015/05/30 04:16:35 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24552
2015/05/30 04:16:36 [alert] 24442#0: worker process 24554 exited on signal 11
2015/05/30 04:16:36 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24554
2015/05/30 04:16:37 [alert] 24442#0: worker process 24555 exited on signal 11
2015/05/30 04:16:37 [alert] 24442#0: worker process 24556 exited on signal 11
2015/05/30 04:16:38 [alert] 24442#0: worker process 24557 exited on signal 11
2015/05/30 04:16:38 [alert] 24442#0: worker process 24558 exited on signal 11
2015/05/30 04:16:40 [alert] 24442#0: worker process 24559 exited on signal 11
2015/05/30 04:16:40 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24559
2015/05/30 04:16:47 [alert] 24442#0: worker process 24562 exited on signal 11
2015/05/30 04:16:47 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24562
The nginx workers were killed, which caused a percentage of requests to fail. I've reverted to this tree and it works fine: https://github.com/vozlt/nginx-module-vts/tree/5548b2df4a3f698474c50b27a490b108c429850f
One of the subsequent commits introduced this issue.
I am using Tengine latest master https://github.com/alibaba/tengine (nginx 1.6.2 - compatible)
Sincerely,
Alex.
With the new cache status overview (which is awesome), the cache locks are not released, and upon restart the following errors occur:
2015/08/25 07:29:15 [alert] 5661#5661: ignore long locked inactive cache entry 6c7b231443395c911bbfed66866da923, count:1
This appears to be a similar issue to what I found here: FRiCKLE/ngx_slowfs_cache@5800076; because of a lock on the increment count, the files are not released/flushed.
With 1.9 I get a negative uptime value:
Version Uptime Connections Requests
active reading writing waiting accepted handled Total Req/s
1.9.0 -4293105030ms 1 0 1 0 57 57 212 0
Any ideas? Also, can you have a look at the new 'stream' feature in 1.9?
If I enable nginx-module-vts via the nginx config, I notice sudden spikes in nginx's active and waiting connection counters. They grow constantly, and the number of active/waiting connections is not correct; netstat output shows that nginx is not counting them correctly.
The cause of this behaviour could be the following entries in the nginx error log:
2016/08/18 20:08:13 [notice] 29511#0: signal 17 (SIGCHLD) received
2016/08/18 20:08:13 [alert] 29511#0: worker process 1341 exited on signal 11
2016/08/18 20:08:13 [notice] 29511#0: start worker process 29713
2016/08/18 20:08:13 [notice] 29511#0: signal 29 (SIGIO) received
Usually it should say "worker process 123456 exited with code 0" but if nginx-module-vts is enabled the worker process does not shut down cleanly.
nginx version: 1.11.3
In the http section I have this config:
vhost_traffic_status on;
vhost_traffic_status_zone;
vhost_traffic_status_limit off;
Later I have some filters defined in specific locations.
This module reports total active connections, which is cool. But what about active connections per server?
First of all, thank you very much for the amazing plugin.
My question is: how do I split the traffic details per virtual host on one main status host?
Please see the example:
http {
    vhost_traffic_status_zone;
    include mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        server_name main.com;
        location / {
            root /home/admin/html;
            index index.html index.htm;
        }
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }

    server {
        listen 80;
        server_name user1.com;
        location /1 {
            root /home/user1/html/1;
            index index.html index.htm;
        }
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }

    server {
        listen 80;
        server_name user2.com;
        location /2 {
            root /home/user2/html/2;
            index index.html index.htm;
        }
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}
How can I split the virtual host traffic and show it on the /status page?
nginx version 1.11.3, compiled from source
I use openresty/1.9.7.2 with nginx-module-vts. When I used cosbench (mode=s3) to test HTTP requests with 6000 workers, and the active/reading/writing/waiting numbers went above 12000, the numbers were not correct after my test finished (in fact requests were 0, but the page still showed big numbers like 600x). Please check and fix it, thanks!
Hi,
We are now using your nginx VTS module and I find it very powerful.
But there is a problem I can't explain, and I can't find anything helpful about this error on GitHub: when I run nginx -s reload
three or four times, I get the error nginx: [alert] kill (25315, 1) failed (3: No such process)
and the nginx master process exits.
We use nginx 1.8.1 and CentOS 6.6, but I have tested the versions marked under Compatibility in the Readme.md, and all of them have this problem except 1.9.9.
Thanks for the excellent module.
Is there a way, or are there plans, to support aggregating the results from multiple nginx instances? I.e., one instance collects results via JSON from additional nginx instances and provides a fully aggregated results page.
On a side note, This tool is pretty awesome.
Hi, I'm back.
The new problem is that the total connections value in the main Server section of the status webpage is too big; it's wrong. It should be less than 15k, as there aren't that many connections on this particular system. After I restart nginx it goes back to normal (about 10,000 connections), but after about a week it goes wrong again.
BTW, when it's in the abnormal state, the data from nginx's original stub_status module is also abnormal, and of course the two values are equal.
Is it possible to show nginx TCP stream stats (http://nginx.org/en/docs/stream/ngx_stream_core_module.html),
e.g. for a TCP load-balanced Redis Cluster in nginx (https://community.centminmod.com/posts/18882/)?
ngx_http_vhost_traffic_status_module.c:3587: error: integer constant is too large for 'long' type
https://build.opensuse.org/build/home:non7top/CentOS_CentOS-6/i586/nginx/_log
Maybe a bit of a weird corner case but could be worth handling:
When playing around with balancer_by_lua_block in (nginx/OpenResty) and "vhost_traffic_status_zone" set, I noticed that workers would segfault when there are errors in the lua code. While this is not something that should usually happen, it highlights the fact that there are cases where ngx_http_upstream_state_t.peer is NULL, which is not handled in VTS (looking at other parts of the nginx source, it seems like they always NULL-check that variable before use).
This change adds handling for that:
https://github.com/moshisushi/nginx-module-vts/commit/592a13d00cefefa0c4028243703048330cbe3c56
Error log without fix:
2016/07/13 10:56:12 [error] 11293#11293: *1 failed to run balancer_by_lua*: balancer_by_lua:8: attempt to call global 'foo' (a nil value) stack traceback: balancer_by_lua:8: in function <balancer_by_lua:1> while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost" 2016/07/13 10:56:12 [alert] 11291#11291: worker process 11293 exited on signal 11
Error log with fix:
2016/07/13 10:53:55 [error] 11239#11239: *1 failed to run balancer_by_lua*: balancer_by_lua:8: attempt to call global 'foo' (a nil value) stack traceback: balancer_by_lua:8: in function <balancer_by_lua:1> while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost" 2016/07/13 10:53:55 [error] 11239#11239: *1 handler::shm_add_upstream() failed while logging request, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost"
Hi,
How can I disable server-zone statistics keyed by server_name/$host?
vhost_traffic_status_filter off;
vhost_traffic_status_filter_by_host off;
These directives don't make any change on the status page.
When I access the HTML version of the status page I see the current nginx uptime. The Readme says it's computed as nowMsec - loadMsec, but when I try to compute it from the JSON, the numbers differ.
How do I get the correct uptime from the JSON?
It would be incredibly useful if I could monitor the response time of server zones.
I'm happy to open a pull request, if you can tell me where I should start looking.
Dear,
Is it possible to calculate CCU (concurrent users) for each vhost using the nginx VTS module?
Thanks for your help.
Sincerely,
Tuan Kiet
Would using this module make Nginx slower? Thanks.
It would be really nice to have a sliding window with the average (max and min too?) response time to users.
A lot of people do this today through access logs or other modules, like nginx-statsd, which seems very expensive to me.
Hi, is it possible to install the module without uninstalling the existing nginx?
Hi,
there are some issues with nginx 1.7.10.
In a hosting environment, I am considering whether this could be used as a replacement for mod_throttle or mod_cband from the Apache days.
Bandwidth can be controlled with other modules, but so far there seems to be no module that enforces limits based on transfer (total traffic).
I am curious whether you have any plans to develop a transfer-limit feature.
Thank you for developing such a good module.
Mac OS X El Capitan
Nginx 1.9.15
/Users/User/Downloads/nginx-module-vts-master/src/ngx_http_vhost_traffic_status_module.c:1277:24: error: comparison of address of 'limits[i].variable' equal to a null pointer is always false [-Werror,-Wtautological-pointer-compare]
if (&limits[i].variable == NULL) {
~~~~~~~~~~^~~~~~~~ ~~~~
/Users/User/Downloads/nginx-module-vts-master/src/ngx_http_vhost_traffic_status_module.c:2151:25: error: comparison of address of 'filters[i].filter_key' equal to a null pointer is always false [-Werror,-Wtautological-pointer-compare]
if (&filters[i].filter_key == NULL || &filters[i].filter_name == NULL) {
~~~~~~~~~~~^~~~~~~~~~ ~~~~
/Users/User/Downloads/nginx-module-vts-master/src/ngx_http_vhost_traffic_status_module.c:2151:59: error: comparison of address of 'filters[i].filter_name' equal to a null pointer is always false [-Werror,-Wtautological-pointer-compare]
if (&filters[i].filter_key == NULL || &filters[i].filter_name == NULL) {
~~~~~~~~~~~^~~~~~~~~~~ ~~~~
3 errors generated.
I see the HTML template at https://github.com/vozlt/nginx-module-vts/blob/master/share/status.template.html. How would I go about customising the CSS after nginx-module-vts is installed? Or would customisation need to be done before it's installed?
Prometheus (prometheus.io) is a very powerful open-source monitoring solution; it collects data by scraping metrics from applications.
Using Prometheus with nginx-module-vts would be a good way to monitor and aggregate metrics from more than one nginx server.
One way to enable this would be for nginx-module-vts to support Prometheus as a display format.
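For reference, Prometheus scrapes a simple line-oriented text exposition format, so such a display format would only need to render the module's existing counters as lines like the following (the metric names here are invented for illustration, not something the module emits today):

```
nginx_vts_server_requests_total{host="example.com",code="2xx"} 1234
nginx_vts_server_bytes_total{host="example.com",direction="in"} 567890
```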
I am using this module (https://github.com/cubicdaiya/ngx_dynamic_upstream/) to manage upstreams, but the upstream status is not updated on the status page.
Is it possible to be done?
Regards,
ssabchew
Hello,
I've seen some inconsistent data in cacheZones:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_r620-root 70G 1.7G 69G 3% /
devtmpfs 48G 0 48G 0% /dev
tmpfs 48G 0 48G 0% /dev/shm
tmpfs 48G 137M 47G 1% /run
tmpfs 48G 0 48G 0% /sys/fs/cgroup
tmpfs 48G 155M 47G 1% /cache
"my_zone" Cache directory is /cache.
"cacheZones":{"my_zone":{"maxSize":9223372036854771712,"usedSize":161124352,"inBytes":2191759,"outBytes":1966063488,"responses":
maxSize is too good to be true :). The issue also appears on the web page. Maybe tmpfs is the problem?
I'm using master.
BRS
Is there a quick or direct command to ignore the calls made to the status page itself?
I was careful to give any workers that needed to die enough time to die between reloads, but this causes a failure-to-bind error from nginx, not a segfault. I think maybe the init conf method isn't hardened for repeated re-cycling?
Repro steps:
service nginx start
gdb -p
signal SIGHUP
ctrl-c
signal SIGHUP
ctrl-c
signal SIGHUP
segfault
backtrace full:
#0 __memcpy_sse2_unaligned () at ../sysdeps/x86_64/multiarch/memcpy-sse2-unaligned.S:33
No locals.
#1 0x00000000004ad46b in memcpy (__len=, __src=, __dest=0x7f03996f2029) at /usr/include/x86_64-linux-gnu/bits/string3.h:51
No locals.
#2 ngx_http_vhost_traffic_status_filter_unique (pool=0x1100190, keys=keys@entry=0x1103780) at ./modules/nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:3184
hash =
p = 0x7f03996f2029 ""
key = {len = 4976979, data = 0x7f03996f2010 "$server_addr:$server_port"}
i = 0
n = 1
uniqs = 0x1196488
filter_keys = 0x0
filter =
filters = 0x1157648
filter_uniqs =
#3 0x00000000004ad975 in ngx_http_vhost_traffic_status_init_main_conf (cf=0x7ffe12032d40, conf=0x1103778) at ./modules/nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:4779
ctx = 0x1103778
rc =
vtscf = 0x11037c8
#4 0x0000000000432e34 in ngx_http_block (cf=0x7ffe12032d40, cmd=0xa, conf=0x4bf13a) at src/http/ngx_http.c:269
rv = 0xffff80fc6644eea7 <error: Cannot access memory at address 0xffff80fc6644eea7>
ctx = 0x117b9e0
s = 7487744
clcf = 0x5
#5 0x0000000000419f33 in ngx_conf_handler (last=1, cf=0x7ffe12032d40) at src/core/ngx_conf_file.c:427
rv =
conf =
i = 9
confp =
found = 1
name = 0x1101220
cmd = 0x70aae0 <ngx_http_commands>
#6 ngx_conf_parse (cf=cf@entry=0x7ffe12032d40, filename=filename@entry=0x1100390) at src/core/ngx_conf_file.c:283
rv =
p =
size =
fd = 4
rc =
buf = {
pos = 0x10fafec "\n", ' ' <repeats 17 times>, "tcl tk;\n application/x-x509-ca-cert", ' ' <repeats 12 times>, "der pem crt;\n application/x-xpinstall", ' ' <repeats 15 times>, "xpi;\n application/xhtml+xml", ' ' <repeats 17 times>, "xhtml;\n application/xspf+xm"...,
last = 0x10fafed ' ' <repeats 17 times>, "tcl tk;\n application/x-x509-ca-cert", ' ' <repeats 12 times>, "der pem crt;\n application/x-xpinstall", ' ' <repeats 15 times>, "xpi;\n application/xhtml+xml", ' ' <repeats 17 times>, "xhtml;\n application/xspf+xml"..., file_pos = 8602557481092212992, file_last = 3544386174626525807,
start = 0x10fa6c0 "user www-data;\nworker_rlimit_core 500m;\nworking_directory /tmp/nginxcores/;\nworker_rlimit_nofile 1000000;\nworker_processes 16;\nworker_cpu_affinity\n", ' ' <repeats 20 times>, '0' <repeats 15 times>, "100000000\n "...,
end = 0x10fb6c0 "\020\020", tag = 0xffff8001edfcd401, file = 0x401, shadow = 0x100, temporary = 1, memory = 0, mmap = 0, recycled = 0, in_file = 1, flush = 1, sync = 0, last_buf = 0, last_in_chain = 1, last_shadow = 0, temp_file = 0, num = 0}
tbuf =
prev = 0x0
conf_file = {file = {fd = 4, name = {len = 21, data = 0x1100406 "/etc/nginx/nginx.conf"}, info = {st_dev = 2049, st_ino = 133886, st_nlink = 1, st_mode = 33188, st_uid = 0, st_gid = 0, __pad0 = 0, st_rdev = 0, st_size = 2349, st_blksize = 4096, st_blocks = 8,
st_atim = {tv_sec = 1468545661, tv_nsec = 100448752}, st_mtim = {tv_sec = 1468545433, tv_nsec = 231630832}, st_ctim = {tv_sec = 1468545433, tv_nsec = 231630832}, __glibc_reserved = {0, 0, 0}}, offset = 2349, sys_offset = 17826272, log = 0x11904b8,
thread_handler = 0x7ffe12032c98, thread_ctx = 0x7ffe12032c98, aio = 0x4000, valid_info = 0, directio = 1}, buffer = 0x7ffe12032b50, dump = 0x0, line = 87}
cd =
type = parse_file
#7 0x00000000004177a1 in ngx_init_cycle (old_cycle=old_cycle@entry=0x11904a0) at src/core/ngx_cycle.c:268
rv =
senv = 0x7ffe12033358
env =
i =
n =
log = 0x11904b8
conf = {name = 0x0, args = 0x1100fe8, cycle = 0x11001e0, pool = 0x1100190, temp_pool = 0x11325b0, conf_file = 0x7ffe12032ba0, log = 0x11904b8, ctx = 0x1101550, module_type = 1347703880, cmd_type = 33554432, handler = 0x0, handler_conf = 0x0}
pool = 0x1100190
cycle = 0x11001e0
old =
shm_zone =
oshm_zone =
part =
opart =
file =
ls =
nls =
ccf =
old_ccf =
module =
hostname = "omitted.com'
#8 0x0000000000428a22 in ngx_master_process_cycle (cycle=0x11904a0, cycle@entry=0x10f4340) at src/os/unix/ngx_process_cycle.c:234
title = <optimized out>
p = <optimized out>
size = <optimized out>
i = <optimized out>
n = <optimized out>
sigio = 0
set = {__val = {0 <repeats 16 times>}}
itv = {it_interval = {tv_sec = 0, tv_usec = 0}, it_value = {tv_sec = 0, tv_usec = 0}}
live = <optimized out>
delay = 0
ls = <optimized out>
ccf = 0x11910b0
#9 0x0000000000407b9f in main (argc=, argv=) at src/core/nginx.c:359
b = <optimized out>
log = 0x7251c0 <ngx_log>
i = <optimized out>
cycle = 0x10f4340
init_cycle = {conf_ctx = 0x0, pool = 0x10f3d90, log = 0x7251c0 <ngx_log>, new_log = {log_level = 0, file = 0x0, connection = 0, disk_full_time = 0, handler = 0x0, data = 0x0, writer = 0x0, wdata = 0x0, action = 0x0, next = 0x0}, log_use_stderr = 0, files = 0x0,
free_connections = 0x0, free_connection_n = 0, reusable_connections_queue = {prev = 0x0, next = 0x0}, listening = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, paths = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, config_dump = {
elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, open_files = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, shared_memory = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0,
pool = 0x0}, connection_n = 0, files_n = 0, connections = 0x0, read_events = 0x0, write_events = 0x0, old_cycle = 0x0, conf_file = {len = 21, data = 0x4b6749 "/etc/nginx/nginx.conf"}, conf_param = {len = 0, data = 0x0}, conf_prefix = {len = 11,
data = 0x4b6749 "/etc/nginx/nginx.conf"}, prefix = {len = 11, data = 0x4b673d "/etc/nginx/"}, lock_file = {len = 0, data = 0x0}, hostname = {len = 0, data = 0x0}}
cd = <optimized out>
ccf = <optimized out>