
heplify-server's People

Contributors

adubovikov, aqsyonas, aqsyounas, ay1000, dependabot[bot], emmceemoore, f355, games130, jstukmanis, kevin-olbrich, kirychukyurii, lmangani, luit, lwahlmeier, mauri870, ncopa, negbie, okhowang, systemcrash, tina-kuo, trdenton, tsearle, xhantu, ziondials


heplify-server's Issues

overflowing db channel

Hi,

What does the following warning mean?

2018-04-14T16:44:33+01:00 WARN overflowing db channel by 128 packets

I get hundreds of these in the server logs (amongst others). Is there a parameter I need to tweak to increase the channel size, or do I need to tune some other values?
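The warning means HEP packets are arriving faster than the DB writer drains its channel. A hedged sketch of settings worth experimenting with in heplify-server.toml (both keys appear in the config dump further down this page; the values below are illustrative starting points, not recommendations, and their exact effect on the channel depends on the heplify-server version):

```toml
# Larger bulk inserts and a shorter flush timer drain the DB channel faster.
# Values are illustrative; the defaults shown elsewhere on this page are 200 and 2.
DBBulk  = 400   # rows per bulk INSERT
DBTimer = 1     # flush interval in seconds
```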

heplify method response {response} contains symbols

Hi, I'm using Heplify-Server to expose Prometheus metrics. I'm using the Heplify capture agent on a 3CX server to send data to Heplify-Server.

Some of the heplify_method_response metrics seem to have odd response values like �JSip0NOTIFY and ��Sip0CANCEL — is this normal?

Here are all the heplify_method_response metrics being exposed

heplify_method_response{method="ACK",response="ACK",target_name="3cx"} 284
heplify_method_response{method="ACK",response="Sip0ACK",target_name="3cx"} 2
heplify_method_response{method="BYE",response="200",target_name="3cx"} 6
heplify_method_response{method="BYE",response="BYE",target_name="3cx"} 6
heplify_method_response{method="BYE",response="Sip0BYE",target_name="3cx"} 1
heplify_method_response{method="CANCEL",response="��Sip0CANCEL",target_name="3cx"} 2
heplify_method_response{method="CANCEL",response="200",target_name="3cx"} 2
heplify_method_response{method="CANCEL",response="CANCEL",target_name="3cx"} 2
heplify_method_response{method="INFO",response="200",target_name="3cx"} 930
heplify_method_response{method="INFO",response="INFO",target_name="3cx"} 930
heplify_method_response{method="INVITE",response="\n�Sip0INVITE",target_name="3cx"} 2
heplify_method_response{method="INVITE",response="100",target_name="3cx"} 282
heplify_method_response{method="INVITE",response="180",target_name="3cx"} 7
heplify_method_response{method="INVITE",response="200",target_name="3cx"} 182
heplify_method_response{method="INVITE",response="401",target_name="3cx"} 1
heplify_method_response{method="INVITE",response="407",target_name="3cx"} 12
heplify_method_response{method="INVITE",response="415",target_name="3cx"} 97
heplify_method_response{method="INVITE",response="487",target_name="3cx"} 2
heplify_method_response{method="INVITE",response="INVITE",target_name="3cx"} 325
heplify_method_response{method="INVITE",response="Sip0INVITE",target_name="3cx"} 1
heplify_method_response{method="NOTIFY",response="�JSip0NOTIFY",target_name="3cx"} 2
heplify_method_response{method="NOTIFY",response="�NSip0NOTIFY",target_name="3cx"} 2
heplify_method_response{method="NOTIFY",response="�PSip0NOTIFY",target_name="3cx"} 2
heplify_method_response{method="NOTIFY",response="200",target_name="3cx"} 951
heplify_method_response{method="NOTIFY",response="406",target_name="3cx"} 386
heplify_method_response{method="NOTIFY",response="500",target_name="3cx"} 4
heplify_method_response{method="NOTIFY",response="NOTIFY",target_name="3cx"} 1338
heplify_method_response{method="OPTIONS",response="OPTIONS",target_name="3cx"} 1
heplify_method_response{method="REGISTER",response="200",target_name="3cx"} 927
heplify_method_response{method="REGISTER",response="401",target_name="3cx"} 22
heplify_method_response{method="REGISTER",response="407",target_name="3cx"} 883
heplify_method_response{method="REGISTER",response="REGISTER",target_name="3cx"} 1832
heplify_method_response{method="REGISTER",response="Sip0REGISTER",target_name="3cx"} 405
heplify_method_response{method="SUBSCRIBE",response="200",target_name="3cx"} 743
heplify_method_response{method="SUBSCRIBE",response="407",target_name="3cx"} 101
heplify_method_response{method="SUBSCRIBE",response="SUBSCRIBE",target_name="3cx"} 844
heplify_method_response{method="SUBSCRIBE",response="Sip0SUBSCRIBE",target_name="3cx"} 12

AlegID = other (non X-CID)

I am doing call correlation using the "P-Charging-Vector" header; it contains an icid-value field which is constant throughout the call.
Therefore, in heplify-server.toml I configured:
AlegID = "P-Charging-Vector"

Based on the captured SIP messages in MySQL, the callid_aleg column is always empty.

example of the header P-Charging-Vector:
P-Charging-Vector: icid-value="d335f5981ed702c414843f092aca5660.3732318005.2962603871.301"

In kamailio, what I did was set kamailio.cfg with
modparam("sipcapture", "callid_aleg_header", "P-Charging-Vector")

In the callid_aleg column I would then get the value icid-value="d335f5981ed702c414843f092aca5660.3732318005.2962603871.301"
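What the reporter asks for can be sketched as pulling the icid-value parameter out of the P-Charging-Vector header body. This is an illustrative sketch of the correlation, not heplify-server's actual parser code:

```go
package main

import (
	"fmt"
	"regexp"
)

// extractICID returns the icid-value parameter of a P-Charging-Vector
// header body, or "" when the parameter is absent. Illustrative only.
var icidRE = regexp.MustCompile(`icid-value="?([^";,]+)"?`)

func extractICID(pcv string) string {
	m := icidRE.FindStringSubmatch(pcv)
	if m == nil {
		return ""
	}
	return m[1]
}

func main() {
	h := `icid-value="d335f5981ed702c414843f092aca5660.3732318005.2962603871.301"`
	fmt.Println(extractICID(h)) // the constant call correlation key
}
```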

Homer7 Aleg Correlation

Hi, I'm testing heplify-server with the Homer7 Postgres database and the homer-app UI.
So far I couldn't get A-leg correlation in the call flow GUI to work, although FreeSWITCH sets the X-CID header.
I noticed the Homer5 MySQL DB had the callid_aleg column (and the correlation in the "Flow" GUI was OK). Homer7 doesn't have such a column, so I was wondering how the correlation works with heplify-server and Homer7.

I'm not sure whether this correlation issue is related to heplify-server or homer-app, so please let me know if I should ask in homer-app instead.

Where do I get the stats

Sorry to be a noob. I've installed the docker images, I have a Homer v5 running, and I am feeding SIP & RTCP data to port 9060 of the instance, but I only see classic Homer v5 data — no new data in the Homer databases.
I am looking for RTCP / QoS stats!
J.

filepath invalid character '

When I execute "go build cmd/heplify-server/heplify-server.go", it fails with the following error:
-> unzip /root/go/pkg/mod/cache/download/github.com/negbie/heplify-server/@v/v0.0.0-20181221112811-5429c3f96f6e.zip: malformed file path "docker/hep-prom-graf/grafana/provisioning/dashboards/SIP_KPI's.json": invalid char '''

Maybe the dashboard could be renamed without the "'" character for better compatibility?

Tables partitioned incorrectly

Hi,

Table rotations done by heplify-server set an incorrect time (local timezone?), e.g.
p20180404_pnr0 1522789200 (Tue, 03 Apr 2018 21:00:00 GMT)
but it should be 1522886400 (Thu, 05 Apr 2018 00:00:00 GMT).

As a result, on day 2018-04-04 it can't write:
driver error (1526): Table has no partition for value 1522886218

docker homer-heplify ERROR 1136 (21S01) at line 259: Column count doesn't match value count at row 1

@negbie, I noticed this error when running the docker image.
Is this caused by my MySQL version? I am using 5.7.21 MySQL Community Server (GPL).

/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init_db.sql
ERROR 1136 (21S01) at line 259: Column count doesn't match value count at row 1

The error occurs in this statement:

INSERT INTO user VALUES ('localhost','homer_user','*D0F60D1E3C6C124FEEB76527E00A9380C37643EE','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','N','','','','',0,0,0,0,'mysql_native_password','','N');

The mysql.user table is formatted as follows; mysql> desc mysql.user; returns these Field values:

Host, User, Select_priv, Insert_priv, Update_priv, Delete_priv, Create_priv, Drop_priv, Reload_priv, Shutdown_priv, Process_priv, File_priv, Grant_priv, References_priv, Index_priv, Alter_priv, Show_db_priv, Super_priv, Create_tmp_table_priv, Lock_tables_priv, Execute_priv, Repl_slave_priv, Repl_client_priv, Create_view_priv, Show_view_priv, Create_routine_priv, Alter_routine_priv, Create_user_priv, Event_priv, Trigger_priv, Create_tablespace_priv, ssl_type, ssl_cipher, x509_issuer, x509_subject, max_questions, max_updates, max_connections, max_user_connections, plugin, authentication_string, password_expired, password_last_changed, password_lifetime, account_locked
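The positional INSERT fails because mysql.user's column layout changed between MySQL versions, so the value count no longer matches. A hedged, version-independent alternative for the init script would be account-management statements instead of a raw row insert; the password and database pattern below are placeholders, not values from this repository:

```sql
-- Version-independent user creation; MySQL maintains mysql.user itself.
-- 'homer_password' and the homer% pattern are illustrative placeholders.
CREATE USER 'homer_user'@'localhost' IDENTIFIED BY 'homer_password';
GRANT ALL PRIVILEGES ON `homer%`.* TO 'homer_user'@'localhost';
FLUSH PRIVILEGES;
```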

Malformed packet and empty database

Hey ,

I was looking at the homer-ui and I noticed that it was empty so I looked into the log files and this appeared:

2018-07-13T11:16:01-07:00 INFO start heplify-server with config.HeplifyServer{HEPAddr:"OMITTED:9060", ESAddr:"", MQDriver:"", MQAddr:"", MQTopic:"", PromAddr:"0.0.0.0:9069", PromTargetIP:"", PromTargetName:"", HoraclifixStats:false, RTPAgentStats:false, DBShema:"homer5", DBDriver:"mysql", DBAddr:"localhost:3306", DBUser:"homer", DBPass:"OMITTED", DBDataTable:"homer_data", DBConfTable:"homer_configuration", DBTableSpace:"", DBBulk:200, DBTimer:2, DBRotate:true, DBPartLog:"2h", DBPartSip:"1h", DBPartQos:"12h", DBDropDays:0, DBDropOnStart:false, Dedup:false, DiscardMethod:[]string{}, AlegIDs:[]string{}, LogDbg:"", LogLvl:"info", LogStd:false, Config:"./heplify-server.toml", Version:false}

2018-07-13T11:16:01-07:00 INFO start creating tables (2018-07-13 11:16:01.117106207 -0700 PDT m=+0.012256024)

2018-07-13T11:16:01-07:00 INFO expose metrics with no or unbalanced targets
2018-07-13T11:16:01-07:00 INFO end creating tables (2018-07-13 11:16:01.184513856 -0700 PDT m=+0.079663642)

2018-07-13T11:16:01-07:00 WARN don't schedule daily drop job because config.Setting.DBDropDays is 0

2018-07-13T11:16:01-07:00 INFO schedule daily rotate job at 03:30:00

2018-07-13T11:16:01-07:00 INFO mysql connection established

2018-07-13T11:16:05-07:00 INFO 24.87.208.193:34931 -> heapify
2018-07-13T11:16:26-07:00 WARN HEP packet length is 1187 but should be 1392
2018-07-13T11:16:26-07:00 WARN malformed packet with length 205 which is neither hep nor protobuf encapsulated

Moreover, as a sanity check, I looked at the MySQL database directly and saw the right databases and tables, and the data was there.

(I'm sorry for the spam on a friday night)

Invalid Postgres byte sequence

Some of my packets are not making it into the Postgres database due to the addition of 0x00 characters at the end of the SIP packet.
The log is as follows:
ERR pq: invalid byte sequence for encoding "UTF8": 0x00

2019/02/01 10:43:57.309923 postgres.go:262: ERR "SIP/2.0 200 OK\r\nVia: SIP/2.0/UDP 10.0.1.1:11000;branch=z9hG4bKH256a1FF4cDeS;received=10.0.1.1;rport=11000\r\nRecord-Route: sip:10.0.1.2;lr;ftag=XDmHByK8Ue03j\r\nCall-ID: f95b241c-260e-11e9-bbc4-793f36e89cab\r\nFrom: "59413"sip:[email protected];tag=XDmHByK8Ue03j\r\nTo: sip:[email protected];tag=ez4vjg9p\r\nCSeq: 134173004 INVITE\r\nContact: sip:[email protected]:5060;user=phone;expires=1800\r\nSupported: 100rel\r\nAllow: INVITE,ACK,OPTIONS,BYE,CANCEL,REGISTER,INFO,PRACK,SUBSCRIBE,NOTIFY,UPDATE,REFER,MESSAGE\r\nUser-Agent: HUAWEI eSpace IAD1224/V300R002C01SPCm00\r\nContent-Length: 214\r\nContent-Type: application/sdp\r\n\r\nv=0\r\no=58029 3025098750 3025098750 IN IP4 10.0.2.40\r\ns=-\r\nc=IN IP4 10.0.2.40\r\nt=0 0\r\nm=audio 50112 RTP/AVP 0 101\r\na=sendrecv\r\na=rtpmap:0 PCMU/8000\r\na=ptime:20\r\na=rtpmap:101 telephone-event/8000/1\r\na=fmtp:101 0-15\r\n 0-15\r\n_\x00\x00\x00\x00\x00\x00\x00\x00\x00_"

2019/02/01 10:43:57.309947 postgres.go:265: ERR pq: Could not complete operation in a failed transaction
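Postgres text columns reject NUL (0x00) bytes in UTF-8 input. A defensive mitigation, sketched here as an assumption rather than heplify-server's actual code, is to strip them from the payload before the INSERT:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitize removes NUL bytes, which Postgres's UTF8 encoding rejects,
// from a SIP payload before it is written to the database. Sketch only.
func sanitize(msg string) string {
	return strings.ReplaceAll(msg, "\x00", "")
}

func main() {
	raw := "a=fmtp:101 0-15\r\n_\x00\x00\x00_" // trailing padding like in the log above
	fmt.Printf("%q\n", sanitize(raw))
}
```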

WARN addHdr err: no semi found in

I am getting these errors, which cause particular SIP messages to not be inserted into the MySQL database. Below are two examples (I have masked some info with ***):

2018-04-10T11:00:16+08:00 WARN addHdr err: no semi found in: nonce="xWg1DnB3QqPdiSYw8jsH0g==",algorithm=MD5
"SIP/2.0 401 Unauthorized\r\nVia: SIP/2.0/UDP 172.16.126.5:40672;branch=z9hG4bK4plsn3mkcjiilllskn4n018scT39302;rport=40672\r\nCall-ID: [email protected]\r\nFrom: sip:+**********@***.***.***.**;tag=cz0400cm\r\nTo: sip:+**********@***.***.***.**;tag=7lsatkq7\r\nCSeq: 3047 REGISTER\r\nWWW-Authenticate: Digest realm="..*.",\r\n nonce="xWg1DnB3QqPdiSYw8jsH0g==",algorithm=MD5\r\nContent-Length: 0\r\n\r\n"

2018-04-10T11:00:16+08:00 WARN addHdr err: no semi found in: realm="...",nonce="aWo/bi1Gr0/Mdgr4aW295Q==",
"REGISTER sip:
....;dpt=8804_286 SIP/2.0\r\nVia: SIP/2.0/UDP 172.16.124.15:5060;branch=z9hG4bKc0gcigg3ehiib3b31u421u3z3;Role=3;Dpt=8816_16;TRC=ffffffff-ffffffff,SIP/2.0/UDP 172.16.124.13:5060;branch=z9hG4bK2iz41ez3f0euede0i31cezz14;Role=3;Dpt=8812_16;TRC=ffffffff-ffffffff,SIP/2.0/UDP 172.16.126.5:55795;branch=z9hG4bK0342nc03k4spkil8jz1213mm8T25093;rport=55795\r\nCall-ID: [email protected]\r\nFrom: sip:+**********@***.***.***.**;tag=0pzp8pzs\r\nTo: sip:+**********@***.***.***.**\r\nCSeq: 3842 REGISTER\r\nAccept: application/sdp,application/simservs+xml\r\nAccept-Encoding: identity\r\nAccept-Language: en\r\nAllow: INVITE,ACK,OPTIONS,BYE,CANCEL,REGISTER,INFO,PRACK,SUBSCRIBE,NOTIFY,UPDATE,MESSAGE,REFER\r\nAuthorization: Digest username="+*@...",\r\n realm="...",nonce="aWo/bi1Gr0/Mdgr4aW295Q==",\r\n uri="sip:...",\r\n response="06bffcd5480573678108e7fdfb55ce4f",algorithm=MD5\r\nContact: sip:+**********@172.16.126.5:55795;transport=udp;ann=SE2600_VOIP_PRIVATE\r\nExpires: 1800\r\nMax-Forwards: 68\r\nRequire: path\r\nSupported: 100rel,replaces,timer,privacy,in-dialog\r\nUser-Agent: HUAWEI-EchoLife HG8240H/V3R013C00S105\r\nPath: sip:term@*********.***.***.***.**;lr;ssn;TYPE=V4;IP=172.16.126.5;PORT=55795;Dpt=8812_86;TRC=ffffffff-ffffffff\r\nP-Visited-Network-ID: "....**"\r\nP-Access-Network-Info: IEEE-802.11;"location-info=172.16.126.5"\r\n"
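Both failing messages use RFC 3261 header folding: a header continues on the next line when that line starts with whitespace (e.g. `WWW-Authenticate: Digest realm="...",\r\n nonce=...`). One plausible fix, sketched here as an assumption rather than the parser's actual change, is to unfold headers before splitting them:

```go
package main

import (
	"fmt"
	"regexp"
)

// foldRE matches an RFC 3261 folded line break: CRLF followed by
// linear whitespace. Replacing it with a single space unfolds the header.
var foldRE = regexp.MustCompile(`\r\n[ \t]+`)

func unfold(msg string) string {
	return foldRE.ReplaceAllString(msg, " ")
}

func main() {
	h := "WWW-Authenticate: Digest realm=\"x\",\r\n nonce=\"y\",algorithm=MD5"
	fmt.Printf("%q\n", unfold(h)) // one logical header line
}
```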

Handler for missing HEP TS

When sending a HEP packet without Timestamp (time_sec + time_usec) headers, unix timestamp 0 is used.
Since HEP3 still allows timestamp-less deliveries in general, perhaps the current time should be used to mark a packet/document delivered without accurate stamps, for broader backwards compatibility with future protocols lacking this ability.

additional hep-metrics for SRD and RRD

Could additional metrics be provided to show the average RRD (Registration Request Delay) and SRD (Session Request Delay)?

RRD = Time of Final Response (200 OK) - Time of REGISTER request

SRD = Time of 180 Ringing - Time of Initial INVITE

This is taken from RFC 6076.

heplify-server-docker

After installation, no templates are shown, and Homer5 reports the error "Could not resolve all promises".

Fields not populated in index

Hi,

Firstly, thank you for this! I've managed to get almost everything up and running in K8s using your examples.

The last problem I have is that in elasticsearch although I can see plenty of potential fields around the SIP call none of them are populated.

I'm using FreeSWITCH with HEP enabled, sending via heplify-server, which sends to Prometheus and Grafana, where I can see charts of the calls being correctly populated.

I'm then using Telegraf to forward the logs into Elasticsearch, where I can see logs that look a little like this:

@timestamp:December 28th 2018, 16:33:00.000 heplify_packets_total.counter:77 measurement_name:heplify_packets_total tag.host:telegraf-6fbf7666cb-8pfkp tag.type:sip tag.url:http://heplify-server:9096/metrics _id:4mSp9WcB9KjhsTkXa4OX _type:metrics _index:hep-2018.12.28 _score: -
@timestamp:December 28th 2018, 16:33:00.000 heplify_method_response.counter:11 measurement_name:heplify_method_response tag.host:telegraf-6fbf7666cb-8pfkp tag.method:INVITE tag.node_id:6553600 tag.response:200 tag.url:http://heplify-server:9096/metrics _id:6GSp9WcB9KjhsTkXa4OX _type:metrics _index:hep-2018.12.28 _score: -

So I can see the invite in there but no other meta information around the call. I'm pretty sure I'm missing something obvious here so any help would be appreciated.

P.S do you have a slack channel for these type of questions?

too many metrics cause heplify-server output to hang

I made a change to prometheus.go:
I added a new counter (to grab all SIP messages with IP and port).

p.CounterVecMetrics["heplify_method_capture01"] = prometheus.NewCounterVec(prometheus.CounterOpts{Name: "heplify_method_capture01", Help: "All SIP message counter"}, []string{"method", "cseq_method", "source_ip", "source_port", "destination_ip", "destination_port"})

With the newly added counter running, everything works at first (heplify-server produces output, and Telegraf manages to get the output and store it in InfluxDB), but it hangs / stops working after a few minutes.

The Telegraf log says "getsockopt: connection refused". Below is some of the log output.
Any idea how to debug this hang / stop-working problem?

I have tested outputting a smaller number of metrics and it runs fine, with no hangs.

Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 50.68262ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 41.738095ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 34.860311ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z E! Error in plugin [inputs.prometheus]: took longer to collect than collection interval (1s)
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 37.493373ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 48.7213ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 36.79713ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 41.394166ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 29.940258ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 35.980791ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 28.388011ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 38.591658ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 40.697321ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 44.516893ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 37.445192ms
Apr 16 21:24:10 localhost telegraf: 2018-04-16T13:24:10Z D! Output [influxdb] wrote batch of 1000 metrics in 37.947831ms
Apr 16 21:24:11 localhost telegraf: 2018-04-16T13:24:11Z D! Output [influxdb] wrote batch of 1000 metrics in 45.27528ms
Apr 16 21:24:11 localhost telegraf: 2018-04-16T13:24:11Z D! Output [influxdb] wrote batch of 1000 metrics in 39.588898ms
Apr 16 21:24:11 localhost telegraf: 2018-04-16T13:24:11Z D! Output [influxdb] wrote batch of 1000 metrics in 41.734043ms
Apr 16 21:24:11 localhost telegraf: 2018-04-16T13:24:11Z E! Error in plugin [inputs.prometheus]: took longer to collect than collection interval (1s)
Apr 16 21:24:11 localhost telegraf: 2018-04-16T13:24:11Z D! Output [influxdb] wrote batch of 1000 metrics in 47.767747ms
Apr 16 21:24:11 localhost telegraf: 2018-04-16T13:24:11Z D! Output [influxdb] wrote batch of 1000 metrics in 35.308999ms
Apr 16 21:24:11 localhost telegraf: 2018-04-16T13:24:11Z D! Output [influxdb] wrote batch of 1000 metrics in 37.057446ms
Apr 16 21:24:11 localhost telegraf: 2018-04-16T13:24:11Z D! Output [influxdb] wrote batch of 1000 metrics in 38.844666ms
Apr 16 21:24:11 localhost telegraf: 2018-04-16T13:24:11Z D! Output [influxdb] wrote batch of 1000 metrics in 37.312039ms
Apr 16 21:24:11 localhost telegraf: 2018-04-16T13:24:11Z D! Output [influxdb] wrote batch of 1000 metrics in 38.181017ms
Apr 16 21:24:11 localhost telegraf: 2018-04-16T13:24:11Z E! Error in plugin [inputs.prometheus]: error making HTTP request to http://localhost:9999/metrics: Get http://localhost:9999/metrics: dial tcp [::1]:9999: getsockopt: connection refused
Apr 16 21:24:11 localhost telegraf: 2018-04-16T13:24:11Z D! Output [influxdb] wrote batch of 1000 metrics in 64.495392ms
Apr 16 21:24:12 localhost telegraf: 2018-04-16T13:24:12Z E! Error in plugin [inputs.prometheus]: error making HTTP request to http://localhost:9999/metrics: Get http://localhost:9999/metrics: dial tcp [::1]:9999: getsockopt: connection refused
Apr 16 21:24:13 localhost telegraf: 2018-04-16T13:24:13Z E! Error in plugin [inputs.prometheus]: error making HTTP request to http://localhost:9999/metrics: Get http://localhost:9999/metrics: dial tcp [::1]:9999: getsockopt: connection refused
Apr 16 21:24:14 localhost telegraf: 2018-04-16T13:24:14Z E! Error in plugin [inputs.prometheus]: error making HTTP request to http://localhost:9999/metrics: Get http://localhost:9999/metrics: dial tcp [::1]:9999: getsockopt: connection refused
Apr 16 21:24:15 localhost telegraf: 2018-04-16T13:24:15Z E! Error in plugin [inputs.prometheus]: error making HTTP request to http://localhost:9999/metrics: Get http://localhost:9999/metrics: dial tcp [::1]:9999: getsockopt: connection refused
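A likely explanation for the hang: Prometheus keeps one time series per unique label combination, so labeling a counter with source/destination IP and port makes the series count (and the scrape payload) grow multiplicatively until the exporter can no longer serve /metrics within the 1s scrape interval. The counts below are illustrative assumptions, not measured values:

```go
package main

import "fmt"

// seriesUpperBound multiplies the number of distinct values per label,
// giving the worst-case number of time series a CounterVec can create.
func seriesUpperBound(labelValueCounts ...int) int {
	n := 1
	for _, c := range labelValueCounts {
		n *= c
	}
	return n
}

func main() {
	// method, cseq_method, source_ip, source_port, destination_ip, destination_port
	// (illustrative counts for a busy proxy with ephemeral client ports)
	fmt.Println(seriesUpperBound(14, 14, 20, 10000, 20, 10000))
}
```

Dropping the port labels (or any label with effectively unbounded values) keeps the series count tractable.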

Ignoring some message types

Hello,

My captagents capture inter-server flows that are not fully SIP. I don't want to insert them into the DB for the moment, but as they are not parsable I get 3 log lines every time, which overflow my logfiles (in docker).

Is there a way to exclude those packets from the logs?

Below are log examples with the headers (CGPSIP and CGPRELAY).

{"log":"2018/07/16 09:30:11.135871 decoder.go:101: WARN parseStartLine err: received err while parsing start line: parseStartLineRequest err: request line did not split on LWS correctly\n","stream":"stderr","time":"2018-07-16T09:30:11.136073595Z"}
{"log":""CGPSIP/udp[

{"log":"2018/07/16 09:30:11.137306 decoder.go:101: WARN parseStartLine err: received err while parsing start line: parseStartLineRequest err: request line did not split on LWS correctly\n","stream":"stderr","time":"2018-07-16T09:30:11.137415149Z"}
{"log":""CGPRELAY/tcp

Thanks

heplify-server to elasticsearch index with YYYYMMDD

Currently heplify-server posts the SIP messages into Elasticsearch with the index 'heplify-server'. Could this be enhanced to use an index with a date stamp? This would allow cleaning up old data records in Elasticsearch based on a retention policy I define.

eg. heplify-server-YYYYMMDD.

Kibana Timelion vs Grafana for Successful Calls and Registers

Trying to duplicate the Grafana successful calls/registers graph in Kibana/Timelion, using a query to count the '200 OK' responses where cseq.method = REGISTER or INVITE, respectively. Timelion syntax used:
.es(index=heplify-server-, timefield=Timestamp,q='SIP.Cseq.Method:REGISTER AND SIP.StartLine.Resp:200', metric=count).scale_interval(1s).color(#1e6ca7).lines().label("Successful Registers").legend(position=nw,showTime=true), .es(index=heplify-server-, timefield=Timestamp,q='SIP.Cseq.Method:INVITE AND SIP.StartLine.Resp:200', metric=count).scale_interval(1s).color(#f0882f).lines().label("Successful Invites")

Graphs are not lining up between the two.

Question: in Grafana, how are you able to ensure that the same SIP message isn't counted multiple times when it's reported by different servers in the call path? I.e. the 200 OK will traverse a few servers before hitting the endpoint, and each of those servers has heplify-client running.

FYI - the Grafana graph is spot on. Using SIPp to generate different traffic rates, the Grafana chart reflects the CPS and RPS rates perfectly.

Installation Help

Hey,

This might be a very basic question.

So I'm trying to install it, but to make sure I complete it right, should I do the following:

I'm using a new, clean Debian 8.10 server. If I download the source code and then make my own heplify-server.toml with the following values, would that be correct?

HEPAddr         = "0.0.0.0:9060"
ESAddr          = "my IP:9200"
MQDriver        = ""
MQAddr          = ""
MQTopic         = ""
PromAddr        = "" [I'm not sure what to put here]
PromTargetIP    = "" [I'm not sure what to put here]
PromTargetName  = "" [I'm not sure what to put here]
HoraclifixStats = false
RTPAgentStats   = false
DBShema         = "homer5"
DBDriver        = "mysql"
DBAddr          = "localhost:3306"
DBUser          = "root"
DBPass          = ""
DBDataTable     = "homer_data"
DBConfTable     = "homer_configuration"
DBTableSpace    = ""
DBBulk          = 200
DBTimer         = 2
DBRotate        = true
DBPartLog       = "2h"
DBPartSip       = "1h"
DBPartQos       = "12h"
DBDropDays      = 0
DBDropOnStart   = false
Dedup           = false
DiscardMethod   = []
AlegIDs         = [] [Im not sure what to put here]
LogDbg          = ""
LogLvl          = "info"
LogStd          = false
Config          = "./heplify-server.toml"
Version = false

Essentially my doubts boil down to: can I just run the new binary without having MySQL set up, i.e. would the installer do that for me?

heplify-server crash with "slice bounds out of range"

Hello,
I'm running a docker version of heplify-server (latest: heplify-server 0.95) and see some recurring crashes:

heplify-server | panic: runtime error: slice bounds out of range
heplify-server |
heplify-server | goroutine 53 [running]:
heplify-server | github.com/negbie/sipparser.parseUriHost(0xc4232e6a80, 0xbda1e8)
heplify-server | /go/src/github.com/negbie/sipparser/uri.go:231 +0xa22
heplify-server | github.com/negbie/sipparser.(*URI).Parse(0xc4232e6a80)
heplify-server | /go/src/github.com/negbie/sipparser/uri.go:72 +0x37
heplify-server | github.com/negbie/sipparser.ParseURI(0x0, 0x0, 0x0)
heplify-server | /go/src/github.com/negbie/sipparser/uri.go:65 +0x60
heplify-server | github.com/negbie/sipparser.parseFromGetURI(0xc42261f880, 0xbda178)
heplify-server | /go/src/github.com/negbie/sipparser/from.go:71 +0x79
heplify-server | github.com/negbie/sipparser.(*From).parse(0xc42261f880)
heplify-server | /go/src/github.com/negbie/sipparser/from.go:43 +0x37
heplify-server | github.com/negbie/sipparser.getFrom(0x0, 0x0, 0x1ab)
heplify-server | /go/src/github.com/negbie/sipparser/from.go:109 +0x60
heplify-server | github.com/negbie/sipparser.(*SipMsg).parseTo(0xc4232e4c00, 0x0, 0x0)
heplify-server | /go/src/github.com/negbie/sipparser/parser.go:545 +0x39
heplify-server | github.com/negbie/sipparser.(*SipMsg).addHdr(0xc4232e4c00, 0xc4248500ce, 0x3)
heplify-server | /go/src/github.com/negbie/sipparser/parser.go:230 +0xe9b
heplify-server | github.com/negbie/sipparser.getHeaders(0xc4232e4c00, 0xbda168)
heplify-server | /go/src/github.com/negbie/sipparser/parser.go:616 +0x29d
heplify-server | github.com/negbie/sipparser.(*SipMsg).run(0xc4232e4c00)
heplify-server | /go/src/github.com/negbie/sipparser/parser.go:125 +0x37
heplify-server | github.com/negbie/sipparser.ParseMsg(0xc424850000, 0x27b, 0xc42001da00, 0x1, 0x1, 0xfc0500)
heplify-server | /go/src/github.com/negbie/sipparser/parser.go:644 +0x171
heplify-server | github.com/negbie/heplify-server.(*HEP).parseSIP(0xc422b98820, 0x8f01f9c2e, 0xfc0500)
heplify-server | /go/src/github.com/negbie/heplify-server/decoder.go:227 +0x60
heplify-server | github.com/negbie/heplify-server.(*HEP).parse(0xc422b98820, 0xc4243da000, 0x2e4, 0x2000, 0xc421a7ef80, 0x1)
heplify-server | /go/src/github.com/negbie/heplify-server/decoder.go:123 +0x574
heplify-server | github.com/negbie/heplify-server.DecodeHEP(0xc4243da000, 0x2e4, 0x2000, 0xc424053a00, 0x0, 0x0)
heplify-server | /go/src/github.com/negbie/heplify-server/decoder.go:85 +0x62
heplify-server | github.com/negbie/heplify-server/server.(*HEPInput).hepWorker(0xc421766230)
heplify-server | /go/src/github.com/negbie/heplify-server/server/server.go:239 +0x15e
heplify-server | created by github.com/negbie/heplify-server/server.(*HEPInput).Run
heplify-server | /go/src/github.com/negbie/heplify-server/server/server.go:59 +0x4c

Could you have a look and see if there is any reason why?

Thanks :)

Unhandled AlegID = "X-BroadWorks-Correlation-Info"

Hello,

I use an unreferenced header, "X-BroadWorks-Correlation-Info", and I would like to use it as the AlegID.

I saw in #31 that the header needs to be declared in the parser.

Would it be possible to add this one?

Thanks.

Docker image does not contain homer_statistic db

Hi, I tried to use the docker image with Homer5, and it looks like everything works fine. But SIPCapture Charts return "Could not resolve all promises". A basic check with curl returns the following:
{
"status": 500,
"error": "PDOException",
"message": "SQLSTATE[42000] [1049] Unknown database 'homer_statistic'"
}
And show databases looks like:
MariaDB [homer_data]> show databases;
+---------------------+
| Database |
+---------------------+
| homer_configuration |
| homer_data |
| information_schema |
| mysql |
| performance_schema |
+---------------------+
5 rows in set (0.001 sec)

MariaDB [homer_data]>

My question is: does heplify-server support the statistics DB or not?
Thanks.

SIP/TCP Capturing Issue

Hello,

I've been trying to mirror SIP TCP traffic via heplify through heplify-server and have some issues processing the TCP packets. I'm using heplify-server 0.96. For context, we're using OpenSIPS as a SIP proxy to convert from a WebSocket (which end-user clients are using via a load balancer) to SIP over UDP. It seems that the TCP traffic is being mirrored via heplify, but it is not being inserted into HOMER properly by heplify-server. I've attached a few items from my debugging and would appreciate some help:

webrtc_capture.zip

  • Interface capture at OpenSIPS showing both TCP/UDP traffic (interface.pcap) (this is then mirrored over TLS from heplify to heplify-server)
  • Resulting HOMER PCAP (includes messages from other parts of our infrastructure) showing that the TCP SIP messages are missing
  • The heplify-server.log showing warnings with malformed packets
  • The configuration of heplify-server

Please let me know if there's anything else I can attach to help with debugging.

Setting up Heplify to read from pcap files

Hello,

I'm trying to get Heplify to read from pcap files, so far I'm able to get the standalone server to run and connect to a local host MySQL DB.

Can anyone point me in the right direction? Where do I configure Heplify to read from pcap files?

Also, how do I access the web interface once the server is setup correctly?

Thank you.
//M

Support for SIP RTCP-XR PUBLISH messages

This is a "copy/paste" of heplify issue 44, better suited here as a reminder :)

===

Hi,

The readme for heplify says "Heplify is able to send SIP, correlated RTCP, RTCPXR, DNS, Logs into homer." I'm wondering how it goes about doing this. I'm guessing that while sniffing it "detects" an RTCP-XR PUBLISH from a UA, does its thing, and sends the stats over to homer/kamailio or homer/heplify-server.

If my understanding above is correct, then I think I may have come across an RTCP-XR PUBLISH message that heplify is not handling. What I mean is, I can see the PUBLISH in Homer's data/web UI, but when I look at the call that the RTCP-XR PUBLISH is associated with, I don't see any RTCP-XR stats on the QoS reports tab of the call.

I have a pcap of the publish, but it contains some sensitive info, so I cannot upload it to github.

Any thoughts?

===

PS: When you get around to implementing this, let me know and I could generate some pcaps for you to assist in testing.

Kubernetes Helm Chart

I am loving all your most recent work, guys. Way to go. We have heplify-server running in Kubernetes now. Would you be interested in us creating a Helm chart/package for Kubernetes deployments? It would take some work to make it flexible enough for general deployments, but we can get there.

Multiple ES IP addresses

I (now) have a 3-node Elasticsearch cluster. Will Heplify-Server support outputting to all 3 IPs (round-robin or some other load-balancing scheme)? If so, is it just a matter of a comma-separated list of ES IPs in the config file, or do I need to specify an FQDN rather than an IP?

export to pcap

PCAP export doesn't work in the Docker version. Running `file` on the exported capture identifies it as plain data rather than a pcap:
file HOMER5-91.218.111.140-79088697999-8_3_2018\ 15_13_12.pcap
HOMER5-91.218.111.140-79088697999-8_3_2018 15_13_12.pcap: data

Multiple captagent

Hello. Is there a way to target heplify-server metrics for each captagent instance, given that there is more than one captagent?
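heplify-server's Prometheus output can be scoped per capture agent via the PromTargetIP/PromTargetName pair. If I read the config correctly, both take comma-separated lists that must be of equal length (the "expose metrics with no or unbalanced targets" log message hints at this pairing). Something along these lines, where the addresses and names are placeholders for your own agents:

```toml
PromTargetIP   = "10.0.0.1,10.0.0.2"
PromTargetName = "captagent_a,captagent_b"
```

Worth verifying against the config reference of the heplify-server version you run.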

Error 1406: Data too long for column 'msg'

I have set up heplify-server to send data to a separate Homer server. It works for the most part, but some calls are missing when searching SIP records.

E.g. a call occurred at 13:08 from extension 140, and after exporting a pcap from Homer with the time range 09:00-14:00, the call is missing. However, other calls and information exist for that extension before and after 13:08, so it's partially working.

This is pretty consistent, and there are quite a lot of missing calls.

When looking into the heplify-server logs, there are lots of error messages saying "Data too long for column":

2018-06-07T14:07:54+10:00 ERR  Error 1406: Data too long for column 'msg' at row 3
2018-06-07T14:07:56+10:00 ERR  Error 1406: Data too long for column 'msg' at row 56
2018-06-07T14:07:59+10:00 ERR  Error 1406: Data too long for column 'msg' at row 39
2018-06-07T14:08:08+10:00 ERR  Error 1406: Data too long for column 'msg' at row 9
2018-06-07T14:08:08+10:00 ERR  Error 1406: Data too long for column 'msg' at row 47
2018-06-07T14:08:08+10:00 ERR  Error 1406: Data too long for column 'msg' at row 25
2018-06-07T14:08:10+10:00 ERR  Error 1406: Data too long for column 'msg' at row 79
2018-06-07T14:08:12+10:00 ERR  Error 1406: Data too long for column 'msg' at row 1
2018-06-07T14:08:12+10:00 ERR  Error 1406: Data too long for column 'msg' at row 10
2018-06-07T14:08:12+10:00 ERR  Error 1406: Data too long for column 'msg' at row 95
2018-06-07T14:08:12+10:00 ERR  Error 1406: Data too long for column 'msg' at row 26
2018-06-07T14:08:14+10:00 ERR  Error 1406: Data too long for column 'msg' at row 4
2018-06-07T14:08:14+10:00 ERR  Error 1406: Data too long for column 'msg' at row 117
2018-06-07T14:08:17+10:00 ERR  Error 1406: Data too long for column 'msg' at row 21
2018-06-07T14:08:17+10:00 ERR  Error 1406: Data too long for column 'msg' at row 21
2018-06-07T14:08:17+10:00 ERR  Error 1406: Data too long for column 'msg' at row 5
2018-06-07T14:08:20+10:00 ERR  Error 1406: Data too long for column 'msg' at row 18
2018-06-07T14:08:20+10:00 ERR  Error 1406: Data too long for column 'msg' at row 6
2018-06-07T14:08:26+10:00 ERR  Error 1406: Data too long for column 'msg' at row 2
2018-06-07T14:08:26+10:00 ERR  Error 1406: Data too long for column 'msg' at row 13
2018-06-07T14:08:26+10:00 ERR  Error 1406: Data too long for column 'msg' at row 16
2018-06-07T14:08:29+10:00 ERR  Error 1406: Data too long for column 'msg' at row 20
2018-06-07T14:08:29+10:00 ERR  Error 1406: Data too long for column 'msg' at row 13
2018-06-07T14:08:29+10:00 ERR  Error 1406: Data too long for column 'msg' at row 9
2018-06-07T14:08:35+10:00 ERR  Error 1406: Data too long for column 'msg' at row 2
2018-06-07T14:08:44+10:00 ERR  Error 1406: Data too long for column 'msg' at row 53
2018-06-07T14:09:11+10:00 ERR  Error 1406: Data too long for column 'msg' at row 96
2018-06-07T14:09:17+10:00 ERR  Error 1406: Data too long for column 'msg' at row 7
2018-06-07T14:09:35+10:00 ERR  Error 1406: Data too long for column 'msg' at row 36
2018-06-07T14:09:47+10:00 ERR  Error 1406: Data too long for column 'msg' at row 1
2018-06-07T14:09:53+10:00 ERR  Error 1406: Data too long for column 'msg' at row 22
2018-06-07T14:10:08+10:00 ERR  Error 1406: Data too long for column 'msg' at row 11
2018-06-07T14:10:11+10:00 ERR  Error 1406: Data too long for column 'msg' at row 100
2018-06-07T14:10:14+10:00 ERR  Error 1406: Data too long for column 'msg' at row 5
2018-06-07T14:10:14+10:00 ERR  Error 1406: Data too long for column 'msg' at row 13
2018-06-07T14:10:17+10:00 ERR  Error 1406: Data too long for column 'msg' at row 38
2018-06-07T14:10:20+10:00 ERR  Error 1406: Data too long for column 'msg' at row 8
2018-06-07T14:10:20+10:00 ERR  Error 1406: Data too long for column 'msg' at row 28
2018-06-07T14:10:20+10:00 ERR  Error 1406: Data too long for column 'msg' at row 1
2018-06-07T14:10:26+10:00 ERR  Error 1406: Data too long for column 'msg' at row 12
2018-06-07T14:10:26+10:00 ERR  Error 1406: Data too long for column 'msg' at row 19
2018-06-07T14:10:29+10:00 ERR  Error 1406: Data too long for column 'msg' at row 13
2018-06-07T14:10:29+10:00 ERR  Error 1406: Data too long for column 'msg' at row 10
2018-06-07T14:10:35+10:00 ERR  Error 1406: Data too long for column 'msg' at row 6
2018-06-07T14:10:41+10:00 ERR  Error 1406: Data too long for column 'msg' at row 4
2018-06-07T14:10:47+10:00 ERR  Error 1406: Data too long for column 'msg' at row 12
2018-06-07T14:10:47+10:00 ERR  Error 1406: Data too long for column 'msg' at row 15
2018-06-07T14:10:50+10:00 ERR  Error 1406: Data too long for column 'msg' at row 8
2018-06-07T14:10:50+10:00 ERR  Error 1406: Data too long for column 'msg' at row 1
2018-06-07T14:10:53+10:00 ERR  Error 1406: Data too long for column 'msg' at row 17
2018-06-07T14:11:08+10:00 ERR  Error 1406: Data too long for column 'msg' at row 5
2018-06-07T14:11:11+10:00 ERR  Error 1406: Data too long for column 'msg' at row 92
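The error comes from MySQL rejecting rows whose msg value exceeds the column width, which can abort the surrounding bulk insert and drop other calls in the same batch. One DB-side fix is to widen the msg column with an ALTER TABLE; an application-side workaround is to truncate oversized payloads before insert. A minimal sketch of such a helper in Go (truncateMsg is a hypothetical helper for illustration, not heplify-server's actual code; pass whatever your column width is as max):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// truncateMsg shortens s to at most max bytes without splitting a
// multi-byte UTF-8 sequence, so the result still stores cleanly.
// Hypothetical helper, not part of heplify-server.
func truncateMsg(s string, max int) string {
	if len(s) <= max {
		return s
	}
	s = s[:max]
	// back up over a partial rune at the cut point
	for len(s) > 0 && !utf8.ValidString(s) {
		s = s[:len(s)-1]
	}
	return s
}

func main() {
	fmt.Println(truncateMsg("hello world", 5)) // hello
}
```

Note that truncating loses data, so widening the column is the cleaner fix if the lost SIP bodies matter to you.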

PCAP timestamp not sent to DB

Hi,

I've successfully got heplify to read from pcap file, and send it over to the heplify-server.

I am testing with pcap files from 2018-07-07 (yesterday), but the packets enter the DB with the current timestamp, as if they were captured just now rather than at their original capture time.

Is there a way to read from pcap and preserve the packets' original timestamps?

Thanks
//M

Best way to set up heplify-server

Hello,

I am trying to run this setup for testing:
heplify-server, Postgres and Prometheus running on a Kubernetes cluster.
My Kamailio uses TLS. I tried to run heplify on it, but got errors, probably because of TLS.

the error:

2018/12/07 20:32:55.708021 decoder.go:125: WARN parseStartLine err: received err while parsing start line: parseStartLineRequest err: request line did not split on LWS correctly
"\x01\x10\x02\x16\x13Żh?-\n\tSIP/2.0 200 OK\r\nVia: SIP/2.0/TLS X.X.X.X:5061;branch=z9hG4bKc87c.c11cd6c26611a5c17315f0be37b1142d.0;i=c76e;rport=48030\r\nVia: SIP/2.0/TLS X.X.X.X:48267;received=X.X.X.X;rport=48267;branch=z9hG4bKPjvE3QEkx5D9ogZ3q-UattgRrvXzHF4ODl;alias\r\nFrom: <sip:testcaller.b(domain.com)@X.X.X.X>;tag=DJLSeQEyOMrRZkGrB.-O0vfboRjVbIID\r\nTo: <sip:[email protected]>;tag=BX4r2N2Bpv6tH\r\nCall-ID: SC-B-0-jSo8yu.aM3fgzglaMKsJm12WUnFh2fi3\r\nCSeq: 9771 BYE\r\nUser-Agent: 2600hz\r\nAllow: INVITE, ACK, BYE, CANCEL, OPTIONS, MESSAGE, INFO, UPDATE, REGISTER, REFER, NOTIFY, PUBLISH, SUBSCRIBE\r\nSupported: timer, path, replaces\r\nContent-Length: 0\r\n\r\n"
nodeID: 2002, protoType: 1, version: 2, protocol: 17, length: 666, flow: 10.240.0.9:5060->10.240.0.24:9060

So do I need to configure my Kamailio to send traffic directly to heplify-server using siptrace?
I tried that, but then got this error:

2018/12/07 20:45:21.987002 decoder.go:103: WARN malformed packet with length 632 which is neither hep nor protobuf encapsulated
2018/12/07 20:45:24.049094 decoder.go:103: WARN malformed packet with length 670 which is neither hep nor protobuf encapsulated
2018/12/07 20:45:24.060466 decoder.go:103: WARN malformed packet with length 798 which is neither hep nor protobuf encapsulated

What is the suggested setup here?

Thanks

Setting up the UI on heplify-server

Hey negbie,

I set up a heplify-server with Docker and connected my client, and it all works fine. I'm now looking for the URL to access the Homer UI, although I can't find it.

I know that nginx, PHP, and Apache are running, but the directory /var/www/html does not exist.

Maybe it's not set up.

The server is also producing the following:

adminer_1         | [Thu Jul 12 22:43:58 2018] ::ffff:OMITTED:40695 [200]: /
adminer_1         | [Thu Jul 12 22:45:16 2018] ::ffff:OMITTED:7299 [200]: /
alertmanager      | level=error ts=2018-07-12T22:45:29.85267626Z caller=notify.go:332 component=dispatcher msg="Error on notify" err="dial tcp 127.0.0.1:25: connect: connection refused"
alertmanager      | level=error ts=2018-07-12T22:45:29.852946163Z caller=dispatch.go:280 component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="dial tcp 127.0.0.1:25: connect: connection refused"

Discard option

Hi again,

Is there a way to discard by CSeq method (like heplify's -dim) for packets collected by captagent or any other HEP tool?

thx

WARN malformed packet which is neither hep nor protobuf encap

I am attempting to transition from Kamailio to heplify-server, mainly due to this issue: Errors with OpenSIPs 2.3 (agent) to Kamailio 5.1.4 (node) using HEPv3 #320.

We have since reverted our changes and are using HEPv2, which is working correctly between OpenSIPs 2.3 (agent) and Kamailio 5.1.2 (node).

Continuing to use OpenSIPs as the SIP server and agent, when I disable Kamailio and enable heplify-server as the capture node, I am receiving the following errors:

2018-08-30T16:09:12Z WARN malformed packet with length 481 which is neither hep nor protobuf encapsulated
2018-08-30T16:09:22Z WARN malformed packet with length 475 which is neither hep nor protobuf encapsulated
2018-08-30T16:09:22Z WARN malformed packet with length 475 which is neither hep nor protobuf encapsulated
2018-08-30T16:09:32Z WARN malformed packet with length 476 which is neither hep nor protobuf encapsulated
2018-08-30T16:09:32Z WARN malformed packet with length 475 which is neither hep nor protobuf encapsulated
2018-08-30T16:09:36Z WARN malformed packet with length 921 which is neither hep nor protobuf encapsulated
2018-08-30T16:09:36Z WARN malformed packet with length 675 which is neither hep nor protobuf encapsulated
2018-08-30T16:09:36Z WARN malformed packet with length 917 which is neither hep nor protobuf encapsulated
2018-08-30T16:09:36Z WARN malformed packet with length 671 which is neither hep nor protobuf encapsulated

My heplify-server.toml looks like this:
HEPAddr = "0.0.0.0:9060"
ESAddr = ""
ESDiscovery = false
MQDriver = ""
MQAddr = ""
MQTopic = ""
PromAddr = ""
PromTargetIP = ""
PromTargetName = ""
HoraclifixStats = false
RTPAgentStats = false
DBShema = "homer5"
DBDriver = "mysql"
DBAddr = "xxxxxxx:3306"
DBUser = "zzzz"
DBPass = "xxxxxxx"
DBDataTable = "homer_data"
DBConfTable = "homer_configuration"
DBTableSpace = ""
DBBulk = 200
DBTimer = 2
DBRotate = false
DBPartLog = "2h"
DBPartSip = "1h"
DBPartQos = "12h"
DBDropDays = 0
DBDropOnStart = false
Dedup = false
DiscardMethod = []
AlegIDs = []
LogDbg = ""
LogLvl = "debug"
LogStd = false
Version = false

And I start heplify-server by using this:
./heplify-server -config /etc/heplify-server.toml &

I can provide any other information that you might need, or if you want a pcap I can provide that.
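For context: heplify-server speaks HEPv3, and a HEPv3 packet always begins with the 4-byte ASCII magic "HEP3", while HEPv1/v2 packets start with a bare version byte and have no such magic, which is exactly what produces the "neither hep nor protobuf encapsulated" warning. A quick way to check what an agent is actually sending is to look at the first bytes of the payload. A minimal sketch (the sample byte strings below are illustrative, not real captures):

```go
package main

import (
	"bytes"
	"fmt"
)

// isHEP3 reports whether a payload looks like a HEPv3 packet,
// which always begins with the ASCII magic "HEP3".
func isHEP3(payload []byte) bool {
	return len(payload) >= 4 && bytes.HasPrefix(payload, []byte("HEP3"))
}

func main() {
	hep3 := []byte("HEP3\x00\x10rest-of-packet")  // HEPv3-style payload (illustrative)
	hep2 := []byte{0x02, 0x10, 0x02, 0x11}        // HEPv2 starts with a version byte, no magic
	fmt.Println(isHEP3(hep3), isHEP3(hep2))       // true false
}
```

Since your OpenSIPS agent is configured for HEPv2, switching it to HEPv3 should make heplify-server accept the packets.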

No Elasticsearch node available

Hey!

I set up heplify-server on Debian 8.10, along with MySQL and the Homer UI (everything is up and running), except when I tried the following to check if the server was receiving:

./heplify-server -dbaddr "" -logstd -loglvl debug
sudo ./heplify-mips64 -hs myip:9060 -nt tls

I get the following:

2018/07/13 01:05:28.559868 server.go:111: INFO start heplify-server with config.HeplifyServer{HEPAddr:"0.0.0.0:9060", ESAddr:"myIP:9200", MQDriver:"", MQAddr:"", MQTopic:"", PromAddr:"0.0.0.0:9069", PromTargetIP:"", PromTargetName:"", HoraclifixStats:false, RTPAgentStats:false, DBShema:"homer5", DBDriver:"mysql", DBAddr:"", DBUser:"homer", DBPass:"mypw", DBDataTable:"homer_data", DBConfTable:"homer_configuration", DBTableSpace:"", DBBulk:200, DBTimer:2, DBRotate:true, DBPartLog:"2h", DBPartSip:"1h", DBPartQos:"12h", DBDropDays:0, DBDropOnStart:false, Dedup:false, DiscardMethod:[]string{}, AlegIDs:[]string{}, LogDbg:"", LogLvl:"debug", LogStd:true, Config:"./heplify-server.toml", Version:false}
2018/07/13 01:05:28.560235 prometheus.go:55: INFO expose metrics with no or unbalanced targets
2018/07/13 01:05:33.573897 elasticsearch.go:32: ERR health check timeout: no Elasticsearch node available

I'm not sure what is going on.

RURI User, From User and To User are not recorded if the format is a TEL URI

I compiled the latest heplify-server with "go install heplify-server.go" from the path heplify-server\cmd\heplify-server.

Then I ran the server with ./heplify-server -loglvl debug.
I can see the data going into the MySQL server. I noticed that calls with a TEL URI are not stored in MySQL properly:

  • RURI User, From User and To User are not stored (the fields are empty).
  • The call itself is stored, as I can find the entry in MySQL (with all the information in the msg column).

If the call uses a SIP URI, the RURI User, From User and To User are stored correctly.
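For what it's worth, a TEL URI carries the number directly after the scheme (tel:+4930123456) rather than in user@host form, so a parser that only splits on '@' comes up empty for the RURI/From/To user fields. A minimal sketch of scheme-aware user extraction (uriUser is a hypothetical helper for illustration, not heplify-server's actual parser):

```go
package main

import (
	"fmt"
	"strings"
)

// uriUser extracts the user part from a SIP or TEL URI:
// sip:user@host -> user; tel:+number;params -> +number.
// Hypothetical helper for illustration only.
func uriUser(uri string) string {
	switch {
	case strings.HasPrefix(uri, "sip:"), strings.HasPrefix(uri, "sips:"):
		rest := uri[strings.Index(uri, ":")+1:]
		if at := strings.Index(rest, "@"); at != -1 {
			return rest[:at]
		}
		return ""
	case strings.HasPrefix(uri, "tel:"):
		num := uri[len("tel:"):]
		// strip URI parameters such as ;phone-context=...
		if semi := strings.Index(num, ";"); semi != -1 {
			num = num[:semi]
		}
		return num
	}
	return ""
}

func main() {
	fmt.Println(uriUser("sip:alice@example.com"))                     // alice
	fmt.Println(uriUser("tel:+4930123456;phone-context=example.com")) // +4930123456
}
```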

heplify-server stopped executing unexpectedly

Hi,

I started heplify-server last Friday with:
./heplify-server&

Today I saw this:

panic: runtime error: slice bounds out of range

goroutine 40 [running]:
github.com/negbie/heplify-server.(*HEP).parseHEP(0xc4226a5110, 0xc422d3e000, 0x20f, 0x2000, 0xa34680, 0xc4200727e0)
        /home/negbie/go/src/github.com/negbie/heplify-server/decoder.go:126 +0x98e
github.com/negbie/heplify-server.(*HEP).parse(0xc4226a5110, 0xc422d3e000, 0x20f, 0x2000, 0xc421a2a500, 0x90c101)
        /home/negbie/go/src/github.com/negbie/heplify-server/decoder.go:72 +0x5c1
github.com/negbie/heplify-server.DecodeHEP(0xc422d3e000, 0x20f, 0x2000, 0xc4219d1e97, 0x0, 0x0)
        /home/negbie/go/src/github.com/negbie/heplify-server/decoder.go:62 +0x62
github.com/negbie/heplify-server/server.(*HEPInput).hepWorker(0xc4217220c0, 0xc42008a540)
        /home/negbie/go/src/github.com/negbie/heplify-server/server/hep.go:236 +0x1e5
github.com/negbie/heplify-server/server.(*HEPInput).Run.func1(0xc4217220c0)
        /home/negbie/go/src/github.com/negbie/heplify-server/server/hep.go:80 +0x79
created by github.com/negbie/heplify-server/server.(*HEPInput).Run
        /home/negbie/go/src/github.com/negbie/heplify-server/server/hep.go:77 +0x204

Can anybody tell me what went wrong with my instance of heplify-server?

question - RTCPXR via PUBLISH support

I read the past issue on this topic, but I didn't follow what the expected behavior is. In my setup, when I use captagent to forward SIP PUBLISH messages with RTCP-XR stats to the Homer5 backend, the SIP PUBLISH messages are stored and displayed.

When I use captagent to send the same to the heplify-server Docker instance (port 9060), the PUBLISH does not show up in the Homer5 GUI of that Docker system. I also tried using heplify to send the PUBLISH to heplify-server, with no luck.

I just wanted to confirm whether this is the expected behavior and whether it would possibly be addressed as part of Homer7.

Could not find a valid Call-ID in packet

We are testing heplify-server as a replacement for the kamailio solution. The first impression is very good.

With some packets I run into the following problem:

2018/06/27 11:54:15.076996 decoder.go:101: WARN Could not find a valid Call-ID in packet
"REGISTER sip:172.16.214.177 SIP/2.0\r\n
Via: SIP/2.0/UDP 10.0.3.1:5063;branch=z9hG4bK8c39.0c1a4d08fd1ea16d4d3214184ed4fa19.0;i=2\r\n
Via: SIP/2.0/TLS 172.16.214.1:62306;received=172.16.214.1;rport=62306;branch=z9hG4bKPjKui9oP4ADG7HLaCQttEpK7AOVbAeaEPL;alias\r\n
Max-Forwards: 69\r\n
From: <sip:on570MfDxSGbUF5@server>;tag=q3vJVOgNCd60bu2kyfs61rbA76qyYk.L\r\n
To: <sip:on570MfDxSGbUF5@server>\r\n
P-cs-if: ifens33.cloudstack\n
Call-ID: VgCxwcYMBL2BJ45VZdAgs-ufeElh-C1z\r\n
CSeq: 9956 REGISTER\r\n
...

As you can see, the Call-ID is present in the packet, but somehow the parser fails...
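One thing that stands out in the dump: the P-cs-if header ends in a bare \n while every other header uses \r\n, so a parser that splits header lines strictly on \r\n would glue P-cs-if and Call-ID into one line and then fail to find the Call-ID. A line-ending-tolerant extraction could look like this (a sketch for illustration, not heplify-server's actual parser):

```go
package main

import (
	"fmt"
	"strings"
)

// callID extracts the Call-ID header value from a raw SIP message,
// accepting both \r\n and bare \n line endings. Illustrative sketch.
func callID(msg string) string {
	// normalize line endings, then scan header lines
	for _, line := range strings.Split(strings.ReplaceAll(msg, "\r\n", "\n"), "\n") {
		if name, val, ok := strings.Cut(line, ":"); ok {
			switch strings.ToLower(strings.TrimSpace(name)) {
			case "call-id", "i": // "i" is the compact form
				return strings.TrimSpace(val)
			}
		}
	}
	return ""
}

func main() {
	msg := "REGISTER sip:host SIP/2.0\r\n" +
		"P-cs-if: ifens33.cloudstack\n" + // bare \n, as in the dump above
		"Call-ID: VgCxwcYMBL2BJ45VZdAgs-ufeElh-C1z\r\n" +
		"CSeq: 9956 REGISTER\r\n\r\n"
	fmt.Println(callID(msg)) // finds the Call-ID despite the bare-\n line
}
```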

segfault report

I'd like to report issues of heplify-server crashing when running in production.
Here's the interesting part of the logs:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x48 pc=0x81770e]
goroutine 482 [running]:
github.com/negbie/heplify-server/database.formDataHeader(0xc42885f5f0, 0xc42731d400, 0x1a, 0x122, 0x1a, 0xc42731d400)
    /go/src/github.com/negbie/heplify-server/database/sqlhomer7.go:374 +0xde
github.com/negbie/heplify-server/database.(*SQLHomer7).insert(0xc421729230, 0xc421794780)
    /go/src/github.com/negbie/heplify-server/database/sqlhomer7.go:125 +0x76c
github.com/negbie/heplify-server/database.(*Database).Run.func1(0xc4217914a0)
    /go/src/github.com/negbie/heplify-server/database/database.go:59 +0x3c
created by github.com/negbie/heplify-server/database.(*Database).Run
    /go/src/github.com/negbie/heplify-server/database/database.go:58 +0x1ff

I'm also attaching the full log in case it helps.
heplify-server.log
Thanks
