
sflowtool's Introduction

sflowtool

Print binary sFlow feed to ASCII, or forward it to other collectors.

This tool receives sFlow data, and generates ASCII, JSON, CSV, tcpdump(1) or NetFlow(TM) output. Options are also available to forward the sFlow feed to additional collectors, or read packets from a capture file and forward as sFlow samples.

Please read the licence terms in ./COPYING.

Build from sources

./boot.sh
./configure
make
sudo make install

(Start from ./configure if you downloaded a released version.)

Usage examples

If sFlow is arriving on port 6343, you can pretty-print the data like this:

% ./sflowtool -p 6343

or get a line-by-line output like this:

% ./sflowtool -p 6343 -l

or a custom line-by-line output by listing fields like this:

% ./sflowtool -p 6343 -L localtime,srcIP,dstIP

or a JSON representation like this:

% ./sflowtool -p 6343 -J

In a typical application, this output would be parsed by an awk or perl script, perhaps to extract MAC->IP address-mappings or to extract a particular counter for trending. The usage might then look more like this:

% ./sflowtool -p 6343 | my_perl_script.pl > output
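
For example, a minimal awk sketch (using the srcMAC and srcIP field names shown in the Example Output section below) might print the MAC-to-IP mappings as they are seen:

% ./sflowtool -p 6343 | awk '$1=="srcMAC" {mac=$2} $1=="srcIP" && !seen[mac,$2]++ {print mac, $2}'

This relies only on the fact that every field is printed as a "tag value" pair on its own line, with srcMAC appearing shortly before srcIP in each flow sample.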

Alternatively, you can show packet decodes like this:

% ./sflowtool -p 6343 -t | tcpdump -r -
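
The decoded packets can also be written straight to a capture file for later analysis, for example (samples.pcap is just an illustrative filename):

% ./sflowtool -p 6343 -t | tcpdump -r - -w samples.pcap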

To forward Cisco NetFlow v5 records to UDP port 9991 on host collector.mysite.com, the options would be:

% ./sflowtool -p 6343 -c collector.mysite.com -d 9991

If you compiled with -DSPOOFSOURCE, then you have the option of "spoofing" the IP source address of the NetFlow packets to match the IP address(es) of the original sFlow agent(s):

% ./sflowtool -p 6343 -c collector.mysite.com -d 9991 -S

To replicate the input sFlow stream to several collectors, use the "-f host/port" option like this:

% ./sflowtool -p 6343 -f localhost/7777 -f localhost/7778 -f collector.mysite.com/6343

Example Output

An example of the pretty-printed output is shown below. Note that every field can be parsed as two space-separated tokens (tag and value). Newlines separate one field from the next. The first field in a datagram is always the "unixSecondsUTC" field, and the first field in a flow or counters sample is always the "sampleSequenceNo" field. In this example, the datagram held two flow-samples and two counters-samples. Comments have been added in <<>> brackets. These are not found in the output.

 unixSecondsUTC 991362247      <<this is always the first field of a new datagram>>
 datagramVersion 2
 agent 10.0.0.254              <<the sFlow agent>>
 sysUpTime 10391000
 packetSequenceNo 5219         <<the sequence number for datagrams from this agent>>
 samplesInPacket 4
 sampleSequenceNo 9466         <<the sequence number for the first sample - a flow sample from 0:0>>
 sourceId 0:0
 sampleType FLOWSAMPLE
 meanSkipCount 10
 samplePool 94660
 dropEvents 0
 inputPort 14
 outputPort 16
 packetDataTag INMPACKETTYPE_HEADER
 headerProtocol 1
 sampledPacketSize 1014
 headerLen 128
 headerBytes 00-50-04-29-1B-D9-00-D0-B7-23-B7-D8-08-00-45-00-03-E8-37-44-40-00-40-06-EB-C6-0A-00-00-01-0A-00-00-05-0D-F1-17-70-A2-4C-D2-AF-B1-F0-BF-01-80-18-7C-70-82-E0-00-00-01-01-08-0A-23-BC-42-93-01-A9-
 dstMAC 005004291bd9               <<a rudimentary decode, which assumes an ethernet packet format>>
 srcMAC 00d0b723b7d8
 srcIP 10.0.0.1
 dstIP 10.0.0.5
 IPProtocol 6
 TCPSrcPort 3569
 TCPDstPort 6000
 TCPFlags 24
 extendedType ROUTER               <<we have some layer3 forwarding information here too>>
 nextHop 129.250.28.33
 srcSubnetMask 24
 dstSubnetMask 24
 sampleSequenceNo 346              <<the next sample is a counters sample from 0:92>>
 sourceId 0:92
 sampleType COUNTERSSAMPLE
 statsSamplingInterval 20
 counterBlockVersion 1
 ifIndex 92
 networkType 53
 ifSpeed 0
 ifDirection 0
 ifStatus 0
 ifInOctets 18176791
 ifInUcastPkts 92270
 ifInMulticastPkts 0
 ifInBroadcastPkts 100
 ifInDiscards 0
 ifInErrors 0
 ifInUnknownProtos 0
 ifOutOctets 40077590
 ifOutUcastPkts 191170
 ifOutMulticastPkts 1684
 ifOutBroadcastPkts 674
 ifOutDiscards 0
 ifOutErrors 0
 ifPromiscuousMode 0
 sampleSequenceNo 9467             <<another flow sample from 0:0>>
 sourceId 0:0
 sampleType FLOWSAMPLE
 meanSkipCount 10
 samplePool 94670
 dropEvents 0
 inputPort 16
 outputPort 14
 packetDataTag INMPACKETTYPE_HEADER
 headerProtocol 1
 sampledPacketSize 66
 headerLen 66
 headerBytes 00-D0-B7-23-B7-D8-00-50-04-29-1B-D9-08-00-45-00-00-34-1E-D7-40-00-40-06-07-E8-0A-00-00-05-0A-00-00-01-17-70-0D-F1-B1-F0-BF-01-A2-4C-E3-A3-80-10-7C-70-E2-62-00-00-01-01-08-0A-01-A9-7F-A0-23-BC-
 dstMAC 00d0b723b7d8
 srcMAC 005004291bd9
 srcIP 10.0.0.5
 dstIP 10.0.0.1
 IPProtocol 6
 TCPSrcPort 6000
 TCPDstPort 3569
 TCPFlags 16
 extendedType ROUTER
 nextHop 129.250.28.33
 srcSubnetMask 24
 dstSubnetMask 24
 sampleSequenceNo 346             <<and another counters sample, this time from 0:93>>
 sourceId 0:93
 sampleType COUNTERSSAMPLE
 statsSamplingInterval 30
 counterBlockVersion 1
 ifIndex 93
 networkType 53
 ifSpeed 0
 ifDirection 0
 ifStatus 0
 ifInOctets 103959
 ifInUcastPkts 448
 ifInMulticastPkts 81
 ifInBroadcastPkts 93
 ifInDiscards 0
 ifInErrors 0
 ifInUnknownProtos 0
 ifOutOctets 196980
 ifOutUcastPkts 460
 ifOutMulticastPkts 599
 ifOutBroadcastPkts 153
 ifOutDiscards 0
 ifOutErrors 0
 ifPromiscuousMode 0

Other ExtendedTypes

If your sFlow agent is running BGP, you may also see GATEWAY extendedType sections like this:

extendedType GATEWAY my_as 65001 src_as 0 src_peer_as 0 dst_as_path_len 3 dst_as_path 65000-2828-4908
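
For example, grepping for the dst_as_path field is a quick way to watch just the BGP path information:

% ./sflowtool -p 6343 | grep dst_as_path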

The SWITCH, USER and URL extendedTypes may also appear. The SWITCH extendedType provides information on input and output VLANs and priorities. The USER extendedType provides information on the user-id that was allocated this IP address via a remote access session (e.g. RADIUS or TACACS). The URL extendedType indicates, for an HTTP flow, what the originally requested URL was. For more information, see the published sFlow documentation at http://www.sflow.org.

Line-by-line CSV output

If you run sflowtool using the "-l" option then only one row of output will be generated for each flow or counter sample. It will look something like this:

[root@server src]# ./sflowtool -l
CNTR,10.0.0.254,17,6,100000000,0,2147483648,175283006,136405187,2578019,297011,0,3,0,0,0,0,0,0,0,1
FLOW,10.0.0.254,0,0,00902773db08,001083265e00,0x0800,0,0,10.0.0.1,10.0.0.254,17,0x00,64,35690,161,0x00,143,125,80

The counter samples are indicated with the "CNTR" entry in the first column. The second column is the agent address. The remaining columns are the fields from the generic counters structure (see SFLIf_counters in sflow.h).

The flow samples are indicated with the "FLOW" entry in the first column. The second column is the agent address. The remaining columns (see the short example after this list) are:

inputPort
outputPort
src_MAC
dst_MAC
ethernet_type
in_vlan
out_vlan
src_IP
dst_IP
IP_protocol
ip_tos
ip_ttl
udp_src_port OR tcp_src_port OR icmp_type
udp_dst_port OR tcp_dst_port OR icmp_code
tcp_flags
packet_size
IP_size
sampling_rate
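
For example, since the source and destination IP addresses are the 10th and 11th comma-separated fields of a FLOW line, a minimal sketch that prints the IP conversations would be:

% ./sflowtool -p 6343 -l | awk -F, '$1=="FLOW" {print $10, $11}'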

To request a custom line output, use the -L option, like this:

% sflowtool -L localtime,srcIP,dstIP

grep-friendly output

The "-g" option causes sflowtool to include contextual information on every line of output. The fields are:

 agentIP
 agentSubId
 datasource_sequenceNo
 datasource_class
 datasource_index
 sampletype_tag
 elementtype_tag

For example, this makes it much easier to extract a particular counter for each agent, accumulate the deltas, and stream it to a time-series database.
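
As a rough sketch (assuming the contextual fields listed above are prepended to each line, so that the agent IP is the first token and the counter value is the last), the ifInOctets counter could be pulled out per agent like this:

% ./sflowtool -p 6343 -g | grep ifInOctets | awk '{print $1, $NF}'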

JSON output

The -J option prints human-readable JSON with a blank line between datagrams. To print more compact JSON with each datagram on one line, use -j instead.
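
The compact one-datagram-per-line form is convenient for piping into jq. For example, a minimal sketch (assuming the samples, sampleType, elements, srcIP and dstIP field names that appear in the JSON sample further down this page) could print source/destination pairs like this:

% ./sflowtool -p 6343 -j | jq -r '.samples[]? | select(.sampleType=="FLOWSAMPLE") | .elements[] | select(.srcIP != null) | "\(.srcIP) \(.dstIP)"'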



Neil McKee ([email protected]) InMon Corp. http://www.inmon.com


sflowtool's Issues

Piping sflowtool into Snort & Bro

I am getting a strange issue with sflowtool when piping it into the Snort and Bro IDSs. I am sampling at a 1:1 ratio for testing, and I am sampling up to 4096 bytes of the data (this covers all of my test packets).

Here is the command I am using for Snort. Some packets are getting ignored by Snort:
sflowtool -p 6343 -t | sudo snort -r - -pcap-no-filter -e -X -c custom.rules -A console

But this works great when I just mirror the packets without sflowtool:
sudo snort -i eth0 -e -X -c custom.rules -A console

I am in a controlled environment for testing this, so I looked at a tcpdump of eth0 compared to the following output:
sflowtool -p 6343 -t > output-sflow.pcap

The only major differences are the timestamps and sequence/ack numbers. Why does the last TCP PSH packet get ignored by the IDS software? This has to be an issue somewhere: the rules I am using alert on EVERY packet, and there are one or two fewer alerts when using sFlow at 1:1, even when sflowtool is outputting that packet.

Has anyone done this process before?

IPv4 and IPv6 socket bind error

Hi Team,

The sFlow v5 packets are being forwarded from source port 6343 to destination port 2055. (FYI: 2055 is the port number configured on my server.)
I am seeing the issues below when running the sflowtool commands to pretty-print the data.

Issue1# IPv6
./sflowtool -p 2055 -l
v6 socket() creation failed, Address family not supported by protocol

I saw a few options to run IPv4 alone, so I tried the scenario below.
Issue2# IPv4
./sflowtool -4 -p 2055 -l
v4 bind() failed, port = 2055 : Address already in use
unable to open UDP read socket

Does the tool run on the same port number (2055 in my case) that it is listening on?
If so, is there a way I can change the config so that the process can run on one port number and listen on another?

Could you please help me fix these two issues?

Thanks.

FreeBSD: fatal error: 'byteswap.h' file not found

Hello,

I'm updating sflowtool to 6.01 and getting a compile error on FreeBSD 12 and 13 (the 14-CURRENT build is fine):

===>  Building for sflowtool-6.01
--- all ---
/usr/bin/make  all-recursive
--- all-recursive ---
Making all in src
--- sflowtool.o ---
cc -DHAVE_CONFIG_H -I. -I..      -O2 -pipe  -fstack-protector-strong -fno-strict-aliasing -MT sflowtool.o -MD -MP -MF .deps/sflowtool.Tpo -c -o sflowtool.o sflowtool.c
sflowtool.c:32:10: fatal error: 'byteswap.h' file not found
#include <byteswap.h>
         ^~~~~~~~~~~~
1 error generated.
*** [sflowtool.o] Error code 1

make[3]: stopped in /wrkdirs/usr/ports/net/sflowtool/work/sflowtool-6.01/src
1 error

Old version 5.08 builds fine on 14, 13 and 12.
Any clues since both versions include #include <byteswap.h>?

Thanks

Output port number for the same interface

Hello,
First of all I want to say that, You are making good job with sflowtool, my company use that great tool as proxy to store sflow information in Logstash according to that guide (https://whiskeyalpharomeo.com/2015/06/13/logstash-and-sflow/)

Now more about the problem:
Recently I checked the logs and noticed an issue when a packet leaves through the same physical interface it arrived on.
For example, when a packet arrives on eth1 vlan1 and, after the routing process, is sent back out the same interface eth1 but on another vlan2, sflowtool gives me correct information about the incoming interface but the outgoing interface is 0.

Sample:

....
dstIP       10.9.9.13
dstPort         42684
in_vlan         199
inputPort       573
inputPortName       Trk12
out_vlan        210
outputPort      0
outPortName
srcIP       10.10.10.16
srcPort         6379
tcpFlags        0x10
#traffic3       148KB
....

In the sample you can see that a packet from IP 10.9.9.13, vlan 199, interface 573 (HP port numbering) was routed and sent to IP 10.10.10.16, vlan 210, and to the same interface 573 (after my investigation), but sflowtool gave me 0 for the outgoing interface.

If you want I can send you more samples; the problem occurs only for packets that go back out the same interface.

Help to analyze packets with sflowtool

Hello!

I am trying to analyze, with sflowtool, some raw packets that I found in the wild. The packets were found via Google in Wireshark pcap format, in a file named sflow.cap. sflow.cap decodes normally into a human-readable view in Wireshark, so I assume the packets in sflow.cap are not broken. But when I try to analyze these packets with sflowtool I get some errors.

Let's look at 1 packet in sflow.cap:
0000000500000001ac152311000000010000019f673dd71000000001000000020000006c000021250000040c0000000100000001000000580000040c000000060000000005f5e100000000010000000300000000018c2ccc00009b83000290160001f6730000000000000000000000000000000000533dc10000a0b700002187000008d7000000000000000000000000

In Wireshark it decodes fine (screenshot omitted).

I downloaded sflowtool and started it listening on port 6343:

test@srv-test-01:~$ sudo sflowtool -p 6343

Then I tried to send the raw packet in hex (shown below) and in binary (not shown) format:

echo -n "0000000500000001ac152311000000010000019f673dd71000000001000000020000006c000021250000040c0000000100000001000000580000040c000000060000000005f5e100000000010000000300000000018c2ccc00009b83000290160001f6730000000000000000000000000000000000533dc10000a0b700002187000008d7000000000000000000000000" >/dev/udp/localhost/6343

In sflowtool that packet generates an error:

startDatagram =================================
datagramSourceIP 0.0.0.1
datagramSize 288
unixSecondsUTC 1607333061
localtime 2020-12-07T04:24:21-0500
datagramVersion 808464432
unexpected datagram version number
 (source IP = 0.0.0.1)
30-30-30-30-<*>-30-30-30-35-30-30-30-30-30-30-30-31
61-63-31-35-32-33-31-31-30-30-30-30-30-30-30-31
30-30-30-30-30-31-39-66-36-37-33-64-64-37-31-30
30-30-30-30-30-30-30-31-30-30-30-30-30-30-30-32
30-30-30-30-30-30-36-63-30-30-30-30-32-31-32-35
30-30-30-30-30-34-30-63-30-30-30-30-30-30-30-31
30-30-30-30-30-30-30-31-30-30-30-30-30-30-35-38
30-30-30-30-30-34-30-63-30-30-30-30-30-30-30-36
30-30-30-30-30-30-30-30-30-35-66-35-65-31-30-30
30-30-30-30-30-30-30-31-30-30-30-30-30-30-30-33
30-30-30-30-30-30-30-30-30-31-38-63-32-63-63-63
30-30-30-30-39-62-38-33-30-30-30-32-39-30-31-36
30-30-30-31-66-36-37-33-30-30-30-30-30-30-30-30
30-30-30-30-30-30-30-30-30-30-30-30-30-30-30-30
30-30-30-30-30-30-30-30-30-30-35-33-33-64-63-31
30-30-30-30-61-30-62-37-30-30-30-30-32-31-38-37
30-30-30-30-30-38-64-37-30-30-30-30-30-30-30-30
30-30-30-30-30-30-30-30-30-30-30-30-30-30-30-30
caught exception: 2
endDatagram   =================================

The sflow.cap example I found:
sflow.cap.zip

  1. So what am I doing wrong?
  2. How can I test other sFlow packets that are in binary/hex format?

sflowtool Ipv6 error

When I execute the command sflowtool +4 -p 6343 -l to listen on IPv4 and IPv6, I get this error:
"v4 bind() failed, port = 6343 : Address already in use" and I can only see IPv4 addresses.

Timestamps output for -l switch

Currently, the -l switch outputs a nice CSV file containing the following columns:

['sampleType', 'agentAddress', 'inputPort', 'outputPort', 'src_MAC', 'dst_MAC', 'ethernet_type', 'in_vlan', 'out_vlan', 'src_IP', 'dst_IP', 'IP_protocol', 'ip_tos', 'ip_ttl', 'src_port', 'dst_port', 'tcp_flags', 'packet_size', 'IP_size', 'sampling_rate']

Unfortunately, it does not print the unixSecondsUTC field of the samples. I think knowing the timestamp of the packets would be very beneficial for further analysis.

Is an extension possible, maybe by another switch -l -t?

Greets.

Questions about sFlow Optical Interface Structures

I have some questions about the definition of the SFP counter sample in the sFlow Optical Interface Structures.
According to https://sflow.org/sflow_optics.txt, the SFP information is defined as follows:

struct lane {
  unsigned int index;               /* 1-based index of lane within module, 0=unknown */
  unsigned int tx_bias_current;     /* microamps */
  unsigned int tx_power;            /* microwatts */
  unsigned int tx_power_min;        /* microwatts */
  unsigned int tx_power_max;        /* microwatts */
  unsigned int tx_wavelength;       /* nanometers */
  unsigned int rx_power;            /* microwatts */
  unsigned int rx_power_min;        /* microwatts */
  unsigned int rx_power_max;        /* microwatts */
  unsigned int rx_wavelength;       /* nanometers */
}

/* Optical SFP / QSFP metrics */
/* opaque = counter_data; enterprise=0; format=10 */
struct sfp {
  unsigned int module_id;
  unsigned int module_num_lanes;        /* total number of lanes in module */
  unsigned int module_supply_voltage;   /* millivolts */
  int module_temperature;               /* thousandths of a degree Celsius */
  lane<> lanes;
}

However, according to the definition in sflowtool, there is an extra num_lanes field in the SFP structure. This confuses us, as it conflicts with the definition in https://sflow.org/sflow_optics.txt. Logically, module_total_lanes already defines the number of lanes, so is it meaningless to add num_lanes?
I want to confirm whether the definition of the optical SFP structure in sflowtool is wrong. I look forward to your reply. Thanks.

/* Optical SFP/QSFP metrics */
/* opaque = counter_data; enterprise = 0; format = 10 */

typedef struct {
  uint32_t lane_index;       /* index of lane in module - starting from 1 */
  uint32_t tx_bias_current;  /* microamps */
  uint32_t tx_power;         /* microwatts */
  uint32_t tx_power_min;     /* microwatts */
  uint32_t tx_power_max;     /* microwatts */
  uint32_t tx_wavelength;    /* nanometers */
  uint32_t rx_power;         /* microwatts */
  uint32_t rx_power_min;     /* microwatts */
  uint32_t rx_power_max;     /* microwatts */
  uint32_t rx_wavelength;    /* nanometers */
} SFLLane;

typedef struct {
  uint32_t module_id;
  uint32_t module_total_lanes;    /* total lanes in module */
  uint32_t module_supply_voltage; /* millivolts */
  int32_t module_temperature;     /* signed - in oC / 1000 */
  uint32_t num_lanes;             /* number of active lane structs to come */  /* <===== why add the definition of this field? */
  SFLLane *lanes;
} SFLSFP_counters;

Some possibly useful very old WIP commits of mine

After the changes I had submitted pre-github, I was working on a few other things intending to submit some PRs here about 2 years ago, then life really got in the way and that branch just stagnated locally. I rediscovered it now when searching for something else, and when I looked I realized I probably won't have the time to get back the mental context to know what to do with it, and in some cases even to know what I was intending. As more time passes that becomes even more likely.

So in an attempt to make at least some of that work useful, I rebased the WIP commits on latest master branch, resolving merge-conflicts as logically as could be quickly apparent, and am opening this issue for you to have a quick look at the latest 3 commits there when you have time (as you will still have the mental context to perhaps find some of it useful, with a little extra work). Of course I am not opening a PR because none of it is PR-worthy, or even finished. Whenever you've had a look and found any of it useful/useless/whatever, please feel free to close this issue.

It is noteworthy that a few of the commits since sflowtool moved to github have implemented what my original WIP included, so rebasing actually shrunk my changes considerably [sigh].

About skipping block on decodeIPV6 function.

I'd like to suggest a small correction to the skipping of IP6HeaderExtension headers in the decodeIPV6 function.
After this code block, the variable 'ptr' should point to the Layer 4 header. I don't understand what the '-2' means in 'skip = optionLen - 2;'. It leaves the pointer in the wrong place for the next header. I think the adjustment is not needed.

I've checked the "fragment" case. There, only the first fragment of a packet can usefully be passed to decodeIPLayer4() next in the sample path. With the correction, it is decoded as expected.

    /* skip over some common header extensions...
       http://searchnetworking.techtarget.com/originalContent/0,289142,sid7_gci870277,00.html */
    while(nextHeader == 0 ||  /* hop */
	  nextHeader == 43 || /* routing */
	  nextHeader == 44 || /* fragment */
	  /* nextHeader == 50 => encryption - don't bother coz we'll not be able to read any further */
	  nextHeader == 51 || /* auth */
	  nextHeader == 60) { /* destination options */
      uint32_t optionLen, skip;
      sf_logf_U32(sample, "IP6HeaderExtension", nextHeader);
      nextHeader = ptr[0];
      optionLen = 8 * (ptr[1] + 1);  /* second byte gives option len in 8-byte chunks, not counting first 8 */
      skip = optionLen - 2;
      ptr += skip;
      if(ptr > end) return; /* ran off the end of the header */
    }

[patch] src/sflowtool.c warning if built with clang

--- sflowtool.o ---
cc -DHAVE_CONFIG_H -I. -I..      -O2 -pipe  -fstack-protector -fno-strict-aliasing -MT sflowtool.o -MD -MP -MF .deps/sflowtool.Tpo -c -o sflowtool.o sflowtool.c
sflowtool.c:4316:69: warning: passing 'int *' to parameter of type 'socklen_t *' (aka 'unsigned int *') converts between pointers to integer types with different sign [-Wpointer-sign]
  cc = recvfrom(soc, buf, MAX_PKT_SIZ, 0, (struct sockaddr *)&peer, &alen);
                                                                    ^~~~~
/usr/include/sys/socket.h:609:96: note: passing argument to parameter here
ssize_t recvfrom(int, void *, size_t, int, struct sockaddr * __restrict, socklen_t * __restrict);
                                                                                               ^

Fix:

@@ -4308,7 +4308,8 @@ static int ipv4MappedAddress(SFLIPv6 *ip
 static void readPacket(int soc)
 {
   struct sockaddr_in6 peer;
-  int alen, cc;
+  int cc;
+  u_int alen;
 #define MAX_PKT_SIZ 65536
   char buf[MAX_PKT_SIZ];
   alen = sizeof(peer);

Extra steps needed to compile for Debian

To compile for Debian

$ uname -a
Linux xxxxxxxxxxxx 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1 (2015-05-24) x86_64 GNU/Linux

I had to do

aclocal
automake
automake --add-missing
autoconf
./configure
make
make install

Is this intended, and if so, could this be added to the README?

Strange ARP packet handling

Hi,

I am trying to convert saved sFlow samples to line-by-line CSV format using sflowtool -l -r file.pcap, and I get the lines described below for ARP packets.

49835 0.000000 192.168.2.3	 172.168.3.4	   TCP 1522 80 → 28449 [ACK] Seq=1 Ack=1 Win=118 Len=1448 TSval=123456 TSecr=123456
49836 0.000000 12:34:56:78:9a:bc 12:34:56:78:9a:be ARP 68   192.168.7.1 is at 12:34:56:78:9a:bc[Packet size limited during capture]

was converted to:

FLOW,192.168.5.1,12345,12346,abcdef123456,abcdef123457,0x0800,1,1,192.168.2.3,172.168.3.4,17,0x00,254,80,28449,0x10,1522,1500,1024
FLOW,192.168.5.1,12345,12346,12cdef123456,12cdef123457,0x0806,1,1,192.168.2.3,172.168.3.4,17,0x00,254,80,28449,0x10,68,46,1024

Why are the fields that are not used in the ARP packet filled with the contents of the previous packet?

Could you fix this behaviour?

Getting error to parse JSON

I used this script that I got online to parse the JSON output:
#!/usr/bin/env python

import subprocess
from json import loads

p = subprocess.Popen(
    ['/usr/local/bin/sflowtool','-j'],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT
)
lines = iter(p.stdout.readline,'')
for line in lines:
    print line
    datagram = loads(line)
    localtime = datagram["localtime"]
    samples = datagram["samples"]
    for sample in samples:
        sampleType = sample["sampleType"]
        elements = sample["elements"]
        if sampleType == "FLOWSAMPLE":
            for element in elements:
                tag = element["flowBlock_tag"]
                if tag == "0:1":
                    try:
                        src = element["srcIP"]
                        dst = element["dstIP"]
                        pktsize = element["sampledPacketSize"]
                        print "%s %s %s %s" % (localtime,src,dst,pktsize)
                    except KeyError:
                        pass

And I get this error:
Traceback (most recent call last):
  File "./flow.py", line 14, in <module>
    datagram = loads(line)
  File "/usr/lib64/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 380, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Expecting , delimiter: line 1 column 264 (char 263)

When I looked through the JSON, I saw that it is missing a ",":
"datagramSourceIP":"104.3.28.3","datagramSize":"716","unixSecondsUTC":"1563314035","localtime":"2019-07-16T21:53:55+0000","datagramVersion":"5","agentSubId":"0","agent":"15.11.20.2","packetSequenceNo":"1812216","sysUpTime":"1122908000","samplesInPacket":"4""samples":[{"sampleType_tag":"0:1"

Full Parameters for sflowtool -L from sflowtool -l

Good day everyone,

I would like to know the full parameter list for sflowtool -L that gives the same data output as sflowtool -l.

Currently I am using

#sflowtool -4 -p 55500 -L sampleType,agent,inputPort,outputPort,srcMAC,dstMAC,ethernet_type,in_vlan,out_vlan,srcIP,dstIP,IPProtocol,IPTOS,IPTTL,TCPSrcPort,TCPDstPort,TCPFlags,sampledPacketSize,IPSize,meanSkipCount

However, some of the data (e.g. ethernet_type, TCPFlags) is not in the same format as with sflowtool -l.

Release source tarball

Hello,

Could this project provide a release source tarball, e.g. sflowtool-5.0.7.tar.gz, to be included together with the Release Assets?

This has the advantage that the source code can be downloaded from a fixed tarball instead of an auto-generated one: https://github.com/sflow/sflowtool/archive/refs/tags/v5.07.tar.gz.

Since I'm maintaining the FreeBSD CopyQ port, instead of using the GitHub account name/project to get an auto-generated source tarball, I could simply use a command to fetch from a static URL:

from:

PORTNAME=       sflowtool
DISTVERSIONPREFIX=      v
DISTVERSION=    5.0.7

USE_GITHUB=     yes
GH_ACCOUNT=     sflow
GH_PROJECT=     sflowtool

distinfo:
SHA256 (sflow-sflowtool-v5.07_GH0.tar.gz) = ###
SIZE (sflow-sflowtool-v5.07_GH0.tar.gz) = ###

to:

PORTNAME=       sflowtool
DISTVERSION=    5.0.7
MASTER_SITES= https://github.com/sflow/sflowtool/releases/download/v${DISTVERSION}/

distinfo:
SHA256 (sflowtool-5.0.7.tar.gz) = ###
SIZE (sflowtool-5.0.7.tar.gz) = ###

IMO, all packaging systems that need to fetch and compile from source will benefit from this static tarball.

Example: https://github.com/jordansissel/xdotool/releases

Thanks,

Nuno Eduardo Teixeira
[email protected]

GRE/IPv6/TCP parsing seems to be incorrect

Hi,

First of all, thanks very much for such a wonderful tool; it has powered our OVS-based telemetry for a while now and we absolutely love it!

We've used it with GRE/Ethernet/IPv6/TCP encapsulation and it was just perfect. Recently we stripped the Ethernet headers to have GRE/IPv6/TCP (we had a request to reduce the header overhead a bit) and unfortunately it doesn't work anymore.

Sample JSON (IP addresses are obfuscated in the clear output):

{
  "sampleType_tag": "0:1",
  "sampleType": "FLOWSAMPLE",
  "sampleSequenceNo": "493",
  "sourceId": "2:1000",
  "meanSkipCount": "512",
  "samplePool": "252416",
  "dropEvents": "0",
  "inputPort": "0",
  "outputPort": "multiple 1",
  "elements": [
    {
      "flowBlock_tag": "0:1001",
      "extendedType": "SWITCH",
      "in_vlan": "0",
      "in_priority": "0",
      "out_vlan": "0",
      "out_priority": "0"
    },
    {
      "flowBlock_tag": "0:1",
      "flowSampleType": "HEADER",
      "headerProtocol": "1",
      "sampledPacketSize": "108",
      "strippedBytes": "4",
      "headerLen": "104",
      "headerBytes": "60-06-B7-F3-00-40-3A-40-FD-9D-3A-19-DE-FA-D6-A7-20-FE-71-DB-87-F2-E5-12-FD-9D-3A-19-DE-FA-B0-40-30-B9-F8-32-4E-5F-0C-8A-80-00-21-48-65-03-00-07-43-0B-D7-5D-00-00-00-00-E4-F5-03-00-00-00-00-00-10-11-12-13-14-15-16-17-18-19-1A-1B-1C-1D-1E-1F-20-21-22-23-24-25-26-27-28-29-2A-2B-2C-2D-2E-2F-30-31-32-33-34-35-36-37",
      "dstMAC": "6006b7f30040",
      "srcMAC": "3a40fd9d3a19"
    },
    {
      "flowBlock_tag": "0:1024",
      "flowSampleType": "tunnel_ipv4_in_IPV4",
      "tunnel_ipv4_in_sampledPacketSize": "0",
      "tunnel_ipv4_in_IPSize": "0",
      "tunnel_ipv4_in_srcIP": "15.145.185.129",
      "tunnel_ipv4_in_dstIP": "10.0.3.49",
      "tunnel_ipv4_in_IPProtocol": "47",
      "tunnel_ipv4_in_IPTOS": "0"
    }
  ]
}

So, it seems that sflowtool:

  • recognises proto 47 inside IPv4 and concludes that this is GRE - correct
  • then strips 4 bytes of GRE header - correct
  • then parses next two fields as Ethernet dst and src - not correct

Checking the same flow with ovs-dpctl dump-flows shows everything correctly.

Is it possible to have this situation handled?

Thanks again for the great tool; I'm ready to provide any input necessary.

Padding in sFlow packets

Hello guys!

I just discovered that some devices (Brocade ICX6610) can add padding to sFlow v5 frames.

[screenshot: sflow_padding_brocade]

When I run this pcap dump through the latest version of sflowtool from master, I receive:

flow_sample length error (expected 284, found 280)
caught exception: 3

Could you fix this behaviour?

Incorrectly ordered MAC addresses in version 5.01

Hello again,

I convert saved flow samples to CSV with sflowtool -l -r file.pcap and get the following result with version 3.41

FLOW,192.168.5.1,12345,12346,abcdef123456,abcdef123457,0x0800,1,1,192.168.2.3,172.168.3.4,17,0x00,254,80,28449,0x10,1522,1500,1024

and with version 5.01

FLOW,192.168.5.1,12345,12346,badcfe214365,badcfe214375,0x0800,1,1,192.168.2.3,172.168.3.4,17,0x00,254,80,28449,0x10,1522,1500,1024

but abcdef123456 and badcfe214365 are not the same MAC addresses.

Please can you fix this.

sflowtool Command Questions

Why is it that when I use "sudo sflowtool -p 6345 -t | stdbuf -oL sudo tcpdump -r - -Z root -G 20 -w %Y_%m%d_%H%M_%S.pcap" I get the right file, but when I use "sudo sflowtool -p 6345 -f 127.0.0.1/6343 -t | stdbuf -oL sudo tcpdump -r - -Z root -G 20 -w %Y_%m%d_%H%M_%S.pcap" the pcap file is empty?
I want to use sflowtool to save files regularly through tcpdump while also sending the feed to the collector. Is that possible, and if so, how? Thank you!

unknown address type

I use sflowtool and receive the following warnings repeatedly:
unknown address type = 3
caught exception: 1
unknown address type = 0
caught exception: 1
unknown address type = 0
caught exception: 1
unknown address type = 0
caught exception: 1
unknown address type = 3
caught exception: 1
unknown address type = 25972
caught exception: 1
unknown address type = 25972
caught exception: 1

SourceID lost in sFlow->Netflow conversion

When capturing sFlow to stdout I can see that the packet has a sourceId field that tells me the interface the packet was sent from. For Comware this field has a value of 0:4 for interface GigabitEthernet1/0/4. When I convert the flow to NetFlow v9, SourceId is always 0. In NetFlow v5 there is an EngineId field; I'm not sure if it serves the same purpose, but it is always 0 as well.

sflowtool doesn't process QinQ sampled packets appropriately

If the sampled traffic has QinQ (802.1ad and 802.1q), the fields inside the inner VLAN are missing from the sample. The fields listed below are the ones I see for regular IPv4 traffic whether a single VLAN is present or not:

IPSize
ip.tot_len
srcIP
dstIP
IPProtocol
IPTOS
IPTTL
TCPSrcPort
TCPDstPort
TCPFlags

Once you have an outer VLAN, an inner VLAN, and then the IP information, the sample no longer has these fields.
