simsong / tcpflow
TCP/IP packet demultiplexer. Download from:
Home Page: http://downloads.digitalcorpora.org/downloads/tcpflow/
License: GNU General Public License v3.0
If there are too many files in a directory, roll it. (Do we want to roll by time, or just by number?) Or just have a flag to store the packets in 0000/filename, 0001/filename, etc.
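The "0000/filename" idea above can be sketched as a simple counter-to-subdirectory mapping (a hypothetical helper, not existing tcpflow code; `per_dir` is an assumed tuning knob):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Sketch: map a monotonically increasing flow counter to a
// "0000/filename"-style path, rolling to a new subdirectory
// every per_dir files.
std::string rolled_path(unsigned flow_index, const std::string &filename,
                        unsigned per_dir = 10000) {
    char dir[8];
    std::snprintf(dir, sizeof(dir), "%04u", flow_index / per_dir);
    return std::string(dir) + "/" + filename;
}
```

Rolling by time would work the same way, with the directory name derived from the packet timestamp instead of the counter.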
Currently tcpflow doesn't work with packets that have the full 802.11 header. Handle them! You can also make them with tcpdump -I on a Mac.
Test with the Dartmouth Crawdad corpus
tcpflow should descend into the directory if a directory is provided and process all pcap files recursively.
The system should track the received data and only deem a connection closed when the FIN is received and all of the segments have been received (i.e., no holes in the byte_runs).
We need a pretty icon for this project.
Create flow files for UDP packets and other non-TCP packets.
tcpflow -i wlan0 port 80 works
tcpflow -AH -i wlan0 port 80 doesn't work
I read in another issue that AH option may be deprecated. If so, the docs need to be updated with the alternative.
It would be very useful to have an option to have a hex output of the flow. Sadly as of right now tcpflow is not usable for binary protocols.
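A hex rendering of flow bytes, as requested above, could look like this minimal sketch (`to_hex` is a hypothetical helper, not an existing tcpflow option):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Sketch: render flow bytes as a lowercase hex string so binary
// protocols survive console output intact.
std::string to_hex(const unsigned char *buf, size_t len) {
    std::string out;
    char tmp[3];
    for (size_t i = 0; i < len; ++i) {
        std::snprintf(tmp, sizeof(tmp), "%02x", buf[i]);
        out += tmp;
    }
    return out;
}
```

A fuller implementation would probably print offset-prefixed rows in the style of hexdump -C, but the core transformation is the same.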
I'm trying to feed the output files from tcpflow into another program. I use inotify to monitor for files that have been closed and pick them up that way.
However, for each packet tcpflow receives, it seems to open the output file, write the data, then close the file. This means that inotify picks up the file after the very first packet.
Is it possible to keep the files open until a FIN/RST has been received (or a timeout), then close them?
When running live, http objects won't get processed until the stream is closed. But the stream may not get closed for a long time! Ideally we would process all open streams in real-time, but that's not the design of tcpflow. Another approach is to add a timeout that scans all open connections and closes those that haven't had any activity in a while.
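The timeout approach described above could be sketched like this (the structure and names are hypothetical, not tcpflow's actual flow map):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Sketch: keep a last-activity timestamp per open flow and
// periodically collect the flows idle longer than timeout_sec,
// which can then be force-closed and post-processed.
struct open_flow {
    uint64_t last_activity;   // seconds since epoch of the last packet
};

std::vector<std::string> expired_flows(
        const std::map<std::string, open_flow> &flows,
        uint64_t now, uint64_t timeout_sec) {
    std::vector<std::string> expired;
    for (const auto &kv : flows) {
        if (now - kv.second.last_activity >= timeout_sec) {
            expired.push_back(kv.first);   // candidate for forced close
        }
    }
    return expired;
}
```

The scan itself could run from the packet loop every N packets or from a timer, whichever fits the existing event flow better.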
If bytes are inserted at the beginning of a capture, the recon_set seen needs to be updated to indicate that the bytes received are actually further into the transcript than was previously thought.
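The required adjustment amounts to shifting every recorded byte run deeper into the transcript by the number of newly discovered leading bytes; a minimal sketch, using a plain set of [start, end) ranges rather than tcpflow's actual recon_set type:

```cpp
#include <cassert>
#include <cstdint>
#include <set>
#include <utility>

// Sketch: if `shift` bytes turn out to precede everything captured
// so far, every recorded [start, end) byte run moves `shift` bytes
// further into the transcript.
typedef std::set<std::pair<uint64_t, uint64_t> > byte_runs;

byte_runs shift_runs(const byte_runs &runs, uint64_t shift) {
    byte_runs out;
    for (byte_runs::const_iterator it = runs.begin(); it != runs.end(); ++it) {
        out.insert(std::make_pair(it->first + shift, it->second + shift));
    }
    return out;
}
```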
I cloned the master and then ran 1) sh bootstrap 2) sh configure, and when I run 'make' I get this error:
tcpflow.cpp:29:1: error: ‘config_t’ in ‘class scanner_info’ does not name a type
I'm running 64bit Ubuntu 12.04 desktop
Also: The 1.4 zip had no problems.
Hi,
first I wanted to thank you for taking over the maintenance of tcpflow, it's great to see this is receiving updates again.
I've got a question regarding the output in report.xml (I'm on version 1.2.7). Each tcpflow element has starttime and endtime attributes. In my experience they always have the same value (example below). So, I'm wondering what the endtime attribute represents.
I would have guessed that this is supposed to be the last time a packet belonging to a flow was seen. But as the values are equal for multi-packet flows, I guess I'm wrong. Could you elaborate on the use of the endtime attribute?
Thanks,
Christian
<fileobject>
<filename>093.184.221.133.00080-010.006.115.023.59388</filename>
<filesize>8782</filesize>
<tcpflow startime='2012-08-06T22:07:21.891993Z' endtime='2012-08-06T22:07:21.891993Z' src_ipn='93.184.221.133' dst_ipn='10.6.115.23' packets='11' srcport='80' dstport='59388' family='2' out_of_order_count='0' />
</fileobject>
Actual time of last packet is about 0.4 seconds after startime.
I would like to monitor all user URL requests on the Android platform. Can I use tcpflow for this?
With the current latest commit (eeff2a6), ran bootstrap, configure, make.
Looks like a problem with a function not included in a plugin API.
~/src/tcpflow/src$ make
make  all-am
make[1]: Entering directory `/home/crusso/src/tcpflow/src'
depbase=`echo tcpflow.o | sed 's|[^/]*$|.deps/&|;s|\.o$||'`;\
g++ -DHAVE_CONFIG_H -I. -I../src/be13_api -pthread -I/usr/local/include -DUTC_OFFSET=-0400 -DGIT_COMMIT=tcpflow-1.4.0beta1-62-geeff2a6-dirty -g -pthread -g -O3 -Wall -MD -D_FORTIFY_SOURCE=2 -Wpointer-arith -Wshadow -Wwrite-strings -Wcast-align -Wredundant-decls -Wdisabled-optimization -Wfloat-equal -Wmultichar -Wmissing-noreturn -Woverloaded-virtual -Wsign-promo -funit-at-a-time -Wstrict-null-sentinel -Weffc++ -MT tcpflow.o -MD -MP -MF $depbase.Tpo -c -o tcpflow.o tcpflow.cpp &&\
mv -f $depbase.Tpo $depbase.Po
tcpflow.cpp: In function ‘int main(int, char**)’:
tcpflow.cpp:565:5: error: ‘scanners_process_commands’ is not a member of ‘be13::plugin’
make[1]: *** [tcpflow.o] Error 1
make[1]: Leaving directory `/home/crusso/src/tcpflow/src'
make: *** [all] Error 2
:~/src/tcpflow$ grep -r scanners_process_commands src/*
src/tcpflow.cpp: be13::plugin::scanners_process_commands();
I'll try to scrape the cobwebs out of the C++ corner of my mind and look at this further but wanted to note it for now.
Hello sir,
I am new to networking and new at my office; I am still in training there, but I have a project based on networking, so I need a modification to tcpflow. Currently tcpflow writes the information to two files, one for the client side and one for the server side, but I want all of the packet information stored in a single file so I can collect it easily for my project. Please help me.
Thank You
Subj.
Here's a patch to fix this:
diff --git a/src/datalink.cpp b/src/datalink.cpp
index 934f4c6..4b52fd5 100644
--- a/src/datalink.cpp
+++ b/src/datalink.cpp
@@ -183,6 +183,16 @@ void dl_ppp(u_char *user, const struct pcap_pkthdr *h, const u_char *p)
#ifdef DLT_LINUX_SLL
#define SLL_HDR_LEN 16
+
+#define SLL_ADDRLEN 8
+
+#ifndef ETHERTYPE_MPLS
+#define ETHERTYPE_MPLS 0x8847
+#endif
+#ifndef ETHERTYPE_MPLS_MULTI
+#define ETHERTYPE_MPLS_MULTI 0x8848
+#endif
+
void dl_linux_sll(u_char *user, const struct pcap_pkthdr *h, const u_char *p){
u_int caplen = h->caplen;
u_int length = h->len;
@@ -196,8 +206,25 @@ void dl_linux_sll(u_char *user, const struct pcap_pkthdr *h, const u_char *p){
DEBUG(6) ("warning: received incomplete Linux cooked frame");
return;
}
+
+    struct _sll_header {
+        u_int16_t sll_pkttype;           /* packet type */
+        u_int16_t sll_hatype;            /* link-layer address type */
+        u_int16_t sll_halen;             /* link-layer address length */
+        u_int8_t  sll_addr[SLL_ADDRLEN]; /* link-layer address */
+        u_int16_t sll_protocol;          /* protocol */
+    };
+
+    _sll_header *sllp = (_sll_header*)p;
+    int mpls_sz = 0;
+    if (ntohs(sllp->sll_protocol) == ETHERTYPE_MPLS) {
+        // unwind MPLS stack
+        do {
+            mpls_sz += 4;
+        } while ( ((*(p+SLL_HDR_LEN + mpls_sz - 2)) & 1) == 0 );
+    }
- packet_info pi(DLT_LINUX_SLL,h,p,tvshift(h->ts),p + SLL_HDR_LEN, caplen - SLL_HDR_LEN);
+ packet_info pi(DLT_LINUX_SLL,h,p,tvshift(h->ts),p + SLL_HDR_LEN + mpls_sz, caplen - SLL_HDR_LEN - mpls_sz);
process_packet_info(pi);
}
#endif
Sorry for not providing the most general solution ;) I just needed this one case.
Just sharing back my own patch.
// Yury Ershov
When tcpflow -AH sees an HTTP response, it produces a -HTTPBODY file containing the body of the response. However, when this connection is re-used for a subsequent HTTP request and response, the -HTTPBODY file contains the first response body concatenated with the subsequent HTTP response headers and bodies. -AH really only works properly for HTTP/1.1 with Connection: close.
The scanner should extract the Content-Length header and emit multiple HTTP bodies to separate files as appropriate.
I note that the scan_http implementation is rather simple; for example, it also does not support HTTP/1.1 chunked transfer encoding (which is a MUST for RFC compliance). In case full HTTP support is a long time coming, a mention of the current limitations in the documentation would be helpful too. tcpflow is useful for me even as-is, but knowing about possible issues ahead of time would have saved me troubleshooting.
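Splitting re-used connections on Content-Length might start with a header parse like this sketch (a hypothetical helper, not the real scan_http; it is case-sensitive and handles neither chunked encoding nor duplicate headers):

```cpp
#include <cassert>
#include <string>

// Sketch: given the header block of one response, return the declared
// Content-Length so the scanner knows where this body ends and where
// the next "HTTP/1.x ..." response begins. Returns -1 if absent.
long parse_content_length(const std::string &headers) {
    const std::string key = "Content-Length:";
    std::string::size_type pos = headers.find(key);
    if (pos == std::string::npos) return -1;
    pos += key.size();
    while (pos < headers.size() && headers[pos] == ' ') ++pos;
    return std::stol(headers.substr(pos));
}
```

With that length in hand, the scanner could close the current -HTTPBODY file after exactly that many body bytes and open -HTTPBODY-2 for the next response on the same connection.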
I'm using tcpflow on gentoo. Version 1.2.6 works as expected, but version 1.2.7 just exits without error message. The most information I can get is with the -v flag:
tcpflow[16368]: printing packets to console only
tcpflow[16368]: packet header dump suppressed
tcpflow[16368]: converting non-printable characters to '.'
tcpflow[16368]: tcpflow version 1.2.7
tcpflow[16368]: Open FDs at end of processing: 0
tcpflow[16368]: Flow map size at end of processing: 0
tcpflow[16368]: Total flows processed: 0
tcpflow[16368]: Total packets processed: 0
I also checked the latest git checkout, here the problem is the same with just a little bit different debugging output:
tcpflow[14985]: printing packets to console only
tcpflow[14985]: packet header dump suppressed
tcpflow[14985]: converting non-printable characters to '.'
tcpflow[14985]: tcpflow version 1.2.7
tcpflow[14985]: Open FDs at end of processing: 0
tcpflow[14985]: Flow map size at end of processing: 0
pcap_fake.cpp DEBUG: Total flows processed = 0
pcap_fake.cpp DEBUG: Total packets processed = 0
Thank you for maintaining this package!
Best regards.
Timestamps on HTTP parser generated files should match the original TCP connection
The libpcap URL (ftp://ftp.ee.lbl.gov/libpcap.tar.Z) in the INSTALL file is no longer valid.
Because an absolute offset into the file is calculated, this function won't handle flows larger than 4GiB.
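The wrap can be demonstrated directly: a 32-bit offset loses the high bits once a flow passes 2^32 bytes, while a 64-bit offset does not (illustrative functions, not tcpflow's actual code):

```cpp
#include <cassert>
#include <cstdint>

// A byte offset held in a 32-bit unsigned wraps modulo 2^32, so seeks
// computed from it land in the wrong place for flows over 4 GiB.
uint32_t offset32(uint64_t byte_offset) { return (uint32_t)byte_offset; }

// Widening the offset to uint64_t preserves the full position.
uint64_t offset64(uint64_t byte_offset) { return byte_offset; }
```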
I'm using the new tcpflow 1.2.4. It seems that when using the "-cr" option, it always outputs "write error to fwrite?" to stderr. Is this correct? This appears to be new code in tcpip.cpp that wasn't in tcpflow 1.1.1.
Here's an example:
tcpflow -T%A.%a-%B.%b -cr /tmp/172.16.116.244:60287_217.160.51.31:80-6.raw
172.016.116.244.60287-217.160.051.031.00080: GET / HTTP/1.1
User-Agent: curl/7.19.7 (i486-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
Host: testmyids.com
Accept: */*
write error to fwrite?
217.160.051.031.00080-172.016.116.244.60287: HTTP/1.1 200 OK
Date: Thu, 19 Apr 2012 12:58:02 GMT
Server: Apache
Last-Modified: Mon, 15 Jan 2007 23:11:55 GMT
ETag: "61c22f22-27-4271c5f1ac4c0"
Accept-Ranges: bytes
Content-Length: 39
Content-Type: text/html
uid=0(root) gid=0(root) groups=0(root)
write error to fwrite?
The Win32 build on the site does not currently work, it says that it was compiled without libpcap support, so it can't capture any packets. This makes the program almost completely useless in its present state.
I tried to quickly make a cygwin build, but it failed with compiler errors.
Ran commands exactly as indicated:
Be sure you have the necessary precursors:
Download the sources with git, run bootstrap.sh, configure and make:
git clone --recursive https://github.com/simsong/tcpflow.git
cd tcpflow
sh bootstrap.sh
All Yum installs went fine but when I ran #sh bootstrap.sh it ends abruptly with the following output:
src/Makefile.am:50: compiling `http-parser/http_parser.c' in subdir requires `AM_PROG_CC_C_O' in `configure.ac'
Server is CentOS 6
Here's a patch to improve test-time error reporting:
Before this patch the errors weren't being reported well, and the test suite was passing if openssl wasn't installed.
Hello,
I get the following error while running make:
Making all in src
make[1]: Entering directory `/home/dbxmkb/tcpflow1_3/tcpflow-master/src'
make  all-am
make[2]: Entering directory `/home/dbxmkb/tcpflow1_3/tcpflow-master/src'
g++ -DHAVE_CONFIG_H -I. -I../src/be13_api -pthread -I/usr/local/include -DUTC_OFFSET=+0100 -g -pthread -g -O3 -Wall -MD -D_FORTIFY_SOURCE=2 -Wpointer-arith -Wshadow -Wwrite-strings -Wcast-align -Wredundant-decls -Wdisabled-optimization -Wfloat-equal -Wmultichar -Wmissing-noreturn -Woverloaded-virtual -Wsign-promo -funit-at-a-time -Wstrict-null-sentinel -Weffc++ -MT time_histogram.o -MD -MP -MF .deps/time_histogram.Tpo -c -o time_histogram.o `test -f 'netviz/time_histogram.cpp' || echo './'`netviz/time_histogram.cpp
In file included from netviz/time_histogram.h:8:0,
                 from netviz/time_histogram.cpp:14:
./tcpflow.h:252:39: fatal error: be13_api/bulk_extractor_i.h: No such file or directory
compilation terminated.
make[2]: *** [time_histogram.o] Error 1
make[2]: Leaving directory `/home/dbxmkb/tcpflow1_3/tcpflow-master/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/dbxmkb/tcpflow1_3/tcpflow-master/src'
make: *** [all-recursive] Error 1
The main reason I'm compiling is that I want an adjustment to include microseconds in the timestamp.
Tried to compile latest version from github. Make generated the following message:
g++ -DHAVE_CONFIG_H -I. -I.. -I../src/be13_api -pthread -I/usr/local/include -DUTC_OFFSET=+0000 -g -pthread -g -O3 -Wall -MD -Wpointer-arith -Wshadow -Wwrite-strings -Wcast-align -Wredundant-decls -Wdisabled-optimization -Wfloat-equal -Wmultichar -Wmissing-noreturn -Woverloaded-virtual -Wsign-promo -funit-at-a-time -Wstrict-null-sentinel -Weffc++ -D_FORTIFY_SOURCE=2 -MT pcap_fake.o -MD -MP -MF .deps/pcap_fake.Tpo -c -o pcap_fake.o `test -f 'be13_api/pcap_fake.cpp' || echo './'`be13_api/pcap_fake.cpp
be13_api/pcap_fake.cpp:2:21: fatal error: tcpflow.h: No such file or directory
compilation terminated.
make[2]: *** [pcap_fake.o] Error 1
make[2]: Leaving directory `/usr/local/bulk_extractor/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/local/bulk_extractor'
make: *** [all] Error 2
I want to compile without tcpflow being installed.
I'm running tcpflow 1.2.7:
tcpflow -V
tcpflow 1.2.7
The manpage states that the -AH option will create a third file with HTTPBODY-GZIP appended if GZIP compression was present:
man tcpflow |grep -A7 AH
-AH Perform HTTP post-processing ("After" processing). If the output file is
208.111.153.175.00080-192.168.001.064.37314,
Then the post-processing will create the files:
208.111.153.175.00080-192.168.001.064.37314-HTTP
208.111.153.175.00080-192.168.001.064.37314-HTTPBODY
If the HTTPBODY was compressed with GZIP, you may get a third file as well:
208.111.153.175.00080-192.168.001.064.37314-HTTPBODY-GZIP
Additional information about these streams, such as their MD5 hash value, is also written to the DFXML file
If I process an entire day's worth of pcap using "tcpflow -AH", I don't get any GZIP files even though there was GZIP compression present:
for i in /nsm/sensor_data/MY_SENSOR/dailylogs/2012-10-03/snort.log*; do tcpflow -AH -r $i; done
ls *GZIP*
ls: cannot access *GZIP*: No such file or directory
ls *HTTPBODY* |wc -l
882
grep "Content-Encoding: gzip" *-HTTP |wc -l
205
What am I doing wrong?
Thanks!
We want to compile the source code without using the makefiles, but we are getting a lot of errors. How can we do this? Please let us know.
Sometimes the time on the capture system is wrong, so this option would allow the user to normalize +/- a time delta.
If the filename is %A%a/%A%a-%B%B, then automatically create the directory %A%a/ as necessary.
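Creating the intermediate directories could be sketched like this (a hypothetical helper; C++17's std::filesystem::create_directories does the same in one call):

```cpp
#include <cassert>
#include <cerrno>
#include <string>
#include <sys/stat.h>
#include <sys/types.h>

// Sketch: walk the output path and mkdir() each intermediate
// component, treating "already exists" as success.
bool make_parent_dirs(const std::string &path) {
    for (std::string::size_type i = 1; i < path.size(); ++i) {
        if (path[i] == '/') {
            std::string dir = path.substr(0, i);
            if (mkdir(dir.c_str(), 0777) != 0 && errno != EEXIST) return false;
        }
    }
    return true;
}
```

tcpflow would call this with the expanded filename (after the %A/%a substitutions) just before opening the flow file.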
The build fails with this:
...
mv -f .deps/md5.Tpo .deps/md5.Po
mv -f .deps/util.Tpo .deps/util.Po
tcpip.cpp: In member function 'void tcpip::close_file()':
tcpip.cpp:355:29: error: 'futimes' was not declared in this scope
mv -f .deps/datalink.Tpo .deps/datalink.Po
make[2]: *** [tcpip.o] Error 1
make[2]: *** Waiting for unfinished jobs....
mv -f .deps/flow.Tpo .deps/flow.Po
mv -f .deps/main.Tpo .deps/main.Po
mv -f .deps/xml.Tpo .deps/xml.Po
make[2]: Leaving directory `/home/ncopa/aports/main/tcpflow/src/tcpflow-1.2.4/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/ncopa/aports/main/tcpflow/src/tcpflow-1.2.4/src'
make: *** [all-recursive] Error 1
The problem is the use of futimes(), which is not implemented in uclibc because it is not defined in POSIX. It would be nice if futimens() were used instead.
http://pubs.opengroup.org/onlinepubs/9699919799/functions/utimes.html
Thanks!
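The suggested change might look like the following sketch (a hypothetical function, not the actual tcpip::close_file() code; the UTIME_NOW check is only a rough compile-time proxy for futimens availability):

```cpp
#include <cassert>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <unistd.h>

// Sketch: prefer POSIX.1-2008 futimens() (which uclibc provides) and
// keep futimes() only as a fallback. tv holds the last packet's timeval.
int set_flow_file_time(int fd, const struct timeval &tv) {
#if defined(UTIME_NOW)                  /* futimens is available */
    struct timespec ts[2];
    ts[0].tv_sec  = tv.tv_sec;
    ts[0].tv_nsec = tv.tv_usec * 1000;  /* microseconds -> nanoseconds */
    ts[1] = ts[0];
    return futimens(fd, ts);
#else
    struct timeval times[2] = { tv, tv };
    return futimes(fd, times);
#endif
}
```

A configure-time HAVE_FUTIMENS check would be the cleaner way to choose between the two branches.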
tcpflow leaves "chunked HTTP transfer encoding" headers and footers in the output file. This is quite annoying, as it leaves garbage data in the flow, and a file carving tool like foremost does not ignore it. If your binary splits into, for example, three chunks, the tcpflow output file contains three instances of the following pattern:
0D 0A SIZE_OF_CURRENT_CHUNK_IN_HEX 0D 0A
Initial CRLF is from end of previous chunk.
As an example, use http://forensicscontest.com/contest05/infected.pcap (from Forensic Contest Puzzle 5), generate the flow, and try to extract the binary file using foremost (dst port 1066). The final binary will not have the correct checksum. NetworkMiner does extract the original binary without any garbage.
I tried this using tcpflow from the Debian repository and the version compiled from GitHub (v1.3 - v1.4).
P.S. Some may say this is an issue with the file carving tool; however, tcpflow shouldn't leave unnecessary chunk data in the final flow.
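Stripping the chunk framing described above amounts to walking "hex-size CRLF data CRLF" records until the zero-size terminating chunk; a minimal sketch (a hypothetical helper, with no trailer or chunk-extension handling, assuming well-formed input):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Sketch: decode "Transfer-Encoding: chunked" framing. Each chunk is
// "<size-in-hex>\r\n<data>\r\n"; a zero-size chunk ends the body. The
// size lines and CRLFs are exactly the garbage the issue describes.
std::string dechunk(const std::string &in) {
    std::string out;
    std::string::size_type pos = 0;
    for (;;) {
        std::string::size_type eol = in.find("\r\n", pos);
        if (eol == std::string::npos) break;
        std::size_t len = std::stoul(in.substr(pos, eol - pos), nullptr, 16);
        if (len == 0) break;                 // terminating chunk
        out += in.substr(eol + 2, len);      // chunk payload only
        pos = eol + 2 + len + 2;             // skip payload + trailing CRLF
    }
    return out;
}
```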
I have a pcap with gzip encoding and am using tcpflow 1.3.0 with the -AH option and am getting the proper *-GZIP file output:
file gzip.pcap
gzip.pcap: tcpdump capture file (little-endian) - version 2.4 (Ethernet, capture length 65535)
./tcpflow -V
tcpflow 1.3.0
./tcpflow -AH -r gzip.pcap
ls *-GZIP |wc -l
1
However, when sending output to the console with -c, I see no difference when adding the -AH option. The content still shows the raw gzip data:
Content-Type: text/javascript; charset=UTF-8
Expires: Mon, 08 Oct 2012 10:14:31 GMT
Date: Mon, 08 Oct 2012 10:14:31 GMT
Cache-Control: private, max-age=0
Last-Modified: Sun, 07 Oct 2012 06:10:54 GMT
ETag: "0691f87f-51bb-4089-b8f3-7363c3de7d14"
Content-Encoding: gzip
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Content-Length: 258
Server: GSE
.z.].(Y.#..<..r..B.T^#Qa]_..z.T.....B.@(..Ml.....N...[.D...of.....y1.G~.............Y"tO.'.\....]....b."z.
[email protected]..!V..k1.'G......L......%...-lNY.Vr.!.M...t..yD.A|...b...3.*.J ]r..D.IC.b9.....!......&f...
./tcpflow -c -r gzip.pcap > test1
./tcpflow -AH -c -r gzip.pcap > test2
diff test1 test2
# NO OUTPUT = NO DIFF
Is the -AH option supposed to work with -c?
Hi,
Using tcpflow version 1.2.6, with libpcap 1.2.1, in a NAS which is running Linux 2.6.12, I'm unable to see any UDP traffic.
I'm guessing is a problem in my environment, but I just tested with tcpdump and that works.
Any ideas?
Hi,
When I run the current release of tcpflow on my AWS EC2 AMI, I get the following segfault error:
Mar 26 09:54:34 ip-10-212-82-162 kernel: [408079.567855] tcpflow[6412]: segfault at 0 ip (null) sp 00007fffdc42f5a8 error 14 in tcpflow[400000+7a000]
When I revert back to commit 25da19c, everything works fine.
git clone --recursive https://github.com/simsong/tcpflow.git
git reset --hard 25da19c
git submodule update
sh bootstrap.sh
./configure
make
make install
Is something broken?
Thanks,
Sun
Currently, tcpflow will take packets received after a FIN is received and put them in their own flow. That's the best that can be done, because those packets don't have an ISN associated with them, so you can't figure out where they would need to go. The solution is to cache all closed flows with the ISN and time of closing. If packets are received after the flow is closed, the cache should be checked, and if the packets match the data already present they can be silently discarded.
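The proposed cache could be sketched as follows (hypothetical structure and names, not tcpflow's API):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Sketch: remember each closed flow's ISN and close time, keyed by the
// flow's address string, so a packet arriving after the FIN can be
// mapped back to its offset in the already-written transcript instead
// of starting a bogus new flow.
struct closed_flow {
    uint32_t isn;        // initial sequence number of the closed flow
    uint64_t close_time; // when the FIN completed
};

typedef std::map<std::string, closed_flow> closed_cache;

// With the cached ISN, a late packet's sequence number gives its byte
// offset in the flow (unsigned subtraction handles sequence wrap).
uint32_t late_packet_offset(uint32_t seq, uint32_t cached_isn) {
    return seq - cached_isn;
}
```

With the offset known, the late packet's bytes can be compared against the flow file at that position and silently dropped when they match.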
I'm trying to install tcpflow on Debian 6, but configure always fails with the error: please install boost-devel. I have already installed libboost-all-dev.
I pulled out all the relevant XML code and got it to compile ;)
Here is my output plus some minor changes:
steve@steve-ws:~/t/tcpflow$ git diff
diff --git a/src/main.cpp b/src/main.cpp
index 18d1cac..b4e22bc 100644
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -34,6 +34,7 @@ sem_t *semlock = 0;
#endif
#include
+#include
#include <semaphore.h>
void print_usage()
@@ -86,7 +87,7 @@ void print_usage()
-static void dfxml_create(xml &xreport,const string &command_line)
+static void dfxml_create(xml &xreport,const std::string &command_line)
{
xreport.push("dfxml","xmloutputversion='1.0'");
xreport.push("metadata",
@@ -126,7 +127,7 @@ int main(int argc, char *argv[])
char *device = NULL;
const char *lockname = 0;
int need_usage = 0;
make[1]: Entering directory `/home/steve/t/tcpflow/src'
make  all-am
make[2]: Entering directory `/home/steve/t/tcpflow/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/steve/t/tcpflow/src'
I'm getting the following error dump when I try to make. Do you have any suggestions on how to resolve this?
Making install in src
make[1]: Entering directory `/usr/local/src/security/tcpflow/src'
g++ -DHAVE_CONFIG_H -I. -I../src/be13_api -pthread -I/usr/local/include -DUTC_OFFSET=-0600 -g -pthread -g -O3 -Wall -MD -D_FORTIFY_SOURCE=2 -Wpointer-arith -Wshadow -Wwrite-strings -Wcast-align -Wredundant-decls -Wdisabled-optimization -Wfloat-equal -Wmultichar -Wmissing-noreturn -Woverloaded-virtual -Wsign-promo -funit-at-a-time -Wstrict-null-sentinel -Weffc++ -MT tcpflow.o -MD -MP -MF .deps/tcpflow.Tpo -c -o tcpflow.o tcpflow.cpp
tcpflow.cpp: In function ‘void usage()’:
tcpflow.cpp:73: error: ‘PACKAGE’ was not declared in this scope
tcpflow.cpp:73: error: ‘VERSION’ was not declared in this scope
tcpflow.cpp: In function ‘int main(int, char**)’:
tcpflow.cpp:397: error: ‘PACKAGE’ was not declared in this scope
tcpflow.cpp:495: error: ‘PACKAGE’ was not declared in this scope
tcpflow.cpp:495: error: ‘VERSION’ was not declared in this scope
make[1]: *** [tcpflow.o] Error 1
make[1]: Leaving directory `/usr/local/src/security/tcpflow/src'
make: *** [install-recursive] Error 1
I am trying to catch large gzip-encoded "application/amf" traffic and hit coredumps at scan_http_write_data_zlib(), after inflate() returned Z_STREAM_ERROR:
/usr/local/bin/tcpflow[29754]: 094.236.007.012.00080-192.168.000.100.49766: closing file
/usr/local/bin/tcpflow[29754]: ::open(094.236.007.012.00080-192.168.000.100.49766,0,0)=13
/usr/local/bin/tcpflow[29754]: 094.236.007.012.00080-192.168.000.100.49766-HTTPBODY: detected zlib content, decompressing
/usr/local/bin/tcpflow[29754]: ::open(094.236.007.012.00080-192.168.000.100.49766-HTTPBODY,1089,420)=15
/usr/local/bin/tcpflow[29754]: 094.236.007.012.00080-192.168.000.100.49766-HTTPBODY-2: detected zlib content, decompressing
/usr/local/bin/tcpflow[29754]: ::open(094.236.007.012.00080-192.168.000.100.49766-HTTPBODY-2,1089,420)=15
29754 Segmentation Fault (core dumped) /usr/local/bin/tcpflow -a -b 500000 -d10 -AH -E http 'port 80' > /dev/null
Ubuntu 12.04 on i386, default zlib, etc.
scan_http should handle Content-Encoding: gzip and deflate. Right now, it writes .html files that are gzipped.
Demonstrated by tests/bug2.pcap.
Without the -P option, the flow is thrown away and a retransmitted packet received after the FIN looks like a new flow. The new flow is not in the cache, so it gets the old filename and then overwrites it.
What's needed: the -P flag needs to be removed. Retransmitted packets shouldn't create a new flow; the program should check whether the data matches data already received. If it does (or if the previous data in the file was all NULs, indicating it was never received), then write in place and continue. If they don't match, a new flow should be created.
tcpflow could load an sqlite database. Is there a standard schema for storing TCP connections?