Comments (4)
As you can see in hadoop-pcap/hadoop-pcap-lib/pom.xml, the library is compiled against hadoop-core 2.6.0-mr1-cdh5.5.2 and hadoop-common 2.6.0-cdh5.5.2, so strictly speaking it is not guaranteed to work with Hadoop 2.7.
You could try changing the versions in pom.xml and compiling your own build of the library against the Hadoop API version you need.
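For example, such a change could look roughly like the fragment below in hadoop-pcap-lib/pom.xml. This is only a hedged sketch: the exact dependency layout and the 2.7.x patch version are assumptions, so check the actual pom before editing.

```xml
<!-- Hypothetical edit: swap the CDH builds for vanilla Apache Hadoop 2.7.x -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.7.3</version>
</dependency>
```

After changing the versions, rebuild the jar with Maven and test it against your cluster.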
from hadoop-pcap.
Hi Manesh,
I do not know what you have tried, but this library does not support the Spark APIs, only MapReduce and Hive.
However, you should be able to use the org.apache.spark.SparkContext#hadoopFile methods with the classes from the net.ripe.hadoop.pcap.mr1.io package.
That is exactly what I did; let me paste the code below. It does not work with Spark 2.0+, and my guess is that this is due to a change in some Hadoop API going from 2.6 to 2.7. So maybe my question should be: does this library work with Hadoop 2.7+?
System.setProperty("hadoop.home.dir", "D:/hadoop-2.6.5")
val conf: SparkConf = new SparkConf().setAppName("Simple Application").setMaster("local[*]")
val sc: SparkContext = new SparkContext(conf)
sc.setLogLevel("ERROR")
val hadoopConf = new org.apache.hadoop.conf.Configuration(sc.hadoopConfiguration)
val jobConf = new JobConf(sc.hadoopConfiguration)
//FileInputFormat.setInputPaths(jobConf, "hdfs://localhost:9000/pcap/small_00000_20120316062947.pcap")
val something = sc.hadoopFile(
  "hdfs://localhost:9000/pcap/",
  classOf[PcapInputFormat],
  classOf[LongWritable],
  classOf[ObjectWritable],
  2
)
val another = something.map { case (k, v) => (k.get(), v.get().asInstanceOf[Packet]) }
another.take(10).foreach(println)
Output is as follows. Once the data is in an RDD, we can then load it into Hive.
17/06/30 17:39:27 INFO BlockManagerMaster: Registered BlockManager
(1,dst=192.168.202.79,ip_flags_df=false,tcp_flag_ns=false,ip_header_length=20,protocol=TCP,ip_version=4,len=47,tcp_seq=2162570451,id=36140,tcp_flag_urg=false,tcp_header_length=32,fragment_offset=0,tcp_flag_cwr=false,src=192.168.229.254,ttl=254,src_port=443,tcp_flag_rst=false,fragment=false,tcp_ack=4204467708,dst_port=46117,tcp_flag_ack=true,tcp_flag_fin=false,ts_usec=1.331901E9,ip_flags_mf=false,tcp_flag_syn=false,tcp_flag_psh=true,ts=1331901000,ts_micros=0,tcp_flag_ece=false)
...
(9,dst=192.168.202.79,ip_flags_df=false,tcp_flag_ns=false,ip_header_length=20,protocol=TCP,ip_version=4,len=0,tcp_seq=3045988242,id=1572,tcp_flag_urg=false,tcp_header_length=44,fragment_offset=0,tcp_flag_cwr=false,src=192.168.229.251,ttl=127,src_port=80,tcp_flag_rst=false,fragment=false,tcp_ack=2662467557,dst_port=50465,tcp_flag_ack=true,tcp_flag_fin=false,ts_usec=1.331901E9,ip_flags_mf=false,tcp_flag_syn=true,tcp_flag_psh=false,ts=1331901000,ts_micros=0,tcp_flag_ece=false)
(10,dst=192.168.202.79,ip_flags_df=false,tcp_flag_ns=false,ip_header_length=20,protocol=TCP,ip_version=4,len=50,tcp_seq=2162570606,id=56707,tcp_flag_urg=false,tcp_header_length=32,fragment_offset=0,tcp_flag_cwr=false,src=192.168.229.254,ttl=254,src_port=443,tcp_flag_rst=false,fragment=false,tcp_ack=4204467907,dst_port=46117,tcp_flag_ack=true,tcp_flag_fin=false,ts_usec=1.331901E9,ip_flags_mf=false,tcp_flag_syn=false,tcp_flag_psh=true,ts=1331901000,ts_micros=0,tcp_flag_ece=false)
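Before loading records like the ones above into Hive, the flattened key=value text can be split into named fields. Below is a minimal plain-Scala sketch of that step; PacketFieldParser is a hypothetical helper for illustration, not part of hadoop-pcap or Spark.

```scala
object PacketFieldParser {
  // Parse one record like "dst=192.168.202.79,protocol=TCP,src_port=443"
  // into a Map of field name -> field value.
  def parse(record: String): Map[String, String] =
    record.split(",").iterator
      .map(_.split("=", 2))           // split each pair at the first '='
      .collect { case Array(k, v) => k -> v } // skip malformed fragments
      .toMap

  def main(args: Array[String]): Unit = {
    val fields = parse("dst=192.168.202.79,protocol=TCP,src_port=443")
    println(fields("protocol")) // prints "TCP"
  }
}
```

In a real job the same function could be mapped over the RDD of packet strings to produce rows for a Hive table.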
Thanks Oleg, I will try to port it when I find the need/time :), it's a very useful library.
Btw my name is maHesh :)
Related Issues (20)
- New fields addition
- Re-compile to include HiveDecimal
- headerKeys in HttpPcapReader not initialize properly
- java.lang.RuntimeException: java.lang.RuntimeException: class net.ripe.hadoop.pcap.DnsPcapReader not net.ripe.hadoop.pcap.PcapReader
- parsed pcaket with NULL source and destination
- "Not a PCAP file" error
- Fragment ordering
- Ignore malformed packets
- Re-assembly for out-of-order packets
- Not able to extract packet payload data
- new MapReduce API support would be nice
- NullPointerException at "NewTrackingRecordReader.initialize(MapTask.java:548)"
- No Module in PySpark
- get URL in HTTP_HEADERS
- java.lang.ClassCastException: net.ripe.hadoop.pcap.io.PcapInputFormat cannot be cast to org.apache.hadoop.mapred.InputFormat
- Releasing 1.2 in Maven repository?
- The method close() of type PcapRecordReader must override a superclass method
- How To Run Hadoop PCAP File In Eclipse
- how to read incoming network traffic information through hadoop pcap lib and hadoop pacp serde libraries