
opennetvm's Introduction

Please let us know if you use OpenNetVM in your research by emailing us or completing this short survey.

Want to get started quickly? Try using our NSF CloudLab profile: https://www.cloudlab.us/p/GWCloudLab/onvm

Notes

We have updated our DPDK submodule to point to a new version, v20.05. If you have already cloned this repository, please update your DPDK submodule by running:

git submodule sync
git submodule update --init

Then rebuild DPDK by following the install guide or by running these commands:

cd dpdk
make config T=$RTE_TARGET
make T=$RTE_TARGET -j 8
make install T=$RTE_TARGET -j 8

The current OpenNetVM version is 20.10. Please see our release document for more information.

About

openNetVM is a high performance NFV platform based on DPDK and Docker containers. openNetVM provides a flexible framework for deploying network functions and interconnecting them to build service chains.

openNetVM is an open source version of the NetVM platform described in our NSDI 2014 and HotMiddlebox 2016 papers, released under the BSD license.

The develop branch tracks experimental builds (active development) whereas the master branch tracks verified stable releases. Please read our releases document for more information about our releases and release cycle.

You can find information about research projects building on OpenNetVM at the UCR/GW SDNFV project site. OpenNetVM is supported in part by NSF grants CNS-1422362 and CNS-1522546.

Installing

To install openNetVM, please see the openNetVM Installation guide for a thorough walkthrough.

Using openNetVM

openNetVM comes with several sample network functions. To get started with some examples, please see the Example Uses guide.

Creating NFs

The NF Development guide will provide what you need to start creating your own NFs.

Dockerize NFs

NFs can run inside Docker containers and can be started automatically or by hand. For more information, see our Docker guide.

TCP Stack

openNetVM can run mTCP applications as NFs. For more information, visit mTCP.

Citing OpenNetVM

If you use OpenNetVM in your work, please cite our paper:

@inproceedings{zhang_opennetvm:_2016,
	title = {{OpenNetVM}: {A} {Platform} for {High} {Performance} {Network} {Service} {Chains}},
	booktitle = {Proceedings of the 2016 {ACM} {SIGCOMM} {Workshop} on {Hot} {Topics} in {Middleboxes} and {Network} {Function} {Virtualization}},
	publisher = {ACM},
	author = {Zhang, Wei and Liu, Guyue and Zhang, Wenhui and Shah, Neel and Lopreiato, Phillip and Todeschi, Gregoire and Ramakrishnan, K.K. and Wood, Timothy},
	month = aug,
	year = {2016},
}


opennetvm's People

Contributors

1995parham, aaroncoplan, aaronmhill01, bdevierno1, catherinemeadows, chrisquion, dennisafa, dreidenbaugh, ethanbaron14, evandengdbw, frankduan, gkatsikas, grace-liu, gregoire-todeschi, jackkuo-tw, kevindweb, koolzz, lhahn01, mrdude, nks5295, noahchinitz, pcodes, phil-lopreiato, rohit-mp, rskennedy, sreyanalla, twood02, wenhuizhang, williammaa, zhangwei1984


opennetvm's Issues

The new DPDK version 17.08 cannot use the bound NICs

Hi,

The new version of openNetVM is based on DPDK version 17.08.
I have finished the installation of DPDK and bound the NIC ports to it. The NIC devices are Intel 82599.
However, I get the following error when running the manager.

EAL: Detected 12 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL: probe driver: 8086:154d net_ixgbe
EAL: Requested device 0000:82:00.0 cannot be used
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL: probe driver: 8086:154d net_ixgbe
EAL: Requested device 0000:82:00.1 cannot be used
WARNING: requested port 0 not present - ignoring
WARNING: requested port 1 not present - ignoring
Creating mbuf pool 'MProc_pktmbuf_pool' [24576 mbufs] ...
Creating mbuf pool 'NF_INFO_MEMPOOL' ...
Creating mbuf pool 'NF_MSG_MEMPOOL' ...

The bound NIC ports cannot be used. I also checked the helloworld example in DPDK 17.08. The same error occurs.

EAL: Detected 12 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL: probe driver: 8086:154d net_ixgbe
EAL: Requested device 0000:82:00.0 cannot be used
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL: probe driver: 8086:154d net_ixgbe
EAL: Requested device 0000:82:00.1 cannot be used
hello from core 1
hello from core 0

Therefore, openNetVM cannot receive packets from other servers. Have you ever encountered these errors? Could you please give me some guidance on solving this? Thanks a lot!

Best Regards,
Lishan

EAL: invalid coremask

Hello, all.

I'm trying to run Pktgen to test the performance of openNetVM, but something unexpected occurred. How can I resolve this problem?

Steps to reproduce error:

All the following steps are based on Pktgen Installation.

1. Bind two 100 Mbps network adapters to DPDK, using the dpdk-setup-iface.sh script:

$ cd openNetVM/dpdk/tools/
$ ./dpdk-setup-iface.sh eth1 10.0.0.1 255.255.255.0 up
$ ./dpdk-setup-iface.sh eth2 10.0.0.2 255.255.255.0 up

2. Configure pktgen:

$ cd openNetVM/tools/Pktgen/pktgen-dpdk/
$ make

3. Update the Lua script:

-- Please update this part with the destination MAC address, and the source and destination IP addresses you would like to send packets to

pktgen.set_mac("0", "00:0E:C6:D3:54:36");
pktgen.set_ipaddr("0", "dst", "10.0.0.2");
pktgen.set_ipaddr("0", "src", "10.0.0.1");

4. Run:

$ sudo bash run-pktgen.sh

and get this info:

pass an argument for port count
example usage: sudo bash run-pktgen.sh 1

So I added an argument and tried again:

$ sudo bash run-pktgen.sh 1

Error log:

root@sdn:/home/sdn/openNetVM/tools/Pktgen/pktgen-dpdk/openNetVM-Scripts# sudo bash run-pktgen.sh 1
Start pktgen

Copyright (c) <2010-2016>, Intel Corporation. All rights reserved. Powered by Intel® DPDK
EAL: Detected 8 lcore(s)
EAL: lcore 8 unavailable
EAL: invalid coremask

Usage: ./app/app/x86_64-native-linuxapp-gcc/app/pktgen [options]

EAL common options:
  -c COREMASK         Hexadecimal bitmask of cores to run on
  -l CORELIST         List of cores to run on
                      The argument format is <c1>[-c2][,c3[-c4],...]
                      where c1, c2, etc are core indexes between 0 and 128
  --lcores COREMAP    Map lcore set to physical cpu set
                      The argument format is
                            '<lcores[@cpus]>[<,lcores[@cpus]>...]'
                      lcores and cpus list are grouped by '(' and ')'
                      Within the group, '-' is used for range separator,
                      ',' is used for single number separator.
                      '( )' can be omitted for single element group,
                      '@' can be omitted if cpus and lcores have the same value
  --master-lcore ID   Core ID that is used as master
  -n CHANNELS         Number of memory channels
  -m MB               Memory to allocate (see also --socket-mem)
  -r RANKS            Force number of memory ranks (don't detect)
  -b, --pci-blacklist Add a PCI device in black list.
                      Prevent EAL from using this PCI device. The argument
                      format is <domain:bus:devid.func>.
  -w, --pci-whitelist Add a PCI device in white list.
                      Only use the specified PCI devices. The argument format
                      is <[domain:]bus:devid.func>. This option can be present
                      several times (once per device).
                      [NOTE: PCI whitelist cannot be used with -b option]
  --vdev              Add a virtual device.
                      The argument format is <driver><id>[,key=val,...]
                      (ex: --vdev=net_pcap0,iface=eth2).
  -d LIB.so|DIR       Add a driver or driver directory
                      (can be used multiple times)
  --vmware-tsc-map    Use VMware TSC map instead of native RDTSC
  --proc-type         Type of this process (primary|secondary|auto)
  --syslog            Set syslog facility
  --log-level         Set default log level
  -v                  Display version information on startup
  -h, --help          This help

EAL options for DEBUG use only:
  --huge-unlink       Unlink hugepage files after init
  --no-huge           Use malloc instead of hugetlbfs
  --no-pci            Disable PCI
  --no-hpet           Disable HPET
  --no-shconf         No shared config (mmap'd files)

EAL Linux options:
  --socket-mem        Memory to allocate on sockets (comma separated values)
  --huge-dir          Directory where hugetlbfs is mounted
  --file-prefix       Prefix for hugepage filenames
  --base-virtaddr     Base virtual address
  --create-uio-dev    Create /dev/uioX (usually done by hotplug)
  --vfio-intr         Interrupt mode for VFIO (legacy|msi|msix)
  --xen-dom0          Support running on Xen dom0 without hugetlbfs

===== Application Usage =====

Usage: ./app/app/x86_64-native-linuxapp-gcc/app/pktgen [EAL options] -- [-h] [-P] [-G] [-T] [-f cmd_file] [-l log_file] [-s P:PCAP_file] [-m <string>]
  -s P:file    PCAP packet stream file, 'P' is the port number
  -f filename  Command file (.pkt) to execute or a Lua script (.lua) file
  -l filename  Write log to filename
  -P           Enable PROMISCUOUS mode on all ports
  -g address   Optional IP address and port number default is (localhost:0x5606)
               If -g is used that enable socket support as a server application
  -G           Enable socket support using default server values localhost:0x5606 
  -N           Enable NUMA support
  -T           Enable the color output
  --crc-strip  Strip CRC on all ports
  -m <string>  matrix for mapping ports to logical cores
      BNF: (or kind of BNF)
      <matrix-string>   := """ <lcore-port> { "," <lcore-port>} """
      <lcore-port>      := <lcore-list> "." <port-list>
      <lcore-list>      := "[" <rx-list> ":" <tx-list> "]"
      <port-list>       := "[" <rx-list> ":" <tx-list>"]"
      <rx-list>         := <num> { "/" (<num> | <list>) }
      <tx-list>         := <num> { "/" (<num> | <list>) }
      <list>            := <num> { "/" (<range> | <list>) }
      <range>           := <num> "-" <num> { "/" <range> }
      <num>             := <digit>+
      <digit>           := 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
      1.0, 2.1, 3.2                 - core 1 handles port 0 rx/tx,
                                      core 2 handles port 1 rx/tx
                                      core 3 handles port 2 rx/tx
      1.[0-2], 2.3, ...             - core 1 handle ports 0,1,2 rx/tx,
                                      core 2 handle port 3 rx/tx
      [0-1].0, [2/4-5].1, ...       - cores 0-1 handle port 0 rx/tx,
                                      cores 2,4,5 handle port 1 rx/tx
      [1:2].0, [4:6].1, ...         - core 1 handles port 0 rx,
                                      core 2 handles port 0 tx,
      [1:2].[0-1], [4:6].[2/3], ... - core 1 handles port 0 & 1 rx,
                                      core 2 handles port  0 & 1 tx
      [1:2-3].0, [4:5-6].1, ...     - core 1 handles port 0 rx, cores 2,3 handle port 0 tx
                                      core 4 handles port 1 rx & core 5,6 handles port 1 tx
      [1-2:3].0, [4-5:6].1, ...     - core 1,2 handles port 0 rx, core 3 handles port 0 tx
                                      core 4,5 handles port 1 rx & core 6 handles port 1 tx
      [1-2:3-5].0, [4-5:6/8].1, ... - core 1,2 handles port 0 rx, core 3,4,5 handles port 0 tx
                                      core 4,5 handles port 1 rx & core 6,8 handles port 1 tx
      [1:2].[0:0-7], [3:4].[1:0-7], - core 1 handles port 0 rx, core 2 handles ports 0-7 tx
                                      core 3 handles port 1 rx & core 4 handles port 0-7 tx
      BTW: you can use "{}" instead of "[]" as it does not matter to the syntax.
  -h           Display the help information
Pktgen done

Additional Information

System: Ubuntu 14.04, 64bit.

hugepages (using grep -i huge /proc/meminfo):

AnonHugePages:    391168 kB
HugePages_Total:    1024
HugePages_Free:     1024
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

I'm new to this domain. Is there something wrong with my steps? Thank you for your help!

Service chaining example

This issue describes a simple bug/improvement for the OpenNetVM code, and is a great way for YOU to get involved with improving this open source project! If you want to fix it, just create a pull request with the fix as detailed in the Contributing document. If you have questions, post here!


By default, OpenNetVM creates a default service chain with the action "send to NF with service ID 1". However, when the manager starts it prints "Default service chain: send to sdn NF". We should revise this comment to say "send to service ID 1"

https://github.com/sdnfv/openNetVM/blob/develop/onvm/onvm_mgr/onvm_init.c#L249

We also should create a new NF that demonstrates setting up different default service chains, or service chains for specific flows. This NF would show how to use the Flow Director API. The NF could be very simple -- it might just start up, create some new rules with the flow director, and then exit (the rules will still apply to new traffic that arrives).

Need help making changes to implement NFP

Hello Sir,

I have completed my setup of OpenNetVM. It is working fine. Now, I am trying to implement NFP (Network Function Parallelism) on top of it (OpenNetVM is the base platform in that work). I have to convert service chains into service graphs to run NFs in parallel. If you have any ideas, could you please give me some insight? Where should I make the necessary changes? That would be helpful for me.

Need Help to Recover from Errors Caused by Example Programs

Hi all,

I believe I have set up openNetVM correctly. However, when I fire up load_generator along with basic_monitor, usually after around 1-30s the manager thread dies (in another terminal) without printing any info. When I try to fire it up again, I always get:

my_user_name@server-name-8:~$ ~/openNetVM/onvm/go.sh 0,1,2 1 0xF8 -s stdout
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: Cannot allocate memzone list
EAL: FATAL: Cannot init memzone
EAL: Cannot init memzone

I am trying to figure out why this happens and how to recover from it. I tried sudo rm /mnt/huge/* -rf and checked that there are huge pages free (see below). Right now, I have no choice but to spin up one virtual machine after another.

cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
HugePages_Total: 1024
HugePages_Free: 965
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

Steps to reproduce
When I can normally fire up the manager, I run:

nohup ~/openNetVM/examples/start_nf.sh basic_monitor 7 -p 100000 &
nohup ~/openNetVM/examples/start_nf.sh load_generator 1 -d 7 &

Then I get the error and can never recover, even after restarting the machine, binding ONVM_NIC_PCI, and running setup_environment.sh again.

Environment

  • OS: Google Cloud Platform Ubuntu 14.04 LTS
  • Spec: 16 vCPUs (it appears as 8 in openNetVM), 32 GB Mem
  • onvm version: from the current master branch
  • dpdk version: from the current master branch (v18.11)
  • other info: Here is the setup script for installing dependencies and openNetVM.

In Addition...
By the way, here are the outputs when I run the simple_forward and speed_tester examples, which I believe are running fine (correct me if I am wrong).

my_user_name@server-name-8:~$ ~/openNetVM/onvm/go.sh 0,1,2 1 0xF8 -s stdout
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:04.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 1af4:1000 net_virtio
EAL: Error: Invalid memory
WARNING: requested port 0 not present - ignoring
Registered 5 cores for NFs: 3, 4, 5, 6, 7
Creating mbuf pool 'MProc_pktmbuf_pool' [32767 mbufs] ...
Creating mbuf pool 'NF_INFO_MEMPOOL' ...
Creating mbuf pool 'NF_MSG_MEMPOOL' ...

Checking link statusdone
Default service chain: send to sdn NF
cur_index:1, action:2, destination:1

APP: Finished Process Init.
APP: 3 cores available in total
APP: 1 cores available for handling manager RX queues
APP: 1 cores available for handling TX queues
APP: 1 cores available for handling stats
APP: Core 1: Running TX thread for NFs 1 to 127
APP: Core 2: Running RX thread for RX queue 0
APP: Core 0: Running master thread
APP: Stats verbosity level = 1

Then I run:

nohup ~/openNetVM/examples/start_nf.sh simple_forward 2 -d 1 &
nohup ~/openNetVM/examples/start_nf.sh speed_tester 1 -d 2 -c 16000 &

[screenshot of NF output omitted]

I can run simple_forward with load_generator just fine, but with some RX and TX drops (I am also wondering why).

Best wishes,
Roger

Pktgen is not starting on a machine with 6 cores

Hello sir,
I am trying to run pktgen on a machine that has 6 cores, so I changed the coremask in run-pktgen.sh to 003f. But pktgen is unable to transmit packets. Is it because of the cores or some other reason? The NIC I am using is an IC10 and it is supported by DPDK.

Manager crashes in the modified openNetVM code (supporting parallelism) for large number of packets

Hi,
I have modified the openNetVM code to support parallelism. The code is working fine when a number of packets transmitted through the chain are low, say 100 or 1000. When I try to send more number of packets (more than 5000), manager crashes. I tried debugging the code, I found that the manager crashes while executing the function "onvm_pkt_flush_port_queue". Can you please tell me what could be the reason for this behavior?

Pktgen installation issue

While running the command sudo ./app/app/x86_64-native-linuxapp-gcc/pktgen -c 3 -n 1, I am getting the following error.

Copyright (c) <2010-2017>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected 28 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
Lua 5.3.4 Copyright (C) 1994-2017 Lua.org, PUC-Rio
Copyright (c) <2010-2017>, Intel Corporation. All rights reserved.
Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<

Packet Burst 64, RX Desc 1024, TX Desc 2048, mbufs/port 16384, mbuf cache 2048
!PANIC!: *** Did not find any ports to use ***
PANIC in pktgen_config_ports():
*** Did not find any ports to use ***6: [./app/x86_64-native-linuxapp-gcc/pktgen(_start+0x29) [0x44a799]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7ffff6cae830]]
4: [./app/x86_64-native-linuxapp-gcc/pktgen(main+0x598) [0x447b38]]
3: [./app/x86_64-native-linuxapp-gcc/pktgen(pktgen_config_ports+0x15df) [0x472c5f]]
2: [./app/x86_64-native-linuxapp-gcc/pktgen(__rte_panic+0xc3) [0x44207e]]
1: [./app/x86_64-native-linuxapp-gcc/pktgen(rte_dump_stack+0x2b) [0x4c006b]]
Aborted (core dumped)

Can you please suggest a solution to this problem?

Speed tester should be able to create 0 packets

This issue describes a simple bug/improvement for the OpenNetVM code, and is a great way for YOU to get involved with improving this open source project! If you want to fix it, just create a pull request with the fix as detailed in the Contributing document. If you have questions, post here!


Currently the speed tester NF requires a -c parameter of 1 or greater, causing it to create at least 1 packet. This is unnecessary in cases where you are running a chain of speed testers (only 1 NF in the chain needs to create packets) and is unnecessary when using the speed tester to measure throughput with packets generated by another source (note that those packets will still need to have the SPEED_TESTER_BIT set or the packets will be dropped).

Simple fix:

  • Change this condition to only produce an error if the parameter is < 0
  • Print a warning message that no packets were created
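The two bullets above could be sketched roughly as follows. This is an illustrative, self-contained version only: the helper name and the variable holding the parsed -c value are hypothetical, and the real NF's argument parsing may be structured differently.

```c
#include <stdio.h>

/* Hedged sketch of the relaxed check: negative counts are still an
 * error, but 0 is now accepted with a warning instead of rejected. */
static int
validate_packet_count(int packet_count) {
        if (packet_count < 0) {
                fprintf(stderr, "Error: packet count must be >= 0\n");
                return -1;   /* still reject negative values */
        }
        if (packet_count == 0)
                printf("Warning: no packets will be created\n");
        return 0;            /* 0 and positive values are accepted */
}
```

With this change, a chain of speed testers can run with -c 0 on every NF except the one that sources packets.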

./onvm/go.sh 0,1,2,3 1 -s stdout fails with "Cause: Cannot initialise port 0"

According to openNetVM/docs/Install.md, steps 1-18 succeed, but at step 19 (Run openNetVM manager) there is an issue (I cloned the latest develop branch with git clone -b develop):

root@cjx:/home/cjx/openNetVM# ./onvm/go.sh 0,1,2,3 1 -s stdout

EAL: Detected 12 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:04:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:04:00.1 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:06:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:105e net_e1000_em
EAL: PCI device 0000:06:00.1 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:105e net_e1000_em
Creating mbuf pool 'MProc_pktmbuf_pool' [26112 mbufs] ...
Creating mbuf pool 'NF_INFO_MEMPOOL' ...
Creating mbuf pool 'NF_MSG_MEMPOOL' ...
Port 0 init ...
Port 0 socket id 0 ...
Port 0 Rx rings 1 ...
EAL: Error - exiting with code: 1

Cause: Cannot initialise port 0

I would appreciate any advice you can give.

Speed tester crashes if there are no free mbufs to allocate

This issue describes a simple bug/improvement for the OpenNetVM code, and is a great way for YOU to get involved with improving this open source project! If you want to fix it, just create a pull request with the fix as detailed in the Contributing document. If you have questions, post here!


If there are no free mbufs, the Speed tester example NF will seg fault with no warning/error message on line 413:

for (i = 0; i < packet_number; ++i) {
        struct onvm_pkt_meta *pmeta;
        struct ether_hdr *ehdr;
        int j;
        struct rte_mbuf *pkt = rte_pktmbuf_alloc(pktmbuf_pool);
        /* set up ether header and set new packet size */
        ehdr = (struct ether_hdr *)rte_pktmbuf_append(pkt, packet_size);

To reproduce this, run this command twice:

/openNetVM/examples/speed_tester$ ./go.sh 6 1 1 -c 16000

The first time will work correctly and should report > 30 Million packets per second on a powerful server. The second time you run the command it should crash immediately after printing Creating 16000 packets to send to 1.

This can happen if you start/stop the NF several times without reloading the manager (leading to a memory leak for packets that weren't fully processed by the NF before exiting -- a more complex issue we can resolve after this!).

To fix the segmentation fault, check for a null return from the rte_pktmbuf_alloc call. If the call fails, then break out of the for loop and continue running with however many packets it has created so far (from the i variable). This should update the packet_number variable to reflect how many packets were actually created.
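The fix described above could look roughly like the sketch below. Since it cannot run without DPDK, rte_pktmbuf_alloc is replaced here by a mock pool that runs dry after three allocations; the loop structure and the "break and report how many packets were created" behavior are what matter.

```c
#include <stddef.h>
#include <stdio.h>

/* Stand-in for rte_pktmbuf_alloc(): this mock "pool" runs dry after
 * POOL_CAPACITY allocations so the failure path can be exercised
 * without DPDK. */
#define POOL_CAPACITY 3
static int slots_used = 0;

static void *
mock_pktmbuf_alloc(void) {
        static char slots[POOL_CAPACITY];
        if (slots_used >= POOL_CAPACITY)
                return NULL;                 /* pool exhausted */
        return &slots[slots_used++];
}

/* Sketch of the fixed loop: check for a NULL return, break out, and
 * report how many packets were actually created so the caller can
 * update packet_number accordingly. */
static unsigned int
create_packets(unsigned int packet_number) {
        unsigned int i;
        for (i = 0; i < packet_number; ++i) {
                void *pkt = mock_pktmbuf_alloc();
                if (pkt == NULL) {
                        printf("Warning: mbuf pool empty, created only %u packets\n", i);
                        break;
                }
                /* ...set up the ether header and payload as before... */
        }
        return i;   /* number of packets actually created */
}
```

In the real NF the same pattern applies directly to the rte_pktmbuf_alloc call shown earlier.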

rte_eth_dev_configure() exiting with code -22

Hello, I encountered a problem when running onvm: when the program is initializing port 0, it always fails.

I think there must be something wrong with my configuration, because when I run the DPDK distributor example, it also fails to initialize port 0. It reports ethdev port_id=0 nb_tx_queues=2 > 1, which shows that my port_conf has max_tx_queues=1. Could you tell me the difference between a NIC and a port? I'm confused about that.

I run onvm in VirtualBox, which is where I encounter the above problem. When I run onvm on a physical server everything works well, fortunately. ;)

By the way, how can I enable openNetVM to show more debug messages? I try to modify files in $RTE_SDK/config/, but it doesn't work.

Would you like to help me? Any suggestion is appreciated. Thank you very much.

Some problem about packet speed

When I run the example application basic_monitor, the NF forwards the packets it receives back out of the system.
In my understanding, a packet is received on the port, then goes to basic_monitor's RX, then to basic_monitor's TX, and finally is sent out through the NIC TX. But when I run the application, I see that basic_monitor's RX and TX run at the same rate, about 1488345 pkt/s, while the NIC RX and TX are both at about 1488455 pkt/s.
Does this mean the packets are sent out of the system directly, not through basic_monitor? And why is the NIC TX rate larger than the NF's RX and TX?

CPU core utilization

When I run the openNetVM manager, it requires 4 cores. But when I use the 'top' command, why do I only see the manager occupying 300% of the CPU, not 400%?

How many ports should I use?

Dear Tim,

Sorry for this stupid question as I am quite dumb. I see from the manual that I need to pass a portmask option to onvm manager. Like this:

# onvm/go.sh 0,2,4,6,8,10,12 1 -s stdout

From onvm_args.c, I see that the parse_portmask() function accepts a zero-port situation and just shows a warning message. Thus I tried the following command:

# onvm/go.sh 0,2,4,6,8,10,12 0 -s stdout

The ONVM Manager starts normally. In this situation, I wonder how to transfer packets to ONVM. Would you like to explain this? Thank you in advance!

Yours,
Shuo.

EAL: pthread_setaffinity_np failed

DPDK is successfully installed and tested, but I am facing a problem while testing OpenNetVM. Any hints?

EAL: Detected 2 lcore(s)
EAL: Probing VFIO support...
EAL: pthread_setaffinity_np failed
PANIC in eal_thread_loop():
cannot set affinity
5: [/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7ffff6fd741d]]
4: [/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7ffff72a16ba]]
3: [/home/mdasari/openNetVM/onvm/onvm_mgr/x86_64-native-linuxapp-gcc/onvm_mgr(eal_thread_loop+0x260) [0x491bd0]]
2: [/home/mdasari/openNetVM/onvm/onvm_mgr/x86_64-native-linuxapp-gcc/onvm_mgr(__rte_panic+0xc3) [0x43a9eb]]
1: [/home/mdasari/openNetVM/onvm/onvm_mgr/x86_64-native-linuxapp-gcc/onvm_mgr(rte_dump_stack+0x2b) [0x49682b]]
./go.sh: line 61: 30625 Aborted                 (core dumped) sudo $SCRIPTPATH/onvm_mgr/$RTE_TARGET/onvm_mgr -l $cpu -n 4 --proc-type=primary ${virt_addr} -- -p ${ports} ${num_srvc} ${def_srvc} ${stats} ${stats_sleep_time}

ONVM installation issues - mount huge pages

I hit a couple issues when installing ONVM on a cloudlab server.

First, the instructions don't tell us to configure ONVM_HOME (it is mentioned under troubleshooting in the install guide, but it should be one of the steps at the top). This should be an easy fix.

  • TODO: Update install docs to mention ONVM_HOME at the top.

I ran the install script and got the "ONVM INSTALL COMPLETED SUCCESSFULLY" message, but it does not appear to have correctly mounted the huge page directory, despite showing these messages:

Build complete [x86_64-native-linuxapp-gcc]
Installation cannot run with T defined and DESTDIR undefined
Configuring 1024 hugepages with size 2048
Adding huge fs to /etc/fstab
Configuring environment
Loading uio kernel modules
Checking NIC status

but running mount showed this:

$ mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
none on /sys/fs/pstore type pstore (rw)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
ops.wisc.cloudlab.us:/proj/gwcloudlab-PG0 on /proj/gwcloudlab-PG0 type nfs (rw,vers=3,nolock,udp,addr=128.104.222.8)
ops.wisc.cloudlab.us:/share on /share type nfs (rw,vers=3,nolock,udp,addr=128.104.222.8)

ONVM did add the line to /etc/fstab; in fact it added it twice because I reran the install script a couple of times (I'm not sure why that would matter for this problem):

timwoo0 at onvm-0 in /local/onvm/openNetVM on master
$ cat /etc/fstab
UUID=dffd32c2-2791-42df-8922-bacee501067b /               ext3    errors=remount-ro 0       1
# the following swap devices added by /usr/local/etc/emulab/fixup-fstab-swaps
/dev/sda3		swap	swap	defaults	0	0
huge /mnt/huge hugetlbfs defaults 0 0
huge /mnt/huge hugetlbfs defaults 0 0
  • TODO: Update install script so it won't write the same line if it is already in the file (use grep to check)

If I try to run the manager, it crashes with the error Cannot get hugepage information:

timwoo0 at onvm-0 in /local/onvm/openNetVM on master
$ ./onvm/go.sh 0,1,2,3 1 -s stdout
EAL: Detected 20 lcore(s)
EAL: 1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
PANIC in rte_eal_init():
Cannot get hugepage information
7: [/local/onvm/openNetVM/onvm/onvm_mgr/onvm_mgr/x86_64-native-linuxapp-gcc/onvm_mgr() [0x4313ab]]
6: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7ffff7102f45]]
5: [/local/onvm/openNetVM/onvm/onvm_mgr/onvm_mgr/x86_64-native-linuxapp-gcc/onvm_mgr(main+0x1f) [0x4307df]]
4: [/local/onvm/openNetVM/onvm/onvm_mgr/onvm_mgr/x86_64-native-linuxapp-gcc/onvm_mgr(init+0x1d) [0x431b2d]]
3: [/local/onvm/openNetVM/onvm/onvm_mgr/onvm_mgr/x86_64-native-linuxapp-gcc/onvm_mgr(rte_eal_init+0xc55) [0x465725]]
2: [/local/onvm/openNetVM/onvm/onvm_mgr/onvm_mgr/x86_64-native-linuxapp-gcc/onvm_mgr(__rte_panic+0xbe) [0x42d023]]
1: [/local/onvm/openNetVM/onvm/onvm_mgr/onvm_mgr/x86_64-native-linuxapp-gcc/onvm_mgr(rte_dump_stack+0x1a) [0x46ceba]]

If I run mount /mnt/huge then the problem is resolved and the manager works as expected.

  • TODO: Figure out why mount command isn't always run correctly by install script

Limit number of TX queues created to prevent crash at startup

This issue describes a simple bug/improvement for the OpenNetVM code, and is a great way for YOU to get involved with improving this open source project! If you want to fix it, just create a pull request with the fix as detailed in the Contributing document. If you have questions, post here!


As described in #27, if you start the ONVM manager with a NIC supporting fewer than 16 transmit queues it will crash. Problems will also occur if you try to run more than 16 TX threads in the ONVM manager.

This happens because ONVM sets the default number of TX queues to the MAX_NFS macro (typically 16):

static int
init_port(uint8_t port_num) {
        const uint16_t rx_rings = ONVM_NUM_RX_THREADS, tx_rings = MAX_NFS;

and then has a for loop to configure each of the TX rings on the NIC:
for (q = 0; q < tx_rings; q++) {
        retval = rte_eth_tx_queue_setup(port_num, q, tx_ring_size,
                                        rte_eth_dev_socket_id(port_num),
                                        &tx_conf);
        if (retval < 0)
                return retval;
}

We do this because in the past, each NF had access to a NIC TX ring, but that is no longer the case.

A simple fix to this problem is to limit the number of TX rings created based on the HW capabilities of the NIC. This can be found with the rte_eth_dev_info_get() function provided by DPDK. The function sets a rte_eth_dev_info structure with a field indicating the maximum number of TX queues for the requested device.

A more complex fix is to set the number of NIC TX rings equal to the number of TX threads in the ONVM manager (since there is a 1-1 mapping of these). This will require exposing more information about how many TX threads are created at initialization time.

If you want to try to solve this, I suggest you do the simple fix first, and then we can discuss how to do the complex fix in a second iteration.

make error in onvm directory

I am following the installation instructions, and at the step where I enter the onvm directory and run the make command, I get the following errors.

/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:108:18: error: 'struct rte_eth_rxmode' has no member named 'header_split'
  .header_split = 0,   /* header split disabled */
   ^~~~~~~~~~~~
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:109:18: error: 'struct rte_eth_rxmode' has no member named 'hw_ip_checksum'
  .hw_ip_checksum = 1, /* IP checksum offload enabled */
   ^~~~~~~~~~~~~~
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:109:35: warning: excess elements in struct initializer
  .hw_ip_checksum = 1, /* IP checksum offload enabled */
                    ^
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:109:35: note: (near initialization for 'port_conf.rxmode')
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:110:18: error: 'struct rte_eth_rxmode' has no member named 'hw_vlan_filter'
  .hw_vlan_filter = 0, /* VLAN filtering disabled */
   ^~~~~~~~~~~~~~
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:110:35: warning: excess elements in struct initializer
  .hw_vlan_filter = 0, /* VLAN filtering disabled */
                    ^
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:110:35: note: (near initialization for 'port_conf.rxmode')
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:111:18: error: 'struct rte_eth_rxmode' has no member named 'jumbo_frame'
  .jumbo_frame = 0,    /* jumbo frame support disabled */
   ^~~~~~~~~~~
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:111:35: warning: excess elements in struct initializer
  .jumbo_frame = 0,    /* jumbo frame support disabled */
                    ^
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:111:35: note: (near initialization for 'port_conf.rxmode')
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:112:18: error: 'struct rte_eth_rxmode' has no member named 'hw_strip_crc'
  .hw_strip_crc = 1,   /* CRC stripped by hardware */
   ^~~~~~~~~~~~
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:112:35: warning: excess elements in struct initializer
  .hw_strip_crc = 1,   /* CRC stripped by hardware */
                    ^
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:112:35: note: (near initialization for 'port_conf.rxmode')
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:142:10: error: 'const struct rte_eth_txconf' has no member named 'txq_flags'
  .txq_flags = 0,
   ^~~~~~~~~
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:142:27: warning: initialized field overwritten [-Woverride-init]
  .txq_flags = 0,
               ^
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:142:27: note: (near initialization for 'tx_conf.tx_free_thresh')
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c: In function 'init':
/home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:171:9: warning: 'rte_eth_dev_count' is deprecated [-Wdeprecated-declarations]
  total_ports = rte_eth_dev_count();
                ^~~~~~~~~~~
In file included from /home/indranil/openNetVM/onvm/onvm_mgr/../onvm_nflib/onvm_includes.h:92:0,
                 from /home/indranil/openNetVM/onvm/onvm_mgr/../onvm_mgr/onvm_args.h:57,
                 from /home/indranil/openNetVM/onvm/onvm_mgr/../onvm_mgr/onvm_init.h:73,
                 from /home/indranil/openNetVM/onvm/onvm_mgr/onvm_init.c:52:
/home/indranil/openNetVM/dpdk/x86_64-native-linuxapp-gcc/include/rte_ethdev.h:1474:10: note: declared here
 uint16_t rte_eth_dev_count(void);
          ^~~~~~~~~~~~~~~~~
/home/indranil/openNetVM/dpdk/mk/internal/rte.compile-pre.mk:114: recipe for target 'onvm_init.o' failed
make[2]: *** [onvm_init.o] Error 1
/home/indranil/openNetVM/dpdk/mk/rte.extapp.mk:14: recipe for target 'all' failed
make[1]: *** [all] Error 2
/home/indranil/openNetVM/dpdk/mk/rte.extsubdir.mk:21: recipe for target 'onvm_mgr' failed
make: *** [onvm_mgr] Error 2

Message Passing between NFs

How can I pass a message between two NFs?

I see there are two functions,

first one in onvm_mgr/onvm_nf.h

int onvm_nf_send_msg(uint16_t dest, uint8_t msg_type, void *msg_data);

and next one in onvm_nflib/onvm_nflib.h

int onvm_nflib_handle_msg(struct onvm_nf_msg *msg);

However, they seem to support only ONVM manager-to-NF communication. How can I communicate between two NFs?

Is there already a function that lets me send a message to a particular NF so that the destination NF will automatically process it?

Thank you,
Aditya

how to send packet

I have a question about how to use dpdk-pktgen with DPDK: how can I connect these two environments? When I bind a NIC to DPDK, I can no longer use it to connect to other machines and it has no IP address. What should I do?

Comprehensive list of enhancements

Possible new features/updates for onvm: WIP, planned, and finished.

Description:
This will serve as a pinned issue with tasks that need to be implemented. This is easier to edit than the projects board and provides a quick overview. We might utilize the Projects board for tasks that are actually assigned and under development.

When a task is WIP it will have a check-mark box next to it and an owner/Projects link after it. When a task is finished the box will be checked and moved to the DONE section.

Adding or Claiming tasks:
If anyone has other ideas for OpenNetVM, add a comment with a description and I'll append them to the issue. Similarly, if you want to claim and start a task, drop a comment and I'll assign it/move it to the project board.

Overall improvements, scripts, docs, other:

  • Any reason to maintain a wiki section in the main repo? Would updating docs be better? Does anyone care or want to do it? Discuss
  • Docker documentation & setup needs an update

NF related:

  • Add a shutdown callback similar to nf_setup but on shut down, this can be used to print out average stats and other useful info. #179
  • Improved router NF: routes based on rules defined in the config, but could also accept different parameters (port/flags/seq) from the ip_hdr, udp_hdr, packet payload, etc. Could use LPM?
  • Based on the NF-to-NF messaging we can implement complex experiments that require multiple NFs that communicate with each other. #297

Scripting:

  • We could provide helper scripts to launch linear and circular chains of NFs, probably using Python & JSON for this #184

Web stats (currently all assigned to @kevindweb, link):

  • Fix the bug where you click on one panel but the other one is highlighted
  • Show all allocated cores (not just the ones used)

Cores/Scaling/Magic:

  • Automatic scaling for NFs: take, for example, speed_tester; based on some data metrics in onvm_mgr, decide when speed_tester is overloaded (some queue data might be useful) -> scale up more speed_tester instances. Assigned to @ratnadeepb, link

Performance:

  • Overall performance tweaks and monitoring which parts of the code lead to slowdowns
  • Separate the NF startup message processing from the stats; currently it's all being done in one thread

Multi-Host ONVM

  • service chaining across hosts (VXLAN encapsulation)
  • See Phill's old code

CloudLab:

  • Update main profile
  • Provide complex templates; a 3-node setup with a client, an onvm middlebox with a local server, and a remote server is more fun than just a bare onvm installation

mTCP:

  • Fix scaling; with the current version scaling probably won't work, as the nf_info is stored in a global variable
  • Performance tests, optimizations & other things

mOS

Snort

  • Update Snort; an imaginary prize goes to whoever gets it up and running. Currently @ratnadeepb is trying to set it up

Cluster

  • Have a webpage that shows if the nodes are up, just pings all nodes

Other endpoint application:
We could explore other applications, such as the dpdk nginx module that can be used with onvm, as examples of complex NFs. Before doing this we have to think about what benefits porting something would give us.

Also, we should look at f-stack, it seems similar to our project.

DONE (As of Spring 2020):

  • Check ports before trying to send (running basic monitor with speed_tester or load_generator will crash the onvm_mgr) #160
  • Check that no NFs are running before starting the onvm_mgr (when mgr dies but the NF stays alive relaunching mgr will fail without a very descriptive error message) #180
  • Half of the nodes are not really alive, fix them
  • Migration of NFs to another core, Assigned to @koolzz pr link #87
  • onvm_mgr launch script should check if any NFs are running before launching #180

DONE (Spring 2019):

  • Update the good old ./install.sh / setup_environment.sh scripts, as they fail to deal with hugepages. Assigned to @koolzz
  • Ubuntu 18.04.01 update? Assigned to @dennisafa, link
  • Get a sysadmin (sysadmins: @kevindweb @dennisafa)
  • Ensuring we only have one PR being tested at once for CI. @koolzz
  • Finish the Firewall NF #225 Assigned to @dennisafa, pr link
  • Web stats improvements:
    • Add new events? The easy one is scaling
    • After the pthread pr we will have core info -> we need to put that onto the web page.
  • Reuse instance ids (basically reuse onvm_nf structs); it doesn't make sense for us not to do this. Assigned to @koolzz, pr here

error when use docker

When I run openNetVM's example in Docker, the following problem occurs: /openNetVM/examples/basic_monitor/build/app/monitor: error while loading shared libraries: libnuma.so.1: cannot open shared object file: No such file or directory

How does openNetVM pull packets from NIC and send to an NF?

Hi there,

I'm wondering: after a NIC receives a packet from a remote host, how does ONVM decide which NF to forward the packet to first?

So for example, if I set up a chain that looks like: simple_forwarder -> basic_monitor, where I want simple_forwarder to be the "gateway" NF into my chain. How do I tell ONVM to send all the NIC packets to simple_forwarder instead of basic_monitor? (Or how does ONVM even know it should forward a NIC packet to a NF? Does it do it by default?)

Thanks

Env var check

This issue describes a simple bug/improvement for the OpenNetVM code, and is a great way for YOU to get involved with improving this open source project! If you want to fix it, just create a pull request with the fix as detailed in the Contributing document. If you have questions, post here!


We should add an ONVM_HOME environment variable check to onvm/onvm_nflib/Makefile. If the variable is not defined, we should print an error and exit, since installation will fail without it. Same as we do with RTE_SDK.

The variable is accessed here

This is low priority and can be a good first issue if we want more of those.
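The check could mirror the RTE_SDK guard that DPDK's example Makefiles already use; a minimal sketch, with the exact error message as a placeholder:

```make
# Fail early with a clear message instead of a confusing build error later.
ifeq ($(ONVM_HOME),)
$(error "Please define the ONVM_HOME environment variable")
endif
```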

New version can't be compiled ?

Hi, I find that the code has been updated to v17.11, while the install.md file has not. And running make in the onvm directory cannot build the project; it fails with the error:
onvm_pkt.o: In function onvm_pkt_process_rx_batch: onvm_pkt.c:(.text+0x97): undefined reference to onvm_pkt_enqueue_nf
So please update the install.md file, or tell me how to compile the project. Thank you very much!

openNetVM NF manager crashed when running with more than 15 NFs

I know that in your onvm_common.h file there is a max NF limit, with a value of 16. I adjusted this value to 128, and then the NF manager got an error:
EAL: Error - exiting with code: 1
Cause: Cannot initialise port 0
but setting it to 64 is OK for 15 NFs; the NF manager crashes with more NFs.
Also, I changed the value of NF_INFO_CACHE to 0 in /onvm/onvm_msg/onvm_init.h, since with the default value (8) we can only run 9 NFs on our server. By the way, the server has 2x8 physical cores.

Query: usage with KVM?

Hi all,

Can the current setup designed for containers be extended for inter-VM communication using KVM?

Thanks and Regards,
Gurkanwal Singh

HugePages Free Problem

After I run openNetVM, I execute the command grep -i 'huge' /proc/meminfo and find that HugePages_Free is 0. How can I free this used hugepage memory? I can find nothing in the /mnt/huge/ directory.

Then, I allocate another 1024 hugepages and run DPDK's simplest example, helloworld. I find HugePages_Free is 1022, which indicates that 2 pages were leaked by helloworld.

There must be something wrong with my configuration or the program. Would you like to help me? Any suggestions are appreciated. Thank you very much!

Enable/Disable Flow table lookup for incoming packets

This issue describes a simple bug/improvement for the OpenNetVM code, and is a great way for YOU to get involved with improving this open source project! If you want to fix it, just create a pull request with the fix as detailed in the Contributing document. If you have questions, post here!


By default, ONVM does a flow table lookup for all newly arrived packets (using our Flow Director API). However, many users don't use the built-in flow table, so this is wasteful since packets just get sent to the default (service ID 1).

Instead, this should be configurable either as a compile time variable or a command line argument to control whether a lookup is performed or not.

The code to change is here:

ret = onvm_flow_dir_get_pkt(pkts[i], &flow_entry);
if (ret >= 0) {
        sc = flow_entry->sc;
        meta->action = onvm_sc_next_action(sc, pkts[i]);
        meta->destination = onvm_sc_next_destination(sc, pkts[i]);
} else {
        meta->action = onvm_sc_next_action(default_chain, pkts[i]);
        meta->destination = onvm_sc_next_destination(default_chain, pkts[i]);
}

Some remaining questions on running performance tests

Hi, all. First, thank you for your great works!

Recently we have been trying to test the performance of an openNetVM service chain on a commodity server. We successfully built the simple service chain shown in Linear NF Chain and planned to test its performance. On another PC, we set up the DPDK environment and installed Pktgen.

However, because we lack experience in this area, we still have the following problems:

(1) We do not have two 10G NICs, so we use two onboard network adapters instead and have bound them to DPDK. But this does not seem to work for Pktgen. Once we run the script "run-pktgen.sh", it reports an error like:

!PANIC!: *** Did not find any ports to use *** 
PANIC in pktgen_config_ports():
*** Did not find any ports to use ***

We have checked the binding status using "dpdk-devbind.py". How can we resolve this error?

(2) So we are considering other tools for performance testing, such as iperf and netperf. We plan to set up a topology like this (assuming we are using iperf):

PC1, with iperf client <---> the server which maintains openNetVM SC <---> PC2, with iperf server

Is this feasible in the openNetVM scenario? Could you give us some suggestions?

Thank you very much!

Speed Tester - tx rate control

Hi,

I have noticed that the new version of the speed tester adds latency measurement. Now I want to measure the processing latency of another NF using the speed tester, which sends some sample packets to that NF. However, the current speed tester NF repeatedly sends that batch of packets to itself or to another NF.

So, how can I control the transmission rate of the speed tester? One way is to delay for some time after sending a batch of packets using the function rte_delay_us(unsigned us). But after reading the speed tester code, I don't understand why the speed tester repeatedly sends packets, or how to add rte_delay_us to limit the transmission rate.

Lishan

Error in Linux Kernel version 4.16

Hi,
I was trying to install openNetVM on an Ubuntu server (kernel 4.16). It installs properly on kernel 4.4; however, I am getting the following errors (attached) on kernel 4.16.

I solved the first one related to UTS_UBUNTU_RELEASE_ABI.

It would be great if you could help me with the others. Thanks!

error log after running script: ./install.sh

----------------------------------------
ONVM Environment Variables:
----------------------------------------
RTE_SDK: /root/openNetVM/dpdk
RTE_TARGET: x86_64-native-linuxapp-gcc
ONVM_NUM_HUGEPAGES: 1024
ONVM_SKIP_HUGEPAGES: 
ONVM_SKIP_FSTAB: 
----------------------------------------
Compiling and installing dpdk in /root/openNetVM/dpdk
Configuration done using x86_64-native-linuxapp-gcc
== Build lib
== Build lib/librte_eal
== Build lib/librte_compat
== Build lib/librte_eal/common
  SYMLINK-FILE include/rte_compat.h
  SYMLINK-FILE include/generic/rte_byteorder.h
  SYMLINK-FILE include/generic/rte_atomic.h
  SYMLINK-FILE include/generic/rte_prefetch.h
  SYMLINK-FILE include/generic/rte_cycles.h
  SYMLINK-FILE include/generic/rte_spinlock.h
  SYMLINK-FILE include/generic/rte_memcpy.h
  SYMLINK-FILE include/generic/rte_cpuflags.h
  SYMLINK-FILE include/generic/rte_rwlock.h
  SYMLINK-FILE include/generic/rte_vect.h
  SYMLINK-FILE include/generic/rte_pause.h
  SYMLINK-FILE include/generic/rte_io.h
  SYMLINK-FILE include/rte_branch_prediction.h
  SYMLINK-FILE include/rte_common.h
  SYMLINK-FILE include/rte_debug.h
  SYMLINK-FILE include/rte_eal.h
  SYMLINK-FILE include/rte_errno.h
  SYMLINK-FILE include/rte_launch.h
  SYMLINK-FILE include/rte_lcore.h
  SYMLINK-FILE include/rte_log.h
  SYMLINK-FILE include/rte_memory.h
  SYMLINK-FILE include/rte_memzone.h
  SYMLINK-FILE include/rte_pci.h
  SYMLINK-FILE include/rte_per_lcore.h
  SYMLINK-FILE include/rte_random.h
  SYMLINK-FILE include/rte_tailq.h
  SYMLINK-FILE include/rte_interrupts.h
  SYMLINK-FILE include/rte_alarm.h
  SYMLINK-FILE include/rte_string_fns.h
  SYMLINK-FILE include/rte_version.h
  SYMLINK-FILE include/rte_eal_memconfig.h
  SYMLINK-FILE include/rte_malloc_heap.h
  SYMLINK-FILE include/rte_hexdump.h
  SYMLINK-FILE include/rte_devargs.h
  SYMLINK-FILE include/rte_bus.h
  SYMLINK-FILE include/rte_dev.h
  SYMLINK-FILE include/rte_vdev.h
  SYMLINK-FILE include/rte_pci_dev_feature_defs.h
  SYMLINK-FILE include/rte_pci_dev_features.h
  SYMLINK-FILE include/rte_malloc.h
  SYMLINK-FILE include/rte_keepalive.h
  SYMLINK-FILE include/rte_time.h
  SYMLINK-FILE include/rte_service.h
  SYMLINK-FILE include/rte_service_component.h
  SYMLINK-FILE include/rte_rwlock.h
  SYMLINK-FILE include/rte_memcpy.h
  SYMLINK-FILE include/rte_cycles.h
  SYMLINK-FILE include/rte_pause.h
  SYMLINK-FILE include/rte_spinlock.h
  SYMLINK-FILE include/rte_atomic_32.h
  SYMLINK-FILE include/rte_vect.h
  SYMLINK-FILE include/rte_prefetch.h
  SYMLINK-FILE include/rte_byteorder_32.h
  SYMLINK-FILE include/rte_atomic_64.h
  SYMLINK-FILE include/rte_byteorder_64.h
  SYMLINK-FILE include/rte_cpuflags.h
  SYMLINK-FILE include/rte_rtm.h
  SYMLINK-FILE include/rte_atomic.h
  SYMLINK-FILE include/rte_io.h
  SYMLINK-FILE include/rte_byteorder.h
== Build lib/librte_eal/linuxapp
== Build lib/librte_eal/linuxapp/igb_uio
== Build lib/librte_eal/linuxapp/eal
  CC eal.o
  CC eal_hugepage_info.o
  CC eal_memory.o
  CC eal_thread.o
  CC eal_log.o
  CC eal_vfio.o
  CC eal_vfio_mp_sync.o
  CC eal_pci.o
  CC eal_pci_uio.o
  CC eal_pci_vfio.o
  CC eal_debug.o
  CC eal_lcore.o
  CC eal_timer.o
  CC eal_interrupts.o
  CC eal_alarm.o
  CC eal_common_lcore.o
  CC eal_common_timer.o
  CC eal_common_memzone.o
  CC eal_common_log.o
  CC eal_common_launch.o
  CC eal_common_vdev.o
  CC eal_common_pci.o
  CC eal_common_pci_uio.o
  CC eal_common_memory.o
  CC eal_common_tailqs.o
  CC eal_common_errno.o
  CC eal_common_cpuflags.o
  CC eal_common_string_fns.o
  CC eal_common_hexdump.o
  CC eal_common_devargs.o
  CC eal_common_bus.o
  CC eal_common_dev.o
  CC eal_common_options.o
  CC eal_common_thread.o
  CC eal_common_proc.o
  CC rte_malloc.o
  CC malloc_elem.o
  CC malloc_heap.o
  CC rte_keepalive.o
  CC rte_service.o
  CC rte_cpuflags.o
  CC rte_spinlock.o
  SYMLINK-FILE include/exec-env/rte_interrupts.h
  SYMLINK-FILE include/exec-env/rte_kni_common.h
  SYMLINK-FILE include/exec-env/rte_dom0_common.h
  AR librte_eal.a
  INSTALL-LIB librte_eal.a
== Build lib/librte_eal/linuxapp/kni
/usr/src/linux-headers-4.16.0-041600-generic/Makefile:982: "Cannot use CONFIG_STACK_VALIDATION=y, please install libelf-dev, libelf-devel or elfutils-libelf-devel"
  CC [M]  /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/igbuio.o
  CC [M]  /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.o
  CC [M]  /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_adminq.o
  CC [M]  /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_dcb.o
  CC [M]  /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_diag.o
  CC [M]  /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_hmc.o
  CC [M]  /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_nvm.o
/usr/src/linux-headers-4.16.0-041600-generic/Makefile:982: "Cannot use CONFIG_STACK_VALIDATION=y, please install libelf-dev, libelf-devel or elfutils-libelf-devel"
In file included from /root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/i40e_osdep.h:57:0,
                 from /root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/i40e_type.h:28,
                 from /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_adminq.c:25:
/root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/kcompat.h:809:2: error: #error UTS_UBUNTU_RELEASE_ABI is too large...
 #error UTS_UBUNTU_RELEASE_ABI is too large...
  ^
In file included from /root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/i40e_osdep.h:57:0,
                 from /root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/i40e_type.h:28,
                 from /root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/i40e_prototype.h:27,
                 from /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_nvm.c:24:
/root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/kcompat.h:809:2: error: #error UTS_UBUNTU_RELEASE_ABI is too large...
 #error UTS_UBUNTU_RELEASE_ABI is too large...
  ^
In file included from /root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/i40e_osdep.h:57:0,
                 from /root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/i40e_type.h:28,
                 from /root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/i40e_diag.h:27,
                 from /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_diag.c:24:
/root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/kcompat.h:809:2: error: #error UTS_UBUNTU_RELEASE_ABI is too large...
 #error UTS_UBUNTU_RELEASE_ABI is too large...
  ^
In file included from /root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/i40e_osdep.h:57:0,
                 from /root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/i40e_adminq.h:27,
                 from /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_dcb.c:24:
/root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/kcompat.h:809:2: error: #error UTS_UBUNTU_RELEASE_ABI is too large...
 #error UTS_UBUNTU_RELEASE_ABI is too large...
  ^
In file included from /root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/i40e_osdep.h:57:0,
                 from /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_hmc.c:24:
/root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/kcompat.h:809:2: error: #error UTS_UBUNTU_RELEASE_ABI is too large...
 #error UTS_UBUNTU_RELEASE_ABI is too large...
  ^
  CC [M]  /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/kni/kni_misc.o
In file included from /root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/i40e.h:52:0,
                 from /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:25:
/root/openNetVM/dpdk/lib/librte_eal/linuxapp/igb_uio/i40e/kcompat.h:809:2: error: #error UTS_UBUNTU_RELEASE_ABI is too large...
 #error UTS_UBUNTU_RELEASE_ABI is too large...
  ^
/usr/src/linux-headers-4.16.0-041600-generic/scripts/Makefile.build:324: recipe for target '/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_diag.o' failed
make[8]: *** [/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_diag.o] Error 1
make[8]: *** Waiting for unfinished jobs....
  CC [M]  /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/kni/kni_net.o
/usr/src/linux-headers-4.16.0-041600-generic/scripts/Makefile.build:324: recipe for target '/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_hmc.o' failed
make[8]: *** [/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_hmc.o] Error 1
/usr/src/linux-headers-4.16.0-041600-generic/scripts/Makefile.build:324: recipe for target '/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_dcb.o' failed
make[8]: *** [/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_dcb.o] Error 1
/usr/src/linux-headers-4.16.0-041600-generic/scripts/Makefile.build:324: recipe for target '/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_adminq.o' failed
make[8]: *** [/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_adminq.o] Error 1
/usr/src/linux-headers-4.16.0-041600-generic/scripts/Makefile.build:324: recipe for target '/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_nvm.o' failed
make[8]: *** [/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_nvm.o] Error 1
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c: In function 'i40e_get_netdev_stats_struct':
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:425:13: error: implicit declaration of function 'ACCESS_ONCE' [-Werror=implicit-function-declaration]
   tx_ring = ACCESS_ONCE(vsi->tx_rings[i]);
             ^
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:425:11: error: assignment makes pointer from integer without a cast [-Werror=int-conversion]
   tx_ring = ACCESS_ONCE(vsi->tx_rings[i]);
           ^
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c: In function 'i40e_update_vsi_stats':
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:859:5: error: assignment makes pointer from integer without a cast [-Werror=int-conversion]
   p = ACCESS_ONCE(vsi->tx_rings[q]);
     ^
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c: At top level:
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:5750:28: error: 'struct tc_to_netdev' declared inside parameter list [-Werror]
       __be16 proto, struct tc_to_netdev *tc)  
                            ^
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:5750:28: error: its scope is only this definition or declaration, which is probably not what you want [-Werror]
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c: In function '__i40e_setup_tc':
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:5754:8: error: dereferencing pointer to incomplete type 'struct tc_to_netdev'
  if (tc->type != TC_SETUP_MQPRIO)
        ^
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:5754:18: error: 'TC_SETUP_MQPRIO' undeclared (first use in this function)
  if (tc->type != TC_SETUP_MQPRIO)
                  ^
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:5754:18: note: each undeclared identifier is reported only once for each function it appears in
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c: At top level:
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:9950:19: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
  .ndo_setup_tc  = __i40e_setup_tc,
                   ^
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:9950:19: note: (near initialization for 'i40e_netdev_ops.ndo_setup_tc')
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c: In function 'i40e_get_local_mac_addr':
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:12027:2: error: implicit declaration of function 'setup_timer' [-Werror=implicit-function-declaration]
  setup_timer(&pf->service_timer, i40e_service_timer, (unsigned long)pf);
  ^
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c: In function 'i40e_remove':
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:12949:23: error: 'struct timer_list' has no member named 'data'
  if (pf->service_timer.data)
                       ^
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c: In function '__i40e_setup_tc':
/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.c:5761:1: error: control reaches end of non-void function [-Werror=return-type]
 }
 ^
cc1: all warnings being treated as errors
/usr/src/linux-headers-4.16.0-041600-generic/scripts/Makefile.build:324: recipe for target '/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.o' failed
make[8]: *** [/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.o] Error 1
/usr/src/linux-headers-4.16.0-041600-generic/Makefile:1567: recipe for target '_module_/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio' failed
make[7]: *** [_module_/root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio] Error 2
Makefile:146: recipe for target 'sub-make' failed
make[6]: *** [sub-make] Error 2
/root/openNetVM/dpdk/mk/rte.module.mk:78: recipe for target 'igb_uio.ko' failed
make[5]: *** [igb_uio.ko] Error 2
/root/openNetVM/dpdk/mk/rte.subdir.mk:63: recipe for target 'igb_uio' failed
make[4]: *** [igb_uio] Error 2
make[4]: *** Waiting for unfinished jobs....
  LD [M]  /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/kni/rte_kni.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC      /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/kni/rte_kni.mod.o
  LD [M]  /root/openNetVM/dpdk/build/build/lib/librte_eal/linuxapp/kni/rte_kni.ko
INSTALL-MODULE rte_kni.ko
/root/openNetVM/dpdk/mk/rte.subdir.mk:63: recipe for target 'linuxapp' failed
make[3]: *** [linuxapp] Error 2
/root/openNetVM/dpdk/mk/rte.subdir.mk:63: recipe for target 'librte_eal' failed
make[2]: *** [librte_eal] Error 2
/root/openNetVM/dpdk/mk/rte.sdkbuild.mk:76: recipe for target 'lib' failed
make[1]: *** [lib] Error 2
/root/openNetVM/dpdk/mk/rte.sdkroot.mk:128: recipe for target 'all' failed
make: *** [all] Error 2

Error Compiling DPDK During Install Process - numa.h

I was doing a clean install of ONVM, and encountered an error while running openNetVM/scripts/install.sh. This is the output:

== Build lib/librte_eal/linuxapp/igb_uio
== Build lib/librte_eal/linuxapp/eal
  CC eal.o
  CC eal_hugepage_info.o
  CC eal_memory.o
  CC eal_thread.o
  CC eal_log.o
  CC eal_vfio.o
  CC eal_vfio_mp_sync.o
 [PATH_TO_DPDK]/lib/librte_eal/linuxapp/eal/eal_memory.c:56:18: fatal error: numa.h: No such file or directory
 #include <numa.h>
                  ^
compilation terminated.
make[5]: *** [eal_memory.o] Error 1
make[5]: *** Waiting for unfinished jobs....
make[4]: *** [eal] Error 2
make[4]: *** Waiting for unfinished jobs....
  LD      [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/built-in.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/igbuio.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_main.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_adminq.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_dcb.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_diag.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_hmc.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_nvm.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_virtchnl_pf.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_client.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_dcb_nl.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_ethtool.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_lan_hmc.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_ptp.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/kcompat.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_common.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_debugfs.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_fcoe.o
  CC [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/i40e_txrx.o
  LD [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC      [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.mod.o
  LD [M]  [PATH_TO_DPDK]/build/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.ko
  INSTALL-MODULE igb_uio.ko
  make[3]: *** [linuxapp] Error 2
  make[2]: *** [librte_eal] Error 2
  make[1]: *** [lib] Error 2
  make: *** [all] Error 2
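The fatal error above means the libnuma development headers are missing from the build host. On a Debian/Ubuntu system (an assumption; the package name differs on other distributions), installing them and rebuilding DPDK with the commands from the install guide usually resolves it:

```shell
# Install the headers that provide numa.h, then rebuild DPDK
sudo apt-get install -y libnuma-dev
cd dpdk
make config T=$RTE_TARGET
make T=$RTE_TARGET -j 8
```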

Error while setting up OpenNetVM

Hello Sir,
I would like to set up OpenNetVM on my system.

My system configuration is as follows:

brajesh@brajesh:~$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 60
Model name: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
Stepping: 3
CPU MHz: 3890.390
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 7183.36
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7

My NIC Device is

DeviceName: Onboard LAN
Subsystem: Dell Ethernet Connection I217-LM
Kernel driver in use: e1000e
Kernel modules: e1000e

When I configure and compile DPDK, there is a problem; the details are shown below:

brajesh@brajesh:~/openNetVM/scripts$ ./install.sh


ONVM Environment Variables:

RTE_SDK: /home/brajesh/openNetVM
RTE_TARGET: x86_64-native-linuxapp-gcc
ONVM_NUM_HUGEPAGES: 1024
ONVM_SKIP_HUGEPAGES:
ONVM_SKIP_FSTAB:

[sudo] password for brajesh:
Compiling and installing dpdk in /home/brajesh/openNetVM/dpdk
Configuration done using x86_64-native-linuxapp-gcc
== Build lib
== Build lib/librte_compat
== Build lib/librte_eal
== Build lib/librte_eal/common
== Build lib/librte_eal/linuxapp
== Build lib/librte_eal/linuxapp/eal
== Build lib/librte_eal/linuxapp/igb_uio
== Build lib/librte_eal/linuxapp/kni
Building modules, stage 2.
MODPOST 1 modules
Building modules, stage 2.
MODPOST 1 modules
== Build lib/librte_ring
== Build lib/librte_cmdline
== Build lib/librte_timer
== Build lib/librte_cfgfile
== Build lib/librte_lpm
== Build lib/librte_acl
== Build lib/librte_kvargs
== Build lib/librte_jobstats
== Build lib/librte_metrics
== Build lib/librte_power
== Build lib/librte_meter
== Build lib/librte_mempool
== Build lib/librte_eventdev
== Build lib/librte_hash
== Build lib/librte_mbuf
== Build lib/librte_efd
== Build lib/librte_net
== Build lib/librte_cryptodev
== Build lib/librte_reorder
== Build lib/librte_ether
== Build lib/librte_sched
== Build lib/librte_vhost
== Build lib/librte_gro
== Build lib/librte_ip_frag
== Build lib/librte_bitratestats
== Build lib/librte_latencystats
== Build lib/librte_distributor
== Build lib/librte_pdump
== Build lib/librte_kni
== Build lib/librte_port
== Build lib/librte_table
== Build lib/librte_pipeline
== Build buildtools
== Build buildtools/pmdinfogen
== Build drivers
== Build drivers/bus
== Build drivers/mempool
== Build drivers/event
== Build drivers/event/skeleton
== Build drivers/mempool/ring
== Build drivers/mempool/stack
== Build drivers/event/sw
== Build drivers/event/octeontx
== Build drivers/crypto
== Build drivers/net
== Build drivers/crypto/scheduler
== Build drivers/crypto/null
== Build drivers/net/ark
== Build drivers/net/af_packet
== Build drivers/net/avp
== Build drivers/net/bonding
== Build drivers/net/cxgbe
== Build drivers/net/e1000
== Build drivers/net/ena
== Build drivers/net/enic
== Build drivers/net/failsafe
== Build drivers/net/fm10k
== Build drivers/net/i40e
== Build drivers/net/ixgbe
== Build drivers/net/liquidio
== Build drivers/net/nfp
== Build drivers/net/bnxt
== Build drivers/net/null
== Build drivers/net/qede
== Build drivers/net/ring
== Build drivers/net/sfc
== Build drivers/net/tap
== Build drivers/net/thunderx
== Build drivers/net/virtio
== Build drivers/net/vmxnet3
== Build drivers/net/kni
== Build drivers/net/vhost
== Build app
== Build app/test-pmd
== Build app/proc_info
== Build app/pdump
== Build app/test-eventdev
== Build app/test-crypto-perf
Build complete [x86_64-native-linuxapp-gcc]
Configuration done using x86_64-native-linuxapp-gcc
== Build lib
== Build lib/librte_compat
== Build lib/librte_eal
== Build lib/librte_eal/common
== Build lib/librte_eal/linuxapp
== Build lib/librte_eal/linuxapp/eal
== Build lib/librte_eal/linuxapp/igb_uio
== Build lib/librte_eal/linuxapp/kni
Building modules, stage 2.
Building modules, stage 2.
MODPOST 1 modules
MODPOST 1 modules
== Build lib/librte_ring
== Build lib/librte_cmdline
== Build lib/librte_timer
== Build lib/librte_cfgfile
== Build lib/librte_lpm
== Build lib/librte_kvargs
== Build lib/librte_acl
== Build lib/librte_jobstats
== Build lib/librte_metrics
== Build lib/librte_power
== Build lib/librte_meter
== Build lib/librte_mempool
== Build lib/librte_eventdev
== Build lib/librte_hash
== Build lib/librte_mbuf
== Build lib/librte_efd
== Build lib/librte_reorder
== Build lib/librte_net
== Build lib/librte_cryptodev
== Build lib/librte_ether
== Build lib/librte_sched
== Build lib/librte_vhost
== Build lib/librte_ip_frag
== Build lib/librte_gro
== Build lib/librte_bitratestats
== Build lib/librte_latencystats
== Build lib/librte_distributor
== Build lib/librte_kni
== Build lib/librte_pdump
== Build lib/librte_port
== Build lib/librte_table
== Build lib/librte_pipeline
== Build buildtools
== Build buildtools/pmdinfogen
== Build drivers
== Build drivers/bus
== Build drivers/mempool
== Build drivers/event
== Build drivers/event/skeleton
== Build drivers/event/octeontx
== Build drivers/event/sw
== Build drivers/mempool/ring
== Build drivers/mempool/stack
== Build drivers/net
== Build drivers/crypto
== Build drivers/net/af_packet
== Build drivers/net/avp
== Build drivers/net/ark
== Build drivers/net/cxgbe
== Build drivers/crypto/scheduler
== Build drivers/net/ena
== Build drivers/net/e1000
== Build drivers/net/bonding
== Build drivers/crypto/null
== Build drivers/net/enic
== Build drivers/net/failsafe
== Build drivers/net/fm10k
== Build drivers/net/i40e
== Build drivers/net/liquidio
== Build drivers/net/ixgbe
== Build drivers/net/nfp
== Build drivers/net/bnxt
== Build drivers/net/null
== Build drivers/net/qede
== Build drivers/net/ring
== Build drivers/net/sfc
== Build drivers/net/thunderx
== Build drivers/net/tap
== Build drivers/net/virtio
== Build drivers/net/vmxnet3
== Build drivers/net/kni
== Build drivers/net/vhost
== Build app
== Build app/test-pmd
== Build app/proc_info
== Build app/test-crypto-perf
== Build app/pdump
== Build app/test-eventdev
Build complete [x86_64-native-linuxapp-gcc]
Installation cannot run with T defined and DESTDIR undefined
Configuring 1024 hugepages with size 1048576
huge /mnt/huge hugetlbfs defaults 0 0
Mounting hugepages
Creating 1024 hugepages
Configuring environment
Loading uio kernel modules
insmod: ERROR: could not insert module igb_uio.ko: Required key not available

Please resolve the problem.
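"Required key not available" from insmod is typically UEFI Secure Boot rejecting the unsigned igb_uio.ko, rather than an openNetVM problem. One way to confirm (assuming mokutil is installed):

```shell
# Check whether Secure Boot is enabled
mokutil --sb-state
# If it prints "SecureBoot enabled", either disable Secure Boot in the
# BIOS/UEFI settings or sign igb_uio.ko with a key enrolled via mokutil,
# then retry loading the module.
```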

Git submodules problem

When I use the command
git submodule update
it reports the following problem:

fatal: reference is not a tree: 091bae1beaeedd572be794eb04566903ba10bbcf
Unable to checkout 'd862f7a09ebc321ec54f8a910bcd410f5c4b8376' in submodule path 'dpdk'
Unable to checkout '091bae1beaeedd572be794eb04566903ba10bbcf' in submodule path 'tools/Pktgen/pktgen-dpdk'

How can I fix it?
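This error means the commits pinned by the superproject are not present in the local submodule clones. Re-syncing and re-fetching the submodules, the same commands the project README recommends after a DPDK version bump, usually fixes it:

```shell
# Refresh submodule URLs/refs, then fetch and check out the pinned commits
git submodule sync
git submodule update --init --recursive
```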

Low performance with OpenNetVM

Hi all,
I'm trying to do a performance evaluation of OpenNetVM with a simple application (examples/bridge). The test is RFC 2544 throughput testing. However, the result (zero-loss point) is quite low for small packet sizes compared to the DPDK l2fwd example:

Packet size (bytes)   OpenNetVM bridge app (% of 10GbE)   Intel DPDK l2fwd app (% of 10GbE)
64                    28                                  80
128                   51                                  100
256                   94                                  100
512                   100                                 100

The setup topology is a direct cross-over connection between the two servers (no switch), with the bridge app running on the device under test.
Both servers are Dell R730s with 192 GB of RAM and 24 cores each.
The NICs are on socket 0, which owns cores 0,2,4,6,8,10,12,14,...,22. All of these cores are isolated via the grub configuration.
Manager: ./go.sh 0,2,4,6,8,10,12 3
Bridge app: ./go 14 1

Could you please give me some hints on how to get better test results?
Thanks and best regards,
Lapd

Execution of example NFs runs into problems

Hello sir,
When I execute the flow_table NF given in the examples, it fails with the following error. Other example NFs fail in a similar way.

cloudcomp@cloudcomp:~/openNetVM/examples/flow_table$ ./go.sh 3,5,7 1
[sudo] password for cloudcomp:
Sorry, try again.
[sudo] password for cloudcomp:
EAL: Detected 16 lcore(s)
PANIC in rte_eal_config_attach():
Cannot open '/var/run/.rte_config' for rte_mem_config
7: [/home/cloudcomp/openNetVM/examples/flow_table/build/app/flow_table(_start+0x29) [0x442799]]
6: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7ffff71f9830]]
5: [/home/cloudcomp/openNetVM/examples/flow_table/build/app/flow_table(main+0x13) [0x43fb43]]
4: [/home/cloudcomp/openNetVM/examples/flow_table/build/app/flow_table(onvm_nflib_init+0x2b) [0x6a99eb]]
3: [/home/cloudcomp/openNetVM/examples/flow_table/build/app/flow_table(rte_eal_init+0xc2d) [0x48259d]]
2: [/home/cloudcomp/openNetVM/examples/flow_table/build/app/flow_table(__rte_panic+0xc3) [0x43a657]]
1: [/home/cloudcomp/openNetVM/examples/flow_table/build/app/flow_table(rte_dump_stack+0x2b) [0x48b96b]]
Aborted (core dumped)
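This panic usually means the NF (a DPDK secondary process) was started before the openNetVM manager (the primary process, which creates /var/run/.rte_config). A typical startup sequence, with manager core/port arguments shown only as example values:

```shell
# Terminal 1: start the manager first; it creates /var/run/.rte_config
cd onvm && ./go.sh 0,1,2 1

# Terminal 2: only then launch the NF
cd examples/flow_table && ./go.sh 3,5,7 1
```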

Help with packetgen

Hi,

I was getting errors when trying to use Pktgen instead of the provided speed_tester app to measure throughput. Can someone please add a README with a simple example illustrating how that works?

Thanks,
Adil

Problems when running example NF test_flow_dir

Hi,

I was getting problems when trying to run NF test_flow_dir.

When sending packets to openNetVM, this NF outputs nothing and then crashes (with no error messages at all). I found that the call tbl_index = rte_hash_lookup_with_hash(table->hash, (const void *)&key, pkt->hash.rss); in int onvm_ft_lookup_pkt(...) returns a wrong result. But why? Is it some special mechanism of rte_hash in DPDK, or a bug in openNetVM?

Can someone give me some suggestions to make it work correctly?

Thanks,
Zhilong

OpenNetVM manager will crash if an NF's service ID is greater than or equal to 16

To reproduce this bug, start the OpenNetVM manager and any NF with a service ID of 16 or greater; the manager will crash immediately. The reason is the following:

The "mz_services" array is allocated only 16 entries, because we currently support 16 NFs (actually 15, since ID 0 is reserved). But when an NF sends an NF_READY message to the manager with its service ID, the manager directly accesses mz_services[service_id] without checking that service_id is less than 16, which crashes the manager. The same unchecked access also applies to "nf_per_service_count", which is likewise a 16-entry array.

Code error in onvm_pkt_common.c

At line 224, in the function onvm_pkt_flush_port_queue,
tx_stats = &(ports->tx_stats); should be
tx_stats = &(ports[port]->tx_stats);

Regarding NIC information

I want to ask a basic question. I'm facing some issues, so I need some clarification about NICs.

  1. Is it necessary to give 2 NICs to the DPDK driver?
  2. Is it necessary that the NIC be 10G?

Another question is related to CPU cores. I have 40 cores in my system, but when I check the CPU architecture with the corehelper.py script, it shows only 5 cores. How can I allocate all the cores to ONVM so that more NFs can run?
Thanks in advance.

web console script should serve onvm_web directory

This issue describes a simple bug/improvement for the OpenNetVM code, and is a great way for YOU to get involved with improving this open source project! If you want to fix it, just create a pull request with the fix as detailed in the Contributing document. If you have questions, post here!


When running the manager with the -s web argument, the output gives incorrect advice for starting the web console server:

...
APP: Core 1: Running TX thread for NFs 1 to 15
APP: Core 2: Running RX thread for RX queue 0
APP: Core 0: Running master thread
APP: ONVM stats can be viewed through the web console
APP: 	To activate, please run $ONVM_HOME/onvm_web/start_web_console.sh

If you run this command, it will cause whatever the current directory is (typically onvm/) to be served as the web root.

To fix this, we should change the start_web_console.sh script so that it serves the onvm_web directory regardless of where the script is run from.

We should also change this message to something like:

To ensure the web console is active, run $ONVM_HOME/onvm_web/start_web_console.sh

Since the user may have started it already.
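One possible sketch of the script change: have start_web_console.sh cd into its own directory before serving. The server command below is a hypothetical stand-in; the actual command used by the script may differ.

```shell
#!/bin/bash
# Serve onvm_web/ (the directory containing this script) as the web root,
# regardless of the caller's current working directory.
cd "$(dirname "$(readlink -f "$0")")" || exit 1
python3 -m http.server 8000   # stand-in for whatever server the script starts
```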

install.sh doesn't remove the old igb_uio module and install the new one when compiling dpdk

Bug Report

Current Behavior
install.sh doesn't reinstall the igb_uio module generated by dpdk if it finds that the system already has one installed. Since dpdk provides the igb_uio driver, which is updated from time to time, the old igb_uio module should be removed and replaced by dpdk's. Otherwise this can cause a run-time error when running dpdk_iface_main to create a dpdk-iface device.

Expected behavior/code
The igb_uio module should be removed and re-installed each time dpdk is compiled.

Steps to reproduce

  • Allocate a new machine (or reboot an existing one) on CloudLab; the system already has a default igb_uio installed. Use grep -m 1 "igb_uio" /proc/modules to check:

igb_uio 2446317 0 - Live 0x0000000000000000 (OX)

  • Then run install.sh to compile dpdk, and run grep -m 1 "igb_uio" /proc/modules again:

igb_uio 2446317 0 - Live 0x0000000000000000 (OX)

The old igb_uio module was not replaced by the new one.

  • Run setup_mtcp_onvm_env.sh to create a dpdk-iface device; an error occurs:
xiaosuGW@node2:~/mtcp$ ./setup_mtcp_onvm_env.sh
Checking ldflags.txt...
RTE_SDK env variable is set to /users/xiaosuGW/openNetVM-dev/dpdk
RTE_TARGET env variable is set to x86_64-native-linuxapp-gcc
Are you using an Intel NIC (y/n)? y
Creating dpdk interface entries
make -C /lib/modules/3.13.0-117-generic/build/ M=/users/xiaosuGW/mtcp/dpdk-iface-kmod modules
make[1]: Entering directory `/usr/src/linux-headers-3.13.0-117-generic'
  Building modules, stage 2.
  MODPOST 1 modules
make[1]: Leaving directory `/usr/src/linux-headers-3.13.0-117-generic'
make -C /lib/modules/3.13.0-117-generic/build/ M=/users/xiaosuGW/mtcp/dpdk-iface-kmod modules
make[1]: Entering directory `/usr/src/linux-headers-3.13.0-117-generic'
  Building modules, stage 2.
  MODPOST 1 modules
make[1]: Leaving directory `/usr/src/linux-headers-3.13.0-117-generic'
sudo ./dpdk_iface_main
Removing existing device node entry... not present.
Creating device node entry... done.
Setting permissions on the device node entry... done.
Scanning the system for dpdk-compatible devices... done.
Clearing previous entries
ioctl call failed!
make: *** [run] Error 1

Environment

  • OS: cloudlab machines
  • onvm version: develop branch with commit e305489b76c868e722470c7799625de90d5f37bb
  • dpdk version: 18.11.0
  • mtcp: branch onvm_build_readme_fix

Possible Solution
Removing the old igb_uio and re-installing the new one from $RTE_SDK/$RTE_TARGET/kmod resolves this problem.
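That fix could be sketched as the following shell commands (module path taken from the report above; run with root privileges, and note that rmmod fails if anything still depends on igb_uio):

```shell
# Replace a stale igb_uio with the freshly built module
sudo rmmod igb_uio 2>/dev/null || true   # ignore the error if it isn't loaded
sudo modprobe uio                        # igb_uio depends on the uio module
sudo insmod "$RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko"
```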
