
Comments (15)

eugeneia commented on August 15, 2024

Might be that you need to make Linux release the NIC/PCI resource first? Seems like Linux has already attached a driver. A sudo ./snabb pci_bind --unbind <pciaddr> should do. (Make sure the machine doesn’t actually need that interface for connectivity! I tend to shoot myself in the foot that way sometimes.)
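(If you want to check which kernel driver currently owns the device first, readlink /sys/bus/pci/devices/<pciaddr>/driver is a quick way to see it; that’s a generic Linux sysfs check, not a Snabb command.)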

If that doesn’t fix it, could you include the output of sudo strace -f ./snabb learningswitch? Seems like mapping the device memory failed for some reason, and it could be useful to get the error from the mmap syscall.

Another thing off the top of my head: make sure the kernel is booted with iommu=off; Snabb doesn’t play well with the IOMMU enabled for the time being.

As an aside: in case you didn’t see it, there is an L2 learning bridge app in apps/bridge. Might be a useful reference.
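For a rough idea of how it wires up, a sketch of bridging two ports might look like this (the ports list and per-port link names follow the apps/bridge README; the PCI addresses are placeholders, so treat the details as assumptions):

local intel_mp = require("apps.intel_mp.intel_mp")
local bridge   = require("apps.bridge.learning")

local c = config.new()
-- Placeholder PCI addresses; each bridge port name doubles as a link name.
config.app(c, "nic1", intel_mp.Intel, {pciaddr = "0000:01:00.0"})
config.app(c, "nic2", intel_mp.Intel, {pciaddr = "0000:01:00.1"})
config.app(c, "bridge", bridge.bridge, {ports = {"p1", "p2"}})

config.link(c, "nic1.output -> bridge.p1")
config.link(c, "bridge.p1 -> nic1.input")
config.link(c, "nic2.output -> bridge.p2")
config.link(c, "bridge.p2 -> nic2.input")

engine.configure(c)
engine.main({duration = 10, report = {showlinks = true}})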


alexandergall commented on August 15, 2024

This looks a lot like an instance of #1286. Unbinding the device manually as @eugeneia suggests should be a valid workaround. Unfortunately, there is no proper fix for this yet, AFAIK.


irevoire commented on August 15, 2024

Thanks! This worked.
I think the best way to "fix" it would be to put it in the documentation at the start of all the driver parts; I will probably do a little patch once I understand how to make my code work 😀

Now I've done a setup where enp1s0f1 sends packets to enp4s0f0. I can watch the packets going from one interface to the other, but when I run my Snabb code on enp4s0f0 I never see the print got a packet appearing.
Here is the full output:

% sudo ./snabb learningswitch
init switch                                  
[mounting /var/run/snabb/hugetlbfs]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 0 -> 1]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 1 -> 2]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 2 -> 3]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 3 -> 4]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 4 -> 5]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 5 -> 6]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 6 -> 7]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 7 -> 8]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 8 -> 9]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 9 -> 10]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 10 -> 11]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 11 -> 12]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 12 -> 13]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 13 -> 14]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 14 -> 15]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 15 -> 16]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 16 -> 17]
link report:
                   0 sent on nic2.tx -> learning.input (loss rate: 0%)

Also, @eugeneia, I wanted to use the learning bridge, but since I didn't really get how everything worked I first want to reimplement the sink and repeater applications on a real interface.


eugeneia commented on August 15, 2024

I think the best way to "fix" it would be to put it in the documentation at the start of all the driver parts; I will probably do a little patch once I understand how to make my code work 😀

The expected behaviour is that you shouldn’t have to do this; Snabb should just unbind it automatically. So I think we need to fix the underlying issue described in #1286, as @alexandergall mentioned (which includes a suggested approach, IIUC).

I think your problem is that nic2.tx should be nic2.output. I.e., the links intel_mp wants to use are called input and output. Sorry for the confusion; our old Intel 82599 driver used rx/tx, but at some point we decided those names were too cryptic and changed them for new apps.

I collected some hints before realizing this; maybe they are still useful:

0 sent on nic2.tx -> learning.input (loss rate: 0%)

This basically means that the NIC didn’t receive any packets, or at least it didn’t put any packets on its tx link. If you run Snabb with SNABB_SHM_KEEP=y, the process will leave behind its internal state counters, which you can query using snabb shm to get more detailed device statistics:

$ sudo ./snabb shm /var/run/snabb/intel-mp/01:00.0/stats/
or
$ sudo ./snabb shm /var/run/snabb/30407/apps/nic/pci/01:00.0/

In the latter case 30407 is the pid of the Snabb process. You can also query these while the process is running (even without SNABB_SHM_KEEP). You might do something like

while true do
   engine.main({duration=10, report = {showlinks=true}})
end

and leave the process running while debugging.
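For instance, here is a sketch of how that loop could slot into the learningswitch program (the app and module names are guesses based on the link report above, not your actual code):

function run (parameters)
   local c = config.new()
   config.app(c, "learning", switch.Switch, parameters)              -- assumed
   config.app(c, "nic2", intel_mp.Intel, {pciaddr = "0000:04:00.0"}) -- assumed
   config.link(c, "nic2.output -> learning.input")
   engine.configure(c)
   while true do
      -- Print link counters every 10 seconds and keep running, so the shm
      -- state can be inspected from another terminal.
      engine.main({duration = 10, report = {showlinks = true}})
   end
end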

One interesting field (beyond checking whether there are any rx packets) is the status register: status: 2 means the driver didn’t get a “link up” event, i.e. the link is down. You can use wait_for_link=true in the intel_mp config to tell the driver to block until it gets a link up. (This of course assumes you actually have a cable connected and are not intending to use the NIC in loopback mode.)
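For example, a minimal sketch (same PCI address as elsewhere in this thread; wait_for_link is the only addition):

-- Block in the driver until the device reports link up (needs a cable).
config.app(c, "nic2", intel_mp.Intel,
           {pciaddr = "0000:04:00.0", wait_for_link = true})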

(In this case, where there is no output link configured, none of this helps, because intel_mp will just idle and never get around to syncing device stats...)

Also, you could run the intel_mp selftest on two ports connected to each other to make sure the NIC is functional. (Is iommu=off?)

sudo SNABB_PCI_INTEL0=01:00.0 SNABB_PCI_INTEL1=01:00.1 apps/intel_mp/selftest.sh


irevoire commented on August 15, 2024

Ok so I changed my code to this:

function run (parameters)
        local c = config.new()

        config.app(c, "learning", switch.Switch, parameters)
        config.app(c, "nic2", intel_mp, {pciaddr="0000:04:00.0"})

        config.link(c, "nic2.output -> learning.input")

        engine.configure(c)
        engine.main({duration=10, report = {showlinks=true}})
end

And then ran it like this: SNABB_SHM_KEEP=y sudo ./snabb learningswitch

Still got zero packets, BUT, as you guessed, when looking at the stats I got this:

macaddr: 57,237,940,804,352
mtu: 9,014
promisc: 1
q0_rxbytes: 0
q0_rxdrops: 0
.
.
.
status: 2
type: 4,096

So the link is down.
The thing is, once Snabb unbinds the interface I don't know how to bring it up, since I can't see it when doing an ip link.

And obviously when I run the tests they fail; here is what I get:
https://gist.github.com/irevoire/6d0b64b01654837e956292b7bf92fbba

As for the IOMMU, I don't know how I can be sure it's not running.
I tried this:

 % dmesg | grep -e DMAR -e IOMMU                                                             
[    0.008076] ACPI: DMAR 0x000000007B2FDF58 0000A8 (v01 INTEL  SKL      00000001 INTL 00000001)
[    0.307658] DMAR: Host address width 39
[    0.307659] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.307663] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[    0.307663] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.307665] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.307666] DMAR: RMRR base: 0x0000007a718000 end: 0x0000007a737fff
[    0.307667] DMAR: RMRR base: 0x0000007b800000 end: 0x0000007fffffff
[    0.307668] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.307668] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.307669] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.309103] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    2.119108] vboxpci: IOMMU not found (not registered)

As you can see here, the IOMMU seems to be off. I also tried adding iommu=off at boot time to be sure it's disabled, and I still get the same error and the same dmesg | grep -e DMAR -e IOMMU output.


eugeneia commented on August 15, 2024

You can restore device control to the kernel via sudo ./snabb pci_bind --bind <pci>.

If you added iommu=off to the boot command line, I think we can be sure it's disabled. From searching the web I gathered that dmesg should have printed something like

[    0.000000] DMAR: IOMMU enabled

if it was enabled.

I’d double-check that the ports are cabled, maybe reboot for good luck. ;-)

Just to mention it, you can also test your code on a single port of the NIC. I.e., you can have two instances of the intel_mp driver on one port using VMDq, and the NIC will switch packets between them, no cable required.

config.app(c, "nic1", intel_mp, {pciaddr="0000:04:00.0", vmdq=true, macaddr="52:00:00:00:00:01"})
config.app(c, "nic2", intel_mp, {pciaddr="0000:04:00.0", vmdq=true, macaddr="52:00:00:00:00:02"})


irevoire commented on August 15, 2024

Well, even after running the sudo ./snabb pci_bind --bind <pci> command, most of the time I still can't see my NIC with ip link.

I've done a reboot, then brought my two interfaces up and put a link between them.
Then I started sending packets on one of the interfaces and checked with Wireshark that everything worked:
[Wireshark screenshot]

After this I ran the following commands:

irevoire@irevoire ~/snabb/src % sudo lshw -class network -businfo 
Bus info          Device          Class          Description
============================================================
pci@0000:01:00.0  enp1s0f0        network        82580 Gigabit Network Connection
pci@0000:01:00.1  enp1s0f1        network        82580 Gigabit Network Connection
pci@0000:01:00.2  enp1s0f2        network        82580 Gigabit Network Connection
pci@0000:01:00.3  enp1s0f3        network        82580 Gigabit Network Connection
pci@0000:02:00.0  eth5            network        I210 Gigabit Network Connection
pci@0000:03:00.0  eno1            network        I210 Gigabit Network Connection
pci@0000:04:00.0  enp4s0f0        network        82599ES 10-Gigabit SFI/SFP+ Network Connection
pci@0000:04:00.1  enp4s0f1        network        82599ES 10-Gigabit SFI/SFP+ Network Connection
                  docker0         network        Ethernet interface
irevoire@irevoire ~/snabb/src % sudo ./snabb pci_bind --unbind 0000:04:00.0
Unbound 0000:04:00.0, ready for Snabb.                      
irevoire@irevoire ~/snabb/src % sudo ./snabb pci_bind --unbind 0000:04:00.1
Unbound 0000:04:00.1, ready for Snabb.
irevoire@irevoire ~/snabb/src % sudo SNABB_PCI_INTEL0=0000:04:00.0 SNABB_PCI_INTEL1=0000:04:00.1 apps/intel_mp/selftest.sh 

And the tests still failed ☹️

FAILED: ./test_10g_1q_blast.sh

Also, when running % sudo ./snabb shm /var/run/snabb/intel-mp/04:00.X/stats I still get status: 2.

I don't know if you have any ideas left, but I don't get what is going on.


eugeneia commented on August 15, 2024

I suppose in this case it would be interesting to know how exactly the tests failed; what’s the cause? Did it not get a link up? Is there some sort of error message? Could you include the output of

sudo SNABB_PCI_INTEL0=0000:04:00.0 SNABB_PCI_INTEL1=0000:04:00.1 strace -f apps/intel_mp/selftest.sh

in a gist?


irevoire commented on August 15, 2024

The logs were too big, so I've uploaded everything to my server:
https://irevoire.ovh/stdout.txt
and
https://irevoire.ovh/stderr.txt


eugeneia commented on August 15, 2024

Hmm, yeah, I naively underestimated the amount of data that would produce. Either there is nothing suspicious in there or I can’t find it. Does using the NIC in VMDq mode (#1434 (comment)) work?

If you are willing to give me access to the box I could try to debug this interactively when I find some time. My SSH pubkey is

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQC/zwFEk3x5wI0hZAr91DIWRL0YlWwBgJ0XoFJE0aRnblQ842Cg7cKAgNVnRhgBd8Wz1xGxAOOE0uTGuUs+2wP9/XAL82pjOg9gPqL2B55NnihK4MDykAzGrTZQlUoaVH4ukmiSyaw3W83BnLjg/lQue/71DYhmmWYYj5W1RNLsQMHW/7Ddp/3vJv+Ltffct01eQvzG809/PLz7hCTNFauWTLEWB6hPXBpVR8gMRlOaDzEoGo0/lTKPZNwbPTIGrdRWWhOXfF+JBEl20lS8MVFcC66aHQoIPEg4ADJtyNJMYB1lFH4Pm+fgeaU+j6d621ju45EWOgLwSw49EjaITKnnrrOv/B+lCeIbFEi9J+Whr77KU1PsVqSkfbqStoWOWIlQJmyhuq3FDUZaYj7LSDjbSxJhmqd+SODLz1wJ1/dP2mCdErI4QyXfbV4f6AIdDYXQ4s7R3XJ2yn4rdXFDnYhJgbQ/IZIqMpg1NGjeNBJfahzzMSZTItCMb1kyY6dCMruQiEr1RRlQkIQurYVkq5NrBg2DHbmA5ZmZvd41h58o34tEsCe9cTaUdmYoiA1PHCtsl4LaEOvsjzqr7mTdfT1Le0v1//4k65XpRf9peNxtyTs1c899i2iq7WLTdrssuPo/AOrB3dm3hcUIwqO/toHAN/vKHht4242UypDLJXEcXLQmafCEiI1xW9Q9ZbDTYCksJ20WzVW5LCe0CyXMyB/0AuRvnaTDbUANH+J7JKh5zuhtjBcmYzTFt8QkJjj4yRTTMlSxC6T2JvJxaSf25kJ7eHzt+zPiQ1QN7jECPpi5jpxIcy4GQk7AfbDW5DMI1SM250Kao6BLBZ5cI5fFIufMMmLdHLaWgC9tF/A5p0c+etvXMQFkdZ05FE+aHqVrabArHIAIiNfzKKDaGyTPh9X4s0f4lWeYhu0vlEU69JW05tYm+HP+1j1lARKwKlbQ509sxP4126irMtV6ksO/3IrryKlTFMaKax10fJwvfRwQkNjuYvd5I2CWN7oGinjggAO757nI6gK+D0WfilAPguS21CFq+9hyA2THs5KXfXap2dsqFmJCiu78KslcDmCTG0PwenBii2SrYuzddJnGjTk0HMZc26nj02XgoQhlaVOvjQYzx+8PPg5V6qwjcOhKRp4/7wwFWqt1twj4O3SBd1PhTFrY+SFfSaGNTqaeWiaLkQ1nN5UsNNTonLPiCj8gwsJKg5MwwOlFcPxyIjdXayQ3dRBiyyW8sRPHx/vyK0Xt3uH3dTBMt+oxTOlxj6s0jWIJ6zbsBiyATsvf8HwNeX1KU2NSrgUj+oarmuYKa+PX2+N0EKF9u0v9iN99LH/1/v4ilpSwwugZnWXwXdJeXjqn

Sorry for the frustrating first experience. :-/


irevoire commented on August 15, 2024

Actually, I don't know how I could check whether anything works with VMDq. Could you provide some sort of main I could run?

Since I'm running these tests on my company computer, I can't really give you any access to it, sorry ☹️


eugeneia commented on August 15, 2024

You can configure two intel_mp apps on a single port using vmdq. Check this out:

#!snabb snsh

local intel_mp = require("apps.intel_mp.intel_mp")
local basic_apps = require("apps.basic.basic_apps")
local synth = require("apps.test.synth")

nic_pci = "01:00.0"
mac_vif1 = "52:00:00:00:00:01"
mac_vif2 = "52:00:00:00:00:02"

local c = config.new()

config.app(c, "vif1", intel_mp.Intel,
           {pciaddr=nic_pci, vmdq=true, macaddr=mac_vif1})
config.app(c, "vif2", intel_mp.Intel,
           {pciaddr=nic_pci, vmdq=true, macaddr=mac_vif2})

config.app(c, "source", synth.Synth, {src=mac_vif1, dst=mac_vif2})
config.app(c, "sink", basic_apps.Sink)

config.link(c, "source.output -> vif1.input")
config.link(c, "vif2.output -> sink.input")

engine.configure(c)

engine.main{duration=10}
engine.report_links()

I saved this as vmdq_source_sink.snabb and ran it like so:

-bash-4.3$ sudo ./snabb snsh vmdq_source_sink.snabb
link report:
         109,529,314 sent on source.output -> vif1.input (loss rate: 2%)
         109,516,684 sent on vif2.output -> sink.input (loss rate: 0%)

This will use the NIC but does not actually depend on the link (the packets are switched on the chip). If this works for you, we at least know the card works and it’s just the link-up we are having issues with.


irevoire commented on August 15, 2024

Ok so I just changed nic_pci = "01:00.0" to nic_pci = "0000:04:00.0" and here is what I got:

% sudo ./snabb snsh vmdq_source_sink.snabb
[mounting /var/run/snabb/hugetlbfs]
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 0 -> 1]
...
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 80 -> 81]
link report:
               4,779 sent on source.output -> vif1.input (loss rate: 99%)
                   0 sent on vif2.output -> sink.input (loss rate: 0%)
sudo ./snabb snsh vmdq_source_sink.snabb  10,03s user 0,05s system 99% cpu 10,115 total

Also, I was wondering: is it normal to have this many huge pages allocated?


eugeneia commented on August 15, 2024

Huge page allocation seems normal. It depends on the application, but Snabb will initially fill the packet free list with hugepage/DMA memory, and ~100 hugepages just for that is normal.
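If you want to watch the resulting allocation, the kernel exposes the count via procfs (plain Lua, standard Linux path; nothing Snabb-specific):

-- Read the kernel's current hugepage count; this is the same value the
-- "sysctl vm.nr_hugepages" log lines above are adjusting.
local f = assert(io.open("/proc/sys/vm/nr_hugepages"))
print("nr_hugepages:", f:read("*l"))
f:close()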

What you are experiencing certainly looks like the IOMMU issue. I just checked back with the configuration we use in the lab and noticed the boot parameter we set is actually intel_iommu=off (I mistakenly suggested iommu=off, without the intel_).
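(On a GRUB-based system that typically means adding it to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and regenerating grub.cfg, e.g. with grub-mkconfig; the exact steps vary by distro, so treat this as a generic hint rather than Snabb guidance.)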

By the way, could you test if #1436 fixes your initial problem?


irevoire commented on August 15, 2024

Hello,

Sorry, I was working on something else. So I updated my boot parameters; now when I run dmesg I get this:

% dmesg | grep -i iommu 
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-linux root=UUID=7a076980-bd4b-40d1-b5cf-91cf11f72caf rw quiet intel_iommu=off
[    0.193999] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-linux root=UUID=7a076980-bd4b-40d1-b5cf-91cf11f72caf rw quiet intel_iommu=off
[    0.194037] DMAR: IOMMU disabled
[    0.308985] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    2.080840] vboxpci: IOMMU not found (not registered)

The DMAR: IOMMU disabled line seems like a good thing.
But when I run the selftest it still doesn't work.

When I run the VMDq test I also get the same result:

irevoire@irevoire ~/snabb/src % sudo ./snabb snsh vmdq_source_sink.snabb
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 93 -> 94]
...
[memory: Provisioned a huge page: sysctl vm.nr_hugepages 156 -> 157]
link report:
               4,779 sent on source.output -> vif1.input (loss rate: 99%)
                   0 sent on vif2.output -> sink.input (loss rate: 0%)

I will test your fix in a few minutes.

