
Mailing List Archive: NTop: Misc

Strange pf_ring compile issues

 

 



josip.djuricic at gmail

Sep 6, 2011, 1:02 PM

Post #1 of 3
Strange pf_ring compile issues

Hi,

We recently bought the DNA-enabled e1000e driver (for the Intel 82571EB)
and decided to implement pf_ring as part of our test solution.

But we have strange compile issues on a Dell PowerEdge 1950 (2x Intel dual
core, 3.0 GHz). The operating system is 64-bit Gentoo Linux, with a
self-compiled 2.6.39-r3 kernel. The compile finishes normally, and I can
then load pf_ring with:

insmod pf_ring.ko transparent_mode=2 enable_tx_capture=0 quick_mode=1

and it loads normally:
cat /proc/net/pf_ring/info
PF_RING Version : 4.7.3 ($Revision: $)
Ring slots : 4096
Slot version : 13
Capture TX : No [RX only]
IP Defragment : No
Socket Mode : Quick
Transparent mode : No (mode 2)
Total rings : 0
Total plugins : 0
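
For reference, the module parameters mean roughly the following (my reading
of the PF_RING documentation, so treat it as a paraphrase rather than
anything authoritative):

# transparent_mode=2  - a PF_RING-aware driver delivers packets to PF_RING
#                       only; they are not passed on to the kernel stack
# enable_tx_capture=0 - do not capture transmitted packets
#                       (hence "Capture TX : No [RX only]" above)
# quick_mode=1        - optimize for a single capture socket per interface
#                       (hence "Socket Mode : Quick" above)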

Then trying to insert the e1000e driver produces this:
[ 882.456375] e1000e: Intel(R) PRO/1000 Network Driver - 1.3.10a-NAPI
[ 882.456378] e1000e: Copyright(c) 1999 - 2011 Intel Corporation.
[ 882.456410] e1000e 0000:0b:00.0: Disabling ASPM L1
[ 882.456425] e1000e 0000:0b:00.0: PCI INT A -> GSI 16 (level, low) ->
IRQ 16
[ 882.456438] e1000e 0000:0b:00.0: setting latency timer to 64
[ 882.456485] e1000e 0000:0b:00.0: PCI INT A disabled
[ 882.456489] e1000e: probe of 0000:0b:00.0 failed with error -5
[ 882.456494] e1000e 0000:0b:00.1: Disabling ASPM L1
[ 882.456507] e1000e 0000:0b:00.1: PCI INT B -> GSI 17 (level, low) ->
IRQ 17
[ 882.456514] e1000e 0000:0b:00.1: setting latency timer to 64
[ 882.456549] e1000e 0000:0b:00.1: PCI INT B disabled
[ 882.456552] e1000e: probe of 0000:0b:00.1 failed with error -5
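
Error -5 is -EIO, i.e. the probe routine reported a generic I/O error. If
you want to double-check a negative errno from a probe failure, the numeric
values live in the kernel headers (assuming linux-headers is installed):

grep EIO /usr/include/asm-generic/errno-base.h
# -> #define EIO 5 /* I/O error */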


But if I compile the same driver on my home machine and just transfer it
to the server, it loads fine:
[ 1720.874465] e1000e: Intel(R) PRO/1000 Network Driver - 1.3.10a-NAPI
[ 1720.874468] e1000e: Copyright(c) 1999 - 2011 Intel Corporation.
[ 1720.874500] e1000e 0000:0b:00.0: Disabling ASPM L1
[ 1720.874515] e1000e 0000:0b:00.0: PCI INT A -> GSI 16 (level, low) ->
IRQ 16
[ 1720.874529] e1000e 0000:0b:00.0: setting latency timer to 64
[ 1720.874636] e1000e 0000:0b:00.0: irq 106 for MSI/MSI-X
[ 1721.029675] e1000e 0000:0b:00.0: eth2: (PCI Express:2.5GB/s:Width x4)
00:15:17:3d:14:a2
[ 1721.029677] e1000e 0000:0b:00.0: eth2: Intel(R) PRO/1000 Network
Connection
[ 1721.029749] e1000e 0000:0b:00.0: eth2: MAC: 1, PHY: 4, PBA No: C57721-005
[ 1721.029758] e1000e 0000:0b:00.1: Disabling ASPM L1
[ 1721.029765] e1000e 0000:0b:00.1: PCI INT B -> GSI 17 (level, low) ->
IRQ 17
[ 1721.029775] e1000e 0000:0b:00.1: setting latency timer to 64
[ 1721.029859] e1000e 0000:0b:00.1: irq 107 for MSI/MSI-X
[ 1721.038652] net.sh used greatest stack depth: 3768 bytes left
[ 1721.184663] e1000e 0000:0b:00.1: eth3: (PCI Express:2.5GB/s:Width x4)
00:15:17:3d:14:a3
[ 1721.184665] e1000e 0000:0b:00.1: eth3: Intel(R) PRO/1000 Network
Connection
[ 1721.184741] e1000e 0000:0b:00.1: eth3: MAC: 1, PHY: 4, PBA No: C57721-005
[ 1733.298130] [DNA] Enabled DNA on eth2 (len=65536)
[ 1733.298214] e1000e 0000:0b:00.0: irq 106 for MSI/MSI-X
[ 1733.349077] e1000e 0000:0b:00.0: irq 106 for MSI/MSI-X
[ 1733.350240] ADDRCONF(NETDEV_UP): eth2: link is not ready
[ 1735.427902] e1000e: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow
Control: Rx
[ 1735.428689] ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
[ 1735.626133] [DNA] Enabled DNA on eth3 (len=65536)
[ 1735.626213] e1000e 0000:0b:00.1: irq 107 for MSI/MSI-X
[ 1735.677083] e1000e 0000:0b:00.1: irq 107 for MSI/MSI-X
[ 1735.678220] ADDRCONF(NETDEV_UP): eth3: link is not ready
[ 1737.727880] e1000e: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow
Control: None
[ 1737.728683] ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready

So I can load the DNA driver normally, and it seems to work.

Then we tested with the pfcount application, and it shows 350 Mbit of
traffic completely normally with 0% packet loss. But then the headache
starts: when I compile the libpcap included with pf_ring, and tcpdump
against it, tcpdump segfaults with the following:

[ 9554.546293] test[19674]: segfault at 0 ip 00007f1475c88f7e sp
00007fffd0152100 error 6 in libpcap.so.1.1.1[7f1475c5d000+46000]
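
One way to narrow such a crash down is to rebuild with debug symbols and
capture a backtrace; a rough sketch (directory names depend on your PF_RING
checkout, adjust as needed):

# rebuild the bundled libpcap and tcpdump with debugging info
cd userland/libpcap && ./configure CFLAGS="-O0 -g" && make
cd ../tcpdump && ./configure CFLAGS="-O0 -g" && make
# run tcpdump under gdb; after the fault, "bt" shows the crash site
gdb --args ./tcpdump -i eth2 -w /dev/null
(gdb) run
(gdb) bt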

I tried compiling the same thing on my own machine and just transferring
the binaries to the server, but that doesn't help; run locally at home,
there are no segfaults. I thought OK, let's change the NIC, but that didn't
help either. The kernel, OS, and packages are the same on both machines;
the differences are the CPU and a few additional packages on my home
computer. gcc, g++, and make are all the same version. pfcount_multichannel
also segfaults immediately.

Output from pfcount:
=========================
Absolute Stats: [2095523 pkts rcvd][0 pkts dropped]
Total Pkts=2095523/Dropped=0.0 %
2'095'523 pkts - 491'997'002 bytes [190'769.20 pkt/sec - 358.32 Mbit/sec]
=========================
Actual Stats: 166996 pkts [983.14 ms][169'860.18 pkt/sec]
=========================

Should I try some compile switches? I am slowly running out of ideas.

Thanks in advance,

J


josip.djuricic at gmail

Sep 6, 2011, 2:55 PM

Post #2 of 3
Re: Strange pf_ring compile issues

Some additional info:

pf_ring revision is 4796, version 4.7.3

./tcpdump -i eth2 -s 4096 -w /dev/null
tcpdump: WARNING: eth2: no IPv4 address assigned
tcpdump: listening on eth2, link-type EN10MB (Ethernet), capture size
4096 bytes
^Ctcpdump: pcap_loop:
5608216 packets captured
5608216 packets received by filter
0 packets dropped by kernel
Segmentation fault

From /var/log/messages at the same time:
kernel: [ 979.548607] tcpdump[12615]: segfault at 0 ip
0000000000488566 sp 00007fff9d152870 error 6 in tcpdump[400000+e4000]
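
As an aside, that kernel line decodes to a NULL-pointer write (my reading
of the x86 page-fault error bits):

# "segfault at 0"         -> faulting address is NULL
# "error 6" (binary 110)  -> bit 2: user mode, bit 1: write access,
#                            bit 0 clear: page not present,
#                            i.e. a user-space write through a NULL pointer
# "tcpdump[400000+e4000]" -> the faulting instruction pointer falls inside
#                            the tcpdump binary itself (loaded at 0x400000)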

And more importantly:
cat /proc/net/pf_ring/12615-eth2.21
Bound Device : eth2
Slot Version : 13 [4.7.3]
Active : 1
Breed : DNA
Sampling Rate : 1
Capture Direction : RX+TX
Appl. Name : <unknown>
IP Defragment : Yes
BPF Filtering : Enabled
# Sw Filt. Rules : 0
# Hw Filt. Rules : 0
Cluster Id : 0
Channel Id : -1
Min Num Slots : 6502
Poll Pkt Watermark : 1
Bucket Len : 128
Slot Len : 160 [bucket+header]
Tot Memory : 1048576
Num Poll Calls : 54928
Tot Packets : 0
Tot Pkt Lost : 0
Tot Insert : 0
Tot Read : 0
Insert Offset : 0
Remove Offset : 0
Tot Fwd Ok : 0
Tot Fwd Errors : 0
Num Free Slots : 6502

Please note that Tot Packets stays at 0 the whole time, no matter whether
I run tcpdump or pfcount:
./pfcount -i dna:eth2
Using PF_RING v.4.7.3
Capturing from dna:eth2 [00:15:17:3D:14:A2]
# Device RX channels: 1
# Polling threads: 1
=========================
Absolute Stats: [155981 pkts rcvd][0 pkts dropped]
Total Pkts=155981/Dropped=0.0 %
155'981 pkts - 34'901'424 bytes
=========================

=========================
Absolute Stats: [316437 pkts rcvd][0 pkts dropped]
Total Pkts=316437/Dropped=0.0 %
316'437 pkts - 70'881'585 bytes [316'403.46 pkt/sec - 566.99 Mbit/sec]
=========================
Actual Stats: 160456 pkts [1'000.10 ms][160'439.15 pkt/sec]
=========================

The weird thing is that pfcount seems to see traffic, but if I run it
with -v I get:
./pfcount -i dna:eth2 -v
Using PF_RING v.4.7.3
Capturing from dna:eth2 [00:15:17:3D:14:A2]
# Device RX channels: 1
# Polling threads: 1
Killed

and if I try:
./pfcount_multichannel -i eth2
Capturing from eth2
Segmentation fault
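
Since the per-socket counters never seem to move, I keep an eye on them
while a capture is running; a trivial way to do that (the proc file name,
e.g. 12615-eth2.21, changes with each process/ring):

watch -n 1 'cat /proc/net/pf_ring/*-eth2.*'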

Processor: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz

lspci output:
00:00.0 Host bridge: Intel Corporation 5000X Chipset Memory Controller
Hub (rev 12)
00:02.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x4
Port 2 (rev 12)
00:03.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x4
Port 3 (rev 12)
00:04.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x8
Port 4-5 (rev 12)
00:05.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x4
Port 5 (rev 12)
00:06.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x8
Port 6-7 (rev 12)
00:07.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x4
Port 7 (rev 12)
00:10.0 Host bridge: Intel Corporation 5000 Series Chipset FSB Registers
(rev 12)
00:10.1 Host bridge: Intel Corporation 5000 Series Chipset FSB Registers
(rev 12)
00:10.2 Host bridge: Intel Corporation 5000 Series Chipset FSB Registers
(rev 12)
00:11.0 Host bridge: Intel Corporation 5000 Series Chipset Reserved
Registers (rev 12)
00:13.0 Host bridge: Intel Corporation 5000 Series Chipset Reserved
Registers (rev 12)
00:15.0 Host bridge: Intel Corporation 5000 Series Chipset FBD Registers
(rev 12)
00:16.0 Host bridge: Intel Corporation 5000 Series Chipset FBD Registers
(rev 12)
00:1c.0 PCI bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI
Express Root Port 1 (rev 09)
00:1d.0 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset
UHCI USB Controller #1 (rev 09)
00:1d.1 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset
UHCI USB Controller #2 (rev 09)
00:1d.2 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset
UHCI USB Controller #3 (rev 09)
00:1d.7 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset
EHCI USB2 Controller (rev 09)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev d9)
00:1f.0 ISA bridge: Intel Corporation 631xESB/632xESB/3100 Chipset LPC
Interface Controller (rev 09)
00:1f.1 IDE interface: Intel Corporation 631xESB/632xESB IDE Controller
(rev 09)
01:00.0 PCI bridge: Intel Corporation 6702PXH PCI Express-to-PCI Bridge
A (rev 09)
02:08.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068 PCI-X
Fusion-MPT SAS (rev 01)
03:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3)
04:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5708
Gigabit Ethernet (rev 12)
05:00.0 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express
Upstream Port (rev 01)
05:00.3 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express to
PCI-X Bridge (rev 01)
06:00.0 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express
Downstream Port E1 (rev 01)
06:01.0 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express
Downstream Port E2 (rev 01)
07:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3)
08:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5708
Gigabit Ethernet (rev 12)
0b:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet
Controller (rev 06)
0b:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet
Controller (rev 06)
0f:0d.0 VGA compatible controller: ATI Technologies Inc ES1000 (rev 02)



Whatever I do, no luck...

On 09/06/11 22:02, Josip Djuricic wrote:
> [original message quoted in full; trimmed]

josip.djuricic at gmail

Sep 8, 2011, 7:36 AM

Post #3 of 3
Re: Strange pf_ring compile issues

Thanks to Alfredo and Luca once again.

The issue was successfully resolved with a build newer than 4796.
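
For anyone hitting the same thing, the fix amounts to rebuilding everything
from a newer checkout; roughly (repository URL from memory, so verify it
against the ntop site):

svn co https://svn.ntop.org/svn/ntop/trunk/PF_RING PF_RING
cd PF_RING/kernel
make
insmod pf_ring.ko transparent_mode=2 enable_tx_capture=0 quick_mode=1
# then rebuild the DNA e1000e driver, libpcap, and the userland tools
# against the new tree before reloading them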

Best regards,

Josip

On 09/06/11 23:55, Josip Djuricic wrote:
> [previous message quoted in full; trimmed]
