Bison Router DPDK 21 installation
A 30-day trial build of Bison Router is available only by request. Please contact us.
The following installation steps apply to the Bison Router DPDK 21 build for Ubuntu 22.04.
Important
The Main DPDK 21 build for Ubuntu 22.04 is now the preferred Bison Router build.
- It supports Intel Ethernet 800 Series (E810) and 700 Series (X710) NICs.
- It is not compatible with some older CPUs.
If Bison Router fails to start with an "illegal instruction" error, switch to a build from the Old CPUs list.
Update the system
To update the system and install Linux kernel headers, run:
apt update
apt install linux-headers-$(uname -r)
Configure Linux kernel
Edit the GRUB_CMDLINE_LINUX variable in /etc/default/grub.
GRUB_CMDLINE_LINUX="intel_idle.max_cstate=1 isolcpus=1-12 rcu_nocbs=1-12 rcu_nocb_poll=1-12 default_hugepagesz=2M hugepagesz=2M hugepages=4000 intel_iommu=on iommu=pt"
Note
This is an example of the syntax; set the core numbers and the number of hugepages according to the hardware configuration of your machine.
Note
The isolcpus, rcu_nocbs, and rcu_nocb_poll parameters must have identical values.
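For example, if you isolate cores 2-5, all three parameters carry the same range:
isolcpus=2-5 rcu_nocbs=2-5 rcu_nocb_poll=2-5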
Note
The intel_iommu=on and iommu=pt parameters are required for the vfio-pci driver used by the DPDK 21-based builds.
Run:
update-grub
Note
You may want to isolate a different set of cores or reserve a different amount of RAM for hugepages, depending on the hardware configuration of your server. To maximize the performance of your system, always isolate the cores used by Bison Router.
Enable VT-d / IOMMU in BIOS
For the vfio-pci driver to work correctly, Intel VT-d (Virtualization
Technology for Directed I/O) or IOMMU must be enabled in the BIOS/UEFI.
Typical steps:
- Reboot the server and enter the BIOS/UEFI Setup.
- Locate the setting for Intel VT-d or IOMMU.
- Common menu locations: Chipset Configuration, System Agent (SA) Configuration, North Bridge, or Processor Configuration.
- Ensure VT-d/IOMMU is Enabled.
- Save changes and reboot.
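After rebooting, you can also confirm from Linux that the IOMMU is active by searching the kernel log; the exact messages vary by platform and kernel version:
dmesg | grep -i -e DMAR -e IOMMU
On Intel systems, a message such as "DMAR: IOMMU enabled" typically indicates that VT-d is active.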
Configure hugepages
Reboot your machine and check that hugepages are available and free.
grep -i huge /proc/meminfo
The expected output has a structure similar to this example:
HugePages_Total: 4000
HugePages_Free: 4000
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Make a mount point for hugepages.
mkdir /mnt/huge
Create a mount point entry in /etc/fstab.
huge /mnt/huge hugetlbfs pagesize=2M 0 0
Mount hugepages.
mount huge
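To verify the mount, run the following; the output should look similar to the second line (exact mount options may differ):
mount | grep hugetlbfs
huge on /mnt/huge type hugetlbfs (rw,relatime,pagesize=2M)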
Verify kernel boot parameters
After reboot, verify that IOMMU parameters are present:
cat /proc/cmdline
Ensure that both intel_iommu=on and iommu=pt are listed in the output.
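For the example GRUB configuration above, the output looks similar to the following (the kernel image and root device will differ on your machine):
BOOT_IMAGE=/boot/vmlinuz-... root=... ro intel_idle.max_cstate=1 isolcpus=1-12 rcu_nocbs=1-12 rcu_nocb_poll=1-12 default_hugepagesz=2M hugepagesz=2M hugepages=4000 intel_iommu=on iommu=pt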
Bison Router
Download Bison Router
Please contact us at info@bisonrouter.net.
Install Bison Router
To install Bison Router, use the following command:
apt install ./bison-router-xxx.deb
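To confirm that the package installed correctly (assuming the package is named bison-router; use the name from your .deb file):
dpkg -s bison-router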
Configure DPDK ports
- Determine which NIC devices are available in your system. The following command outputs information about the NIC devices and their PCI addresses:
bisonrouter dev_status
- Edit /etc/bisonrouter/bisonrouter.env and save the NIC PCI addresses you want to use in the br_pci_devs list.
For example:
br_pci_devs=(
"0000:04:00.0"
"0000:04:00.1"
)
- Set the DPDK device driver used by Bison Router to vfio-pci:
br_dev_driver="vfio-pci"
- Run:
bisonrouter bind_devices
bisonrouter dev_status
Now your PCI devices use DPDK drivers and are ready for Bison Router.
The response to bisonrouter dev_status has the following form:
Network devices using DPDK-compatible driver
============================================
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=vfio-pci unused=ixgbe
0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=vfio-pci unused=ixgbe
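If dev_status follows the usual DPDK dpdk-devbind output format (an assumption; your build may differ), the same devices appear before binding under a section similar to this, where if= shows the kernel interface name (illustrative here) and drv= the kernel driver that currently owns the device:
Network devices using kernel driver
===================================
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=enp4s0f0 drv=ixgbe unused=vfio-pci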
Prepare a configuration file
First, create /etc/bisonrouter/brouter.conf:
nano /etc/bisonrouter/brouter.conf
For configuration examples and options, see the Command reference.
Configure RSS (DPDK 21)
Warning
Bison Router DPDK 21-based builds require manual RSS configuration for the DPDK ports.
A missing RSS configuration leads to serious performance issues and may cause errors in packet processing.
Configure RSS according to the Flow Director / RSS documentation.
Configure CPU cores
Edit /etc/bisonrouter/bisonrouter.env and update the br_lcores variable according to your hardware setup. If your system has multiple CPUs, use only cores from the same NUMA socket. The following command outputs information about the available cores:
bisonrouter cpu_layout
Example bisonrouter cpu_layout output:
# bisonrouter cpu_layout
======================================================================
Core and Socket Information (as reported by '/sys/devices/system/cpu')
======================================================================
cores = [0, 1, 2, 3]
sockets = [0]
Socket 0
--------
Core 0 [0]
Core 1 [1]
Core 2 [2]
Core 3 [3]
Note
On servers with more than one CPU socket (NUMA systems), Bison Router must use CPU cores from only one NUMA socket. Using cores from two or more sockets at the same time is not supported.
The NUMA socket you choose must match the socket where your network card (NIC) is connected. For example, if the NIC is in a PCI slot that belongs to NUMA socket 0, use only CPU cores from NUMA socket 0 for Bison Router.
The command bisonrouter dev_numa_info displays the NUMA configuration for the NICs used by Bison Router. It shows each NIC's PCI address and the NUMA node (CPU socket) it is attached to.
Device NUMA
0000:01:00.0 -1
- Values 0, 1, etc. indicate the NUMA node.
- A value of -1 means the kernel does not report a node for this device. This is normal on many single-socket systems, but on multi-socket systems it may mean the mapping is not available.
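You can also query the kernel's view of a device directly through sysfs, substituting the PCI address of your NIC; the file contains the NUMA node number, or -1 if it is unknown:
cat /sys/bus/pci/devices/0000:01:00.0/numa_node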
If the NUMA value is -1 (unknown), you can try to obtain additional
information using:
numactl --hardware
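numactl is not installed by default; apt install numactl adds it. On a single-socket machine, the output looks similar to this (sizes are illustrative):
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 7976 MB
node 0 free: 5120 MB
node distances:
node   0
  0:  10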
If this still does not provide a clear mapping, you may need to check the physical PCI slot on the motherboard and consult the motherboard manual to determine which CPU socket the slot is connected to.
After you know which NUMA socket the NIC uses (for example, socket 0),
use the cpu_layout output to choose the CPU cores for Bison Router.
In the cpu_layout table:
- The column titles “Socket 0”, “Socket 1”, etc. show the NUMA socket.
- The numbers in square brackets [ ] are the CPU core numbers.
For example, if your NIC is on NUMA socket 0, then you must select only
the core numbers in the “Socket 0” column (the numbers in [ ] under
“Socket 0”) when you configure:
- isolcpus in the kernel command line
- br_lcores in /etc/bisonrouter/bisonrouter.env
Do not mix cores from different sockets in br_lcores.
Set values according to your bisonrouter cpu_layout output:
br_lcores='<lcores_count> lcores at cpu cores <cpu_range_list>'
Warning
Bison Router must never use CPU cores from more than one NUMA socket at the same time.
All cores listed in br_lcores must belong to a single NUMA socket,
and this socket must be chosen based on the NIC connection (see
bisonrouter dev_numa_info).
The format for br_lcores consists of a positive integer
<lcores_count> followed by the literal text lcores at cpu cores
and a comma-separated list of CPU core subranges. Each subrange must be
specified in the form X-Y, where X is the starting CPU core number
and Y is the ending CPU core number (inclusive). All CPU core numbers
must be unique and the total number of CPU cores specified across all subranges
must equal <lcores_count>. CPU core numbers must lie between 0 and 128.
For example,
br_lcores='16 lcores at cpu cores 1-5,7-8,20-28'
assigns 16 lcores (numbered 0 to 15) to CPU cores 1 through 5, 7 through 8, and 20 through 28, while
br_lcores='4 lcores at cpu cores 1-4'
assigns 4 lcores (numbered 0 to 3) to CPU cores 1 through 4.
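Putting the pieces together: for the four-core, single-socket machine from the cpu_layout example above, leaving core 0 to the operating system, /etc/bisonrouter/bisonrouter.env could contain the following sketch (adjust the PCI addresses and core ranges to your hardware):
br_pci_devs=(
    "0000:04:00.0"
    "0000:04:00.1"
)
br_dev_driver="vfio-pci"
br_lcores='3 lcores at cpu cores 1-3'
The matching kernel command line would then isolate the same cores: isolcpus=1-3 rcu_nocbs=1-3 rcu_nocb_poll=1-3.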
Run Bison Router
Start Bison Router.
bisonrouter start
Check the syslog to ensure that Bison Router has started successfully.
ROUTER: router configuration file '/etc/bisonrouter/brouter.conf' successfully loaded
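Assuming syslog is written to /var/log/syslog (the Ubuntu default), one way to check for this line:
grep 'successfully loaded' /var/log/syslog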
Use the rcli utility to configure and control Bison Router.
# rcli sh uptime
Uptime: 0 day(s), 1 hour(s), 38 minute(s), 14 sec(s)
To stop or restart Bison Router, use the bisonrouter utility, which supports the following options:
# bisonrouter
Usage: bisonrouter [OPTIONS]
Options:
start - start the Bison Router (BR) daemon
stop - stop the BR daemon
restart - restart the BR daemon
status - show Bison Router daemon status
bind_devices - load kernel modules and bind PCI devices
unbind_devices - unload kernel modules and unbind devices
dev_status - show device status
cpu_layout - show core and socket information
install_dpdk - fetch the DPDK source code and install
reinstall_dpdk - remove the current DPDK and reinstall