Installation
A 30-day trial build of Bison Router is available only by request. Please contact us.
The following installation steps can be used for Ubuntu 20.04.
Configure the Linux kernel
- Turn on Linux boot-time options:
Edit the GRUB_CMDLINE_LINUX variable in /etc/default/grub.
GRUB_CMDLINE_LINUX="intel_idle.max_cstate=1 isolcpus=1-12 rcu_nocbs=1-12 rcu_nocb_poll=1-12 default_hugepagesz=2M hugepagesz=2M hugepages=3072"
Note
This is an example of the syntax; use core numbers that match the hardware configuration of your machine.
Note
The isolcpus, rcu_nocbs, and rcu_nocb_poll parameters must have identical values.
Run.
update-grub
Note
You may want to isolate a different set of cores or reserve a different amount of RAM for huge pages, depending on the hardware configuration of your server; for example, the line above reserves 3072 × 2 MB = 6 GB of RAM. To maximize the performance of your system, always isolate the cores used by Bison Router.
- Configure hugepages
Reboot your machine and check that hugepages are available and free.
grep -i huge /proc/meminfo
The expected result will have a structure similar to this example:
HugePages_Total: 3072
HugePages_Free: 3072
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
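You can also confirm that the boot-time parameters took effect. The following commands are standard Linux checks, not part of Bison Router:
# Kernel command line used for the current boot
cat /proc/cmdline
# List of CPU cores currently isolated via isolcpus
cat /sys/devices/system/cpu/isolated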
Make a mount point for hugepages.
mkdir /mnt/huge
Create a mount point entry in /etc/fstab.
huge /mnt/huge hugetlbfs pagesize=2M 0 0
Mount hugepages.
mount huge
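To verify that the hugetlbfs mount is active, a standard check is:
# Expected to show a line like: huge on /mnt/huge type hugetlbfs (...)
mount | grep hugetlbfs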
Bison Router
Download Bison Router
Please contact us at info@bisonrouter.net.
Install Bison Router
To install Bison Router, use the following command:
apt install ./bison-router-xxx.deb
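To confirm that the package was installed, you can query dpkg; the package name is assumed here from the .deb file name and may differ on your system:
# Assumed package name; adjust to match your .deb file
dpkg -l | grep -i bison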
Install DPDK
Use the bisonrouter utility to download and install DPDK. DPDK will be saved to the directory specified by the br_dpdk_dest variable in the /etc/bisonrouter/bisonrouter.env configuration file. The default DPDK installation path is /usr/src/dpdk-18.11.11.
bisonrouter install_dpdk
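To see which directory your installation actually uses, you can inspect the variable directly:
grep br_dpdk_dest /etc/bisonrouter/bisonrouter.env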
Configure DPDK ports
- Determine what NIC devices are available in your system. The following command will output information about NIC devices and their PCI addresses:
bisonrouter dev_status
- Edit /etc/bisonrouter/bisonrouter.env and save the NIC PCI addresses you want to use in the br_pci_devs list.
For example:
br_pci_devs=(
"0000:04:00.0"
"0000:04:00.1"
)
- Run:
bisonrouter bind_devices
bisonrouter dev_status
Now your PCI devices use DPDK drivers and are ready for Bison Router.
The response to bisonrouter dev_status has the following form:
Network devices using DPDK-compatible driver
============================================
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=vfio-pci unused=ixgbe
0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=vfio-pci unused=ixgbe
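If you want to cross-reference these PCI addresses with the kernel's view of the hardware, standard pciutils commands work as well; this is an optional check, not part of the Bison Router tooling:
lspci -nn | grep -i ethernet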
Prepare a configuration file
For configuration examples and options, see Command reference.
First, create /etc/bisonrouter/brouter.conf.
nano /etc/bisonrouter/brouter.conf
Configure CPU cores
Edit /etc/bisonrouter/bisonrouter.env and update the br_lcores variable according to your hardware setup. If your system has multiple CPUs, use only cores from the same NUMA socket. The following command outputs information about the available cores:
bisonrouter cpu_layout
Example of bisonrouter cpu_layout output:
# bisonrouter cpu_layout
======================================================================
Core and Socket Information (as reported by '/sys/devices/system/cpu')
======================================================================
cores = [0, 1, 2, 3]
sockets = [0]
Socket 0
--------
Core 0 [0]
Core 1 [1]
Core 2 [2]
Core 3 [3]
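If you want to cross-check this layout with a standard tool, lscpu reports the same topology; on typical systems the SOCKET and NODE columns line up with the socket grouping above:
lscpu --extended=CPU,CORE,SOCKET,NODE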
Note
On multi-CPU NUMA systems, BisonRouter requires that all CPU cores come from a single NUMA socket. Configurations that span multiple NUMA sockets are not supported. The selected NUMA socket is determined by the PCI connection of the network interface cards (NICs). For example, if the NICs are connected to PCI slots associated with NUMA socket 0, then only CPU cores from NUMA socket 0 should be used.
Note
The command bisonrouter dev_numa_info displays the NUMA configuration for the network interface cards (NICs) used by BisonRouter. The output shows each NIC’s PCI address along with the NUMA socket to which it is connected. A NUMA value of -1 indicates that the device is not associated with any specific NUMA socket. For example:
Device NUMA
0000:01:00.0 -1
Use this command to verify the NIC-to-NUMA mapping. This is important because BisonRouter requires that all CPU cores for packet processing are selected from the same NUMA socket, which should be chosen based on the NIC connection.
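The same information is exposed by the kernel through sysfs, so you can also check a device directly; the PCI address below is the one from the example output:
cat /sys/bus/pci/devices/0000:01:00.0/numa_node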
Set the values according to your bisonrouter cpu_layout output:
br_lcores='<lcores_count> lcores at cpu cores <cpu_range_list>'
The format for br_lcores consists of a positive integer <lcores_count> followed by the literal text lcores at cpu cores and a comma-separated list of CPU core subranges. Each subrange must be specified in the form X-Y, where X is the starting CPU core number and Y is the ending CPU core number (inclusive). All CPU core numbers must be unique, and the total number of CPU cores specified across all subranges must equal <lcores_count>. CPU core numbers must lie between 0 and 128.
For example,
br_lcores='16 lcores at cpu cores 1-5,7-8,20-28'
assigns 16 lcores (numbered 0 to 15) to CPU cores 1 through 5, 7 through 8, and 20 through 28, while
br_lcores='4 lcores at cpu cores 1-4'
assigns 4 lcores (numbered 0 to 3) to CPU cores 1 through 4.
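If you want to double-check that a subrange list matches its <lcores_count>, a small shell helper can count the cores for you. This is a hypothetical snippet for illustration only, not part of Bison Router:
# Hypothetical helper: counts the CPU cores in a subrange list such as
# "1-5,7-8,20-28" so the result can be compared against <lcores_count>.
count_cores() {
    local total=0 range start end
    local -a ranges
    IFS=',' read -ra ranges <<< "$1"
    for range in "${ranges[@]}"; do
        start=${range%-*}    # core number before the dash
        end=${range#*-}      # core number after the dash (equals start if no dash)
        total=$(( total + end - start + 1 ))
    done
    echo "$total"
}
count_cores "1-5,7-8,20-28"    # prints 16, matching '16 lcores at cpu cores ...'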
Run Bison Router
Start Bison Router.
bisonrouter start
Check the syslog to ensure that Bison Router has started successfully.
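On Ubuntu, assuming the default rsyslog setup writes to /var/log/syslog, the startup message can be found like this:
grep 'ROUTER:' /var/log/syslog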
ROUTER: router configuration file '/etc/bisonrouter/brouter.conf' successfully loaded
Use the rcli utility to configure and control Bison Router.
# rcli sh uptime
Uptime: 0 day(s), 1 hour(s), 38 minute(s), 14 sec(s)
To stop or restart Bison Router, use the bisonrouter utility, which supports the following options:
# bisonrouter
Usage: bisonrouter [OPTIONS]
Options:
start - start the Bison Router (BR) daemon
stop - stop the BR daemon
restart - restart the BR daemon
status - show Bison Router daemon status
bind_devices - load kernel modules and bind PCI devices
unbind_devices - unload kernel modules and unbind devices
dev_status - show device status
cpu_layout - show core and socket information
install_dpdk - fetch the DPDK source code and install
reinstall_dpdk - remove the current DPDK and reinstall
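For example, to restart the daemon and then confirm that it is running, combine two of the options listed above:
bisonrouter restart
bisonrouter status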