BisonRouter NAT64 Performance Lab: Design, Methodology, and Results
Overview
This document describes a NAT64 performance and robustness test lab built around BisonRouter.
The goals of the lab are:
- Measure the forwarding performance of the NAT64 dataplane under heavy load.
- Validate behavior with a large number of concurrent UDP sessions (~2M).
- Evaluate the impact of high session churn (tens of thousands of session deletions per second).
- Verify that all packet loss is properly accounted for by BisonRouter statistics.
The lab emulates an ISP-style environment in which IPv6 subscribers access IPv4 hosts through NAT64.
Topology
Physical layout
The testbed consists of two hosts connected by 40G links:
ger1 — traffic generator and reflector.
ger1beast — BisonRouter (device under test).
Topology diagram:
pktgen (IPv6 clients) <--40G--> BisonRouter (NAT64) <--40G--> testpmd (5tswap reflector)
Traffic directions
Forward (uplink):
pktgen (IPv6) → BR port 0 → NAT64 → BR port 1 → testpmd (IPv4)
Reverse (downlink):
testpmd (5tswap) → BR port 1 → NAT46 (reverse NAT64) → BR port 0 → pktgen
dpdk-testpmd in 5tswap forwarding mode swaps:
- Source and destination MAC addresses.
- Source and destination IP addresses.
- Source and destination L4 ports (UDP/TCP where applicable).
As a result, each IPv4 packet that leaves BisonRouter is reflected back as a valid reply that hits the corresponding NAT64 session on the way back.
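For reference, the reflection can be modeled as a plain header swap. The Python sketch below is illustrative only (it is not testpmd code); the example values come from the configuration and the BIB dump shown later in this document, and the BisonRouter port 1 MAC is left as a placeholder.
# Illustrative model of testpmd's 5tswap forwarding mode.
def five_tuple_swap(pkt: dict) -> dict:
    reply = dict(pkt)
    reply["src_mac"], reply["dst_mac"] = pkt["dst_mac"], pkt["src_mac"]
    reply["src_ip"], reply["dst_ip"] = pkt["dst_ip"], pkt["src_ip"]
    reply["src_port"], reply["dst_port"] = pkt["dst_port"], pkt["src_port"]
    return reply
# A translated IPv4/UDP packet leaving BisonRouter port 1:
pkt = {"src_mac": "<BR port 1 MAC>", "dst_mac": "9c:69:b4:66:12:0d",
       "src_ip": "10.82.2.96", "dst_ip": "192.168.21.2",
       "src_port": 44226, "dst_port": 12345}
reply = five_tuple_swap(pkt)  # goes back to BR and matches the same NAT64 session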
Hardware
BisonRouter host (ger1beast)
- CPU: AMD Ryzen 9 9950X3D 16-Core Processor
- NIC: Intel XL710-QDA2 dual-port 40G
Generator / reflector host (ger1)
- CPU: AMD Ryzen 9 5900XT 16-Core Processor
- NIC: Intel XL710-QDA2 dual-port 40G
Software
- BisonRouter with NAT64 dataplane on ger1beast.
- DPDK pktgen 25.08.1 (DPDK 25.11.0) on ger1.
- dpdk-testpmd (DPDK 25.11.0) on ger1 in 5tswap forwarding mode.
Pktgen configuration (IPv6 clients)
set 0 count 0
set 0 size 128
set 0 dst mac 3C:FD:FE:A2:AC:98
enable 0 vlan
set 0 vlanid 1
set 0 type ipv6
set 0 proto udp
# Base IPv6 template (prefix for subscribers)
# host part will be randomized by rnd
set 0 src ip 2001:0db8:0001:0000:0000:0000:0000:0000/96
# Base IPv6 dst (NAT64-embedded IPv4 192.168.21.2)
set 0 dst ip fd12:3456:789a:0011:2233:4455:c0a8:1502
# Base UDP ports
set 0 sport 20000
set 0 dport 12345
########################################
# Randomization
########################################
page rnd
enable 0 random
# 1) Randomize *last 3 bytes* of source MAC
# src MAC bytes: 6..11
# we start at offset 8 → bytes 8..11
# first byte (8) stays fixed, last 3 become random
set 0 rnd 0 8 ........XXXXXXXXXXXXXXXXXXXXXXXX
# 2) Randomize *host part* of IPv6 src address
# src IPv6 bytes: 26..41
# last 4 bytes: 38..41 → host part in 2001:db8:1::/96
# 2^17 = 131k subscribers
set 0 rnd 1 38 ...............XXXXXXXXXXXXXXXXX
# 3) Randomize low 4 bits of UDP src port (20000–20015)
# → up to 16 flows per subscriber
set 0 rnd 2 58 ............XXXX................
page main
With this configuration, pktgen generates:
- Approximately 131k distinct IPv6 “subscribers”.
- Up to 16 flows per subscriber (via the randomized source port low bits).
- 128-byte frames (IPv6 + UDP + VLAN + L2).
- VLAN 1 traffic towards BisonRouter port 0.
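The expected session population follows directly from the randomized bit widths; a quick Python sanity check:
subscribers = 2 ** 17          # 17 random bits in the IPv6 host part
flows_per_subscriber = 2 ** 4  # 4 random bits in the UDP source port (20000-20015)
print(subscribers * flows_per_subscriber)
# 2097152 -> matches the ~2M active sessions reported by 'sh nat64 maps'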
BisonRouter configuration
startup {
sysctl set numa 1
sysctl set num_rx_desc 2048
sysctl set num_tx_desc 2048
sysctl set num_cplane_timers 30000
# mbuf mempool size
sysctl set mbuf 160000
port 0 mtu 1500 tpid 0x8100 state enabled
rx_queues port 0 queues 14 lcores 1 - 14
port 1 mtu 1500 tpid 0x8100 state enabled
rx_queues port 1 queues 14 lcores 1 - 14
sysctl set log_level 8
sysctl set global_packet_counters 1
sysctl set arp_cache_timeout 600
sysctl set arp_cache_size 1024
sysctl set dynamic_vif_ttl 300
sysctl set vif_stat 1
sysctl set frag_mbuf 4000
sysctl set mac_addr_format "linux"
sysctl set nd_neighbor_cache_entry_ttl 120
#
# System name
#
sysctl set system_name "tr1"
#
# LPM DIR24-8 IPv4 FIB
#
sysctl set lpm_table8_size 2048
#
# 3 - RTPROT_BOOT (linux netlink routes proto)
# Note: FRR 4.0 bgpd redistribute kernel doesn't see linux routes with proto static,
# but it sees BOOT routes
#
sysctl set linux_route_proto 3
#
sysctl set nat_portmap_initial_size 16777216
#
sysctl set install_subsc_linux_routes 1
# enable IPv6 protocol for IPoE subscribers
sysctl set ipoe_ipv6 1
# enable ipv6 for pppoe
sysctl set ppp_ipv6 1
# NEL retransmission interval
sysctl set nel_template_retransmission_interval 30
# NEL usernames
sysctl set nel_usernames_enabled 1
sysctl set vif_max 3000
sysctl set ppp_1session_per_username 1
sysctl set kni_driver_type "tap"
sysctl set max_ippools 512
sysctl set cplane_protection_arp_nb_policers 16
linux link port enable
}
runtime {
# FlowDirector
flow dir rule add port 0 pattern "eth ipv4" rss "ipv4"
flow dir rule add port 0 pattern "eth ipv4 tcp" rss "ipv4"
flow dir rule add port 0 pattern "eth ipv4 udp" rss "ipv4"
flow dir rule add port 0 pattern "eth ipv4 sctp" rss "ipv4"
flow dir rule add port 0 pattern "eth ipv6" rss "ipv6"
flow dir rule add port 0 pattern "eth ipv6 tcp" rss "ipv6"
flow dir rule add port 0 pattern "eth ipv6 udp" rss "ipv6"
flow dir rule add port 0 pattern "eth ipv6 sctp" rss "ipv6"
flow dir rule add port 0 pattern "eth" rss "eth l2-src-only"
# FlowDirector
flow dir rule add port 1 pattern "eth ipv4" rss "ipv4"
flow dir rule add port 1 pattern "eth ipv4 tcp" rss "ipv4"
flow dir rule add port 1 pattern "eth ipv4 udp" rss "ipv4"
flow dir rule add port 1 pattern "eth ipv4 sctp" rss "ipv4"
flow dir rule add port 1 pattern "eth ipv6" rss "ipv6"
flow dir rule add port 1 pattern "eth ipv6 tcp" rss "ipv6"
flow dir rule add port 1 pattern "eth ipv6 udp" rss "ipv6"
flow dir rule add port 1 pattern "eth ipv6 sctp" rss "ipv6"
flow dir rule add port 1 pattern "eth" rss "eth l2-src-only"
#
ip addr add 5.5.5.5/32 dev lo
# blackhole multicast addresses
ip route add 224.0.0.0/4 unreachable
# v1
vif add name v1 port 0 type dot1q cvid 1
ipv6 enable dev v1
ipv6 addr add 2001:470:1f0e:1::1/64 dev v1
# v2 - nat6 ip addr add 192.168.3.1/24 dev uplink
vif add name v2 port 1 type dot1q cvid 2
ipv6 enable dev v2
ipv6 addr add 2001:470:1f0e:2::1/64 dev v2
ip addr add 192.168.21.1/24 dev v2
# nat64
sysctl set nat64_internal_ip_ttl 60
nat64 create map 1 in entries 140000 sess 3000000 port block size 256 ratio 256 ip port block 4 dst prefix fd12:3456:789a:11:2233:4455::/96
nat64 map 1 add out range 10.82.0.0 10.82.3.255
nat64 vif v2 enable
nat64 vif v2 add map 1
# route nat64 prefix via v2
ipv6 route add fd12:3456:789a:11:2233:4455::/96 via 2001:470:1f0e:2::2
arp add 192.168.21.1 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.2 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.3 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.4 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.5 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.6 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.7 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.8 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.9 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.10 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.11 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.12 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.13 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.14 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.15 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.16 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.17 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.18 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.19 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.20 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.21 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.22 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.23 9c:69:b4:66:12:0d dev v2 static
arp add 192.168.21.24 9c:69:b4:66:12:0d dev v2 static
linux link create 10.7.7.1 10.7.7.2 mask 24
udp_api secret xxxxxxxx
ipv6 route add 2001:db8:1::/96 via 2001:470:1f0e:1::1
ipv6 arp add 2001:470:1f0e:1::1 9c:69:b4:66:12:0c dev v1 static
}
Map configuration
NAT64 is configured with a single map (ID 1):
root@alex-gerbeast:~# rcli sh nat64 maps
NAT64 map
map id: 1
ipv6 dst prefix: fd12:3456:789a:11:2233:4455::/96
out ranges:
10.82.0.0 - 10.82.3.255, port-block usage 131072/258048, %50
active sessions: 2097152
max internal hosts: 140000
max sessions: 3000000
in/out ratio: 256
port block size: 256
port blocks per out address: 252
port blocks per in address: 4
NEL session: enabled
NEL port-block: disabled
state counters:
unknown: 0
active: 2097152
syn_sent: 0
established: 0
fin_wait: 0
close_wait: 0
closing: 0
last_ack: 0
closed: 0
icmp_active: 0
dns: 0
gre: 0
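The reported port-block usage can be cross-checked against the map parameters. A sketch of the arithmetic, assuming that each subscriber's 16 UDP flows fit into a single 256-port block:
out_addresses = 4 * 256       # 10.82.0.0 - 10.82.3.255
blocks_per_address = 252      # from 'sh nat64 maps' above
total_blocks = out_addresses * blocks_per_address
subscribers = 131072          # 2^17 internal IPv6 hosts
used_blocks = subscribers     # one 256-port block covers all 16 flows
print(used_blocks, total_blocks, round(100 * used_blocks / total_blocks))
# 131072 258048 51 -> matches "port-block usage 131072/258048, %50"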
During the tests the system typically maintains:
- Around 2.0–2.1M active NAT64 UDP sessions.
- Around 131k internal IPv6 hosts.
root@alex-gerbeast:~# rcli sh nat64 int_ip map 1 | wc -l
131073
root@alex-gerbeast:~# rcli sh nat64 bib map 1 | head
in addr:port out addr:port proto rc
2001:db8:1::5865:20011 10.82.2.96:44226 udp 1
2001:db8:1::d09a:20009 10.82.0.129:56162 udp 1
2001:db8:1::1:9acc:20013 10.82.2.138:51139 udp 1
2001:db8:1::fb51:20000 10.82.3.234:39233 udp 1
2001:db8:1::45f2:20011 10.82.0.29:62435 udp 1
2001:db8:1::1:cbb8:20013 10.82.0.48:36801 udp 1
2001:db8:1::1:63c3:20008 10.82.0.247:41280 udp 1
2001:db8:1::1:45c7:20015 10.82.2.174:63169 udp 1
2001:db8:1::f481:20010 10.82.2.171:40514 udp 1
root@alex-gerbeast:~# rcli sh nat64 bib map 1 | wc -l
2097153
NAT64 translation session killer script
To emulate real subscriber churn, a “session killer” script runs on the management side and continuously deletes random NAT64 sessions via the BisonRouter UDP API. The script (sketched after this list):
- Periodically builds a snapshot of all NAT64 sessions.
- Iterates over the snapshot and deletes sessions at a configurable rate.
- Refreshes the snapshot every few seconds to keep it close to the real state.
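A minimal Python sketch of this loop follows. The snapshot is built from the rcli session dump used elsewhere in this document; delete_session is a hypothetical stub standing in for the actual UDP API request.
import random
import subprocess
import time

def snapshot_sessions():
    # Snapshot from the rcli session dump; assumes the first line is a header.
    out = subprocess.run(["rcli", "sh", "nat64", "sessions", "map", "1"],
                         capture_output=True, text=True).stdout
    return out.splitlines()[1:]

def delete_session(session_line):
    # Hypothetical stub: the real script sends a delete request over the
    # BisonRouter UDP API (see 'udp_api secret' in the runtime config).
    pass

DELETE_RATE = 25_000   # target deletions per second
SNAPSHOT_TTL = 6.0     # refresh the snapshot every few seconds

sessions = snapshot_sessions()
last_refresh = time.monotonic()
while sessions:
    if time.monotonic() - last_refresh > SNAPSHOT_TTL:
        sessions = snapshot_sessions()  # keep the snapshot close to real state
        last_refresh = time.monotonic()
    for s in random.sample(sessions, min(DELETE_RATE, len(sessions))):
        delete_session(s)
    time.sleep(1.0)  # crude one-second pacing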
Test methodology
Each test run consists of two stages:
- Stage 1 — session build-up.
- Stage 2 — steady-state forwarding with churn.
Stage 1 – session build-up
- Duration: Approximately 10–20 seconds.
- Goal: Allow BisonRouter to establish about 2M NAT64 sessions before steady-state measurements.
- Procedure: Start pktgen at the target rate (for example, 35% or 40% of 40G line rate) and observe sh nat64 maps until the active sessions counter reaches approximately 2,097,152.
Packet loss is expected to be high during this phase: each new session allocates BIB and session structures, and the resulting memory allocation and initialization put heavy pressure on the caches and memory subsystem. RX rings may overflow, which increases the missed counters.
At the end of Stage 1, record the per-port missed counters on both ports.
Stage 2 – steady-state forwarding with churn
- Duration: Approximately 10 minutes.
- Goal: Measure the forwarding rate and packet loss under continuous session churn from the session killer script.
Procedure
- Keep pktgen running with the same configuration as in Stage 1.
- Start the session killer script to generate the desired CPS (approximately 20–30k sessions/sec in the examples).
After the desired measurement interval:
- Stop pktgen to freeze its TX/RX counters.
- Read pktgen statistics.
- Read BisonRouter per-port statistics and NAT64 statistics.
- Calculate forwarding rate and packet loss.
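The loss figures in the result sections reduce to simple counter arithmetic. The helper below captures the convention used in this document (loss expressed as a percentage of all packets handled, TX + RX); it is not a BisonRouter tool:
def packet_loss(tx_pkts: int, rx_pkts: int):
    # Absolute loss and loss percentage, as computed in this document.
    lost = tx_pkts - rx_pkts
    return lost, 100.0 * lost / (tx_pkts + rx_pkts)

# Test 1 final pktgen counters (see below):
lost, pct = packet_loss(9_740_273_088, 9_730_033_597)
print(lost, f"{pct:.3f} %")  # 10239491 0.053 %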
Test 1 run
Stage 1. Initial session establishment phase.
root@alex-gerbeast:~# rcli sh port general stat
port 0
pkts rx 150253375
pkts tx 150084519
bytes rx 19614261100
bytes tx 18610480050
errors rx 0
errors tx 0
missed 7932833
rx_nombuf 0
port 1
pkts rx 150084528
pkts tx 150253393
bytes rx 15626353052
bytes tx 15626353052
errors rx 0
errors tx 0
missed 168865
rx_nombuf 0
The number of missed packets at the end of Stage 1 is (7,932,833 + 168,865) = 8,101,698.
Stage 2
Stop pktgen, clear its statistics, and restart it. Then start the session killer script.
Feb 16 23:03:39 rates sess_del=195.7/sec req=4.9/sec failures=0
Feb 16 23:03:40 rates sess_del=96.3/sec req=2.4/sec failures=0
Feb 16 23:03:41 rates sess_del=74.9/sec req=1.9/sec failures=0
Feb 16 23:03:41 Snapshot refresh OK: 2097152 sessions (build_ms=3577.6 next_poll_wait_ms=2422.3 rcli_t0=3.565s)
Feb 16 23:03:41 Swapped in new working snapshot: 2097152 sessions
Feb 16 23:03:42 rates sess_del=54461.9/sec req=1362.6/sec failures=41
Feb 16 23:03:43 rates sess_del=67235.1/sec req=1680.9/sec failures=0
Feb 16 23:03:45 rates sess_del=28962.2/sec req=724.1/sec failures=0
Feb 16 23:03:47 rates sess_del=58.7/sec req=1.5/sec failures=0
Feb 16 23:03:47 Snapshot refresh OK: 2082897 sessions (build_ms=3791.5 next_poll_wait_ms=2208.4 rcli_t0=9.565s)
Feb 16 23:03:47 Swapped in new working snapshot: 2082897 sessions
Feb 16 23:03:48 rates sess_del=4918.5/sec req=123.0/sec failures=0
Feb 16 23:03:49 rates sess_del=70248.3/sec req=1756.2/sec failures=0
Feb 16 23:03:50 rates sess_del=67746.6/sec req=1693.7/sec failures=0
Feb 16 23:03:51 rates sess_del=2396.3/sec req=59.9/sec failures=0
Feb 16 23:03:52 rates sess_del=25874.5/sec req=646.9/sec failures=0
Feb 16 23:03:53 rates sess_del=25355.1/sec req=633.9/sec failures=0
Feb 16 23:03:54 rates sess_del=64.3/sec req=1.6/sec failures=0
Feb 16 23:03:54 Snapshot refresh OK: 2083260 sessions (build_ms=4772.7 next_poll_wait_ms=1227.3 rcli_t0=15.565s)
Feb 16 23:03:54 Swapped in new working snapshot: 2083260 sessions
Feb 16 23:03:55 rates sess_del=65146.5/sec req=1628.7/sec failures=0
Feb 16 23:03:56 rates sess_del=16693.2/sec req=417.3/sec failures=0
Feb 16 23:03:58 rates sess_del=8574.2/sec req=214.4/sec failures=0
Feb 16 23:03:59 rates sess_del=36.4/sec req=0.9/sec failures=0
Feb 16 23:03:59 Snapshot refresh OK: 2083529 sessions (build_ms=3483.9 next_poll_wait_ms=2516.1 rcli_t0=21.565s)
Feb 16 23:03:59 Swapped in new working snapshot: 2083529 sessions
Feb 16 23:04:00 rates sess_del=30554.9/sec req=763.9/sec failures=0
Feb 16 23:04:01 rates sess_del=69510.5/sec req=1737.8/sec failures=0
Feb 16 23:04:02 rates sess_del=66842.9/sec req=1671.1/sec failures=0
Feb 16 23:04:03 rates sess_del=757.9/sec req=18.9/sec failures=0
Feb 16 23:04:04 rates sess_del=407.7/sec req=10.2/sec failures=0
Feb 16 23:04:06 rates sess_del=68.5/sec req=1.7/sec failures=0
Feb 16 23:04:06 Snapshot refresh OK: 2082887 sessions (build_ms=4008.6 next_poll_wait_ms=1991.4 rcli_t0=27.565s)
Feb 16 23:04:06 Swapped in new working snapshot: 2082887 sessions
Feb 16 23:04:07 rates sess_del=51573.9/sec req=1289.3/sec failures=0
Feb 16 23:04:08 rates sess_del=67064.4/sec req=1676.6/sec failures=0
Feb 16 23:04:09 rates sess_del=13569.4/sec req=339.2/sec failures=0
Feb 16 23:04:10 rates sess_del=102.1/sec req=2.6/sec failures=0
Feb 16 23:04:11 Snapshot refresh OK: 2083650 sessions (build_ms=3293.4 next_poll_wait_ms=2706.5 rcli_t0=33.565s)
Feb 16 23:04:11 Swapped in new working snapshot: 2083650 sessions
Feb 16 23:04:11 rates sess_del=5039.9/sec req=126.0/sec failures=0
Feb 16 23:04:12 rates sess_del=69867.0/sec req=1746.7/sec failures=0
Feb 16 23:04:13 rates sess_del=69683.1/sec req=1742.1/sec failures=0
Feb 16 23:04:14 rates sess_del=38676.5/sec req=966.9/sec failures=0
Feb 16 23:04:16 rates sess_del=26.1/sec req=0.7/sec failures=0
Feb 16 23:04:17 rates sess_del=106.5/sec req=2.7/sec failures=0
Feb 16 23:04:18 Snapshot refresh OK: 2082702 sessions (build_ms=4152.9 next_poll_wait_ms=1847.1 rcli_t0=39.565s)
Feb 16 23:04:18 rates sess_del=39.0/sec req=1.0/sec failures=0
Feb 16 23:04:18 Swapped in new working snapshot: 2082702 sessions
Pktgen statistics in the middle of Stage 2.
\ <Main Page> Ports 0-0 of 1 Copyright(c) <2010-2025>, Intel Corporation
Port:Flags : 0:P------R-VLAN Single
Link State : <UP-40000-FD> ---Total Rate---
Pkts/s Rx : 13,744,271 13,744,271
Tx : 13,744,267 13,744,267
MBits/s Rx/Tx : 16,273/16,273 16,273/16,273
Total Rx Pkts : 882,429,257 13,748,017
Tx Pkts : 883,234,987 13,744,287
Rx/Tx MBs : 1,044,796/1,045,750
Pkts/s Rx Max : 13,748,017
Tx Max : 13,744,287
Errors Rx/Tx : 0/0
Broadcast : 0
Multicast : 0
Sizes 64 : 0
65-127 : 0
128-255 : 887,623,808
256-511 : 0
512-1023 : 0
1024-1518 : 0
Runts/Jumbos : 0/0
ARP/ICMP Pkts : 0/0
Rx/Tx queue cnt : 1/1
Tx Count/% Rate : Forever /40%
Pkt Size/Rx:Tx Burst: 128 / 64: 32
Port Src/Dest : 20000/12345
Type:VLAN ID:Flags : IPv6 / UDP:0001:0010
IP Destination : fd12:3456:789a:11:22+17
Source : 2001:db8:1::/96
MAC Destination : 3c:fd:fe:a2:ac:98
Source : 9c:69:b4:66:12:0c
NUMA/Vend:ID/PCI : -1/8086:1583/06:00.0
-- Pktgen 25.08.1 Powered by DPDK 25.11.0 (pid:1649) -------------------------
Pktgen stopped. Final pktgen stats.
- <Main Page> Ports 0-0 of 1 Copyright(c) <2010-2025>, Intel Corporation
Port:Flags : 0:P------R-VLAN Single
Link State : <UP-40000-FD> ---Total Rate---
Pkts/s Rx : 0 0
Tx : 0 0
MBits/s Rx/Tx : 0/0 0/0
Total Rx Pkts : 9,730,033,597 13,750,445
Tx Pkts : 9,740,273,088 13,744,302
Rx/Tx MBs : 11,520,359/11,532,483
Pkts/s Rx Max : 13,750,445
Tx Max : 13,744,302
Errors Rx/Tx : 0/0
Broadcast : 0
Multicast : 0
Sizes 64 : 0
65-127 : 0
128-255 : 9,730,033,597
256-511 : 0
512-1023 : 0
1024-1518 : 0
Runts/Jumbos : 0/0
ARP/ICMP Pkts : 0/0
Rx/Tx queue cnt : 1/1
Tx Count/% Rate : Forever /40%
Pkt Size/Rx:Tx Burst: 128 / 64: 32
Port Src/Dest : 20000/12345
Type:VLAN ID:Flags : IPv6 / UDP:0001:0010
IP Destination : fd12:3456:789a:11:22+17
Source : 2001:db8:1::/96
MAC Destination : 3c:fd:fe:a2:ac:98
Source : 9c:69:b4:66:12:0c
NUMA/Vend:ID/PCI : -1/8086:1583/06:00.0
-- Pktgen 25.08.1 Powered by DPDK 25.11.0 (pid:1649) -------------------------
Lost pkts = Total Tx Pkts - Rx Pkts = (9,740,273,088 - 9,730,033,597) = 10,239,491
Lost pkts % = 10,239,491 / (9,740,273,088 + 9,730,033,597) * 100 % ≈ 0.053 %
Final BisonRouter port stats.
root@alex-gerbeast:~# rcli sh port general stat
port 0
pkts rx 10443727022
pkts tx 10438679108
bytes rx 1296669687020
bytes tx 1294396210910
errors rx 0
errors tx 0
missed 13293266
rx_nombuf 0
port 1
pkts rx 10438679079
pkts tx 10443727002
bytes rx 1086147608388
bytes tx 1086147608388
errors rx 0
errors tx 0
missed 5047923
rx_nombuf 0
Missed = (13,293,266 + 5,047,923) - 8,101,698 = 10,239,491
This matches the number of lost packets calculated from the pktgen TX - RX counters.
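The same reconciliation, scripted against the Test 1 counters above:
# BisonRouter 'missed' counters fully account for the loss seen by pktgen.
stage1_missed = 7_932_833 + 168_865             # end of Stage 1, ports 0 + 1
final_missed = 13_293_266 + 5_047_923           # end of Stage 2, ports 0 + 1
pktgen_lost = 9_740_273_088 - 9_730_033_597     # pktgen TX - RX
assert final_missed - stage1_missed == pktgen_lost == 10_239_491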
Test 2 run
The pktgen rate is decreased to 35% because the 40% rate in Test 1 resulted in some packet loss.
Stage 1. Initial session establishment phase.
root@alex-gerbeast:~# rcli sh port general stat
port 0
pkts rx 276233182
pkts tx 276131047
bytes rx 34938978172
bytes tx 34240247986
errors rx 0
errors tx 0
missed 5534562
rx_nombuf 0
port 1
pkts rx 276132670
pkts tx 276233328
bytes rx 28728320268
bytes tx 28728264996
errors rx 0
errors tx 0
missed 101177
rx_nombuf 0
The number of missed packets at the end of Stage 1 is (5,534,562 + 101,177) = 5,635,739.
Stage 2
/ <Main Page> Ports 0-0 of 1 Copyright(c) <2010-2025>, Intel Corporation
Port:Flags : 0:P------R-VLAN Single
Link State : <UP-40000-FD> ---Total Rate---
Pkts/s Rx : 12,013,007 12,013,007
Tx : 12,012,992 12,012,992
MBits/s Rx/Tx : 14,223/14,223 14,223/14,223
Total Rx Pkts : 762,945,068 12,013,041
Tx Pkts : 762,947,680 12,012,992
Rx/Tx MBs : 903,326/903,330
Pkts/s Rx Max : 12,013,041
Tx Max : 12,012,992
Errors Rx/Tx : 0/0
Broadcast : 0
Multicast : 0
Sizes 64 : 0
65-127 : 0
128-255 : 765,593,424
256-511 : 0
512-1023 : 0
1024-1518 : 0
Runts/Jumbos : 0/0
ARP/ICMP Pkts : 0/0
Rx/Tx queue cnt : 1/1
Tx Count/% Rate : Forever /35%
Pkt Size/Rx:Tx Burst: 128 / 64: 32
Port Src/Dest : 20000/12345
Type:VLAN ID:Flags : IPv6 / UDP:0001:0010
IP Destination : fd12:3456:789a:11:22+17
Source : 2001:db8:1::/96
MAC Destination : 3c:fd:fe:a2:ac:98
Source : 9c:69:b4:66:12:0c
NUMA/Vend:ID/PCI : -1/8086:1583/06:00.0
-- Pktgen 25.08.1 Powered by DPDK 25.11.0 (pid:1649) -------------------------
Feb 16 22:46:45 Swapped in new working snapshot: 2082447 sessions
Feb 16 22:46:45 rates sess_del=3707.0/sec req=92.7/sec failures=0
Feb 16 22:46:46 rates sess_del=70897.2/sec req=1772.4/sec failures=0
Feb 16 22:46:47 rates sess_del=70934.0/sec req=1773.4/sec failures=0
Feb 16 22:46:49 rates sess_del=39035.6/sec req=975.9/sec failures=0
Feb 16 22:46:52 rates sess_del=3381.1/sec req=84.5/sec failures=0
Feb 16 22:46:52 Snapshot refresh OK: 2081633 sessions (build_ms=4050.1 next_poll_wait_ms=1949.9 rcli_t0=39.497s)
Feb 16 22:46:52 Swapped in new working snapshot: 2081633 sessions
Feb 16 22:46:53 rates sess_del=51870.2/sec req=1296.8/sec failures=0
Feb 16 22:46:54 rates sess_del=70640.9/sec req=1766.0/sec failures=0
Feb 16 22:46:56 rates sess_del=3264.9/sec req=81.6/sec failures=0
Feb 16 22:46:57 rates sess_del=60.6/sec req=1.5/sec failures=0
Feb 16 22:46:57 Snapshot refresh OK: 2081810 sessions (build_ms=3293.3 next_poll_wait_ms=2706.6 rcli_t0=45.497s)
Feb 16 22:46:57 Swapped in new working snapshot: 2081810 sessions
Feb 16 22:46:58 rates sess_del=42984.7/sec req=1074.6/sec failures=0
Feb 16 22:46:59 rates sess_del=70172.6/sec req=1754.3/sec failures=0
Feb 16 22:47:00 rates sess_del=71307.6/sec req=1782.7/sec failures=0
Feb 16 22:47:02 rates sess_del=942.0/sec req=23.5/sec failures=0
Feb 16 22:47:03 rates sess_del=352.9/sec req=8.8/sec failures=0
Feb 16 22:47:04 rates sess_del=465.9/sec req=11.6/sec failures=0
Feb 16 22:47:04 Snapshot refresh OK: 2081601 sessions (build_ms=3850.6 next_poll_wait_ms=2149.4 rcli_t0=51.497s)
Feb 16 22:47:04 Swapped in new working snapshot: 2081601 sessions
Feb 16 22:47:05 rates sess_del=59430.8/sec req=1485.8/sec failures=0
Feb 16 22:47:06 rates sess_del=71221.9/sec req=1780.5/sec failures=0
Feb 16 22:47:07 rates sess_del=10713.4/sec req=267.8/sec failures=0
Feb 16 22:47:09 rates sess_del=68.2/sec req=1.7/sec failures=0
Feb 16 22:47:09 Snapshot refresh OK: 2082069 sessions (build_ms=3241.1 next_poll_wait_ms=2758.8 rcli_t0=57.498s)
Feb 16 22:47:09 Swapped in new working snapshot: 2082069 sessions
Feb 16 22:47:10 rates sess_del=13433.4/sec req=335.8/sec failures=0
Feb 16 22:47:11 rates sess_del=71316.2/sec req=1782.9/sec failures=0
Feb 16 22:47:12 rates sess_del=70069.7/sec req=1751.7/sec failures=0
Feb 16 22:47:14 rates sess_del=17511.9/sec req=437.8/sec failures=0
Feb 16 22:47:15 rates sess_del=73.4/sec req=1.8/sec failures=0
Feb 16 22:47:15 Snapshot refresh OK: 2056869 sessions (build_ms=3357.5 next_poll_wait_ms=2642.5 rcli_t0=63.498s)
Feb 16 22:47:15 Swapped in new working snapshot: 2056869 sessions
Feb 16 22:47:16 rates sess_del=26716.0/sec req=667.9/sec failures=0
Feb 16 22:47:17 rates sess_del=71076.0/sec req=1776.9/sec failures=0
Feb 16 22:47:18 rates sess_del=69970.3/sec req=1749.3/sec failures=0
root@alex-gerbeast:~# rcli sh nat64 maps
NAT64 map
map id: 1
ipv6 dst prefix: fd12:3456:789a:11:2233:4455::/96
out ranges:
10.82.0.0 - 10.82.3.255, port-block usage 131072/258048, %50
active sessions: 2131660
max internal hosts: 140000
max sessions: 3000000
in/out ratio: 256
port block size: 256
port blocks per out address: 252
port blocks per in address: 4
NEL session: enabled
NEL port-block: disabled
state counters:
unknown: 0
active: 2082002
syn_sent: 0
established: 0
fin_wait: 0
close_wait: 0
closing: 0
last_ack: 0
closed: 0
icmp_active: 0
dns: 0
gre: 0
root@alex-gerbeast:~# rcli sh nat64 sessions map 1 | wc -l
2083752
root@alex-gerbeast:~# rcli sh nat64 stat; sleep 30; rcli sh nat64 stat;
map 1
no session entries 0
session egress map full 0
session ingress map error 0
session ingress key already exists 0
bib being deleted 22
bib full 0
bib collision 0
cmbuf pool empty 0
cmbuf tx error 0
worker message ring full 0
unsupported icmp error message 0
unsupported icmp message 0
unsupported icmp inner proto 0
icmpv4 error 0
icmpv6 error 0
incorrect packet 0
packet too big (> mtu) 0
packet transformation error 0
no int_ip entries 0
no translated addresses available 0
no port-blocks available 0
no bib entries available 0
no translated ports available 0
unsupported packet 0
no fragmentation entries available 0
fragmentation map full 0
no L4 header in the first fragment 0
unsupported L4 proto 0
ingress bib not found 0
fragmentation error 0
session not found 0
total session created 14768173
total session deleted 12671065
map 1
no session entries 0
session egress map full 0
session ingress map error 0
session ingress key already exists 0
bib being deleted 22
bib full 0
bib collision 0
cmbuf pool empty 0
cmbuf tx error 0
worker message ring full 0
unsupported icmp error message 0
unsupported icmp message 0
unsupported icmp inner proto 0
icmpv4 error 0
icmpv6 error 0
incorrect packet 0
packet too big (> mtu) 0
packet transformation error 0
no int_ip entries 0
no translated addresses available 0
no port-blocks available 0
no bib entries available 0
no translated ports available 0
unsupported packet 0
no fragmentation entries available 0
fragmentation map full 0
no L4 header in the first fragment 0
unsupported L4 proto 0
ingress bib not found 0
fragmentation error 0
session not found 0
total session created 15597837
total session deleted 13500705
CPS = (15,597,837 - 14,768,173) / 30 = 829,664 / 30 ≈ 27.7k sessions/sec
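Equivalently, as a snippet (counters taken from the two snapshots above):
created_t0, created_t1 = 14_768_173, 15_597_837  # 'total session created', 30 s apart
print((created_t1 - created_t0) / 30)            # ≈ 27655 sessions/sec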
Pktgen stopped. Final pktgen stats.
\ <Main Page> Ports 0-0 of 1 Copyright(c) <2010-2025>, Intel Corporation
Port:Flags : 0:P------R-VLAN Single
Link State : <UP-40000-FD> ---Total Rate---
Pkts/s Rx : 0 0
Tx : 0 0
MBits/s Rx/Tx : 0/0 0/0
Total Rx Pkts : 7,125,382,225 12,014,113
Tx Pkts : 7,125,387,712 12,013,024
Rx/Tx MBs : 8,436,452/8,436,459
Pkts/s Rx Max : 12,014,113
Tx Max : 12,013,024
Errors Rx/Tx : 0/0
Broadcast : 0
Multicast : 0
Sizes 64 : 0
65-127 : 0
128-255 : 7,125,382,225
256-511 : 0
512-1023 : 0
1024-1518 : 0
Runts/Jumbos : 0/0
ARP/ICMP Pkts : 0/0
Rx/Tx queue cnt : 1/1
Tx Count/% Rate : Forever /35%
Pkt Size/Rx:Tx Burst: 128 / 64: 32
Port Src/Dest : 20000/12345
Type:VLAN ID:Flags : IPv6 / UDP:0001:0010
IP Destination : fd12:3456:789a:11:22+17
Source : 2001:db8:1::/96
MAC Destination : 3c:fd:fe:a2:ac:98
Source : 9c:69:b4:66:12:0c
NUMA/Vend:ID/PCI : -1/8086:1583/06:00.0
Lost pkts = Total Tx Pkts - Rx Pkts = (7,125,387,712 - 7,125,382,225) = 5,487
Final BisonRouter port stats.
root@alex-gerbeast:~# rcli sh port general stat
port 0
pkts rx 7783490910
pkts tx 7783387327
bytes rx 965839317372
bytes tx 965140029490
errors rx 0
errors tx 0
missed 5537634
rx_nombuf 0
port 1
pkts rx 7783387310
pkts tx 7783490902
bytes rx 809483053988
bytes tx 809483053988
errors rx 0
errors tx 0
missed 103592
rx_nombuf 0
- Missed = (103,592 + 5,537,634) - (5,534,562 + 101,177) = 5,487
- Missed % = 5,487 / (7,125,382,225 + 7,125,387,712) * 100 % = 0.0000385 %
- The missed value reported by BisonRouter matches the packet loss calculated from the pktgen TX - RX counters.
Summary and interpretation
BisonRouter is able to perform NAT64 translation at a total rate of 24 Mpps (12 Mpps in each direction) with effectively zero packet loss (0.0000385 % in Test 2), while maintaining 2M active NAT64 translation sessions and 131k active subscribers and continuously deleting and re-creating NAT64 translations at approximately 27–30k sessions per second.