The Supercomputing (SC) conference is held each year in the United States. In 2025, it was held in St. Louis, Missouri, from Nov. 16 to Nov. 21. Network Research Exhibitions (NREs) are a key part of the SC program; they demonstrate cutting-edge usage of the network, both in terms of raw capacity and advanced networking features. HEPnet Canada led an NRE at SC25 encompassing both of these aspects. Over 1 Tbps of bandwidth was achieved from the University of Victoria (UVic) and Simon Fraser University (SFU) to the show floor in St. Louis, a record for an international NRE and the highest bandwidth ever achieved at a Canadian university.

In addition, the NRE demonstrated the capability to inject information at the packet level identifying the owner and purpose of the traffic. This was done as part of the Scitags project, an initiative led by CERN whose goal is to improve visibility into the usage of research and education (R&E) networks by major science experiments. This information was injected while still achieving terabit-scale bandwidth, a major step towards demonstrating the feasibility of using Scitags in production networks.

Setup

The endpoints used for the transfers were high-performance servers called Data Transfer Nodes (DTNs). The transfers originated from three 400G DTNs at UVic and one 400G DTN at SFU. The SFU DTN was connected to UVic by a 400G link; from UVic, the wide-area network (WAN) path to St. Louis consisted of three separate 400G links. The SFU DTN and one of the UVic DTNs combined to fill one of the 400G WAN links; the other two UVic servers were each put on their own 400G link (see Figure 1 below). On the show floor there were three 400G DTNs to receive the transfers.

The SFU DTN, network interface card (NIC), and transceivers were provided by Lenovo, with a cable provided by Arista. Two of the UVic DTNs and NICs were provided in 2025 by Lenovo; the third DTN was provided in 2023 by Dell, with a NIC provided by NVIDIA. The NIC and CPU models are listed below.

  1. SFU
    • CPU: AMD EPYC 9334 32-core processor, single-socket
    • NIC: Broadcom 57608
  2. UVic Server 1
    • CPU: Intel Xeon 6730 32-core processor, dual-socket
    • NIC: Broadcom 57608
  3. UVic Server 2
    • CPU: Intel Xeon 6730 32-core processor, dual-socket
    • NIC: Broadcom 57608
  4. UVic Server 3
    • CPU: Intel Xeon Platinum 8462Y+ 32-core processor, dual-socket
    • NIC: Mellanox CX-7

Network Connectivity

NREs connect to the SC show floor via SCinet, a high-performance network that is built and torn down each year for the conference. The connection from the University of Victoria and Simon Fraser University to SCinet traverses many different networks and partner organizations; a diagram is shown in Figure 1.


Scitags NRE Network Diagram

Figure 1: The wide-area network connectivity for the Scitags NRE.

Scitags

The Scitags initiative aims to improve visibility into R&E network usage by labelling traffic with the owner and purpose of the traffic. The combination of the owner (e.g. ATLAS) and the activity (e.g. data consolidation) is referred to as a scitag. The information is stored in the IPv6 header; two approaches are under consideration. The first uses the 20-bit flow label field, and the second uses a destination options extension header. The flow label approach is expected to have lower overhead, while the extension header approach has greater flexibility. The packets are modified using flowd-go, software written by Pablo Soto of the Scitags group for this purpose.
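As a concrete illustration of the flow-label approach, the sketch below packs an owner ID and an activity ID into a single 20-bit value that fits in the IPv6 flow label field. The 10/10 bit split and the example IDs are hypothetical; the actual bit layout is defined by the Scitags specification.

```python
def pack_scitag(owner_id: int, activity_id: int) -> int:
    """Pack an owner and activity ID into a 20-bit flow label value.

    The 10/10 bit split used here is hypothetical; the real layout
    is defined by the Scitags specification.
    """
    if not (0 <= owner_id < 1 << 10) or not (0 <= activity_id < 1 << 10):
        raise ValueError("IDs must each fit in 10 bits")
    return (owner_id << 10) | activity_id

def unpack_scitag(flow_label: int) -> tuple[int, int]:
    """Recover the (owner, activity) pair from a 20-bit flow label value."""
    return flow_label >> 10, flow_label & 0x3FF

# Hypothetical owner 5 with activity 3.
label = pack_scitag(5, 3)
assert label < 1 << 20                    # fits in the IPv6 flow label field
assert unpack_scitag(label) == (5, 3)     # round-trips losslessly
```

Whatever the final layout, the key property is the one checked above: the combined tag fits in 20 bits and can be recovered unambiguously on the receiving side.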

This software uses eBPF to access the underlying socket buffer and either overwrite the flow label field or add an extension header as the packet leaves the kernel network stack. The approach was originally demonstrated with flowd, the Python software package on which flowd-go is based. The initial implementation of using eBPF to modify the packets was done by Tristan Sullivan of HEPnet Canada.

Results

The NRE ran for three hours (11 AM to 2 PM PST) on each of Nov. 17, 18, and 19. The data streams were produced using iperf3, a commonly used network-testing tool. Transfers done with iperf3 are memory-to-memory rather than disk-to-disk; this removes storage as a limiting factor and allows network links to be fully exploited in a straightforward manner.
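iperf3 can report its results as JSON (`iperf3 --json`), which makes aggregating throughput across many servers straightforward. The sketch below sums the receiver-side rates from several such reports; the field names follow iperf3's JSON end-of-run summary, while the reports themselves are synthetic stand-ins for real output.

```python
import json

def total_gbps(reports: list[str]) -> float:
    """Sum receiver-side throughput (in Gbps) across iperf3 JSON reports.

    Each report is the text produced by `iperf3 --json`; the end-of-run
    summary lives under end.sum_received.bits_per_second.
    """
    total_bps = 0.0
    for text in reports:
        data = json.loads(text)
        total_bps += data["end"]["sum_received"]["bits_per_second"]
    return total_bps / 1e9

# Two synthetic reports standing in for real iperf3 output.
r1 = json.dumps({"end": {"sum_received": {"bits_per_second": 380e9}}})
r2 = json.dumps({"end": {"sum_received": {"bits_per_second": 395e9}}})
print(f"{total_gbps([r1, r2]):.1f} Gbps")  # 775.0 Gbps
```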

For the first hour of the NRE, no packet marking was applied; for the second hour, packet marking was applied using the flow label; and for the third hour, packet marking with the destination options header was used. Each sending server was randomly assigned a scitag, which was then applied to each iperf3 stream originating on that server. A plot of the bandwidth obtained on Nov. 17 is shown in Figure 2.
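The per-server tag assignment described above can be sketched as follows; the server names and tag pool are made up for illustration.

```python
import random

servers = ["sfu-dtn", "uvic-dtn1", "uvic-dtn2", "uvic-dtn3"]  # hypothetical names
scitags = list(range(1, 65))                                   # hypothetical tag pool

rng = random.Random(2025)  # seeded so the assignment is reproducible
assignment = {srv: rng.choice(scitags) for srv in servers}

# Every iperf3 stream originating on a server inherits that server's tag.
streams = [(srv, port) for srv in servers for port in range(5201, 5205)]
stream_tags = {stream: assignment[stream[0]] for stream in streams}
```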


Bandwidth plot

Figure 2: The bandwidth achieved by the Scitags NRE. The top-left panel shows the total bandwidth; the other three panels show the bandwidth achieved by the individual VLANs. Breaks can be seen between the no-marking, flow-label-marking, and extension-header-marking parts of the run. The drop around 12:45 PM was caused by the vlan2028 iperf3 flows unexpectedly stopping; this was not due to network failure.

The bandwidth target of 1 Tbps was immediately surpassed; as can be seen in the top-left panel of Figure 2, bandwidth of approximately 1.1 Tbps was sustained throughout the running period. Flow label marking produced no noticeable drop in bandwidth; extension header marking had no effect on VLANs 2027 and 2028, but did have a small impact on VLAN 2026. Extension header marking is expected to carry more overhead than flow label marking, but these results show that with sufficiently performant hardware, even extension header marking has at most a minimal effect on throughput.

For the Scitags initiative to be useful, R&E networks must be able to read the labels applied to the packets on any network device in real time. The capability to read and decode the labels has been demonstrated in previous NREs, and was deployed again this year. To accomplish this, sFlow records were sent from each SFU and UVic server to a collector machine, which extracted the applied labels and plotted the bandwidth seen per label, as shown in Figure 3. The ability to export sFlow records is also standard on R&E routers, so this approach should extend to production network gear. The collector and dashboard were provided by Neil McKee of InMon Corporation.
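Decoding a flow label from a sampled packet requires only the first four bytes of the IPv6 header, which hold the 4-bit version, the 8-bit traffic class, and the 20-bit flow label. A minimal decoder, independent of the actual collector software used in the NRE, might look like:

```python
import struct

def ipv6_flow_label(header: bytes) -> int:
    """Extract the 20-bit flow label from the start of an IPv6 header.

    Per the IPv6 header format, the first 32 bits are:
    version (4 bits) | traffic class (8 bits) | flow label (20 bits).
    """
    if len(header) < 4:
        raise ValueError("need at least 4 bytes of header")
    (first_word,) = struct.unpack("!I", header[:4])  # network byte order
    if first_word >> 28 != 6:
        raise ValueError("not an IPv6 packet")
    return first_word & 0xFFFFF

# Synthetic header word: version 6, traffic class 0, flow label 0x12345.
hdr = struct.pack("!I", (6 << 28) | 0x12345)
print(hex(ipv6_flow_label(hdr)))  # 0x12345
```

A collector applying this to sampled headers can then bin bytes per label, which is essentially what the per-scitag bandwidth plot in Figure 3 shows.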


Scitags plot

Figure 3: The bandwidth per scitag. Each sending server was assigned a random scitag.

Conclusion and Acknowledgements

The 2025 Scitags NRE achieved ground-breaking results. It is the first terabit-scale transfer from a Canadian university, and also the first terabit-scale international NRE. Furthermore, in the context of the Scitags initiative, it provided further evidence that marking packets with flowd-go will not impact network performance when used on production systems.

Many individuals and organizations contributed to the success of this NRE. BCNet, CANARIE, Pacific Wave, Internet2, Starlight, and SCinet provided the WAN connections. Lenovo, Dell, and NVIDIA provided the DTNs and NICs for UVic and SFU. Starlight provided the receiving DTNs and handled their physical installation and setup on the SC exhibition floor.

Scitags NRE Team

  • Tristan Sullivan, University of Victoria, tssulliv@uvic.ca
  • Randall Sobie, University of Victoria
  • Ryan Enge, University of Victoria
  • Hans Yodzis, University of Victoria
  • Lixin Liu, Simon Fraser University
  • Thomas Tam, CANARIE
  • Tom Samplonius, BCNet
  • Joe Mambretti, iCAIR
  • Jim Chen, iCAIR
  • Fei Yeh, iCAIR