To stay up to date with the latest developments and get support, consider joining the `mailing list <https://groups.google.com/d/forum/corundum-nic>`_ and `Zulip <https://corundum.zulipchat.com/>`_.
The main `upstream repository for Corundum <https://github.com/corundum/corundum/>`_ is located on `GitHub <https://github.com/>`_. There are two main ways to download the source code - downloading an archive, or cloning with git.
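For example, the repository can be cloned with git (a standard clone of the upstream repository linked above)::
$ git clone https://github.com/corundum/corundum.git
$ cd corundum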
There is also a `mirror of the repository <https://gitee.com/alexforencich/corundum/>`_ on `gitee <https://gitee.com/>`_; here are the equivalent commands::
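$ git clone https://gitee.com/alexforencich/corundum.git
$ cd corundum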
Corundum currently uses `Icarus Verilog <http://iverilog.icarus.com/>`_ and `cocotb <https://github.com/cocotb/cocotb>`_ for simulation. Linux is the recommended operating system for a development environment due to the use of symlinks, which are not properly supported by Windows filesystems and can therefore cause problems; however, WSL may also work well.
The required system packages are:
* Python 3 (``python`` or ``python3``, depending on distribution)
* Icarus Verilog (``iverilog``)
* GTKWave (``gtkwave``)
The required python packages are:
* ``cocotb``
* ``cocotb-bus``
* ``cocotb-test``
* ``cocotbext-axi``
* ``cocotbext-eth``
* ``cocotbext-pcie``
* ``pytest``
* ``scapy``
Recommended additional python packages:
* ``tox`` (to run pytest inside a python virtual environment)
* ``pytest-xdist`` (to run tests in parallel with ``pytest -n auto``)
* ``pytest-sugar`` (makes pytest output a bit nicer)
It is recommended to install the required system packages via the system package manager (``apt``, ``yum``, ``pacman``, etc.) and then install the required Python packages as user packages via ``pip`` (or ``pip3``, depending on distribution).
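For example, on a Debian-based distribution the installation might look like the following (package names and the exact pip invocation vary between distributions)::
$ sudo apt install python3 python3-pip iverilog gtkwave
$ pip3 install --user cocotb cocotb-bus cocotb-test cocotbext-axi cocotbext-eth cocotbext-pcie
$ pip3 install --user pytest pytest-xdist pytest-sugar scapy tox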
Running tests
=============
Once the packages are installed, you should be able to run the tests. There are several ways to do this.
First, all tests can be run by running ``tox`` in the repo root. In this case, tox will set up a python virtual environment and install all python dependencies inside the virtual environment. Additionally, tox will run pytest as ``pytest -n auto`` so it will run tests in parallel on multiple CPUs. ::
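$ cd /path/to/corundum/
$ tox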
Second, all tests can be run by running ``pytest`` in the repo root. Running as ``pytest -n auto`` is recommended to run multiple tests in parallel on multiple CPUs. ::
$ cd /path/to/corundum/
$ pytest -n auto
============================= test session starts ==============================
platform linux -- Python 3.9.7, pytest-6.2.5, py-1.10.0, pluggy-0.13.1
======================= 69 passed in 2032.42s (0:33:52) =====================
Third, groups of tests can be run by running ``pytest`` in a subdirectory. Running as ``pytest -n auto`` is recommended to run multiple tests in parallel on multiple CPUs. ::
$ cd /path/to/corundum/fpga/common/tb/rx_hash
$ pytest -n 4
============================= test session starts ==============================
platform linux -- Python 3.9.7, pytest-6.2.5, py-1.10.0, pluggy-0.13.1
============================== 2 passed in 37.49s ==============================
Finally, individual tests can be run by running ``make``. This method provides the capability of overriding parameters and enabling waveform dumps in FST format that are viewable in GTKWave. ::
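$ cd /path/to/corundum/fpga/common/tb/rx_hash
$ make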
-.--ns INFO cocotb.gpi ..mbed/gpi_embed.cpp:76 in set_program_name_in_venv Did not detect Python virtual environment. Using system-wide Python interpreter
-.--ns INFO cocotb.gpi ../gpi/GpiCommon.cpp:99 in gpi_print_registered_impl VPI registered
0.00ns INFO Running on Icarus Verilog version 11.0 (stable)
0.00ns INFO Running tests with cocotb v1.7.0.dev0 from /home/alex/.local/lib/python3.9/site-packages/cocotb
0.00ns INFO Seeding Python random module with 1643529566
0.00ns INFO Found test test_rx_hash.run_test
0.00ns INFO Found test test_rx_hash.run_test
0.00ns INFO Found test test_rx_hash.run_test
0.00ns INFO Found test test_rx_hash.run_test
0.00ns INFO Found test test_rx_hash.run_test
0.00ns INFO Found test test_rx_hash.run_test
0.00ns INFO Found test test_rx_hash.run_test
0.00ns INFO Found test test_rx_hash.run_test
0.00ns INFO running run_test (1/8)
0.00ns INFO AXI stream source
0.00ns INFO cocotbext-axi version 0.1.19
0.00ns INFO Copyright (c) 2020 Alex Forencich
0.00ns INFO https://github.com/alexforencich/cocotbext-axi
Building FPGA configurations for Xilinx devices requires `Vivado <https://www.xilinx.com/products/design-tools/vivado.html>`_. Linux is the recommended operating system for a build environment due to the use of symlinks (which can cause problems on Windows) and makefiles for build automation. Additionally, Vivado uses more CPU cores for building on Linux than on Windows. It is not recommended to run Vivado inside of a virtual machine as Vivado uses a significant amount of RAM during the build process. Download and install the appropriate version of Vivado. Make sure to install device support for your target device; support for other devices can be disabled to save disk space.
Licenses may be required, depending on the target device. A bare install of Vivado without any licenses runs in "WebPACK" mode and has limited device support. If your target device is on the `WebPACK device list <https://www.xilinx.com/products/design-tools/vivado/vivado-webpack.html#architecture>`_, then no Vivado license is required. Otherwise, you will need access to a Vivado license to build the design.
Additionally, the 100G MAC IP cores on UltraScale and UltraScale+ require separate licenses. These licenses are free of charge, and can be generated for `UltraScale <https://www.xilinx.com/products/intellectual-property/cmac.html>`_ and `UltraScale+ <https://www.xilinx.com/products/intellectual-property/cmac_usplus.html>`_. If your target design uses the 100G CMAC IP, then you will need one of these licenses to build the design.
For example: if you want to build a 100G design for an Alveo U50, you will not need a Vivado license as the U50 is supported under WebPACK, but you will need to generate a (free-of-charge) license for the CMAC IP for UltraScale+.
Before building a design with Vivado, you'll have to source the appropriate settings file. For example::
$ source /opt/Xilinx/Vivado/2020.2/settings64.sh
$ make
Building the FPGA configuration
===============================
Each design contains a set of makefiles for automating the build process. To use the makefile, simply source the settings file for the required toolchain and then run ``make``. Note that the repository makes significant use of symbolic links, so it is highly recommended to build the design under Linux.
For example::
$ cd /path/to/corundum/fpga/mqnic/[board]/fpga_[variant]/fpga
$ source /opt/Xilinx/Vivado/2020.2/settings64.sh
$ make
Building the driver
===================
To build the driver, you will first need to install the required compiler and kernel source code packages. After these packages are installed, simply run ``make``. ::
$ cd /path/to/corundum/modules/mqnic
$ make
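For example, on a Debian-based distribution the prerequisite packages can typically be installed like this (package names vary by distribution, and the kernel headers must match the running kernel)::
$ sudo apt install build-essential linux-headers-$(uname -r)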
Note that the driver currently does not support RHEL, CentOS, and related distributions that use very old and significantly modified kernels, where the reported kernel version number is not a reliable indicator of the internal kernel API.
Building the userspace tools
============================
To build the userspace tools, you will first need to install the required compiler packages. After these packages are installed, simply run ``make``. ::
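$ cd /path/to/corundum/utils
$ make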
Building PetaLinux projects for Xilinx devices requires `PetaLinux Tools <https://www.xilinx.com/products/design-tools/embedded-software/petalinux-sdk.html>`_. Linux is the recommended operating system for a build environment due to the use of symlinks (which can cause problems on Windows) and makefiles for build automation. Download and install the appropriate version of PetaLinux Tools. Make sure to install device support for your target device; support for other devices can be disabled to save disk space.
An example PetaLinux project in Corundum accompanies the FPGA design that uses the Xilinx ZynqMP SoC as the host system for mqnic on the Xilinx ZCU106 board; see ``fpga/mqnic/ZCU106/fpga_zynqmp/README.md``.
Before building a PetaLinux project, you'll have to source the appropriate settings file. For example::
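# adjust the install path and version to match your PetaLinux installation
$ source /opt/Xilinx/PetaLinux/2020.2/settings.sh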
There are three main ways to load Corundum onto an FPGA board. The first is via JTAG, into volatile FPGA configuration memory. This is best for development and debugging, especially when complemented with a baseline design with the same PCIe interface configuration stored in flash. The second is via indirect JTAG, into nonvolatile on-card flash memory. This is quite slow. The third is via PCI express, into nonvolatile on-card flash memory. This is the fastest method of programming the flash, but it requires the board to already be running the Corundum design.
For a card that's not already running Corundum, there are two options for programming the flash. The first is to use indirect JTAG, but this is very slow. The second is to first load the design via JTAG into volatile configuration memory, then perform a warm reboot, and finally write the design into flash via PCIe with the ``mqnic-fw`` utility.
Loading the design via JTAG into volatile configuration memory with Vivado is straightforward: install the card into a host computer, attach the JTAG cable, power up the host computer, and use Vivado to connect and load the bit file into the FPGA. When using the makefile, run ``make program`` to program the device. If physical access is a problem, it is possible to run a hardware server instance on the host computer and connect to the hardware server over the network. Once the design is loaded into the FPGA, perform either a hot reset (via ``pcie_hot_reset.sh`` or ``mqnic-fw -t``, but only if the card was enumerated at boot and the PCIe configuration has not changed) or a warm reboot.
Loading the design via indirect JTAG into nonvolatile memory with Vivado requires basically the same steps as loading it into volatile configuration memory; the main difference is that the configuration flash image must first be generated by running ``make fpga.mcs`` after using ``make`` to generate the bit file. Once this file is generated, connect with the hardware manager, add the configuration memory device (check the makefile for the part number), and program the flash. After the programming operation is complete, boot the FPGA from the configuration memory, either via Vivado (right click -> boot from configuration memory) or by performing a cold reboot (full shut down, then power on). When using the makefile, run ``make flash`` to generate the flash images, program the flash via indirect JTAG, and boot the FPGA from the configuration memory. Finally, reboot the host computer to re-enumerate the PCIe bus.
Loading the design via PCI express is straightforward: use the ``mqnic-fw`` utility to load the bit file into flash, then trigger an FPGA reboot to load the new design. This does not require the kernel module to be loaded. With the kernel module loaded, point ``mqnic-fw`` either to ``/dev/mqnic<n>`` or to one of the associated network interfaces. Without the kernel module loaded, point ``mqnic-fw`` either to the raw PCIe ID, or to ``/sys/bus/pci/devices/<pcie-id>/resource0``; check ``lspci`` for the PCIe ID. Use ``-w`` to specify the bit file to load, then ``-b`` to command the FPGA to reset and reload its configuration from flash. You can also use ``-t`` to trigger a hot reset to reset the design.
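For example, writing a new bit file into flash and then commanding the FPGA to reboot from the updated image might look like the following (the PCIe ID and file name are placeholders; the flags are described above)::
$ sudo ./mqnic-fw -d 81:00.0 -w fpga.bit
$ sudo ./mqnic-fw -d 81:00.0 -b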
Query device information with ``mqnic-fw``, with no kernel module loaded::
$ sudo ./mqnic-fw -d 81:00.0
PCIe ID (device): 0000:81:00.0
PCIe ID (upstream port): 0000:80:01.1
FPGA ID: 0x04b77093
FPGA part: XCU50
FW ID: 0x00000000
FW version: 0.0.1.0
Board ID: 0x10ee9032
Board version: 1.0.0.0
Build date: 2022-01-05 08:33:23 UTC (raw 0x61d557d3)
The driver will attempt to read MAC addresses from the card. If it fails, it will fall back on random MAC addresses. On some cards, the MAC addresses are fixed and cannot be changed; on other cards they are written to user-accessible EEPROM and as such can be changed. Some cards with EEPROM come with blank EEPROMs, so if you want a persistent MAC address, you'll have to write a base MAC address into the EEPROM. Finally, some cards do not have an EEPROM for storing MAC addresses, and persistent MAC addresses are not currently supported on these cards.
Testing the design
==================
To test the design, connect it to another NIC, either directly with a DAC cable or similar, or via a switch.
Before performing any testing, an IP address must be assigned through the Linux kernel. There are various ways to do this, depending on the distribution in question. For example, using ``iproute2``::
$ sudo ip link set dev enp129s0 up
$ sudo ip addr add 10.0.0.2/24 dev enp129s0
You can also change the MTU setting::
$ sudo ip link set mtu 9000 dev enp129s0
Note that NetworkManager can fight over the network interface configuration (depending on the Linux distribution). If the IP address disappears from the interface, then this is likely the fault of NetworkManager as it attempts to dynamically configure the interface. One solution is simply to use NetworkManager to configure the interface instead of ``iproute2``. Another is to statically configure the interface using configuration files (e.g. ``/etc/network/interfaces`` on Debian-based distributions) so that NetworkManager will leave it alone.
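For example, a minimal static stanza in ``/etc/network/interfaces`` (Debian-style ifupdown, reusing the interface name and address from the example above) might look like::
auto enp129s0
iface enp129s0 inet static
    address 10.0.0.2/24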
Once the card is configured, using ``ping`` is a good first test::
$ ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.109 ms
^C
--- 10.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1052ms