This manual provides additional usage details for the FPGA. Specifically, it covers software development flows and testing procedures. Refer to the FPGA Setup guide for initial setup instructions.
The FPGA is meant for both boot ROM and general software development. The flow for each is different, as the boot ROM is meant to be fairly static while general software can change very frequently. However, both flows require Vivado to be installed, with instructions described here.
The FPGA bitstream is built after compiling whatever code currently resides in `sw/device/lib/testing/test_rom`.
This binary is used to initialize internal FPGA memory and is part of the bitstream directly.
To update this content without rebuilding the FPGA, a flow is required to splice a new boot ROM binary into the bitstream.
First, you must splice the new content into an existing bitstream.
Then, you can flash the new bitstream onto the FPGA with `opentitantool`.
There are two ways to splice content into a bitstream:

- Define a Bazel target (or use an existing one). For example, see the `//hw/bitstream:rom` target defined in `hw/bitstream/BUILD` (a build sketch appears further below).
- Use the `splice_rom.sh` script.

There are two prerequisites for this flow to work:
- The boot ROM must be correctly inferred by the tool during the build process.
  - See `prim_rom` and `vivado_hook_opt_design_post.tcl`.
- The MMI file outlining the physical boot ROM placement and its mapping to FPGA block RAM primitives needs to be generated by the tool.
With these steps in place, a script can be invoked to take a new binary and push its contents into an existing bitfile.
For details, please see the `splice_rom.sh` script.
See the example below:

```console
$ cd $REPO_TOP
$ ./util/fpga/splice_rom.sh
$ bazel run //sw/host/opentitantool fpga load-bitstream build/lowrisc_systems_chip_earlgrey_cw310_0.1/synth-vivado/lowrisc_systems_chip_earlgrey_cw310_0.1.bit
```
The script assumes that there is an existing bitfile `build/lowrisc_systems_chip_earlgrey_cw310_0.1/synth-vivado/lowrisc_systems_chip_earlgrey_cw310_0.1.bit` (this is created after following the steps in FPGA Setup).
The script also assumes that there is an existing boot ROM image under `build-bin/sw/device/lib/testing/test_rom`, and it creates a new bitfile of the same name at the same location.
The original input bitfile is moved to `build/lowrisc_systems_chip_earlgrey_cw310_0.1/synth-vivado/lowrisc_systems_chip_earlgrey_cw310_0.1.bit.orig`.
`opentitantool` can then be used to directly flash the updated bitstream to the FPGA.
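If you prefer the Bazel route from the list above, a minimal sketch is to build the `//hw/bitstream:rom` target and then flash whichever bitstream file it reports as output. The load-bitstream path below is a placeholder, not a known output location:

```sh
# Build the spliced bitstream via the Bazel target mentioned above.
bazel build //hw/bitstream:rom
# Flash the result; <path-to-spliced-bitstream> is a placeholder for the
# output file reported by the build.
bazel run //sw/host/opentitantool fpga load-bitstream <path-to-spliced-bitstream>
```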
After building, the FPGA bitstream contains only the boot ROM.
Using this boot ROM, the FPGA is able to load additional software into the emulated flash, such as the software in the `sw/device/benchmark`, `sw/device/examples`, and `sw/device/tests` directories.
To load additional software, `opentitantool` is required, and the binary you wish to load must be built first.
For the purpose of this demonstration, we will use `sw/device/examples/hello_world`, but the flow applies to any software image that fits in the emulated flash space.
The example below builds the `hello_world` image and loads it onto the FPGA.
```console
$ cd ${REPO_TOP}
$ bazel run //sw/host/opentitantool fpga set-pll # This needs to be done only once.
$ bazel build //sw/device/examples/hello_world:hello_world_fpga_cw310_bin
$ bazel run //sw/host/opentitantool bootstrap $(ci/scripts/target-location.sh //sw/device/examples/hello_world:hello_world_fpga_cw310_bin)
```
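To watch the UART output shown below, attach a serial terminal to the board's UART before bootstrapping. This is only a sketch: the device path and baud rate are assumptions that depend on your host and board configuration.

```sh
# Assumption: the board's UART enumerates as /dev/ttyUSB0 and runs at 115200 baud;
# adjust both values for your setup.
screen /dev/ttyUSB0 115200
```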
UART output:

```console
I00000 test_rom.c:81] Version: earlgrey_silver_release_v5-5886-gde4cb1bb9, Build Date: 2022-06-13 09:17:56
I00001 test_rom.c:87] TestROM:6b2ca9a1
I00000 test_rom.c:81] Version: earlgrey_silver_release_v5-5886-gde4cb1bb9, Build Date: 2022-06-13 09:17:56
I00001 test_rom.c:87] TestROM:6b2ca9a1
I00002 test_rom.c:118] Test ROM complete, jumping to flash!
I00000 hello_world.c:66] Hello World!
I00001 hello_world.c:67] Built at: Jun 13 2022, 14:16:59
I00002 demos.c:18] Watch the LEDs!
I00003 hello_world.c:74] Try out the switches on the board
I00004 hello_world.c:75] or type anything into the console window.
I00005 hello_world.c:76] The LEDs show the ASCII code of the last character.
```
For more details on the exact operation of the loading flow and how the boot ROM processes incoming data, please refer to the boot ROM readme.
To set the stage, let's say you've discovered a test regression.
The test used to pass on `GOOD_COMMIT`, but now it fails at `BAD_COMMIT`.
Your goal is to find the first bad commit.
In general, a linear search from `GOOD_COMMIT` to `BAD_COMMIT` is one of the slowest ways to find the first bad commit.
We can save time by testing fewer commits with `git bisect`, which effectively applies binary search to the range of commits.
We can save even more time by leveraging the bitstream cache with `//util/fpga:bitstream_bisect`.
The `:bitstream_bisect` tool is faster than regular `git bisect` because it restricts itself to cached bitstreams until it can make no more progress.
Building a bitstream is many times slower than running a test (hours compared to minutes), and `git bisect` has no idea that some commits will be faster to classify than others due to the bitstream cache.
For example, suppose that `//sw/device/tests:uart_smoketest` has regressed sometime in the last 30 commits.
The following command could easily save hours compared to a naive `git bisect`:
```sh
# This will use the fast command to classify commits with cached bitstreams. If
# the results are ambiguous, it will narrow them down with the slow command.
./bazelisk.sh run //util/fpga:bitstream_bisect -- \
    --good HEAD~30 \
    --bad HEAD \
    --fast-command "./bazelisk.sh test //sw/device/tests:uart_smoketest_fpga_cw310_rom" \
    --slow-command "./bazelisk.sh test --define bitstream=vivado //sw/device/tests:uart_smoketest_fpga_cw310_rom"
```
One caveat is that neither `git bisect` nor `:bitstream_bisect` will help if the FPGA somehow retains state between tests.
That is, if the test command bricks the FPGA, causing future tests to fail, bisection will return entirely bogus results.
We plan to add a "canary" feature to `:bitstream_bisect` that will abort the bisect when FPGA flakiness is detected (issue #16788).
For now, if you suspect this kind of FPGA flakiness, the best strategy may be a linear walk from `GOOD_COMMIT` to `BAD_COMMIT`, as sketched below.
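A minimal sketch of that linear walk follows. It assumes `GOOD_COMMIT` and `BAD_COMMIT` are set in your shell and reuses the smoke test target from the example above; substitute your own test command as needed.

```sh
# Walk the commit range oldest-to-newest and stop at the first failing commit.
# GOOD_COMMIT and BAD_COMMIT are placeholders; the test target is only an example.
for commit in $(git rev-list --reverse "${GOOD_COMMIT}..${BAD_COMMIT}"); do
  git checkout "${commit}"
  ./bazelisk.sh test //sw/device/tests:uart_smoketest_fpga_cw310_rom || break
done
```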
Note that the slow command doesn't necessarily have to build a bitstream.
If you don't have a Vivado license and the test regression is reproducible in Verilator, it could make sense to fall back to the Verilated test.
Building on the example above, you could replace the slow command with `"./bazelisk.sh test //sw/device/tests:uart_smoketest_sim_verilator"` and the `:bitstream_bisect` tool would never build any bitstreams.
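Concretely, the earlier invocation would then look like the following sketch, with the same arguments as before and only the slow command swapped out:

```sh
# Same bisect as above, but the slow path falls back to the Verilated test
# instead of building a Vivado bitstream.
./bazelisk.sh run //util/fpga:bitstream_bisect -- \
    --good HEAD~30 \
    --bad HEAD \
    --fast-command "./bazelisk.sh test //sw/device/tests:uart_smoketest_fpga_cw310_rom" \
    --slow-command "./bazelisk.sh test //sw/device/tests:uart_smoketest_sim_verilator"
```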
For more information, consult the `:bitstream_bisect` tool directly:

```sh
./bazelisk.sh run //util/fpga:bitstream_bisect -- --help
```
This section gives an overview of where bitstreams are generated, how they are uploaded to the GCP cache, and how Bazel reaches into the cache.
OpenTitan runs CI tasks on Azure Pipelines that build FPGA bitstreams. A full bitstream build can take hours, so we cache the output artifacts in a GCS bucket. These cached bitstreams can be downloaded and used as-is, or we can splice in freshly-compiled components, including the ROM and the OTP image.
The `chip_earlgrey_cw310` CI job builds the `//hw/bitstream/vivado:standard` target, which builds a bitstream with the test ROM and the RMA OTP image.
This target also produces bitstreams with the ROM spliced in and with the DEV OTP image spliced in.
The following files are produced as a result:

- `fpga_cw310_rom.bit` (ROM, RMA OTP image)
- `fpga_cw310_rom_otp_dev.bit` (ROM, DEV OTP image)
- `lowrisc_systems_chip_earlgrey_cw310_0.1.bit` (test ROM, RMA OTP image)
- `otp.mmi`
- `rom.mmi`
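If you have a Vivado license and want to reproduce these artifacts locally, one option is to build the same target CI uses. This is a sketch, not the exact CI invocation, and a full build takes hours:

```sh
# Builds the same target CI uses; requires Vivado and takes hours.
./bazelisk.sh build //hw/bitstream/vivado:standard
```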
If CI is working on the `master` branch, it puts selected build artifacts into a tarball, which it then uploads to the GCS bucket.
The latest tarball is available here: https://storage.googleapis.com/opentitan-bitstreams/master/bitstream-latest.tar.gz
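Bazel normally fetches this cache for you (see below), but if you want to inspect the cached artifacts by hand you can download and unpack the tarball directly; the output directory name here is just an example:

```sh
# Download the latest cached bitstream tarball and unpack it for inspection.
curl -LO https://storage.googleapis.com/opentitan-bitstreams/master/bitstream-latest.tar.gz
mkdir -p bitstream-latest   # example directory name
tar -xzf bitstream-latest.tar.gz -C bitstream-latest
```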
The `@bitstreams//` workspace contains autogenerated Bazel targets for the GCS-cached artifacts.
This magic happens in `rules/scripts/bitstreams_workspace.py`.
Under the hood, it fetches the latest tarball from the GCS bucket and constructs a BUILD file that defines one target per artifact.
One meta-level up, we have targets in `//hw/bitstream` that decide whether to use cached artifacts or to build them from scratch.
By default, these targets use cached artifacts by pulling in their corresponding `@bitstreams//` targets.
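To opt out of the cache for a test run, the bisect example earlier passes `--define bitstream=vivado`; the same define can be used in an ordinary test invocation (the test target here is just the earlier example):

```sh
# Force a from-scratch Vivado bitstream build instead of using the GCS cache.
./bazelisk.sh test --define bitstream=vivado //sw/device/tests:uart_smoketest_fpga_cw310_rom
```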
- TODO Define the new naming scheme. #13807