The RISC Zero STARK Protocol
The implementation in code for the RISC Zero STARK prover can be seen here. In this document, we present an overview of the RISC Zero STARK protocol, as well as a sequence diagram and a detailed description below. The STARK by Hand explainer and the RISC Zero ZKP Whitepaper are good companions to this document.
Overview
RISC Zero's receipts are built on the shoulders of several recent advances in the world of zero-knowledge cryptography. The core of the proof system is STARK-based, implementing DEEP-ALI & FRI. This proof system is used to generate zero-knowledge validity proofs for RISC Zero's RISC-V circuit and RISC Zero's recursion circuit. Users may also be interested in reading about the [RISC Zero Groth16 Circuit], which enables on-chain verification.
At a high level, the design of the RISC Zero STARK protocol is very similar to the system described in ethSTARK, and the system implemented in Winterfell.
Setup Phase
The protocol includes a two-part setup phase; the first setup happens once per zkVM version, and the second setup establishes the Image ID for a given RISC-V binary file.
Part 1: Circuit Setup
This setup is transparent and establishes the public parameters for the prover & verifier. These public parameters include the number and length of the trace columns, the choice of hash function and Merklization structure, as well as a full enumeration of the constraints that are to be enforced.
Part 2: Program Setup
This phase establishes an Image ID, which is determined transparently from a RISC-V binary file and the circuit parameters. The Image ID is constructed by loading the RISC-V binary file into the zkVM memory, and then recording a Merkle snapshot of the full machine state. This setup can be repeated by anyone with access to the binary file, in order to confirm the correctness of the Image ID.
Main Trace & Auxiliary Trace
After the setup phase, the Prover executes the binary in the zkVM, computes a Low-Degree Extension of each column, and commits the Extended Main Execution Trace. Then, the Prover computes and commits the Extended Auxiliary Execution Trace, which depends on verifier randomness.
Compared to ethSTARK, our protocol adds an additional round of interaction to support constraints beyond basic AIR constraints. Using constraints that may span both the main trace and the auxiliary trace, we proceed with DEEP-ALI & FRI as described in ethSTARK. Adding an Auxiliary Execution Trace enables various enhancements relative to a vanilla STARK protocol. These enhancements are described well in From AIRs to RAPs.
We use this Auxiliary Execution Trace to support:

A permutation argument for memory verification
The permutation argument is currently implemented as a grand product accumulator argument, as in PLONK. We plan to change this to a log derivative accumulator argument in the next version of the circuit.
Here, operations corresponding to memory are committed to the main trace both in the original ordering and the permuted ordering, and grand product accumulators are committed in the auxiliary trace. 
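As a toy illustration (not the circuit's actual code), the grand product technique can be sketched in a few lines of Python. If one column is a permutation of another, the running products of (value + r) agree for any challenge r; a mismatch is caught with high probability over a random r. The field, the columns, and the challenge below are all hypothetical.

```python
# Toy grand product permutation check over a prime field.
# The prime P is the BabyBear prime used by RISC Zero, but any
# prime works for this sketch; the columns and r are made up.
P = 2013265921

def grand_product(column, r):
    """Accumulate the product of (value + r) over a column."""
    acc = 1
    for v in column:
        acc = acc * ((v + r) % P) % P
    return acc

orig = [3, 1, 4, 1, 5]
perm = [1, 5, 3, 4, 1]   # same multiset, different order
r = 123456789            # in the protocol, r comes from verifier randomness

# Equal multisets give equal accumulators; a tampered column does not.
assert grand_product(orig, r) == grand_product(perm, r)
assert grand_product(orig, r) != grand_product([1, 5, 3, 4, 2], r)
```

In the protocol, the two orderings live in the main trace and the running accumulator columns live in the auxiliary trace, with constraints enforcing each accumulation step.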
A lookup argument for range checks
The lookup argument is currently implemented using the approach described in PLOOKUP. We plan to change this to a log derivative accumulator argument in the next version of the circuit.
Here, the tables and the witness are committed in the main trace, and grand product accumulators are committed in the auxiliary trace. 
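For intuition, here is a toy sketch of the log derivative style of lookup mentioned above as the planned replacement (not the current PLOOKUP implementation). Every witness value must appear in the table; this is checked by comparing sums of field inverses at a random challenge. The field, table, witness, and challenge are illustrative.

```python
# Toy log-derivative lookup check: the sum of 1/(w + r) over the
# witness must equal the sum of multiplicity/(t + r) over the table,
# for a random challenge r. All values here are illustrative.
from collections import Counter

P = 2013265921

def inv(x):
    return pow(x, P - 2, P)  # Fermat inverse in the prime field

def witness_sum(witness, r):
    return sum(inv((w + r) % P) for w in witness) % P

def table_sum(table, witness, r):
    counts = Counter(witness)  # multiplicity of each table entry
    return sum(counts[t] * inv((t + r) % P) for t in table) % P

table = list(range(16))      # e.g. a 4-bit range-check table
witness = [3, 7, 7, 0, 15]   # every value lies in the table
r = 987654321

assert witness_sum(witness, r) == table_sum(table, witness, r)
```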
A big integer accelerator to enable fast cryptographic operations
The bigint accelerator implements multiplication of a and b by asking the host to provide the product c as nondeterministic advice. Then, the verifier provides randomness r, and the constraints enforce that when a, b, and c are interpreted as polynomials, a(r) * b(r) == c(r).
Here, a, b, and c are committed in the main trace, and the evaluations at r are committed in the auxiliary trace.
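The check above can be sketched in Python. The limb base, inputs, and challenge are illustrative, and c's limbs are taken as the raw (un-carried) convolution of a's and b's limbs so that the polynomial identity holds exactly.

```python
# Toy version of the bigint accelerator's check: interpret the limbs
# of a, b, and the host-supplied product c as polynomial coefficients
# and verify a(r) * b(r) == c(r) at a random point r (Schwartz-Zippel).
P = 2013265921

def limbs(n, base=256):
    """Decompose an integer into base-256 limbs, lowest limb first."""
    out = []
    while n:
        out.append(n % base)
        n //= base
    return out or [0]

def poly_mul(a, b):
    """Raw coefficient convolution (no carries), mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def poly_eval(coeffs, r):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * r + c) % P
    return acc

a_limbs = limbs(1234567)
b_limbs = limbs(7654321)
c_limbs = poly_mul(a_limbs, b_limbs)  # host-provided advice
r = 55555                             # verifier-chosen randomness

assert poly_eval(a_limbs, r) * poly_eval(b_limbs, r) % P == poly_eval(c_limbs, r)
```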
DEEP-ALI & FRI
The rest of the protocol proceeds with DEEP-ALI & FRI as described in ethSTARK. We describe this in more detail below, and refer readers to the ZKP Whitepaper for a more formal description of the protocol.
Sequence Diagram
Detailed Step-by-Step Description
In this section, we elaborate on the sequence diagram above. For a more formal articulation of the protocol, refer to the ZKP Whitepaper.
Extended Main Execution Trace
- The Prover runs a computation in order to generate an Execution Trace.
  - The trace is organized into columns, and the columns are categorized as control columns, data columns, and auxiliary/accum columns.
  - The control columns handle system initialization and shutdown, the initial program code to load into memory before execution, and other control signals that don't depend on the program execution.
  - The data columns contain the input and the computation data, both of which are private. These columns are committed in two orderings: in order of program execution, and reordered by register first and clock cycle second. The reordered columns allow for efficient validation of RISC-V memory operations.
  - The auxiliary/accum columns are used for a permutation argument, a lookup argument, and a big integer accelerator circuit.
- After computing the data columns and auxiliary/accum columns, the Prover adds some random noise to the end of those columns in order to ensure that the protocol is zero-knowledge.
- The Prover encodes the trace as follows:
  - The Prover converts each column into a polynomial using an iNTT. We'll refer to these as Trace Polynomials, denoted $P_i(x)$.
  - The Prover evaluates the data polynomials and the control polynomials over an expanded domain. The evaluations of the data polynomials and the control polynomials over this larger domain are called the Extended Main Execution Trace.
  - The Prover commits the Extended Main Execution Trace into two separate Merkle Trees, sending the roots to the Verifier.
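The iNTT-and-extend step can be illustrated with a naive (quadratic-time) DFT in Python. The tiny field p = 97, the subgroup generators, and the 4x blowup factor below are illustrative stand-ins, not the circuit's actual parameters.

```python
# Toy low-degree extension: treat a trace column as evaluations of a
# polynomial over a size-8 subgroup, recover coefficients with an
# inverse DFT (the role of the iNTT), then re-evaluate over a 4x
# larger domain. Field and generators are illustrative only.
P = 97
W8 = 64   # a primitive 8th root of unity mod 97
W32 = 28  # a primitive 32nd root of unity mod 97 (W32**4 % P == W8)

def inverse_dft(evals, w):
    """Recover polynomial coefficients from subgroup evaluations."""
    n = len(evals)
    n_inv = pow(n, P - 2, P)
    w_inv = pow(w, P - 2, P)
    return [n_inv * sum(e * pow(w_inv, i * j, P)
                        for j, e in enumerate(evals)) % P
            for i in range(n)]

def evaluate(coeffs, w, n):
    """Evaluate the polynomial at w**0, w**1, ..., w**(n-1)."""
    return [sum(c * pow(w, i * j, P) for j, c in enumerate(coeffs)) % P
            for i in range(n)]

column = [3, 1, 4, 1, 5, 9, 2, 6]   # one trace column
coeffs = inverse_dft(column, W8)    # the trace polynomial P_i(x)
lde = evaluate(coeffs, W32, 32)     # extended column (4x blowup)

# The extension agrees with the original column on the smaller domain:
assert lde[0::4] == column
```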
Extended Auxiliary Execution Trace
- Using the transcript thus far as an entropy source, we choose some random extension field elements, using a SHA-2 CRNG.
- Then, the Prover uses the randomness to generate the auxiliary/accum columns. The Prover computes the Low-Degree Extension of the auxiliary columns to form the Extended Auxiliary Execution Trace.
- The Prover commits the Extended Auxiliary Execution Trace to a Merkle tree and sends the Merkle root to the Verifier.
- Using the transcript thus far as an entropy source, we choose a random constraint mixing parameter $\alpha$, using a SHA-2 CRNG.
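A minimal sketch of this Fiat-Shamir step, using SHA-256 as a stand-in for the SHA-2 CRNG; the transcript encoding and counter scheme are illustrative, not the protocol's actual format.

```python
# Toy Fiat-Shamir sketch: derive a challenge from a hash of the
# transcript so far, so the "verifier randomness" is recomputable by
# anyone. Reducing a 256-bit digest mod P has negligible bias for
# this field size; the transcript bytes below are made up.
import hashlib

P = 2013265921

def draw_field_element(transcript: bytes, counter: int) -> int:
    digest = hashlib.sha256(transcript + counter.to_bytes(4, "big")).digest()
    return int.from_bytes(digest, "big") % P

transcript = b"main-trace-merkle-root|aux-trace-merkle-root"
alpha = draw_field_element(transcript, 0)

assert 0 <= alpha < P
# Deterministic: prover and verifier derive the same alpha.
assert alpha == draw_field_element(transcript, 0)
```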
DEEP-ALI (part 1)

The Prover uses the constraint mixing parameter, the Trace Polynomials, and the Rule Checking Polynomials to construct a few Low Degree Validity Polynomials. The details are as follows:

- By writing $k$ publicly known Rule Checking Polynomials, $R_0, R_1, \ldots, R_{k-1}$, in terms of the private Trace Polynomials, the Prover generates $k$ Constraint Polynomials, $C_j(x)$. The key point about these polynomials is that for each of the $k$ rules and each input $z$ that's associated with the trace, $C_j(z)$ will return 0 if the trace "passes the test," so to speak.
- Using the constraint mixing parameter $\alpha$, the Prover combines the Constraint Polynomials, $C_j$, into a single Mixed Constraint Polynomial, $C$, by computing $C(x)=\alpha^0C_0(x)+\ldots+\alpha^{k-1}C_{k-1}(x)$. Note that if each $C_j$ returns 0 at some point $z$, then $C$ will also return 0 at $z$.
- Using a publicly known Zeros Polynomial, the Prover computes the High Degree Validity Polynomial, $V(x)=\frac{C(x)}{Z(x)}$. The Zeros Polynomial $Z(x)$ is a divisor of any honest construction of $C(x)$. In other words, an honest prover will construct $V(x)$ to be a polynomial of lower degree than $C(x)$. We call $V$ "high degree" relative to the Trace Polynomials, $P_i$.
- The Prover splits the High Degree Validity Polynomial into 4 Low Degree Validity Polynomials, $v_0(x), v_1(x), \ldots, v_3(x)$.
- The Prover evaluates the Low Degree Validity Polynomials, encodes them in a Merkle Tree, and sends the Merkle root to the Verifier.
- We use Fiat-Shamir to choose an out-of-domain evaluation point, $z$.
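For intuition, the quotient step can be sketched with naive polynomial arithmetic: an honest mixed constraint polynomial vanishes on the trace domain, so it is divisible by the Zeros Polynomial, and the quotient is the validity polynomial. The small field and degree-4 example domain are illustrative.

```python
# Toy sketch of V(x) = C(x) / Z(x): an honest C(x) vanishes on the
# trace domain, so dividing by Z(x) = x^4 - 1 is exact and leaves a
# lower-degree quotient. Field and degrees are illustrative.
P = 97

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def poly_divmod(num, den):
    """Long division mod P; coefficients are low-to-high."""
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    inv_lead = pow(den[-1], P - 2, P)
    for i in reversed(range(len(q))):
        q[i] = num[i + len(den) - 1] * inv_lead % P
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - q[i] * d) % P
    return q, num[:len(den) - 1]  # quotient, remainder

Z = [P - 1, 0, 0, 0, 1]   # Z(x) = x^4 - 1 for a size-4 trace domain
C = poly_mul(Z, [5, 1])   # an honest C(x) = Z(x) * (x + 5)

V, rem = poly_divmod(C, Z)
assert rem == [0, 0, 0, 0]   # Z divides C exactly
assert V == [5, 1]           # validity polynomial has lower degree
```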
DEEP-ALI (part 2)
- The Verifier would like to check the asserted relation between $C$, $Z$, and $V$ at the DEEP Test Point, $z$. Namely, the Verifier would like to confirm that $V(z)Z(z)=C(z)$. The Prover sends the evaluations of each $v_i$ at $z$, which allows the Verifier to compute $V(z)$.
- Computing $C(z)$ is slightly more complicated. Because rule checks can check relationships across multiple columns and multiple clock cycles, evaluating $C(z)$ requires numerous evaluations of the form $P_i(\omega^n z)$ for varying columns $i$ and cycles $n$. The Prover sends these necessary evaluations of each $P_i$ to allow the Verifier to evaluate $C(z)$. We refer to the necessary evaluations $P_i(\omega^n z)$ as the taps of $P_i$ at $z$.
- The Verifier can now check $V(z)Z(z)=C(z)$.
- Although these asserted evaluations have no associated Merkle branches, the DEEP technique offers an alternative to the usual Merkle proof.
- The Prover constructs the DEEP polynomials using the taps: Denoting the taps of $P_i$ at $z$ as $(x_1,P_i(x_1)),\ldots,(x_n,P_i(x_n))$, the Prover constructs the DEEP polynomial $P'_i(x)=\frac{P_i(x)-\overline{P_i}(x)}{(x-x_1)\ldots(x-x_n)}$ where $\overline{P_i}(x)$ is the polynomial formed by interpolating the taps of $P_i$. The Prover computes $P'_i$, runs an iNTT on the result, and sends the coefficients of $P'_i$ to the Verifier. Using this technique, the Prover constructs and sends a DEEP polynomial for each $P_i$ and each $v_i$.
- At this point, the claim of trace validity has been reduced to the claim that each of the DEEP polynomials is actually a low-degree polynomial. To conclude the proof, the Prover mixes the DEEP polynomials into the FRI Polynomial using a DEEP mixing parameter and uses the FRI protocol to show that the FRI Polynomial is a low-degree polynomial.
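A toy sketch of the DEEP quotient construction, with a small field and made-up tap points standing in for $\omega^n z$: subtracting the interpolation of the taps makes the numerator vanish at every tap, so dividing by $(x-x_1)\ldots(x-x_n)$ leaves a genuine lower-degree polynomial.

```python
# Toy DEEP quotient over a small prime field. The field, the trace
# polynomial, and the tap points are all illustrative choices.
P = 97

def poly_eval(coeffs, x):  # Horner, low-to-high coefficients
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def lagrange(points):
    """Interpolate (x_k, y_k) pairs into coefficient form, mod P."""
    n = len(points)
    result = [0] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                basis = poly_mul(basis, [(P - xj) % P, 1])  # times (x - xj)
                denom = denom * ((xi - xj) % P) % P
        basis += [0] * (n - len(basis))
        scale = yi * pow(denom, P - 2, P) % P
        result = [(r + scale * b) % P for r, b in zip(result, basis)]
    return result

def divide_linear(coeffs, root):
    """Exact synthetic division by (x - root); assumes coeffs(root) == 0."""
    q = [0] * (len(coeffs) - 1)
    acc = 0
    for k in reversed(range(len(q))):
        acc = (coeffs[k + 1] + root * acc) % P
        q[k] = acc
    return q

trace_poly = [7, 3, 0, 2, 9]   # some P_i, low-to-high coefficients
taps_x = [11, 29]              # tap locations (stand-ins for omega^n * z)
taps = [(x, poly_eval(trace_poly, x)) for x in taps_x]

interp = lagrange(taps)        # the interpolation of the taps
numer = [(c - (interp[k] if k < len(interp) else 0)) % P
         for k, c in enumerate(trace_poly)]
assert poly_eval(numer, 11) == 0 and poly_eval(numer, 29) == 0

deep = divide_linear(divide_linear(numer, 11), 29)  # the DEEP polynomial
assert len(deep) == len(trace_poly) - 2
```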
The FRI Protocol
We omit the details of DEEP-ALI & FRI for brevity.
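For readers who want a taste of FRI's core move, here is one illustrative folding round: writing $f(x) = f_e(x^2) + x \cdot f_o(x^2)$, a verifier challenge $\beta$ collapses $f$ into $g = f_e + \beta f_o$, a polynomial of half the degree. The field and values here are toy choices, not the protocol's parameters.

```python
# One toy FRI folding round: split f into even and odd coefficient
# halves and mix them with a challenge beta, halving the degree.
P = 97

def ev(coeffs, x):  # Horner evaluation, low-to-high coefficients
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def fold(coeffs, beta):
    evens, odds = coeffs[0::2], coeffs[1::2]
    odds += [0] * (len(evens) - len(odds))
    return [(e + beta * o) % P for e, o in zip(evens, odds)]

f = [3, 1, 4, 1, 5, 9, 2, 6]   # a degree-7 polynomial
beta = 42                      # verifier challenge
g = fold(f, beta)              # degree-3 polynomial
assert len(g) == len(f) // 2

# Consistency check at a point: g(x^2) == f_e(x^2) + beta * f_o(x^2),
# where f_e(x^2) = (f(x) + f(-x))/2 and f_o(x^2) = (f(x) - f(-x))/(2x).
x = 10
lhs = ev(g, x * x % P)
fe = (ev(f, x) + ev(f, (P - x) % P)) * pow(2, P - 2, P) % P
fo = (ev(f, x) - ev(f, (P - x) % P)) * pow(2 * x, P - 2, P) % P
rhs = (fe + beta * fo) % P
assert lhs == rhs
```

Repeating this fold-and-commit round until the polynomial is constant is what lets the Verifier check low-degreeness with logarithmically many queries.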
Thanks for reading! If you have questions or feedback, we'd love to hear from you on Discord or Twitter.