*Thanks to Ulrich Haböck for his feedback!*

- Introduction
- Why a new FFT?
- Where does FFT fit in Circle-STARKs?
- The ingredients
- The group
- The domain
- Twin-cosets
- Standard-position cosets
- The basis
- Riemann-Roch space
- Dimension gap
- The final basis
- FFT
- First step (bivariate → univariate)
- Solving a.
- Solving b.
- Second step (univariate → univariate)
- Solving a.
- Solving b.
- Further and base case
- IFFT
- What about FRI?
- Final words

# Introduction

If you’re reading this article, you’re probably trying to learn more about Circle-STARKs, a new proving system published roughly three months ago. I heard about this breakthrough the day the paper was released. Still, I didn’t read about it until Vitalik wrote *Exploring Circle STARKs*, which I highly recommend (I also recommend LambdaClass’s article).

As a cryptography enthusiast, it’s normal to read a high-level explanation of some breakthrough and be left thinking: *That sounds quite magical! I wonder why <insert a million questions here>.* If the idea is new, you’re usually on your own! The (few) articles that aren’t the paper unavoidably have to leave information out; that’s the whole point of being high-level (with time, what you consider “high-level” changes; many would argue that Vitalik’s post isn’t).

I’m neither a mathematician nor a cryptographer, but I bit the bullet and jumped into the paper to understand more — in particular, I wanted to understand the FFT in more detail, since it’s at the core of the breakthrough.

Usually, I write articles to self-check my understanding and hopefully save time for someone out there trying to understand the same thing. This article is more detailed than Vitalik’s FFT section and provides some hand-holding for working through the paper.

Even though I’m not a cryptographer and do not spend time reading papers every day, I recommend reading the paper. Considering I could read it (with some effort), I believe this proves the authors did a fantastic job.

# Why a new FFT?

In cryptography, we work with finite fields, which provide enough algebraic structure to build proving systems while keeping values at a bounded size that is convenient for implementation. The bigger the field, the more work has to be done per operation, since we need multiple registers to represent values (sum, multiply, reduce, etc.).

A relatively new field is Mersenne31 (M31), a prime field with modulus $2^{31}-1$. The benefit of this field is that you can write fast implementations since:

- You can represent values in 31 bits, which is convenient for CPUs/GPUs (32-bit registers).
- Adding two numbers requires, at most, one extra bit, which is perfect for the same reason.
- Reducing a non-canonical value from an addition or multiplication is very simple, since you can decompose it into 31-bit chunks and, with some shifts and additions, recover a canonical representation.
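To make the last point concrete, here’s a minimal sketch (mine, not from the paper) of the classic folding reduction for M31. It relies on the identity $2^{31} \equiv 1 \pmod{2^{31}-1}$, so the high bits can be folded onto the low bits with shifts and adds:

```python
M31 = (1 << 31) - 1  # 2^31 - 1

def reduce_m31(v):
    """Reduce v < 2^62 (e.g. a product of two 31-bit values) modulo M31.

    Since 2^31 ≡ 1 (mod M31), we repeatedly fold the high bits onto
    the low 31 bits instead of doing a generic division.
    """
    v = (v & M31) + (v >> 31)  # after one fold: v < 2^32
    v = (v & M31) + (v >> 31)  # after two folds: v <= M31 + 1
    return v if v < M31 else v - M31

# a product of two maximal field elements reduces correctly
assert reduce_m31((M31 - 1) * (M31 - 1)) == ((M31 - 1) * (M31 - 1)) % M31
```

Real implementations do the same folding in registers; the Python above is just the arithmetic idea.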

In proving systems, polynomial interpolations and evaluations are commonly required. If you don’t want to do this naively (and thus, slowly), you usually want to use FFT if the evaluation domain is special enough. Usually, the multiplicative group of the field has a big enough subgroup generated by a primitive root of unity, which satisfies this need.

This isn’t the case for this finite field, which is quite unfortunate: we have a field with good mechanical sympathy, but it’s missing the needed algebraic structure. Circle-STARKs were invented to deal with this challenge. The paper is more generic than targeting M31; it targets any *CFFT-friendly prime*, a prime $p$ where $p+1$ is divisible by $2^{N+1}$ for some large $N$. M31 is a particular instance of such a prime.

In STARKs, there’s a particular decoupling between the field you use for arithmetization and the one required for good soundness in the FRI step. For Circle-STARKs, the same strategy is applied — for FRI you use an extension field of the arithmetization field.

# Where does FFT fit in Circle-STARKs?

Proving systems have many layers of concepts to understand, but at the end of the day, some computation is represented as a constraint system using polynomials. After that transformation, you want to prove the knowledge of some *witness* that satisfies those constraints. These constraints are checked efficiently by using some fundamental (but powerful) polynomial theorems.

There are two primary needs while transforming the computation to polynomials:

- Given the execution witness table values, we want to encode this information in a polynomial. This is doing a polynomial interpolation.
- We express the constraints as new polynomials created using the interpolated execution witness values and ask them to satisfy some equality within the interpolated domain.

To prove the last point, some usual theorems/lemmas about polynomials are used, transforming the problem into proving that a polynomial division yields a polynomial and not a rational function, i.e., the quotient is a polynomial with a bounded degree. This is where FRI comes in and proves that claim. (In proving systems that aren’t STARKs, this varies.)

To apply FRI, we must do a “low degree extension,” evaluating the polynomial in a bigger domain. Then, we create the Merkle Tree and apply FRI. You can do this “naively” with some Lagrange basis and polynomial evaluations — but if the interpolation domain satisfies some properties, you can do this much faster with FFT and IFFT.

In regular STARKs, we work with univariate polynomials with a field from which we can have a proper multiplicative subgroup to apply FFT/IFFT. In Circle-STARKs, all these ideas of interpolation and evaluation are the same — but we work with bivariate “polynomials” (this isn’t true! but more on this later) in a new kind of domain built from a novel group construction.

Other parts of the Circle-STARK constructions also require some adjustments, but they’re finding analogous ways of doing the same as in regular STARKs. I won’t even claim I understand them in detail (yet!) since I got into the FFT rabbit hole for now!

STARKs (and most proving systems) are modular: you can plug and play different constructions as long as they satisfy the required algebraic properties. I find this modularity very beautiful! — it’s worth taking a minute to appreciate how elegant math is.

# The ingredients

There are multiple kinds of FFT. In my case, I was only aware of the usual FFT using roots of unity. However, there’s also ECFFT and others — still, you usually have to understand the same ingredients:

- The group you’re working on.
- The domain of evaluation.
- The basis used for interpolation.

For usual roots of unity FFT, you’d use the field multiplicative group, a proper subgroup, and $[1, x, x^2, …]$, respectively. Now, let’s explore these dimensions for Circle-STARKs FFT.

## The group

It all starts with the unit circle equation:

$x^2+y^2=1$

where the $(x, y)$ coordinates are elements of our field (e.g. M31). For the points on the circle, we define the following group operation:

$(x_0, y_0) · (x_1, y_1) = (x_0·x_1 − y_0·y_1, x_0·y_1 + y_0·x_1)$

From this defined group operation, we can note three facts:

- The neutral element is $(1, 0)$, since $(x, y) · (1, 0) = (x · 1 - y · 0, x·0 + y · 1) = (x ,y)$
- The squaring of a point is $π(x, y) = (x, y) · (x, y) = (2·x^2 − 1, 2·x·y)$
- The inverse of an element is $(x, -y)$, since $(x, y) · (x, -y) = (x^2 + y^2, x·y - x · y) = (1, 0)$
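These facts are easy to check in code. Below is a minimal sketch (the `mul` helper and the sample point $(5, 10)$ are my own choices) over the toy Mersenne prime $p = 31$, where circle points are easy to find by hand:

```python
P = 31  # toy Mersenne prime (2^5 - 1); everything works the same over M31

def mul(a, b):
    """The circle group law."""
    return ((a[0] * b[0] - a[1] * b[1]) % P, (a[0] * b[1] + a[1] * b[0]) % P)

pt = (5, 10)                                   # on the circle: 25 + 100 = 125 ≡ 1 (mod 31)
assert (pt[0] ** 2 + pt[1] ** 2) % P == 1

assert mul(pt, (1, 0)) == pt                   # (1, 0) is the neutral element
x, y = pt
assert mul(pt, pt) == ((2 * x * x - 1) % P, (2 * x * y) % P)  # squaring formula
assert mul(pt, (x, (P - y) % P)) == (1, 0)     # the inverse is (x, -y)
```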

We can appreciate the above in the following diagram:

Finally, let’s highlight three facts that we’ll have to remember:

- If we square $P$ or $-P$, we get the same $x$ coordinate (i.e., $2x^2-1$) and symmetrical $y$ results. You can check this yourself by doing the math with the group law.
- This group is cyclic — there’s an element $g$ from which you can generate all elements in the group. (Section 3.1 in the paper)
- The number of elements in the group is $p+1$, which, in the case of Mersenne31, means $2^{31}$ elements. (Lemma 1 in the paper).
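The point count and the squaring symmetry from the first bullet can be brute-force checked over the toy prime $p = 31$ (helper names are mine):

```python
P = 31  # toy Mersenne prime, so the circle group should have P + 1 = 32 elements

circle = [(x, y) for x in range(P) for y in range(P)
          if (x * x + y * y) % P == 1]
assert len(circle) == P + 1  # Lemma 1: the group has p + 1 elements

def square(p):
    """pi(x, y) = (2x^2 - 1, 2xy)."""
    x, y = p
    return ((2 * x * x - 1) % P, (2 * x * y) % P)

# squaring P and -P gives the same x coordinate and opposite y coordinates
for (x, y) in circle:
    sx, sy = square((x, y))
    nx, ny = square((x, (P - y) % P))
    assert sx == nx and (sy + ny) % P == 0
```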

## The domain

In regular STARKs, two domains are used:

- *Interpolation domain:* where we encode the witness values into a polynomial.
- *FRI domain:* where we evaluate the polynomial to construct the Merkle Tree.

To target zero-knowledge, the two domains shouldn’t overlap. This is done by choosing the *FRI domain* as a shifted (but larger!) version of the *interpolation domain*. Here, *shifted* usually means a coset — if $G$ is the *interpolation domain*, we take a coset $Q · G$. Using this coset still satisfies the properties we need for the usual FFTs in STARKs. In the case of Circle-STARKs, there’s an analogous concept of *Twin cosets*, which is another way to have a non-overlapping domain for the Circle-FFT.

The Circle-STARK paper defines two constructions to achieve the same goal: *Twin cosets* and *Standard position cosets*. What’s important to note is that both constructions can be used as domains for Circle-FFT, so after this sub-section, you can imagine working only with *Twin cosets* or *Standard position cosets* (which is probably easier).

### Twin-cosets

A *Twin coset* is defined as $D = Q·G_{n-1} \cup Q^{-1}·G_{n-1}$, but let’s unpack it a bit more:

- Let’s define $G$ as the (cyclic) circle group we explained in the previous section. The sub-index $n$ means that the group has order $2^n$.
- $G_{n-1}$ is defined by squaring $G_{n}$; thus its order is $2^{n-1}$. This is the same as squaring a domain generated by a primitive root of unity.
- $Q$ is an element from $G$, which must not be the neutral element or have order two.
- $Q·G_{n-1}$ is a left-coset of $G_{n-1}$.
- We can prove the two halves don’t overlap thanks to the properties we asked of $Q$ in the third bullet.

In a way, we’re taking $G_{n-1}$ and shifting by $Q$ to get half of $D$ and by $Q^{-1}$ to get the other half. But let’s try to explore a bit deeper some other facts:

- If we check whether both cosets overlap, there would have to exist $g_0, g_1 \in G_{n-1}$ with $Q·g_0 = Q^{-1}·g_1$; multiplying both sides by $Q$ gives $Q^2·g_0 = g_1$, so $Q^{2} \in G_{n-1}$ would be required, which is why it was mentioned that $Q$ can’t be the neutral element or have order two.
- More interestingly, the inverse of an element from one coset is in the other. Let $g \in G_{n-1}$; then $(Q·g) · (Q^{-1}·g^{-1}) = (1, 0)$.

This last point can be appreciated in the following drawing of *Twin coset* examples shown in the paper:

The paper doesn’t give a visual interpretation of that image (maybe considered trivial?), but from my perspective, it seems pretty clear that the blue and red points are the points of each coset. Additionally, you can appreciate that the inverse of a point is another point with the opposite color.

Finally, comes a crucial point: what happens if we square $D$? As you can imagine, this is a basic step in any FFT-like algorithm, and we’d imagine it should halve in size and keep its main properties. Let’s try to do that directly:

- We start with $D = Q·G_{n-1} \cup Q^{-1}·G_{n-1}$, now we square it…
- $π(D) = π(Q·G_{n-1}) \cup π(Q^{-1}·G_{n-1})$
- $π(D) = π(Q)·G_{n-2} \cup π(Q^{-1})·G_{n-2}$

Looks familiar, right? :) Remember that $Q$ and $Q^{-1}$ have the same $x$ coordinate, and $π$ (squaring) keeps the inverse mapping, so we again have a *Twin coset* of half the size!

This is exactly how the usual roots-of-unity FFT domain behaves: a set of elements where the squaring function halves the set while keeping the point-inverse relationship, which we can continue to square until we reach a base case.
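All of the above can be verified mechanically. Here’s a sketch (all names are mine) that builds a twin coset of size 8 over the toy prime $p = 31$, checks the non-overlap and inverse-swap properties, and confirms that squaring yields a twin coset of half the size:

```python
P = 31  # toy CFFT-friendly prime: P + 1 = 32 = 2^5

def mul(a, b):  # circle group law
    return ((a[0] * b[0] - a[1] * b[1]) % P, (a[0] * b[1] + a[1] * b[0]) % P)

def pt_pow(a, e):
    r = (1, 0)
    for _ in range(e):
        r = mul(r, a)
    return r

def inv_pt(a):  # group inverse: (x, -y)
    return (a[0], (P - a[1]) % P)

# brute-force a generator g of the full circle group (order 32)
circle = [(x, y) for x in range(P) for y in range(P) if (x * x + y * y) % P == 1]
def order(a):
    q, k = a, 1
    while q != (1, 0):
        q, k = mul(q, a), k + 1
    return k
g = next(a for a in circle if order(a) == P + 1)

G2 = {pt_pow(g, 8 * i) for i in range(4)}      # subgroup of order 4
Q = g                                          # Q^2 = g^2 is not in G2
A = {mul(Q, h) for h in G2}                    # Q * G2
B = {mul(inv_pt(Q), h) for h in G2}            # Q^{-1} * G2
assert not (A & B)                             # the two halves don't overlap
assert all(inv_pt(p) in B for p in A)          # inverses land in the other half

# squaring the twin coset D = A ∪ B yields a twin coset of half the size
D2 = {mul(p, p) for p in A | B}
Qs = mul(Q, Q)                                 # pi(Q); same x coordinate as pi(Q^{-1})
G1 = {(1, 0), pt_pow(g, 16)}                   # subgroup of order 2
assert D2 == {mul(Qs, h) for h in G1} | {mul(inv_pt(Qs), h) for h in G1}
```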

Why do we have to do this weird union of cosets instead of using a coset directly? The paper mentions the following:

> Although breaking with the group structure, twin-cosets are natural evaluation domains for the FFT. They are used to work around the non-smooth behaviour of the circle FFT under rotation, and are the typical extrapolation target in our application.

I believe this is claiming that if you take *any* coset (for any Circle-FFT friendly prime $p$), then by squaring the coset you won’t preserve the FFT property we need.

### Standard-position cosets

If the CFFT-friendly prime $p$ we’re using satisfies that $p+1$ is divisible by a large $2^n$, we can use another construction for the domain called the *Standard position coset*: $D = Q·G_{n}$ with $Q \in G_{n+1}$ (but not in $G_n$).

This construction is simpler since we don’t have to deal with unions. Using M31, we have $p+1 = 2^{31}$, so we can build a proper domain of size up to $2^{30}$. For other primes, you might have $p+1 = 2^n · k$ with a small $n$, so its size might fall short of our needs.

*Twin cosets* don’t depend on any properties of $p$, so it’s a more general construction. As mentioned earlier, you can think of our domains as twin cosets or standard position cosets — both satisfy what we need for the FFT domain, so which one you use is (mostly) irrelevant.

## The basis

Let’s take a step back and make a face-to-face comparison between usual STARKs and Circle-STARKs in relevant dimensions for an FFT:

|  | Domain | Interpolation coefficients | Encoding target |
| --- | --- | --- | --- |
| Usual STARK | Roots of unity in multiplicative subgroup | Field element | Univariate polynomials with basis $1$, $x$, $x^2$, … |
| Circle-STARK | Twin-cosets / Standard position cosets | Field element | Bivariate “polynomial” (Riemann-Roch space) |

### Riemann-Roch space

In the previous section, we discussed what *domain* we use for Circle-STARKs. But what is this Riemann-Roch space concept? Vitalik’s article briefly mentioned it, so here I’ll dive a bit deeper. Remember, I’m not a mathematician and was on my own unpacking all this, so hopefully there aren’t any mistakes (if there are, please ping me!).

In usual STARKs, we “encode” the columns of the witness table into polynomials by doing an interpolation. This interpolation outputs the coefficients of a univariate polynomial where, on each element of the (root of unity) domain, the evaluation equals the witness value at a specific point in the execution trace. In a way, you can see this as encoding the witness table into something else — in this case, a univariate polynomial.

In the case of Circle-STARKs, we encode these values not in a univariate polynomial but in a bivariate polynomial—not *any* bivariate polynomial, but ones in a (Riemann-Roch) space.

Before continuing down the rabbit hole, it’s worth saying that the idea is to encode the witness values into “somewhere” that has appropriate properties to run FRI. In the case of usual STARKs, we talk about Reed-Solomon codes; this space has the required properties. In the case of Circle-STARKs, the idea is that this Riemann-Roch space is isomorphic to a Reed-Solomon code and thus also has the required properties.

We call this space $L_N(F)$, where:

- $F$ is the (extension) field that we’re working on.
- Its elements are bivariate polynomials modulo $x^2+y^2-1$.
- They have total degree at most $N/2$: each term $x^a · y^b$ satisfies $a+b \leq N/2$.

Since we’re considering bivariate polynomials modulo $x^2+y^2-1$, you can take any bivariate polynomial and substitute $y^2 = 1 - x^2$ (i.e., $y^2 = 1 · (x^2 + y^2 - 1) + 1 - x^2$). Thus, polynomials in this space always have degree at most $1$ in $y$. Moreover, we can define a *canonical form* for a polynomial in this space:

$f(x, y) = f_0(x) + y · f_1(x)$

This should be obvious since we said the highest degree of $y$ is $1$. I find this *canonical form* handy for imagining what these polynomials look like, and also:

- It already looks interesting that you can see $f(x, y)$ as the sum of two terms involving univariate polynomials only depending on $x$. It might not be obvious why that’s useful, but this is no coincidence.
- If you know about FRI, this should look quite familiar.

### Dimension gap

What about this degree $\leq N/2$ requirement? The paper proves a proposition claiming this space has dimension $N+1$. This means that with these degree $\leq N/2$ bivariate polynomials, we can encode things with $N+1$ degrees of freedom.

Remember that we can see elements of the Riemann-Roch space in a canonical form like: $f(x, y) = f_0(x) + y·f_1(x)$, which means that we could describe it on the following basis:

$[1, x, x^2, ..., x^{N/2}, y, y·x, y·x^2, ..., y·x^{N/2-1}]$

Despite this looking like a weird basis compared with the usual roots-of-unity FFT one, $1, x, x^2, …$, it should make sense given the mentioned canonical form.

Note that $y·x^{N/2-1}$ has $N/2-1$ as the maximum degree of $x$, since we have to respect the total degree bound $N/2$. Counting the elements of this basis gives $(1+N/2)+(1+N/2-1) = N+1$.

But remember, we want to use this space as the target for our FFT. The FFT “output” dimension is $N$, but the target space has one extra dimension. To solve this, a subspace $L’_N(F)$ is used as the “real” target space — since it’s a subspace, it satisfies the same properties as the full space while correctly fitting the FFT output dimension.

### The final basis

To understand how $L’_N(F)$ works, I’ll assume we’re using *Standard position cosets* for the domain. If we use *Twin cosets*, the construction changes slightly, but it’s the same idea.

Let's first define $v_n(x)$ as follows:

- $v_1(x) = x$
- $v_n = v_{n-1} \circ \pi$. Recall $\pi$ was our squaring operation $\pi(x ,y) = (2·x^2 − 1, 2 · x · y)$, but here we only refer to the $x$ coordinate since $v_n$ is univariate.
- If we expand the first values:
- $v_1(x) = x$
- $v_2(x) = 2·x^2 − 1$
- $v_3(x) = 8 · x^4 − 8 · x^2 + 1$
- …

These $v_n$ aren’t arbitrary polynomials but vanishing polynomials. Since they’re iterated compositions of $\pi$, each one vanishes on a domain of a different size — i.e., if you square a given domain over and over, at some point you reduce it to a single point, which (for standard position cosets) will be $x = 0$. For this article, we can ignore the implications of this, but it’s relevant for the overall Circle-STARK construction.
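A quick sanity check of these expansions (my own sketch; plain integers, so it holds over any field):

```python
def pi_x(x):
    return 2 * x * x - 1   # pi restricted to the x coordinate

def v(n, x):
    """v_1(x) = x and v_n = v_{n-1} o pi, so v_n is pi composed (n - 1) times."""
    for _ in range(n - 1):
        x = pi_x(x)
    return x

# matches the expansions listed above
for x in range(-5, 6):
    assert v(1, x) == x
    assert v(2, x) == 2 * x**2 - 1
    assert v(3, x) == 8 * x**4 - 8 * x**2 + 1
```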

Let’s jump directly to the definition of our FFT basis. Remember that our basis is just a vector to which our FFT coefficients would map. Each element of our basis $B_N$ (which generates $L’_N$) is defined as:

$b_j = y^{j_0} · v_1(x)^{j_1}· ... · v_{n-1}(x)^{j_{n-1}}$

where $j = j_0 + j_1·2 + … + j_{n-1}·2^{n-1}$.

At first sight, this definition can be confusing and complex, but let’s unpack it:

- $j$ is the index of the element in the basis. i.e: $b_j$ is the $j$-th element of the basis.
- While moving through the indexes, you’re enabling/disabling terms in the definition, e.g. notice that odd $j$ will have a $y$ factor, and even ones won’t.
- Let’s expand to understand it better:
- $1$
- $y$
- $v_1(x)$
- $y·v_1(x)$
- $v_2(x)$
- $y·v_2(x)$
- $v_1(x)·v_2(x)$
- $y·v_1(x)·v_2(x)$
- …
- Note that we can only have $y$ with degree $0$ or $1$.
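The indexing rule is easy to turn into code. Here’s a sketch (the function name `b` and the sample values are mine) that reproduces the first 8 basis elements from the bits of $j$:

```python
def b(j, x, y, n):
    """b_j evaluated at (x, y): y^{j_0} * v_1(x)^{j_1} * ... * v_{n-1}(x)^{j_{n-1}}."""
    acc = y if j & 1 else 1       # bit 0 toggles the y factor
    v = x                         # v_1(x) = x
    for k in range(1, n):
        if (j >> k) & 1:          # bit k toggles the v_k factor
            acc *= v
        v = 2 * v * v - 1         # v_{k+1} = v_k o pi
    return acc

x, y = 3, 5                        # arbitrary sample values (plain integers)
v1, v2 = x, 2 * x * x - 1
assert [b(j, x, y, 3) for j in range(8)] == \
       [1, y, v1, y * v1, v2, y * v2, v1 * v2, y * v1 * v2]
```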

If we want to be strict, we have to prove this is a proper basis — each element of the basis should be linearly independent of the others. The formal proof can be found in the paper — but the way I think about it is noticing that each newly introduced element has degree equal to “the next” power of two, so linear combinations of elements up to index $k$ can’t reach the degree of element $k+1$.

Let’s build more intuition on why this basis is useful:

- If you look at the first 8 expanded terms above, you can notice that half of them have a $y$ factor and the other half don’t.
- You can appreciate how this basis already hints at a sub-space of our space, since everything it generates can also be described in the canonical form: $f(x, y) = f_0(x) + y · f_1(x)$.
- This means $f_0$ and $f_1$ are constructed with the same kind of basis but without the $y$ factor.
- The basis of size $N/2$ can be constructed by “squaring” the basis of size $N$. This is because our $v_n$ are iterated compositions of $\pi$ — this should ring a bell for FFT-like requirements!

Note that the subspace definition implies its dimension is $N$, solving the original problem that motivated this construction.

# FFT

Now that we have a reasonable understanding of the domain and its basis, we can finally describe the FFT algorithm. Before doing so, let’s review what we are trying to achieve.

The FFT algorithm allows the efficient calculation of the coefficients for a polynomial from its evaluations in a defined domain. The inverse FFT (IFFT) does the inverse operation. Given the coefficients on our basis, we can efficiently calculate the evaluation in the domain.

As mentioned, it’s important to always keep in mind that when we say *the polynomial coefficients*, we aren’t talking about the coefficients in the usual monomial basis, as in $a·x^2 + b·x + c$ (which assumes the basis $[1, x, x^2]$). For this algorithm, we’re calculating the coefficients of polynomials described in the basis $B_N$ from before: $[1, y, v_1(x), y·v_1(x), …]$. Note that the length of the basis depends on the length of the evaluation vector we have as input.

The paper explains the FFT algorithm separately from the proof of why it works — I’ll merge both explanations since I find that more useful. The following image describes, at a high level, the mechanics of how it works:

Please return to this image while we explore how it works below. Let’s jump into it!

## First step (bivariate → univariate)

Recall that each polynomial in our Riemann-Roch (sub!)space can be described in its canonical form:

$f(x, y) = f_0(x) + y · f_1(x)$

where $f(x, y)$ can be described in “coefficient form” in the basis:

$B_N = [1, y, v_1(x), y· v_1(x), … ]$

Given our decomposition, we can see that both $f_0$ and $f_1$ can be described under the basis:

$U = [1, v_1(x), v_2(x), v_1(x) · v_2(x), … ]$

since we removed the $y$ term. In other words, $f_1$ holds the terms involving $y$, and $f_0$ the terms without it. Since $y$ in $B_N$ has degree $0$ or $1$, $f_0$ and $f_1$ only depend on the even-indexed (resp. odd-indexed) elements of our original $B_N$. This is useful since, after this first decomposition, we can continue applying the FFT to univariate “polynomials” (i.e., ones depending only on $x$). This new basis $U$, depending only on $x$, is half the size of the original! This is why, in the diagram above (in green), I refer to “+-”, since we’re “collapsing” symmetrical points.

Let’s *assume* that we can apply the FFT algorithm we’re describing to $f_0$ and $f_1$. That is, given their evaluations, it returns their coefficient form. If we can then reconstruct the coefficients of $f$, we’re done.

The question now is, given the evaluations of $f$:

- How do we calculate the evaluations of $f_0$ and $f_1$?
- After we use the result of a. and run the FFT on $f_0$ and $f_1$, we have their coefficients — how do we use those to calculate the coefficients of $f$?

### Solving a.

For a., we note the following relationship:

$f_0(x) = \frac{f(x, y) + f(x, -y)}{2} \newline f_1(x) = \frac{f(x, y) - f(x, -y)}{2 · y}$

The first equation cancels the $y$ terms and divides by $2$, so we’re left only with the $x$ terms. If you know how the usual FFT works, this should look very familiar. The second equation is similar but cancels the terms without $y$, then divides by $2·y$ to remove the doubling and the $y$ factor. With this, we can see how $f_0$ and $f_1$ are the right parts to write $f$ in the canonical form.

These equations are a symbolic relationship, but remember that our input for the algorithm is the evaluations of $f$. To get the evaluations of $f_0$, we evaluate the above $f_0$ equation at each point in the domain, say $(d_x, d_y)$:

$f_0(d_x) = \frac{f(d_x, d_y) + f(d_x, -d_y)}{2}$

Recall that, by the way our domain works, both $(d_x, d_y)$ and $(d_x, -d_y)$ are part of it (symmetric points on $x$), so we have both evaluations in the evaluation form of $f$ and can calculate this value. We do the same for all points $d$ in the domain to obtain the evaluation form of $f_0$, and likewise for $f_1$ with the other equation. With this, we solved point a.
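Here’s a tiny numeric sanity check of these two equations (my own sketch, with arbitrary example polynomials $f_0$, $f_1$ over M31; the identity is purely algebraic, so the sample point doesn’t even need to lie on the circle):

```python
P = 2**31 - 1                        # Mersenne31

def f0(x): return (3 + 2 * x) % P    # hypothetical even part
def f1(x): return (5 + 7 * x) % P    # hypothetical odd part
def f(x, y): return (f0(x) + y * f1(x)) % P   # canonical form

x, y = 12345, 67890                  # arbitrary sample values
inv2 = pow(2, P - 2, P)              # modular inverses via Fermat
inv2y = pow(2 * y % P, P - 2, P)

assert (f(x, y) + f(x, P - y)) * inv2 % P == f0(x)    # recovers f0
assert (f(x, y) - f(x, P - y)) * inv2y % P == f1(x)   # recovers f1
```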

### Solving b.

Now that we have the evaluations, and *magically* using this same FFT on $f_0$ and $f_1$ (we’ll explain this next; let’s assume it works), we have the “coefficient form” of $f_0$ and $f_1$.

Reconstructing the coefficient form of $f$ is relatively easy. Recall that $f_0$ holds the terms of $B_N$ that don’t have the $y$ factor (the even indexes of $B_N$), and $f_1$ maps to the odd indexes of $B_N$ since we had $y · f_1(x)$ — so we interleave the coefficients of $f_0$ with the coefficients of $f_1$ and we’re done.

OK, so far the algorithm works if, given the evaluations of $f_0$ (or $f_1$), we can get their coefficient form in the basis $U$ defined before. Let’s show that next.

## Second step (univariate → univariate)

The problem has now been transformed from bivariate to univariate. The basis has the same shape (with $N$ now half of the previous step), but we got rid of the $y$ terms — those were removed in the first step. The basis now looks like: $[1, v_1(x), v_2(x), v_1(x) · v_2(x), …]$. Recall that $v_1(x)=x$, thus the basis is $[1, x, v_2(x), x · v_2(x), …]$.

We now apply the same idea of function decomposition and express:

$f(x) = f_0(x) + x · f_1(x)$

But not so fast — this isn’t the relationship the Circle-STARK FFT uses, but rather:

$f(x) = f_0(\pi(x)) + x · f_1(\pi(x))$

Let’s refresh why this is correct:

- From *The final basis* section, we know that $v_n = v_{n-1} \circ \pi$.
- Our $B_N$ basis can be split into terms that have and don’t have an $x$ factor.

In summary, $f_0$ and $f_1$ are functions that can be expressed in a “shifted” basis $B_{N/2}$. This is because when we do the function composition $f_0(\pi(x))$, we’re re-basing each term of the basis. This is the same idea as the usual FFT’s $f_0(x^2)$ — the only difference is that instead of “literal” squaring we do “circle squaring” on the $x$ axis, i.e., $\pi$. The basis was constructed intentionally so that composing it with $\pi$ turns the basis of size $N$ into the one of size $N/2$.

To provide some extra visuals, here’s a diagram that can also help understanding:

This diagram compresses the important ideas — use it to assist your understanding. The colors are used intentionally!

As with the first step section before, we ask the same questions — given the evaluations of $f$:

- How do we calculate the evaluations of $f_0$ and $f_1$?
- After we use the result of a. and run the FFT, we have the coefficients of $f_0$ and $f_1$ — how do we calculate the coefficients of $f$?

### Solving a.

The logic here is analogous to the bivariate case — note the following facts:

$f_0(\pi(x)) = \frac{f(x) + f(-x)}{2} \newline f_1(\pi(x)) = \frac{f(x) - f(-x)}{2 · x}$

In short, by doing the same right-hand-side “trick”, we’re left with one or the other half of the polynomial’s terms. Note that $f_0(\pi(x))$ and $f_1(\pi(x))$ appear because $f_0$ and $f_1$ are polynomials expressed in the basis $B_{N/2}$ — the right-hand sides of the equations are these polynomials, but “squared”. Note again that the new basis for $f_0$ and $f_1$ is half the size.
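And the corresponding numeric sanity check for the univariate case (again my own sketch with arbitrary $f_0$, $f_1$; note how $\pi$ makes $f(x)$ and $f(-x)$ share the same $f_0$, $f_1$ inputs):

```python
P = 2**31 - 1                         # Mersenne31
pi = lambda x: (2 * x * x - 1) % P    # "circle squaring" on the x axis

def f0(u): return (4 + 9 * u) % P     # hypothetical halves, in the basis [1, v_1]
def f1(u): return (6 + 11 * u) % P
def f(x):  return (f0(pi(x)) + x * f1(pi(x))) % P

x = 424242                            # arbitrary sample value
inv2 = pow(2, P - 2, P)
inv2x = pow(2 * x % P, P - 2, P)

# pi(-x) = pi(x), so the pair (x, -x) collapses onto a single point
assert (f(x) + f(P - x)) * inv2 % P == f0(pi(x))
assert (f(x) - f(P - x)) * inv2x % P == f1(pi(x))
```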

### Solving b.

If we *magically* apply the FFT algorithm to $f_0$ and $f_1$ on the new basis $B_{N/2}$, then we have the coefficient form of both polynomials. With this, we can reconstruct the coefficient form of $f$ by building the $N$-length vector that interleaves the $f_0$ and $f_1$ coefficient vectors of size $N/2$.

Again, this is analogous to the coefficient forms of $f_0$ and $f_1$ for $f(x) = f_0(x^2) + x· f_1(x^2)$ — you might find it more intuitive to think about it that way.

## Further and base case

We have a univariate case again, so we apply “Step 2” repeatedly, halving the basis on each step. At some point, we’ll reach a base case with a polynomial with a single evaluation. In this case, calculating the coefficient form is trivial since it is a constant function that always returns that evaluation.

We’re done!
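To tie it all together, here’s an end-to-end sketch of the whole recursion over the toy prime $p = 31$ ($p+1 = 32$), on a standard position coset of size 8. Everything here (names like `cfft` and `b_eval`, the brute-force generator search) is my own illustration, not code from the paper:

```python
P = 31  # toy CFFT-friendly prime: P + 1 = 32 = 2^5

def inv(a):
    return pow(a, P - 2, P)  # modular inverse (Fermat)

def mul(a, b):  # circle group law
    return ((a[0] * b[0] - a[1] * b[1]) % P, (a[0] * b[1] + a[1] * b[0]) % P)

def pt_pow(a, e):
    r = (1, 0)
    for _ in range(e):
        r = mul(r, a)
    return r

# brute-force a generator of the full circle group (order 32)
circle = [(x, y) for x in range(P) for y in range(P) if (x * x + y * y) % P == 1]
def order(a):
    q, k = a, 1
    while q != (1, 0):
        q, k = mul(q, a), k + 1
    return k
g = next(a for a in circle if order(a) == P + 1)

# standard position coset of size 8: D = Q * G_3, with Q of order 16
G3_gen, Q = pt_pow(g, 4), pt_pow(g, 2)
D, h = [], (1, 0)
for _ in range(8):
    D.append(mul(Q, h))
    h = mul(h, G3_gen)

def cfft_x(ev):
    """Univariate levels: ev maps x -> f(x) on a set closed under x -> -x."""
    if len(ev) == 1:
        return list(ev.values())  # base case: a constant polynomial
    e0, e1 = {}, {}
    for x in ev:
        if x > P - x:
            continue  # handle each {x, -x} pair once
        a, b = ev[x], ev[P - x]
        px = (2 * x * x - 1) % P                # pi on the x coordinate
        e0[px] = (a + b) * inv(2) % P           # f0(pi(x)) = (f(x)+f(-x))/2
        e1[px] = (a - b) * inv(2 * x % P) % P   # f1(pi(x)) = (f(x)-f(-x))/(2x)
    c0, c1 = cfft_x(e0), cfft_x(e1)
    out = [0] * (len(c0) + len(c1))
    out[0::2], out[1::2] = c0, c1               # even coeffs from f0, odd from f1
    return out

def cfft(ev):
    """First (bivariate) step: ev maps points (x, y) of D -> f(x, y)."""
    e0, e1 = {}, {}
    for (x, y) in ev:
        if y > P - y:
            continue
        a, b = ev[(x, y)], ev[(x, P - y)]
        e0[x] = (a + b) * inv(2) % P            # f0(x) = (f(x,y)+f(x,-y))/2
        e1[x] = (a - b) * inv(2 * y % P) % P    # f1(x) = (f(x,y)-f(x,-y))/(2y)
    c0, c1 = cfft_x(e0), cfft_x(e1)
    out = [0] * (len(c0) + len(c1))
    out[0::2], out[1::2] = c0, c1
    return out

def b_eval(j, x, y, n):
    """Basis element b_j at (x, y): y^{j_0} * v_1(x)^{j_1} * ..."""
    acc, v = (y if j & 1 else 1), x
    for k in range(1, n):
        if (j >> k) & 1:
            acc = acc * v % P
        v = (2 * v * v - 1) % P                 # v_{k+1} = v_k o pi
    return acc % P

# round trip: coefficients -> naive evaluations -> cfft -> same coefficients
coeffs = [3, 1, 4, 1, 5, 9, 2, 6]
evals = {pt: sum(c * b_eval(j, *pt, 3) for j, c in enumerate(coeffs)) % P for pt in D}
assert cfft(evals) == coeffs
```

The final assertion is the whole point: evaluating a polynomial naively over the basis $B_N$ and running `cfft` on those evaluations recovers the original coefficients.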

# IFFT

The IFFT algorithm does the inverse operation — given the coefficient form of a polynomial in our $B_N$ basis, it returns the polynomial evaluations in the domain. The paper doesn’t go through this in detail but provides some hints and leaves the reader to figure out the details. I think that’s fair since it is not hard to figure out the inverse if you understand how FFT works.

Remember that in the bivariate and univariate steps in the FFT, we always ended up asking two essential questions, but now we make the “inverse” twist:

- Given $f$ coefficient form, how do we calculate $f_0$ and $f_1$ coefficients?
- Given the evaluations of $f_0$ and $f_1$ in their (half-sized) domain, how do we reconstruct the evaluations of $f$?

For a., it’s quite simple: $f_0$ and $f_1$ correspond to the even and odd indexes of $f$’s coefficients. So, given the coefficients of $f$, it’s easy (i.e., indexing) to get the $f_0$ and $f_1$ coefficients in their halved basis.

Now that we have each part's coefficients, we can call IFFT for these polynomials on a basis with half the size, which will return the evaluation form of each. To solve b. we refer to the already explained relationship:

$f(x) = f_0(\pi(x)) + x · f_1(\pi(x))$

For each $d$ in the domain, we have already calculated $f_0(\pi(d))$ and $f_1(\pi(d))$. Note that for $f(d)$ and $f(-d)$ we reuse the same $f_0 \circ \pi$ and $f_1 \circ \pi$ results, and only the $x$ value in the equation changes. This is because squaring $d$ and $-d$ yields the same result, and that squared (half-sized) domain is exactly what the recursive IFFT calls for $f_0$ and $f_1$ used.
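As a tiny numeric illustration of this butterfly-style reconstruction (the values and polynomials are arbitrary choices of mine, over the toy prime $p = 31$):

```python
P = 31
x = 2
u = (2 * x * x - 1) % P              # pi(x) = pi(-x): shared by the pair
f0u = (5 + 7 * u) % P                # f0(pi(x)), computed once
f1u = (2 + 4 * u) % P                # f1(pi(x)), computed once

f_pos = (f0u + x * f1u) % P          # f(x)  = f0(pi(x)) + x * f1(pi(x))
f_neg = (f0u - x * f1u) % P          # f(-x) = f0(pi(x)) - x * f1(pi(x))

# consistency: the forward FFT split recovers the same halves
assert (f_pos + f_neg) * pow(2, P - 2, P) % P == f0u
assert (f_pos - f_neg) * pow(2 * x % P, P - 2, P) % P == f1u
```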

Note that the above explains the univariate case. For the final bivariate step (since this is a recursive algorithm), the idea is the same, but the $y$ term is introduced when reconstructing the evaluations.

# What about FRI?

Note that FRI is very similar to FFT, but in the function decomposition, we do a random linear combination of both parts, so if you understood the FFT/IFFT algorithm, then you’ve already done the heavy lifting!

# Final words

The ideas of the Circle-STARK FFT are analogous to the usual roots-of-unity FFT:

- The concept of “squaring” isn’t $x^2$ but $\pi$ in a coset in a circle group.
- The “polynomials” we’re interpolating or evaluating live in a “space” with some properties required for soundness.
- The basis isn’t the usual $[1, x, x^2, ..]$ but another construction using the $v_n$, which composed with $\pi$ gives “the same effect” as squaring the argument of $[1, x, x^2, …]$ (i.e., $f(x^2)$).

The paper does a fantastic job explaining all this. It has a more formal style, as it should, since it must contain the most detailed and structured explanation since it’s the “official” explanation.

I hope you’ve found this alternative (hand-holding) style explanation helpful!