§LaBRADOR: Lattice-Based Recursively Amortized Demonstration Of R1CS
This is the code implementation of LaBRADOR, a lattice-based proof system for R1CS that achieves proof sizes of logarithmic order, a quadratic improvement over the best hash-based PCP systems.
These notes serve as a friendly introduction to the protocol and a prototype for the documentation.
The main idea is to define a protocol between a prover $P$ and a verifier $V$, where, given an initial arithmetic circuit, they can perform an interaction $\langle P, V \rangle$. This interaction allows the prover to demonstrate knowledge of solutions to the circuit, and the verifier will output a binary result indicating whether the proof was accepted.
Protocols of this type already exist, but LaBRADOR has the added advantage of achieving sublinear proof sizes while relying on (plausible) post-quantum secure cryptographic lattice problems.
§Protocol Overview
The protocol, from the circuit to the final proof, can be described through the following sequence of steps:
- Translate the rank-1 constraint system (R1CS) into a dot product constraint system and solve it.
- Apply an Ajtai commitment to the solutions of the dot product constraints.
- Prove the norm requirements on the solution using projections (Modular Johnson-Lindenstrauss Lemma).
- Make the proof more compact by aggregating multiple constraints (Aggregation).
- Only prove linear combinations of the commitments (Amortization). Instead of proving all commitments directly, the prover proves linear combinations of them; this recursive amortization reduces the witness size in each iteration.
§Notation
We will use an upper bar $\bar{s}$ for vectors, lowercase $s$ for scalars, uppercase $S$ for matrices, and boldface letters for elements $\mathbf{s} \in \mathbb{Z}_q[x] / (x^d + 1)$, unless explicitly noted.
§Step 1 - Arithmetic Circuit Translation
In the world of computational complexity, one of the foundational results is the Cook-Levin theorem, which established the concept of NP-completeness. This theorem demonstrated that a broad range of problems in the NP class can be reduced to one central problem: Boolean satisfiability (SAT). It therefore allows us to express instances of common NP problems as a set of Boolean expressions that must be satisfied. These problems, typically modeled using Boolean logic, can also be rewritten as arithmetic circuits. Such circuits, built from basic operations like addition and multiplication, can represent NP-complete problems in a new way.
The main idea behind proof systems involving circuits is to take a statement one wants to prove (normally that some computation has been carried out) and rewrite that computation as an arithmetic circuit, or an equivalent formulation, which allows automated proof protocols to convince a verifier of the statement.
§Definition of an Arithmetic Circuit
The main idea behind arithmetic circuits is that, given an instance of a hard problem, we can rewrite it as a Directed Acyclic Graph (DAG), which can be imagined as a tree for simplicity. Each node in the graph represents either a sum or a multiplication, and the leaves of the tree represent the variables and constants in question. Every NP problem can be rewritten in this form, much like how every piece of code can be converted into a circuit of binary operations.
For example, consider the expression $x + y$, where both $x$ and $y$ are variables. This can be represented as a tree with three nodes: a central node representing addition and two leaf nodes, each connected to the central node, representing the variables $x$ and $y$.
This kind of arithmetic circuit $C$ allows us to pose problems by reducing them to arithmetic constraints. A solution to such a problem would be an $\bar{s}$ such that $C(\bar{s}) = 0$. It’s important to note that the problem we define here involves multivariate polynomials, since all the operations ultimately form polynomials based on the variables at the leaves of the DAG.
§R1CS and Dot Product Constraints
Arithmetic circuits can be represented in a standard way called a Rank-1 Constraint System (R1CS), where, given three matrices $A$, $B$, and $C$ (all modulo some integer scalar $N$), a vector $\bar{s}$ is a solution if:
$$ (A \bar{s}) \circ (B \bar{s}) = C \bar{s} \text{ mod } N $$
where $\circ$ denotes the Hadamard product (element-wise multiplication), and the rest follows standard matrix-vector multiplication.
A simple way to observe that an arithmetic circuit can be represented in this form is to understand that an R1CS system consists solely of multiplications and additions that generate constraints. By writing:
$$ (A\bar{s} \circ B\bar{s}) - C\bar{s} = 0 \text{ mod } N $$
and rewriting the left-hand side as a Directed Acyclic Graph (DAG), as explained earlier, the solutions $\bar{s}$ correspond to the values assigned to the leaves, ensuring that the circuit evaluates to zero.
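As a concrete illustration, here is a minimal sketch of an R1CS satisfaction check over the integers modulo $N$; the function and parameter names are illustrative and not part of this crate's API:

```rust
/// Checks whether `s` satisfies (A·s) ∘ (B·s) = C·s (mod n).
/// Entries are assumed small enough that `u64` products do not overflow.
fn is_r1cs_solution(a: &[Vec<u64>], b: &[Vec<u64>], c: &[Vec<u64>], s: &[u64], n: u64) -> bool {
    // Matrix-vector product modulo n.
    let mat_vec = |m: &[Vec<u64>]| -> Vec<u64> {
        m.iter()
            .map(|row| row.iter().zip(s).map(|(x, y)| x * y % n).sum::<u64>() % n)
            .collect()
    };
    let (az, bz, cz) = (mat_vec(a), mat_vec(b), mat_vec(c));
    // The Hadamard product of A·s and B·s must match C·s row by row.
    az.iter().zip(&bz).zip(&cz).all(|((x, y), z)| x * y % n == *z)
}
```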
§Dot Product Constraints in LaBRADOR
LaBRADOR was specifically designed to work with dot product constraints $f$. These are constraints that contain dot products of the form $\langle \bar{a}, \bar{b} \rangle$ and are required to hold for a given solution $\mathbf{\bar{s}}$:
$$ f(\mathbf{\bar{s}})=0 \text{ or } ct(f(\mathbf{\bar{s}})) = 0 $$
where $ct()$ extracts the constant coefficient of the polynomial. The function $f(\mathbf{\bar{s}})$ has the general form:
$$ f(\mathbf{\bar{s}}) = \sum_{1 \leq i,j \leq r} \mathbf{a_{ij}} \langle \mathbf{\bar{s_{i}}}, \mathbf{\bar{s_{j}}} \rangle + \sum_{i=1}^{r} \langle \mathbf{\bar{\varphi_{i}}}, \mathbf{\bar{s_{i}}} \rangle + \mathbf{b} $$
where both the variables $\mathbf{\bar{s}}$ and the constants $\mathbf{a_{ij}}, \mathbf{\bar{\varphi_i}}, \mathbf{b}$ are polynomials (or vectors of polynomials) over a polynomial ring modulo an ideal:
$$ \mathbb{Z}_{q}[x] / (x^d + 1) $$
where $\mathbb{Z}_{q}[x]$ denotes the ring of polynomials with coefficients in $\mathbb{Z}_{q}$ (the integers modulo $q$), and the quotient is taken by the ideal generated by the polynomial $x^d + 1$. This structure is particularly useful in lattice-based cryptography and zero-knowledge proof systems.
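Arithmetic in this quotient ring is negacyclic: reducing by $x^d + 1$ wraps high powers of $x$ around with a sign flip, since $x^d \equiv -1$. A minimal sketch of ring multiplication under this convention (illustrative, not this crate's API):

```rust
/// Multiplies two polynomials in Z_q[x]/(x^d + 1), coefficients in index order.
fn ring_mul(a: &[i64], b: &[i64], q: i64) -> Vec<i64> {
    let d = a.len();
    assert_eq!(d, b.len());
    let mut acc = vec![0i128; d];
    for i in 0..d {
        for j in 0..d {
            // x^{i+j} reduces to -x^{(i+j) mod d} whenever i + j >= d.
            let sign: i128 = if i + j >= d { -1 } else { 1 };
            acc[(i + j) % d] += sign * (a[i] as i128) * (b[j] as i128);
        }
    }
    acc.iter().map(|&v| v.rem_euclid(q as i128) as i64).collect()
}
```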
§Binary R1CS to Dot Product Constraints
Some problems are easier to translate into R1CS when the modulus $N$ is $2$. To translate from Binary R1CS to a Dot Product Constraint System, the first step is to solve the Binary R1CS by finding a witness $\bar{w}$ such that $A \bar{w} \circ B \bar{w} = C \bar{w} \pmod{2}$. Then, through an interactive protocol, the prover demonstrates to the verifier that they possess the witness and that the equations hold. During the interaction, various parts involve defining dot product constraints, which are sent to the verifier. These constraints are ultimately what define the Dot Product Constraint System.
Having done that, we are now in possession of a system of dot product constraints and a witness that solves them.
The key steps in the translation are as follows:
- The prover sends a commitment $t = A(\mathbf{a} \parallel \mathbf{b} \parallel \mathbf{c} \parallel \mathbf{w})$, where $\mathbf{a}, \mathbf{b}, \mathbf{c}$ are polynomials with coefficients $A \bar{w}, B \bar{w}, C \bar{w}$, all modulo $2$, and $\mathbf{w}$ is the polynomial with coefficient vector $\bar{w}$.
- Next, the prover must demonstrate that the coefficients are correct: $\bar{a} = A \bar{w} \pmod{2}$, $\bar{b} = B \bar{w} \pmod{2}$, and $\bar{c} = C \bar{w} \pmod{2}$, where $\bar{a}, \bar{b}, \bar{c}$ are the coefficients of $\mathbf{a}, \mathbf{b}, \mathbf{c}$, respectively.
- Then, the prover must prove that $\bar{a}$, $\bar{b}$, and $\bar{c}$ are binary values.
- Finally, the prover must show that $\bar{a} \circ \bar{b} = \bar{c}$.
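A minimal sketch, in the clear, of what the last two items ultimately convince the verifier of; names are illustrative and not this crate's API:

```rust
/// Returns true when `a`, `b`, `c` are binary vectors satisfying a ∘ b = c.
fn binary_r1cs_checks(a: &[u8], b: &[u8], c: &[u8]) -> bool {
    let is_binary = |v: &[u8]| v.iter().all(|&x| x == 0 || x == 1);
    is_binary(a) && is_binary(b) && is_binary(c)
        // For binary entries, the Hadamard relation holds exactly.
        && a.iter().zip(b).zip(c).all(|((x, y), z)| x * y == *z)
}
```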
§R1CS Modulo $2^{d}+1$ to Dot Product Constraints
The main challenge in translating between the R1CS modulo $2^{d}+1$ and the dot product constraints lies in the cost of sending all the information. The non-adjacent form of a number is a unique digit representation using only the elements $\{-1, 1, 0\}$. We will leverage this form to rewrite all our components, as it minimizes the Hamming weight and allows for a much more compact representation of the system.
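For reference, a minimal sketch of the standard NAF encoding algorithm, least significant digit first (illustrative, not this crate's API):

```rust
/// Encodes `n` into non-adjacent form: digits in {-1, 0, 1} with no two
/// adjacent nonzero digits.
fn to_naf(mut n: i64) -> Vec<i8> {
    let mut digits = Vec::new();
    while n != 0 {
        if n % 2 != 0 {
            // Pick the digit that leaves the remainder divisible by 4,
            // which guarantees the next digit is 0.
            let z = 2 - n.rem_euclid(4) as i8;
            digits.push(z);
            n -= z as i64;
        } else {
            digits.push(0);
        }
        n /= 2;
    }
    digits
}
// Example: to_naf(7) == vec![-1, 0, 0, 1], i.e. 7 = -1 + 8.
```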
The protocol will operate similarly to the Binary R1CS, but with the addition of committing to the encoded version. In this case, the prover will commit to:
$t = A(\text{Enc}(A \bar{w}) \parallel \text{Enc}(B \bar{w}) \parallel \text{Enc}(C \bar{w}) \parallel \text{Enc}(\bar{w}))$
For proving the quadratic constraints $\bar{a} \circ \bar{b} = \bar{c}$, instead of sending the information directly, a probabilistic approach will be used. The verifier will send a collection of $l$ challenge vectors $\bar{\varphi_i}$, which, through linear combinations, allow probabilistic verification, reducing the need to send the full data. Now, the prover needs to demonstrate that:
$$\langle \bar{\varphi_i}, \bar{a} \circ \bar{b} - \bar{c} \rangle = 0 \quad \text{for all} \quad i \in [l]$$
Note that when a system contains both Binary R1CS and R1CS modulo $2^{d}+1$ parts, the prover can perform both proofs simultaneously by combining the commitments and committing to:
$$t = A(A_{\text{bin}}\bar{w} \parallel B_{\text{bin}}\bar{w} \parallel C_{\text{bin}}\bar{w} \parallel \text{Enc}(A \bar{w}) \parallel \text{Enc}(B \bar{w}) \parallel \text{Enc}(C \bar{w}) \parallel \bar{w})$$
as long as $\bar{w}$ is a binary R1CS witness that simultaneously encodes a witness for the R1CS modulo system.
§Step 2 - Ajtai Commitment
Many well-known computational problems, such as factoring, are considered worst-case problems, whereas cryptographic applications typically require problems that are hard on average. This is because, for average-case hard problems, a randomly generated instance is hard with positive probability, whereas there is no known method for generating a hard instance of a worst-case problem. By leveraging lattice-based problems, it is possible to construct average-case problems that are as difficult as certain well-known worst-case problems.
In 1996, Miklós Ajtai showed how to generate such average-case hard instances. An Ajtai commitment is a commitment scheme built on these techniques, guaranteeing the average-case hardness of this fundamental component of many cryptographic protocols.
§What is a Commitment Scheme?
A commitment scheme allows a committer to publish a value, called the commitment, which binds them to a message without revealing it. Later, they may open the commitment and reveal the committed message to a verifier, who can check that the message is consistent with the commitment. In our case, we are more interested in the compactness of the commitment in relation to the message, allowing the committer to send smaller messages, rather than the non-revealing aspect. The idea behind a commitment is a cryptographic equivalent to keeping some knowledge sealed in an “envelope”, to be revealed later.
The fundamental problem upon which the Ajtai commitment is based is called the Module-SIS problem. The Short Integer Solution (SIS) problem is based on the idea that, given a sufficiently large random matrix $A$, finding a solution $\bar{x}$ of the linear system $A\bar{x}=\bar{b}$ that also has a small norm $\lVert \bar{x} \rVert < \beta$ is as hard on average as well-known worst-case lattice problems (such as finding a short basis). The module part refers to the generalization of SIS to a module structure.
In order to perform an Ajtai Commitment Scheme, the committer would first need to be in possession of a small-norm vector $\bar{x}$ that represents the message they want to commit to. Given a public matrix $A$, one can calculate $A \bar{x} = \bar{b}$ and send $\bar{b}$, which is a more compact vector, making it lighter to send over restricted-size networks. Now, the verifier is in possession of both $A$ and $\bar{b}$ (the “envelope”). Once the solution vector $\bar{x}$ is revealed at the end by the prover, the verifier can easily check that the prover had the solution all along.
The reason why $\bar{b}$ is binding is the SIS problem: finding a different small-norm vector that maps to the same $\bar{b}$ would amount to solving SIS, so the prover must have had the solution all along.
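A minimal sketch of the commit/open flow over plain integers modulo $q$ (the real scheme works over the polynomial ring; all names here are illustrative):

```rust
/// Commitment: b = A·x mod q, where x is the short message vector.
fn ajtai_commit(a: &[Vec<u64>], x: &[u64], q: u64) -> Vec<u64> {
    a.iter()
        .map(|row| row.iter().zip(x).map(|(r, v)| r * v % q).sum::<u64>() % q)
        .collect()
}

/// Opening check: the revealed x must be short and reproduce the commitment.
fn verify_opening(a: &[Vec<u64>], x: &[i64], b: &[u64], q: u64, beta: f64) -> bool {
    // L2 norm of the opening must stay below beta...
    let short = x.iter().map(|&v| (v * v) as f64).sum::<f64>().sqrt() < beta;
    // ...and, reduced mod q, it must hash back to the same commitment.
    let x_mod: Vec<u64> = x.iter().map(|&v| v.rem_euclid(q as i64) as u64).collect();
    short && ajtai_commit(a, &x_mod, q) == b
}
```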
§Commitment
The prover in the protocol has solved the dot product constraints and is in possession of small-norm solution vectors (which we will call “witness vectors”) as well as public matrices $A,B,C,D$.
In the first step of the protocol, the prover commits to the small-norm solution vectors $\mathbf{\bar{s_{1}}},\dots,\mathbf{\bar{s_{r}}}$ by computing Ajtai commitments. Specifically, the prover computes the vectors $\mathbf{\bar{t_{i}}} = A * \mathbf{\bar{s_{i}}}$, where each $\mathbf{\bar{s_{i}}}$ is a vector of $n$ polynomials. After multiplication, this results in vectors $\mathbf{\bar{t_{i}}}$ of $k$ polynomials. However, sending the vectors $\mathbf{\bar{t_{i}}}$ directly could be costly due to their potentially large size. Therefore, the prover commits to the $\mathbf{\bar{t_{i}}}$ using another Ajtai commitment, referred to as the outer commitment.
§Decomposition for Efficiency
The components of the vector $\mathbf{\bar{t_{i}}}$ have coefficients that are arbitrary modulo $q$. This may lead to inefficiencies when transferring large numbers. To optimize the transfer and reduce the size of the values being committed, the coefficients need to be decomposed into smaller parts.
The decomposition is done element-wise. Specifically, each polynomial in $\mathbf{\bar{t_{i}}}$ is decomposed into $t_{1} \geq 2$ parts with respect to a small base $b_{1}$, such that:
$$\mathbf{\bar{t_{i}}} = \mathbf{\bar{t_{i}}}^{(0)} + \mathbf{\bar{t_{i}}}^{(1)} * b_{1} + \mathbf{\bar{t_{i}}}^{(2)} * b_{1}^{2} + \dots + \mathbf{\bar{t_{i}}}^{(t_{1}-1)} * b_{1}^{t_{1}-1}$$
In this decomposition, centered representatives modulo $b_{1}$ are used to minimize the magnitude of coefficients, ensuring compactness. This means that $\lVert \mathbf{\bar{t_{i}}}^{(k)} \rVert_{\infty}$ is kept as small as possible. Once the decomposition is complete for each $\mathbf{\bar{t_{i}}}$, we concatenate all the decomposition coefficients $\mathbf{\bar{t_{i}}}^{(k)}$ for each $i$ and $k$ to form a new vector $\mathbf{\bar{t}}$. The second Ajtai commitment can then be written as: $\mathbf{\bar{u_{1}}}=B*\mathbf{\bar{t}}$
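A minimal sketch of this centered base-$b$ decomposition on a single integer coefficient (illustrative; the protocol applies it coefficient-wise to the polynomials of $\mathbf{\bar{t_{i}}}$):

```rust
/// Splits `v` into `parts` digits with respect to base `b`, each digit a
/// centered representative in (-b/2, b/2].
fn centered_decompose(mut v: i64, b: i64, parts: usize) -> Vec<i64> {
    let mut digits = vec![0i64; parts];
    for d in digits.iter_mut() {
        let mut r = v.rem_euclid(b); // representative in [0, b)
        if r > b / 2 {
            r -= b; // shift into the centered range
        }
        *d = r;
        v = (v - r) / b;
    }
    digits
}
// Recomposition: v == digits[0] + digits[1]*b + digits[2]*b^2 + ...
```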
§Commitment to the Garbage Polynomial
The protocol also includes a polynomial referred to as the garbage polynomial, which does not depend on any challenge during the interaction and is used in the amortization part. To simplify the proof of security, this polynomial is also committed to at the beginning of the protocol. To commit to the garbage polynomial, the prover can simply include it in the commitment to $\mathbf{\bar{t}}$.
The garbage polynomial commitment is handled similarly. Given that $\mathbf{g_{ij}} = \langle \mathbf{\bar{s_{i}}}, \mathbf{\bar{s_{j}}} \rangle$ is the inner product of the solution vectors, the prover constructs a symmetric matrix of polynomials. Each element $\mathbf{g_{ij}}$ of this matrix is decomposed with respect to some base $b_{2}$ into $t_{2} \geq 2$ parts, and then each decomposition coefficient is concatenated into $\mathbf{\bar{g}}$:
$$\mathbf{g_{ij}} = \mathbf{g_{ij}}^{(0)} + \mathbf{g_{ij}}^{(1)} * b_{2} + \mathbf{g_{ij}}^{(2)} * b_{2}^{2} + \dots + \mathbf{g_{ij}}^{(t_{2}-1)} * b_{2}^{t_2-1}$$
Finally, the full commitment to both the vector $\mathbf{\bar{t}}$ and the garbage polynomial $\mathbf{\bar{g}}$ is expressed as: $$\mathbf{\bar{u_{1}}} = B * \mathbf{\bar{t}} + C * \mathbf{\bar{g}}$$
§Hierarchical Commitment Structure
A key optimization in the LaBRADOR protocol is the use of hierarchical commitments: a two-layer structure with inner and outer commitments.
§Inner and Outer Commitments
The hierarchical commitment structure consists of:
- Inner Commitments: First-level commitments to witness vectors
- Outer Commitments: Second-level commitments that summarize multiple inner commitments
This two-layer structure is crucial for reducing proof size by allowing the prover to move material from the current proof iteration to the next iteration, where it will be further compressed.
§Decomposition Process
When creating a hierarchical commitment:
- The prover first commits to each witness vector using individual Ajtai commitments, creating the inner commitments
- These inner commitments are then decomposed into a base-B representation to reduce coefficient size
- The decomposed parts are concatenated into a single vector
- This vector is then committed to using another Ajtai commitment, creating the outer commitment
The decomposition uses centered representatives modulo the base to minimize the magnitude of coefficients, ensuring compactness.
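Putting the pieces together, a minimal sketch of the two-layer flow, reusing the `ajtai_commit` and `centered_decompose` sketches from earlier sections (matrix and parameter names are illustrative):

```rust
/// Inner commitments per witness, base-b decomposition, concatenation,
/// then one outer commitment over the concatenated digits.
fn hierarchical_commit(
    a_inner: &[Vec<u64>],
    a_outer: &[Vec<u64>],
    witnesses: &[Vec<u64>],
    q: u64,
    base: i64,
    parts: usize,
) -> Vec<u64> {
    // Inner layer: commit to every witness vector individually.
    let inner: Vec<Vec<u64>> = witnesses
        .iter()
        .map(|s| ajtai_commit(a_inner, s, q))
        .collect();
    // Decompose every coefficient of every inner commitment and concatenate.
    let mut t = Vec::new();
    for ti in &inner {
        for &coeff in ti {
            for digit in centered_decompose(coeff as i64, base, parts) {
                t.push(digit.rem_euclid(q as i64) as u64);
            }
        }
    }
    // Outer layer: one commitment summarizing all inner commitments.
    ajtai_commit(a_outer, &t, q)
}
```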
§Verification Process
To verify a hierarchical commitment:
- The verifier checks each inner commitment against its opening information
- The verifier then decomposes the inner commitments using the same process
- The decomposed parts are concatenated and committed to using the outer commitment scheme
- The resulting commitment is compared against the provided outer commitment
- The outer commitment’s opening information is also verified
§Benefits of Hierarchical Commitments
The hierarchical structure provides several advantages:
- Reduced Communication Complexity: By summarizing multiple inner commitments into a single outer commitment, the total size of the proof is reduced
- Efficient Verification: The verification process can be performed efficiently even with the two-layer structure
- Flexibility: The decomposition parameters can be adjusted to optimize for specific use cases
§Integration with LaBRADOR Protocol
In the context of the LaBRADOR protocol, hierarchical commitments allow moving material from the proof $\pi^{(i+1)}$ to the witness $\bar{s}^{(i+1)}$, which is then shrunk in subsequent iterations, resulting in a more compact overall proof size.
§Step 3 - Projections (Johnson-Lindenstrauss)
The main idea behind the Modular Johnson-Lindenstrauss Lemma is that, to prove that a long vector has a small norm, the prover does not have to send the entire vector: the verifier can send random linear projections $\Pi_{i}$ as challenges, and the prover just needs to return the projected vector $\bar{p} = \sum_{i=1}^{r}\Pi_{i} \bar{s_{i}}$, which is much more compact. The lemma guarantees that the projections almost preserve the L2 norm, so by performing a sufficient number of challenges, one can gain information about the actual norm of the original vector.
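A minimal sketch of the prover's projection and the verifier's norm test; in the paper the challenge matrices have entries in $\{-1, 0, 1\}$ and the accepted bound is $\sqrt{128}\,\beta$, and the flat-integer setting here is an illustrative simplification:

```rust
/// Prover side: p = Σ_i Π_i · s_i for the 256-row challenge matrices Π_i.
fn project(pi: &[Vec<Vec<i64>>], s: &[Vec<i64>]) -> Vec<i64> {
    let rows = pi[0].len();
    let mut p = vec![0i64; rows];
    for (pi_i, s_i) in pi.iter().zip(s) {
        for (j, row) in pi_i.iter().enumerate() {
            p[j] += row.iter().zip(s_i).map(|(a, b)| a * b).sum::<i64>();
        }
    }
    p
}

/// Verifier side: accept when ||p|| <= sqrt(128) · beta.
fn norm_check(p: &[i64], beta: f64) -> bool {
    p.iter().map(|&v| (v * v) as f64).sum::<f64>().sqrt() <= 128f64.sqrt() * beta
}
```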
After the projection matrices are sent to the prover and the projected vector is returned so the verifier can check that it has a small norm, a proof is also required to verify that the projections were actually performed correctly. To do this, the prover transforms the projections into dot product constraints. These constraints will later be sent as part of the aggregation step. This works by rewriting the matrix-vector products $\Pi_{i}\bar{s_{i}}$ as sums of dot products $\sum_{i=1}^{r} \langle \bar{\pi_{i}}^{(j)}, \bar{s_{i}} \rangle$, where each term is the inner product between the coefficients of the polynomials and the $j$-th row of the projection matrix. Finally, we have the projection vector $\bar{p} = (p_{1}, \dots, p_{256})$, where each coordinate can be written as $p_{j} = \sum_{i=1}^{r} \langle \bar{\pi_{i}}^{(j)}, \bar{s_{i}} \rangle$, and this allows us to define the dot product constraints:
$$\sum_{i=1}^{r} \langle \sigma_{-1}(\mathbf{\pi_{i}^{(j)}}), \mathbf{\bar{s_{i}}} \rangle - p_j \quad \forall j \in \{1, \dots, 256\}$$
Let’s explain why this makes sense. The expression $p_j = \sum_{i=1}^{r} \langle \bar{\pi_{i}}^{(j)}, \bar{s_{i}} \rangle$ is scalar-valued, so if we were to write $\sum_{i=1}^{r} \langle \bar{\pi_{i}}^{(j)}, \bar{s_{i}} \rangle - p_j$, the result would always be zero. To avoid this triviality and prove correctness, we need to rewrite it as a dot product constraint.
To accomplish this, we treat the values $\bar{\pi_{i}}^{(j)}$ as the coefficients of a polynomial. This is possible due to the one-to-one correspondence between a polynomial of degree less than $d$ and a vector of length $d$. Now, we can use the same polynomial in a dot product constraint. Specifically, we will utilize the automorphism $\sigma_{-1}$, defined as $\sigma_{-1}: X \to X^{-1}$. Applying this automorphism inside the inner product ensures that the constant coefficient is $\sum_{i=1}^{r} \langle \bar{\pi_{i}}^{(j)}, \bar{s_{i}} \rangle$. Thus, we obtain the following equality:
$$ ct\left( \sum_{i=1}^{r} \langle \sigma_{-1}(\mathbf{\pi_{i}^{(j)}}), \mathbf{\bar{s_{i}}} \rangle \right) - p_j = 0 \quad \forall j \in \{1, \dots, 256\} $$
So the previous dot product belongs to the family of dot products with a zero constant coefficient under the variables $\mathbf{\bar{s_1}}, \dots, \mathbf{\bar{s_r}}$ and can be aggregated with other functions of the same form.
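A minimal sketch of $\sigma_{-1}$ acting on a coefficient vector: since $x^d \equiv -1$, we have $x^{-k} \equiv -x^{d-k}$ for $0 < k < d$, so the constant coefficient stays put and the remaining coefficients are reversed and negated (illustrative, not this crate's API):

```rust
/// sigma_{-1}(a)(x) = a(x^{-1}) in Z_q[x]/(x^d + 1).
fn sigma_minus_one(a: &[i64], q: i64) -> Vec<i64> {
    let d = a.len();
    let mut b = vec![0i64; d];
    b[0] = a[0].rem_euclid(q);
    for j in 1..d {
        // The coefficient of x^j in the image is -a_{d-j}.
        b[j] = (-a[d - j]).rem_euclid(q);
    }
    b
}
// With this twist, ct(sigma_{-1}(a) · b) = Σ_k a_k · b_k, the inner product
// of the coefficient vectors — exactly the property used above.
```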
§Step 4 - Aggregation
The aggregation step allows the prover to compress the number of functions it needs to send for verification. Aggregation in LaBRADOR is divided into two parts. First, one would like to send all the projection functions $\left(\sum_{i=1}^{r}\langle \sigma_{-1}(\bar{\pi_{i}}^{(j)}),\mathbf{\bar{s_{i}}} \rangle - p_{j}\right)$ to guarantee that the previously sent projections were computed correctly. Additionally, the prover needs to show that the dot product constraints $f'$, when evaluated at $\mathbf{\bar{s_{1}}},\dots,\mathbf{\bar{s_{r}}}$, have zero constant coefficients. Since sending all this information directly would be too costly, the functions are aggregated by linearly combining them using uniformly chosen random challenges $\bar{\phi}^{(k)} \in (\mathbb{Z}_{q})^{L}$ and $\bar{\omega}^{(k)} \in (\mathbb{Z}_{q})^{256}$ sent by the verifier. The prover then only has to send $k = 1, \dots, \lceil 128/\log(q) \rceil$ linear combinations of functions. For simplicity, in this section we will avoid using boldface letters and vector notation when it is clear from the context.
$$f''^{(k)} = \sum_{l=1}^{L}\phi_{l}^{(k)}f'^{(l)}(s_{1},\dots,s_{r}) + \sum_{j=1}^{256}\omega_{j}^{(k)}\left(\sum_{i=1}^{r}\langle \sigma_{-1}(\pi_{i}^{(j)}), s_{i}\rangle - p_{j}\right)$$
The functions $f'$ are of the form $f'(s_{1},\dots,s_{r}) = \sum_{i,j=1}^{r}a'_{ij}\langle s_{i},s_{j}\rangle + \sum_{i=1}^{r}\langle \varphi'_{i},s_{i}\rangle = 0$. If we define $a''_{ij} = \sum_{l=1}^{L}\phi_{l}^{(k)}a'^{(l)}_{ij}$ and $\varphi''_{i} = \sum_{l=1}^{L}\phi_{l}^{(k)}\varphi'^{(l)}_{i} + \sum_{j=1}^{256}\omega_{j}^{(k)}\sigma_{-1}(\pi_{i}^{(j)})$, then we can rewrite the $f''^{(k)}$ so that they take the form of a dot product constraint that becomes zero under $s_{1},\dots,s_{r}$:
$$0 = \sum_{i,j=1}^{r}a''_{ij}\langle s_{i},s_{j}\rangle + \sum_{i=1}^{r}\langle \varphi''_{i},s_{i}\rangle - \left(\langle \bar{\omega}^{(k)},\bar{p}\rangle + \sum_{l=1}^{L}\phi_{l}^{(k)}b_{0}'^{(l)}\right)$$
So by sending only $\sum_{i,j=1}^{r}a''_{ij}\langle s_{i},s_{j}\rangle + \sum_{i=1}^{r}\langle \varphi''_{i},s_{i}\rangle$, the verifier can check whether the equation holds, since it is already in possession of the remaining part of the function; sending that part alone is therefore enough to verify the condition.
The second part of the aggregation step aggregates all the dot product constraint functions that become zero when applied to $s_1,\dots,s_r$ together with the previous functions, again using uniformly chosen linear combinations. In this case, instead of sending these aggregated functions, their correctness is proven probabilistically using the amortized opening $\bar{z}$ explained in the amortization step.
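A minimal sketch of the aggregation mechanics: given the coefficient vectors of $L$ constraints and the verifier's challenges, the aggregated coefficients are just challenge-weighted sums (flat integers for brevity; names are illustrative):

```rust
/// Computes Σ_l φ_l · coeffs_l (mod q), the aggregated coefficient vector.
fn aggregate(coeffs: &[Vec<i64>], challenges: &[i64], q: i64) -> Vec<i64> {
    let mut out = vec![0i64; coeffs[0].len()];
    for (c_l, phi_l) in coeffs.iter().zip(challenges) {
        for (o, v) in out.iter_mut().zip(c_l) {
            *o = (*o + phi_l * v).rem_euclid(q);
        }
    }
    out
}
```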
§Step 5 - Amortization
Just like in the projection step, where we used a statistical argument to verify that the norm of the witness vector is small, in the amortization step both the correctness of the second aggregation part and the validity of the commitments are proven probabilistically.
In this step the verifier sends a collection of random polynomials $\mathbf{c_{1}},\dots,\mathbf{c_{r}}$ as challenges, therefore defining a random linear combination of openings $\mathbf{\bar{z}}=\mathbf{c_{1}\bar{s_{1}}}+\dots+\mathbf{c_{r}\bar{s_{r}}}$. Instead of proving all commitments directly, the prover only needs to prove their linear combinations. This recursive amortization process reduces the witness size in each iteration.
In order to verify that the Ajtai commitments were computed correctly, instead of receiving the witness vectors the verifier can use $\mathbf{\bar{z}}$ to gain information about them. More specifically, the verifier can check both that the norm of $\mathbf{\bar{z}}$ is sufficiently small and that $A\mathbf{\bar{z}} = \sum_{i=1}^{r}\mathbf{c_{i}}A\mathbf{\bar{s_{i}}} = \sum_{i=1}^{r}\mathbf{c_{i}}\mathbf{\bar{t_{i}}}$, given that it is in possession of $A$, $\mathbf{\bar{z}}$, the $\mathbf{\bar{t_{i}}}$, and the challenges $\mathbf{c_{i}}$.
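A minimal sketch of this check over flat integers (in the protocol the $\mathbf{c_{i}}$ are ring elements and all products are polynomial; names are illustrative):

```rust
/// Verifier test: A·z must equal Σ_i c_i · t_i (mod q).
fn check_amortized(a: &[Vec<i64>], z: &[i64], t: &[Vec<i64>], c: &[i64], q: i64) -> bool {
    // Left side: A·z mod q.
    let az: Vec<i64> = a
        .iter()
        .map(|row| row.iter().zip(z).map(|(x, y)| x * y).sum::<i64>().rem_euclid(q))
        .collect();
    // Right side: challenge-weighted sum of the inner commitments.
    let mut rhs = vec![0i64; az.len()];
    for (ci, ti) in c.iter().zip(t) {
        for (r, v) in rhs.iter_mut().zip(ti) {
            *r = (*r + ci * v).rem_euclid(q);
        }
    }
    az == rhs
}
```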
In the last aggregation step, the verifier sends uniformly distributed challenges $\bar{\alpha} \in (\mathbb{Z}_{q})^{L}$ and $\bar{\beta} \in (\mathbb{Z}_{q})^{\lceil 128/\log(q) \rceil}$, so that the same scheme can be applied again, but now for all the dot product constraints (including those sent in the first aggregation step).
So, we will aggregate all functions into one linear combination of functions of the form: $$\sum_{k=1}^{K}\alpha_{k}f^{(k)}(\mathbf{\bar{s_{1}}},\dots,\mathbf{\bar{s_{r}}})+\sum_{k=1}^{\lceil 128/\log(q) \rceil}\beta_{k}f''^{(k)}(\mathbf{\bar{s_{1}}},\dots,\mathbf{\bar{s_{r}}})$$
When we decompose these functions into their respective parts, we will only be interested in sending $\mathbf{h_{ij}} = \frac{1}{2}(\langle \bar{\varphi_{i}},\mathbf{\bar{s_{j}}}\rangle + \langle \bar{\varphi_{j}},\mathbf{\bar{s_{i}}}\rangle)$, since the information regarding $\langle \mathbf{\bar{s_{i}}},\mathbf{\bar{s_{j}}}\rangle$ was already sent with the polynomial $\mathbf{\bar{g}}$ in the Ajtai commitment step.
The $\bar{\varphi_{i}}$ mentioned in $\mathbf{h_{ij}}$ are defined as: $$\bar{\varphi_{i}} = \sum_{k=1}^{K}\alpha_{k}\bar{\varphi_{i}}^{(k)}+\sum_{k=1}^{\lceil 128/\log(q) \rceil}\beta_{k}\bar{\varphi_{i}}''^{(k)}$$ Just like in the case of the garbage polynomial $\mathbf{\bar{g}}$, the $\mathbf{h_{ij}}$ also form a matrix of polynomials that can be decomposed with respect to some base $b_{1}$ and sent in the same way as $\mathbf{\bar{g}}$ by defining: $$\mathbf{\bar{u_{2}}}=D*\mathbf{\bar{h}}$$
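A minimal sketch of one $\mathbf{h_{ij}}$ term over flat integers, assuming $q$ is prime so that $2$ is invertible (in the protocol these are ring elements; names are illustrative):

```rust
/// Square-and-multiply modular exponentiation (i128 intermediates avoid overflow).
fn mod_pow(mut base: i64, mut exp: i64, q: i64) -> i64 {
    let mut acc = 1i64;
    base = base.rem_euclid(q);
    while exp > 0 {
        if exp & 1 == 1 {
            acc = (acc as i128 * base as i128 % q as i128) as i64;
        }
        base = (base as i128 * base as i128 % q as i128) as i64;
        exp >>= 1;
    }
    acc
}

/// h_ij = (⟨φ_i, s_j⟩ + ⟨φ_j, s_i⟩) / 2 (mod q), with 2⁻¹ via Fermat's
/// little theorem, assuming q is prime.
fn h_term(phi_i: &[i64], phi_j: &[i64], s_i: &[i64], s_j: &[i64], q: i64) -> i64 {
    let dot = |a: &[i64], b: &[i64]| -> i64 {
        a.iter().zip(b).map(|(x, y)| x * y).sum::<i64>().rem_euclid(q)
    };
    let two_inv = mod_pow(2, q - 2, q);
    ((dot(phi_i, s_j) + dot(phi_j, s_i)) as i128 * two_inv as i128 % q as i128) as i64
}
```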