📝 Walkthrough

This PR introduces comprehensive Ethereum EIP-7594 PeerDAS (Peer Data Availability Sampling) support, adding KZG multiproof operations, polynomial recovery from erasure samples, extended FFT descriptors for coset domains, cell serialization, and complete workflows for computing, verifying, and recovering cells with cryptographic proofs. It includes supporting mathematical modules, test infrastructure, and technical documentation.

Changes
Sequence Diagram(s)

sequenceDiagram
participant App as Application
participant Cell as Cell Computation
participant FFT as FFT/Coset Ops
participant KZG as KZG Proofs
participant Verify as Verification
App->>Cell: compute_cells(blob)
Cell->>FFT: blob_to_field_polynomial()
FFT->>FFT: coset_ifft_rn() for coefficients
Cell->>FFT: coset_fft_nr() half per cell
FFT-->>Cell: cell evaluations
Cell-->>App: Cell[] bytes
App->>KZG: compute_cells_and_kzg_proofs(blob)
KZG->>Cell: compute_cells(blob)
Cell-->>KZG: Cell[]
KZG->>KZG: getTauExtFftArray() precompute
KZG->>KZG: kzg_coset_prove() FK20 proofs
KZG-->>App: Cell[], KZGProof[]
App->>Verify: recover_cells_and_kzg_proofs(cell_indices, cells)
Verify->>FFT: recoverPolynomialCoeff() reconstruct
FFT->>FFT: vanishingPolynomial() missing cells
FFT->>FFT: coset division recover evaluations
FFT-->>Verify: polynomial coefficients
Verify->>KZG: kzg_coset_prove() proofs on recovered
KZG-->>Verify: recovered Cell[], KZGProof[]
Verify-->>App: recovery result
sequenceDiagram
participant Caller
participant Poly as Polynomial Ops
participant FFT as FFT Engine
participant Vanish as Vanishing Poly
participant Recover as Recovery
Caller->>Recover: recoverPolyFromSamples(samples, domain)
Recover->>Recover: validate inputs (50% threshold)
Recover->>Vanish: vanishingPolynomial(missing_roots)
Vanish->>Vanish: synthetic_multiply(x - root_i)
Vanish-->>Recover: Z(x) coefficients
Recover->>FFT: fft_nr(Z) get evaluations
FFT-->>Recover: Z(ω^k) evaluations
Recover->>FFT: (E·Z) product in evaluation form
Recover->>FFT: ifft_rn(E·Z) back to coefficients
FFT-->>Recover: (E·Z) coefficients
Recover->>Poly: coset shift scale both (E·Z), Z
Recover->>FFT: coset_fft_nr() shift to coset domain
FFT-->>Recover: shifted evaluations
Recover->>Recover: pointwise division E·Z / Z
Recover->>FFT: coset_ifft_rn() unshift to original
FFT-->>Recover: final polynomial coefficients
Recover-->>Caller: recovered PolynomialCoef
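The recovery path in the diagrams above (vanishing polynomial, E·Z product, division off the evaluation domain) can be sketched end to end over a toy prime field. Everything here is illustrative: field modulus 17, a size-8 domain, naive Lagrange interpolation in place of the coset FFTs, and invented helper names; Constantine runs the same algebra over Fr[BLS12-381] with an 8192-point domain.

```python
# Toy Reed-Solomon recovery via a vanishing polynomial (naive O(n^2)
# interpolation stands in for the FFT/coset-FFT steps in the diagram).
P = 17                                  # toy prime field
W = 9                                   # 9 has multiplicative order 8 mod 17
N = 8
DOMAIN = [pow(W, i, P) for i in range(N)]

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def mul_linear(poly, root):
    # poly(x) * (x - root), coefficients stored low-to-high
    out = [0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k] = (out[k] - root * c) % P
        out[k + 1] = (out[k + 1] + c) % P
    return out

def interpolate(points):
    # Naive Lagrange interpolation; the real code uses (coset) IFFTs
    result = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                basis = mul_linear(basis, xj)
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P
        for k, c in enumerate(basis):
            result[k] = (result[k] + scale * c) % P
    return result

def recover(samples):
    # samples: {domain index: evaluation} for the surviving cells
    missing = [i for i in range(N) if i not in samples]
    assert len(missing) <= N // 2, "need at least 50% of the samples"
    Z = [1]                             # Z(x) vanishes on the missing points
    for i in missing:
        Z = mul_linear(Z, DOMAIN[i])
    # (E*Z) agrees with (D*Z) on the whole domain: Z zeroes out the gaps
    ez_pts = [(DOMAIN[i], samples.get(i, 0) * poly_eval(Z, DOMAIN[i]) % P)
              for i in range(N)]
    EZ = interpolate(ez_pts)
    # Divide pointwise where Z != 0, i.e. off the domain (the "coset" step)
    xs = [x for x in range(1, P) if x not in DOMAIN][:N]
    d_pts = [(x, poly_eval(EZ, x) * pow(poly_eval(Z, x), P - 2, P) % P)
             for x in xs]
    D = interpolate(d_pts)
    return [poly_eval(D, DOMAIN[i]) for i in range(N)]

data = [5, 3, 2, 7]                     # degree < N/2: rate-1/2 extension
evals = [poly_eval(data, x) for x in DOMAIN]
recovered = recover({i: evals[i] for i in [0, 2, 5, 7]})  # half the cells lost
assert recovered == evals
```

The key invariant is the one the diagram relies on: since Z vanishes exactly on the missing points, E·Z and D·Z agree on the whole domain, so interpolating E·Z recovers D·Z, and dividing by Z anywhere Z is nonzero recovers D.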
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
Greptile Summary

This PR implements the cryptographic backend for EIP-7594 PeerDAS, including FK20 multi-proofs, Reed-Solomon erasure-code recovery, coset FFTs, and the full public API (
Confidence Score: 2/5
Important Files Changed
Sequence Diagram

sequenceDiagram
participant Caller
participant EIP7594 as eth_eip7594_peerdas
participant PeerDAS as data_availability_sampling/eth_peerdas
participant FK20 as kzg_multiproofs (FK20)
participant FFT as math/polynomials/fft
participant Toeplitz as math/matrix/toeplitz
participant SRS as EthereumKZGContext (SRS)
Note over Caller,SRS: compute_cells_and_kzg_proofs
Caller->>EIP7594: compute_cells_and_kzg_proofs(blob)
EIP7594->>EIP7594: blob_to_field_polynomial → poly_lagrange
EIP7594->>FFT: ifft_nn (Lagrange→Monomial) → poly_monomial
EIP7594->>EIP7594: compute_cells (half-FFT trick)
EIP7594->>FK20: getTauExtFftArray(srs_monomial_g1, domainRoot)
FK20->>FFT: ec_fft_nr × L (kernel FFTs)
EIP7594->>FK20: kzg_coset_prove(tauExtFftArray, poly_monomial)
loop offset 0..L-1
FK20->>Toeplitz: makeCirculantMatrix
FK20->>Toeplitz: toeplitzMatVecMulPreFFT
Toeplitz->>FFT: fft_nr (Fr circulant)
Toeplitz->>FFT: ec_ifft_rn (EC IFFT)
end
FK20->>FFT: ec_fft_nr (final proof FFT)
EIP7594-->>Caller: cells[], proofs[]
Note over Caller,SRS: recover_cells_and_kzg_proofs
Caller->>EIP7594: recover_cells_and_kzg_proofs(indices, cells)
EIP7594->>PeerDAS: recoverPolynomialCoeff(indices, cosets, fft_desc_ext)
PeerDAS->>FFT: fft_nr (zero_poly eval)
PeerDAS->>FFT: ifft_rn (E·Z coeffs)
PeerDAS->>FFT: coset_fft_nr (E·Z over coset)
PeerDAS->>FFT: coset_fft_nr (Z over coset)
PeerDAS->>PeerDAS: pointwise divide
PeerDAS->>FFT: coset_ifft_rn (recover coeffs)
PeerDAS-->>EIP7594: poly_coeff[2N]
EIP7594->>FFT: fft_nr (eval all cells)
EIP7594->>FK20: getTauExtFftArray + kzg_coset_prove
EIP7594-->>Caller: recovered_cells[], recovered_proofs[]
Last reviewed commit: "Add research papers"
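The Toeplitz/circulant steps in the FK20 diagram (makeCirculantMatrix, toeplitzMatVecMulPreFFT) rest on one identity: an n×n Toeplitz matrix-vector product embeds into a single 2n-point cyclic convolution. A minimal sketch, with the convolution done naively for clarity; the fft_nr/ec_ifft_rn pair computes exactly this convolution in O(n log n), over Fr scalars and elliptic-curve points rather than the plain integers used here.

```python
def cyclic_convolve(c, v):
    # circulant(c) @ v; production code replaces this with FFT -> pointwise
    # multiply -> IFFT, which is the whole point of the FK20 construction
    n = len(c)
    return [sum(c[(i - j) % n] * v[j] for j in range(n)) for i in range(n)]

def toeplitz_matvec(col, row, v):
    # col = first column, row = first row of the Toeplitz matrix
    # (col[0] == row[0]); embed into a 2n x 2n circulant and truncate
    n = len(col)
    # First column of the circulant extension:
    # [col[0..n-1], one padding slot, row[n-1], ..., row[1]]
    c = list(col) + [0] + list(row[1:])[::-1]
    vpad = list(v) + [0] * n
    return cyclic_convolve(c, vpad)[:n]

# Check against the explicit Toeplitz matrix T[i][j] = col[i-j] or row[j-i]
col, row, v = [1, 2, 3, 4], [1, 5, 6, 7], [1, 1, 2, 3]
T = [[col[i - j] if i >= j else row[j - i] for j in range(4)] for i in range(4)]
expected = [sum(T[i][j] * v[j] for j in range(4)) for i in range(4)]
assert toeplitz_matvec(col, row, v) == expected
```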
## @return: FFTStatus indicating success or failure

let n = v.len
Memory leak: vExtFft allocated but never freed
vExtFft is allocated on every call but is never used and never freed, leading to a guaranteed memory leak on each invocation of toeplitzMatVecMul. The actual FFT output buffer used is vExtFftSeq (allocated a few lines later). The vExtFft allocation should be removed entirely.
## @return: FFTStatus indicating success or failure
let n = v.len
let vExt = allocHeapArrayAligned(EC, n2, 64)
for i in 0 ..< n:
var indices_arr: array[256, uint64]
doAssert indices.len <= 256
for i in 0 ..< indices.len:
  indices_arr[i] = indices[i]
Wrong vanishing polynomial: all 256 indices used instead of indices.len
evalVanishingPolynomial initialises indices_arr as array[256, uint64] (zero-initialised), then passes the full 256-element array to vanishingPolynomialForIndices. Because numIndices is the compile-time array length (256), the call iterates over all 256 entries, treating the zero-filled tail entries as index 0 — i.e., it adds (domain.rootsOfUnity[0] = 1) as an extra root (256 - indices.len) times. The resulting vanishing polynomial has completely wrong roots for any call where indices.len < 256.
Use the runtime version vanishingPolynomialForIndicesRT which correctly limits iteration to indices.len:
- var indices_arr: array[256, uint64]
- doAssert indices.len <= 256
- for i in 0 ..< indices.len:
-   indices_arr[i] = indices[i]
+ var poly: PolynomialCoef[N, F]
+ vanishingPolynomialForIndicesRT(poly, indices, domain)
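The bug above can be reproduced in a few lines: building the vanishing polynomial from a zero-padded fixed-size buffer silently multiplies in (x - root[0]) once per padding slot. The field size 17, the size-8 domain, and the helper names below are illustrative stand-ins, not Constantine's API.

```python
# Toy reproduction of the zero-padded vanishing-polynomial bug.
P = 17
roots = [pow(9, i, P) for i in range(8)]   # 9 generates an order-8 subgroup mod 17

def vanishing_poly(indices, n_used):
    # Z(x) = product over the first n_used entries of (x - roots[idx])
    z = [1]
    for idx in indices[:n_used]:
        nz = [0] * (len(z) + 1)
        for k, c in enumerate(z):
            nz[k] = (nz[k] - roots[idx] * c) % P
            nz[k + 1] = (nz[k + 1] + c) % P
        z = nz
    return z

def eval_at(z, x):
    acc = 0
    for c in reversed(z):
        acc = (acc * x + c) % P
    return acc

missing = [2, 5]
padded = missing + [0] * 6                  # fixed array[8], zero-initialised

correct = vanishing_poly(padded, len(missing))  # iterate indices.len entries
buggy = vanishing_poly(padded, len(padded))     # iterate the whole buffer

# The correct Z vanishes only on the missing roots...
assert eval_at(correct, roots[2]) == 0 and eval_at(correct, roots[5]) == 0
assert eval_at(correct, roots[0]) != 0
# ...while the buggy one also vanishes at roots[0] == 1, with multiplicity 6
assert eval_at(buggy, roots[0]) == 0
assert len(buggy) == len(correct) + 6       # six spurious (x - 1) factors
```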
for i in 1 ..< num_cells:
  if uint64(cell_indices[i-1]) >= uint64(cell_indices[i]):
    return cttEthKzg_InputsLengthsMismatch
Missing bounds validation for cell indices
recover_cells_and_kzg_proofs validates that cell_indices is sorted and duplicate-free, but never checks that each index is within the valid range [0, CELLS_PER_EXT_BLOB). An index of, say, 200 would pass all current checks but later cause out-of-bounds memory access when used in recoverPolynomialCoeff (e.g., accessing fft_desc.rootsOfUnity[root_idx] with root_idx = 200 * stride).
Add a bounds check after the sort check:
# Check that input is sorted and has no duplicates
for i in 1 ..< num_cells:
  if uint64(cell_indices[i-1]) >= uint64(cell_indices[i]):
    return cttEthKzg_InputsLengthsMismatch

# Check that all cell indices are in range
for idx in cell_indices:
  if uint64(idx) >= uint64(CELLS_PER_EXT_BLOB):
    return cttEthKzg_InputsLengthsMismatch
poly_coef_nat.computeCoefPoly(poly_eval_brp, ctx.fft_desc_ext)

# Step 3b: Shift coefficients by w_8192^k
# w_8192 = primitive 8192nd root of unity (coset shift factor)
Dead variable: odd_evals wastes stack space
odd_evals is declared as a 4096-element Fr[BLS12_381] array (≈128 KB on the stack) but is never read or written. This is a significant unnecessary stack allocation and likely a leftover from an earlier draft.
# w_8192 = primitive 8192nd root of unity (coset shift factor)
  order = N,
  generatorRootOfUnity = domain.rootsOfUnity[1]
)

var coeffs: array[N, F]
let ifft_status = ifft_rn(fft_desc, coeffs, all_samples)
256-sample hard limit contradicts documented N/2 capacity
recoverPolyFromSamples documents support for "up to N - n = n(r - 1) erasures" (i.e. up to N/2 for r=2), but the implementation uses a fixed array[256, uint64] for missing_indices and returns Recovery_TooManyMissing when missing_count >= 256. For any N > 512, having between 256 and N/2 missing samples — which should be recoverable — will incorrectly return this error. The limit should either be raised (using heap allocation) or the documentation should explicitly state the 256-index cap.
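The capacity arithmetic behind this comment, as a one-function sketch: with extension rate r, n data symbols become N = r·n coded symbols and any n of them suffice, so up to N - n = n(r - 1) erasures are recoverable. The function name is illustrative; the 8192 figure is the PeerDAS extended domain size.

```python
# Maximum recoverable erasures for a rate-1/r Reed-Solomon extension.
def max_recoverable_erasures(N, r):
    n = N // r          # original data symbols
    return N - n        # == n * (r - 1)

assert max_recoverable_erasures(512, 2) == 256    # a cap of 256 is exact here
assert max_recoverable_erasures(8192, 2) == 4096  # PeerDAS extended domain
# For N = 8192, missing counts in 257..4096 are recoverable in theory
# but rejected by the fixed array[256] implementation.
```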
    return cttEthKzg_ScalarZero
  of cttCodecScalar_ScalarLargerThanCurveOrder:
    return cttEthKzg_ScalarLargerThanCurveOrder

# Step 2: Convert to monomial form via IFFT (needed for FK20 proofs)
var poly_monomial: PolynomialCoef[FIELD_ELEMENTS_PER_BLOB, Fr[BLS12_381]]
poly_monomial.computeCoefPoly(poly_lagrange, ctx.fft_desc_ext)
getTauExtFftArray recomputed on every call — very expensive
Both compute_cells_and_kzg_proofs and recover_cells_and_kzg_proofs call getTauExtFftArray, which performs L=64 EC-FFTs of size CDS=128 on every invocation. The function's own docstring notes "The result can be cached and reused for multiple polynomials." This precomputed table is structurally equivalent to s->x_ext_fft_columns in c-kzg-4844, which is computed once during trusted setup. Recomputing it per call makes the API orders of magnitude slower than the reference implementation for batch workloads. It should be stored in EthereumKZGContext and computed once during trusted_setup_load.
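The fix this comment suggests is a compute-once-at-construction pattern. A sketch of the shape of the change; the class name, the fake precompute cost counter, and compute_tau_ext_fft are illustrative stand-ins for EthereumKZGContext and getTauExtFftArray.

```python
# Move the expensive precomputation into context construction so every
# proof call reuses it (analogous to c-kzg-4844's x_ext_fft_columns).
CALLS = {"precompute": 0}

def compute_tau_ext_fft(srs):
    CALLS["precompute"] += 1     # stand-in for 64 EC-FFTs of size 128
    return [s * 2 for s in srs]

class KZGContext:
    def __init__(self, srs):
        self.srs = srs
        # done once, at trusted-setup load time
        self.tau_ext_fft = compute_tau_ext_fft(srs)

    def prove(self, poly):
        # every call reuses the cached table instead of recomputing it
        return sum(t * c for t, c in zip(self.tau_ext_fft, poly))

ctx = KZGContext([1, 2, 3, 4])
proofs = [ctx.prove([1, 0, 0, 0]), ctx.prove([0, 1, 0, 0])]
assert CALLS["precompute"] == 1  # precomputed once, not once per proof
```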
## 5. **Recovery**: If ≥50% of cells available, recover all cells
##
## References:
## - EIP-7594: https://eips.ethereum.org/EIPS/eip-7594
Duplicate constant definitions
FIELD_ELEMENTS_PER_CELL, CELLS_PER_BLOB, CELLS_PER_EXT_BLOB, and BYTES_PER_CELL are defined identically here and also in constantine/commitments_setups/ethereum_kzg_srs.nim. Keeping two independent definitions risks them diverging silently. The definitions in eth_eip7594_peerdas.nim should be removed in favour of importing them from the SRS module.
fffft (Outdated)

@@ -0,0 +1 @@
Subproject commit be8702c88e7564ae83d4e5327430927720c5dd49
Git submodule added without .gitmodules metadata
fffft is tracked as a git submodule (mode 160000, commit be8702c) but no .gitmodules file exists in the repository. Without .gitmodules, there is no URL or path record for this submodule, so git submodule update --init will silently fail for anyone who clones the repo. Either add a .gitmodules entry (with url, path, and optionally branch fields), or remove this submodule reference if it is not yet ready to be included.
Actionable comments posted: 18
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
constantine/commitments_setups/ethereum_kzg_srs.nim (1)
273-281: ⚠️ Potential issue | 🟡 Minor

Potential logic error in fscanf validation: `and` should likely be `or`.

At lines 275, 290, and 306, the error check uses `and` but should likely use `or`:

if num_matches != 1 and charsRead != 2*g1Bytes:

With `and`, the guard fires only when both sub-checks are true, so if `num_matches` is wrong but `charsRead` is correct, the malformed input passes erroneously.

Proposed fix

- if num_matches != 1 and charsRead != 2*g1Bytes:
+ if num_matches != 1 or charsRead != 2*g1Bytes:

Apply the same fix at lines 290 and 306.
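A truth-table check of the bug above: with `and`, a failed parse slips through whenever the character count happens to be right. The constant 96 stands in for 2*g1Bytes (96 hex characters for a 48-byte compressed G1 point).

```python
# Demonstrate why the guard needs `or`: `and` only rejects when BOTH
# sub-checks fail, accepting partially malformed input.
G1_HEX_LEN = 96  # stand-in for 2*g1Bytes

def reject_and(num_matches, chars_read):
    return num_matches != 1 and chars_read != G1_HEX_LEN  # current code

def reject_or(num_matches, chars_read):
    return num_matches != 1 or chars_read != G1_HEX_LEN   # proposed fix

# fscanf matched nothing (0 matches) but the length is coincidentally right:
assert reject_and(0, G1_HEX_LEN) is False   # bug: malformed input accepted
assert reject_or(0, G1_HEX_LEN) is True     # fix: rejected as intended
# A fully well-formed read is accepted by both:
assert reject_and(1, G1_HEX_LEN) is False
assert reject_or(1, G1_HEX_LEN) is False
```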
♻️ Duplicate comments (2)
constantine/math/matrix/toeplitz.nim (1)
262-264: ⚠️ Potential issue | 🟠 Major

Memory leak: vExtFft is allocated but never used or freed.

Same issue as noted previously: vExtFft is allocated but vExtFftSeq is used instead, causing a memory leak.
constantine/erasure_codes/recovery.nim (1)
276-291: ⚠️ Potential issue | 🟠 Major

Memory leak: zero_poly_eval is never freed.

Same issue as noted previously: zero_poly_eval allocated at line 276 is never freed.
🟡 Minor comments (10)
papers/polynomial_commitments/FK23 - 2023-033 - Fast amortized KZG proofs.md-423-423 (1)
423-423: ⚠️ Potential issue | 🟡 Minor

Fix heading level jumps (MD001).

The heading hierarchy skips a level at both locations, which breaks markdownlint and navigation structure.

🛠️ Proposed fix

-##### Construction
+#### Construction
...
-##### Construction
+#### Construction

Also applies to: 449-449
papers/ethereum/B18 - reed-solomon_erasure_recovery_with_ffts.md-16-16 (1)
16-16: ⚠️ Potential issue | 🟡 Minor

Math multiplication syntax is being parsed as emphasis (MD037).

Several `*` operators are interpreted as Markdown emphasis markers. Prefer `$...$` math with `\cdot` (or inline code) to preserve rendering.

🛠️ Example fix pattern

-First of all, note that D * Z_{r,I} = E * Z_{r,I}.
+First, note that $D \cdot Z_{r,I} = E \cdot Z_{r,I}$.

Also applies to: 18-18, 24-24, 30-30, 39-39
research/peerdas/roots_of_unity_check.py-3-3 (1)
3-3: ⚠️ Potential issue | 🟡 Minor

Fix domain size typo in header comment.

The comment says 80192; based on the script body this should be 8192.

🛠️ Proposed fix

-# are the first half of the roots of the 80192 domain
+# are the first half of the roots of the 8192 domain
papers/ethereum/B18 - reed-solomon_erasure_recovery_with_ffts.md-12-12 (1)
12-12: ⚠️ Potential issue | 🟡 Minor

Heading level should not skip (MD001).

Use an `##` heading here to keep the hierarchy valid.

🛠️ Proposed fix

-### Erasure code recovery in O(n*log^2(n)) time
+## Erasure code recovery in O(n*log^2(n)) time
papers/polynomial_commitments/F21 - PCS-multiproofs-using-random-evaluation.md-16-16 (1)
16-16: ⚠️ Potential issue | 🟡 Minor

Proofread the transcription before treating this as an implementation note.

There are a few sentence-level errors in math-heavy passages here ("commiments", "remans", "we will the roots of unity…", "allow all q_j…"). In this kind of derivation, those read like notation mistakes rather than simple typos.

Also applies to: 106-106, 197-197, 279-279
tests/eth_eip7594_peerdas/t_compute_cells_opt.nim-14-16 (1)
14-16: ⚠️ Potential issue | 🟡 Minor

Update the example command to the new test path.

The comment still points to tests/eth_eip4844_peerdas/..., so copy-pasting it will fail from the repo root.

Suggested fix

-## nim c -r -d:release --hints:off --warnings:off --outdir:build/wip --nimcache:nimcache/wip tests/eth_eip4844_peerdas/t_compute_cells_opt.nim
+## nim c -r -d:release --hints:off --warnings:off --outdir:build/wip --nimcache:nimcache/wip tests/eth_eip7594_peerdas/t_compute_cells_opt.nim
papers/ethereum/WZ24 - 2024-1362 - A documentation of Ethereum PeerDAS.md-92-92 (1)
92-92: ⚠️ Potential issue | 🟡 Minor

Minor typo in mathematical notation.

"there exists a subset $\mathbb{H}_j$ of $\mathbb{H}$ of size $2^{t-j}$ that is a subgroup. In particular, we have $\mathbb{H}_j = \{\omega^i \in \mathbb{H} \mid i \equiv 0 \mod 2^j\}$. Note that $\nu = \omega^{2^j}$ is a $2^{t-j}$ that root of unity" appears to be missing "th"; it should read "$2^{t-j}$-th root of unity".
papers/ethereum/WZ24 - 2024-1362 - A documentation of Ethereum PeerDAS.md-18-18 (1)
18-18: ⚠️ Potential issue | 🟡 Minor

Minor grammatical error in abstract.

"about to integrated" should be "about to be integrated".
📝 Suggested fix

-PeerDAS aims to pave the way for the full data availability solution and will soon become an essential part of Ethereum's protocol. Therefore, it is crucial to formally understand its security guarantees.
+As the next step towards this vision, an intermediate solution called PeerDAS is about to be integrated, to bridge the way to the full protocol.
tests/erasure_codes/t_zero_polynomial.nim-5-6 (1)
5-6: ⚠️ Potential issue | 🟡 Minor

Incomplete license header.

The license header appears to be missing the MIT license option. Other files in this PR include both MIT and Apache v2 licenses.
📝 Suggested fix

-# * MIT license (license terms in the root directory or at http://www.apache.org/licenses/LICENSE-2.0).
+# * MIT license (license terms in the root directory or at http://opensource.org/licenses/MIT).
+# * Apache v2 license (license terms in the root directory or at http://www.apache.org/licenses/LICENSE-2.0).
 # at your option. This file may not be copied, modified, or distributed except according to those terms.
eth_eip7594_peerdas.nim-241-244 (1)
241-244: ⚠️ Potential issue | 🟡 Minor

Dead code: the odd_evals variable is declared but never used.

The variable odd_evals is declared at line 242 but the FFT writes directly to pHalfCells:

var odd_evals: array[N, Fr[BLS12_381]]  # Never used
let fft_status = ctx.fft_desc_ext.fft_nn(pHalfCells.toOpenArray(N), poly_coef_nat.coefs)
# Step 3d: FFT of shifted coefficients -> evaluations directly into cells 64-127 # fft_nn: natural input -> natural output let pHalfCells = cast[ptr UncheckedArray[Fr[BLS12_381]]](cells_evals[HALF_CDS][0].addr) - var odd_evals: array[N, Fr[BLS12_381]] let fft_status = ctx.fft_desc_ext.fft_nn(pHalfCells.toOpenArray(N), poly_coef_nat.coefs)🤖 Prompt for AI Agents
🧹 Nitpick comments (17)
research/peerdas/roots_of_unity_check.py (1)
31-44: Add an explicit assertion for the stated BRP relation.

This script currently prints samples but does not actually fail when the claimed property is false.
🛠️ Proposed fix
 for N in [4, 8, 4096, 8192]:
 @@
     print(f" >>> {top} first bit-reversed roots of unity")
     for i in range(top):
         print(f" - {hex(BRP_ROOTS_OF_UNITY[i])}")
+
+# Actual check for the note above:
+roots_4096 = [pow(pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // 4096, MODULUS), i, MODULUS) for i in range(4096)]
+roots_8192 = [pow(pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // 8192, MODULUS), i, MODULUS) for i in range(8192)]
+brp_4096 = bit_reversal_permutation(roots_4096)
+brp_8192 = bit_reversal_permutation(roots_8192)
+assert brp_8192[:4096] == brp_4096, "BRP prefix relation failed for 4096 vs 8192"
research/kzg/README.md (1)
25-65: Avoid duplicated roots-of-unity script across docs and repo scripts.

This script is effectively duplicated in research/peerdas/roots_of_unity_check.py; keeping one canonical script and linking to it from the README will prevent drift.

🔧 Suggested doc simplification

-This can be checked with the following script
-... full script ...
+This can be checked with:
+
+`python research/peerdas/roots_of_unity_check.py`
tests/eth_eip7594_peerdas/t_cells_and_kzg_proofs_opt.nim (2)
230-232: TODO: Test filter limits coverage.

The filter compute_cells_and_kzg_proofs_case_valid_2 restricts the test to a single vector. Consider removing this filter before merging to ensure full test coverage.
46-46: Duplicate import of constantine/named/algebras.

This module is already imported on line 40.
♻️ Remove duplicate import

 import
-  constantine/named/algebras,
   constantine/math/polynomials/[fft, polynomials],
tests/erasure_codes/t_recovery.nim (2)
21-30: Consider whether the createDomain export is necessary.

The createDomain proc is exported with * but appears to be a test utility. If it's only used in test files, the export marker may not be needed.
10-10: Remove unused random import.

The random module is imported but not used in this file.

♻️ Remove unused import

 import
-  std/[random, options],
+  std/options,
   constantine/named/algebras,
tests/data_availability_sampling/t_peerdas_recovery.nim (2)
90-97: Consider simplifying index collection logic.

The current code adds index 0 separately, then loops from 1 to add indices < 65. This works but could be clearer.
♻️ Simplified version

 var available_indices: seq[CellIndex]
 var available_cells: seq[Cell]
-available_indices.add(CellIndex(0))
-available_cells.add(cells[0])
-for i in 1 ..< CELLS_PER_EXT_BLOB:
-  if i < 65:
-    available_indices.add(CellIndex(i))
-    available_cells.add(cells[i])
+for i in 0 ..< 65:
+  available_indices.add(CellIndex(i))
+  available_cells.add(cells[i])
Verify each finding against the current code and only fix it if needed. In `@tests/data_availability_sampling/t_peerdas_recovery.nim` around lines 90 - 97, The code separately adds index 0 then loops 1..\<CELLS_PER_EXT_BLOB with an inner if i < 65; simplify by iterating once and adding when within the desired bound: use a single loop over i (referencing available_indices, available_cells, CellIndex, CELLS_PER_EXT_BLOB, and cells) and add CellIndex(i) and cells[i] only while i < 65 (or loop i from 0 up to min(CELLS_PER_EXT_BLOB, 65)) to remove the special-case for 0 and make the logic clearer.
10-10: Remove unused imports.

The imports `random`, `algorithm`, and `options` appear to be unused in this file.

♻️ Suggested cleanup

```diff
 import
-  std/[os, random, algorithm, strutils, options],
+  std/[os, strutils],
   constantine/named/algebras,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/data_availability_sampling/t_peerdas_recovery.nim` at line 10, The single import line std/[os, random, algorithm, strutils, options] includes unused modules; remove the unused imports `random`, `algorithm`, and `options` from that import list so only the modules actually used in this file (e.g., `os` and `strutils`) remain. Before changing, search the file for any references to `random`, `algorithm` or `options` (functions/types) to ensure they truly aren't used; then update the import list in the line containing std/[os, random, algorithm, strutils, options] accordingly.

constantine/erasure_codes/zero_polynomial.nim (1)
9-14: Unused import: `std/options`.

The `options` module is imported but not used anywhere in the file.

🧹 Remove unused import

```diff
 import
-  std/options,
   constantine/named/algebras,
   constantine/math/arithmetic,
   constantine/math/polynomials/[polynomials, fft],
   constantine/platforms/[allocs, views]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@constantine/erasure_codes/zero_polynomial.nim` around lines 9 - 14, The file imports std/options but never uses it; remove the unused import from the import list in zero_polynomial.nim (the import block that currently includes std/options, constantine/named/algebras, constantine/math/arithmetic, constantine/math/polynomials/[polynomials, fft], constantine/platforms/[allocs, views]) so only the actually used modules remain.

tests/commitments/trusted_setup_generator.nim (2)
23-31: Unused function `genPowersOfTauImpl`.

This function is defined but never called. It appears to be superseded by `computePowersOfTauG1` and `computePowersOfTauG2`.

🧹 Remove dead code

```diff
-func genPowersOfTauImpl(EC: typedesc, secret: auto, length: int): seq[EC] =
-  result.setLen(length)
-  var P {.noInit.}: EC
-  P.fromAffine(EC.F.C.getGenerator($EC.G))
-  result[0] = P
-  for i in 1 ..< length:
-    P.scalarMul_vartime(secret)
-    result[i] = P
-
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/commitments/trusted_setup_generator.nim` around lines 23 - 31, Remove the dead function genPowersOfTauImpl: locate the genPowersOfTauImpl definition and delete it (and any related unused declarations) since computePowersOfTauG1 and computePowersOfTauG2 supersede it; verify there are no remaining references to genPowersOfTauImpl and run the test build to ensure computePowersOfTauG1/computePowersOfTauG2 cover the needed behavior.
1-7: Missing copyright header.

This file lacks the standard Constantine copyright header present in all other source files in this PR.

📝 Add copyright header

```diff
+# Constantine
+# Copyright (c) 2018-2019 Status Research & Development GmbH
+# Copyright (c) 2020-Present Mamy André-Ratsimbazafy
+# Licensed and distributed under either of
+#   * MIT license (license terms in the root directory or at http://opensource.org/licenses/MIT).
+#   * Apache v2 license (license terms in the root directory or at http://www.apache.org/licenses/LICENSE-2.0).
+# at your option. This file may not be copied, modified, or distributed except according to those terms.
+
 import
   constantine/named/algebras,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/commitments/trusted_setup_generator.nim` around lines 1 - 7, The file trusted_setup_generator.nim is missing the standard Constantine copyright header; add the project’s canonical header block at the very top of the file (above the existing import statements) using the same text and format used in other PR files so the header is consistent across the codebase.

data_availability_sampling/eth_peerdas.nim (1)
110-212: Memory leak on assertion failure.

If any `doAssert` fails after heap allocations but before the corresponding `freeHeapAligned` calls, memory will leak. Consider using `defer` or a cleanup pattern to ensure memory is freed even on failure paths.

♻️ Suggested pattern using defer

```nim
# After each allocation, immediately register cleanup:
var extended_evaluation = allocHeapArrayAligned(Fr[Name], ext_size, alignment = 64)
defer: freeHeapAligned(extended_evaluation)
# Then remove individual freeHeapAligned calls at the end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@data_availability_sampling/eth_peerdas.nim` around lines 110 - 212, Multiple heap buffers (e.g., extended_evaluation, short_roots, short_vanishing_poly, zero_poly_coeff, zero_poly_eval, extended_times_zero, ext_times_zero_coeffs, ext_eval_over_coset, zero_poly_over_coset, reconstructed_over_coset, reconstructed_coeff) are allocated but several freeHeapAligned calls occur after doAssert checks, causing memory leaks on assertion failure; fix by registering cleanup immediately after each alloc (use Nim's defer: freeHeapAligned(<buffer>) right after each allocHeapArrayAligned) so every allocation is freed on all control paths, then remove or avoid duplicate freeHeapAligned calls at the end to prevent double-free. Ensure you apply this pattern around functions/methods where fft_nr, ifft_rn, coset_fft_nr, coset_ifft_rn and related doAssert checks appear (references: fft_nr, ifft_rn, coset_fft_nr, coset_ifft_rn, doAssert).

tests/commitments/t_kzg_multiproofs.nim (1)
149-172: Unused function with incorrect synthetic division implementation.

`computeQuotientPolyCoef` appears to be defined but never called in this test file. Additionally, the synthetic division logic at lines 163-164 is incorrect - it multiplies `poly.coefs[i]` by `z` instead of `quotient.coefs[i]` by `z`:

```nim
# Current (incorrect):
quotient.coefs[i-1].prod(poly.coefs[i], z)

# Expected for synthetic division:
quotient.coefs[i-1].prod(quotient.coefs[i], z)
quotient.coefs[i-1].sum(quotient.coefs[i-1], poly.coefs[i])
```

Consider removing this unused function or fixing the implementation if it's intended for future use.
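For reference, the expected recurrence can be checked outside Nim. Below is a minimal Python sketch (plain integers instead of field elements, hypothetical helper name) of synthetic division of p(x) by (x - z) using the corrected update q[i-1] = q[i]·z + p[i]:

```python
# Synthetic division of p(x) by (x - z): quotient q satisfies
# p(x) = q(x)*(x - z) + p(z). Working from the highest coefficient
# down, each quotient term is q[i-1] = q[i]*z + p[i] -- the recurrence
# the review says the code should use (multiply the running quotient
# term by z, not the dividend coefficient).

def synthetic_division(poly, z):
    """poly[i] is the coefficient of x^i. Returns (quotient, remainder)."""
    n = len(poly)
    q = [0] * n            # q[n-1] stays 0, matching the zeroed top coefficient
    acc = poly[n - 1]      # running value, emitted as quotient coefficients
    for i in range(n - 1, 0, -1):
        q[i - 1] = acc
        acc = acc * z + poly[i - 1]
    return q, acc          # acc is now p(z), the remainder

# p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3); divide by (x - 1)
quotient, remainder = synthetic_division([-6, 11, -6, 1], 1)
# quotient -> [6, -5, 1, 0] (i.e. x^2 - 5x + 6), remainder -> 0
```

The remainder equals p(z), which is the same value the test obtains via its polynomial-evaluation helper.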
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/commitments/t_kzg_multiproofs.nim` around lines 149 - 172, The function computeQuotientPolyCoef is unused and contains a bug in the synthetic division loop: it multiplies poly.coefs[i] by z instead of multiplying the running quotient term by z. Either remove the unused computeQuotientPolyCoef or fix the loop inside it so the update uses quotient.coefs[i] when computing quotient.coefs[i-1] (i.e., perform prod on quotient.coefs[i] with z then add poly.coefs[i]), ensure evalPolyAt is still used to compute p_z (or remove that if deleting the function), and keep the final zeroing of quotient.coefs[N-1] if the function is retained.erasure_codes/recovery.nim (1)
242-246: Dead code: Index range check is always false.

The condition `if uint64(i) >= uint64(N)` can never be true since `i` iterates from `0 ..< N`. This validation is redundant.

♻️ Proposed fix to remove dead code

```diff
-  # Validate missing indices are in range
-  for i in 0 ..< N:
-    if samples[i].isNone():
-      if uint64(i) >= uint64(N):
-        return Recovery_InvalidDomainSize
-
   var missing_idx = 0
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@erasure_codes/recovery.nim` around lines 242 - 246, The check inside the loop over "for i in 0 ..< N" is dead: the condition "if uint64(i) >= uint64(N)" can never be true; remove that redundant branch so the loop simply identifies missing samples without returning Recovery_InvalidDomainSize. Update the loop that iterates over "i" and references "samples" and "N" to drop the inner if and any use of Recovery_InvalidDomainSize in that context.

eth_eip7594_peerdas.nim (1)
343-355: Using same error code for different validation failures reduces debuggability.

All validation failures in `recover_cells_and_kzg_proofs` return `cttEthKzg_InputsLengthsMismatch`:

- `num_cells != cells.len` → length mismatch (correct)
- `num_cells < CELLS_PER_EXT_BLOB div 2` → insufficient cells for recovery
- `num_cells > CELLS_PER_EXT_BLOB` → too many cells
- Duplicate/unsorted indices → invalid input ordering

Consider using distinct error codes (e.g., `cttEthKzg_InsufficientCells`, `cttEthKzg_DuplicateCellIndex`) to aid debugging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@eth_eip7594_peerdas.nim` around lines 343 - 355, In recover_cells_and_kzg_proofs, replace the repeated cttEthKzg_InputsLengthsMismatch return value with distinct error codes so failures are distinguishable: return cttEthKzg_LengthMismatch for the num_cells != cells.len check, return cttEthKzg_InsufficientCells for num_cells < CELLS_PER_EXT_BLOB div 2, return cttEthKzg_TooManyCells for num_cells > CELLS_PER_EXT_BLOB, and return cttEthKzg_DuplicateOrUnsortedIndex (or cttEthKzg_DuplicateCellIndex) for the uint64(cell_indices[i-1]) >= uint64(cell_indices[i]) check; ensure these new error symbols are declared/added to the existing error enum and used consistently in recover_cells_and_kzg_proofs.

constantine/math/polynomials/fft.nim (1)
60-70: Destructor design looks correct but consider double-free protection.

The destructors check for `isNil()` before freeing, which is good. However, they don't set the pointer to `nil` after freeing, so if `=destroy` is somehow called twice on the same object, it could double-free.

In practice, Nim's destructor semantics should prevent this, but for defensive coding:

Optional defensive fix

```diff
 proc `=destroy`*[F](ctx: FrFFT_Descriptor[F]) =
   if not ctx.rootsOfUnity.isNil():
     ctx.rootsOfUnity.freeHeapAligned()
+    ctx.rootsOfUnity = nil
```

Note: This requires making `ctx` a `var` parameter, which changes the destructor signature. Given Nim's destructor guarantees, this is likely unnecessary.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@constantine/math/polynomials/fft.nim` around lines 60 - 70, Destructors for FrFFT_Descriptor, CosetFFT_Descriptor and ECFFT_Descriptor free ctx.rootsOfUnity but don't null it afterwards, risking a double-free if the destructor runs twice; update the three `proc `=destroy`*` signatures to take `ctx` as a var parameter and, after calling `ctx.rootsOfUnity.freeHeapAligned()`, set `ctx.rootsOfUnity = nil` (reference: the three `=destroy` procs and `rootsOfUnity.freeHeapAligned()`), or alternatively keep the current non-var signature but ensure some external ownership/guard prevents re-entry — the recommended fix is to make `ctx` mutable and clear the pointer after freeing.

tests/testutils/eth_consensus_utils.nim (1)
34-36: Empty string in `SkippedTests` array.

The `SkippedTests` array contains an empty string `""` which will never match any filename. This appears to be a placeholder. Consider using an empty array or adding a comment:

```nim
const SkippedTests = [
  "" # Placeholder - no tests currently skipped
]
```

Or simply:

```nim
const SkippedTests: array[0, string] = []
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/testutils/eth_consensus_utils.nim` around lines 34 - 36, The SkippedTests const currently contains an empty string entry which never matches any filename; update the SkippedTests constant to either be an empty sequence/array or keep a documented placeholder comment instead of "" so intent is clear—modify the SkippedTests declaration (symbol: SkippedTests) to remove the "" entry and replace it with an empty collection type or add an inline comment like "Placeholder - no tests currently skipped".
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b46a84b3-216b-4f3a-8538-800677d38020
⛔ Files ignored due to path filters (21)
- `constantine/commitments_setups/trusted_setup_ethereum_kzg4844_reference.dat` is excluded by `!**/*.dat`
- `papers/FFT/CG98 - bit-reversal-permutation - cobra.pdf` is excluded by `!**/*.pdf`
- `papers/ethereum/WZ24 - 2024-1362 - A documentation of Ethereum PeerDAS.pdf` is excluded by `!**/*.pdf`
- `papers/ethereum/images/WZ24 - Fig 1 - Section 2.1 - Reed-Solomon Codes and Roots of Unity.jpg` is excluded by `!**/*.jpg`
- `papers/ethereum/images/WZ24 - Fig 2 - Section 3.1 - Warm-Up Single Row Commitment Scheme.jpg` is excluded by `!**/*.jpg`
- `papers/polynomial_commitments/CJLMR24 - 2024-1279 - Improved polynomial division for cryptography.pdf` is excluded by `!**/*.pdf`
- `papers/polynomial_commitments/FK20 - Kate_amortized.pdf` is excluded by `!**/*.pdf`
- `papers/polynomial_commitments/FK23 - 2023-033 - Fast amortized KZG proofs.pdf` is excluded by `!**/*.pdf`
- `papers/polynomial_commitments/KZG10 - Constant size commitments to polynomials and their applications.pdf` is excluded by `!**/*.pdf`
- `papers/polynomial_commitments/images/CJLMR24 - Fig 2a - Comparison of proofs of openings at all positions.jpg` is excluded by `!**/*.jpg`
- `papers/polynomial_commitments/images/CJLMR24 - Fig 2b - Comparison of proofs of openings smaller n.jpg` is excluded by `!**/*.jpg`
- `papers/polynomial_commitments/images/CJLMR24 - Fig 2c - Comparison of proofs with precomputation.jpg` is excluded by `!**/*.jpg`
- `papers/polynomial_commitments/images/CJLMR24 - Fig 2d - Comparison of proofs for larger sizes.jpg` is excluded by `!**/*.jpg`
- `papers/polynomial_commitments/images/CJLMR24 - Fig 3a - Setup runtimes no optimizations.jpg` is excluded by `!**/*.jpg`
- `papers/polynomial_commitments/images/CJLMR24 - Fig 3b - Setup runtimes with ColEDiv precomputation.jpg` is excluded by `!**/*.jpg`
- `papers/polynomial_commitments/images/CJLMR24 - Fig 4 - Update Opening at index i not equal to j.jpg` is excluded by `!**/*.jpg`
- `papers/polynomial_commitments/images/CJLMR24 - Fig 6a - EDiv_0 matrix.jpg` is excluded by `!**/*.jpg`
- `papers/polynomial_commitments/images/CJLMR24 - Fig 6b - EDiv_1 matrix.jpg` is excluded by `!**/*.jpg`
- `papers/polynomial_commitments/images/CJLMR24 - Fig 6c - EDiv_2 matrix.jpg` is excluded by `!**/*.jpg`
- `papers/polynomial_commitments/images/CJLMR24 - Fig 6d - EDiv_3 matrix.jpg` is excluded by `!**/*.jpg`
- `papers/polynomial_commitments/images/F21 - Fig 1 - verkle_trie.svg` is excluded by `!**/*.svg`
📒 Files selected for processing (63)
- `.gitignore`
- `commitments/kzg_multiproofs.nim`
- `constantine.nimble`
- `constantine/commitments/kzg.nim`
- `constantine/commitments/kzg_multiproofs.nim`
- `constantine/commitments/kzg_parallel.nim`
- `constantine/commitments_setups/ethereum_kzg_srs.nim`
- `constantine/data_availability_sampling/eth_peerdas.nim`
- `constantine/erasure_codes/recovery.nim`
- `constantine/erasure_codes/zero_polynomial.nim`
- `constantine/eth_eip7594_peerdas.nim`
- `constantine/ethereum_eip4844_kzg.nim`
- `constantine/ethereum_eip4844_kzg_parallel.nim`
- `constantine/math/arithmetic/assembly/limbs_asm_mul_mont_arm64.nim`
- `constantine/math/arithmetic/assembly/limbs_asm_redc_mont_arm64.nim`
- `constantine/math/arithmetic/finite_fields.nim`
- `constantine/math/elliptic/ec_shortweierstrass_batch_ops.nim`
- `constantine/math/matrix/toeplitz.nim`
- `constantine/math/pairings/pairings_generic.nim`
- `constantine/math/polynomials/fft.nim`
- `constantine/math/polynomials/polynomials.nim`
- `constantine/named/zoo_endomorphisms.nim`
- `constantine/serialization/codecs_bls12_381.nim`
- `constantine/signatures/bls_signatures.nim`
- `constantine/threadpool/instrumentation.nim`
- `constantine/threadpool/primitives/topology_windows.nim`
- `data_availability_sampling/eth_peerdas.nim`
- `erasure_codes/recovery.nim`
- `erasure_codes/zero_polynomial.nim`
- `eth_eip7594_peerdas.nim`
- `fffft`
- `math/matrix/toeplitz.nim`
- `papers/README.md`
- `papers/ethereum/B18 - reed-solomon_erasure_recovery_with_ffts.md`
- `papers/ethereum/B18->Z22 - reed-solomon_erasure_recovery_with_ffts_reply12.md`
- `papers/ethereum/EIP4844 - deneb fork - polynomial-commitments.md`
- `papers/ethereum/EIP7594 - fulu fork - polynomial-commitments-sampling.md`
- `papers/ethereum/KDF22 - universal_verification_equation_for_data_availability_samping.md`
- `papers/ethereum/WZ24 - 2024-1362 - A documentation of Ethereum PeerDAS.md`
- `papers/polynomial_commitments/CJLMR24 - 2024-1279 - Improved polynomial division for cryptography.md`
- `papers/polynomial_commitments/F21 - PCS-multiproofs-using-random-evaluation.md`
- `papers/polynomial_commitments/FK23 - 2023-033 - Fast amortized KZG proofs.md`
- `papers/polynomial_commitments/T20 - multiplying-a-vector-by-a-toeplitz-matrix.md`
- `research/kzg/README.md`
- `research/kzg/fft_fr.nim`
- `research/kzg/fft_g1.nim`
- `research/peerdas/roots_of_unity_check.py`
- `research/peerdas/test_compute_cells_opt.py`
- `tests/commitments/t_kzg_multiproofs.nim`
- `tests/commitments/trusted_setup_generator.nim`
- `tests/data_availability_sampling/t_peerdas_recovery.nim`
- `tests/erasure_codes/t_recovery.nim`
- `tests/erasure_codes/t_zero_polynomial.nim`
- `tests/eth_eip7594_peerdas/t_cells_and_kzg_proofs_opt.nim`
- `tests/eth_eip7594_peerdas/t_compute_cells_opt.nim`
- `tests/math_matrix/t_toeplitz.nim`
- `tests/math_polynomials/fft_utils.nim`
- `tests/math_polynomials/t_fft.nim`
- `tests/math_polynomials/t_fft_coset.nim`
- `tests/t_eth_eip7594_peerdas.nim`
- `tests/t_ethereum_eip4844_deneb_kzg.nim`
- `tests/t_ethereum_eip4844_deneb_kzg_parallel.nim`
- `tests/testutils/eth_consensus_utils.nim`
💤 Files with no reviewable changes (2)
- constantine/math/arithmetic/assembly/limbs_asm_mul_mont_arm64.nim
- constantine/math/arithmetic/assembly/limbs_asm_redc_mont_arm64.nim
commitments/kzg_multiproofs.nim
Outdated
```nim
if status != FFT_Success:
  freeHeapAligned(circulant)
  freeHeapAligned(u)
  return

# u is already in time domain (toeplitzMatVecMulSameSize did IFFT)
```
Silent failure: kzg_coset_prove doesn't propagate errors to caller.
The function returns void and exits early on FFT failures without informing the caller:
```nim
if status != FFT_Success:
  freeHeapAligned(circulant)
  freeHeapAligned(u)
  return # Silent failure - proofs array left in undefined state
```

The caller has no way to know if proof computation succeeded. Consider returning `FFTStatus` or using an out parameter for status.
Proposed fix
```diff
-func kzg_coset_prove*[N, L, CDS: static int, Name: static Algebra](
+func kzg_coset_prove*[N, L, CDS: static int, Name: static Algebra](
     tauExtFftArray: array[L, array[CDS, EC_ShortW_Jac[Fp[Name], G1]]],
     circulantDomain: PolyEvalRootsDomain[CDS, Fr[Name]],
     proofs: var array[CDS, EC_ShortW_Aff[Fp[Name], G1]],
-    poly: PolynomialCoef[N, Fr[Name]]) {.tags:[Alloca, HeapAlloc, Vartime].} =
+    poly: PolynomialCoef[N, Fr[Name]]): FFTStatus {.tags:[Alloca, HeapAlloc, Vartime].} =
   # ... body ...
   if status != FFT_Success:
     freeHeapAligned(circulant)
     freeHeapAligned(u)
-    return
+    return status
   # ... rest ...
+  return FFT_Success
```

Also applies to: 369-375
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@commitments/kzg_multiproofs.nim` around lines 356 - 361, The kzg_coset_prove
function currently returns void and silently returns on FFT failures (e.g., the
FFT_Success checks at the shown block and also at 369-375), leaving the proofs
array in an undefined state; change the function signature to return an
FFTStatus (or add an out parameter like out status: FFTStatus), update all
callers to check that return status, and replace each early "return" on FFT
error with "freeHeapAligned(...); return <error status>;" (or set the out status
and return) so errors are propagated; ensure you update both occurrences noted
and any other similar early exits and adjust callers to handle the new return
contract.
```nim
  var indices_arr: array[256, uint64]
  doAssert indices.len <= 256
  for i in 0 ..< indices.len:
    indices_arr[i] = indices[i]

  var poly: PolynomialCoef[N, F]
  vanishingPolynomialForIndices(poly, indices_arr, domain)
```
Type mismatch: vanishingPolynomialForIndices expects exact array size.
The function vanishingPolynomialForIndices has a static numIndices parameter, but here a fixed array[256, uint64] is passed regardless of actual indices.len. This will cause the vanishing polynomial to use 256 indices instead of the actual count.
Should use vanishingPolynomialForIndicesRT instead, which handles runtime-sized indices.
🐛 Proposed fix
```diff
-  var indices_arr: array[256, uint64]
-  doAssert indices.len <= 256
-  for i in 0 ..< indices.len:
-    indices_arr[i] = indices[i]
-
   var poly: PolynomialCoef[N, F]
-  vanishingPolynomialForIndices(poly, indices_arr, domain)
+  vanishingPolynomialForIndicesRT(poly, indices, domain)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@constantine/erasure_codes/zero_polynomial.nim` around lines 223 - 229, The
code constructs a fixed array `indices_arr: array[256, uint64]` and then calls
`vanishingPolynomialForIndices(poly, indices_arr, domain)`, which expects a
compile-time `numIndices` size and thus incorrectly treats all 256 entries as
valid; replace this with the runtime-capable API by building a runtime
slice/array of the actual length (using `indices.len`) and call
`vanishingPolynomialForIndicesRT(poly, indices_runtime, domain)` instead so the
vanishing polynomial uses the real number of indices; update references around
`indices_arr`, `vanishingPolynomialForIndices`, and `PolynomialCoef` to use the
RT variant and pass only the valid indices.
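To see why the fixed-size array is a problem, here is a small Python sketch (toy prime field and hypothetical names, not Constantine's API) that builds Z(x) = ∏(x - rᵢ) from only the actual roots; padding the root list with 256 - len(indices) spurious zero entries would raise the degree and change every coefficient:

```python
# Z(x) = prod_i (x - r_i) built by repeated multiplication by (x - r),
# using only the actual roots. Arithmetic is over a toy prime field.

P = 97  # small prime standing in for the BLS12-381 scalar field

def vanishing_polynomial(roots, p=P):
    coeffs = [1]                      # the constant polynomial 1
    for r in roots:
        # multiply the current polynomial by (x - r)
        nxt = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i + 1] = (nxt[i + 1] + c) % p   # c * x
            nxt[i] = (nxt[i] - c * r) % p       # c * (-r)
        coeffs = nxt
    return coeffs                     # degree == number of roots

def eval_poly(coeffs, x, p=P):
    acc = 0
    for c in reversed(coeffs):        # Horner evaluation
        acc = (acc * x + c) % p
    return acc

z = vanishing_polynomial([2, 5, 11])  # degree 3, vanishes at 2, 5, 11
```

The degree of the result tracks the number of roots supplied, which is exactly the property the fixed 256-slot array destroys.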
```nim
  let dividendDeg = N - 1
  let divisorDeg = M - 1
  var quotientDeg = dividendDeg - divisorDeg

  if quotientDeg < 0:
    quotientDeg = 0

  var invDivisorLead {.noInit.}: Field
  invDivisorLead.inv_vartime(divisor.coefs[divisorDeg])

  for i in countdown(quotientDeg, 0):
    var coef {.noInit.}: Field
    coef.prod(working.coefs[i + divisorDeg], invDivisorLead)
```
Guard polyDiv against a divisor degree exceeding the dividend's.
When M > N, quotientDeg is clamped to 0, but Line 163 still reads working.coefs[i + divisorDeg], which is out of bounds on the first iteration.
Suggested fix
```diff
   let dividendDeg = N - 1
   let divisorDeg = M - 1
+  if divisorDeg > dividendDeg:
+    for i in 0 ..< N:
+      quotient.coefs[i].setZero()
+    return
+
   var quotientDeg = dividendDeg - divisorDeg
-
-  if quotientDeg < 0:
-    quotientDeg = 0
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```nim
  let dividendDeg = N - 1
  let divisorDeg = M - 1
  if divisorDeg > dividendDeg:
    for i in 0 ..< N:
      quotient.coefs[i].setZero()
    return

  var quotientDeg = dividendDeg - divisorDeg

  var invDivisorLead {.noInit.}: Field
  invDivisorLead.inv_vartime(divisor.coefs[divisorDeg])

  for i in countdown(quotientDeg, 0):
    var coef {.noInit.}: Field
    coef.prod(working.coefs[i + divisorDeg], invDivisorLead)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@constantine/math/polynomials/polynomials.nim` around lines 151 - 163, The
loop in polyDiv assumes divisorDeg <= dividendDeg; when M > N (divisorDeg >
dividendDeg) the expression working.coefs[i + divisorDeg] is out-of-bounds. Fix
by guarding that case before computing invDivisorLead/entering the countdown: if
divisorDeg > dividendDeg then set the quotient to zero (or return a zero
quotient and the original dividend as remainder) and skip the loop, otherwise
proceed to compute invDivisorLead and run the countdown using quotientDeg.
Ensure you reference quotientDeg, dividendDeg, divisorDeg, working,
invDivisorLead and the surrounding polyDiv routine when making the change.
```nim
import
  std/algorithm,
  constantine/named/[algebras],
  constantine/math/[arithmetic, ec_shortweierstrass],
  constantine/math/polynomials/[polynomials, fft],
  constantine/platforms/views
```
Missing import for heap allocation functions.
The module uses allocHeapArrayAligned and freeHeapAligned but only imports views from constantine/platforms. The allocs module needs to be imported.
🐛 Proposed fix
```diff
 import
   std/algorithm,
   constantine/named/[algebras],
   constantine/math/[arithmetic, ec_shortweierstrass],
   constantine/math/polynomials/[polynomials, fft],
-  constantine/platforms/views
+  constantine/platforms/[views, allocs]
```
+ constantine/platforms/[views, allocs]🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@data_availability_sampling/eth_peerdas.nim` around lines 9 - 14, The file
imports constantine/platforms/views but uses heap allocation functions
allocHeapArrayAligned and freeHeapAligned; add the missing import for the allocs
module so those symbols are available (import constantine/platforms/allocs or
include allocs in the existing constantine/platforms import list) and ensure the
code references to allocHeapArrayAligned and freeHeapAligned resolve to that
module.
```nim
  var ext_times_zero_coeffs = allocHeapArrayAligned(Fr[Name], ext_size, alignment = 64)
  let ifft_status = ifft_rn(fft_desc, ext_times_zero_coeffs.toOpenArray(ext_size), extended_times_zero.toOpenArray(ext_size))
  doAssert ifft_status == FFT_Success
```
Same toOpenArray issue - needs start and end indices.
🐛 Proposed fix
```diff
 var ext_times_zero_coeffs = allocHeapArrayAligned(Fr[Name], ext_size, alignment = 64)
-let ifft_status = ifft_rn(fft_desc, ext_times_zero_coeffs.toOpenArray(ext_size), extended_times_zero.toOpenArray(ext_size))
+let ifft_status = ifft_rn(fft_desc, ext_times_zero_coeffs.toOpenArray(0, ext_size-1), extended_times_zero.toOpenArray(0, ext_size-1))
 doAssert ifft_status == FFT_Success
```
doAssert ifft_status == FFT_Success📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```nim
  var ext_times_zero_coeffs = allocHeapArrayAligned(Fr[Name], ext_size, alignment = 64)
  let ifft_status = ifft_rn(fft_desc, ext_times_zero_coeffs.toOpenArray(0, ext_size-1), extended_times_zero.toOpenArray(0, ext_size-1))
  doAssert ifft_status == FFT_Success
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@data_availability_sampling/eth_peerdas.nim` around lines 178 - 180, The call
sites to toOpenArray on ext_times_zero_coeffs and extended_times_zero are
missing the required start/end indices; update both usages in the ifft_rn call
to pass explicit bounds (e.g., ext_times_zero_coeffs.toOpenArray(0, ext_size-1)
and extended_times_zero.toOpenArray(0, ext_size-1)) so the slices have correct
inclusive indices before calling ifft_rn(fft_desc, ..., ...); leave the doAssert
ifft_status == FFT_Success check as-is.
> In the *blob’s Lagrange Basis*, we carefully re-order the group of roots of unity of order 8192 in *reverse bit ordering*. This gives us the useful property that any power-of-two-aligned subset is a subgroup (or a coset of a subgroup) of the multiplicative group of powers of \omega.
>
> As an example, consider the group of roots of unity of order 8192: \{1, \omega, \omega^2, \omega^3, ..., \omega^{8191}\}. We use indices to denote the ordering in *reverse bit order*: \{\omega_0, \omega_1, \omega_2, \omega_3, ..., \omega_{8191}\} = \{1, \omega^{4096}, \omega^{2048}, \omega^{6072}, ..., \omega^{8191}\}. Then \Omega = \{\omega_0, \omega_1, \omega_2, ..., \omega_{15}\} is a subgroup of order 16, and for each 0 <= i < 512, we can use the *shifting factor* h_i = \omega_{16i} to construct the coset H_i = h_i \Omega.
Fix the reverse-bit-order worked example.
For an 8192-point domain, the 13-bit reversal of index 3 is 6144, not 6072. This example is used to motivate the coset layout later, so the current number will mislead anyone checking the permutation by hand.
Suggested correction
```diff
-... = {1, \omega^{4096}, \omega^{2048}, \omega^{6072}, ..., \omega^{8191}}
+... = {1, \omega^{4096}, \omega^{2048}, \omega^{6144}, ..., \omega^{8191}}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
> In the *blob's Lagrange Basis*, we carefully re-order the group of roots of unity of order 8192 in *reverse bit ordering*. This gives us the useful property that any power-of-two-aligned subset is a subgroup (or a coset of a subgroup) of the multiplicative group of powers of \omega.
>
> As an example, consider the group of roots of unity of order 8192: \{1, \omega, \omega^2, \omega^3, ..., \omega^{8191}\}. We use indices to denote the ordering in *reverse bit order*: \{\omega_0, \omega_1, \omega_2, \omega_3, ..., \omega_{8191}\} = \{1, \omega^{4096}, \omega^{2048}, \omega^{6144}, ..., \omega^{8191}\}. Then \Omega = \{\omega_0, \omega_1, \omega_2, ..., \omega_{15}\} is a subgroup of order 16, and for each 0 <= i < 512, we can use the *shifting factor* h_i = \omega_{16i} to construct the coset H_i = h_i \Omega.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@papers/ethereum/KDF22` -
universal_verification_equation_for_data_availability_samping.md around lines 62
- 65, The reverse-bit-order example is incorrect: for the 8192-point domain the
13-bit reversal of index 3 is 6144 (not 6072); update the worked example in the
paragraph that defines \{\omega_0, \omega_1, ...\} so the mapping shows \omega_3
= \omega^{6144} (and adjust the example sequence accordingly), and verify any
downstream references to \Omega, h_i, or H_i that rely on that explicit sample
index are consistent with the corrected reverse-bit permutation.
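The corrected value can be double-checked with a few lines of Python implementing the standard bit-reversal permutation on 13-bit indices:

```python
# Reverse-bit-order check for the 8192-point domain (13 bits):
# the exponent of omega_i is bit_reverse(i, 13). For i = 3 (binary
# 0000000000011) the reversal is 1100000000000 = 6144, confirming
# the correction to the worked example.

def bit_reverse(i, bits):
    out = 0
    for _ in range(bits):
        out = (out << 1) | (i & 1)
        i >>= 1
    return out

assert bit_reverse(0, 13) == 0       # omega_0 = omega^0 = 1
assert bit_reverse(1, 13) == 4096    # omega_1 = omega^4096
assert bit_reverse(2, 13) == 2048    # omega_2 = omega^2048
assert bit_reverse(3, 13) == 6144    # omega_3 = omega^6144 (not 6072)
```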
> The answer is, as before, the elements in the first columns of $C_8$, which are:
>
> \[\vec{a_8}=[ a_0, a_1, a_2, a_3, a_0, a_{-3}, a_{-2}, a_{-1} ]^T\]
>
> Thus, applying the algorithm for circulant matrices from before, what we must do is:
>
> - Build $\vec{a_{2n}}$ from the entries \(\{a_{n-1}, a_{n-2}, \dots, a_1, a_0, a_{-1}, \dots, a_{-(n-1)}\}\) of the Toeplitz matrix $T_n$
> - Compute $\vec{y}$ by doing a DFT on $[\vec{x}, \vec{0}]^T$
> - Compute $\vec{v}$ by doing a DFT on $\vec{a_{2n}}$ (e.g., on $\vec{a_8}$ from above)
> - Compute the Hadamard product $\vec{u}=\vec{v} \circ \vec{y}$
> - Do an inverse DFT on $\vec{u}$
> - The product $T_n \vec{x}$ consists of the first $n$ entries of the resulting vector
Make the \vec{a_{2n}} construction explicit.
The bullet at Line 134 does not match the \vec{a_8} example right above it: as written, it reads like the positive-index half is reversed and the duplicated a_0 is omitted. A reader implementing from this step can easily build the wrong circulant embedding. Spell out the ordered vector directly, e.g. [a_0, a_1, …, a_{n-1}, a_0, a_{-(n-1)}, …, a_{-1}]^T.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@papers/polynomial_commitments/T20` -
multiplying-a-vector-by-a-toeplitz-matrix.md around lines 130 - 139, Clarify and
correct the construction of the circulant embedding vector vec{a_{2n}} so it
matches the vec{a_8} example: explicitly state the ordering as vec{a_{2n}} =
[a_0, a_1, …, a_{n-1}, a_0, a_{-(n-1)}, …, a_{-1}]^T (so for n=4 vec{a_8} =
[a_0, a_1, a_2, a_3, a_0, a_{-3}, a_{-2}, a_{-1}]^T), and update the bullet that
builds vec{a_{2n}} from the Toeplitz entries so it uses this exact ordered
sequence.
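To see that the explicit ordering [a_0, a_1, …, a_{n-1}, a_0, a_{-(n-1)}, …, a_{-1}]^T is the right circulant embedding, here is a small Python check for n = 4 with made-up entries. It uses a naive cyclic convolution in place of the DFT → Hadamard → inverse-DFT pipeline, which computes the same result:

```python
n = 4
# Toeplitz entries a_{-(n-1)} .. a_{n-1}; concrete made-up values.
a = {k: 10 + k for k in range(-(n - 1), n)}   # a_k = 10 + k

# Toeplitz matrix T[i][j] = a_{i-j}
T = [[a[i - j] for j in range(n)] for i in range(n)]
x = [1, 2, 3, 4]
Tx = [sum(T[i][j] * x[j] for j in range(n)) for i in range(n)]

# Circulant embedding: first column of C_{2n}, with a duplicated a_0 as padding:
# [a_0, a_1, ..., a_{n-1}, a_0, a_{-(n-1)}, ..., a_{-1}]
c = [a[k] for k in range(n)] + [a[0]] + [a[k] for k in range(-(n - 1), 0)]
x2 = x + [0] * n                               # [x, 0] zero-padding

# Cyclic convolution, equal to DFT -> Hadamard product -> inverse DFT.
m = 2 * n
conv = [sum(c[(i - j) % m] * x2[j] for j in range(m)) for i in range(m)]

print(conv[:n] == Tx)  # True: the first n entries give T_n x
```

Any ordering that reverses the positive-index half or drops the duplicated a_0 makes the assertion fail, which is exactly the trap the review comment flags.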
```nim
let fft4 = FFTDescriptor[Fr[BLS12_381]].init(maxScale = 4)
for i in 0 ..< 4:
  echo "fft4[" & $i & "] = " & fft4.expandedRootsOfUnity[i].toHex()

echo "==================="

let fft8 = FFTDescriptor[Fr[BLS12_381]].init(maxScale = 8)
for i in 0 ..< 8:
  echo "fft8[" & $i & "] = " & fft8.expandedRootsOfUnity[i].toHex()

# when isMainModule:
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Verify the debug block is file-scope code and not guarded.
sed -n '189,210p' research/kzg/fft_fr.nim
```

Repository: mratsim/constantine
Length of output: 671
Move the root-dump block back under when isMainModule.
Lines 195–203 currently execute on every import, causing the module to emit debug output and perform descriptor setup as side effects whenever it is imported by other code.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@research/kzg/fft_fr.nim` around lines 195 - 205, The debug/root-dump block
that constructs FFTDescriptor[Fr[BLS12_381]] (fft4, fft8) and echoes
expandedRootsOfUnity is running on import; wrap or move that entire block under
a when isMainModule conditional so it only runs as a script. Specifically,
relocate the lines that create fft4 and fft8 and the for/echo loops (referencing
FFTDescriptor, fft4, fft8, and expandedRootsOfUnity) into a when isMainModule:
section (or add the conditional around them) so importing this module no longer
prints or performs setup as a side effect.
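The same guard pattern exists in most languages; Nim's `when isMainModule` is the compile-time analogue of Python's `if __name__ == "__main__":`. A minimal sketch (hypothetical names, complex roots standing in for field elements) of the import-silent structure the fix asks for:

```python
import cmath

def build_roots(n: int) -> list:
    # n-th roots of unity over the complex numbers, a stand-in for
    # expandedRootsOfUnity in the Nim module under review.
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

if __name__ == "__main__":
    # Runs only when executed as a script; importing the module stays silent.
    for i, w in enumerate(build_roots(4)):
        print(f"root[{i}] = {w.real:+.3f}{w.imag:+.3f}j")
```

Importing such a module performs no descriptor setup and emits no output, which is the behavior the review wants restored.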
```nim
let n = 32
let fftDesc = createFFTDescriptor(F, n)
let coset_shift = fftDesc.rootsOfUnity[1] ~^ uint32(CELLS_PER_EXT_BLOB)

var data = newSeq[F](n)
for i in 0 ..< n:
  data[i].fromUint(uint64(i + 1))

var coset_freq = newSeq[F](n)
let cosetDesc = CosetFFT_Descriptor[F].new(
  order = n, generatorRootOfUnity = fftDesc.rootsOfUnity[1], shift = coset_shift)
```
This “PeerDAS” shift collapses to the base subgroup.
With n = 32, fftDesc.rootsOfUnity[1] has order 32, so raising it to CELLS_PER_EXT_BLOB = 128 yields 1. That means this block is exercising another regular subgroup FFT roundtrip, not a true coset case.
One way to make this a real coset test:

```diff
 block:
   let n = 32
-  let fftDesc = createFFTDescriptor(F, n)
-  let coset_shift = fftDesc.rootsOfUnity[1] ~^ uint32(CELLS_PER_EXT_BLOB)
+  let extDesc = createFFTDescriptor(F, n * CELLS_PER_EXT_BLOB)
+  let subgroupRoot = extDesc.rootsOfUnity[CELLS_PER_EXT_BLOB]
+  let coset_shift = extDesc.rootsOfUnity[1]
   var data = newSeq[F](n)
   for i in 0 ..< n:
     data[i].fromUint(uint64(i + 1))
   var coset_freq = newSeq[F](n)
   let cosetDesc = CosetFFT_Descriptor[F].new(
-    order = n, generatorRootOfUnity = fftDesc.rootsOfUnity[1], shift = coset_shift)
+    order = n, generatorRootOfUnity = subgroupRoot, shift = coset_shift)
   ...
-  fftDesc.delete()
+  extDesc.delete()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/math_polynomials/t_fft_coset.nim` around lines 93 - 103, The coset
shift currently computes coset_shift = fftDesc.rootsOfUnity[1] ~^
uint32(CELLS_PER_EXT_BLOB) which collapses to 1 because CELLS_PER_EXT_BLOB is a
multiple of the root order (n=32); update the shift so it is not in the base
subgroup (e.g. use the exponent modulo the order or pick a different root
index). Concretely, change the shift computation to use CELLS_PER_EXT_BLOB % n
(or pick fftDesc.rootsOfUnity[3] instead of [1]) and pass that value into
CosetFFT_Descriptor.new(…, shift = coset_shift) so the test exercises a true
nontrivial coset rather than the base subgroup.
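The collapse is elementary group theory: if g has order n and n divides the exponent e, then g^e = 1. A toy reproduction over the prime field Z_97 (a hypothetical stand-in for the BLS12-381 scalar field, chosen because 32 divides 96):

```python
p = 97                        # small prime with 32 | (p - 1)
g = pow(5, (p - 1) // 32, p)  # element of order 32 (5 generates Z_97*)
assert pow(g, 32, p) == 1 and pow(g, 16, p) != 1  # order really is 32

CELLS_PER_EXT_BLOB = 128
shift = pow(g, CELLS_PER_EXT_BLOB, p)
print(shift)  # 1 -- the "coset shift" lands back in the subgroup

# The collapse happens precisely because the exponent is a multiple of
# the root's order, so the "coset" is the base subgroup itself:
assert CELLS_PER_EXT_BLOB % 32 == 0
# A nontrivial coset needs a shift outside the order-32 subgroup, e.g.
# a root taken from a larger 2-adic domain, as the suggested fix above does.
```

The same arithmetic applies verbatim with n = 32 roots of unity in Fr[BLS12_381]: 128 is a multiple of 32, so the computed shift degenerates to the identity.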
```nim
proc testECFFTRoundtrip*(EC: typedesc, F: typedesc[Fr]) =
  echo "Testing EC FFT/IFFT roundtrip..."

  for scale in 2 .. 5:
    let order = 1 shl scale

    let fftDesc = createFFTDescriptor(EC, F, order)

    var data = newSeq[EC](order)
    data[0].setGenerator()
    let gen = EC.F.Name.getGenerator($EC.G)
    for i in 1 ..< order:
      data[i].mixedSum(data[i-1], gen)

    var freq = newSeq[EC](order)
    let fftOk = fft_vartime(fftDesc, freq, data)
    doAssert fftOk == FFT_Success

    var recovered = newSeq[EC](order)
    let ifftOk = ec_ifft_rn(fftDesc, recovered, freq)
    doAssert ifftOk == FFT_Success

    for i in 0 ..< order:
      doAssert (recovered[i] == data[i]).bool,
        "EC Roundtrip failed at size " & $order & " index " & $i

    fftDesc.delete()

  echo "  ✓ All EC FFT/IFFT roundtrip tests PASSED"
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Verify if fft_vartime and ec_fft_nr are equivalent or if there's a reason for the asymmetry
rg -n "proc fft_vartime|proc ec_fft_nr" --type nim
```

Repository: mratsim/constantine
Length of output: 45

🏁 Script executed:

```bash
# Search more broadly for fft_vartime and related functions
rg "fft_vartime|ec_ifft_rn" --type nim -B 2 -A 2
```

Repository: mratsim/constantine
Length of output: 3356

🏁 Script executed:

```bash
# Also search for EC FFT function definitions
rg "proc.*fft.*EC|proc ec.*fft" --type nim | head -20
```

Repository: mratsim/constantine
Length of output: 45

🏁 Script executed:

```bash
# Check the imports in the test file to understand available functions
head -30 tests/math_polynomials/t_fft.nim
```

Repository: mratsim/constantine
Length of output: 1225

🏁 Script executed:

```bash
# Search for EC FFT forward function definitions in constantine/math/polynomials/fft.nim
rg "proc ec.*fft|func ec.*fft" constantine/math/polynomials/fft.nim -A 3
```

Repository: mratsim/constantine
Length of output: 406

🏁 Script executed:

```bash
# Check what the test file imports and uses specifically
sed -n '1,50p' tests/math_polynomials/t_fft.nim
```

Repository: mratsim/constantine
Length of output: 1777

🏁 Script executed:

```bash
# Search for all EC FFT functions available
rg "proc.*\*\[EC\]|func.*\*\[EC\]" constantine/math/polynomials/fft.nim | grep -i fft
```

Repository: mratsim/constantine
Length of output: 155

🏁 Script executed:

```bash
# Check if fft_vartime is imported anywhere in the test file
cat tests/math_polynomials/t_fft.nim | grep -i "import\|fft_vartime"
```

Repository: mratsim/constantine
Length of output: 118

🏁 Script executed:

```bash
# Check what's in the fft module that's imported
grep "^func\|^proc" constantine/math/polynomials/fft.nim | grep -i "fft\|ifft"
```

Repository: mratsim/constantine
Length of output: 956

🏁 Script executed:

```bash
# Get the full test file to see all imports and the actual function call
sed -n '1,160p' tests/math_polynomials/t_fft.nim
```

Repository: mratsim/constantine
Length of output: 4929

🏁 Script executed:

```bash
# Search for fft_vartime in research/kzg directory
rg "fft_vartime" research/kzg/ -B 5 -A 5 | head -40
```

Repository: mratsim/constantine
Length of output: 2150
Use ec_fft_nr instead of fft_vartime and test EC FFT at size 2.
Line 142 calls fft_vartime, which is not available from the imported constantine/math/polynomials/fft module. The correct EC FFT forward function is ec_fft_nr, which pairs with the already-used ec_ifft_rn on line 146.
Also, the EC FFT roundtrip test starts at scale=2 (size 4), while the Fr FFT test starts at scale=1 (size 2). The test file's header comment explicitly notes that size 2 (with ω=-1, the 2nd root of unity) is critical for FK20 multi-proof verification. Begin the EC test at scale=1 to ensure the IFFT correctly inverts the FFT for all root orders.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/math_polynomials/t_fft.nim` around lines 127 - 155, In
testECFFTRoundtrip replace the unavailable fft_vartime call with the correct EC
forward FFT ec_fft_nr so it pairs with the existing ec_ifft_rn inverse, and
adjust the scale loop to start at 1 (for size 2) instead of 2; specifically,
edit the testECFFTRoundtrip procedure to use ec_fft_nr(fftDesc, freq, data) in
place of fft_vartime(...) and change "for scale in 2 .. 5:" to "for scale in 1
.. 5:" so the EC FFT/IFFT roundtrip covers the size-2 case.
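Why order 2 matters can be illustrated with a toy number-theoretic transform over Z_97 (a hypothetical stand-in for Fr[BLS12_381]): at order 2 the root of unity is -1 mod p, the degenerate case the test header calls out, and the roundtrip below covers it along with orders 4 through 32.

```python
p = 97  # 96 = 2^5 * 3, so subgroups of every order 2..32 exist
g = 5   # 5 generates the multiplicative group mod 97

def ntt(data, root):
    # Naive O(n^2) number-theoretic transform.
    n = len(data)
    return [sum(data[j] * pow(root, i * j, p) for j in range(n)) % p
            for i in range(n)]

for scale in range(1, 6):  # start at scale 1: order 2, where the root is -1
    order = 1 << scale
    w = pow(g, (p - 1) // order, p)
    if order == 2:
        assert w == p - 1  # the 2nd root of unity is -1 mod p
    data = [(i * 7 + 3) % p for i in range(order)]
    freq = ntt(data, w)
    inv_w = pow(w, p - 2, p)      # w^{-1} via Fermat's little theorem
    inv_n = pow(order, p - 2, p)  # n^{-1}
    back = [(v * inv_n) % p for v in ntt(freq, inv_w)]
    assert back == data, f"roundtrip failed at order {order}"
print("all roundtrips passed")
```

Starting the loop at scale = 2 would silently skip the order-2 branch, which is the gap the review asks the EC test to close.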
/gemini summary
/gemini review
Thanks for the detailed summary, @greptile-apps[bot]! I appreciate the thoroughness and the identification of critical bugs and areas for improvement. I'll address these points in the coming commits.

And thanks for running the summary and review, @mratsim! I'll be sure to take a close look at the potentially related PRs you've identified.
You're welcome! Happy to help. Feel free to tag me if you need another look after making those changes, or if you have questions about any of the findings. Good luck with the implementation! 🚀
- Add research papers
- Fix workspace duplication from #592
FFT and polynomial refactorings are living in this commit 33a09af, from this branch: https://github.com/mratsim/constantine/compare/peerdas-review02-squashed
This PR implements the cryptographic backend for PeerDAS, Ethereum Data Availability Sampling for its consensus layer.
Spec: https://github.com/ethereum/consensus-specs/blob/v1.6.1/specs/fulu/polynomial-commitments-sampling.md
Formalization: https://eprint.iacr.org/2024/1362.pdf
Status of the spec public methods:

- `compute_cells` is implemented and tested against reference vectors
- `compute_cells_and_kzg_proofs` is implemented and tested against reference vectors
- `recover_cells_and_kzg_proofs` is implemented and tested against reference vectors
- `verify_cell_kzg_proof_batch` is pending

Remaining work in subsequent PRs in critical path:

- `verify_cell_kzg_proof_batch`

This is currently a draft PR for initial review. Some refactoring is pending, for example the FFT functions are currently computing roots of unity / descriptors on the fly instead of using the ones stored in the Ethereum context / trusted setup.
Content:
Optional extensions:
This work has been sponsored by the Ethereum Foundation under FY25-2229
Summary by CodeRabbit
New Features
Documentation