
[Draft] Implement EIP-7594 PeerDAS#592

Draft
mratsim wants to merge 4 commits into master from eip7594-peerdas

Conversation

@mratsim
Owner

@mratsim mratsim commented Mar 19, 2026

This PR implements the cryptographic backend for PeerDAS, Ethereum's Data Availability Sampling scheme for the consensus layer.

Spec: https://github.com/ethereum/consensus-specs/blob/v1.6.1/specs/fulu/polynomial-commitments-sampling.md
Formalization: https://eprint.iacr.org/2024/1362.pdf

Status of the spec's public methods:

Remaining work for subsequent PRs on the critical path:

  • verification via verify_cell_kzg_proof_batch
  • API: C, Go, Nim, Rust
  • Multithreading

This is currently a draft PR for initial review. Some refactoring is pending; for example, the FFT functions currently compute roots of unity / descriptors on the fly instead of using the ones stored in the Ethereum context / trusted setup.
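For reference, the roots of unity in question can be derived directly from the BLS12-381 scalar-field constants; a minimal Python sketch (the modulus and generator 7 are the standard Fr parameters; the helper name is illustrative, not Constantine's API):

```python
# BLS12-381 scalar-field modulus r (standard constant). The multiplicative
# group of Fr is generated by 7, and r - 1 has 2-adicity 32, so power-of-two
# FFT domains up to size 2^32 exist.
R = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def primitive_root_of_unity(order: int) -> int:
    """A primitive `order`-th root of unity in Fr (order must divide r - 1)."""
    assert (R - 1) % order == 0
    return pow(7, (R - 1) // order, R)

# PeerDAS extends the 4096-element blob domain to 8192 elements.
w = primitive_root_of_unity(8192)
assert pow(w, 8192, R) == 1    # w has order dividing 8192 ...
assert pow(w, 4096, R) != 1    # ... and exactly 8192 (primitive)
```

Caching such values in the trusted-setup context, as the draft note suggests, avoids redoing this exponentiation per call.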

Content:

Optional extensions:

This work has been sponsored by the Ethereum Foundation under FY25-2229

Summary by CodeRabbit

  • New Features

  • Added EIP-7594 PeerDAS (Peer Data Availability Sampling) support with cell computation, recovery, and KZG proof generation.
    • Implemented KZG multiproof operations for efficient polynomial commitments over cosets.
    • Added Reed-Solomon erasure code recovery for polynomial reconstruction.
    • Expanded FFT operations for field elements and elliptic-curve points.
  • Documentation

    • Added comprehensive cryptographic specifications and research papers for polynomial commitments and PeerDAS.

@coderabbitai

coderabbitai bot commented Mar 19, 2026

📝 Walkthrough

Walkthrough

This PR introduces comprehensive Ethereum EIP-7594 PeerDAS (Peer Data Availability Sampling) support, adding KZG multiproof operations, polynomial recovery from erasure samples, extended FFT descriptors for coset domains, cell serialization, and complete workflows for computing, verifying, and recovering cells with cryptographic proofs. Includes supporting mathematical modules, test infrastructure, and technical documentation.

Changes

Cohort / File(s) Summary
PeerDAS Core Implementation
constantine/eth_eip7594_peerdas.nim, eth_eip7594_peerdas.nim, data_availability_sampling/eth_peerdas.nim, constantine/data_availability_sampling/eth_peerdas.nim
Main PeerDAS API layer: cell serialization (bytesToBlsField, cellToCosetEvals, cosetEvalsToCell), blob-to-cells conversion with optimized half-FFT, FK20 proof generation/recovery, and polynomial reconstruction from cell samples using FFT-based vanishing polynomials and coset division.
KZG Multiproof Functions
commitments/kzg_multiproofs.nim, constantine/commitments/kzg_multiproofs.nim
Naive O(n²) coset proof generation (kzg_coset_prove_naive), verification (kzg_coset_verify), and fast amortized FK20 provers via Toeplitz matrix-vector multiplication with FFT preprocessing (getTauExtFftArray, kzg_coset_prove).
Erasure Code Recovery
erasure_codes/recovery.nim, constantine/erasure_codes/recovery.nim
Reed–Solomon polynomial recovery from partial samples with coset-shift scaling, vanishing polynomial products, and pointwise division. Handles ≥50% availability threshold and returns detailed recovery status codes.
Vanishing Polynomial
erasure_codes/zero_polynomial.nim, constantine/erasure_codes/zero_polynomial.nim
Construction of vanishing polynomials for missing roots via synthetic multiplication, index-to-root mapping with compile-time/runtime variants, and efficient domain-wide evaluation via FFT.
FFT Extensions & Coset Operations
constantine/math/polynomials/fft.nim
Major FFT refactoring: renamed status enum (FFTS_*FFT_*), added field-element FFT descriptors (FrFFT_Descriptor, CosetFFT_Descriptor), implemented generic field FFT/IFFT (fft_nr, ifft_rn, coset_fft_nr, coset_ifft_rn), added destructors for automatic cleanup, renamed EC FFT entrypoints (ec_fft_nr, ec_ifft_rn), and extended bit-reversal to out-of-place variant.
Polynomial Operations
constantine/math/polynomials/polynomials.nim
Added schoolbook polynomial long division (polyDiv), coefficient-form conversion from evaluation form (computeCoefPoly) using FFT descriptors, and imports for algebra/FFT modules.
Toeplitz Matrix Operations
constantine/math/matrix/toeplitz.nim, math/matrix/toeplitz.nim
FFT-based Toeplitz matrix-vector multiplication: circulant embedding validation (checkCirculant, makeCirculantMatrix), pre-FFT multiplication (toeplitzMatVecMulPreFFT with optional accumulation), and top-level routine (toeplitzMatVecMul).
Pairing Verification Helpers
constantine/math/pairings/pairings_generic.nim
New pairing_check overloads for mixed affine/Jacobian point types in both G1/G2, replacing manual pairing computation + isOne() pattern.
KZG API & SRS Updates
constantine/commitments/kzg.nim, constantine/commitments_setups/ethereum_kzg_srs.nim
Added kzg_commit overload for monomial-form polynomials, simplified kzg_verify to use pairing_check, extended EthereumKZGContext with monomial SRS (srs_monomial_g1), renamed Lagrange fields to srs_lagrange_brp_g1/domain_brp, added PeerDAS FFT descriptors (ecfft_desc_ext, fft_desc_ext), and exported EIP-7594 sizing constants.
Parallel KZG Operations
constantine/commitments/kzg_parallel.nim, constantine/ethereum_eip4844_kzg_parallel.nim
Updated batch verification to use pairing_check instead of manual GT pairing, switched to BRP (bit-reversal permuted) SRS parameters throughout parallel pipelines.
EIP-4844 Integration
constantine/ethereum_eip4844_kzg.nim
Exported helper functions (bytes_to_bls_field, blob_to_field_polynomial, FIELD_ELEMENTS_PER_BLOB, size constants), switched commitment/proof/verification operations to BRP SRS fields.
Finite Field & EC Utilities
constantine/math/arithmetic/finite_fields.nim, constantine/math/elliptic/ec_shortweierstrass_batch_ops.nim
Fixed out-of-place exponentiation wrapper (pow_vartime), added vartime operator for unsigned exponents (~^), and added batch affine-to-Jacobian/Projective conversion functions.
Assembly & Threadpool Cleanup
constantine/math/arithmetic/assembly/limbs_asm_mul_mont_arm64.nim, constantine/math/arithmetic/assembly/limbs_asm_redc_mont_arm64.nim, constantine/threadpool/instrumentation.nim, constantine/threadpool/primitives/topology_windows.nim, constantine/named/zoo_endomorphisms.nim
Removed compile-time size assertions in Mont multiplication, updated threadpool address-to-hex casting (ByteAddress → int), and added arithmetic imports.
Serialization & Module Exports
constantine/serialization/codecs_bls12_381.nim
Re-exported codec status enums (CttCodecScalarStatus, CttCodecEccStatus).
Signature Verification
constantine/signatures/bls_signatures.nim
Replaced manual pairing equality checks with pairing_check helper.
Test Infrastructure
tests/testutils/eth_consensus_utils.nim, constantine.nimble
New shared test harness for Ethereum consensus vectors: getTrustedSetup() loader, walkTests iterator, testGen/testGenPar templates for vector-driven tests, YAML parsing helpers (parseAssign, parseAssignList), and registered new PeerDAS test suite.
Comprehensive Test Suites
tests/commitments/trusted_setup_generator.nim, tests/commitments/t_kzg_multiproofs.nim, tests/math_polynomials/fft_utils.nim, tests/math_polynomials/t_fft.nim, tests/math_polynomials/t_fft_coset.nim, tests/math_matrix/t_toeplitz.nim, tests/erasure_codes/t_recovery.nim, tests/erasure_codes/t_zero_polynomial.nim, tests/eth_eip7594_peerdas/t_*.nim, tests/data_availability_sampling/t_peerdas_recovery.nim, tests/t_*.nim
Trusted setup generation (gen_setup), FK20 single/multi-proof validation, coset FFT roundtrips, Toeplitz matrix-vector multiplication, erasure code recovery with 50%+ threshold, vanishing polynomial correctness, cell computation/recovery/proof workflows against YAML vectors, and refactored EIP-4844 test suites to use shared utilities.
Documentation & Research
papers/, research/
Added EIP-4844/EIP-7594 specifications, PeerDAS technical papers, KZG multiproof theory, FK23 fast amortized proofs, Toeplitz-circulant FFT methods, Reed–Solomon erasure recovery algorithms, Python reference implementations, and roots-of-unity analysis for bit-reversal permutation.
Configuration
.gitignore
Added Python bytecode cache directory (__pycache__/) to ignore list.
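The Toeplitz-via-circulant trick summarized in the toeplitz.nim row above is easy to check in miniature. A hedged Python sketch over plain integers (function names are illustrative, not Constantine's API): an n×n Toeplitz matrix is embedded into a 2n×2n circulant, whose matrix-vector product is a cyclic convolution — the step that FK20 replaces with FFT → pointwise multiply → IFFT.

```python
def toeplitz_matvec(t, v):
    """Direct O(n^2) product T·v, where T[i][j] = t[i - j] and t maps
    offsets -(n-1)..(n-1) to entries (here: a dict)."""
    n = len(v)
    return [sum(t[i - j] * v[j] for j in range(n)) for i in range(n)]

def circulant_matvec(c, x):
    """C·x for the circulant matrix with first column c (cyclic convolution).
    This is the product FK20 computes via FFTs instead of directly."""
    m = len(c)
    return [sum(c[(i - j) % m] * x[j] for j in range(m)) for i in range(m)]

def toeplitz_via_circulant(t, v):
    """Embed the n×n Toeplitz matrix in a 2n×2n circulant, multiply the
    zero-padded vector, and keep the first n entries."""
    n = len(v)
    # First column of the circulant: t[0..n-1], a free slot, then t[-(n-1)..-1].
    c = [t[0]] + [t[k] for k in range(1, n)] + [0] \
        + [t[k - 2 * n] for k in range(n + 1, 2 * n)]
    return circulant_matvec(c, v + [0] * n)[:n]

t = {k: (3 * k + 7) % 11 for k in range(-3, 4)}  # arbitrary 4x4 Toeplitz entries
v = [1, 2, 3, 4]
assert toeplitz_via_circulant(t, v) == toeplitz_matvec(t, v)
```

The naive cyclic convolution here costs O(n²); swapping it for an FFT of length 2n gives the O(n log n) behaviour the PR's toeplitzMatVecMulPreFFT exploits.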

Sequence Diagram(s)

sequenceDiagram
    participant App as Application
    participant Cell as Cell Computation
    participant FFT as FFT/Coset Ops
    participant KZG as KZG Proofs
    participant Verify as Verification

    App->>Cell: compute_cells(blob)
    Cell->>FFT: blob_to_field_polynomial()
    FFT->>FFT: coset_ifft_rn() for coefficients
    Cell->>FFT: coset_fft_nr() half per cell
    FFT-->>Cell: cell evaluations
    Cell-->>App: Cell[] bytes

    App->>KZG: compute_cells_and_kzg_proofs(blob)
    KZG->>Cell: compute_cells(blob)
    Cell-->>KZG: Cell[]
    KZG->>KZG: getTauExtFftArray() precompute
    KZG->>KZG: kzg_coset_prove() FK20 proofs
    KZG-->>App: Cell[], KZGProof[]

    App->>Verify: recover_cells_and_kzg_proofs(cell_indices, cells)
    Verify->>FFT: recoverPolynomialCoeff() reconstruct
    FFT->>FFT: vanishingPolynomial() missing cells
    FFT->>FFT: coset division recover evaluations
    FFT-->>Verify: polynomial coefficients
    Verify->>KZG: kzg_coset_prove() proofs on recovered
    KZG-->>Verify: recovered Cell[], KZGProof[]
    Verify-->>App: recovery result
sequenceDiagram
    participant Caller
    participant Poly as Polynomial Ops
    participant FFT as FFT Engine
    participant Vanish as Vanishing Poly
    participant Recover as Recovery

    Caller->>Recover: recoverPolyFromSamples(samples, domain)
    Recover->>Recover: validate inputs (50% threshold)
    Recover->>Vanish: vanishingPolynomial(missing_roots)
    Vanish->>Vanish: synthetic_multiply(x - root_i)
    Vanish-->>Recover: Z(x) coefficients
    Recover->>FFT: fft_nr(Z) get evaluations
    FFT-->>Recover: Z(ω^k) evaluations
    Recover->>FFT: (E·Z) product in evaluation form
    Recover->>FFT: ifft_rn(E·Z) back to coefficients
    FFT-->>Recover: (E·Z) coefficients
    Recover->>Poly: coset shift scale both (E·Z), Z
    Recover->>FFT: coset_fft_nr() shift to coset domain
    FFT-->>Recover: shifted evaluations
    Recover->>Recover: pointwise division E·Z / Z
    Recover->>FFT: coset_ifft_rn() unshift to original
    FFT-->>Recover: final polynomial coefficients
    Recover-->>Caller: recovered PolynomialCoef
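The recovery pipeline in the second diagram can be imitated end-to-end in a toy setting. A hedged Python sketch over a small prime field (helper names are ours): the real code divides E·Z by Z on a shifted coset to dodge the zeros of Z on the domain, while the toy version below divides exactly in coefficient form, which is equivalent here; Lagrange interpolation stands in for the IFFT.

```python
P = 17
DOMAIN = [pow(13, i, P) for i in range(4)]   # 13 has order 4 mod 17 -> [1, 13, 16, 4]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def poly_eval(c, x):
    return sum(v * pow(x, k, P) for k, v in enumerate(c)) % P

def vanishing(indices):
    """Z(x) = prod (x - DOMAIN[i]) over the missing indices."""
    z = [1]
    for i in indices:
        z = poly_mul(z, [(-DOMAIN[i]) % P, 1])
    return z

def interpolate(xs, ys):
    """Naive Lagrange interpolation mod P (stand-in for the IFFT step)."""
    coeffs = [0] * len(xs)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = [1], 1
        for j, xj in enumerate(xs):
            if j != i:
                num = poly_mul(num, [(-xj) % P, 1])
                den = den * (xi - xj) % P
        s = yi * pow(den, P - 2, P) % P
        coeffs = [(c + s * m) % P for c, m in zip(coeffs, num)]
    return coeffs

def poly_div_exact(num, den):
    """Exact division (stand-in for the coset-domain pointwise division)."""
    num, inv = num[:], pow(den[-1], P - 2, P)
    q = [0] * (len(num) - len(den) + 1)
    for k in range(len(q) - 1, -1, -1):
        q[k] = num[k + len(den) - 1] * inv % P
        for j, d in enumerate(den):
            num[k + j] = (num[k + j] - q[k] * d) % P
    assert all(c == 0 for c in num), "division was not exact"
    return q

D = [5, 3]                                 # data polynomial 5 + 3x, rate 1/2
evals = [poly_eval(D, w) for w in DOMAIN]  # "extended blob" evaluations
missing = [1, 3]                           # erase half the samples (the N/2 limit)
E = [0 if i in missing else e for i, e in enumerate(evals)]
Z = vanishing(missing)                     # vanishes exactly on the erased points
# E·Z and D·Z agree on the whole domain and deg(D·Z) < 4, so interpolating
# the products (E·Z)(w_i) recovers D·Z; exact division by Z then yields D.
DZ = interpolate(DOMAIN, [E[i] * poly_eval(Z, w) % P for i, w in enumerate(DOMAIN)])
recovered = poly_div_exact(DZ, Z)
assert recovered == D
```

With more than N/2 erasures, deg(D·Z) reaches N and the interpolation step no longer pins down D·Z, which is why the ≥50% availability threshold appears throughout the diagrams.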

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes


Possibly related PRs

  • Add EIP 7594 fixtures #590: Adds EIP-7594 test vectors (protocol_ethereum_eip7594_fulu_peerdas/*.yaml) that are directly consumed by the new PeerDAS test modules introduced in this PR.
  • Fix compilation warnings #587: Modifies constantine/math/polynomials/polynomials.nim (FFT/algebra imports and helpers) with overlapping changes to the same file in this PR.

Poem

🐰 Hark! A coset of proofs doth spring,
Where vanishing roots make FFTs sing,
From erasure's gaps, we cells recover—
PeerDAS samples, bit-reversed over!
With FK20's speed and Toeplitz grace,
Eight-thousand blobs find their place! ✨


@gemini-code-assist

Warning

Gemini encountered an error creating the summary. You can try again by commenting /gemini summary.

@greptile-apps

greptile-apps bot commented Mar 19, 2026

Greptile Summary

This PR implements the cryptographic backend for EIP-7594 PeerDAS, including FK20 multi-proofs, Reed-Solomon erasure-code recovery, coset FFTs, and the full public API (compute_cells, compute_cells_and_kzg_proofs, recover_cells_and_kzg_proofs). The overall architecture is sound and well-documented, mirroring the c-kzg-4844 reference. Several concrete bugs and one infrastructure issue require attention before the draft is ready to merge:

  • Critical logic bugs

    • evalVanishingPolynomial in zero_polynomial.nim passes a full 256-element zero-padded array to the static vanishingPolynomialForIndices, causing the resulting polynomial to incorrectly vanish at index 0 for every unused trailing entry.
    • toeplitzMatVecMul in toeplitz.nim allocates vExtFft but never uses or frees it — a guaranteed memory leak on every FK20 invocation path that exercises this helper.
    • recover_cells_and_kzg_proofs never validates that each cell index is < CELLS_PER_EXT_BLOB, allowing out-of-bounds access in downstream FFT buffers.
    • The fffft git submodule is recorded without a .gitmodules entry, making the repository uncloneable with submodule support.
  • Notable style/performance issues

    • odd_evals (a 128 KB stack array) in compute_cells is declared but never read or written.
    • getTauExtFftArray (64 EC-FFTs of size 128) is recomputed on every compute_cells_and_kzg_proofs and recover_cells_and_kzg_proofs call; this should be precomputed once and stored in EthereumKZGContext, as c-kzg-4844 does with x_ext_fft_columns.
    • FIELD_ELEMENTS_PER_CELL, CELLS_PER_BLOB, CELLS_PER_EXT_BLOB, and BYTES_PER_CELL are defined identically in both ethereum_kzg_srs.nim and eth_eip7594_peerdas.nim.
    • recoverPolyFromSamples has a hard 256-missing-sample cap that contradicts its own documentation of N/2 capacity.

Confidence Score: 2/5

  • Not safe to merge in current state — multiple correctness bugs exist, including a wrong vanishing polynomial, a memory leak in the core FK20 path, missing input validation, and a broken git submodule reference.
  • The overall design is solid and aligns well with the EIP-7594 spec and reference implementations. However, four P1 issues were found: a logically incorrect vanishing polynomial in evalVanishingPolynomial, a guaranteed memory leak in toeplitzMatVecMul, missing cell-index bounds validation in the public recovery API, and a git submodule without .gitmodules. These need to be fixed before the code can be considered production-ready. The PR is explicitly marked as draft, and the author acknowledges pending refactoring, so a lower confidence score is appropriate.
  • constantine/erasure_codes/zero_polynomial.nim (wrong vanishing polynomial), constantine/math/matrix/toeplitz.nim (memory leak), constantine/eth_eip7594_peerdas.nim (missing bounds check, dead variable, duplicate constants, expensive recomputation), and fffft (broken submodule reference).

Important Files Changed

Filename Overview
constantine/eth_eip7594_peerdas.nim Main PeerDAS public API: contains a dead variable (odd_evals, ~128 KB stack waste), missing cell-index bounds validation in recover_cells_and_kzg_proofs, duplicate constant definitions, and expensive getTauExtFftArray recomputed on every call.
constantine/erasure_codes/zero_polynomial.nim New vanishing polynomial module; evalVanishingPolynomial has a critical bug where it passes a full 256-element zero-padded array to the static-size vanishingPolynomialForIndices, causing the polynomial to also vanish at index 0 for all unused trailing entries.
constantine/math/matrix/toeplitz.nim New Toeplitz matrix-vector multiplication module; toeplitzMatVecMul allocates vExtFft but never uses or frees it, creating a guaranteed memory leak on every call. Core FK20 logic otherwise looks correct.
constantine/commitments/kzg_multiproofs.nim New FK20 multi-proof implementation; naive O(n²) and optimised O(n log n) provers, plus coset verifier. Logic is well-structured and aligns with the FK23 paper. No critical bugs found.
constantine/erasure_codes/recovery.nim New erasure-code recovery module; recoverPolyFromSamples has a hard 256-missing-sample cap that contradicts the documented N/2 capacity guarantee. PeerDAS itself uses a separate recovery path, so this doesn't affect the primary API.
constantine/math/polynomials/fft.nim Major refactor: introduces FrFFT_Descriptor and CosetFFT_Descriptor, renames fft_vartime/ifft_vartime to ec_fft_nr/ec_ifft_rn, adds fft_nr, fft_nn, ifft_rn, ifft_nn, coset_fft_nr, coset_ifft_rn, and an out-of-place COBRA bit-reversal permutation. Destructor-based memory management is a clear improvement.
constantine/commitments_setups/ethereum_kzg_srs.nim Adds srs_monomial_g1 and fft_desc_ext/ecfft_desc_ext to EthereumKZGContext, renames srs_lagrange_g1 to srs_lagrange_brp_g1 and domain to domain_brp for clarity, and parses the extended trusted-setup format with monomial G1 points for FK20.
constantine/data_availability_sampling/eth_peerdas.nim Generic (non-serialization) PeerDAS layer: coset primitives and recoverPolynomialCoeff. The vanishing polynomial construction here duplicates zero_polynomial.nim's logic rather than calling it, which is a maintenance concern. Missing newline at EOF.
fffft Git submodule reference added without a corresponding .gitmodules file — no URL or remote path is recorded, making the submodule unresolvable for anyone who clones the repository.

Sequence Diagram

sequenceDiagram
    participant Caller
    participant EIP7594 as eth_eip7594_peerdas
    participant PeerDAS as data_availability_sampling/eth_peerdas
    participant FK20 as kzg_multiproofs (FK20)
    participant FFT as math/polynomials/fft
    participant Toeplitz as math/matrix/toeplitz
    participant SRS as EthereumKZGContext (SRS)

    Note over Caller,SRS: compute_cells_and_kzg_proofs
    Caller->>EIP7594: compute_cells_and_kzg_proofs(blob)
    EIP7594->>EIP7594: blob_to_field_polynomial → poly_lagrange
    EIP7594->>FFT: ifft_nn (Lagrange→Monomial) → poly_monomial
    EIP7594->>EIP7594: compute_cells (half-FFT trick)
    EIP7594->>FK20: getTauExtFftArray(srs_monomial_g1, domainRoot)
    FK20->>FFT: ec_fft_nr × L (kernel FFTs)
    EIP7594->>FK20: kzg_coset_prove(tauExtFftArray, poly_monomial)
    loop offset 0..L-1
        FK20->>Toeplitz: makeCirculantMatrix
        FK20->>Toeplitz: toeplitzMatVecMulPreFFT
        Toeplitz->>FFT: fft_nr (Fr circulant)
        Toeplitz->>FFT: ec_ifft_rn (EC IFFT)
    end
    FK20->>FFT: ec_fft_nr (final proof FFT)
    EIP7594-->>Caller: cells[], proofs[]

    Note over Caller,SRS: recover_cells_and_kzg_proofs
    Caller->>EIP7594: recover_cells_and_kzg_proofs(indices, cells)
    EIP7594->>PeerDAS: recoverPolynomialCoeff(indices, cosets, fft_desc_ext)
    PeerDAS->>FFT: fft_nr (zero_poly eval)
    PeerDAS->>FFT: ifft_rn (E·Z coeffs)
    PeerDAS->>FFT: coset_fft_nr (E·Z over coset)
    PeerDAS->>FFT: coset_fft_nr (Z over coset)
    PeerDAS->>PeerDAS: pointwise divide
    PeerDAS->>FFT: coset_ifft_rn (recover coeffs)
    PeerDAS-->>EIP7594: poly_coeff[2N]
    EIP7594->>FFT: fft_nr (eval all cells)
    EIP7594->>FK20: getTauExtFftArray + kzg_coset_prove
    EIP7594-->>Caller: recovered_cells[], recovered_proofs[]

Last reviewed commit: "Add research papers"

Comment on lines +248 to +250
## @return: FFTStatus indicating success or failure

let n = v.len

P1 Memory leak: vExtFft allocated but never freed

vExtFft is allocated on every call but is never used and never freed, leading to a guaranteed memory leak on each invocation of toeplitzMatVecMul. The actual FFT output buffer used is vExtFftSeq (allocated a few lines later). The vExtFft allocation should be removed entirely.

Suggested change
## @return: FFTStatus indicating success or failure
let n = v.len
let vExt = allocHeapArrayAligned(EC, n2, 64)
for i in 0 ..< n:

Comment on lines +222 to +227

var indices_arr: array[256, uint64]
doAssert indices.len <= 256
for i in 0 ..< indices.len:
indices_arr[i] = indices[i]


P1 Wrong vanishing polynomial: all 256 indices used instead of indices.len

evalVanishingPolynomial initialises indices_arr as array[256, uint64] (zero-initialised), then passes the full 256-element array to vanishingPolynomialForIndices. Because numIndices is the compile-time array length (256), the call iterates over all 256 entries, treating the zero-filled tail entries as index 0 — i.e., it adds (domain.rootsOfUnity[0] = 1) as an extra root (256 - indices.len) times. The resulting vanishing polynomial has completely wrong roots for any call where indices.len < 256.

Use the runtime version vanishingPolynomialForIndicesRT which correctly limits iteration to indices.len:

Suggested change
var indices_arr: array[256, uint64]
doAssert indices.len <= 256
for i in 0 ..< indices.len:
indices_arr[i] = indices[i]
var poly: PolynomialCoef[N, F]
vanishingPolynomialForIndicesRT(poly, indices, domain)
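The effect of the zero-padded tail described in this finding is easy to reproduce in a toy field. A hedged Python sketch (small prime field and helper names are ours, not the Nim code): padding the index list with zeros makes the domain's first root, ω⁰ = 1, a spurious root of the vanishing polynomial.

```python
P = 17
ROOTS = [pow(13, i, P) for i in range(4)]   # 13 has order 4 mod 17 -> [1, 13, 16, 4]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def vanishing_poly(indices):
    """Z(x) = prod (x - ROOTS[i]) over the given indices, coefficients mod P."""
    z = [1]
    for i in indices:
        z = poly_mul(z, [(-ROOTS[i]) % P, 1])
    return z

def eval_poly(c, x):
    return sum(v * pow(x, k, P) for k, v in enumerate(c)) % P

missing = [2]                                # only ROOTS[2] = 16 should be a root
padded = missing + [0] * (4 - len(missing))  # what the zero-padded array amounts to

assert eval_poly(vanishing_poly(missing), ROOTS[2]) == 0   # intended root
assert eval_poly(vanishing_poly(missing), ROOTS[0]) != 0   # 1 is NOT a root
assert eval_poly(vanishing_poly(padded), ROOTS[0]) == 0    # bug: 1 became a root
```

Any downstream step that assumes Z is nonzero at the present cells then divides by zero at index 0, which is why a runtime-length variant is needed.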

Comment on lines +353 to +356
for i in 1 ..< num_cells:
if uint64(cell_indices[i-1]) >= uint64(cell_indices[i]):
return cttEthKzg_InputsLengthsMismatch


P1 Missing bounds validation for cell indices

recover_cells_and_kzg_proofs validates that cell_indices is sorted and duplicate-free, but never checks that each index is within the valid range [0, CELLS_PER_EXT_BLOB). An index of, say, 200 would pass all current checks but later cause out-of-bounds memory access when used in recoverPolynomialCoeff (e.g., accessing fft_desc.rootsOfUnity[root_idx] with root_idx = 200 * stride).

Add a bounds check after the sort check:

Suggested change
for i in 1 ..< num_cells:
if uint64(cell_indices[i-1]) >= uint64(cell_indices[i]):
return cttEthKzg_InputsLengthsMismatch
# Check that input is sorted and has no duplicates
for i in 1 ..< num_cells:
if uint64(cell_indices[i-1]) >= uint64(cell_indices[i]):
return cttEthKzg_InputsLengthsMismatch
# Check that all cell indices are in range
for idx in cell_indices:
if uint64(idx) >= uint64(CELLS_PER_EXT_BLOB):
return cttEthKzg_InputsLengthsMismatch

poly_coef_nat.computeCoefPoly(poly_eval_brp, ctx.fft_desc_ext)

# Step 3b: Shift coefficients by w_8192^k
# w_8192 = primitive 8192nd root of unity (coset shift factor)

P2 Dead variable odd_evals wastes stack space

odd_evals is declared as a 4096-element Fr[BLS12_381] array (≈128 KB on the stack) but is never read or written. This is a significant unnecessary stack allocation and likely a leftover from an earlier draft.

Suggested change
# w_8192 = primitive 8192nd root of unity (coset shift factor)

Comment on lines +221 to +226
order = N,
generatorRootOfUnity = domain.rootsOfUnity[1]
)

var coeffs: array[N, F]
let ifft_status = ifft_rn(fft_desc, coeffs, all_samples)

P2 256-sample hard limit contradicts documented N/2 capacity

recoverPolyFromSamples documents support for "up to N - n = n(r - 1) erasures" (i.e. up to N/2 for r=2), but the implementation uses a fixed array[256, uint64] for missing_indices and returns Recovery_TooManyMissing when missing_count >= 256. For any N > 512, having between 256 and N/2 missing samples — which should be recoverable — will incorrectly return this error. The limit should either be raised (using heap allocation) or the documentation should explicitly state the 256-index cap.

Comment on lines +276 to +282
return cttEthKzg_ScalarZero
of cttCodecScalar_ScalarLargerThanCurveOrder:
return cttEthKzg_ScalarLargerThanCurveOrder

# Step 2: Convert to monomial form via IFFT (needed for FK20 proofs)
var poly_monomial: PolynomialCoef[FIELD_ELEMENTS_PER_BLOB, Fr[BLS12_381]]
poly_monomial.computeCoefPoly(poly_lagrange, ctx.fft_desc_ext)

P2 getTauExtFftArray recomputed on every call — very expensive

Both compute_cells_and_kzg_proofs and recover_cells_and_kzg_proofs call getTauExtFftArray, which performs L=64 EC-FFTs of size CDS=128 on every invocation. The function's own docstring notes "The result can be cached and reused for multiple polynomials." This precomputed table is structurally equivalent to s->x_ext_fft_columns in c-kzg-4844, which is computed once during trusted setup. Recomputing it per call makes the API orders of magnitude slower than the reference implementation for batch workloads. It should be stored in EthereumKZGContext and computed once during trusted_setup_load.

Comment on lines +56 to +59
## 5. **Recovery**: If ≥50% of cells available, recover all cells
##
## References:
## - EIP-7594: https://eips.ethereum.org/EIPS/eip-7594

P2 Duplicate constant definitions

FIELD_ELEMENTS_PER_CELL, CELLS_PER_BLOB, CELLS_PER_EXT_BLOB, and BYTES_PER_CELL are defined identically here and also in constantine/commitments_setups/ethereum_kzg_srs.nim. Keeping two independent definitions risks them diverging silently. The definitions in eth_eip7594_peerdas.nim should be removed in favour of importing them from the SRS module.

fffft Outdated
@@ -0,0 +1 @@
Subproject commit be8702c88e7564ae83d4e5327430927720c5dd49

P1 Git submodule added without .gitmodules metadata

fffft is tracked as a git submodule (mode 160000, commit be8702c) but no .gitmodules file exists in the repository. Without .gitmodules, there is no URL or path record for this submodule, so git submodule update --init will silently fail for anyone who clones the repo. Either add a .gitmodules entry (with url, path, and optionally branch fields), or remove this submodule reference if it is not yet ready to be included.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 18

Note

Due to the large number of review comments, those of Critical and Major severity were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
constantine/commitments_setups/ethereum_kzg_srs.nim (1)

273-281: ⚠️ Potential issue | 🟡 Minor

Potential logic error in fscanf validation: and should likely be or.

At lines 275, 290, and 306, the error check uses and but should likely use or:

if num_matches != 1 and charsRead != 2*g1Bytes:

This passes if either condition fails, but you probably want to fail if any condition is wrong. With and, if num_matches is wrong but charsRead is correct, the check passes erroneously.

Proposed fix
-      if num_matches != 1 and charsRead != 2*g1Bytes:
+      if num_matches != 1 or charsRead != 2*g1Bytes:

Apply similar fix at lines 290 and 306.

♻️ Duplicate comments (2)
constantine/math/matrix/toeplitz.nim (1)

262-264: ⚠️ Potential issue | 🟠 Major

Memory leak: vExtFft is allocated but never used or freed.

Same issue as noted previously - vExtFft is allocated but vExtFftSeq is used instead, causing a memory leak.

constantine/erasure_codes/recovery.nim (1)

276-291: ⚠️ Potential issue | 🟠 Major

Memory leak: zero_poly_eval is never freed.

Same issue as noted previously - zero_poly_eval allocated at line 276 is never freed.

🟡 Minor comments (10)
papers/polynomial_commitments/FK23 - 2023-033 - Fast amortized KZG proofs.md-423-423 (1)

423-423: ⚠️ Potential issue | 🟡 Minor

Fix heading level jumps (MD001).

The heading hierarchy skips a level at both locations, which breaks markdownlint and navigation structure.

🛠️ Proposed fix
-##### Construction
+#### Construction
...
-##### Construction
+#### Construction

Also applies to: 449-449

papers/ethereum/B18 - reed-solomon_erasure_recovery_with_ffts.md-16-16 (1)

16-16: ⚠️ Potential issue | 🟡 Minor

Math multiplication syntax is being parsed as emphasis (MD037).

Several * operators are interpreted as Markdown emphasis markers. Prefer $...$ math with \cdot (or inline code) to preserve rendering.

🛠️ Example fix pattern
-First of all, note that D * Z_{r,I} = E * Z_{r,I}.
+First, note that $D \cdot Z_{r,I} = E \cdot Z_{r,I}$.

Also applies to: 18-18, 24-24, 30-30, 39-39

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@papers/ethereum/B18` - reed-solomon_erasure_recovery_with_ffts.md at line 16,
The markdown is mis-parsing the multiplication operator * as emphasis in inline
math; update all occurrences like "D * Z_{r,I}", "E * Z_{r,I}", "d[i] *
Z_{r,I}(r^i)", and similar instances on the other noted lines to use proper
inline LaTeX math (e.g., $D \cdot Z_{r,I}$, $E \cdot Z_{r,I}$, $d[i] \cdot
Z_{r,I}(r^i)$) or wrap them in inline code, ensuring you replace the asterisk
with \cdot inside $...$ so the expressions render correctly and keep the same
identifiers (D, E, Z_{r,I}, d[i], r^i, E \cdot Z_{r,I}) for clarity.
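Independent of the Markdown rendering fix, the B18 identity itself is easy to verify numerically; a minimal Python sketch with toy parameters (p = 257 and an order-8 root of unity, not the BLS12-381 scalar field used by the actual code):

```python
# Toy check of D * Z_{r,I} == E * Z_{r,I} from the B18 erasure-recovery note.
# Parameters are illustrative: p = 257 (prime), r = 64 (order-8 root of unity mod p).
p, n, r = 257, 8, 64
assert pow(r, n, p) == 1 and pow(r, n // 2, p) != 1  # r has exact order 8

domain = [pow(r, i, p) for i in range(n)]
d = [5, 12, 7, 99, 3, 42, 11, 200]                   # original codeword D (evaluations)
erased = {2, 5}                                      # index set I of missing positions
e = [0 if i in erased else d[i] for i in range(n)]   # received word E (zeros at erasures)

def z_eval(x):
    # Z_{r,I}(x) = prod_{i in I} (x - r^i): vanishes exactly on the erased points
    out = 1
    for i in erased:
        out = out * (x - domain[i]) % p
    return out

# Pointwise products agree everywhere: wherever E differs from D, Z evaluates to zero.
lhs = [d[i] * z_eval(domain[i]) % p for i in range(n)]
rhs = [e[i] * z_eval(domain[i]) % p for i in range(n)]
assert lhs == rhs
print("D*Z == E*Z pointwise:", lhs == rhs)
```

This is exactly why the recovery algorithm can work with E: multiplying by the vanishing polynomial erases the difference between the received word and the true codeword.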
research/peerdas/roots_of_unity_check.py-3-3 (1)

3-3: ⚠️ Potential issue | 🟡 Minor

Fix domain size typo in header comment.

The comment says 80192; based on the script body this should be 8192.

🛠️ Proposed fix
-# are the first half of the roots of the 80192 domain
+# are the first half of the roots of the 8192 domain
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@research/peerdas/roots_of_unity_check.py` at line 3, Update the top-of-file
header comment that currently reads "are the first half of the roots of the
80192 domain" to correct the domain size typo to "8192" so the comment matches
the script logic; locate the header comment in
research/peerdas/roots_of_unity_check.py (the file-level comment at the top) and
replace 80192 with 8192.
papers/ethereum/B18 - reed-solomon_erasure_recovery_with_ffts.md-12-12 (1)

12-12: ⚠️ Potential issue | 🟡 Minor

Heading level should not skip (MD001).

Use an ## heading here to keep hierarchy valid.

🛠️ Proposed fix
-### Erasure code recovery in O(n*log^2(n)) time
+## Erasure code recovery in O(n*log^2(n)) time
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@papers/ethereum/B18` - reed-solomon_erasure_recovery_with_ffts.md at line 12,
The heading "Erasure code recovery in O(n*log^2(n)) time" is currently a
third-level header (###) and skips hierarchy; change it to a second-level header
(##) to maintain proper Markdown heading order—update the line containing that
header text to use '## Erasure code recovery in O(n*log^2(n)) time' so the
document follows MD001 rules.
papers/polynomial_commitments/F21 - PCS-multiproofs-using-random-evaluation.md-16-16 (1)

16-16: ⚠️ Potential issue | 🟡 Minor

Proofread the transcription before treating this as an implementation note.

There are a few sentence-level errors in math-heavy passages here (commiments, remans, we will the roots of unity…, allow all q_j…). In this kind of derivation, those read like notation mistakes rather than simple typos.

Also applies to: 106-106, 197-197, 279-279

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@papers/polynomial_commitments/F21` -
PCS-multiproofs-using-random-evaluation.md at line 16, Proofread and correct the
transcription errors in the document: fix spelling/notation mistakes such as
"commiments" -> "commitments", "remans" (likely "remains"), the phrase "we will
the roots of unity…" (reconstruct intended wording), and "allow all q_j…"
(clarify quantification/notation for q_j), and apply the same corrections at the
indicated other occurrences (lines referenced as 106, 197, 279) so mathematical
notation and sentence structure are unambiguous throughout the
KZG/additively-homomorphic discussion.
tests/eth_eip7594_peerdas/t_compute_cells_opt.nim-14-16 (1)

14-16: ⚠️ Potential issue | 🟡 Minor

Update the example command to the new test path.

The comment still points to tests/eth_eip4844_peerdas/..., so copy-pasting it will fail from the repo root.

Suggested fix
-##   nim c -r -d:release --hints:off --warnings:off --outdir:build/wip --nimcache:nimcache/wip tests/eth_eip4844_peerdas/t_compute_cells_opt.nim
+##   nim c -r -d:release --hints:off --warnings:off --outdir:build/wip --nimcache:nimcache/wip tests/eth_eip7594_peerdas/t_compute_cells_opt.nim
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/eth_eip7594_peerdas/t_compute_cells_opt.nim` around lines 14 - 16,
Update the example "## Run with" command string (the comment block starting with
"## Run with") to reference the new test directory and file name: replace the
old path segment "tests/eth_eip4844_peerdas/t_compute_cells_opt.nim" with
"tests/eth_eip7594_peerdas/t_compute_cells_opt.nim" so the copy-paste command
runs from the repo root.
papers/ethereum/WZ24 - 2024-1362 - A documentation of Ethereum PeerDAS.md-92-92 (1)

92-92: ⚠️ Potential issue | 🟡 Minor

Minor typo in mathematical notation.

"there exists a subset $\mathbb{H}_j$ of $\mathbb{H}$ of size $2^{t-j}$ that is a subgroup. In particular, we have $\mathbb{H}_j = {\omega^i \in \mathbb{H} \mid i \equiv 0 \mod 2^j}$. Note that $\nu = \omega^{2^j}$ is a $2^{t-j}$ that root of unity" - the stray "that" should be "-th", giving "$2^{t-j}$-th root of unity".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@papers/ethereum/WZ24` - 2024-1362 - A documentation of Ethereum PeerDAS.md at
line 92, Fix the minor typo in the sentence describing nu: change "ν = ω^{2^j}
is a 2^{t-j} that root of unity" to read "ν = ω^{2^j} is a 2^{t-j}-th root of
unity" so the phrase correctly states ν is the 2^{t-j}-th root of unity
(referencing ν, ω, and the exponent 2^{t-j} in the paragraph about ℍ_j and
cosets ℍ_{j,s}).
papers/ethereum/WZ24 - 2024-1362 - A documentation of Ethereum PeerDAS.md-18-18 (1)

18-18: ⚠️ Potential issue | 🟡 Minor

Minor grammatical error in abstract.

"about to integrated" should be "about to be integrated".

📝 Suggested fix
-As the next step towards this vision, an intermediate solution called PeerDAS is about to integrated, to bridge the way to the full protocol.
+As the next step towards this vision, an intermediate solution called PeerDAS is about to be integrated, to bridge the way to the full protocol.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@papers/ethereum/WZ24` - 2024-1362 - A documentation of Ethereum PeerDAS.md at
line 18, Fix the grammatical error in the abstract by replacing the phrase
"about to integrated" with "about to be integrated" in the sentence that
introduces PeerDAS (the sentence beginning "As the next step towards this
vision, an intermediate solution called PeerDAS..."); ensure the updated
sentence reads "...an intermediate solution called PeerDAS is about to be
integrated, to bridge the way to the full protocol."
tests/erasure_codes/t_zero_polynomial.nim-5-6 (1)

5-6: ⚠️ Potential issue | 🟡 Minor

Incomplete license header.

The license header lists the MIT license with the Apache URL and omits the Apache v2 option entirely. Other files in this PR include both MIT and Apache v2 licenses with their correct URLs.

📝 Suggested fix
-#   * MIT license (license terms in the root directory or at http://www.apache.org/licenses/LICENSE-2.0).
+#   * MIT license (license terms in the root directory or at http://opensource.org/licenses/MIT).
+#   * Apache v2 license (license terms in the root directory or at http://www.apache.org/licenses/LICENSE-2.0).
 # at your option. This file may not be copied, modified, or distributed except according to those terms.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/erasure_codes/t_zero_polynomial.nim` around lines 5 - 6, The header in
t_zero_polynomial.nim is incomplete; update it to the same dual-license
boilerplate used elsewhere in the PR by adding the MIT license option alongside
Apache-2.0 (e.g., a short top-of-file notice stating the file is licensed under
the MIT License or the Apache License, Version 2.0, with links and the standard
line about choosing at your option), ensuring the header text matches the other
files' exact wording and license links so the file's license attribution is
consistent.
eth_eip7594_peerdas.nim-241-244 (1)

241-244: ⚠️ Potential issue | 🟡 Minor

Dead code: odd_evals variable is declared but never used.

The variable odd_evals is declared at line 242 but the FFT writes directly to pHalfCells:

var odd_evals: array[N, Fr[BLS12_381]]  # Never used
let fft_status = ctx.fft_desc_ext.fft_nn(pHalfCells.toOpenArray(N), poly_coef_nat.coefs)
Proposed fix
   # Step 3d: FFT of shifted coefficients -> evaluations directly into cells 64-127
   # fft_nn: natural input -> natural output
   let pHalfCells = cast[ptr UncheckedArray[Fr[BLS12_381]]](cells_evals[HALF_CDS][0].addr)
-  var odd_evals: array[N, Fr[BLS12_381]]
   let fft_status = ctx.fft_desc_ext.fft_nn(pHalfCells.toOpenArray(N), poly_coef_nat.coefs)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@eth_eip7594_peerdas.nim` around lines 241 - 244, Remove the unused local
variable declaration `odd_evals` (array[N, Fr[BLS12_381]]) which is dead code;
the FFT result is written directly to `pHalfCells` via
`ctx.fft_desc_ext.fft_nn(pHalfCells.toOpenArray(N), poly_coef_nat.coefs)` and
the subsequent `fft_status` check remains, so delete the `var odd_evals` line to
eliminate the unused symbol and any related warnings.
🧹 Nitpick comments (17)
research/peerdas/roots_of_unity_check.py (1)

31-44: Add an explicit assertion for the stated BRP relation.

This script currently prints samples but does not actually fail when the claimed property is false.

🛠️ Proposed fix
 for N in [4, 8, 4096, 8192]:
@@
     print(f"  >>> {top} first bit-reversed roots of unity")
     for i in range(top):
         print(f"      - {hex(BRP_ROOTS_OF_UNITY[i])}")
+
+# Actual check for the note above:
+roots_4096 = [pow(pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // 4096, MODULUS), i, MODULUS) for i in range(4096)]
+roots_8192 = [pow(pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // 8192, MODULUS), i, MODULUS) for i in range(8192)]
+brp_4096 = bit_reversal_permutation(roots_4096)
+brp_8192 = bit_reversal_permutation(roots_8192)
+assert brp_8192[:4096] == brp_4096, "BRP prefix relation failed for 4096 vs 8192"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@research/peerdas/roots_of_unity_check.py` around lines 31 - 44, The script
only prints samples of the BRP relation but doesn't assert it; add an explicit
assertion that verifies bit_reversal_permutation produces the expected
rearrangement of ROOTS_OF_UNITY for each N: for each index i in range(N) check
BRP_ROOTS_OF_UNITY[i] == ROOTS_OF_UNITY[bit_reverse_index(i, N)] (or
equivalently build expected = [ROOTS_OF_UNITY[bit_reverse_index(i, N)] for i in
range(N)] and assert BRP_ROOTS_OF_UNITY == expected), and if false raise an
AssertionError with a clear message including i, expected and actual so the test
fails when the claimed property is false; use the existing
bit_reversal_permutation/ROOTS_OF_UNITY/BRP_ROOTS_OF_UNITY symbols to locate
where to add this check.
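The prefix relation that the proposed assertion targets also holds at small sizes, which makes it cheap to sanity-check; a self-contained Python sketch using toy parameters (mod-257 roots of unity rather than the BLS12-381 scalar field):

```python
def reverse_bits(i, bits):
    out = 0
    for _ in range(bits):
        out = (out << 1) | (i & 1)
        i >>= 1
    return out

def brp(xs):
    # Bit-reversal permutation of a power-of-two-length list
    n = len(xs)
    bits = n.bit_length() - 1
    return [xs[reverse_bits(i, bits)] for i in range(n)]

p = 257
w8 = 64                      # order-8 root of unity mod 257
w4 = pow(w8, 2, p)           # squaring halves the order: w4 has order 4
roots8 = [pow(w8, i, p) for i in range(8)]
roots4 = [pow(w4, i, p) for i in range(4)]

# The bit-reversed 8-domain starts with the bit-reversed 4-domain
assert brp(roots8)[:4] == brp(roots4)
print("prefix relation holds:", brp(roots8)[:4] == brp(roots4))
```

The same argument scales to the 4096 vs 8192 domains in the script, since the 4096-domain generator is the square of the 8192-domain generator.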
research/kzg/README.md (1)

25-65: Avoid duplicated roots-of-unity script across docs and repo scripts.

This script is effectively duplicated in research/peerdas/roots_of_unity_check.py; keeping one canonical script and linking to it from the README will prevent drift.

🔧 Suggested doc simplification
-This can be checked with the following script
-
-```python
-... full script ...
-```
+This can be checked with:
+
+`python research/peerdas/roots_of_unity_check.py`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@research/kzg/README.md` around lines 25 - 65, The README contains a
duplicated roots-of-unity script (functions is_power_of_two, reverse_bits,
bit_reversal_permutation and the N-loop) that already exists in
research/peerdas/roots_of_unity_check.py; remove the full script block from
research/kzg/README.md and replace it with a short note pointing readers to run
the canonical script (e.g. "Run: python
research/peerdas/roots_of_unity_check.py") and optionally include a one-line
description of what that script checks so the README stays informative without
duplicating code.
tests/eth_eip7594_peerdas/t_cells_and_kzg_proofs_opt.nim (2)

230-232: TODO: Test filter limits coverage.

The filter compute_cells_and_kzg_proofs_case_valid_2 restricts the test to a single vector. Consider removing this filter before merging to ensure full test coverage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/eth_eip7594_peerdas/t_cells_and_kzg_proofs_opt.nim` around lines 230 -
232, The test loop currently short-circuits using the hardcoded filter string
"compute_cells_and_kzg_proofs_case_valid_2" via the expression
file.contains(...), limiting execution to one vector; remove or disable that
conditional (the if not file.contains(...) continue) in
tests/eth_eip7594_peerdas/t_cells_and_kzg_proofs_opt.nim so the loop processes
all test files/vectors, ensuring full coverage.

46-46: Duplicate import of constantine/named/algebras.

This module is already imported on line 40.

♻️ Remove duplicate import
 import
-  constantine/named/algebras,
   constantine/math/polynomials/[fft, polynomials],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/eth_eip7594_peerdas/t_cells_and_kzg_proofs_opt.nim` at line 46, The
file imports the module constantine/named/algebras twice; remove the duplicate
import statement that references constantine/named/algebras (the one at or
around the diff shown) so the module is only imported once (keep the original
import near line 40 and delete the repeated constantine/named/algebras entry).
Ensure no other code depends on having it listed twice and run the tests to
verify no unresolved symbols remain.
tests/erasure_codes/t_recovery.nim (2)

21-30: Consider whether createDomain export is necessary.

The createDomain proc is exported with * but appears to be a test utility. If it's only used in test files, the export marker may not be needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/erasure_codes/t_recovery.nim` around lines 21 - 30, The proc
createDomain currently has a public export marker (*) but it appears to be a
test-only helper; remove the export by deleting the '*' on the proc signature
(proc createDomain(F: typedesc[Fr], N: static int): PolyEvalRootsDomain[N, F])
or alternatively move the proc into the test module so it remains internal;
locate the proc by name createDomain and related types PolyEvalRootsDomain and
createFFTDescriptor to update callers if you change visibility.

10-10: Remove unused random import.

The random module is imported but not used in this file.

♻️ Remove unused import
 import
-  std/[random, options],
+  std/options,
   constantine/named/algebras,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/erasure_codes/t_recovery.nim` at line 10, The import list includes an
unused module 'random' which should be removed; edit the import statement that
currently reads "std/[random, options]" in t_recovery.nim and delete 'random' so
only the used module(s) (e.g., "options") remain, then run tests to confirm no
references to the 'random' symbol remain.
tests/data_availability_sampling/t_peerdas_recovery.nim (2)

90-97: Consider simplifying index collection logic.

The current code adds index 0 separately, then loops from 1 to add indices < 65. This works but could be clearer.

♻️ Simplified version
   var available_indices: seq[CellIndex]
   var available_cells: seq[Cell]
-  available_indices.add(CellIndex(0))
-  available_cells.add(cells[0])
-  for i in 1 ..< CELLS_PER_EXT_BLOB:
-    if i < 65:
-      available_indices.add(CellIndex(i))
-      available_cells.add(cells[i])
+  for i in 0 ..< 65:
+    available_indices.add(CellIndex(i))
+    available_cells.add(cells[i])
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/data_availability_sampling/t_peerdas_recovery.nim` around lines 90 -
97, The code separately adds index 0 then loops 1 ..< CELLS_PER_EXT_BLOB with an
inner if i < 65; simplify by iterating once and adding when within the desired
bound: use a single loop over i (referencing available_indices, available_cells,
CellIndex, CELLS_PER_EXT_BLOB, and cells) and add CellIndex(i) and cells[i] only
while i < 65 (or loop i from 0 up to min(CELLS_PER_EXT_BLOB, 65)) to remove the
special-case for 0 and make the logic clearer.

10-10: Remove unused imports.

The imports random, algorithm, and options appear to be unused in this file.

♻️ Suggested cleanup
 import
-  std/[os, random, algorithm, strutils, options],
+  std/[os, strutils],
   constantine/named/algebras,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/data_availability_sampling/t_peerdas_recovery.nim` at line 10, The
single import line std/[os, random, algorithm, strutils, options] includes
unused modules; remove the unused imports `random`, `algorithm`, and `options`
from that import list so only the modules actually used in this file (e.g., `os`
and `strutils`) remain. Before changing, search the file for any references to
`random`, `algorithm` or `options` (functions/types) to ensure they truly aren't
used; then update the import list in the line containing std/[os, random,
algorithm, strutils, options] accordingly.
constantine/erasure_codes/zero_polynomial.nim (1)

9-14: Unused import: std/options.

The options module is imported but not used anywhere in the file.

🧹 Remove unused import
 import
-  std/options,
   constantine/named/algebras,
   constantine/math/arithmetic,
   constantine/math/polynomials/[polynomials, fft],
   constantine/platforms/[allocs, views]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@constantine/erasure_codes/zero_polynomial.nim` around lines 9 - 14, The file
imports std/options but never uses it; remove the unused import from the import
list in zero_polynomial.nim (the import block that currently includes
std/options, constantine/named/algebras, constantine/math/arithmetic,
constantine/math/polynomials/[polynomials, fft], constantine/platforms/[allocs,
views]) so only the actually used modules remain.
tests/commitments/trusted_setup_generator.nim (2)

23-31: Unused function genPowersOfTauImpl.

This function is defined but never called. It appears to be superseded by computePowersOfTauG1 and computePowersOfTauG2.

🧹 Remove dead code
-func genPowersOfTauImpl(EC: typedesc, secret: auto, length: int): seq[EC] =
-  result.setLen(length)
-  var P {.noInit.}: EC
-  P.fromAffine(EC.F.C.getGenerator($EC.G))
-  result[0] = P
-  for i in 1 ..< length:
-    P.scalarMul_vartime(secret)
-    result[i] = P
-
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/commitments/trusted_setup_generator.nim` around lines 23 - 31, Remove
the dead function genPowersOfTauImpl: locate the genPowersOfTauImpl definition
and delete it (and any related unused declarations) since computePowersOfTauG1
and computePowersOfTauG2 supersede it; verify there are no remaining references
to genPowersOfTauImpl and run the test build to ensure
computePowersOfTauG1/computePowersOfTauG2 cover the needed behavior.

1-7: Missing copyright header.

This file lacks the standard Constantine copyright header present in all other source files in this PR.

📝 Add copyright header
+# Constantine
+# Copyright (c) 2018-2019    Status Research & Development GmbH
+# Copyright (c) 2020-Present Mamy André-Ratsimbazafy
+# Licensed and distributed under either of
+#   * MIT license (license terms in the root directory or at http://opensource.org/licenses/MIT).
+#   * Apache v2 license (license terms in the root directory or at http://www.apache.org/licenses/LICENSE-2.0).
+# at your option. This file may not be copied, modified, or distributed except according to those terms.
+
 import
   constantine/named/algebras,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/commitments/trusted_setup_generator.nim` around lines 1 - 7, The file
trusted_setup_generator.nim is missing the standard Constantine copyright
header; add the project’s canonical header block at the very top of the file
(above the existing import statements) using the same text and format used in
other PR files so the header is consistent across the codebase.
data_availability_sampling/eth_peerdas.nim (1)

110-212: Memory leak on assertion failure.

If any doAssert fails after heap allocations but before the corresponding freeHeapAligned calls, memory will leak. Consider using defer or a cleanup pattern to ensure memory is freed even on failure paths.

♻️ Suggested pattern using defer
# After each allocation, immediately register cleanup:
var extended_evaluation = allocHeapArrayAligned(Fr[Name], ext_size, alignment = 64)
defer: freeHeapAligned(extended_evaluation)

# Then remove individual freeHeapAligned calls at the end
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@data_availability_sampling/eth_peerdas.nim` around lines 110 - 212, Multiple
heap buffers (e.g., extended_evaluation, short_roots, short_vanishing_poly,
zero_poly_coeff, zero_poly_eval, extended_times_zero, ext_times_zero_coeffs,
ext_eval_over_coset, zero_poly_over_coset, reconstructed_over_coset,
reconstructed_coeff) are allocated but several freeHeapAligned calls occur after
doAssert checks, causing memory leaks on assertion failure; fix by registering
cleanup immediately after each alloc (use Nim's defer: freeHeapAligned(<buffer>)
right after each allocHeapArrayAligned) so every allocation is freed on all
control paths, then remove or avoid duplicate freeHeapAligned calls at the end
to prevent double-free. Ensure you apply this pattern around functions/methods
where fft_nr, ifft_rn, coset_fft_nr, coset_ifft_rn and related doAssert checks
appear (references: fft_nr, ifft_rn, coset_fft_nr, coset_ifft_rn, doAssert).
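For illustration only: the allocate-then-immediately-register-cleanup discipline that Nim's defer provides maps onto Python's contextlib.ExitStack; a small analogy with hypothetical names (Buffer and recover are stand-ins, not Constantine APIs):

```python
from contextlib import ExitStack

class Buffer:
    # Hypothetical stand-in for allocHeapArrayAligned / freeHeapAligned
    freed = []
    def __init__(self, name):
        self.name = name
    def free(self):
        Buffer.freed.append(self.name)

def recover(fail_midway=False):
    with ExitStack() as stack:
        a = Buffer("extended_evaluation")
        stack.callback(a.free)          # registered immediately, like `defer`
        b = Buffer("zero_poly_coeff")
        stack.callback(b.free)
        if fail_midway:
            raise AssertionError("doAssert failed")  # analogous failure path
        # ... use a and b ...
    # ExitStack runs callbacks in reverse order on both success and failure

Buffer.freed.clear()
try:
    recover(fail_midway=True)
except AssertionError:
    pass
print(sorted(Buffer.freed))  # both buffers freed despite the failure
```

The key point carries over directly: because cleanup is registered at the allocation site, no later assertion or early return can skip it, and the trailing block of frees (with its double-free risk) disappears.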
tests/commitments/t_kzg_multiproofs.nim (1)

149-172: Unused function with incorrect synthetic division implementation.

computeQuotientPolyCoef appears to be defined but never called in this test file. Additionally, the synthetic division logic at lines 163-164 is incorrect - it multiplies poly.coefs[i] by z instead of quotient.coefs[i] by z:

# Current (incorrect):
quotient.coefs[i-1].prod(poly.coefs[i], z)

# Expected for synthetic division:
quotient.coefs[i-1].prod(quotient.coefs[i], z)
quotient.coefs[i-1].sum(quotient.coefs[i-1], poly.coefs[i])

Consider removing this unused function or fixing the implementation if it's intended for future use.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/commitments/t_kzg_multiproofs.nim` around lines 149 - 172, The function
computeQuotientPolyCoef is unused and contains a bug in the synthetic division
loop: it multiplies poly.coefs[i] by z instead of multiplying the running
quotient term by z. Either remove the unused computeQuotientPolyCoef or fix the
loop inside it so the update uses quotient.coefs[i] when computing
quotient.coefs[i-1] (i.e., perform prod on quotient.coefs[i] with z then add
poly.coefs[i]), ensure evalPolyAt is still used to compute p_z (or remove that
if deleting the function), and keep the final zeroing of quotient.coefs[N-1] if
the function is retained.
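The corrected recurrence can be checked end to end in a few lines; a Python sketch over a toy prime field (illustrative modulus and coefficients, not the test's actual polynomial):

```python
# Corrected synthetic division from the review: q(x) = (p(x) - p(z)) / (x - z),
# using the descending recurrence q[i-1] = q[i]*z + p[i] over a toy prime field.
P = 257                         # illustrative modulus, not the BLS12-381 scalar field
coefs = [7, 3, 0, 5, 11]        # p(x) = 7 + 3x + 5x^3 + 11x^4, ascending order
z = 9

def eval_poly(cs, x):
    acc = 0
    for c in reversed(cs):      # Horner evaluation
        acc = (acc * x + c) % P
    return acc

n = len(coefs)
q = [0] * (n - 1)
q[n - 2] = coefs[n - 1]         # leading quotient coefficient equals leading p coefficient
for i in range(n - 2, 0, -1):
    q[i - 1] = (q[i] * z + coefs[i]) % P   # running quotient times z, plus p[i]

# Check the division identity p(x) == q(x)*(x - z) + p(z) at several points
pz = eval_poly(coefs, z)
ok = all((eval_poly(q, x) * (x - z) + pz) % P == eval_poly(coefs, x) for x in range(10))
print("division identity holds:", ok)
```

Multiplying the running quotient (not the input coefficient) by z is what makes this Horner-style division correct, matching the fix suggested above.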
erasure_codes/recovery.nim (1)

242-246: Dead code: Index range check is always false.

The condition if uint64(i) >= uint64(N) can never be true since i iterates from 0 ..< N. This validation is redundant.

♻️ Proposed fix to remove dead code
-  # Validate missing indices are in range
-  for i in 0 ..< N:
-    if samples[i].isNone():
-      if uint64(i) >= uint64(N):
-        return Recovery_InvalidDomainSize
-
   var missing_idx = 0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@erasure_codes/recovery.nim` around lines 242 - 246, The check inside the loop
over "for i in 0 ..< N" is dead: the condition "if uint64(i) >= uint64(N)" can
never be true; remove that redundant branch so the loop simply identifies
missing samples without returning Recovery_InvalidDomainSize. Update the loop
that iterates over "i" and references "samples" and "N" to drop the inner if and
any use of Recovery_InvalidDomainSize in that context.
eth_eip7594_peerdas.nim (1)

343-355: Using same error code for different validation failures reduces debuggability.

All validation failures in recover_cells_and_kzg_proofs return cttEthKzg_InputsLengthsMismatch:

  • num_cells != cells.len → length mismatch (correct)
  • num_cells < CELLS_PER_EXT_BLOB div 2 → insufficient cells for recovery
  • num_cells > CELLS_PER_EXT_BLOB → too many cells
  • Duplicate/unsorted indices → invalid input ordering

Consider using distinct error codes (e.g., cttEthKzg_InsufficientCells, cttEthKzg_DuplicateCellIndex) to aid debugging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@eth_eip7594_peerdas.nim` around lines 343 - 355, In
recover_cells_and_kzg_proofs, replace the repeated
cttEthKzg_InputsLengthsMismatch return value with distinct error codes so
failures are distinguishable: return cttEthKzg_LengthMismatch for the num_cells
!= cells.len check, return cttEthKzg_InsufficientCells for num_cells <
CELLS_PER_EXT_BLOB div 2, return cttEthKzg_TooManyCells for num_cells >
CELLS_PER_EXT_BLOB, and return cttEthKzg_DuplicateOrUnsortedIndex (or
cttEthKzg_DuplicateCellIndex) for the uint64(cell_indices[i-1]) >=
uint64(cell_indices[i]) check; ensure these new error symbols are declared/added
to the existing error enum and used consistently in
recover_cells_and_kzg_proofs.
constantine/math/polynomials/fft.nim (1)

60-70: Destructor design looks correct but consider double-free protection.

The destructors check for isNil() before freeing, which is good. However, they don't set the pointer to nil after freeing, so if =destroy is somehow called twice on the same object, it could double-free.

In practice, Nim's destructor semantics should prevent this, but for defensive coding:

Optional defensive fix
 proc `=destroy`*[F](ctx: FrFFT_Descriptor[F]) =
   if not ctx.rootsOfUnity.isNil():
     ctx.rootsOfUnity.freeHeapAligned()
+    ctx.rootsOfUnity = nil

Note: This requires making ctx a var parameter, which changes the destructor signature. Given Nim's destructor guarantees, this is likely unnecessary.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@constantine/math/polynomials/fft.nim` around lines 60 - 70, Destructors for
FrFFT_Descriptor, CosetFFT_Descriptor and ECFFT_Descriptor free ctx.rootsOfUnity
but don't null it afterwards, risking a double-free if the destructor runs
twice; update the three `proc `=destroy`*` signatures to take `ctx` as a var
parameter and, after calling `ctx.rootsOfUnity.freeHeapAligned()`, set
`ctx.rootsOfUnity = nil` (reference: the three `=destroy` procs and
`rootsOfUnity.freeHeapAligned()`), or alternatively keep the current non-var
signature but ensure some external ownership/guard prevents re-entry — the
recommended fix is to make `ctx` mutable and clear the pointer after freeing.
tests/testutils/eth_consensus_utils.nim (1)

34-36: Empty string in SkippedTests array.

The SkippedTests array contains an empty string "" which will never match any filename. This appears to be a placeholder. Consider using an empty array or adding a comment:

const SkippedTests = [
  ""  # Placeholder - no tests currently skipped
]

Or simply:

const SkippedTests: array[0, string] = []
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/testutils/eth_consensus_utils.nim` around lines 34 - 36, The
SkippedTests const currently contains an empty string entry which never matches
any filename; update the SkippedTests constant to either be an empty
sequence/array or keep a documented placeholder comment instead of "" so intent
is clear—modify the SkippedTests declaration (symbol: SkippedTests) to remove
the "" entry and replace it with an empty collection type or add an inline
comment like "Placeholder - no tests currently skipped".

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b46a84b3-216b-4f3a-8538-800677d38020

📥 Commits

Reviewing files that changed from the base of the PR and between f16702e and 98757aa.

⛔ Files ignored due to path filters (21)
  • constantine/commitments_setups/trusted_setup_ethereum_kzg4844_reference.dat is excluded by !**/*.dat
  • papers/FFT/CG98 - bit-reversal-permutation - cobra.pdf is excluded by !**/*.pdf
  • papers/ethereum/WZ24 - 2024-1362 - A documentation of Ethereum PeerDAS.pdf is excluded by !**/*.pdf
  • papers/ethereum/images/WZ24 - Fig 1 - Section 2.1 - Reed-Solomon Codes and Roots of Unity.jpg is excluded by !**/*.jpg
  • papers/ethereum/images/WZ24 - Fig 2 - Section 3.1 - Warm-Up Single Row Commitment Scheme.jpg is excluded by !**/*.jpg
  • papers/polynomial_commitments/CJLMR24 - 2024-1279 - Improved polynomial division for cryptography.pdf is excluded by !**/*.pdf
  • papers/polynomial_commitments/FK20 - Kate_amortized.pdf is excluded by !**/*.pdf
  • papers/polynomial_commitments/FK23 - 2023-033 - Fast amortized KZG proofs.pdf is excluded by !**/*.pdf
  • papers/polynomial_commitments/KZG10 - Constant size commitments to polynomials and their applications.pdf is excluded by !**/*.pdf
  • papers/polynomial_commitments/images/CJLMR24 - Fig 2a - Comparison of proofs of openings at all positions.jpg is excluded by !**/*.jpg
  • papers/polynomial_commitments/images/CJLMR24 - Fig 2b - Comparison of proofs of openings smaller n.jpg is excluded by !**/*.jpg
  • papers/polynomial_commitments/images/CJLMR24 - Fig 2c - Comparison of proofs with precomputation.jpg is excluded by !**/*.jpg
  • papers/polynomial_commitments/images/CJLMR24 - Fig 2d - Comparison of proofs for larger sizes.jpg is excluded by !**/*.jpg
  • papers/polynomial_commitments/images/CJLMR24 - Fig 3a - Setup runtimes no optimizations.jpg is excluded by !**/*.jpg
  • papers/polynomial_commitments/images/CJLMR24 - Fig 3b - Setup runtimes with ColEDiv precomputation.jpg is excluded by !**/*.jpg
  • papers/polynomial_commitments/images/CJLMR24 - Fig 4 - Update Opening at index i not equal to j.jpg is excluded by !**/*.jpg
  • papers/polynomial_commitments/images/CJLMR24 - Fig 6a - EDiv_0 matrix.jpg is excluded by !**/*.jpg
  • papers/polynomial_commitments/images/CJLMR24 - Fig 6b - EDiv_1 matrix.jpg is excluded by !**/*.jpg
  • papers/polynomial_commitments/images/CJLMR24 - Fig 6c - EDiv_2 matrix.jpg is excluded by !**/*.jpg
  • papers/polynomial_commitments/images/CJLMR24 - Fig 6d - EDiv_3 matrix.jpg is excluded by !**/*.jpg
  • papers/polynomial_commitments/images/F21 - Fig 1 - verkle_trie.svg is excluded by !**/*.svg
📒 Files selected for processing (63)
  • .gitignore
  • commitments/kzg_multiproofs.nim
  • constantine.nimble
  • constantine/commitments/kzg.nim
  • constantine/commitments/kzg_multiproofs.nim
  • constantine/commitments/kzg_parallel.nim
  • constantine/commitments_setups/ethereum_kzg_srs.nim
  • constantine/data_availability_sampling/eth_peerdas.nim
  • constantine/erasure_codes/recovery.nim
  • constantine/erasure_codes/zero_polynomial.nim
  • constantine/eth_eip7594_peerdas.nim
  • constantine/ethereum_eip4844_kzg.nim
  • constantine/ethereum_eip4844_kzg_parallel.nim
  • constantine/math/arithmetic/assembly/limbs_asm_mul_mont_arm64.nim
  • constantine/math/arithmetic/assembly/limbs_asm_redc_mont_arm64.nim
  • constantine/math/arithmetic/finite_fields.nim
  • constantine/math/elliptic/ec_shortweierstrass_batch_ops.nim
  • constantine/math/matrix/toeplitz.nim
  • constantine/math/pairings/pairings_generic.nim
  • constantine/math/polynomials/fft.nim
  • constantine/math/polynomials/polynomials.nim
  • constantine/named/zoo_endomorphisms.nim
  • constantine/serialization/codecs_bls12_381.nim
  • constantine/signatures/bls_signatures.nim
  • constantine/threadpool/instrumentation.nim
  • constantine/threadpool/primitives/topology_windows.nim
  • data_availability_sampling/eth_peerdas.nim
  • erasure_codes/recovery.nim
  • erasure_codes/zero_polynomial.nim
  • eth_eip7594_peerdas.nim
  • fffft
  • math/matrix/toeplitz.nim
  • papers/README.md
  • papers/ethereum/B18 - reed-solomon_erasure_recovery_with_ffts.md
  • papers/ethereum/B18->Z22 - reed-solomon_erasure_recovery_with_ffts_reply12.md
  • papers/ethereum/EIP4844 - deneb fork - polynomial-commitments.md
  • papers/ethereum/EIP7594 - fulu fork - polynomial-commitments-sampling.md
  • papers/ethereum/KDF22 - universal_verification_equation_for_data_availability_samping.md
  • papers/ethereum/WZ24 - 2024-1362 - A documentation of Ethereum PeerDAS.md
  • papers/polynomial_commitments/CJLMR24 - 2024-1279 - Improved polynomial division for cryptography.md
  • papers/polynomial_commitments/F21 - PCS-multiproofs-using-random-evaluation.md
  • papers/polynomial_commitments/FK23 - 2023-033 - Fast amortized KZG proofs.md
  • papers/polynomial_commitments/T20 - multiplying-a-vector-by-a-toeplitz-matrix.md
  • research/kzg/README.md
  • research/kzg/fft_fr.nim
  • research/kzg/fft_g1.nim
  • research/peerdas/roots_of_unity_check.py
  • research/peerdas/test_compute_cells_opt.py
  • tests/commitments/t_kzg_multiproofs.nim
  • tests/commitments/trusted_setup_generator.nim
  • tests/data_availability_sampling/t_peerdas_recovery.nim
  • tests/erasure_codes/t_recovery.nim
  • tests/erasure_codes/t_zero_polynomial.nim
  • tests/eth_eip7594_peerdas/t_cells_and_kzg_proofs_opt.nim
  • tests/eth_eip7594_peerdas/t_compute_cells_opt.nim
  • tests/math_matrix/t_toeplitz.nim
  • tests/math_polynomials/fft_utils.nim
  • tests/math_polynomials/t_fft.nim
  • tests/math_polynomials/t_fft_coset.nim
  • tests/t_eth_eip7594_peerdas.nim
  • tests/t_ethereum_eip4844_deneb_kzg.nim
  • tests/t_ethereum_eip4844_deneb_kzg_parallel.nim
  • tests/testutils/eth_consensus_utils.nim
💤 Files with no reviewable changes (2)
  • constantine/math/arithmetic/assembly/limbs_asm_mul_mont_arm64.nim
  • constantine/math/arithmetic/assembly/limbs_asm_redc_mont_arm64.nim

Comment on lines +356 to +361
if status != FFT_Success:
  freeHeapAligned(circulant)
  freeHeapAligned(u)
  return

# u is already in time domain (toeplitzMatVecMulSameSize did IFFT)

⚠️ Potential issue | 🟠 Major

Silent failure: kzg_coset_prove doesn't propagate errors to caller.

The function returns void and exits early on FFT failures without informing the caller:

if status != FFT_Success:
  freeHeapAligned(circulant)
  freeHeapAligned(u)
  return  # Silent failure - proofs array left in undefined state

The caller has no way to know if proof computation succeeded. Consider returning FFTStatus or using an out parameter for status.

Proposed fix
-func kzg_coset_prove*[N, L, CDS: static int, Name: static Algebra](
+func kzg_coset_prove*[N, L, CDS: static int, Name: static Algebra](
        tauExtFftArray: array[L, array[CDS, EC_ShortW_Jac[Fp[Name], G1]]],
        circulantDomain: PolyEvalRootsDomain[CDS, Fr[Name]],
        proofs: var array[CDS, EC_ShortW_Aff[Fp[Name], G1]],
-       poly: PolynomialCoef[N, Fr[Name]]) {.tags:[Alloca, HeapAlloc, Vartime].} =
+       poly: PolynomialCoef[N, Fr[Name]]): FFTStatus {.tags:[Alloca, HeapAlloc, Vartime].} =
   # ... body ...
   if status != FFT_Success:
     freeHeapAligned(circulant)
     freeHeapAligned(u)
-    return
+    return status
   # ... rest ...
+  return FFT_Success

Also applies to: 369-375

In `commitments/kzg_multiproofs.nim`, lines 356-361.

Comment on lines +223 to +229
var indices_arr: array[256, uint64]
doAssert indices.len <= 256
for i in 0 ..< indices.len:
  indices_arr[i] = indices[i]

var poly: PolynomialCoef[N, F]
vanishingPolynomialForIndices(poly, indices_arr, domain)

⚠️ Potential issue | 🟠 Major

Type mismatch: vanishingPolynomialForIndices expects exact array size.

The function vanishingPolynomialForIndices has a static numIndices parameter, but here a fixed array[256, uint64] is passed regardless of actual indices.len. This will cause the vanishing polynomial to use 256 indices instead of the actual count.

Should use vanishingPolynomialForIndicesRT instead, which handles runtime-sized indices.

🐛 Proposed fix
-  var indices_arr: array[256, uint64]
-  doAssert indices.len <= 256
-  for i in 0 ..< indices.len:
-    indices_arr[i] = indices[i]
-
   var poly: PolynomialCoef[N, F]
-  vanishingPolynomialForIndices(poly, indices_arr, domain)
+  vanishingPolynomialForIndicesRT(poly, indices, domain)
In `constantine/erasure_codes/zero_polynomial.nim`, lines 223-229.
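The degree point is easy to check numerically. Below is a toy-field Python sketch (p = 257, not Constantine's `Fr` API): the vanishing polynomial Z(X) = prod_{i in indices} (X - ω^i) must be built from exactly the supplied indices, so its degree equals `indices.len`, whereas a zero-padded 256-slot array would contribute 256 roots.

```python
# Toy-field sketch (p = 257, NOT Constantine's Fr API) of the property the
# review relies on: Z(X) = prod_{i in indices} (X - w^i) must use exactly the
# supplied indices, so deg(Z) == len(indices), not the array capacity.
P = 257                          # 3 generates the full multiplicative group mod 257
W = pow(3, 256 // 8, P)          # primitive 8th root of unity (toy domain of size 8)

def mul_linear(coeffs, root, p=P):
    """Multiply a little-endian polynomial by (X - root)."""
    out = [0] * (len(coeffs) + 1)
    for j, c in enumerate(coeffs):
        out[j] = (out[j] - root * c) % p
        out[j + 1] = (out[j + 1] + c) % p
    return out

def vanishing_poly(indices, omega=W, p=P):
    coeffs = [1]
    for i in indices:
        coeffs = mul_linear(coeffs, pow(omega, i, p), p)
    return coeffs

def evaluate(coeffs, x, p=P):
    acc = 0
    for c in reversed(coeffs):   # Horner evaluation, little-endian input
        acc = (acc * x + c) % p
    return acc

missing = [1, 5]
z = vanishing_poly(missing)
assert len(z) == len(missing) + 1                                  # degree == #indices
assert all(evaluate(z, pow(W, i, P)) == 0 for i in missing)        # vanishes on them
assert all(evaluate(z, pow(W, i, P)) != 0 for i in range(8) if i not in missing)
```

Padding `missing` with zero-filled slots, as the flagged `array[256, uint64]` pattern does, would add spurious roots at ω^0 = 1, which is exactly why a runtime-sized variant matters.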

Comment on lines +151 to +163
let dividendDeg = N - 1
let divisorDeg = M - 1
var quotientDeg = dividendDeg - divisorDeg

if quotientDeg < 0:
  quotientDeg = 0

var invDivisorLead {.noInit.}: Field
invDivisorLead.inv_vartime(divisor.coefs[divisorDeg])

for i in countdown(quotientDeg, 0):
  var coef {.noInit.}: Field
  coef.prod(working.coefs[i + divisorDeg], invDivisorLead)

⚠️ Potential issue | 🔴 Critical

Guard polyDiv against a divisor degree that exceeds the dividend's.

When M > N, quotientDeg is clamped to 0, but Line 163 still reads working.coefs[i + divisorDeg], which is out of bounds on the first iteration.

Suggested fix
   let dividendDeg = N - 1
   let divisorDeg = M - 1
+  if divisorDeg > dividendDeg:
+    for i in 0 ..< N:
+      quotient.coefs[i].setZero()
+    return
+
   var quotientDeg = dividendDeg - divisorDeg
-
-  if quotientDeg < 0:
-    quotientDeg = 0
In `constantine/math/polynomials/polynomials.nim`, lines 151-163.
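For intuition, here is a hedged Python transcription of the guarded division (little-endian coefficients over the rationals; the real code works over `Fr` with `inv_vartime`). When the divisor's degree exceeds the dividend's, the quotient is zero and the loop never runs, so the out-of-bounds read cannot happen.

```python
# Hedged Python transcription of the guarded long division (little-endian
# coefficients over the rationals; Constantine's version works over Fr).
def poly_div(dividend, divisor):
    """Return (quotient, remainder) with deg(remainder) < deg(divisor)."""
    n, m = len(dividend) - 1, len(divisor) - 1
    if m > n:
        # The guard from the review: divisor degree exceeds the dividend's,
        # so the quotient is zero and the dividend is already the remainder.
        return [0] * (n + 1), list(dividend)
    working = list(dividend)
    quotient = [0] * (n + 1)
    inv_lead = 1 / divisor[m]                # inv_vartime(divisor.coefs[divisorDeg])
    for i in range(n - m, -1, -1):           # countdown(quotientDeg, 0)
        coef = working[i + m] * inv_lead     # the read that was out of bounds when m > n
        quotient[i] = coef
        for j in range(m + 1):
            working[i + j] -= coef * divisor[j]
    return quotient, working[:m]

# x^3 + 2x + 1 = (x + 1)(x^2 - x + 3) - 2
q, r = poly_div([1, 2, 0, 1], [1, 1])
assert q == [3, -1, 1, 0] and r == [-2]

# Guarded case: degree-1 dividend, degree-3 divisor -> zero quotient
q, r = poly_div([5, 1], [0, 0, 0, 1])
assert q == [0, 0] and r == [5, 1]
```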

Comment on lines +9 to +14
import
  std/algorithm,
  constantine/named/[algebras],
  constantine/math/[arithmetic, ec_shortweierstrass],
  constantine/math/polynomials/[polynomials, fft],
  constantine/platforms/views

⚠️ Potential issue | 🔴 Critical

Missing import for heap allocation functions.

The module uses allocHeapArrayAligned and freeHeapAligned but only imports views from constantine/platforms. The allocs module needs to be imported.

🐛 Proposed fix
 import
   std/algorithm,
   constantine/named/[algebras],
   constantine/math/[arithmetic, ec_shortweierstrass],
   constantine/math/polynomials/[polynomials, fft],
-  constantine/platforms/views
+  constantine/platforms/[views, allocs]
In `data_availability_sampling/eth_peerdas.nim`, lines 9-14.

Comment on lines +178 to +180
var ext_times_zero_coeffs = allocHeapArrayAligned(Fr[Name], ext_size, alignment = 64)
let ifft_status = ifft_rn(fft_desc, ext_times_zero_coeffs.toOpenArray(ext_size), extended_times_zero.toOpenArray(ext_size))
doAssert ifft_status == FFT_Success

⚠️ Potential issue | 🔴 Critical

Same toOpenArray issue - needs start and end indices.

🐛 Proposed fix
   var ext_times_zero_coeffs = allocHeapArrayAligned(Fr[Name], ext_size, alignment = 64)
-  let ifft_status = ifft_rn(fft_desc, ext_times_zero_coeffs.toOpenArray(ext_size), extended_times_zero.toOpenArray(ext_size))
+  let ifft_status = ifft_rn(fft_desc, ext_times_zero_coeffs.toOpenArray(0, ext_size-1), extended_times_zero.toOpenArray(0, ext_size-1))
   doAssert ifft_status == FFT_Success
In `data_availability_sampling/eth_peerdas.nim`, lines 178-180.

Comment on lines +62 to +65
In the *blob’s Lagrange Basis*, we carefully re-order the group of roots of unity of order 8192 in *reverse bit ordering* . This gives us the useful property that any power-of-two-aligned subset, is a subgroup (or a coset of a subgroup) of the multiplicative group of powers of \omega.

As an example, consider the group of roots of unity of order 8192: \{1, \omega, \omega^2, \omega^3, ..., \omega^{8191}\}. We use indices to denote the ordering in *reverse bit order*: \{\omega_0, \omega_1, \omega_2, \omega_3, ..., \omega_{8191}\} = \{1, \omega^{4096}, \omega^{2048}, \omega^{6072}, ..., \omega^{8191}\}. Then \Omega = \{\omega_0, \omega_1, \omega_2, ..., \omega_{15}\} is a subgroup of order 16, and for each 0 <= i < 512, we can use the *shifting factor* h_i = \omega_{16i} to construct the coset H_i = h_i \Omega.


⚠️ Potential issue | 🟠 Major

Fix the reverse-bit-order worked example.

For an 8192-point domain, the 13-bit reversal of index 3 is 6144, not 6072. This example is used to motivate the coset layout later, so the current number will mislead anyone checking the permutation by hand.

Suggested correction
-... = {1, \omega^{4096}, \omega^{2048}, \omega^{6072}, ..., \omega^{8191}}
+... = {1, \omega^{4096}, \omega^{2048}, \omega^{6144}, ..., \omega^{8191}}
In `papers/ethereum/KDF22 - universal_verification_equation_for_data_availability_samping.md`, lines 62-65.
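The corrected value is easy to verify with a few lines of Python (a generic bit-reversal helper, not tied to any library):

```python
def bit_reverse(i: int, bits: int) -> int:
    """Reverse the low `bits` bits of i (the FFT bit-reversal permutation)."""
    out = 0
    for _ in range(bits):
        out = (out << 1) | (i & 1)
        i >>= 1
    return out

# 8192 = 2^13, so indices are reversed over 13 bits
assert bit_reverse(1, 13) == 4096    # omega_1 = omega^4096
assert bit_reverse(2, 13) == 2048    # omega_2 = omega^2048
assert bit_reverse(3, 13) == 6144    # omega_3 = omega^6144, confirming the fix
# Sanity: the permutation is an involution
assert bit_reverse(6144, 13) == 3
```

Note that 6072 is not a multiple of 1024 = 8192/8, so it cannot be the 13-bit reversal of any index below 8 at all.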

Comment on lines +130 to +139
The answer is, as before, the elements in the first columns of $C_8$, which are:

\[\vec{a_8}=[ a_0, a_1, a_2, a_3, a_0, a_{-3}, a_{-2}, a_{-1} ]^T\]

Thus, applying the algorithm for circulant matrices from before, what we must do is:

- Build $\vec{a_{2n}}$ from the entries \(\{a_{n-1}, a_{n-2}, \dots, a_1, a_0, a_{-1}, \dots, a_{-(n-1)}\}\) of the Toeplitz matrix $T_n$
- Compute $\vec{y}$ by doing a DFT on $[\vec{x}, \vec{0}]^T$
- Compute $\vec{v}$ by doing a DFT on $\vec{a_{2n}}$ (e.g., on $\vec{a_8}$ from above)
- Compute the Hadamard product $\vec{u}=\vec{v} \circ \vec{y}$,
- Do an inverse DFT on $\vec{u}$
- The product $T_n \vec{x}$ consists of the first $n$ entries of the resulting vector

⚠️ Potential issue | 🟠 Major

Make the \vec{a_{2n}} construction explicit.

The bullet at Line 134 does not match the \vec{a_8} example right above it: as written, it reads like the positive-index half is reversed and the duplicated a_0 is omitted. A reader implementing from this step can easily build the wrong circulant embedding. Spell out the ordered vector directly, e.g. [a_0, a_1, …, a_{n-1}, a_0, a_{-(n-1)}, …, a_{-1}]^T.

In `papers/polynomial_commitments/T20 - multiplying-a-vector-by-a-toeplitz-matrix.md`, lines 130-139.
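A quick numeric check of the explicit ordering [a_0, a_1, ..., a_{n-1}, a_0, a_{-(n-1)}, ..., a_{-1}]^T, using a naive complex DFT in plain Python (illustrative only; the library works over Fr and G1 points):

```python
import cmath

def dft(v, inverse=False):
    """Naive O(N^2) DFT, enough to check the circulant identity."""
    N, sign = len(v), (1 if inverse else -1)
    out = [sum(v[j] * cmath.exp(sign * 2j * cmath.pi * j * k / N) for j in range(N))
           for k in range(N)]
    return [z / N for z in out] if inverse else out

n = 4
a = {k: float(k + 10) for k in range(-(n - 1), n)}       # arbitrary diagonals a_{-3}..a_3
T = [[a[i - j] for j in range(n)] for i in range(n)]     # Toeplitz: T[i][j] = a_{i-j}
x = [1.0, 2.0, 3.0, 4.0]

# Circulant embedding in the explicit order matching the a_8 example:
# [a0, a1, a2, a3, a0, a-3, a-2, a-1]
a2n = [a[k] for k in range(n)] + [a[0]] + [a[-(n - 1 - k)] for k in range(n - 1)]

y = dft(x + [0.0] * n)                                   # DFT of [x, 0]
v = dft(a2n)                                             # DFT of a_2n
u = dft([vi * yi for vi, yi in zip(v, y)], inverse=True) # Hadamard product + inverse DFT

Tx = [sum(T[i][j] * x[j] for j in range(n)) for i in range(n)]
assert all(abs(u[i].real - Tx[i]) < 1e-6 for i in range(n))  # first n entries == T_n x
```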

Comment on lines +195 to +205
let fft4 = FFTDescriptor[Fr[BLS12_381]].init(maxScale = 4)
for i in 0 ..< 4:
  echo "fft4[" & $i & "] = " & fft4.expandedRootsOfUnity[i].toHex()

echo "==================="

let fft8 = FFTDescriptor[Fr[BLS12_381]].init(maxScale = 8)
for i in 0 ..< 8:
  echo "fft8[" & $i & "] = " & fft8.expandedRootsOfUnity[i].toHex()

# when isMainModule:

⚠️ Potential issue | 🟠 Major


Move the root-dump block back under when isMainModule.

Lines 195–203 currently execute on every import, causing the module to emit debug output and perform descriptor setup as side effects whenever it is imported by other code.

In `research/kzg/fft_fr.nim`, lines 195-205.

Comment on lines +93 to +103
let n = 32
let fftDesc = createFFTDescriptor(F, n)
let coset_shift = fftDesc.rootsOfUnity[1] ~^ uint32(CELLS_PER_EXT_BLOB)

var data = newSeq[F](n)
for i in 0 ..< n:
  data[i].fromUint(uint64(i + 1))

var coset_freq = newSeq[F](n)
let cosetDesc = CosetFFT_Descriptor[F].new(
  order = n, generatorRootOfUnity = fftDesc.rootsOfUnity[1], shift = coset_shift)

⚠️ Potential issue | 🟠 Major

This “PeerDAS” shift collapses to the base subgroup.

With n = 32, fftDesc.rootsOfUnity[1] has order 32, so raising it to CELLS_PER_EXT_BLOB = 128 yields 1. That means this block is exercising another regular subgroup FFT roundtrip, not a true coset case.

One way to make this a real coset test
   block:
     let n = 32
-    let fftDesc = createFFTDescriptor(F, n)
-    let coset_shift = fftDesc.rootsOfUnity[1] ~^ uint32(CELLS_PER_EXT_BLOB)
+    let extDesc = createFFTDescriptor(F, n * CELLS_PER_EXT_BLOB)
+    let subgroupRoot = extDesc.rootsOfUnity[CELLS_PER_EXT_BLOB]
+    let coset_shift = extDesc.rootsOfUnity[1]

     var data = newSeq[F](n)
     for i in 0 ..< n:
       data[i].fromUint(uint64(i + 1))

     var coset_freq = newSeq[F](n)
     let cosetDesc = CosetFFT_Descriptor[F].new(
-      order = n, generatorRootOfUnity = fftDesc.rootsOfUnity[1], shift = coset_shift)
+      order = n, generatorRootOfUnity = subgroupRoot, shift = coset_shift)
...
-    fftDesc.delete()
+    extDesc.delete()
In `tests/math_polynomials/t_fft_coset.nim`, lines 93-103.
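The collapse is pure group theory and easy to reproduce in a toy field (p = 257, where 3 generates the multiplicative group; this stands in for BLS12-381's Fr):

```python
# Reproduce the degenerate "coset shift" in a toy field: raising an order-32
# root of unity to 128 yields the identity, since 128 is a multiple of 32.
P = 257                                   # toy field standing in for Fr
CELLS_PER_EXT_BLOB = 128
root32 = pow(3, 256 // 32, P)             # element of multiplicative order 32

assert pow(root32, 32, P) == 1 and pow(root32, 16, P) != 1   # order is exactly 32
# 128 mod 32 == 0, so the shift degenerates to the identity:
assert pow(root32, CELLS_PER_EXT_BLOB, P) == 1
# A genuine shift must lie outside the order-32 subgroup, e.g. the generator:
shift = 3
assert pow(shift, 32, P) != 1             # so shift * <root32> is a real coset
```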

Comment on lines +127 to +155
proc testECFFTRoundtrip*(EC: typedesc, F: typedesc[Fr]) =
  echo "Testing EC FFT/IFFT roundtrip..."

  for scale in 2 .. 5:
    let order = 1 shl scale

    let fftDesc = createFFTDescriptor(EC, F, order)

    var data = newSeq[EC](order)
    data[0].setGenerator()
    let gen = EC.F.Name.getGenerator($EC.G)
    for i in 1 ..< order:
      data[i].mixedSum(data[i-1], gen)

    var freq = newSeq[EC](order)
    let fftOk = fft_vartime(fftDesc, freq, data)
    doAssert fftOk == FFT_Success

    var recovered = newSeq[EC](order)
    let ifftOk = ec_ifft_rn(fftDesc, recovered, freq)
    doAssert ifftOk == FFT_Success

    for i in 0 ..< order:
      doAssert (recovered[i] == data[i]).bool,
        "EC Roundtrip failed at size " & $order & " index " & $i

    fftDesc.delete()

  echo " ✓ All EC FFT/IFFT roundtrip tests PASSED"

⚠️ Potential issue | 🔴 Critical


Use ec_fft_nr instead of fft_vartime and test EC FFT at size 2.

Line 142 calls fft_vartime, which is not available from the imported constantine/math/polynomials/fft module. The correct EC FFT forward function is ec_fft_nr, which pairs with the already-used ec_ifft_rn on line 146.

Also, the EC FFT roundtrip test starts at scale=2 (size 4), while the Fr FFT test starts at scale=1 (size 2). The test file's header comment explicitly notes that size 2 (with ω=-1, the 2nd root of unity) is critical for FK20 multi-proof verification. Begin the EC test at scale=1 to ensure the IFFT correctly inverts the FFT for all root orders.

In `tests/math_polynomials/t_fft.nim`, lines 127-155.
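The size-2 case the header comment singles out (ω = -1) is just a single butterfly. A toy-field sketch of the roundtrip the test should also cover:

```python
# Size-2 FFT/IFFT with omega = -1 (the case flagged for FK20), over a toy
# prime field standing in for Fr.
P = 257
OMEGA = P - 1                        # the 2nd root of unity: omega = -1

def fft2(a0, a1, p=P):
    return ((a0 + a1) % p, (a0 + OMEGA * a1) % p)        # [a0 + a1, a0 - a1]

def ifft2(y0, y1, p=P):
    inv2 = pow(2, p - 2, p)                              # 1/2 via Fermat's little theorem
    return ((y0 + y1) * inv2 % p, (y0 + OMEGA * y1) * inv2 % p)

for a0 in (0, 1, 123):
    for a1 in (0, 7, 200):
        assert ifft2(*fft2(a0, a1)) == (a0, a1)          # roundtrip at the smallest size
```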

@mratsim mratsim marked this pull request as draft March 19, 2026 10:38
@mratsim mratsim changed the title Implement EIP-7594 PeerDAS [Draft] Implement EIP-7594 PeerDAS Mar 20, 2026
@mratsim
Owner Author

mratsim commented Mar 20, 2026

/gemini summary

/gemini review

@gemini-code-assist

Thanks for the detailed summary, @greptile-apps[bot]! I appreciate the thoroughness and the identification of critical bugs and areas for improvement. I'll address these points in the coming commits.

And thanks for running the summary and review, @mratsim! I'll be sure to take a close look at the potentially related PRs you've identified.

@greptile-apps

greptile-apps bot commented Mar 20, 2026

You're welcome! Happy to help. Feel free to tag me if you need another look after making those changes, or if you have questions about any of the findings. Good luck with the implementation! 🚀

mratsim added a commit that referenced this pull request Mar 20, 2026
+ Add research papers
+ Fix workspace duplication from #592
@mratsim
Owner Author

mratsim commented Mar 20, 2026

FFT and polynomial refactorings live in commit 33a09af, on this branch: https://github.com/mratsim/constantine/compare/peerdas-review02-squashed:

  • PeerDAS: no more on-the-fly FFT descriptors; everything is managed from the context
  • FFT & bit-reversal permutation: both are now benchmarked, and bit-reversal permutation uses heuristics to choose the fastest implementation depending on problem size
  • Polynomials: whether evaluations and roots of unity are bit-reversed is now part of the type, so compatibility is compiler-enforced.
