5x buffer size memory usage for in-place real to hermitian interleaved 2D transform #241

@octionary

Description

When performing in-place real-to-hermitian interleaved 2D transforms, the memory usage seems to be consistently around 5 times the size of the buffer. For instance, when transforming an 8194x8192 buffer (so the buffer size is 256 MB, given sizeof(float) = 4 bytes), the VRAM utilization after the transform is ~1500 MB. This significantly limits the size of transforms that can be computed.

I understand that the memory usage is likely due to internally allocated buffers, but what is the reason for ~5x? If it is not possible to reduce this, is it at least possible to predict exactly how much memory will be used, so that the maximum possible transform size can be calculated given memory constraints?

My example plan is created by

clfftPlanHandle planHandle;
clfftCreateDefaultPlan(&planHandle, ctx, CLFFT_2D, clLengths);
clfftSetPlanPrecision(planHandle, CLFFT_SINGLE);
clfftSetLayout(planHandle, CLFFT_REAL, CLFFT_HERMITIAN_INTERLEAVED);
clfftSetPlanInStride(planHandle, CLFFT_2D, clRealStride);
clfftSetPlanOutStride(planHandle, CLFFT_2D, clComplexStride);
clfftSetResultLocation(planHandle, CLFFT_INPLACE);
clfftSetPlanScale(planHandle, CLFFT_FORWARD, 1.0);
clfftBakePlan(planHandle, 1, &queue, NULL, NULL);

And the transform is executed by
clfftEnqueueTransform(planHandle, CLFFT_FORWARD, 1, &queue, 0, NULL, NULL, &buffer, NULL, NULL);
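For what it's worth, clFFT does expose `clfftGetTmpBufSize`, which reports the size of the intermediate scratch buffer for a baked plan; querying it right after `clfftBakePlan` would at least give a partial prediction (a sketch reusing the `planHandle` above; whether this accounts for all of the ~5x, or only part of it, is exactly what I'd like to know):

```c
size_t tmpBufSize = 0;
clfftStatus status = clfftGetTmpBufSize(planHandle, &tmpBufSize);
if (status == CLFFT_SUCCESS) {
    /* tmpBufSize is the library's internal scratch allocation in bytes;
       total device usage should be at least buffer size + tmpBufSize. */
    printf("clFFT temp buffer: %zu bytes\n", tmpBufSize);
}
```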

Here is a complete minimal reproducible example:
https://pastebin.com/XPxCmy1g
