TilemapChunk does not display tile colors correctly #23171

@trepidacious

Description

Bevy version and features

main, commit 196606c

Relevant system information

Rendering related - adapter info:

AdapterInfo { name: "Apple M1 Max", vendor: 0, device: 0, device_type: IntegratedGpu, device_pci_bus_id: "", driver: "", driver_info: "", backend: Metal, subgroup_min_size: 4, subgroup_max_size: 64, transient_saves_memory: true }

What you did

I modified the tilemap_chunk_orientation example to display a gradient of tile colors, replacing the existing tile data setup with:

    let chunk_size = uvec2(17, 1);
    let tile_display_size = UVec2::splat(32);

    let tile_data = (0..chunk_size.element_product())
        .map(|i| {
            let v = i as f32 / 16.0;
            Some(TileData {
                tileset_index: 0,
                color: Color::srgba(v, v, v, 1.0),
                visible: true,
                orientation: TileOrientation::Default,
            })
        })
        .collect();

I ran this code and checked the displayed colors of the tiles.

What went wrong

Expected: The pixels in the "P" are white in the tileset, so each tile should come out at exactly the requested tile color in the output, forming a visually linear gradient with even steps in the sRGB output.

Observed: The output shows an incorrect color gradient (first step is too big, final steps are too small):

[Screenshot: rendered gradient with uneven steps]

Using the macOS "Digital Color Meter" on screen, or picking colors from the saved screenshot, the gradient gives greys with values of:

0, 71, 99, 120, ... 240, 248, 255

This should be approximately (+/-1?):

0, 15, 31, 47, ... 223, 239, 255
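For reference, the expected values above follow directly from the gradient setup: tile i of the 17 has sRGB component i / 16, so its 8-bit channel value should be round(i / 16 * 255). A quick sketch (expected_channel is a throwaway helper, not a Bevy API):

```rust
/// Expected 8-bit sRGB channel value for tile `i` of the 17-tile gradient.
fn expected_channel(i: u32) -> u8 {
    (i as f32 / 16.0 * 255.0).round() as u8
}

fn main() {
    // Prints the full expected gradient: 0, 16, 32, ... 223, 239, 255
    let expected: Vec<u8> = (0..17).map(expected_channel).collect();
    println!("{:?}", expected);
}
```

(Rounding gives 16, 32, 48, ... rather than 15, 31, 47, ..., i.e. within the +/-1 tolerance mentioned above.)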

Additional information

Looking at the code, I think the issue is that the TileData color values are being:

  1. Converted to sRGBA in tilemap_chunk_material; the conversion to PackedTileData uses color: color.to_srgba().to_u8_array()
  2. Treated as linear in tilemap_chunk_material.wgsl, in get_tile_data.

I'm not that familiar with shaders, but from reading around it seems correct for the tile color to be linear in the shader, since we want to multiply the tileset texture sample by it (I'm assuming the tileset texture sample is linear, so we're multiplying linear by linear to produce a linear shader output). The issue seems to be that the data reaching the shader is sRGB-encoded.
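This double-encoding hypothesis reproduces the observed numbers exactly: the byte stored by to_srgba() is already sRGB-encoded, the shader treats it as linear, and the swapchain then encodes it to sRGB a second time. A sketch (srgb_encode below is the standard sRGB transfer function, written out here for illustration, not a Bevy API):

```rust
/// Standard linear-to-sRGB transfer function (IEC 61966-2-1).
fn srgb_encode(l: f32) -> f32 {
    if l <= 0.003_130_8 {
        12.92 * l
    } else {
        1.055 * l.powf(1.0 / 2.4) - 0.055
    }
}

fn main() {
    for i in [1u32, 2, 3] {
        // Byte written by `color.to_srgba().to_u8_array()` for component i/16.
        let byte = (i as f32 / 16.0 * 255.0).round() as u8;
        // Shader reads it as if linear; the display encodes to sRGB again.
        let on_screen = (srgb_encode(byte as f32 / 255.0) * 255.0).round() as u8;
        println!("tile {i}: wrote {byte}, see {on_screen}");
    }
}
```

This prints on-screen values of 71, 99, and 120 for the first three non-zero tiles, matching the 71, 99, 120 measured with the color meter.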

Since the color is packed with one byte per channel, sRGB encoding seems like a good way to avoid losing precision in darker colors, so this could be addressed by converting from sRGB to linear in the shader, e.g.:

    let color_r = pow(f32(data.g & 0xFFu) / 255.0, 2.2);
    let color_g = pow(f32((data.g >> 8u) & 0xFFu) / 255.0, 2.2);
    let color_b = pow(f32(data.b & 0xFFu) / 255.0, 2.2);

This doesn't seem ideal, since the pow call has a cost and plain gamma 2.2 is only an approximation of the sRGB transfer function, but it does make the gradient look roughly correct:

[Screenshot: gradient after adding the sRGB-to-linear conversion in the shader]

This gives improved sRGB values on screen:

0, 7, 26, 44, ... 224, 239, 255

The first 4 values are a bit off (presumably because we're using a plain gamma curve rather than the piecewise sRGB curve, which has an initial linear segment), but after that it's within +/-1 of the expected values.

Is there a better (faster/more accurate) way of doing the sRGB to linear conversion?
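For comparison, here is a sketch of the exact piecewise sRGB-to-linear conversion (the standard IEC 61966-2-1 curve, not a Bevy API) alongside the pow(x, 2.2) approximation used above; the exact form should be straightforward to port to WGSL. As expected, the two diverge most near black, which would explain the first few steps being off:

```rust
/// Exact sRGB-to-linear transfer function (IEC 61966-2-1), with the
/// initial linear segment below s = 0.04045.
fn srgb_to_linear_exact(s: f32) -> f32 {
    if s <= 0.040_45 {
        s / 12.92
    } else {
        ((s + 0.055) / 1.055).powf(2.4)
    }
}

/// Plain-gamma approximation, as in the shader snippet above.
fn srgb_to_linear_approx(s: f32) -> f32 {
    s.powf(2.2)
}

fn main() {
    // Compare at a few byte values; the gap is largest for dark inputs.
    for byte in [16u32, 48, 128, 240] {
        let s = byte as f32 / 255.0;
        println!(
            "{byte:3}: exact {:.5}, approx {:.5}",
            srgb_to_linear_exact(s),
            srgb_to_linear_approx(s)
        );
    }
}
```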

Another approach could be to expand the data passed into the tile_data texture so it can carry higher-precision linear data, e.g. two texels per tile: one for the linear RGBA and one for the tileset index and flags.
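A rough sketch of what that two-texel layout might look like on the CPU side (this is hypothetical, not Bevy's actual PackedTileData; the field layout and quantization are assumptions for illustration):

```rust
/// Hypothetical two-texels-per-tile layout for the tile_data texture.
struct TileTexels {
    /// Texel 0: linear color at 16 bits per channel (e.g. Rgba16Unorm),
    /// enough precision to skip sRGB encoding entirely.
    color: [u16; 4],
    /// Texel 1: tileset index in the red channel, visibility flag in green.
    index_and_flags: [u16; 4],
}

fn pack(linear_rgba: [f32; 4], tileset_index: u16, visible: bool) -> TileTexels {
    // Quantize a linear [0, 1] component to 16 bits.
    let q = |c: f32| (c.clamp(0.0, 1.0) * f32::from(u16::MAX)).round() as u16;
    TileTexels {
        color: linear_rgba.map(q),
        index_and_flags: [tileset_index, u16::from(visible), 0, 0],
    }
}
```

The shader would then read the color texel directly as linear, with no transfer-function work per fragment, at the cost of doubling the tile_data texture size.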

Metadata

Labels

A-Color: Color spaces and color math
A-Rendering: Drawing game state to the screen
C-Bug: An unexpected or incorrect behavior
S-Needs-Investigation: This issue requires detective work to figure out what's going wrong

Status

Needs SME Triage
