
Conversation

@ehwan (Owner) commented Oct 3, 2025

No description provided.

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @ehwan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a push_data boolean parameter to the DataStack::reduce_action method, centralizing control over whether the result of a grammar reduction is pushed onto the data stack. The refactoring simplifies the logic in the parser contexts and the generated reduce functions, making internal state management during parsing clearer.

Highlights

  • DataStack::reduce_action Signature Update: The reduce_action trait method in DataStack now takes a push_data: bool parameter, and its return type has changed from Result<bool, Self::ReduceActionError> to Result<(), Self::ReduceActionError>, refactoring the control flow for data pushing (a minimal sketch of the new shape follows this list).
  • Centralized Data Pushing Logic: The decision of whether to push the reduced non-terminal's data onto the stack is now explicitly managed by the new push_data parameter within the reduce_action method and its generated implementations. This removes redundant conditional logic from parser contexts.
  • Simplified Parser Contexts: Both deterministic and nondeterministic parser contexts (context.rs) have been streamlined. They no longer contain explicit conditional logic for popping or pushing data based on the reduce_action's return value, as this is now handled internally by the updated reduce_action.
  • Code Generation Adaptation: The rusty_lr_parser's code generation logic has been updated to produce reduce_action implementations that correctly utilize the push_data flag, conditionally pushing the result of a reduction or an Empty tag onto the data stack.
  • Enhanced Debuggability: Several methods within the parser contexts now include std::fmt::Debug trait bounds for non-terminal types, which will improve debugging capabilities by allowing these types to be printed.
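
To make the new shape concrete, here is a minimal sketch assuming a simplified trait. It is not the actual rusty_lr DataStack trait (the generated implementations take further arguments such as the rule being reduced, lookahead, and user data), and the IntStack type and rule_index parameter are purely illustrative. It only demonstrates the push_data: bool parameter, the Result<(), Self::ReduceActionError> return type, and how a call site forwards the flag instead of branching on a returned bool.

    // Minimal sketch, NOT the actual rusty_lr trait: only the parts described in the
    // highlights are modeled -- the new `push_data: bool` parameter and the
    // `Result<(), Self::ReduceActionError>` return type.
    trait DataStack {
        type ReduceActionError;

        /// Runs the reduce action for a rule. When `push_data` is false, the reduced
        /// value is not kept on the data stack.
        fn reduce_action(
            &mut self,
            rule_index: usize, // hypothetical: identifies the reduced rule
            push_data: bool,   // new: the caller decides whether the result is kept
        ) -> Result<(), Self::ReduceActionError>;
    }

    // Toy implementation to show the call-site shape: the parser context no longer
    // inspects a returned bool and pops/pushes itself; it just forwards the flag.
    struct IntStack {
        values: Vec<i32>,
    }

    impl DataStack for IntStack {
        type ReduceActionError = String;

        fn reduce_action(
            &mut self,
            _rule_index: usize,
            push_data: bool,
        ) -> Result<(), Self::ReduceActionError> {
            // e.g. reduce `E -> E + E`: pop two operands, optionally push the sum
            let rhs = self.values.pop().ok_or("stack underflow")?;
            let lhs = self.values.pop().ok_or("stack underflow")?;
            if push_data {
                self.values.push(lhs + rhs);
            }
            Ok(())
        }
    }

    fn main() {
        let mut stack = IntStack { values: vec![1, 2] };
        stack.reduce_action(0, true).unwrap();
        assert_eq!(stack.values, vec![3]);
    }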


@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a push_data parameter to the DataStack::reduce_action method, which is a nice refactoring. It makes the intent clearer and moves the logic of whether to push data into the DataStack implementation, simplifying the parser context code in both deterministic and non-deterministic parsers. The changes are well-propagated through the codebase, including the code generator and example files. I have one suggestion to improve the maintainability of the code generator.

Comment on lines 1153 to 1246
    if rule.tokens.len() > 0 {
        if new_tag_name == &empty_tag_name {
            // if first token's tag is equal to new_tag, no need to (pop n tokens -> push new token).
            // just pop n-1 tokens
            let first_tag_name = token_to_stack_name(rule.tokens[0].token)
                .unwrap_or(&empty_tag_name);

            if first_tag_name == new_tag_name {
                // pop n-1 tokens, no new insertion
                let len = rule.tokens.len() - 1;
                let truncate_stream = if len > 0 {
                    quote! {__data_stack.#tag_stack_name.truncate(__data_stack.#tag_stack_name.len() - #len);}
                } else {
                    TokenStream::new()
                };
                truncate_stream
            } else {
                let len = rule.tokens.len();
                // len > 0 here
                quote! {
                    __data_stack.#tag_stack_name.truncate(__data_stack.#tag_stack_name.len() - #len);
                    __data_stack.#tag_stack_name.push(#tag_enum_name::#new_tag_name);
                }
            }
        } else {
            // if first token's tag is equal to new_tag, no need to (pop n tokens -> push new token).
            // just pop n-1 tokens
            let first_tag_name = token_to_stack_name(rule.tokens[0].token)
                .unwrap_or(&empty_tag_name);

            if first_tag_name == new_tag_name {
                // pop n-1 tokens, no new insertion
                let lenm1 = rule.tokens.len() - 1;
                let truncate_stream = if lenm1 > 0 {
                    quote! {__data_stack.#tag_stack_name.truncate(__data_stack.#tag_stack_name.len() - #lenm1);}
                } else {
                    TokenStream::new()
                };

                let len = rule.tokens.len();
                quote! {
                    if __push_data {
                        #truncate_stream
                    } else {
                        __data_stack.#tag_stack_name.truncate(__data_stack.#tag_stack_name.len() - #len);
                        __data_stack.#tag_stack_name.push(#tag_enum_name::#empty_tag_name);
                    }
                }
            } else if first_tag_name == &empty_tag_name {
                // pop n-1 tokens, no new insertion
                let lenm1 = rule.tokens.len() - 1;
                let truncate_stream = if lenm1 > 0 {
                    quote! {__data_stack.#tag_stack_name.truncate(__data_stack.#tag_stack_name.len() - #lenm1);}
                } else {
                    TokenStream::new()
                };

                let len = rule.tokens.len();
                quote! {
                    if __push_data {
                        __data_stack.#tag_stack_name.truncate(__data_stack.#tag_stack_name.len() - #len);
                        __data_stack.#tag_stack_name.push(#tag_enum_name::#new_tag_name);
                    } else {
                        #truncate_stream
                    }
                }
            } else {
                let len = rule.tokens.len();
                // len > 0 here
                quote! {
                    __data_stack.#tag_stack_name.truncate(__data_stack.#tag_stack_name.len() - #len);
                    if __push_data {
                        __data_stack.#tag_stack_name.push(#tag_enum_name::#new_tag_name);
                    } else {
                        __data_stack.#tag_stack_name.push(#tag_enum_name::#empty_tag_name);
                    }
                }
            }
        }
    } else {
        if new_tag_name == &empty_tag_name {
            quote! {
                __data_stack.#tag_stack_name.push(#tag_enum_name::#new_tag_name);
            }
        } else {
            quote! {
                if __push_data {
                    __data_stack.#tag_stack_name.push(#tag_enum_name::#new_tag_name);
                } else {
                    __data_stack.#tag_stack_name.push(#tag_enum_name::#empty_tag_name);
                }
            }
        }
    }

Severity: medium

The logic for generating tag stack modifications is quite complex and contains a lot of duplicated code. This makes it hard to read and maintain. For example, let first_tag_name = ... and the logic to create truncate_stream are repeated.

Consider refactoring this block to reduce duplication. You could extract common variables and structures to the top of the if rule.tokens.len() > 0 block. This would make the logic for each case clearer and the whole block more maintainable.

This refactoring would also avoid generating code like if __push_data {} else { ... }, which is functionally correct but stylistically awkward.

                        if rule.tokens.len() > 0 {
                            let first_tag_name = token_to_stack_name(rule.tokens[0].token)
                                .unwrap_or(&empty_tag_name);
                            let len = rule.tokens.len();
                            let lenm1 = len - 1;

                            let truncate_lenm1 = if lenm1 > 0 {
                                quote! {__data_stack.#tag_stack_name.truncate(__data_stack.#tag_stack_name.len() - #lenm1);}
                            } else {
                                TokenStream::new()
                            };
                            let truncate_len = quote! {__data_stack.#tag_stack_name.truncate(__data_stack.#tag_stack_name.len() - #len);};

                            if new_tag_name == &empty_tag_name {
                                if first_tag_name == new_tag_name {
                                    // pop n-1 tokens, no new insertion
                                    truncate_lenm1
                                } else {
                                    // pop n tokens, push empty
                                    quote! {
                                        #truncate_len
                                        __data_stack.#tag_stack_name.push(#tag_enum_name::#new_tag_name);
                                    }
                                }
                            } else { // new_tag_name is not empty
                                if first_tag_name == new_tag_name {
                                    // pop n-1 tokens if __push_data, else pop n and push empty
                                    quote! {
                                        if __push_data {
                                            #truncate_lenm1
                                        } else {
                                            #truncate_len
                                            __data_stack.#tag_stack_name.push(#tag_enum_name::#empty_tag_name);
                                        }
                                    }
                                } else if first_tag_name == &empty_tag_name {
                                    // pop n and push new if __push_data, else pop n-1
                                    quote! {
                                        if __push_data {
                                            #truncate_len
                                            __data_stack.#tag_stack_name.push(#tag_enum_name::#new_tag_name);
                                        } else {
                                            #truncate_lenm1
                                        }
                                    }
                                } else {
                                    // pop n, and push new or empty based on __push_data
                                    quote! {
                                        #truncate_len
                                        if __push_data {
                                            __data_stack.#tag_stack_name.push(#tag_enum_name::#new_tag_name);
                                        } else {
                                            __data_stack.#tag_stack_name.push(#tag_enum_name::#empty_tag_name);
                                        }
                                    }
                                }
                            }
                        } else {
                            if new_tag_name == &empty_tag_name {
                                quote! {
                                    __data_stack.#tag_stack_name.push(#tag_enum_name::#new_tag_name);
                                }
                            } else {
                                quote! {
                                    if __push_data {
                                        __data_stack.#tag_stack_name.push(#tag_enum_name::#new_tag_name);
                                    } else {
                                        __data_stack.#tag_stack_name.push(#tag_enum_name::#empty_tag_name);
                                    }
                                }
                            }
                        }
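
For orientation, either version of the generator emits code that, for a concrete rule, expands to roughly the following shape at the call site; the Tag enum, its variants, and the three-token rule below are illustrative assumptions rather than actual generated output.

    // Hypothetical expansion for a rule with three right-hand-side tokens whose result
    // tag differs from its first token's tag: pop all three tags, then push either the
    // non-terminal's tag or the Empty placeholder depending on __push_data.
    enum Tag {
        I32,
        Str,
        Empty,
    }

    fn apply_reduction_tags(tag_stack: &mut Vec<Tag>, __push_data: bool) {
        let len = 3; // number of tokens in the rule's right-hand side
        tag_stack.truncate(tag_stack.len() - len);
        if __push_data {
            tag_stack.push(Tag::I32); // tag of the reduced non-terminal's data
        } else {
            tag_stack.push(Tag::Empty); // data was discarded; keep the stacks aligned
        }
    }

    fn main() {
        let mut tags = vec![Tag::I32, Tag::Str, Tag::I32];
        apply_reduction_tags(&mut tags, true);
        assert!(matches!(tags.as_slice(), [Tag::I32]));
    }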

@ehwan merged commit 1ca5013 into main on Oct 3, 2025 (1 check passed).
@ehwan deleted the next_state_push branch on Oct 3, 2025, 04:35.
