
Conversation

@guan404ming
Member

Why

The scatter_nd operator was missing the FRelaxInferLayout attribute, which is needed for proper layout transformation during model optimization.

How

  • Added an InferLayoutScatterND function that uses the data tensor's layout for the output, since scatter_nd preserves the input shape
  • Registered the FRelaxInferLayout attribute for relax.scatter_nd (a registration sketch follows below)
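
For reference, TVM operator attributes are typically attached via TVM_REGISTER_OP, so the registration step likely looks roughly like the sketch below. This is illustrative only; the exact file and surrounding attribute registrations are not shown in this thread.

// Minimal sketch (not the literal diff): attach the layout-inference function
// to the relax.scatter_nd operator so layout-conversion passes can query it.
TVM_REGISTER_OP("relax.scatter_nd")
    .set_attr<FRelaxInferLayout>("FRelaxInferLayout", InferLayoutScatterND);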

@gemini-code-assist
Contributor

Summary of Changes

Hello @guan404ming, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical missing component for the scatter_nd operator within the Relax framework by adding its FRelaxInferLayout attribute. This enhancement ensures that scatter_nd can correctly participate in layout transformations during model optimization, which is essential for maintaining data consistency and achieving optimal performance. The change includes the implementation of the layout inference logic and a new test to verify its functionality.

Highlights

  • Layout Inference Function: Implemented the InferLayoutScatterND function, which determines the output layout for the scatter_nd operator by adopting the layout of its primary data input tensor.
  • Attribute Registration: Registered the FRelaxInferLayout attribute for the relax.scatter_nd operator, linking it to the newly created InferLayoutScatterND function to enable proper layout transformation.
  • New Test Case: Added a new test case, test_conv2d_scatter_nd, to test_transform_convert_layout.py to validate the correct layout transformation behavior of scatter_nd when used in conjunction with a conv2d operation.

@gemini-code-assist (bot) left a comment

Code Review

This pull request adds the FRelaxInferLayout attribute for the scatter_nd operator, which is a good addition for enabling layout transformations. The implementation is mostly correct, but I've found a potential issue in the layout inference logic for the updates tensor. The current logic doesn't handle the layout of updates, which could lead to compilation errors in certain scenarios. I've provided a suggestion to make the implementation more robust by ensuring layout consistency between the data and updates tensors.

Comment on lines 2783 to 2803
InferLayoutOutput InferLayoutScatterND(
    const Call& call, const ffi::Map<ffi::String, ffi::Array<ffi::String>>& desired_layouts,
    const VarLayoutMap& var_layout_map) {
  ICHECK(NoDesiredLayout(call, desired_layouts));

  LayoutDecision data_layout = GetLayoutDecision(var_layout_map, call->args[0]);
  LayoutDecision indices_layout = GetLayoutDecision(var_layout_map, call->args[1]);
  LayoutDecision updates_layout = GetLayoutDecision(var_layout_map, call->args[2]);

  LayoutDecision layout = data_layout;

  if (layout->layout.ndim() != layout->layout.ndim_primal()) {
    const auto* tensor_sinfo = GetStructInfoAs<TensorStructInfoNode>(call->args[0]);
    ICHECK(tensor_sinfo != nullptr) << "Invalid Call";
    ICHECK(!tensor_sinfo->IsUnknownNdim()) << "Only support static ndim for now";
    int ndim = tensor_sinfo->ndim;
    layout = LayoutDecision(InitialLayout(ndim));
  }

  return InferLayoutOutput({layout, indices_layout, updates_layout}, {layout}, Attrs(call->attrs));
}
Contributor

Severity: high

The current implementation of InferLayoutScatterND doesn't enforce any layout on the updates tensor. This can lead to shape mismatches after layout conversion if updates has a layout that is incompatible with the new layout of data, as the updates tensor's shape is dependent on the data tensor's shape. For instance, if data is transformed to NHWC, but updates remains in NCHW, InferStructInfo will likely fail.

I suggest a more robust implementation that enforces the same layout for data and updates when they have the same rank. For more complex cases, like different ranks or when data has a sub-indexed layout, it safely falls back to the initial layout for both tensors. This will prevent potential compilation errors.

InferLayoutOutput InferLayoutScatterND(
    const Call& call, const ffi::Map<ffi::String, ffi::Array<ffi::String>>& desired_layouts,
    const VarLayoutMap& var_layout_map) {
  ICHECK(NoDesiredLayout(call, desired_layouts));

  LayoutDecision data_layout = GetLayoutDecision(var_layout_map, call->args[0]);
  LayoutDecision indices_layout = GetLayoutDecision(var_layout_map, call->args[1]);
  LayoutDecision updates_layout = GetLayoutDecision(var_layout_map, call->args[2]);

  const auto* data_sinfo = GetStructInfoAs<TensorStructInfoNode>(call->args[0]);
  ICHECK(data_sinfo != nullptr) << "Invalid Call";
  const auto* updates_sinfo = GetStructInfoAs<TensorStructInfoNode>(call->args[2]);
  ICHECK(updates_sinfo != nullptr) << "Invalid Call";

  bool same_rank = !data_sinfo->IsUnknownNdim() && !updates_sinfo->IsUnknownNdim() &&
                   data_sinfo->ndim == updates_sinfo->ndim;

  if (data_layout->layout.ndim() != data_layout->layout.ndim_primal() || !same_rank) {
    // Fallback to initial layout if data has sub-indexed layout or ranks are different.
    // The case of different ranks is complex to handle, so we fallback for safety.
    ICHECK(!data_sinfo->IsUnknownNdim()) << "Only support static ndim for now";
    data_layout = LayoutDecision(InitialLayout(data_sinfo->ndim));
    if (!updates_sinfo->IsUnknownNdim()) {
      updates_layout = LayoutDecision(InitialLayout(updates_sinfo->ndim));
    } else {
      updates_layout = LayoutDecision::InitUnknownDim();
    }
  } else {
    // Same rank, not sub-indexed. Enforce same layout for data and updates.
    updates_layout = data_layout;
  }

  return InferLayoutOutput({data_layout, indices_layout, updates_layout}, {data_layout},
                           Attrs(call->attrs));
}

@guan404ming marked this pull request as ready for review January 7, 2026 09:08
@guan404ming
Member Author

cc @tlopex @mshr-h

@mshr-h merged commit b975db9 into apache:main Jan 7, 2026
14 checks passed
@guan404ming deleted the add-scatter-nd-layout-inference branch January 7, 2026 12:47
@guan404ming
Member Author

Thanks!
