[Relax] Add FRelaxInferLayout for scatter_nd operator #18643

Conversation
Code Review
This pull request adds the FRelaxInferLayout attribute for the scatter_nd operator, which is a good addition for enabling layout transformations. The implementation is mostly correct, but I've found a potential issue in the layout inference logic for the updates tensor. The current logic doesn't handle the layout of updates, which could lead to compilation errors in certain scenarios. I've provided a suggestion to make the implementation more robust by ensuring layout consistency between the data and updates tensors.
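For orientation: FRelaxInferLayout is the per-operator hook that layout-conversion passes invoke to decide input and output layouts. A hedged sketch of its signature, inferred from the implementation under review rather than copied from TVM headers (the canonical typedef lives in TVM's layout-inference utilities, and its exact spelling there may differ):

```cpp
// Sketch only: signature inferred from InferLayoutScatterND below; the exact
// functor type and header location are assumptions, not quoted from TVM.
using FRelaxInferLayout = ffi::TypedFunction<InferLayoutOutput(
    const Call& call,
    const ffi::Map<ffi::String, ffi::Array<ffi::String>>& desired_layouts,
    const VarLayoutMap& var_layout_map)>;
```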
```cpp
InferLayoutOutput InferLayoutScatterND(
    const Call& call, const ffi::Map<ffi::String, ffi::Array<ffi::String>>& desired_layouts,
    const VarLayoutMap& var_layout_map) {
  ICHECK(NoDesiredLayout(call, desired_layouts));

  LayoutDecision data_layout = GetLayoutDecision(var_layout_map, call->args[0]);
  LayoutDecision indices_layout = GetLayoutDecision(var_layout_map, call->args[1]);
  LayoutDecision updates_layout = GetLayoutDecision(var_layout_map, call->args[2]);

  LayoutDecision layout = data_layout;

  if (layout->layout.ndim() != layout->layout.ndim_primal()) {
    const auto* tensor_sinfo = GetStructInfoAs<TensorStructInfoNode>(call->args[0]);
    ICHECK(tensor_sinfo != nullptr) << "Invalid Call";
    ICHECK(!tensor_sinfo->IsUnknownNdim()) << "Only support static ndim for now";
    int ndim = tensor_sinfo->ndim;
    layout = LayoutDecision(InitialLayout(ndim));
  }

  return InferLayoutOutput({layout, indices_layout, updates_layout}, {layout}, Attrs(call->attrs));
}
```
The current implementation of InferLayoutScatterND doesn't enforce any layout on the updates tensor. This can lead to shape mismatches after layout conversion if updates has a layout that is incompatible with the new layout of data, as the updates tensor's shape is dependent on the data tensor's shape. For instance, if data is transformed to NHWC, but updates remains in NCHW, InferStructInfo will likely fail.
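To make the hazard concrete, here is a small standalone sketch using hypothetical shapes (illustrative only, not code from this PR or from TVM). scatter_nd requires updates.shape == indices.shape[:-1] ++ data.shape[indices.shape[-1]:], so changing the layout of data also changes the updates shape it expects:

```cpp
// Standalone illustration with hypothetical shapes (not TVM code).
// scatter_nd requires:
//   updates.shape == indices.shape[:-1] ++ data.shape[indices.shape[-1]:]
#include <cassert>
#include <iostream>
#include <vector>

std::vector<int> ExpectedUpdatesShape(const std::vector<int>& data_shape,
                                      const std::vector<int>& indices_shape) {
  // Leading dims come from indices (minus its last axis)...
  std::vector<int> shape(indices_shape.begin(), indices_shape.end() - 1);
  // ...trailing dims come from data, past the indexed prefix.
  shape.insert(shape.end(), data_shape.begin() + indices_shape.back(), data_shape.end());
  return shape;
}

int main() {
  std::vector<int> indices = {2, 1};  // two scatter points along data's first axis

  // With data in NCHW (1, 4, 8, 8), updates must be (2, 4, 8, 8).
  auto need_nchw = ExpectedUpdatesShape({1, 4, 8, 8}, indices);
  // After data is converted to NHWC (1, 8, 8, 4), updates must be (2, 8, 8, 4).
  auto need_nhwc = ExpectedUpdatesShape({1, 8, 8, 4}, indices);

  assert(need_nchw != need_nhwc);  // keeping updates in NCHW no longer matches
  for (int d : need_nhwc) std::cout << d << ' ';
  std::cout << '\n';
}
```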
I suggest a more robust implementation that enforces the same layout for data and updates when they have the same rank. For more complex cases, like different ranks or when data has a sub-indexed layout, it safely falls back to the initial layout for both tensors. This will prevent potential compilation errors.
```cpp
InferLayoutOutput InferLayoutScatterND(
    const Call& call, const ffi::Map<ffi::String, ffi::Array<ffi::String>>& desired_layouts,
    const VarLayoutMap& var_layout_map) {
  ICHECK(NoDesiredLayout(call, desired_layouts));
  LayoutDecision data_layout = GetLayoutDecision(var_layout_map, call->args[0]);
  LayoutDecision indices_layout = GetLayoutDecision(var_layout_map, call->args[1]);
  LayoutDecision updates_layout = GetLayoutDecision(var_layout_map, call->args[2]);

  const auto* data_sinfo = GetStructInfoAs<TensorStructInfoNode>(call->args[0]);
  ICHECK(data_sinfo != nullptr) << "Invalid Call";
  const auto* updates_sinfo = GetStructInfoAs<TensorStructInfoNode>(call->args[2]);
  ICHECK(updates_sinfo != nullptr) << "Invalid Call";

  bool same_rank = !data_sinfo->IsUnknownNdim() && !updates_sinfo->IsUnknownNdim() &&
                   data_sinfo->ndim == updates_sinfo->ndim;

  if (data_layout->layout.ndim() != data_layout->layout.ndim_primal() || !same_rank) {
    // Fall back to the initial layout if data has a sub-indexed layout or the
    // ranks differ. The different-rank case is complex to handle, so we fall
    // back for safety.
    ICHECK(!data_sinfo->IsUnknownNdim()) << "Only support static ndim for now";
    data_layout = LayoutDecision(InitialLayout(data_sinfo->ndim));
    if (!updates_sinfo->IsUnknownNdim()) {
      updates_layout = LayoutDecision(InitialLayout(updates_sinfo->ndim));
    } else {
      updates_layout = LayoutDecision::InitUnknownDim();
    }
  } else {
    // Same rank, not sub-indexed: enforce the same layout for data and updates.
    updates_layout = data_layout;
  }
  return InferLayoutOutput({data_layout, indices_layout, updates_layout}, {data_layout},
                           Attrs(call->attrs));
}
```
Thanks!
Why
The scatter_nd operator was missing the FRelaxInferLayout attribute, which is needed for layout transformation during model optimization.
How
Add an InferLayoutScatterND function (shown in the diff above) and register it as the FRelaxInferLayout attribute of the scatter_nd operator.
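The usual TVM pattern is to define the inference function and attach it to the operator. A hedged sketch of that registration, assuming the standard TVM_REGISTER_OP idiom; the PR's actual registration site and neighboring attributes may differ:

```cpp
// Sketch only: attaches InferLayoutScatterND as the FRelaxInferLayout
// attribute of relax.scatter_nd, following the registration idiom used
// elsewhere in TVM's operator files.
TVM_REGISTER_OP("relax.scatter_nd")
    .set_attr<FRelaxInferLayout>("FRelaxInferLayout", InferLayoutScatterND);
```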