Replace op_schema with op_signature #2771
Conversation
Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>
❌ 2 Tests Failed: both failures are known ❄️ flaky tests.
Pull request overview
This PR refactors how operator schemas are handled by replacing direct usage of onnx.defs.OpSchema with a new _schemas.OpSignature abstraction throughout the autocast, converter, and evaluator modules. This change improves modularity and encapsulation when dealing with operator signatures.
Changes:
- Replaced `OpSchema` with `OpSignature` in autocast functions for input type casting
- Updated converter to pass `op_signature` instead of `op_schema` to autocast functions
- Refactored evaluator methods to be private (`_adapt_inputs`, `_adapt_attributes`, `_adapt_outputs`) and removed the `use_graph_attribute` method along with its associated conditional logic
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| onnxscript/_internal/autocast.py | Updated to use OpSignature instead of OpSchema, including parameter filtering and type constraint access changes |
| onnxscript/_internal/converter.py | Changed to pass op_signature instead of op_schema to autocast functions |
| onnxscript/_internal/evaluator.py | Made adapter methods private, removed use_graph_attribute method, and simplified _adapt_attributes to always use graph attributes |
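As a rough illustration of the kind of change described in the table above; the `Param`/`OpSignature` fields and the `cast_inputs` helper below are assumptions for the sketch, not onnxscript's actual API:

```python
# Sketch only: illustrates the shape of the change from OpSchema to
# OpSignature in the autocast helpers. All names here are assumed.
from dataclasses import dataclass
from typing import Any, Sequence, Tuple


@dataclass
class Param:
    name: str
    type_constraint: str  # e.g. "T" in the op's type constraints


@dataclass
class OpSignature:
    domain: str
    name: str
    params: Sequence[Param]


def cast_inputs(op_signature: OpSignature, args: Sequence[Any]) -> Tuple:
    """Pair each argument with its declared parameter before casting.

    Previously a helper like this would take an onnx.defs.OpSchema and read
    schema.inputs / schema.type_constraints directly; here the same
    information comes from the OpSignature abstraction instead.
    """
    # Filter the formal parameters down to the arguments actually provided.
    params = list(op_signature.params)[: len(args)]
    return tuple((p.type_constraint, arg) for p, arg in zip(params, args))
```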
Comments suppressed due to low confidence (1)
onnxscript/_internal/evaluator.py:548
- The `use_graph_attribute` method in `ORTMixedEvaluator` is now dead code since it's no longer called after removing the conditional logic in `_adapt_attributes`. This method should be removed from the `ORTMixedEvaluator` class.
```python
def use_graph_attribute(self, schema: onnx.defs.OpSchema) -> bool:
    return _schema_id(schema) not in self._python_ops
```
Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>
Pull request overview
Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.
Comments suppressed due to low confidence (1)
onnxscript/_internal/values.py:364
- The docstring still mentions a 'functions' parameter that was removed from the method signature. This outdated documentation should be removed.
```python
    functions: A list of functions to include in the model.
        By default, all functions called at least once are included.
```
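The fix amounts to deleting the stale entry; a before/after sketch, assuming the enclosing method is `to_model_proto` (abbreviated here):

```python
# Before (sketch): the docstring still documents the removed parameter.
def to_model_proto(self):
    """Convert this function into a ModelProto.

    functions: A list of functions to include in the model.
        By default, all functions called at least once are included.
    """


# After (sketch): the stale 'functions' entry is simply dropped.
def to_model_proto(self):  # noqa: F811 (intentional redefinition in this sketch)
    """Convert this function into a ModelProto."""
```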
Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>
Pull request overview
Copilot reviewed 13 out of 13 changed files in this pull request and generated 4 comments.
```python
# Duplicate the graph to create the model
main_graph = self.function_ir.graph.clone()
```
I wonder if this is necessary. I guess this is being used to handle the functions down below better. But in practice, I don't think I have seen uses where functions are passed in, whereas we do call `to_model_proto` commonly.

Aren't we making the common case more expensive (cloning the graph before serializing it instead of just serializing it) in order to handle a usage I have not seen?

I guess two other options are: (a) handle the special case where no functions are specified more efficiently, or (b) break compatibility and even drop the `functions` parameter, or design a better API.
`to_model_proto` is called very rarely, so I don't think this will be costly. I cloned because the graph is subsequently modified (with updated opset imports, etc.).

I do agree we should do (b), but that can be done independently of the copying.
I call `to_model_proto` all the time ... admittedly that is not production code, but creating test models for various testing purposes. Still, I think it is better to get the API right before trying to fix any internal implementation issue, especially if it is going to affect the implementation choice. In this case, the API change can make the copy unnecessary. (If and when we expand the scope of onnxscript to define entire models, I think conversion to Model (proto or IR) will become more important.)
Still, copying the IR object is cheap. I don't think it will make any material runtime difference, but the duplication makes the process more robust.
Removed clone and the functions parameter.
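A hedged sketch of that resolution: serializing the function's graph directly rather than cloning it first. `function_ir`, `graph`, and `serialize_model` below are stand-ins, not onnxscript's actual internals:

```python
# Sketch only: mirrors the resolution described above. All names here are
# assumptions made for illustration.
def serialize_model(graph):
    """Hypothetical stand-in for the real graph-to-ModelProto conversion."""
    return {"graph": graph}


class OnnxFunction:
    def __init__(self, function_ir):
        self.function_ir = function_ir

    def to_model_proto(self):
        # Before: main_graph = self.function_ir.graph.clone()
        # After: no clone and no `functions` parameter; the graph is used
        # as-is since callers no longer inject extra functions.
        main_graph = self.function_ir.graph
        return serialize_model(main_graph)
```

Under this shape, the common case pays no cloning cost, which addresses the concern raised above.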
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@gramalingam all tests passed
Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>
This pull request refactors how operator schemas are handled throughout the autocast, converter, and evaluator modules. The main change is replacing direct usage of `OpSchema` with a new `_schemas.OpSignature` abstraction, leading to more consistent and modular code when dealing with operator signatures, especially for input casting and evaluation. Several related methods are renamed and refactored for clarity and encapsulation.

Important changes
- The `Evaluator` interface now defines `eval_op` on ONNX ops. The old `eval` was removed in favor of the more flexible `eval_op`. The exporter's eval will continue to function with compatible logic in `class Op` (see the sketch after this list).
- The `op_schema` properties from Functions are removed.
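A minimal sketch of what the reshaped evaluator interface could look like; the method signatures and the `Op` fallback below are assumptions based on this description, not onnxscript's actual code:

```python
# Sketch only: an Evaluator that exposes eval_op for ONNX ops, with Op
# keeping a compatible eval entry point. All names and signatures here
# are assumed for illustration.
from typing import Any, Mapping, Sequence


class Evaluator:
    def eval_op(
        self,
        op_signature: Any,  # an OpSignature describing the ONNX op
        inputs: Sequence[Any],
        attributes: Mapping[str, Any],
    ) -> Any:
        """Evaluate a single ONNX op given its signature."""
        raise NotImplementedError


class Op:
    """An ONNX op whose exporter-facing eval routes through eval_op."""

    def __init__(self, evaluator: Evaluator, signature: Any):
        self.evaluator = evaluator
        self.signature = signature

    def eval(self, inputs: Sequence[Any], attributes: Mapping[str, Any]) -> Any:
        # Compatible logic: the old-style call is forwarded to eval_op.
        return self.evaluator.eval_op(self.signature, inputs, attributes)
```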
Operator signature abstraction and autocast refactor:

- Replaced `OpSchema` with `_schemas.OpSignature` in `onnxscript/_internal/autocast.py`, updating all relevant function signatures and internal logic to use the new abstraction. This includes changing how input parameters are filtered and type constraints are accessed.
AST Converter integration:

- Updated the converter (`onnxscript/_internal/converter.py`) to pass `op_signature` instead of `op_schema` to autocast functions, ensuring compatibility with the new signature abstraction.
Evaluator refactor and encapsulation:

- Refactored the evaluator (`onnxscript/_internal/evaluator.py`) to use `_adapt_inputs`, `_adapt_attributes`, and `_adapt_outputs` methods, encapsulating the logic for adapting inputs and outputs and removing unused or redundant methods. Operator signatures are now consistently adapted from `OpSchema` using `_schemas.OpSignature`.

Additionally: