Releases: Talmaj/onnx2pytorch
v0.5.3
Release Summary
BatchNorm Fixes & Improvements
- Fixed critical bias bug: Corrected line 59 in `batchnorm.py`, where the bias was incorrectly set to `scale` instead of `B`
- Fixed inference mode for batch_size > 1: Explicitly set BatchNorm to eval mode so it uses running statistics (ONNX inference behavior) #44
- Removed experimental flag: BatchNorm now works correctly with `batch_size > 1` by default #35
- Added comprehensive tests: Validated against onnxruntime with various batch sizes (1, 2, 4, 8), channels, spatial dimensions, epsilon values, and momentum values
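To make the fixed behavior concrete, here is a minimal numpy sketch of ONNX BatchNormalization inference semantics: running statistics are used regardless of batch size, and the per-channel bias `B` is added after scaling (the term that was previously overwritten by `scale`). The function name is illustrative, not the library's API.

```python
import numpy as np

def batchnorm_inference(x, scale, B, running_mean, running_var, eps=1e-5):
    """ONNX BatchNormalization in inference mode: normalizes with the stored
    running statistics (never batch statistics), then applies scale and bias."""
    # Reshape per-channel parameters so they broadcast over (N, C, ...) inputs.
    shape = (1, -1) + (1,) * (x.ndim - 2)
    scale = scale.reshape(shape)
    B = B.reshape(shape)  # the bias term restored by the fix (was `scale`)
    mean = running_mean.reshape(shape)
    var = running_var.reshape(shape)
    return scale * (x - mean) / np.sqrt(var + eps) + B
```

Because only running statistics appear in the formula, the output for a given input is identical for any batch size, which is what the eval-mode fix guarantees.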
ReduceSumSquare
- Implemented ReduceSumSquare operator: Computes `sum(x^2)` along specified axes
- Supports both opset versions: Handles axes as an attribute (opset < 13) and as an optional input (opset >= 13)
- Comprehensive test coverage: 16+ parametrized test cases validating against onnxruntime with different input shapes, axes, and keepdims settings
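A reference implementation of the ReduceSumSquare semantics described above can be sketched in numpy (the function name is illustrative; the library's operator produces the same values via PyTorch):

```python
import numpy as np

def reduce_sum_square(x, axes=None, keepdims=True):
    """Reference ReduceSumSquare: sum of squared elements along `axes`.
    axes=None reduces over all axes (the ONNX default)."""
    axis = tuple(axes) if axes is not None else None
    return np.sum(np.square(x), axis=axis, keepdims=keepdims)
```

Whether `axes` arrives as an attribute (opset < 13) or as an optional input (opset >= 13), the reduction itself is the same computation.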
LogSoftmax
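As a reference for the LogSoftmax semantics named above, here is a minimal numpy sketch of the standard numerically stable formulation (a generic sketch, not the library's implementation):

```python
import numpy as np

def log_softmax(x, axis=-1):
    """Numerically stable log-softmax: subtract the per-axis max before
    exponentiating so large logits cannot overflow."""
    shifted = x - np.max(x, axis=axis, keepdims=True)
    return shifted - np.log(np.sum(np.exp(shifted), axis=axis, keepdims=True))
```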
v0.5.2
Release Summary
This release adds significant new operator support and includes several important bug fixes.
New Operators & Features
- LayerNorm: Full LayerNorm operator support with comprehensive tests (#74)
- GRU: Added GRU (Gated Recurrent Unit) operator (#70)
- LRN: Added Local Response Normalization operator (#53)
- AutoPad: New AutoPad operation with support for `auto_pad=SAME_UPPER` (#58)
- Control Flow: Added If operator for conditional execution (#63)
- Sequence Operations: Added Optional and SequenceConstruct operators
- RandomUniformLike: Added RandomUniformLike operator (#57)
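The `auto_pad=SAME_UPPER` rule mentioned above can be sketched in pure Python: the output size is `ceil(in_size / stride)`, and when the required total padding is odd, the extra cell goes at the end of the dimension (for SAME_LOWER it would go at the beginning). The helper name is illustrative, not the library's API.

```python
import math

def same_upper_pads(in_size, kernel, stride=1, dilation=1):
    """Per-dimension (pad_begin, pad_end) for ONNX auto_pad=SAME_UPPER."""
    effective_kernel = (kernel - 1) * dilation + 1
    out_size = math.ceil(in_size / stride)
    total = max((out_size - 1) * stride + effective_kernel - in_size, 0)
    pad_begin = total // 2
    pad_end = total - pad_begin  # SAME_UPPER: the odd leftover pads the end
    return pad_begin, pad_end
```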
Enhancements
- Added comprehensive support for all ONNX attributes
- Implemented ONNX to PyTorch dtypes mapping for better type conversion
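The dtype mapping works from the integer element-type codes in ONNX's `TensorProto`. A minimal sketch of the idea, shown here with numpy dtypes to stay self-contained (the library performs the analogous ONNX-to-PyTorch mapping; the dict and function names are illustrative):

```python
import numpy as np

# Illustrative mapping from ONNX TensorProto element-type codes to numpy dtypes.
ONNX_TO_NUMPY_DTYPE = {
    1: np.float32,   # TensorProto.FLOAT
    2: np.uint8,     # TensorProto.UINT8
    3: np.int8,      # TensorProto.INT8
    5: np.int16,     # TensorProto.INT16
    6: np.int32,     # TensorProto.INT32
    7: np.int64,     # TensorProto.INT64
    9: np.bool_,     # TensorProto.BOOL
    10: np.float16,  # TensorProto.FLOAT16
    11: np.float64,  # TensorProto.DOUBLE
}

def onnx_dtype_to_numpy(elem_type):
    """Look up the target dtype for an ONNX element-type code."""
    try:
        return ONNX_TO_NUMPY_DTYPE[elem_type]
    except KeyError:
        raise NotImplementedError(f"Unsupported ONNX dtype code: {elem_type}")
```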
Bug Fixes
- Fixed issue with MatMul when it's the last operation (not followed by Add) (#59)
- Removed warning for read-only tensors (#73)
Miscellaneous
- Updated .gitignore
This release significantly expands ONNX operator coverage, particularly for sequence operations, control flow, and recurrent neural networks, while improving stability and type handling.
v0.5.1
v0.5.0
- Updated the library to work with the latest onnx and pytorch versions
- Added new operators
- Switched to GitHub Actions for CI
Special thanks to the contributors!
v0.4.1
v0.4.0
- Improved memory management by removing activations that are no longer required by any following operations
- New supported operators
- Tile
- Loop
- BitShift
- Div
- Constant
- GatherND
- GlobalAveragePool
- LSTM
- MatMul
- NonMaxSuppression
- PRelu
- ReduceSum
- Scatter
- ScatterElements
- ScatterND
- ThresholdedRelu
- TopK
- Transpose
- Where
The main contributor for this release was @calvinmccarter-at-lightmatter.
v0.3.0
- MLPerf v0.7 model support
- New supported operators
- Elu
- And
- Or
- Not
- Range
- Expand
- Unsqueeze - version 13
- Squeeze - version 13
Special thanks to our first contributor, @calvinmccarter-at-lightmatter, for his contributions to this release.