Releases: Talmaj/onnx2pytorch

v0.5.3

04 Nov 20:44

Release Summary

BatchNorm Fixes & Improvements

  • Fixed critical bias bug: Corrected line 59 in batchnorm.py where bias was incorrectly set to scale instead of B
  • Fixed inference mode for batch_size > 1: Explicitly set BatchNorm to eval mode to use running statistics (ONNX inference behavior) #44
  • Removed experimental flag: BatchNorm now works correctly with batch_size > 1 by default #35
  • Added comprehensive tests: Validated against onnxruntime with various batch sizes (1, 2, 4, 8), channels, spatial dimensions, epsilon values, and momentum values
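The eval-mode behavior described above can be sketched as follows. This is an illustrative example, not the library's actual code: ONNX BatchNormalization at inference time always normalizes with the stored running statistics, so the equivalent PyTorch module must be in eval mode regardless of batch size.

```python
import torch
import torch.nn as nn

# Sketch (assumed names, not onnx2pytorch internals): in eval mode,
# BatchNorm uses running_mean/running_var instead of per-batch statistics,
# which matches ONNX inference behavior for any batch size.
bn = nn.BatchNorm2d(3)
bn.eval()  # use running statistics, not batch statistics

x = torch.randn(4, 3, 8, 8)  # batch_size > 1 behaves the same as batch_size 1
with torch.no_grad():
    y = bn(x)

# Eval-mode output: (x - running_mean) / sqrt(running_var + eps) * weight + bias
expected = (x - bn.running_mean.view(1, -1, 1, 1)) / torch.sqrt(
    bn.running_var.view(1, -1, 1, 1) + bn.eps
) * bn.weight.view(1, -1, 1, 1) + bn.bias.view(1, -1, 1, 1)
```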

ReduceSumSquare

  • Implemented ReduceSumSquare operator: Computes sum(x^2) along specified axes
  • Supports both opset versions: Handles axes as attribute (opset < 13) and as optional input (opset >= 13)
  • Comprehensive test coverage: 16+ parametrized test cases validating against onnxruntime with different input shapes, axes, and keepdims settings
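The operator's semantics can be sketched in a few lines (illustrative function name and signature, not the library's actual implementation): square the input elementwise, then sum along the requested axes.

```python
import torch

# Sketch of ReduceSumSquare semantics (hypothetical helper, for illustration):
# sum(x^2) along the given axes; with no axes, reduce over all dimensions,
# matching the ONNX default.
def reduce_sum_square(x, axes=None, keepdims=True):
    if axes is None:
        axes = tuple(range(x.dim()))  # reduce over every dimension
    return torch.sum(x * x, dim=axes, keepdim=keepdims)

x = torch.arange(6.0).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]
out = reduce_sum_square(x, axes=(1,), keepdims=False)
# row sums of squares: 0+1+4 = 5 and 9+16+25 = 50
```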

LogSoftmax

  • Added LogSoftmax operator: Supports axis attribute with proper default handling (dim=-1) #49 #30
  • Added comprehensive tests: Each operator validated against onnxruntime with various input shapes, axes, and mathematical property verification
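The default-axis behavior and the mathematical property mentioned above can be sketched with PyTorch's built-in log_softmax (an illustrative example, not the library's code):

```python
import torch
import torch.nn.functional as F

# Sketch: LogSoftmax applied along the default axis -1 (the last dimension).
x = torch.tensor([[1.0, 2.0, 3.0]])
y = F.log_softmax(x, dim=-1)

# Property check: exponentiating log-probabilities recovers a distribution
# that sums to 1 along the softmax axis.
probs = torch.exp(y)
```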

v0.5.2

01 Nov 23:34

Release Summary

This release adds significant new operator support and includes several important bug fixes.

New Operators & Features

  • LayerNorm: Full LayerNorm operator support with comprehensive tests (#74)
  • GRU: Added GRU (Gated Recurrent Unit) operator (#70)
  • LRN: Added Local Response Normalization operator (#53)
  • AutoPad: New AutoPad operation with support for auto_pad=SAME_UPPER (#58)
  • Control Flow: Added If operator for conditional execution (#63)
  • Sequence Operations: Added Optional and SequenceConstruct operators
  • RandomUniformLike: Added RandomUniformLike operator (#57)

Enhancements

  • Added comprehensive support for all ONNX attributes
  • Implemented ONNX to PyTorch dtypes mapping for better type conversion

Bug Fixes

  • Fixed an issue with MatMul when it is the last operation (not followed by Add) (#59)
  • Removed warning for read-only tensors (#73)

Miscellaneous

  • Updated .gitignore

This release significantly expands ONNX operator coverage, particularly for sequence operations, control flow, and recurrent neural networks, while improving stability and type handling.

v0.5.1

13 Nov 09:18

  • Added new operators
    • Hardsigmoid #65
    • Hardswish #60

v0.5.0

17 Sep 09:43

  • Updated the library to work with the latest onnx and pytorch versions
  • Added new operators
  • Switched to GitHub Actions for CI

Special thanks to the contributors!

v0.4.1

14 Nov 16:53

  • Fix compatibility with torch=1.10.0
  • Fix Clip operator when min and max are None
  • Fix from onnx2pytorch import __version__
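The Clip fix can be sketched as follows (an illustrative helper, not the library's actual code): when a bound is absent in the ONNX node, it simply isn't applied, and when both are absent the operation is the identity.

```python
import torch

# Sketch of Clip with optional bounds (hypothetical helper name):
# torch.clamp accepts None for either bound, but requires at least one,
# so the both-None case is handled separately as a no-op.
def clip(x, min_val=None, max_val=None):
    if min_val is None and max_val is None:
        return x  # no bounds given: identity
    return torch.clamp(x, min=min_val, max=max_val)

x = torch.tensor([-2.0, 0.5, 3.0])
upper_only = clip(x, max_val=1.0)  # min is None: only the upper bound applies
unbounded = clip(x)                # both None: input passes through unchanged
```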

v0.4.0

07 Oct 18:34

  • Improve memory management by removing activations that are no longer required by any following operations
  • New supported operators
    • Tile
    • Loop
    • BitShift
    • Div
    • Constant
    • GatherND
    • GlobalAveragePool
    • LSTM
    • MatMul
    • NonMaxSuppression
    • PRelu
    • ReduceSum
    • Scatter
    • ScatterElements
    • ScatterND
    • ThresholdedRelu
    • TopK
    • Transpose
    • Where

The main contributor for this release was @calvinmccarter-at-lightmatter

v0.3.0

12 May 21:08

  • MLPerf v0.7 model support
  • New operators support
    • Elu
    • And
    • Or
    • Not
    • Range
    • Expand
    • Unsqueeze - version 13
    • Squeeze - version 13

Special thanks to our first contributor @calvinmccarter-at-lightmatter for his contributions to this release.