How about switching OpenAI-based tests to gpt-4.1-nano?
#6361
Closed
SongChiYoung started this conversation in Ideas
Replies: 2 comments
- I think it is a good idea.
- Resolved it.
Hi all,
Many of our test cases currently use gpt-4o or gpt-4o-mini. These models are powerful, but also relatively heavy in both latency and cost, especially when running full test suites repeatedly.

Right now, this is still manageable. But looking ahead, it's easy to imagine scenarios where the cumulative cost or runtime becomes a bottleneck, especially as more tests, workflows, or contributors get added.
That’s why I’d like to suggest:
Let's switch to gpt-4.1-nano for all OpenAI-based tests.

According to OpenAI's official documentation, gpt-4.1-nano supports function calling, JSON mode, and logprobs, so structurally it covers all the core API features we rely on in tests. Since our tests are mostly structural (not evaluating model quality), the tradeoff seems minimal and the gain is long-term sustainability.
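For concreteness, here is a minimal sketch of what the switch could look like if the test model name were centralized in one place. This is not the project's actual test code: the `TEST_MODEL` constant, the `OPENAI_TEST_MODEL` environment variable, and the `get_weather` tool are hypothetical names, and the checks only exercise the structural features mentioned above (JSON mode and function calling) through the standard `openai` Python SDK.

```python
import json
import os

import pytest
from openai import OpenAI

# Hypothetical single source of truth for the model used in tests;
# switching all OpenAI-based tests would then be a one-line change here
# (or an environment variable override in CI).
TEST_MODEL = os.environ.get("OPENAI_TEST_MODEL", "gpt-4.1-nano")


@pytest.fixture(scope="module")
def client() -> OpenAI:
    # Reads OPENAI_API_KEY from the environment.
    return OpenAI()


def test_json_mode_returns_valid_json(client: OpenAI) -> None:
    # Structural check only: JSON mode should yield parseable JSON;
    # we do not evaluate the quality of the answer.
    response = client.chat.completions.create(
        model=TEST_MODEL,
        response_format={"type": "json_object"},
        messages=[
            {"role": "user", "content": "Reply with a JSON object containing a 'city' key."}
        ],
    )
    payload = json.loads(response.choices[0].message.content)
    assert isinstance(payload, dict)


def test_function_calling_emits_tool_call(client: OpenAI) -> None:
    # Structural check: the model should produce a tool call that matches
    # the declared function schema.
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ]
    response = client.chat.completions.create(
        model=TEST_MODEL,
        tools=tools,
        tool_choice="auto",
        messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    )
    tool_calls = response.choices[0].message.tool_calls
    assert tool_calls, "expected at least one tool call"
    assert tool_calls[0].function.name == "get_weather"
```

With something like this in place, moving every OpenAI-based test to a different model would be a single-line (or single environment variable) change.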
Open to feedback — happy to send a PR if folks agree.