sentry_sdk/integrations/pydantic_ai/patches/tools.py (9 additions, 16 deletions)
```diff
@@ -4,10 +4,7 @@
 import sentry_sdk
 
 from ..spans import execute_tool_span, update_execute_tool_span
-from ..utils import (
-    _capture_exception,
-    get_current_agent,
-)
+from ..utils import get_current_agent
 
 from typing import TYPE_CHECKING
 
@@ -73,18 +70,14 @@ async def wrapped_call_tool(
             agent,
             tool_type=tool_type,
         ) as span:
-            try:
-                result = await original_call_tool(
-                    self,
-                    call,
-                    *args,
-                    **kwargs,
-                )
-                update_execute_tool_span(span, result)
-                return result
-            except Exception as exc:
-                _capture_exception(exc)
-                raise exc from None
+            result = await original_call_tool(
+                self,
+                call,
+                *args,
+                **kwargs,
+            )
+            update_execute_tool_span(span, result)
+            return result
```
Contributor:
I wonder if removing the exception capturing altogether is the way to go. If errors happen in a user's tool, the user loses visibility into them even if they don't bubble all the way up. If e.g. a tool is consistently producing erroneous output, the dev might want to know about it and fix it, even if the whole LLM chain manages to recover.

What we could consider:

  • still capturing other exceptions, just not `ToolRetryError` (that wouldn't fix the example above, but it'd at least make sure we don't stop capturing other potentially important exceptions)
  • seeing which of the exceptions captured here will be handled by Pydantic AI internally, and capturing those with `handled=True`
  • adding an integration-level option to turn capturing tool exceptions on/off

Since I have little experience in this type of development though, it might be that this is also fine as is and no one cares about recoverable errors in their tools. 😅
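
For reference, capturing with `handled=True` would presumably go through the exception mechanism, roughly what the existing `_capture_exception` helper in `..utils` likely already does minus the flag. A minimal sketch (the `"pydantic_ai"` mechanism type string is illustrative):

```python
import sentry_sdk
from sentry_sdk.utils import event_from_exception

def _capture_exception(exc, handled=False):
    # Attach an exception mechanism so Sentry knows whether the error was
    # recovered from (handled=True) or crashed the flow (handled=False).
    event, hint = event_from_exception(
        exc,
        client_options=sentry_sdk.get_client().options,
        mechanism={"type": "pydantic_ai", "handled": handled},
    )
    sentry_sdk.capture_event(event, hint=hint)
```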

Contributor Author:
Good points; I did some more investigation.

  • Other errors bubble all the way up, so if we catch them here we're catching them multiple times, at least on the code path `_agent_graph.process_tool_calls()` -> `_agent_graph._call_tools()` -> `_agent_graph._call_tool()` -> `ToolManager.handle_call()` -> `ToolManager._call_tool()`. A few places call `process_tool_calls()`, but only `ToolRetryError` is handled, and experimentally, other errors bubble up to the top-level run methods.

  • I'm open to this. Tool authors can raise `ModelRetry` to trigger a `ToolRetryError`, or a `ToolRetryError` can result from a validation failure. Both are documented, and they're the only two ways `ToolRetryError` is raised in `ToolManager._call_tool()`. See https://ai.pydantic.dev/tools-advanced/#tool-retries.
  • We should add the option if we capture `ToolRetryError`.
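
For illustration, the first path looks like this from the tool author's side (model string and tool body are made up):

```python
from pydantic_ai import Agent, ModelRetry

agent = Agent("openai:gpt-4o")

@agent.tool_plain
def lookup_user(user_id: str) -> str:
    if not user_id.isdigit():
        # Pydantic AI wraps ModelRetry in a ToolRetryError internally and
        # feeds the message back to the model as a retry prompt.
        raise ModelRetry("user_id must be numeric, please retry")
    return f"user-{user_id}"
```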

Contributor (@sentrivana, Dec 16, 2025):
Thanks for investigating!

I'm leaning towards the following:

  • removing the broad `except Exception` and replacing it with `except ToolRetryError`, to prevent double capture of other errors
  • gating the actual `ToolRetryError` capture behind an option (unsure about the default)
  • if the option is on and we end up capturing a `ToolRetryError`, capturing it with `handled=True`

WDYT?
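
Something like this, perhaps (a sketch against the code in this patch; `capture_tool_retry_errors` is a placeholder option name and the `ToolRetryError` import path would need checking):

```python
from pydantic_ai.exceptions import ToolRetryError  # import path assumed

# Sketch of the body of wrapped_call_tool; execute_tool_span,
# original_call_tool, _capture_exception, and PydanticAIIntegration come
# from the surrounding integration code. Other exceptions are no longer
# caught here at all; they bubble up and get captured once at the top level.
with execute_tool_span(call, agent, tool_type=tool_type) as span:
    try:
        result = await original_call_tool(self, call, *args, **kwargs)
    except ToolRetryError as exc:
        integration = sentry_sdk.get_client().get_integration(PydanticAIIntegration)
        if integration is not None and integration.capture_tool_retry_errors:
            # Pydantic AI recovers from this internally, so mark it handled.
            _capture_exception(exc, handled=True)
        raise  # re-raise so Pydantic AI can run its retry logic
    update_execute_tool_span(span, result)
    return result
```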

Contributor Author:
I agree; I was already working on this approach.

I set the option to default to True, since this is useful information in my view.
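
So opting out would look something like this (option name still assumed):

```python
import sentry_sdk
from sentry_sdk.integrations.pydantic_ai import PydanticAIIntegration

sentry_sdk.init(
    dsn="...",
    integrations=[
        # Defaults to True per the above; pass False to skip capturing
        # handled ToolRetryError events.
        PydanticAIIntegration(capture_tool_retry_errors=False),
    ],
)
```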


```diff
     # No span context - just call original
     return await original_call_tool(
```