
feat: add new agent type wo heartbeats or send message #326


Triggered via pull request September 19, 2025 20:02
Status: Failure
Total duration: 3h 55m 9s
Artifacts

test-ollama.yml

on: pull_request
test-ollama / changed-files (14s)
test-ollama / Check cache key (8s)
test-ollama / block-until-sdk-preview-finishes (0s)
Matrix: test-ollama.test-run

Annotations

12 errors and 14 warnings
test-ollama / test-run (integration_test_send_message.py): tests/integration_test_send_message.py#L1255
test_background_token_streaming_greeting_with_assistant_message[qwen2.5:7b] letta_client.core.api_error.ApiError: headers: {'date': 'Fri, 19 Sep 2025 23:56:40 GMT', 'server': 'uvicorn', 'content-length': '159', 'content-type': 'application/json'}, status_code: 503, body: {'detail': 'Background streaming requires Redis to be running. Please ensure Redis is properly configured. LETTA_REDIS_HOST: localhost, LETTA_REDIS_PORT: 6379'}
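The 503 above means the server could not reach Redis at the configured `LETTA_REDIS_HOST`/`LETTA_REDIS_PORT`. A minimal stdlib sketch for checking reachability locally before rerunning these tests (the env var names come from the error body; the TCP probe is an assumption for debugging, not how Letta verifies Redis internally):

```python
import os
import socket

# Defaults match the values shown in the error body above.
REDIS_HOST = os.getenv("LETTA_REDIS_HOST", "localhost")
REDIS_PORT = int(os.getenv("LETTA_REDIS_PORT", "6379"))

def redis_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False in CI, the Redis service container is missing from the workflow or bound to a different host/port.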
test-ollama / test-run (integration_test_send_message.py): tests/integration_test_send_message.py#L1211
test_token_streaming_agent_loop_error[qwen2.5:7b] Failed: DID NOT RAISE <class 'letta_client.core.api_error.ApiError'>
test-ollama / test-run (integration_test_send_message.py): tests/integration_test_send_message.py#L1132
test_token_streaming_greeting_without_assistant_message[qwen2.5:7b] assert False + where False = isinstance(ToolCallMessage(id='message-17223bdb-1c51-420f-a377-2370347de61f', date=datetime.datetime(2025, 9, 19, 23, 56, 36, tzinfo=TzInfo(UTC)), name=None, message_type='tool_call_message', otid='17223bdb-1c51-420f-a377-2370347de600', sender_id=None, step_id='step-6be14408-e6e4-4136-a2a0-5750ca80ae6f', is_err=None, seq_id=None, run_id=None, tool_call=ToolCall(name='send_message', arguments='{"message": "Teamwork makes the dream work"}', tool_call_id='call_2q36muxq')), ReasoningMessage)
test-ollama / test-run (integration_test_send_message.py): tests/integration_test_send_message.py#L1095
test_token_streaming_greeting_with_assistant_message[qwen2.5:7b] assert 5 == 4 + where 5 = len([ToolCallMessage(id='message-883471ac-de4b-41cb-ab27-696c23851232', date=datetime.datetime(2025, 9, 19, 23, 56, 34, tzinfo=TzInfo(UTC)), name=None, message_type='tool_call_message', otid='883471ac-de4b-41cb-ab27-696c23851200', sender_id=None, step_id='step-7c59fda4-8ed9-4ef4-a3e7-2605cf6b8d9c', is_err=None, seq_id=None, run_id=None, tool_call=ToolCall(name='send_message', arguments='{"message": "Teamwork makes the dream work"}', tool_call_id='call_wi4qlnjw')), ToolReturnMessage(id='message-d141af3b-f965-46ae-a60f-918ad5c83310', date=datetime.datetime(2025, 9, 19, 23, 56, 34, tzinfo=TzInfo(UTC)), name='send_message', message_type='tool_return_message', otid='d141af3b-f965-46ae-a60f-918ad5c83300', sender_id=None, step_id='step-7c59fda4-8ed9-4ef4-a3e7-2605cf6b8d9c', is_err=None, seq_id=None, run_id=None, tool_return='Sent message successfully.', status='success', tool_call_id='call_wi4qlnjw', stdout=None, stderr=None), AssistantMessage(id='message-8ee95dbe-fea0-4ed7-925a-fbe8c1a99b60', date=datetime.datetime(2025, 9, 19, 23, 56, 35, tzinfo=TzInfo(UTC)), name=None, message_type='assistant_message', otid='8ee95dbe-fea0-4ed7-925a-fbe8c1a99b00', sender_id=None, step_id='step-7f56645d-5742-4f52-b02b-4e898b974895', is_err=None, seq_id=None, run_id=None, content='send_message'), LettaStopReason(message_type='stop_reason', stop_reason='end_turn'), LettaUsageStatistics(message_type='usage_statistics', completion_tokens=28, prompt_tokens=1193, total_tokens=1221, step_count=2, steps_messages=None, run_ids=None)])
test-ollama / test-run (integration_test_send_message.py): tests/integration_test_send_message.py#L1053
test_step_stream_agent_loop_error[qwen2.5:7b] Failed: DID NOT RAISE <class 'letta_client.core.api_error.ApiError'>
test-ollama / test-run (integration_test_send_message.py): tests/integration_test_send_message.py#L982
test_step_streaming_greeting_without_assistant_message[qwen2.5:7b] assert False + where False = isinstance(ToolCallMessage(id='message-0439c45f-e81f-4170-a849-6d6d7315a135', date=datetime.datetime(2025, 9, 19, 23, 56, 30, tzinfo=TzInfo(UTC)), name=None, message_type='tool_call_message', otid='0439c45f-e81f-4170-a849-6d6d7315a100', sender_id=None, step_id='step-36f62240-a7f1-4b6b-935f-de192a23b3d0', is_err=None, seq_id=None, run_id=None, tool_call=ToolCall(name='send_message', arguments='{"message": "Teamwork makes the dream work"}', tool_call_id='call_m5u9mp1c')), ReasoningMessage)
test-ollama / test-run (integration_test_send_message.py): tests/integration_test_send_message.py#L954
test_step_streaming_greeting_with_assistant_message[qwen2.5:7b] assert 5 == 4 + where 5 = len([ToolCallMessage(id='message-646f2442-a87c-4d56-a9ec-58373f0c0a08', date=datetime.datetime(2025, 9, 19, 23, 56, 27, tzinfo=TzInfo(UTC)), name=None, message_type='tool_call_message', otid='646f2442-a87c-4d56-a9ec-58373f0c0a00', sender_id=None, step_id='step-8936f8bd-7495-4fe1-87a1-465bfc541c99', is_err=None, seq_id=None, run_id=None, tool_call=ToolCall(name='send_message', arguments='{"message": "Teamwork makes the dream work"}', tool_call_id='call_9jay7r5k')), ToolReturnMessage(id='message-528df348-15d1-4186-b31c-c5691238cb32', date=datetime.datetime(2025, 9, 19, 23, 56, 27, tzinfo=TzInfo(UTC)), name='send_message', message_type='tool_return_message', otid='528df348-15d1-4186-b31c-c5691238cb00', sender_id=None, step_id='step-8936f8bd-7495-4fe1-87a1-465bfc541c99', is_err=None, seq_id=None, run_id=None, tool_return='Sent message successfully.', status='success', tool_call_id='call_9jay7r5k', stdout=None, stderr=None), AssistantMessage(id='message-f4b4ab69-8084-4d2f-8574-1e3b65ce8f69', date=datetime.datetime(2025, 9, 19, 23, 56, 28, tzinfo=TzInfo(UTC)), name=None, message_type='assistant_message', otid='f4b4ab69-8084-4d2f-8574-1e3b65ce8f00', sender_id=None, step_id='step-74899bff-dd81-41b9-9c6f-9209417b96cb', is_err=None, seq_id=None, run_id=None, content="send_message has been called with the message 'Teamwork makes the dream work'. The message was sent successfully."), LettaStopReason(message_type='stop_reason', stop_reason='end_turn'), LettaUsageStatistics(message_type='usage_statistics', completion_tokens=48, prompt_tokens=1193, total_tokens=1241, step_count=2, steps_messages=None, run_ids=None)])
test-ollama / test-run (integration_test_send_message.py): tests/integration_test_send_message.py#L804
test_tool_call[qwen2.5:7b] letta_client.core.api_error.ApiError: headers: {'date': 'Fri, 19 Sep 2025 23:56:22 GMT', 'server': 'uvicorn', 'content-length': '46', 'content-type': 'application/json'}, status_code: 500, body: {'detail': 'An internal server error occurred'}
test-ollama / test-run (integration_test_send_message.py): tests/integration_test_send_message.py#L763
test_greeting_without_assistant_message[qwen2.5:7b] letta_client.core.api_error.ApiError: headers: {'date': 'Fri, 19 Sep 2025 23:56:18 GMT', 'server': 'uvicorn', 'content-length': '46', 'content-type': 'application/json'}, status_code: 500, body: {'detail': 'An internal server error occurred'}
test-ollama / test-run (integration_test_send_message.py): tests/integration_test_send_message.py#L737
test_greeting_with_assistant_message[qwen2.5:7b] httpx.ReadTimeout: timed out
test-ollama / test-run (test_providers.py::test_ollama)
Process completed with exit code 1.
test-ollama / test-run (test_providers.py::test_ollama): tests/test_providers.py#L171
test_ollama assert 0 > 0 + where 0 = len([])
test-ollama / test-run (integration_test_send_message.py): .venv/lib/python3.12/site-packages/websockets/legacy/__init__.py#L6
websockets.legacy is deprecated; see https://websockets.readthedocs.io/en/stable/howto/upgrade.html for upgrade instructions
test-ollama / test-run (integration_test_send_message.py): tests/integration_test_send_message.py#L1672
Unknown pytest.mark.flaky - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
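The unknown-mark warning above goes away once `flaky` is registered. A minimal sketch in `conftest.py` (assuming no plugin such as pytest-rerunfailures is meant to provide the mark):

```python
# conftest.py: register the custom "flaky" mark so pytest stops warning about it
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "flaky: marks a test as flaky (expected to be retried on failure)",
    )
```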
test-ollama / test-run (integration_test_send_message.py): .venv/lib/python3.12/site-packages/pydantic/fields.py#L1093
Using extra keyword arguments on `Field` is deprecated and will be removed. Use `json_schema_extra` instead. (Extra keys: 'example')
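The fix this warning points at is moving ad-hoc `Field` keywords into `json_schema_extra`. A hedged sketch (the model and field names are illustrative, not Letta's):

```python
from pydantic import BaseModel, Field

class Greeting(BaseModel):
    # Before (deprecated in pydantic v2): Field(..., example="hello")
    # After: carry the same metadata via json_schema_extra
    text: str = Field(..., json_schema_extra={"example": "hello"})
```

The extra keys still land in the generated JSON schema, just through the supported channel.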
test-ollama / test-run (integration_test_send_message.py): .venv/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py#L298
`json_encoders` is deprecated. See https://docs.pydantic.dev/2.11/concepts/serialization/#custom-serializers for alternatives
test-ollama / test-run (integration_test_send_message.py): .venv/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py#L298
`json_encoders` is deprecated. See https://docs.pydantic.dev/2.11/concepts/serialization/#custom-serializers for alternatives
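The pydantic-recommended replacement for `json_encoders` is a per-field serializer. A sketch with an illustrative model (not Letta's actual types):

```python
from datetime import datetime
from pydantic import BaseModel, field_serializer

class Event(BaseModel):
    when: datetime

    # Replaces the deprecated `json_encoders = {datetime: lambda d: d.isoformat()}`
    @field_serializer("when")
    def serialize_when(self, value: datetime) -> str:
        return value.isoformat()
```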
test-ollama / test-run (integration_test_send_message.py): .venv/lib/python3.12/site-packages/pydantic/_internal/_config.py#L323
Support for class-based `config` is deprecated, use ConfigDict instead.
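The class-based `config` deprecation has a direct one-line migration to `ConfigDict`. A sketch (model name and the `extra="forbid"` setting are illustrative):

```python
from pydantic import BaseModel, ConfigDict

class AgentState(BaseModel):
    # Replaces the deprecated nested `class Config: extra = "forbid"`
    model_config = ConfigDict(extra="forbid")

    name: str
```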
test-ollama / test-run (test_providers.py::test_ollama): .venv/lib/python3.12/site-packages/pydantic/fields.py#L1093
Using extra keyword arguments on `Field` is deprecated and will be removed. Use `json_schema_extra` instead. (Extra keys: 'example')
test-ollama / test-run (test_providers.py::test_ollama): .venv/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py#L298
`json_encoders` is deprecated. See https://docs.pydantic.dev/2.11/concepts/serialization/#custom-serializers for alternatives
test-ollama / test-run (test_providers.py::test_ollama): .venv/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py#L298
`json_encoders` is deprecated. See https://docs.pydantic.dev/2.11/concepts/serialization/#custom-serializers for alternatives
test-ollama / test-run (test_providers.py::test_ollama): .venv/lib/python3.12/site-packages/pydantic/_internal/_config.py#L323
Support for class-based `config` is deprecated, use ConfigDict instead.