Checked other resources
- This is a bug, not a usage question.
- I added a clear and descriptive title that summarizes this issue.
- I used the GitHub search to find a similar question and didn't find it.
- I am sure that this is a bug in LangChain rather than my code.
- The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
- This is not related to the langchain-community package.
- I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
Package (Required)
- langchain
- langchain-openai
- langchain-anthropic
- langchain-classic
- langchain-core
- langchain-cli
- langchain-model-profiles
- langchain-tests
- langchain-text-splitters
- langchain-chroma
- langchain-deepseek
- langchain-exa
- langchain-fireworks
- langchain-groq
- langchain-huggingface
- langchain-mistralai
- langchain-nomic
- langchain-ollama
- langchain-perplexity
- langchain-prompty
- langchain-qdrant
- langchain-xai
- Other / not sure / general
Example Code (Python)
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import END, START, MessagesState, StateGraph
def stream(graph, config, msg: str):
    for chunk, _ in graph.stream(
        MessagesState(messages=[HumanMessage(content=msg)]),
        config=config,
        stream_mode="messages",
    ):
        print(f"Content: {repr(chunk.content)}")

model = init_chat_model(
    "gpt-5-nano",
    streaming=True,
    reasoning_effort="minimal",
    use_responses_api=True,  # only fails with the Responses API
    # use_previous_response_id=True,  # this avoids the bug
)

def chatbot(state: MessagesState):
    response = model.invoke(state["messages"])
    return MessagesState(messages=[response])

graph = (
    StateGraph(MessagesState)
    .add_node("chatbot", chatbot)
    .add_edge(START, "chatbot")
    .add_edge("chatbot", END)
    .compile(checkpointer=InMemorySaver())
)

config = {"configurable": {"thread_id": "test"}}

print("------------ Asking for an empty response ---------")
stream(graph, config, "respond with empty string")
print("--------------Sending a placeholder response--------------")
stream(graph, config, "hi")

Error Message and Stack Trace (if applicable)
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'input[1].id': string too long. Expected a string with maximum length 64, but got a string with length 107 instead.", 'type': 'invalid_request_error', 'param': 'input[1].id', 'code': 'string_above_max_length'}}
During task with name 'chatbot' and id '.....'

Description
I expect LangChain not to error when the thread contains an empty AIMessage.
The code example may need to be rerun a few times to reproduce the error, since it only triggers when the model actually returns an empty message on the first call.
The error only occurs with the Responses API, and only when use_previous_response_id is not set. A possible workaround sketch is included below.
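For anyone hitting this before a fix lands, here is a minimal workaround sketch (my own, not part of langchain-openai): drop empty AIMessages from the state before invoking the model, so the empty assistant turn with its over-long item id is never replayed to the Responses API. It assumes the same model and MessagesState setup as in the example above and that empty assistant turns carry nothing worth resending.

from langchain_core.messages import AIMessage

def chatbot(state: MessagesState):
    # Workaround sketch: skip assistant messages with no content so they
    # are not sent back to the Responses API on the next turn.
    filtered = [
        m for m in state["messages"]
        if not (isinstance(m, AIMessage) and not m.content)
    ]
    response = model.invoke(filtered)
    return MessagesState(messages=[response])

As noted in the commented-out line in the example, passing use_previous_response_id=True to init_chat_model also avoids the error in my testing.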
System Info
System Information
OS: Darwin
OS Version: Darwin Kernel Version 25.1.0: Mon Oct 20 19:34:05 PDT 2025; root:xnu-12377.41.6~2/RELEASE_ARM64_T6041
Python Version: 3.12.9 (main, Mar 11 2025, 17:41:32) [Clang 20.1.0 ]
Package Information
langchain_core: 1.1.3
langchain: 1.1.3
langsmith: 0.4.56
langchain_model_profiles: 0.0.5
langchain_ollama: 1.0.0
langchain_openai: 1.1.1
langchain_tests: 1.0.2
langgraph_sdk: 0.2.14
Optional packages not installed
langserve
Other Dependencies
httpx: 0.28.1
jsonpatch: 1.33
langgraph: 1.0.4
numpy: 2.3.5
ollama: 0.6.1
openai: 2.9.0
openai-agents: 0.6.2
opentelemetry-api: 1.39.0
opentelemetry-exporter-otlp-proto-http: 1.39.0
opentelemetry-sdk: 1.39.0
orjson: 3.11.5
packaging: 25.0
pydantic: 2.12.5
pytest: 8.4.2
pytest-asyncio: 1.3.0
pytest-benchmark: 5.2.3
pytest-codspeed: 4.2.0
pytest-recording: 0.13.4
pytest-socket: 0.7.0
pyyaml: 6.0.3
requests: 2.32.5
requests-toolbelt: 1.0.0
rich: 14.2.0
syrupy: 4.9.1
tenacity: 9.1.2
tiktoken: 0.12.0
typing-extensions: 4.15.0
uuid-utils: 0.12.0
vcrpy: 7.0.0
zstandard: 0.25.0