fix: handle missing 'tools' key in LLMToolSelectorMiddleware response #34369
+112 −9
Description:
This PR fixes a `KeyError: 'tools'` that occurs in `LLMToolSelectorMiddleware` when the LLM returns a response missing the expected `'tools'` key. The middleware now gracefully handles malformed or incomplete responses by falling back to using all available tools (respecting the `max_tools` limit) instead of raising an exception.

Problem:
When using `LLMToolSelectorMiddleware` with an agent, the middleware calls an LLM to select relevant tools before the main model invocation. However, if the LLM response doesn't strictly follow the structured output schema (e.g., missing the `'tools'` key or having an invalid type), the code would raise a `KeyError` at line 229 in `tool_selection.py`, interrupting all downstream processing.

This issue occurs intermittently, especially with complex prompts or edge cases where the LLM doesn't strictly adhere to the expected schema format.
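For illustration, a minimal hypothetical example of the failure mode; the response shapes below are invented, and only the `KeyError` behavior is taken from the issue report:

```python
# Hypothetical response shapes, for illustration only.
good = {"tools": ["search", "calculator"]}  # follows the structured output schema
bad = {"selected_tools": ["search"]}        # LLM drifted from the schema

print(good["tools"])          # ['search', 'calculator']
try:
    print(bad["tools"])
except KeyError as exc:       # this is the error the PR fixes
    print(f"KeyError: {exc}")
```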
Solution:
The fix adds defensive checks in `_process_selection_response()` to handle three scenarios (a sketch follows this list):

1. **Missing `'tools'` key**: When the response dictionary doesn't contain a `'tools'` key, the middleware logs a warning and falls back to using all available tools (up to the `max_tools` limit).
2. **Invalid `'tools'` type**: When the `'tools'` key exists but its value is not a list, the middleware logs a warning and falls back to using all available tools.
3. **Normal case**: When the response is valid, the existing logic continues to work as before.
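A minimal sketch of the defensive handling described above, under a simplified signature: the parameter names, the logger, and tool objects exposing a `.name` attribute are assumptions for illustration, not the actual code in `tool_selection.py`:

```python
import logging

logger = logging.getLogger(__name__)


def _process_selection_response(
    response: dict,
    available_tools: list,
    max_tools: int | None = None,
) -> list:
    """Return the tools selected by the LLM, falling back to all tools on malformed output."""
    selected = response.get("tools")

    if selected is None:
        # Scenario 1: the 'tools' key is missing entirely.
        logger.warning(
            "Tool-selection response is missing the 'tools' key; "
            "falling back to all available tools."
        )
        return available_tools[:max_tools] if max_tools else available_tools

    if not isinstance(selected, list):
        # Scenario 2: 'tools' is present but its value is not a list.
        logger.warning(
            "Tool-selection response has a non-list 'tools' value (%s); "
            "falling back to all available tools.",
            type(selected).__name__,
        )
        return available_tools[:max_tools] if max_tools else available_tools

    # Scenario 3: valid response; keep only the tools the LLM actually named.
    selected_names = set(selected)
    return [tool for tool in available_tools if tool.name in selected_names]
```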
This approach ensures backward compatibility while making the middleware more resilient to LLM response variations. The fallback behavior (using all available tools) is reasonable because:
- The `max_tools` limit is still respected.
- `always_include` tools are still added as expected.
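For context, a hedged usage sketch of how the middleware is typically wired into an agent; the import paths and constructor arguments (`model`, `max_tools`, `always_include`) are assumed from this description and may differ from the released API:

```python
from langchain.agents import create_agent
from langchain.agents.middleware import LLMToolSelectorMiddleware
from langchain_core.tools import tool


@tool
def search(query: str) -> str:
    """Search the web for a query."""
    return f"results for {query}"


@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    return str(eval(expression))  # illustration only


agent = create_agent(
    model="openai:gpt-4o",
    tools=[search, calculator],
    middleware=[
        LLMToolSelectorMiddleware(
            model="openai:gpt-4o-mini",  # smaller model used only for tool selection
            max_tools=1,                 # cap on how many tools the selector may pick
            always_include=["search"],   # forwarded even when the selector falls back
        )
    ],
)
```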
Changes:

- Updated `_process_selection_response()` in `libs/langchain_v1/langchain/agents/middleware/tool_selection.py` to check for missing or invalid `'tools'` keys before accessing them.
- Added tests verifying that missing or invalid `'tools'` keys are handled correctly (a hypothetical test sketch is included at the end of this description).

Issue: Fixes #34358 (🐛 KeyError: 'tools' in LLMToolSelectorMiddleware when model response misses expected key)
Dependencies: None
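For completeness, a hypothetical test sketch in the spirit of the added tests; it exercises the simplified `_process_selection_response()` sketch above, not the real implementation:

```python
def test_missing_or_invalid_tools_key_falls_back_to_all_tools():
    class FakeTool:
        def __init__(self, name: str) -> None:
            self.name = name

    tools = [FakeTool("search"), FakeTool("calculator"), FakeTool("weather")]

    # Missing 'tools' key: fall back to all tools, capped at max_tools.
    assert _process_selection_response({}, tools, max_tools=2) == tools[:2]

    # Non-list 'tools' value: same fallback, no cap when max_tools is None.
    assert _process_selection_response({"tools": "search"}, tools, max_tools=None) == tools

    # Valid response: only the named tools are returned.
    assert _process_selection_response({"tools": ["search"]}, tools, max_tools=None) == tools[:1]
```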