2 changes: 1 addition & 1 deletion AGENTS.md
Original file line number Diff line number Diff line change
@@ -24,7 +24,7 @@ agentic architectures that range from simple tasks to complex workflows.
interacts with various services like session management, artifact storage,
and memory, and integrates with application-wide plugins. The runner
provides different execution modes: `run_async` for asynchronous execution
in production, `run_live` for bi-directional streaming interaction, and
in production, `run_live` for bidirectional streaming interaction, and
`run` for synchronous execution suitable for local testing and debugging. At
the end of each invocation, it can perform event compaction to manage
session history size.
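The hunk above names the runner's three entry points. As a loose, hypothetical sketch of that pattern only (the names mirror the description, not the real ADK `Runner` signatures), a runner offering both asynchronous and synchronous execution might look like:

```python
import asyncio
from typing import AsyncIterator, Iterator


class ToyRunner:
  """Hypothetical stand-in for the runner described above, not the ADK class."""

  async def run_async(self, message: str) -> AsyncIterator[str]:
    # Asynchronous mode, as used in production serving.
    for part in (f"echo: {message}", "[invocation complete]"):
      yield part

  def run(self, message: str) -> Iterator[str]:
    # Synchronous mode for local testing/debugging: drain run_async.
    async def _collect() -> list[str]:
      return [event async for event in self.run_async(message)]

    yield from asyncio.run(_collect())


events = list(ToyRunner().run("hi"))
```

The design point is that the synchronous `run` is just a convenience wrapper that drives the async implementation to completion, so both modes share one code path.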
@@ -32,7 +32,7 @@ response. This keeps the turn events small, saving context space.
the *next* request to the LLM. This makes the report data available
immediately, allowing the agent to summarize it or answer questions in the
same turn, as seen in the logs. This artifact is only appended for that
round and not saved to session. For furtuer rounds of conversation, it will
round and not saved to session. For further rounds of conversation, it will
be removed from context.
3. **Loading on Demand**: The `CustomLoadArtifactsTool` enhances the default
`load_artifacts` behavior.
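The hunk above describes an artifact that is appended only to the *next* LLM request and never persisted to the session. A minimal sketch of that one-round-only pattern (hypothetical names, not the actual ADK flow):

```python
def build_request(session_history, ephemeral_artifact=None):
  # Session events always carry over; the artifact is injected only into the
  # outgoing request and is never written back into the stored history.
  request = list(session_history)
  if ephemeral_artifact is not None:
    request.append(("artifact", ephemeral_artifact))
  return request


history = [("user", "generate the report")]
same_turn = build_request(history, ephemeral_artifact="report contents")
next_turn = build_request(history)  # the artifact is gone in later rounds
```

Because `history` itself is never mutated, later rounds rebuild their context without the artifact, matching the "removed from context" behavior described in the diff.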
2 changes: 1 addition & 1 deletion contributing/samples/gepa/adk_agent.py
@@ -159,7 +159,7 @@ def _adk_agent(


class _UserAgent(base_agent.BaseAgent):
"""An agent that wraps the provided environment and simulates an user."""
"""An agent that wraps the provided environment and simulates a user."""

env: Env

2 changes: 1 addition & 1 deletion contributing/samples/gepa/tau_bench_agent.py
@@ -103,7 +103,7 @@ def solve(
max_num_steps: The maximum number of steps to run the agent.

Returns:
The result of the solve.
The result of the solve function.

Raises:
- ValueError: If the LLM inference failed.
4 changes: 2 additions & 2 deletions src/google/adk/agents/llm_agent.py
@@ -461,7 +461,7 @@ async def _run_async_impl(
self.__maybe_save_output_to_state(event)
yield event
if ctx.should_pause_invocation(event):
# Do not pause immediately, wait until the long running tool call is
# Do not pause immediately, wait until the long-running tool call is
# executed.
should_pause = True
if should_pause:
@@ -471,7 +471,7 @@
events = ctx._get_events(current_invocation=True, current_branch=True)
if events and any(ctx.should_pause_invocation(e) for e in events[-2:]):
return
# Only yield an end state if the last event is no longer a long running
# Only yield an end state if the last event is no longer a long-running
# tool call.
ctx.set_agent_state(self.name, end_of_agent=True)
yield self._create_agent_state_event(ctx)
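The comments changed above describe deferring a pause until the long-running tool call has executed. That control flow — note the pause request when it appears, finish draining the current events, then act — can be sketched roughly like this (a toy model, not the actual `_run_async_impl`):

```python
def drain_then_pause(events, pause_trigger):
  """Toy model of the deferral: seeing the trigger sets a flag, but emission
  continues so the long-running tool call still gets executed."""
  emitted = []
  should_pause = False
  for event in events:
    emitted.append(event)
    if event == pause_trigger:
      should_pause = True  # remember the request, but do not stop mid-stream
  if should_pause:
    emitted.append("<paused>")
  return emitted


trace = drain_then_pause(["tool_call", "tool_response"], pause_trigger="tool_call")
```

The flag-then-act shape is what keeps the tool response from being cut off by an immediate pause.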
4 changes: 2 additions & 2 deletions src/google/adk/cli/adk_web_server.py
@@ -205,7 +205,7 @@ class RunAgentRequest(common.BaseModel):
new_message: types.Content
streaming: bool = False
state_delta: Optional[dict[str, Any]] = None
# for resume long running functions
# for resuming long-running functions
invocation_id: Optional[str] = None


@@ -394,7 +394,7 @@ def _setup_gcp_telemetry(
# TODO - use trace_to_cloud here as well once otel_to_cloud is no
# longer experimental.
enable_cloud_tracing=True,
# TODO - reenable metrics once errors during shutdown are fixed.
# TODO - re-enable metrics once errors during shutdown are fixed.
enable_cloud_metrics=False,
enable_cloud_logging=True,
google_auth=(credentials, project_id),
2 changes: 1 addition & 1 deletion src/google/adk/evaluation/eval_config.py
@@ -53,7 +53,7 @@ class EvalConfig(BaseModel):
In the sample below, `tool_trajectory_avg_score`, `response_match_score` and
`final_response_match_v2` are the standard eval metric names, represented as
keys in the dictionary. The values in the dictionary are the corresponding
criterions. For the first two metrics, we use simple threshold as the criterion,
criteria. For the first two metrics, we use a simple threshold as the criterion;
the third one uses `LlmAsAJudgeCriterion`.
{
"criteria": {
@@ -62,7 +62,7 @@

# Definition of Conversation History
The Conversation History is the actual dialogue between the User Simulator and the Agent.
The Conversation History may not be complete, but the exsisting dialogue should adhere to the Conversation Plan.
The Conversation History may not be complete, but the existing dialogue should adhere to the Conversation Plan.
The Conversation History may contain instances where the User Simulator troubleshoots an incorrect/inappropriate response from the Agent in order to enforce the Conversation Plan.
The Conversation History is finished only when the User Simulator outputs `{stop_signal}` in its response. If this token is missing, the conversation between the User Simulator and the Agent has not finished, and more turns can be generated.
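The `{stop_signal}` convention above can be sketched as a simple termination check. The token value here is an assumption for illustration; the real prompt substitutes `{stop_signal}` at render time:

```python
STOP_SIGNAL = "###STOP###"  # assumed placeholder value, injected via {stop_signal}


def conversation_finished(turns):
  # Finished only once the simulated user emits the stop token; otherwise
  # more turns can still be generated.
  return any(
      role == "user_simulator" and STOP_SIGNAL in text for role, text in turns
  )


ongoing = [("user_simulator", "What is my refund status?"), ("agent", "Checking.")]
done = ongoing + [("user_simulator", "Thanks, that is all. ###STOP###")]
```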

@@ -175,7 +175,7 @@ def _parse_llm_response(response: str) -> Label:
response,
)

# If there was not match for "is_valid", return NOT_FOUND
# If there was no match for "is_valid", return NOT_FOUND
if is_valid_match is None:
return Label.NOT_FOUND

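The fallback described in the hunk above — return `NOT_FOUND` when the `"is_valid"` field never matched — can be sketched like this (the regex and label values are assumptions for illustration; only the fallback shape comes from the diff):

```python
import enum
import re


class Label(enum.Enum):
  VALID = "valid"
  INVALID = "invalid"
  NOT_FOUND = "not_found"


def parse_llm_response(response: str) -> Label:
  # Assumed pattern; the real parser's regex may differ.
  is_valid_match = re.search(r'"is_valid"\s*:\s*(true|false)', response)
  if is_valid_match is None:
    # No match for "is_valid": fall back to NOT_FOUND, as in the diff above.
    return Label.NOT_FOUND
  return Label.VALID if is_valid_match.group(1) == "true" else Label.INVALID
```

Returning a sentinel label instead of raising keeps malformed LLM output from crashing the evaluation loop.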
4 changes: 2 additions & 2 deletions src/google/adk/flows/llm_flows/functions.py
@@ -410,7 +410,7 @@ async def _run_with_trace():
function_response = altered_function_response

if tool.is_long_running:
# Allow long running function to return None to not provide function
# Allow long-running function to return None to not provide function
# response.
if not function_response:
return None
@@ -893,7 +893,7 @@ def find_matching_function_call(
)
for i in range(len(events) - 2, -1, -1):
event = events[i]
# looking for the system long running request euc function call
# looking for the system long-running request euc function call
function_calls = event.get_function_calls()
if not function_calls:
continue
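The second hunk above scans events from newest to oldest, skipping the very last event, to find a long-running function call. A toy version of that backward walk, with plain dicts standing in for ADK events:

```python
def find_matching_function_call(events, call_id):
  # Walk from the second-to-last event down to the first, mirroring the
  # diff's range(len(events) - 2, -1, -1), looking for a matching call id.
  for i in range(len(events) - 2, -1, -1):
    function_calls = events[i].get("function_calls", [])
    if not function_calls:
      continue
    if any(call["id"] == call_id for call in function_calls):
      return events[i]
  return None


event_log = [
    {"function_calls": [{"id": "fc-1"}]},
    {"function_calls": []},
    {"function_calls": [{"id": "fc-2"}]},
    {"function_calls": [{"id": "fc-3"}]},  # newest event, excluded from the scan
]
```

Excluding the newest event matters because that event is typically the response being matched, not a candidate call.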
2 changes: 1 addition & 1 deletion src/google/adk/models/gemini_llm_connection.py
@@ -132,7 +132,7 @@ async def send_realtime(self, input: RealtimeInput):
def __build_full_text_response(self, text: str):
"""Builds a full text response.

The text should not partial and the returned LlmResponse is not be
The text should not be partial and the returned LlmResponse is not
partial.

Args:
2 changes: 1 addition & 1 deletion src/google/adk/models/gemma_llm.py
@@ -323,7 +323,7 @@ def _get_last_valid_json_substring(text: str) -> tuple[bool, str | None]:
"""Attempts to find and return the last valid JSON object in a string.

This function is designed to extract JSON that might be embedded in a larger
text, potentially with introductory or concluding remarks. It will always chose
text, potentially with introductory or concluding remarks. It will always choose
the last block of valid json found within the supplied text (if it exists).

Args:
2 changes: 1 addition & 1 deletion src/google/adk/models/google_llm.py
@@ -58,7 +58,7 @@


class _ResourceExhaustedError(ClientError):
"""Represents an resources exhausted error received from the Model."""
"""Represents a resource exhaustion error received from the Model."""

def __init__(
self,
@@ -250,7 +250,7 @@ async def prepare_auth_credentials(
credential = existing_credential or self.auth_credential
# fetch credential from adk framework
# Some auth scheme like OAuth2 AuthCode & OpenIDConnect may require
# multi-step exchange:
# multistep exchange:
# client_id, client_secret -> auth_uri -> auth_code -> access_token
# the adk framework already supports exchanging for an access_token
# for other credentials, adk can also get back the credential directly
12 changes: 6 additions & 6 deletions tests/integration/fixture/home_automation_agent/agent.py
@@ -138,29 +138,29 @@ def set_device_info(


def get_temperature(location: str) -> int:
"""Get the current temperature in celsius of a location (e.g., 'Living Room', 'Bedroom', 'Kitchen').
"""Get the current temperature in Celsius of a location (e.g., 'Living Room', 'Bedroom', 'Kitchen').

Args:
location (str): The location for which to retrieve the temperature (e.g.,
'Living Room', 'Bedroom', 'Kitchen').

Returns:
int: The current temperature in celsius in the specified location, or
int: The current temperature in Celsius in the specified location, or
'Location not found' if the location does not exist.
"""
return TEMPERATURE_DB.get(location, "Location not found")


def set_temperature(location: str, temperature: int) -> str:
"""Set the desired temperature in celsius for a location.
"""Set the desired temperature in Celsius for a location.

Acceptable range of temperature: 18-30 celsius. If it's out of the range, do
Acceptable range of temperature: 18-30 Celsius. If it's out of the range, do
not call this tool.

Args:
location (str): The location where the temperature should be set.
temperature (int): The desired temperature as integer to set in celsius.
Acceptable range: 18-30 celsius.
temperature (int): The desired temperature, as an integer, to set in Celsius.
Acceptable range: 18-30 Celsius.

Returns:
str: A message indicating whether the temperature was successfully set.
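The docstring above fixes the acceptable range at 18-30 Celsius but relies on the model not calling the tool outside it. A defensive sketch (hypothetical, not the fixture's actual body) that enforces the range in code instead:

```python
TEMPERATURE_DB = {"Living Room": 22, "Bedroom": 20, "Kitchen": 23}


def set_temperature(location: str, temperature: int) -> str:
  # Enforce the documented 18-30 Celsius range instead of trusting the caller.
  if not 18 <= temperature <= 30:
    return f"{temperature} is outside the acceptable 18-30 Celsius range."
  if location not in TEMPERATURE_DB:
    return "Location not found"
  TEMPERATURE_DB[location] = temperature
  return f"Temperature in {location} set to {temperature} Celsius."
```

Checking in the tool body makes the constraint hold even when the LLM ignores the docstring instruction.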