feat: Add support for external coding agents via ACP#217
the-snesler wants to merge 17 commits into spacedriveapp:main from
Conversation
…CP worker configuration
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough

Adds an ACP (Agent Client Protocol) worker backend and generalizes "OpenCode" into "Code Workers", propagating ACP support across docs, prompts, config, runtime, Rust backend, UI/settings API, spawn tooling, and worker lifecycle/event handling.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
```rust
    patterns: Vec::new(),
});

let selected = args
```
Right now permissions are effectively auto-approved (and AllowAlways is preferred). If this is meant to be interactive, I'd at least prefer AllowOnce over AllowAlways as the default selection.
```rust
let selected = args
    .options
    .iter()
    .find(|option| matches!(option.kind, PermissionOptionKind::AllowOnce))
    .or_else(|| {
        args.options
            .iter()
            .find(|option| matches!(option.kind, PermissionOptionKind::AllowAlways))
    })
    .or_else(|| args.options.first())
    .ok_or_else(|| AcpError::invalid_params().data("permission request has no options"))?;
```
Actionable comments posted: 12
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/config.rs (1)
3948-3971: ⚠️ Potential issue | 🟠 Major: ACP config is not hot-reloaded into RuntimeConfig.

RuntimeConfig::reload_config updates many ArcSwap fields but never stores resolved.acp. ACP edits in config.toml won't take effect until restart.

♻️ Proposed fix

```diff
     self.cron_timezone.store(Arc::new(resolved.cron_timezone));
     self.cortex.store(Arc::new(resolved.cortex));
     self.warmup.store(Arc::new(resolved.warmup));
+    self.acp.store(Arc::new(resolved.acp));
     // sandbox config is not hot-reloaded here because the Sandbox instance
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config.rs` around lines 3948 - 3971, RuntimeConfig::reload_config updates many ArcSwap-backed fields but neglects to store the ACP changes; add a store for resolved.acp into the RuntimeConfig ACP ArcSwap field (e.g., call self.acp.store(Arc::new(resolved.acp)) alongside the other .store(...) calls in reload_config), making sure to match the expected type (clone or move as needed) so ACP edits in config.toml are hot-reloaded without restart.

src/tools/spawn_worker.rs (1)
176-217: ⚠️ Potential issue | 🟡 Minor: Reject unknown worker_type values instead of silently defaulting to builtin.

Any unexpected string currently falls through to spawn_worker_from_state, which hides bad inputs and can route work to the wrong backend.

✅ Suggested validation with explicit match
```diff
-    let readiness = self.state.deps.runtime_config.work_readiness();
-    let is_opencode = args.worker_type.as_deref() == Some("opencode");
-    let is_acp = args.worker_type.as_deref() == Some("acp");
-
-    let worker_id = if is_opencode {
+    let readiness = self.state.deps.runtime_config.work_readiness();
+    let worker_type = args.worker_type.as_deref().unwrap_or("builtin");
+
+    let worker_id = if worker_type == "opencode" {
         let directory = args.directory.as_deref().ok_or_else(|| {
             SpawnWorkerError("directory is required for opencode workers".into())
         })?;
@@
-    } else if is_acp {
+    } else if worker_type == "acp" {
         let directory = args
             .directory
             .as_deref()
             .ok_or_else(|| SpawnWorkerError("directory is required for acp workers".into()))?;
@@
-    } else {
+    } else if worker_type == "builtin" {
         spawn_worker_from_state(
             &self.state,
             &args.task,
@@
         .await
         .map_err(|e| SpawnWorkerError(format!("{e}")))?
+    } else {
+        return Err(SpawnWorkerError(format!(
+            "unsupported worker_type '{worker_type}'"
+        )));
     };
```
Verify each finding against the current code and only fix it if needed. In `@src/tools/spawn_worker.rs` around lines 176 - 217, The code currently treats any unknown args.worker_type string as the builtin path by falling through to spawn_worker_from_state; change the logic in call to explicitly match args.worker_type.as_deref() (or use a match on args.worker_type) and handle Some("opencode") -> call spawn_opencode_worker_from_state, Some("acp") -> call spawn_acp_worker_from_state, None -> call spawn_worker_from_state (or whatever represents the default), and return an Err(SpawnWorkerError(...)) for any other Some(value) with a clear message referencing the invalid worker_type; update the branches that call spawn_opencode_worker_from_state, spawn_acp_worker_from_state, and spawn_worker_from_state accordingly so unknown strings are rejected instead of silently defaulting.
🧹 Nitpick comments (4)
src/lib.rs (1)
128-143: Document the semantic difference between WorkerResult and WorkerComplete.

Both variants currently have identical fields. Adding variant-level doc comments will reduce emitter/consumer ambiguity and accidental double-emission.
📝 Suggested doc update
```diff
-    WorkerResult {
+    /// Intermediate worker output; may be emitted before terminal completion.
+    WorkerResult {
@@
-    WorkerComplete {
+    /// Terminal worker lifecycle event; emitted once when the worker is fully done.
+    WorkerComplete {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib.rs` around lines 128 - 143, Add variant-level documentation to the enum variants WorkerResult and WorkerComplete to clearly state their differing semantics: describe when each should be emitted/consumed (e.g., WorkerResult = intermediate or partial results, may be emitted multiple times and not imply finalization; WorkerComplete = terminal result, emitted once to signal work completion and trigger notifications). Update the doc comments directly above the WorkerResult and WorkerComplete variants in the enum so future emitters/consumers can unambiguously choose the correct variant and avoid double-emission.

src/agent/cortex_chat.rs (1)
336-342: Sort acp_agents for deterministic prompt rendering.

HashMap iteration order is unstable; sorting avoids prompt churn and makes output/test behavior reproducible.

🔧 Proposed fix
```diff
-    let acp_agents = runtime_config
+    let mut acp_agents = runtime_config
         .acp
         .load()
         .iter()
         .filter(|(_, config)| config.enabled && !config.command.trim().is_empty())
         .map(|(id, _)| id.clone())
         .collect::<Vec<_>>();
+    acp_agents.sort_unstable();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/cortex_chat.rs` around lines 336 - 342, The acp_agents Vec is built from a HashMap so iteration order is non-deterministic; after collecting IDs in the acp_agents expression (built from runtime_config.acp.load()), sort the vector (e.g., call sort_unstable() or sort()) before using it so prompt rendering is deterministic and tests are reproducible — update the code that constructs acp_agents to sort the collected ids after collect::<Vec<_>>() (referencing the acp_agents variable and the runtime_config.acp.load() mapping).

src/config.rs (1)
2246-2247: Prefer config over abbreviated cfg in ACP merge loop.

Use a non-abbreviated variable name here for consistency/readability.

✏️ Proposed rename
```diff
-    for (id, cfg) in overrides {
-        merged.insert(id.clone(), cfg.clone());
+    for (id, config) in overrides {
+        merged.insert(id.clone(), config.clone());
     }
```

As per coding guidelines, "Don't abbreviate variable names (use queue not q, message not msg, channel not ch). Common abbreviations like config are fine".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config.rs` around lines 2246 - 2247, Rename the abbreviated loop variable `cfg` to `config` in the ACP merge loop (the `for (id, cfg) in overrides { ... }` block) and update all uses inside the block (e.g., `merged.insert(id.clone(), cfg.clone());` → `merged.insert(id.clone(), config.clone());`) to improve readability and follow naming guidelines; ensure there are no name collisions with existing `config` bindings in the surrounding scope and run cargo check to verify the change compiles.

src/acp/worker.rs (1)
261-289: Use .ok() (or logging) instead of let _ = for dropped send results.

These helper methods silently discard broadcast::Sender::send results with let _ = ....

♻️ Small guideline-aligned tweak
```diff
-        let _ = self.event_tx.send(ProcessEvent::WorkerStatus {
+        self.event_tx.send(ProcessEvent::WorkerStatus {
             agent_id: self.agent_id.clone(),
             worker_id: self.id,
             channel_id: self.channel_id.clone(),
             status: status.to_string(),
-        });
+        }).ok();
```

As per coding guidelines: "Don't silently discard errors; no let _ = on Results. Handle, log, or propagate errors. Only exception is .ok() on channel sends where the receiver may be dropped."
Also applies to: 391-398
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/acp/worker.rs` around lines 261 - 289, Replace the silent "let _ = self.event_tx.send(...)" drops with explicit handling: call .ok() on the Result (or log the error) when sending in the helper methods send_status, send_result, and send_complete so any Err is not silently ignored; do the same for the other send site referenced by the reviewer (the similar send block around the later worker send functions). Ensure you keep the same sent ProcessEvent variants (WorkerStatus, WorkerResult, WorkerComplete) and only change the terminal handling of the send() Result to .ok() or a logging call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/content/docs/(features)/code-workers.mdx`:
- Line 21: Replace the non-hyphenated phrase "builtin worker" with the
hyphenated "built-in worker" throughout the docs sentence that mentions
spawn_worker, route, and the channel status block (i.e., the sentence "When
neither external backend is selected, Spacebot uses the builtin worker.").
Ensure consistency by updating any other occurrences of "builtin" to "built-in"
in the same document around references to spawn_worker, route, and the status
block.
In `@interface/src/routes/Settings.tsx`:
- Around line 1301-1329: The code builds acpPayload from acpWorkers but never
validates worker.command after trimming, so add a check in the same loop (where
id and timeout are validated) to ensure const cmd = worker.command.trim(); if
cmd is empty call setMessage({ text: `ACP command cannot be empty for ${id}`,
type: "error" }) and return; then use cmd for the payload (i.e., set command:
cmd) to avoid persisting empty commands that will break spawn time; reference
the acpWorkers loop, worker.command, acpPayload and setMessage when making the
change.
- Around line 1523-1525: The current key in the acpWorkers.map uses worker.id
(and fallback with index) which is user-editable and causes remounts during ID
edits; instead add a stable, non-editable identifier property (e.g., worker._uid
or worker.__uuid) to each worker when the worker object is created/initialized
(where new workers are pushed/constructed and when loading defaults) using a
stable generator (crypto.randomUUID() or uuid v4), ensure this field is not
bound to any editable input, and change the map key to use that identifier
(e.g., key={worker._uid}) so rows remain stable across ID edits and deletions.
In `@prompts/en/fragments/worker_capabilities.md.j2`:
- Line 33: The sentence "These workers on a specified directory, using a coding
agent (OpenCode or ACP-backed) that maintains its own context and tool suite."
is missing a verb—update the fragment in worker_capabilities.md.j2 so it reads
clearly, e.g. "These workers operate on a specified directory and use a coding
agent (OpenCode or ACP-backed) that maintains its own context and tool suite."
Replace the malformed sentence with this corrected wording (or similar: "work
on" / "operate on" + conjunction) to restore proper grammar and clarity.
In `@README.md`:
- Line 488: Update the README table row that currently reads "|
[OpenCode](docs/content/docs/(features)/opencode.mdx#acp-workers) | ACP workers
and configuration |" so the link label and target reflect the renamed doc:
change the link text from "OpenCode" to a correct label like "Code Workers" or
"ACP workers" and update the URL from "opencode.mdx#acp-workers" to the new
document/anchor (e.g., "code-workers.mdx#acp-workers" or the actual file that
now contains the ACP workers section) so the table points to the current doc and
anchor.
In `@src/acp/worker.rs`:
- Around line 535-552: The write_text_file implementation (function
write_text_file handling WriteTextFileRequest/Response and using resolve_path)
currently only enforces workspace boundaries but does not block protected
identity/memory paths; add a guard after resolving the path that checks whether
the resolved path is an identity or memory path (either by calling an existing
helper like is_identity_path/is_memory_path or adding a small predicate using
the same rules as other file tools) and return an appropriate AcpError (e.g., an
internal or validation error variant) that instructs the LLM to use the correct
identity/memory tool instead of allowing the write; ensure this check occurs
before creating directories or writing (before create_dir_all and
tokio::fs::write) so protected paths are rejected early.
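The guard described above can be sketched as a small path predicate. This is a minimal sketch, not the project's actual helper: the `identity`/`memory` directory names, the `is_protected_path` function name, and the workspace layout are all assumptions for illustration.

```rust
use std::path::Path;

/// Hypothetical predicate mirroring the guard the review asks for: reject
/// writes whose resolved path lands inside a protected identity/memory
/// subtree. Directory names are assumed, not taken from the real project.
fn is_protected_path(resolved: &Path, workspace: &Path) -> bool {
    let identity = workspace.join("identity");
    let memory = workspace.join("memory");
    resolved.starts_with(&identity) || resolved.starts_with(&memory)
}

fn main() {
    let workspace = Path::new("/workspace");
    // Protected: writes here should be rejected with an AcpError that points
    // the LLM at the dedicated identity/memory tools instead.
    assert!(is_protected_path(
        Path::new("/workspace/identity/self.md"),
        workspace
    ));
    // Ordinary source files pass through to the normal write path.
    assert!(!is_protected_path(
        Path::new("/workspace/src/main.rs"),
        workspace
    ));
    println!("guard behaves as expected");
}
```

The key point from the review is ordering: this check must run after `resolve_path` but before `create_dir_all` and the actual write, so protected paths are rejected without side effects.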
- Around line 121-136: The current stderr reader in the tokio::spawn closure for
child.stderr uses tokio::io::AsyncReadExt::read_to_end (reader, buffer) which
can buffer an unbounded stream and exhaust memory; change the logic in the
closure (the reader/buffer block) to stream stderr in fixed-size chunks (e.g.,
read into a small Vec<u8> or use AsyncBufReadExt::read_until/read_line in a
loop) and enforce a total cap (e.g., 64KB) so you stop accumulating once the cap
is reached, then log each chunk or the truncated result with a “...[truncated]”
marker; keep the same worker_id and tracing::debug calls but replace read_to_end
with a bounded loop that breaks when EOF or cap reached to avoid unbounded
memory growth.
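The bounded-capture loop described above can be sketched in miniature. The real worker would do this with tokio's async reads over child stderr; this synchronous analogue over `std::io::Read` (the `capture_bounded` name, cap, and chunk size are illustrative) shows the shape: read fixed chunks, stop accumulating at the cap, and append a truncation marker.

```rust
use std::io::Read;

/// Read a stream in fixed-size chunks, capping total bytes kept. Once the cap
/// is hit, stop reading and mark the output as truncated.
fn capture_bounded<R: Read>(mut reader: R, cap: usize) -> String {
    let mut collected = Vec::new();
    let mut chunk = [0u8; 4096];
    let mut truncated = false;
    loop {
        match reader.read(&mut chunk) {
            Ok(0) => break, // EOF
            Ok(n) => {
                let remaining = cap.saturating_sub(collected.len());
                if n > remaining {
                    collected.extend_from_slice(&chunk[..remaining]);
                    truncated = true;
                    break;
                }
                collected.extend_from_slice(&chunk[..n]);
            }
            Err(_) => break, // a real implementation would log this
        }
    }
    let mut text = String::from_utf8_lossy(&collected).into_owned();
    if truncated {
        text.push_str("...[truncated]");
    }
    text
}

fn main() {
    // 100 KB of input against a 64 KB cap: output is capped and marked.
    let big = vec![b'x'; 100_000];
    let out = capture_bounded(&big[..], 64 * 1024);
    assert!(out.ends_with("...[truncated]"));
    assert_eq!(out.len(), 64 * 1024 + "...[truncated]".len());
    println!("captured {} bytes", out.len());
}
```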
In `@src/agent/channel.rs`:
- Around line 2467-2476: The wrapper currently sends a
ProcessEvent::WorkerComplete whenever emit_completion_event is true OR !success,
which causes duplicate completion events for ACP (emit_completion_event ==
false) on failures; change the conditional that calls
event_tx.send(ProcessEvent::WorkerComplete { ... }) so it only sends when
emit_completion_event is true (i.e., remove the "|| !success" branch) to avoid
emitting duplicates when ACP already emits its own completion event.
- Around line 2375-2389: The current spawn_blocking closure used to run
worker.run() is not cancel-safe: cancelling the task handle in cancel_worker
only prevents the closure from starting but cannot stop it once started, so pass
a cancellation signal (e.g., a tokio_util::sync::CancellationToken or an
Arc<AtomicBool>) into the closure and into the worker so it can observe
cancellation and exit early. Modify the code that creates the task and the
worker invocation (the spawn_blocking closure and the worker.run call) to accept
this token/signal, check it periodically (or use tokio::select inside the nested
runtime) and return promptly when cancelled, and ensure cancel_worker triggers
the token (or flips the atomic) before/when aborting the handle so DB state and
actual work stop consistently.
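The cancellation wiring described above can be sketched with a plain `Arc<AtomicBool>` standing in for `tokio_util::sync::CancellationToken`. Here `run_worker` is a stand-in for the real `worker.run()`, and the timing values are arbitrary; the point is only that the worker polls the flag between units of work and exits promptly when it flips.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// Stand-in for worker.run(): a loop of work units that checks the
/// cancellation flag before each unit and bails out early when set.
fn run_worker(cancel: Arc<AtomicBool>) -> Result<u32, &'static str> {
    for _step in 0..100 {
        if cancel.load(Ordering::Relaxed) {
            return Err("cancelled");
        }
        thread::sleep(Duration::from_millis(5)); // one simulated unit of work
    }
    Ok(100)
}

fn main() {
    let cancel = Arc::new(AtomicBool::new(false));
    let flag = cancel.clone();
    let handle = thread::spawn(move || run_worker(flag));

    // What cancel_worker would do: flip the flag, then abort/join the handle.
    thread::sleep(Duration::from_millis(20));
    cancel.store(true, Ordering::Relaxed);

    let result = handle.join().unwrap();
    assert_eq!(result, Err("cancelled"));
    println!("worker observed cancellation");
}
```

The same shape applies inside `spawn_blocking`: aborting the task handle alone cannot stop a closure that has already started, so the flag (or token) is what actually makes the work stop.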
In `@src/config.rs`:
- Line 386: DefaultsConfig is currently including the full AcpAgentConfig in its
Debug output (.field("acp", &self.acp)), and AcpAgentConfig still derives Debug
which prints raw env entries (possible secrets). Fix by removing or masking the
sensitive data in DefaultsConfig's Debug implementation (don't call
.field("acp", &self.acp) with the raw struct; instead log a redacted placeholder
like "(redacted)" or a struct summary), and implement a custom Debug for
AcpAgentConfig that omits or masks the env field (or formats it as "<redacted>"
/ counts only) rather than deriving Debug; update both places where
AcpAgentConfig is printed (including the other referenced range) to use the
redacted representation.
- Around line 3492-3515: The current mapping creates concrete defaults for
omitted per-agent overrides (command -> "", args -> [], env -> {}, timeout ->
300), which wipes inherited values; to fix, change the merge logic in the acp
mapping to perform a field-wise merge with the base/default AcpAgentConfig
rather than materializing empty defaults: for each id look up the existing base
config (e.g., defaults_map.get(&id) or the prior AcpAgentConfig) and construct
AcpAgentConfig by taking toml_acp.command (resolve via resolve_env_value) if
present else existing.command, toml_acp.args if present else existing.args,
toml_acp.env if present else existing.env, and toml_acp.timeout.map(|t|
t).unwrap_or(existing.timeout), and similarly handle enabled (or make
TomlAcpAgentConfig.enabled Option<bool> if you want inherited enabled
semantics); update the mapping around toml_acp -> AcpAgentConfig construction to
use these Option-aware fallbacks instead of unwrap_or_default.
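The Option-aware merge can be sketched as follows. The struct shapes are assumptions distilled from the review text (the real types also carry `env`, `enabled`, and env-var resolution); the point is that every TOML field is an `Option`, so an omitted field inherits the base value instead of being reset to an empty default.

```rust
/// Resolved per-agent config (simplified stand-in for AcpAgentConfig).
#[derive(Clone, Debug, PartialEq)]
struct AcpAgentConfig {
    command: String,
    args: Vec<String>,
    timeout: u64,
}

/// Raw TOML overlay: every field optional (stand-in for TomlAcpAgentConfig).
#[derive(Default)]
struct TomlAcpAgentConfig {
    command: Option<String>,
    args: Option<Vec<String>>,
    timeout: Option<u64>,
}

/// Field-wise merge: take the overlay value when present, else inherit.
fn merge(base: &AcpAgentConfig, overlay: &TomlAcpAgentConfig) -> AcpAgentConfig {
    AcpAgentConfig {
        command: overlay.command.clone().unwrap_or_else(|| base.command.clone()),
        args: overlay.args.clone().unwrap_or_else(|| base.args.clone()),
        timeout: overlay.timeout.unwrap_or(base.timeout),
    }
}

fn main() {
    let base = AcpAgentConfig {
        command: "opencode".to_string(),
        args: vec!["--stdio".to_string()],
        timeout: 300,
    };
    // Overlay only sets timeout; command and args must survive the merge.
    let overlay = TomlAcpAgentConfig {
        timeout: Some(600),
        ..Default::default()
    };
    let merged = merge(&base, &overlay);
    assert_eq!(merged.command, "opencode");
    assert_eq!(merged.args, vec!["--stdio".to_string()]);
    assert_eq!(merged.timeout, 600);
    println!("merged: {merged:?}");
}
```

Contrast this with `unwrap_or_default()`, which would replace an omitted `command` with `""` and wipe the inherited value.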
---
Outside diff comments:
In `@src/config.rs`:
- Around line 3948-3971: RuntimeConfig::reload_config updates many
ArcSwap-backed fields but neglects to store the ACP changes; add a store for
resolved.acp into the RuntimeConfig ACP ArcSwap field (e.g., call
self.acp.store(Arc::new(resolved.acp)) alongside the other .store(...) calls in
reload_config), making sure to match the expected type (clone or move as needed)
so ACP edits in config.toml are hot-reloaded without restart.
In `@src/tools/spawn_worker.rs`:
- Around line 176-217: The code currently treats any unknown args.worker_type
string as the builtin path by falling through to spawn_worker_from_state; change
the logic in call to explicitly match args.worker_type.as_deref() (or use a
match on args.worker_type) and handle Some("opencode") -> call
spawn_opencode_worker_from_state, Some("acp") -> call
spawn_acp_worker_from_state, None -> call spawn_worker_from_state (or whatever
represents the default), and return an Err(SpawnWorkerError(...)) for any other
Some(value) with a clear message referencing the invalid worker_type; update the
branches that call spawn_opencode_worker_from_state,
spawn_acp_worker_from_state, and spawn_worker_from_state accordingly so unknown
strings are rejected instead of silently defaulting.
---
Nitpick comments:
In `@src/acp/worker.rs`:
- Around line 261-289: Replace the silent "let _ = self.event_tx.send(...)"
drops with explicit handling: call .ok() on the Result (or log the error) when
sending in the helper methods send_status, send_result, and send_complete so any
Err is not silently ignored; do the same for the other send site referenced by
the reviewer (the similar send block around the later worker send functions).
Ensure you keep the same sent ProcessEvent variants (WorkerStatus, WorkerResult,
WorkerComplete) and only change the terminal handling of the send() Result to
.ok() or a logging call.
In `@src/agent/cortex_chat.rs`:
- Around line 336-342: The acp_agents Vec is built from a HashMap so iteration
order is non-deterministic; after collecting IDs in the acp_agents expression
(built from runtime_config.acp.load()), sort the vector (e.g., call
sort_unstable() or sort()) before using it so prompt rendering is deterministic
and tests are reproducible — update the code that constructs acp_agents to sort
the collected ids after collect::<Vec<_>>() (referencing the acp_agents variable
and the runtime_config.acp.load() mapping).
In `@src/config.rs`:
- Around line 2246-2247: Rename the abbreviated loop variable `cfg` to `config`
in the ACP merge loop (the `for (id, cfg) in overrides { ... }` block) and
update all uses inside the block (e.g., `merged.insert(id.clone(),
cfg.clone());` → `merged.insert(id.clone(), config.clone());`) to improve
readability and follow naming guidelines; ensure there are no name collisions
with existing `config` bindings in the surrounding scope and run cargo check to
verify the change compiles.
In `@src/lib.rs`:
- Around line 128-143: Add variant-level documentation to the enum variants
WorkerResult and WorkerComplete to clearly state their differing semantics:
describe when each should be emitted/consumed (e.g., WorkerResult = intermediate
or partial results, may be emitted multiple times and not imply finalization;
WorkerComplete = terminal result, emitted once to signal work completion and
trigger notifications). Update the doc comments directly above the WorkerResult
and WorkerComplete variants in the enum so future emitters/consumers can
unambiguously choose the correct variant and avoid double-emission.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
- Cargo.lock is excluded by !**/*.lock
- Cargo.toml is excluded by !**/*.toml
- docs/content/docs/(features)/meta.json is excluded by !**/*.json
📒 Files selected for processing (18)
- README.md
- docs/content/docs/(features)/code-workers.mdx
- docs/content/docs/(features)/workers.mdx
- interface/src/api/client.ts
- interface/src/routes/Settings.tsx
- prompts/en/fragments/worker_capabilities.md.j2
- prompts/en/tools/spawn_worker_description.md.j2
- src/acp.rs
- src/acp/worker.rs
- src/agent/channel.rs
- src/agent/cortex_chat.rs
- src/api/agents.rs
- src/api/settings.rs
- src/config.rs
- src/lib.rs
- src/prompts/engine.rs
- src/tools/spawn_worker.rs
- tests/context_dump.rs
I will get to the feedback eventually :O
Actionable comments posted: 2
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/tools/spawn_worker.rs (1)
180-229: ⚠️ Potential issue | 🟠 Major: Reject unsupported worker_type values instead of silently defaulting to builtin.

Current branching treats any unknown worker_type as builtin. That can spawn the wrong backend without a clear error and can silently ignore ACP-specific intent.

Suggested fix
```diff
     async fn call(&self, args: Self::Args) -> Result<Self::Output, Self::Error> {
         let readiness = self.state.deps.runtime_config.work_readiness();
-        let is_opencode = args.worker_type.as_deref() == Some("opencode");
-        let is_acp = args.worker_type.as_deref() == Some("acp");
+        let worker_type = args.worker_type.as_deref().unwrap_or("builtin");
+        if args.acp_id.is_some() && worker_type != "acp" {
+            return Err(SpawnWorkerError(
+                "acp_id is only valid when worker_type is \"acp\"".into(),
+            ));
+        }

-        let worker_id = if is_opencode {
+        let worker_id = if worker_type == "opencode" {
             let directory = args.directory.as_deref().ok_or_else(|| {
                 SpawnWorkerError("directory is required for opencode workers".into())
             })?;
             spawn_opencode_worker_from_state(&self.state, &args.task, directory, args.interactive)
                 .await
                 .map_err(|e| SpawnWorkerError(format!("{e}")))?
-        } else if is_acp {
+        } else if worker_type == "acp" {
             let directory = args
                 .directory
                 .as_deref()
                 .ok_or_else(|| SpawnWorkerError("directory is required for acp workers".into()))?;
             spawn_acp_worker_from_state(
                 &self.state,
                 &args.task,
                 directory,
                 args.acp_id.as_deref(),
                 args.interactive,
             )
             .await
             .map_err(|e| SpawnWorkerError(format!("{e}")))?
-        } else {
+        } else if worker_type == "builtin" {
             spawn_worker_from_state(
                 &self.state,
                 &args.task,
                 args.interactive,
                 &args
                     .suggested_skills
                     .iter()
                     .map(String::as_str)
                     .collect::<Vec<_>>(),
             )
             .await
             .map_err(|e| SpawnWorkerError(format!("{e}")))?
+        } else {
+            return Err(SpawnWorkerError(format!(
+                "invalid worker_type \"{worker_type}\"; expected \"builtin\", \"opencode\", or \"acp\""
+            )));
         };

-        let worker_type_label = if is_opencode {
-            "OpenCode"
-        } else if is_acp {
-            "ACP"
-        } else {
-            "builtin"
-        };
+        let worker_type_label = match worker_type {
+            "opencode" => "OpenCode",
+            "acp" => "ACP",
+            _ => "builtin",
+        };
```
Verify each finding against the current code and only fix it if needed. In `@src/tools/spawn_worker.rs` around lines 180 - 229, The branch in call treats any unknown args.worker_type as builtin which can spawn the wrong backend; validate worker_type early (in call) and return an Err(SpawnWorkerError with a clear message) for unsupported values instead of falling through to spawn_worker_from_state. Concretely: check args.worker_type.as_deref() — allow None (meaning builtin) or the exact strings "opencode" and "acp"; if Some(v) and v is not "opencode" or "acp", return Err(SpawnWorkerError(format!("unsupported worker_type: {}", v))). Keep the existing calls to spawn_opencode_worker_from_state, spawn_acp_worker_from_state, and spawn_worker_from_state and update the worker_type_label logic to map only the known allowed values ("opencode" -> "OpenCode", "acp" -> "ACP", None -> "builtin").
🧹 Nitpick comments (1)
prompts/en/fragments/worker_capabilities.md.j2 (1)
57-58: Merge the redundant consecutive {%- if opencode_enabled or acp_enabled %} blocks.

The outer block ends at line 57 and an identical guard immediately reopens at line 58 just for the "When to Use Which" section. Moving the "When to Use Which" content inside the block that closes at line 57 eliminates the extra endif/if pair.

♻️ Proposed refactor
```diff
-{%- endif %}
-{%- if opencode_enabled or acp_enabled %}
 ### When to Use Which
```

And close with a single `{%- endif %}` at the end of the file.
Verify each finding against the current code and only fix it if needed. In `@prompts/en/fragments/worker_capabilities.md.j2` around lines 57 - 58, There are two consecutive identical Jinja guards `{%- if opencode_enabled or acp_enabled %}` around the "When to Use Which" section; remove the inner ` {%- if opencode_enabled or acp_enabled %}` and its matching `{%- endif %}` and move the "When to Use Which" content into the already-open outer block so the template uses a single `{%- if opencode_enabled or acp_enabled %}` ... `{%- endif %}` pair (close that single `{%- endif %}` at the end of the file).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Line 294: Insert proper sentence separation after the bold ACP link by adding
a period (or other punctuation) between "**[ACP
workers](https://agentclientprotocol.com/)**" and the following sentence "Code
workers bring their own codebase exploration, LSP awareness, and context
management — purpose-built for deep coding sessions." so the two phrases are not
a run-on; update the README.md sentence that contains the bold ACP link to read
as two distinct sentences (or use a semicolon) to correct the run-on.
In `@src/tools/spawn_worker.rs`:
- Around line 49-50: Update the doc comment for the directory field in
spawn_worker.rs (the directory field on the worker config/struct) to reflect
that the working directory is required for both "opencode" workers and ACP
runtime/schema, not just "opencode"; locate the comment above the directory
field declaration and change the text to explicitly state the directory is
required for opencode and ACP and that the OpenCode agent will operate there.
---
Outside diff comments:
In `@src/tools/spawn_worker.rs`:
- Around line 180-229: The branch in call treats any unknown args.worker_type as
builtin which can spawn the wrong backend; validate worker_type early (in call)
and return an Err(SpawnWorkerError with a clear message) for unsupported values
instead of falling through to spawn_worker_from_state. Concretely: check
args.worker_type.as_deref() — allow None (meaning builtin) or the exact strings
"opencode" and "acp"; if Some(v) and v is not "opencode" or "acp", return
Err(SpawnWorkerError(format!("unsupported worker_type: {}", v))). Keep the
existing calls to spawn_opencode_worker_from_state, spawn_acp_worker_from_state,
and spawn_worker_from_state and update the worker_type_label logic to map only
the known allowed values ("opencode" -> "OpenCode", "acp" -> "ACP", None ->
"builtin").
---
Nitpick comments:
In `@prompts/en/fragments/worker_capabilities.md.j2`:
- Around line 57-58: There are two consecutive identical Jinja guards `{%- if
opencode_enabled or acp_enabled %}` around the "When to Use Which" section;
remove the inner ` {%- if opencode_enabled or acp_enabled %}` and its matching
`{%- endif %}` and move the "When to Use Which" content into the already-open
outer block so the template uses a single `{%- if opencode_enabled or
acp_enabled %}` ... `{%- endif %}` pair (close that single `{%- endif %}` at the
end of the file).
```rust
/// Working directory for the worker. Required for "opencode" workers.
/// The OpenCode agent will operate in this directory.
```
Update the directory field docs to match ACP behavior.
The field comment still says directory is required only for opencode, but runtime/schema now require it for ACP too. Keep this docstring aligned to avoid confusion.
Suggested fix
```diff
-    /// Working directory for the worker. Required for "opencode" workers.
-    /// The OpenCode agent will operate in this directory.
+    /// Working directory for the worker. Required for "opencode" and "acp" workers.
+    /// OpenCode/ACP workers operate in this directory.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tools/spawn_worker.rs` around lines 49 - 50, Update the doc comment for
the directory field in spawn_worker.rs (the directory field on the worker
config/struct) to reflect that the working directory is required for both
"opencode" workers and ACP runtime/schema, not just "opencode"; locate the
comment above the directory field declaration and change the text to explicitly
state the directory is required for opencode and ACP and that the OpenCode agent
will operate there.
Actionable comments posted: 7
♻️ Duplicate comments (1)
src/tools/spawn_worker.rs (1)
49-50: ⚠️ Potential issue | 🟡 Minor: Update directory field docs to include ACP requirement.

Line 49 still documents directory as opencode-only, but line 200 enforces it for ACP workers too.

📝 Suggested fix
```diff
-    /// Working directory for the worker. Required for "opencode" workers.
-    /// The OpenCode agent will operate in this directory.
+    /// Working directory for the worker. Required for "opencode" and "acp" workers.
+    /// OpenCode/ACP workers operate in this directory.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tools/spawn_worker.rs` around lines 49 - 50, Update the documentation comment for the `directory` field to state that this working directory is required not only for "opencode" workers but also for ACP workers (since runtime logic enforces it for ACP as well); locate the `directory` field's doc comment in spawn_worker.rs and change the phrasing to mention both "opencode" and "ACP" agents so the comment matches the enforcement code (the place that validates/uses `directory` for ACP workers).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/acp/worker.rs`:
- Around line 450-458: The wait_for_exit function currently holds the
exit_status Mutex across an await which blocks until the child exits; change
wait_for_exit to first lock exit_status and check for Some(status) and return if
present, then drop that guard before awaiting the child by separately acquiring
and awaiting the child via self.child.lock().await.wait().await, and only after
the child has exited re-acquire the exit_status lock to store and return the
status; reference functions/fields: wait_for_exit, exit_status, child (and
poll_exit_status callers) to ensure the short-lived exit_status lock is not held
across the long child.wait().await.
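The short-lived-lock pattern this fix asks for can be sketched synchronously (a sketch only: the real `wait_for_exit` uses tokio mutexes and `child.wait().await`, and the `ExitTracker`/`wait` names here are hypothetical):

```rust
use std::sync::Mutex;

/// Caches a worker's exit status; the lock is never held across the slow wait.
struct ExitTracker {
    cached: Mutex<Option<i32>>,
}

impl ExitTracker {
    fn wait_for_exit(&self, wait: impl FnOnce() -> i32) -> i32 {
        // Fast path: check the cache under a short-lived lock; the guard is
        // dropped before the slow wait below.
        if let Some(status) = *self.cached.lock().unwrap() {
            return status;
        }
        // Slow path runs with no lock held (the async version awaits here).
        let status = wait();
        // Re-acquire only long enough to store the result.
        *self.cached.lock().unwrap() = Some(status);
        status
    }
}
```

The key property is that other callers can still read the cached status while the long wait is in flight, which is exactly what holding the `exit_status` guard across the await prevented.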
- Around line 312-327: Replace fragile substring matching with a typed sentinel:
define a Cancellation marker type (e.g., struct Cancelled;) used to construct
cancellation errors (wrap cancellation points with anyhow::Error::new(Cancelled)
or .context(Cancelled) where cancellation is detected), then in this match arm
replace error.to_string().contains("cancelled") with a typed downcast check like
error.downcast_ref::<Cancelled>().is_some(); keep existing calls to send_status
and send_complete but use the typed check to decide between "cancelled" and
"failed". Ensure the marker type is exported/visible to the code that produces
cancellation errors so the downcast works.
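The typed-sentinel approach can be sketched with std's `Error` downcasting (the real code would go through `anyhow::Error::downcast_ref`; the `classify` helper is a hypothetical name):

```rust
use std::error::Error;
use std::fmt;

/// Typed marker used as a cancellation sentinel, instead of matching on
/// message text like `error.to_string().contains("cancelled")`.
#[derive(Debug)]
struct Cancelled;

impl fmt::Display for Cancelled {
    fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(formatter, "worker run was cancelled")
    }
}

impl Error for Cancelled {}

/// Decide the terminal status by downcasting, not by string matching.
fn classify(error: &(dyn Error + 'static)) -> &'static str {
    if error.downcast_ref::<Cancelled>().is_some() {
        "cancelled"
    } else {
        "failed"
    }
}
```

String matching breaks as soon as any unrelated error message happens to contain "cancelled"; the downcast only matches the marker type itself.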
In `@src/agent/channel.rs`:
- Around line 2390-2413: The ACP wrapper currently passes emit_completion_event
= false to spawn_worker_task so if the outer wrapper fails (e.g., runtime build
or join errors inside the async block around worker.run()) the channel never
receives a WorkerComplete and cleanup is skipped; change the logic so that
spawn_worker_task is called with emit_completion_event = true or, if you must
keep it false, ensure the error paths inside the async closure explicitly send a
WorkerComplete (using state.deps.event_tx) before returning an Err; update the
spawn_worker_task invocation and the error-handling branch around
worker.run()/spawn_blocking so WorkerComplete is always emitted on success or
failure (referencing spawn_worker_task, emit_completion_event, worker.run, and
the WorkerComplete emission logic).
- Line 2489: Replace the "let _ = event_tx.send(ProcessEvent::WorkerComplete {
... })" pattern with the explicit channel-send exception form by calling .ok()
on the Result; locate the send call using the event_tx variable and the
ProcessEvent::WorkerComplete variant and change the discard to
event_tx.send(...).ok() so the send result is handled per project guidelines.
In `@src/api/settings.rs`:
- Around line 399-444: The current code only upserts entries from request.acp
into doc["defaults"]["acp"] (see request.acp, acp_table, agent_table) but never
removes keys that were deleted in the UI; modify the logic after acquiring
acp_table to compute the set of incoming IDs (from request.acp) and then iterate
existing keys in acp_table to remove any key not present in the incoming set
(use acp_table.remove(key) or equivalent on toml_edit::Table) before/after
performing the upsert loop so stale defaults.acp.<id> entries are persisted as
removed.
In `@src/config.rs`:
- Around line 595-596: The doc comment for AcpAgentConfig.timeout incorrectly
mentions `None`; update the comment to reflect that `timeout` is a concrete u64
value in seconds and that the default resolved timeout is 300 seconds (e.g.,
"Session timeout in seconds. Defaults to 300s"). Locate the `timeout` field on
the `AcpAgentConfig` struct and replace the incorrect "None" wording with the
correct concrete default description.
- Around line 3323-3329: The computed resolved_command should not fall back to
the raw toml_acp.command when resolve_env_value fails for env:... entries;
update the logic around resolved_command (and the analogous block later) so that
you use resolve_env_value(toml_acp.command) and only accept the original
toml_acp.command when it is not an env: reference (or when resolve_env_value
returned Some), otherwise skip to base_entry.command or default; implement this
by checking the toml_acp.command prefix ("env:") before or_elseing to
toml_acp.command, or by making resolve_env_value return None only for unresolved
env: refs and only falling back to toml_acp.command when it wasn’t an env: ref,
and apply the same change at the other occurrence that constructs
resolved_command.
---
Duplicate comments:
In `@src/tools/spawn_worker.rs`:
- Around line 49-50: Update the documentation comment for the `directory` field
to state that this working directory is required not only for "opencode" workers
but also for ACP workers (since runtime logic enforces it for ACP as well);
locate the `directory` field's doc comment in spawn_worker.rs and change the
phrasing to mention both "opencode" and "ACP" agents so the comment matches the
enforcement code (the place that validates/uses `directory` for ACP workers).
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
README.md
docs/content/docs/(features)/code-workers.mdx
interface/src/routes/Settings.tsx
src/acp/worker.rs
src/agent/channel.rs
src/agent/cortex_chat.rs
src/api/settings.rs
src/config.rs
src/lib.rs
src/tools/spawn_worker.rs
Actionable comments posted: 4
♻️ Duplicate comments (2)
src/acp/worker.rs (1)
522-527: ⚠️ Potential issue | 🟠 Major — Protected write-path guard is incomplete for memory paths.
Current guard blocks only specific identity filenames and does not enforce the identity/memory path restriction required for file tools.
As per coding guidelines: "Implement workspace path guard: file tools must reject writes to identity/memory paths with an error directing the LLM to the correct tool".

Also applies to: 683-687
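A std-only sketch of what the combined guard could look like (hypothetical helper under the assumptions above; the real guard lives in `src/acp/worker.rs` and may apply additional workspace checks):

```rust
use std::path::Path;

/// Reject writes to protected identity files and anything under an
/// `identity/memory` directory, case-insensitively.
fn is_protected_write_path(path: &Path) -> bool {
    const PROTECTED_FILES: [&str; 3] = ["soul.md", "identity.md", "user.md"];
    if let Some(name) = path.file_name().and_then(|name| name.to_str()) {
        if PROTECTED_FILES.contains(&name.to_ascii_lowercase().as_str()) {
            return true;
        }
    }
    let segments: Vec<String> = path
        .components()
        .filter_map(|component| component.as_os_str().to_str())
        .map(str::to_ascii_lowercase)
        .collect();
    // Any `identity` segment immediately followed by `memory` protects the
    // whole subtree below it.
    segments
        .windows(2)
        .any(|pair| pair[0] == "identity" && pair[1] == "memory")
}
```

Factoring this into one helper keeps the two call sites (the write tool and the duplicate check) from drifting apart.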
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/acp/worker.rs` around lines 522 - 527, Update the write-path guard in is_protected_write_path to also reject any path that resides under an identity/memory workspace segment: keep the existing case-insensitive file-name block for PROTECTED_FILES (SOUL.md, IDENTITY.md, USER.md) and add a check that iterates path.components() (normalized) to detect a case-insensitive "identity" directory immediately followed by a "memory" segment (or any descendant under identity/memory) and return true for those; factor this logic into a single helper used by both is_protected_write_path and the duplicate check referenced around lines 683-687 so writes under identity/memory are consistently denied with the same guard.

src/agent/channel.rs (1)
2398-2431: ⚠️ Potential issue | 🟠 Major — ACP runtime-init failures can still skip `WorkerComplete` emission.

With `emit_completion_event = false`, only the `Err(join_error)` branch emits `WorkerComplete`. If runtime creation fails (lines 2403-2406), the closure returns `Ok(Err(_))`, and the channel never gets completion/cleanup for that worker.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 2398 - 2431, The spawn_blocking result handling currently only emits WorkerComplete on join errors, so failures inside the blocking closure (e.g., runtime build error in the closure around tokio::runtime::Builder or worker.run()) are returned as Ok(Err(_)) and never trigger completion; update the match handling of result so that Ok(inner) is deconstructed into Ok(Ok(success_str)) => success_str and Ok(Err(inner_error)) => emit the same crate::ProcessEvent::WorkerComplete via event_tx_for_task (using agent_id_for_task, worker_id, channel_id_for_task, set notify:true and success:false) and then return Err(anyhow::anyhow!(...)) for the inner_error, mirroring the existing Err(join_error) branch, so runtime-init or worker.run() errors also produce WorkerComplete and cleanup.
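The nested-result handling this asks for can be sketched with plain `Result` layers (hypothetical `handle_worker_result`/`emit_complete` names; the real code matches on a `tokio::task::JoinError` and sends `crate::ProcessEvent::WorkerComplete` over `event_tx_for_task`):

```rust
/// Collapse the two failure layers of a spawned worker task, emitting a
/// completion signal on every path. The outer Result is the join result,
/// the inner one is the worker's own outcome.
fn handle_worker_result(
    result: Result<Result<String, String>, String>,
    emit_complete: &mut dyn FnMut(bool),
) -> Result<String, String> {
    match result {
        Ok(Ok(summary)) => {
            emit_complete(true);
            Ok(summary)
        }
        // The closure ran but failed, e.g. the runtime build errored.
        Ok(Err(worker_error)) => {
            emit_complete(false);
            Err(worker_error)
        }
        // The blocking task panicked or was aborted before returning.
        Err(join_error) => {
            emit_complete(false);
            Err(join_error)
        }
    }
}
```

The point is that `Ok(Err(_))` and `Err(_)` must both emit completion; covering only the join-error arm leaves the channel waiting forever when the closure itself fails.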
🧹 Nitpick comments (2)
src/config.rs (2)
3326-3330: Use non-abbreviated variable names in closures. Rename `cmd` to `command` for consistency with repository naming rules.

As per coding guidelines, "Don't abbreviate variable names (use queue not q, message not msg). Common abbreviations like config are fine".
Also applies to: 3518-3522
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config.rs` around lines 3326 - 3330, Rename the closure parameter `cmd` to `command` in the closures that call resolve_env_value (the closure that currently does resolve_env_value(cmd).or_else(|| (!cmd.starts_with("env:")).then(|| cmd.to_string()))), and update all uses inside those closures to `command`; also apply the same rename to the other identical occurrence later in the file so both closures use `command` and obey the non-abbreviated naming guideline.
3319-3358: ACP merge logic is duplicated in defaults and per-agent paths. The two ACP merge blocks are nearly identical. Extracting a shared helper would reduce drift risk and keep inheritance behavior consistent over time.
Also applies to: 3510-3555
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config.rs` around lines 3319 - 3358, The ACP merge logic is duplicated; extract the common logic into a helper function (e.g., fn merge_acp_maps(base: &HashMap<String, AcpAgentConfig>, toml_map: &HashMap<String, TomlAcpType>) -> HashMap<String, AcpAgentConfig>) that encapsulates the current merging steps (resolving command via resolve_env_value, falling back to base_entry.command, computing enabled, args, env, timeout, and constructing AcpAgentConfig), then replace both inline merge blocks (the one iterating toml.defaults.acp and the other at lines ~3510-3555) with calls to that helper passing base_defaults.acp and the respective toml map; ensure the helper references AcpAgentConfig, resolve_env_value, and the same fallback semantics so behavior is identical.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/acp/worker.rs`:
- Around line 556-567: The current path resolution canonicalizes only the
immediate parent and fails when that parent doesn't exist (blocking
write_text_file creating nested dirs); update the logic in the resolve_path code
path used by write_text_file (and the similar block referenced near the other
occurrence) to walk up the path chain until it finds an existing ancestor,
canonicalize that existing ancestor, then append the remaining relative
components (the non-existent subpath and file_name) to the canonicalized
ancestor (instead of canonicalizing the non-existent immediate parent); ensure
error handling still returns AcpError::invalid_params for missing file_name and
use create_dir_all later as before to create missing directories.
In `@src/api/settings.rs`:
- Around line 407-416: The code currently treats request.acp as a full-replace
by removing any IDs in doc["defaults"]["acp"] not present in acp_configs (see
acp_table, acp_configs and the doc["defaults"]["acp"].as_table_mut() block);
change this to non-destructive merge semantics (or explicitly document
full-replace). To fix, stop pruning existing_ids: for each (id, cfg) in
acp_configs, upsert into acp_table (overwrite existing entries) and do not
remove entries missing from acp_configs unless the API caller explicitly signals
full-replace (e.g., a replace flag) or you add a clear comment on the function
handling request.acp that full-replacement is intentional; update the code
around acp_table/as_table_mut and acp_configs accordingly so external callers
don’t get their ACPs deleted unexpectedly.
In `@src/config.rs`:
- Around line 1886-1894: TomlAcpAgentConfig currently uses concrete collections
for args and env so callers cannot distinguish omission from explicit empty;
change the fields args: Vec<String> and env: HashMap<String,String> to
Option<Vec<String>> and Option<HashMap<String,String>> (preserve
#[serde(default)] semantics appropriately or remove it), update any
deserialization defaults, and then modify the merge/override logic that combines
per-agent configs to treat Some(Vec::new())/Some(HashMap::new()) as an explicit
override (clear) while None means "inherit" — apply this same pattern to the
other analogous Toml* agent config structs that declare args/env so explicit
empty values are respected.
- Around line 3323-3332: When an ACP is enabled but its resolved command becomes
empty (e.g., command = "env:VAR" unresolved), fail fast during config load: in
the code that computes resolved_command (the toml_acp.command ->
resolve_env_value -> base_entry fallback logic used with toml_acp and
base_entry), validate after unwrap_or_default() that if toml_acp.enabled is true
the resolved_command is non-empty and return a configuration-loading error (or
propagate Err) when empty; apply the same guard to the other analogous
resolved_command computations (the other resolve_env_value/base_entry blocks) so
enabled configs never silently produce an empty command.
---
Duplicate comments:
In `@src/acp/worker.rs`:
- Around line 522-527: Update the write-path guard in is_protected_write_path to
also reject any path that resides under an identity/memory workspace segment:
keep the existing case-insensitive file-name block for PROTECTED_FILES (SOUL.md,
IDENTITY.md, USER.md) and add a check that iterates path.components()
(normalized) to detect a case-insensitive "identity" directory immediately
followed by a "memory" segment (or any descendant under identity/memory) and
return true for those; factor this logic into a single helper used by both
is_protected_write_path and the duplicate check referenced around lines 683-687
so writes under identity/memory are consistently denied with the same guard.
In `@src/agent/channel.rs`:
- Around line 2398-2431: The spawn_blocking result handling currently only emits
WorkerComplete on join errors, so failures inside the blocking closure (e.g.,
runtime build error in the closure around tokio::runtime::Builder or
worker.run()) are returned as Ok(Err(_)) and never trigger completion; update
the match handling of result so that Ok(inner) is deconstructed into
Ok(Ok(success_str)) => success_str and Ok(Err(inner_error)) => emit the same
crate::ProcessEvent::WorkerComplete via event_tx_for_task (using
agent_id_for_task, worker_id, channel_id_for_task, set notify:true and
success:false) and then return Err(anyhow::anyhow!(...)) for the inner_error,
mirroring the existing Err(join_error) branch, so runtime-init or worker.run()
errors also produce WorkerComplete and cleanup.
---
Nitpick comments:
In `@src/config.rs`:
- Around line 3326-3330: Rename the closure parameter `cmd` to `command` in the
closures that call resolve_env_value (the closure that currently does
resolve_env_value(cmd).or_else(|| (!cmd.starts_with("env:")).then(||
cmd.to_string()))), and update all uses inside those closures to `command`; also
apply the same rename to the other identical occurrence later in the file so
both closures use `command` and obey the non-abbreviated naming guideline.
- Around line 3319-3358: The ACP merge logic is duplicated; extract the common
logic into a helper function (e.g., fn merge_acp_maps(base: &HashMap<String,
AcpAgentConfig>, toml_map: &HashMap<String, TomlAcpType>) -> HashMap<String,
AcpAgentConfig>) that encapsulates the current merging steps (resolving command
via resolve_env_value, falling back to base_entry.command, computing enabled,
args, env, timeout, and constructing AcpAgentConfig), then replace both inline
merge blocks (the one iterating toml.defaults.acp and the other at lines
~3510-3555) with calls to that helper passing base_defaults.acp and the
respective toml map; ensure the helper references AcpAgentConfig,
resolve_env_value, and the same fallback semantics so behavior is identical.
let parent = path
    .parent()
    .ok_or_else(|| AcpError::invalid_params().data("path has no parent"))?;
let canonical_parent = parent.canonicalize().map_err(|error| {
    AcpError::resource_not_found(Some(parent.display().to_string()))
        .data(error.to_string())
})?;
canonical_parent.join(
    path.file_name()
        .ok_or_else(|| AcpError::invalid_params().data("path is missing file name"))?,
)
};
write_text_file cannot create files in new nested directories.
resolve_path canonicalizes the immediate parent for non-existent paths. If that parent doesn’t exist yet, it errors before create_dir_all runs, so valid writes like <workspace>/new/subdir/file.txt fail.
Also applies to: 689-693
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/acp/worker.rs` around lines 556 - 567, The current path resolution
canonicalizes only the immediate parent and fails when that parent doesn't exist
(blocking write_text_file creating nested dirs); update the logic in the
resolve_path code path used by write_text_file (and the similar block referenced
near the other occurrence) to walk up the path chain until it finds an existing
ancestor, canonicalize that existing ancestor, then append the remaining
relative components (the non-existent subpath and file_name) to the
canonicalized ancestor (instead of canonicalizing the non-existent immediate
parent); ensure error handling still returns AcpError::invalid_params for
missing file_name and use create_dir_all later as before to create missing
directories.
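One way to implement the walk-up resolution, as a std-only sketch (the `resolve_with_missing_ancestors` name is hypothetical; the real code would still apply its workspace checks and `AcpError` mapping afterward, and callers are assumed to pass absolute paths):

```rust
use std::io;
use std::path::{Path, PathBuf};

/// Canonicalize the deepest existing ancestor of `path`, then re-append the
/// components that do not exist yet, so callers can create nested
/// directories after validating the resolved location.
fn resolve_with_missing_ancestors(path: &Path) -> io::Result<PathBuf> {
    let mut existing = path;
    let mut missing = Vec::new();
    // Walk up until we reach a path that exists on disk, remembering the
    // trailing components we stepped over.
    loop {
        if existing.exists() {
            break;
        }
        match existing.parent() {
            Some(parent) => {
                if let Some(name) = existing.file_name() {
                    missing.push(name.to_owned());
                }
                existing = parent;
            }
            None => break,
        }
    }
    let mut resolved = existing.canonicalize()?;
    for component in missing.iter().rev() {
        resolved.push(component);
    }
    Ok(resolved)
}
```

Unlike canonicalizing the immediate parent, this succeeds for `<workspace>/new/subdir/file.txt` even before `create_dir_all` has run.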
if let Some(acp_table) = doc["defaults"]["acp"].as_table_mut() {
    // Remove stale entries not present in the incoming config.
    let incoming_ids: std::collections::HashSet<_> =
        acp_configs.keys().map(String::as_str).collect();
    let existing_ids: Vec<_> = acp_table.iter().map(|(k, _)| k.to_string()).collect();
    for existing_id in existing_ids {
        if !incoming_ids.contains(existing_id.as_str()) {
            acp_table.remove(&existing_id);
        }
    }
ACP update implements full-replacement semantics with undocumented destructive pruning.
request.acp prunes any existing IDs not included in the incoming map. This is correct for the frontend UI (which always sends the complete ACP table), but the API contract is ambiguous: the acp: Option<HashMap<...>> type signature suggests callers can send partial updates, yet the implementation deletes all IDs missing from the payload.
External callers (CLI tools, third-party integrations, future code) could reasonably assume Option<HashMap> allows partial updates and inadvertently delete unrelated ACP entries.
Document the full-replacement semantics explicitly in comments, or restructure to support true patch updates.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/settings.rs` around lines 407 - 416, The code currently treats
request.acp as a full-replace by removing any IDs in doc["defaults"]["acp"] not
present in acp_configs (see acp_table, acp_configs and the
doc["defaults"]["acp"].as_table_mut() block); change this to non-destructive
merge semantics (or explicitly document full-replace). To fix, stop pruning
existing_ids: for each (id, cfg) in acp_configs, upsert into acp_table
(overwrite existing entries) and do not remove entries missing from acp_configs
unless the API caller explicitly signals full-replace (e.g., a replace flag) or
you add a clear comment on the function handling request.acp that
full-replacement is intentional; update the code around acp_table/as_table_mut
and acp_configs accordingly so external callers don’t get their ACPs deleted
unexpectedly.
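The two contract options can be sketched over plain maps (hypothetical helpers; the real handler mutates a `toml_edit` table, but the semantics are the same):

```rust
use std::collections::HashMap;

/// Patch semantics: upsert incoming entries, keep keys the caller omitted.
fn merge_patch(existing: &mut HashMap<String, String>, incoming: &HashMap<String, String>) {
    for (id, value) in incoming {
        existing.insert(id.clone(), value.clone());
    }
}

/// Full-replace semantics: make the stored table exactly match the incoming
/// map, pruning stale keys first.
fn full_replace(existing: &mut HashMap<String, String>, incoming: &HashMap<String, String>) {
    existing.retain(|id, _| incoming.contains_key(id));
    merge_patch(existing, incoming);
}
```

Whichever contract the endpoint keeps, naming it in the handler's doc comment (or exposing both behind an explicit flag) removes the ambiguity that `Option<HashMap<...>>` invites.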
#[derive(Deserialize, Clone, Default)]
struct TomlAcpAgentConfig {
    enabled: Option<bool>,
    command: Option<String>,
    #[serde(default)]
    args: Vec<String>,
    #[serde(default)]
    env: HashMap<String, String>,
    timeout: Option<u64>,
Per-agent ACP overrides cannot explicitly clear inherited args/env.
Because TomlAcpAgentConfig.args/env are concrete collections with defaults, []/{} are indistinguishable from omission. Current merge logic treats empties as inherit, so users can’t intentionally set empty args/env.
💡 Proposed fix (use Option to preserve "unset" vs "explicitly empty")
#[derive(Deserialize, Clone, Default)]
struct TomlAcpAgentConfig {
enabled: Option<bool>,
command: Option<String>,
- #[serde(default)]
- args: Vec<String>,
- #[serde(default)]
- env: HashMap<String, String>,
+ args: Option<Vec<String>>,
+ env: Option<HashMap<String, String>>,
timeout: Option<u64>,
}

-args: if toml_acp.args.is_empty() { base_entry.map(|b| b.args.clone()).unwrap_or_default() } else { toml_acp.args.clone() },
-env: if toml_acp.env.is_empty() { base_entry.map(|b| b.env.clone()).unwrap_or_default() } else { toml_acp.env.clone() },
+args: toml_acp
+    .args
+    .clone()
+    .or_else(|| base_entry.map(|b| b.args.clone()))
+    .unwrap_or_default(),
+env: toml_acp
+    .env
+    .clone()
+    .or_else(|| base_entry.map(|b| b.env.clone()))
+    .unwrap_or_default(),

Also applies to: 3341-3349, 3534-3547
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/config.rs` around lines 1886 - 1894, TomlAcpAgentConfig currently uses
concrete collections for args and env so callers cannot distinguish omission
from explicit empty; change the fields args: Vec<String> and env:
HashMap<String,String> to Option<Vec<String>> and Option<HashMap<String,String>>
(preserve #[serde(default)] semantics appropriately or remove it), update any
deserialization defaults, and then modify the merge/override logic that combines
per-agent configs to treat Some(Vec::new())/Some(HashMap::new()) as an explicit
override (clear) while None means "inherit" — apply this same pattern to the
other analogous Toml* agent config structs that declare args/env so explicit
empty values are respected.
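The `Option`-based merge this fix proposes behaves like the following sketch (hypothetical `merge_args` helper, shown for one field):

```rust
/// Merge one override layer onto inherited defaults: `None` inherits,
/// `Some(vec![])` is an explicit clear.
fn merge_args(override_args: Option<Vec<String>>, inherited: Option<&Vec<String>>) -> Vec<String> {
    override_args
        .or_else(|| inherited.cloned())
        .unwrap_or_default()
}
```

With concrete `Vec`/`HashMap` fields, the deserializer produces an empty collection for both "omitted" and "explicitly empty", so this three-way distinction is impossible to recover after parsing.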
let resolved_command = toml_acp
    .command
    .as_deref()
    .and_then(|cmd| {
        // Only fall back to the raw value if it's not an env: reference.
        resolve_env_value(cmd)
            .or_else(|| (!cmd.starts_with("env:")).then(|| cmd.to_string()))
    })
    .or_else(|| base_entry.map(|b| b.command.clone()))
    .unwrap_or_default();
Enabled ACP configs can silently resolve to an empty command.
If command = "env:VAR" is unresolved and no base entry exists, the merged config can be enabled = true with command = "". This should fail fast during config load to avoid late spawn-time errors.
💡 Proposed guard
let resolved_command = ...;
+if toml_acp.enabled.unwrap_or(true) && resolved_command.trim().is_empty() {
+ return Err(ConfigError::Invalid(format!(
+ "ACP agent '{}' is enabled but command is empty/unresolved",
+ id
+ ))
+ .into());
+}

Also applies to: 3337-3341, 3515-3525, 3530-3534
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/config.rs` around lines 3323 - 3332, When an ACP is enabled but its
resolved command becomes empty (e.g., command = "env:VAR" unresolved), fail fast
during config load: in the code that computes resolved_command (the
toml_acp.command -> resolve_env_value -> base_entry fallback logic used with
toml_acp and base_entry), validate after unwrap_or_default() that if
toml_acp.enabled is true the resolved_command is non-empty and return a
configuration-loading error (or propagate Err) when empty; apply the same guard
to the other analogous resolved_command computations (the other
resolve_env_value/base_entry blocks) so enabled configs never silently produce
an empty command.
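The fail-fast guard amounts to a validation like this sketch (hypothetical `validate_command` helper; the real check would return a proper config error during load):

```rust
/// Reject configs that are enabled but resolve to an empty command, so the
/// failure surfaces at config load rather than at spawn time.
fn validate_command(id: &str, enabled: bool, resolved_command: &str) -> Result<(), String> {
    if enabled && resolved_command.trim().is_empty() {
        return Err(format!(
            "ACP agent '{id}' is enabled but its command is empty or unresolved"
        ));
    }
    Ok(())
}
```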
going to take another crack at this, I think I started off fundamentally wrong
This PR adds Agent Client Protocol worker support across backend runtime, worker orchestration, prompting, settings API/UI, and docs. It makes ACP workers first-class alongside builtin and OpenCode workers, including interactive follow-up flows and live status visibility.
Tested using `opencode acp`. I would love support to test with Codex, Claude Code, Gemini, etc, though I can get to those when I have time.

Why
Not everyone uses OpenCode. As providers like Google and Anthropic clamp down on usage of subscriber auth tokens outside of their respective CLIs, allowing Spacebot to work with a variety of coding agents becomes more important.
What changed
Docs/UI
Note
This PR expands Spacebot's worker capabilities by introducing first-class support for Agent Client Protocol (ACP) agents, enabling integration with any ACP-compatible coding agent (Claude Code, Codex, Gemini, etc). The implementation includes a new ACP worker backend that handles subprocess lifecycle management, permissions, filesystem operations, and live status updates. Key changes involve backend runtime support, enhanced spawn_worker functionality for ACP selection, settings API/UI for agent configuration, improved thought/status streaming with batched paragraph updates, and explicit worker lifecycle signaling to prevent dangling sessions. All changes maintain compatibility with existing builtin and OpenCode workers while making the worker model extensible for future agent types.
Written by Tembo for commit 045d1b6. This will update automatically on new commits.