
Feedback after reading. #63

@SamYuan1990

Description


This is great work! I really appreciate it.

Here is my feedback, along with some things I would like to discuss further.

In general, I am not sure whether we can simply decompose an atomic agent into parts such as tools, a context manager, and prompts.

If we take the context manager as the core, where we orchestrate our business logic, with that logic formed into a DAG, then maybe there are 4 boundaries:

  • North: The human interface, i.e. how the agent interacts with humans: a UI, or just a bot that captures your event, e.g. `@Depbot rebase`.
  • South: In most cases an agent invokes a RESTful API (OpenAI SDK, Google SDK, etc.), which means that, as part of factor 5, execution status can be tracked here.
  • West: I suppose memory management happens here; we decide what information gets cached and how it is used.
  • East: The core's interaction with tools (either directly invoking a system function, or a function call, MCP, or A2A).
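The four boundaries above could be sketched as interfaces around the core. This is only an illustration; all class and method names here are hypothetical, not from the project being discussed:

```python
from typing import Any, Protocol

class North(Protocol):
    """Human interface: UI, or a bot capturing events like '@Depbot rebase'."""
    def receive(self) -> str: ...
    def respond(self, message: str) -> None: ...

class South(Protocol):
    """LLM backend: typically a RESTful API (OpenAI SDK, Google SDK, etc.).
    Execution status can be tracked at this boundary."""
    def complete(self, prompt: str) -> str: ...

class West(Protocol):
    """Memory manager: decides what information is cached and how it is used."""
    def recall(self, key: str) -> Any: ...
    def store(self, key: str, value: Any) -> None: ...

class East(Protocol):
    """Tool interface: direct system function, function call, MCP, or A2A."""
    def invoke(self, tool: str, args: dict) -> Any: ...
```

The context manager would then be the only component that touches all four, which keeps the orchestration logic in one place.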

The business logic works like this: when a human uses the agent, we start from the North, go to the South, and come back.
On this journey we may need to move East for tooling support, and West for historical context.
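That journey could be sketched as a small orchestration loop. Everything below is a stand-in: the `TOOL:` reply convention, the fake LLM, and the fake tool are all assumptions made just to show the North-to-South round trip with East/West detours:

```python
class Memory:                    # West: decides what is cached and how it is used
    def __init__(self):
        self.notes = []
    def recall(self):
        return list(self.notes)
    def store(self, item):
        self.notes.append(item)

def fake_llm(prompt):            # South stand-in: a real agent calls an LLM SDK here
    return "TOOL:search" if "TOOL" not in prompt else "final answer"

def fake_tool(name):             # East stand-in: system function, MCP, A2A, etc.
    return f"result of {name}"

def handle(question, llm=fake_llm, tool=fake_tool, memory=None):
    """North -> South and back, detouring East (tools) and West (memory)."""
    memory = memory or Memory()
    prompt = question + "\n" + "\n".join(memory.recall())   # West: add history
    reply = llm(prompt)                                     # South: model call
    while reply.startswith("TOOL:"):                        # East: tool detour
        result = tool(reply.removeprefix("TOOL:"))
        memory.store(result)                                # West: cache result
        prompt = question + "\n" + "\n".join(memory.recall())
        reply = llm(prompt)
    return reply                                            # North: respond
```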

The "visa" you need for this journey is a structured data format, since we operate digitally.
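One way to picture such a "visa" is a record that travels with the request across all four boundaries. The field names here are my own guesses, purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Visa:
    """Hypothetical structured record carried across the agent's boundaries."""
    question: str                                    # the original NLP question (North)
    scope: str                                       # what the LLM may answer (South)
    history: list = field(default_factory=list)      # cached context (West)
    tool_calls: list = field(default_factory=list)   # tools invoked so far (East)

visa = Visa(question="Rebase my PR?", scope="repo-maintenance")
visa.tool_calls.append({"tool": "git_rebase", "args": {"branch": "main"}})
```

Because the record is structured rather than free text, each boundary can validate it before letting the request through.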

The "map" I want to share is a 4-step strategy for running an agent autonomously:

  1. Start from an NLP question: we need to know the scope, and under what conditions the LLM can answer the specific business question.
  2. Go from natural language to structured output: our "visa".
  3. This step may be a huge one: build the DAG and feed control back to a human (e.g. ask the human Y/N at each step to confirm).
  4. Summarize it into a workflow and let it go (run autonomously).
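Steps 3 and 4 could be sketched as follows. The planner and the step names are invented for illustration; in practice the DAG would come from the LLM's structured output:

```python
def build_dag(structured_question):
    """Step 3 (first half): derive an ordered plan from the structured 'visa'.
    Hypothetical planner returning a linear DAG for simplicity."""
    return ["gather context", "run tool", "summarize"]

def run_with_confirmation(steps, confirm):
    """Step 3 (second half): ask a human Y/N at each step.  Once every step is
    approved, the summarized workflow can run unattended (step 4)."""
    approved = []
    for step in steps:
        if not confirm(step):              # human-in-the-loop gate
            raise RuntimeError(f"human rejected step: {step}")
        approved.append(step)
    return approved                        # the workflow to automate

# A human who approves everything; a UI would ask for real Y/N input here.
workflow = run_with_confirmation(build_dag({"q": "rebase"}), confirm=lambda s: True)
```

The point is that human confirmation is spent once, at planning time, so the resulting workflow can afterwards run in automation without per-step interruptions.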
