
Agentic AI project using n8n.io

  • Ken Munson
  • 9 hours ago
  • 6 min read

I'm going to be lazy and just slap you-know-who's summary of the work I (we) did together on this project here. I learned a lot and am going to take a stab at doing something more complicated next - and do it "from scratch" in Python as opposed to using n8n.


Here is a quick summary:

This workflow implements an agentic pattern in n8n. A Webhook node receives a POST request containing a user_query. The Edit Fields node normalizes this into a consistent user_query field. An LLM node then acts as a planner: instead of answering directly, it chooses a tool and parameters, returning a JSON object like { "tool": "github.repo_status", "params": {...} }. A Code node parses this JSON, and a Switch node routes execution based on the selected tool. For github.repo_status, an HTTP Request node calls the GitHub API to fetch repository metadata. A second LLM node interprets this raw JSON and generates a human-friendly answer. Finally, a Code node extracts just the answer text into { "answer": "..." }, which the Webhook returns as the HTTP response. Together, this forms a complete loop of understand → plan → route → act → interpret → answer.

Here is a screen capture from n8n.io of the whole Agentic Workflow. Small, I know, but you can click on it to expand it.



1. Basic workflow concepts (generic terminology)

Before we talk about n8n specifically, here’s the generic vocabulary that applies to any workflow engine:

  1. Trigger

    • How the workflow starts.

    • Example: an HTTP request arrives, a file is uploaded, a timer fires, a message lands in a queue, etc.

  2. Steps / Nodes

    • Individual units of work.

    • Each step receives some input, does one thing, and produces output.

  3. Data / Payload

    • The information flowing through the workflow.

    • Often represented as a JSON object: { "user_query": "..." }.

  4. Transformation

    • A step that changes the shape/content of the data.

    • Example: parsing JSON, adding fields, cleaning text.

  5. Branching / Routing

    • Logic that decides which path the workflow takes.

    • Example: If tool is github.repo_status, go this way; otherwise go that way.

  6. External Call / Integration

    • A step where the workflow calls an external system or API (GitHub, OpenAI, database, etc.) and uses its response.

  7. Termination / Response

    • The last step(s) that return a result, send a message, or just finish.

In your case, you’ve built an agentic workflow, so there’s an extra conceptual layer:

Agent loop: Understand → Plan → Route → Act → Interpret → Answer

We’ll map that to your nodes next.

2. Your Mermaid diagram (for reference)


flowchart TD

    WB[Webhook POST ops copilot]
    EF[Edit Fields user_query from body]
    DT[LLM Decide Tool]
    PTJ[Code Parse Tool JSON]
    SW[Switch on tool]
    GH[HTTP Request Repo Status]
    SUM[LLM Summarize]
    EA[Code Extract Answer]
    OUT[API Response answer]

    WB --> EF --> DT --> PTJ --> SW
    SW -->|github repo status| GH
    GH --> SUM --> EA --> OUT
    SW -->|unknown tool| EA

We’ll reference these node labels as we walk through the flow.

3. High-level story of your workflow

At a high level, here’s what the Ops Copilot does:

  1. It receives an HTTP POST request with a question like:

    { "user_query": "Is my RAG-Workflow repo public?" }

  2. It hands that question to an LLM whose job is to:

    • understand what you’re asking

    • decide what tool to use (GitHub in this case)

    • and return a structured plan like:

      { "tool": "github.repo_status", "params": { "owner": "kmunson007", "repo": "RAG-Workflow" } }

  3. Based on that plan, it routes to the correct tool:

    • fetches data from the GitHub API about the repo.

  4. It then feeds the GitHub JSON response into another LLM, which:

    • interprets the raw JSON

    • and writes a friendly natural-language answer.

  5. Finally, a small code step extracts just the answer text and returns it as:

    { "answer": "Yes, your RAG-Workflow repository is public..." }
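The five steps above can be sketched as plain JavaScript, minus all the n8n machinery. The helper functions and canned return values here are illustrative stand-ins for the LLM and HTTP nodes, not real workflow code:

```javascript
// The agent loop, minus n8n: Plan -> Route -> Act -> Interpret -> Answer.
// planTool, runTool, and summarize stand in for the LLM and HTTP nodes;
// the canned return values below are illustrative, not real API calls.

function planTool(userQuery) {
  // In the real workflow the first LLM produces this JSON plan.
  return {
    tool: "github.repo_status",
    params: { owner: "kmunson007", repo: "RAG-Workflow" },
  };
}

function runTool(plan) {
  // Route on the chosen tool (the Switch node), then act (the HTTP node).
  switch (plan.tool) {
    case "github.repo_status":
      return { visibility: "public", html_url: "https://github.com/kmunson007/RAG-Workflow" };
    default:
      return { error: `Unknown tool: ${plan.tool}` };
  }
}

function summarize(userQuery, result) {
  // In the real workflow the second LLM writes this sentence.
  if (result.error) return result.error;
  return `Yes, the repo is ${result.visibility}: ${result.html_url}`;
}

function opsCopilot(userQuery) {
  const plan = planTool(userQuery);
  const result = runTool(plan);
  return { answer: summarize(userQuery, result) };
}
```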

That’s the generic picture. Now let’s go node-by-node.

4. Step-by-step: each node explained

4.1 WB – Webhook POST ops copilot

Type: Trigger node
Role: Start of the workflow (Trigger)

  • This node exposes an HTTP endpoint:

    • Test URL: .../webhook-test/ops-copilot

    • Production URL: .../webhook/ops-copilot

  • It waits for an HTTP POST request with a JSON body like:

    { "user_query": "Is my RAG-Workflow repo public?" }

  • When a request comes in, it starts the workflow, and the JSON request becomes the initial payload.

Conceptually:

“When someone POSTs a question to this URL, start the Ops Copilot workflow.”
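For example, a Node.js client could call this endpoint like so. "https://YOUR-N8N-HOST" is a placeholder, not my actual instance's URL:

```javascript
// Build the POST request the Webhook node expects.
// "https://YOUR-N8N-HOST" is a placeholder; use your instance's base URL.
function buildWebhookRequest(userQuery) {
  return {
    url: "https://YOUR-N8N-HOST/webhook-test/ops-copilot",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ user_query: userQuery }),
    },
  };
}

// Usage (Node 18+ has a global fetch):
// const { url, options } = buildWebhookRequest("Is my RAG-Workflow repo public?");
// const res = await fetch(url, options);
// console.log(await res.json()); // -> { "answer": "..." }
```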

4.2 EF – Edit Fields user_query from body

Type: Data transformation
Role: Normalize the input into a consistent field

  • The Webhook node’s JSON often looks like:

    { "body": { "user_query": "Is my RAG-Workflow repo public?" }, "headers": { ... }, "query": { ... } }

  • This node uses an expression like:

    {{$json["body"]["user_query"]}}

    to pull the question out of body.user_query.

  • It then sets a clean field:

    { "user_query": "Is my RAG-Workflow repo public?" }

Conceptually:

“Take whatever question came in via HTTP and standardize it into user_query for the rest of the flow.”
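If you prefer a Code node over Edit Fields, the same normalization can be sketched as a small function. This handles both the wrapped and bare payload shapes shown above; it's a sketch, not the node's actual configuration:

```javascript
// Normalize the incoming payload into a single clean user_query field.
// Accepts both { body: { user_query } } and a bare { user_query }.
function normalize(payload) {
  const query = payload?.body?.user_query ?? payload?.user_query ?? "";
  return { user_query: String(query).trim() };
}

// In an n8n Code node this would be returned as:
// return [{ json: normalize($json) }];
```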

4.3 DT – LLM Decide Tool

Type: LLM node (planner)
Role: Understand the question and choose a tool

This is the first LLM and it does planning, not answering.

  • Input:

    { "user_query": "Is my RAG-Workflow repo public?" }

  • The prompt tells the LLM:

    • You are an ops assistant.

    • The user will ask about GitHub repositories.

    • You must output only JSON with:

      • "tool": which tool to use

      • "params": any parameters needed

For this question, the LLM outputs something like (as a JSON string):

{"tool":"github.repo_status","params":{"owner":"kmunson007","repo":"RAG-Workflow"}}

Conceptually:

“I think the best way to answer this question is to call the github.repo_status tool with these parameters.”

This node is the “Plan” part of the agent loop.
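Since everything downstream routes on this plan, it's worth sanity-checking its shape before trusting it. A minimal validator sketch (the field names match the JSON above; this function isn't in the actual workflow):

```javascript
// Validate the planner's output before the Switch node routes on it.
// A valid plan has a non-empty string "tool" and an object "params".
function isValidPlan(plan) {
  return (
    plan !== null &&
    typeof plan === "object" &&
    typeof plan.tool === "string" &&
    plan.tool.length > 0 &&
    typeof plan.params === "object" &&
    plan.params !== null
  );
}
```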

4.4 PTJ – Code Parse Tool JSON

Type: Code node
Role: Turn the LLM’s string into real JSON fields

  • The LLM put JSON in message.content as a string.

  • This node does something like:

    const raw = $json.message.content;
    const parsed = JSON.parse(raw);
    return [{ json: parsed }];

  • Output becomes:

    { "tool": "github.repo_status", "params": { "owner": "kmunson007", "repo": "RAG-Workflow" } }

Conceptually:

“Convert the LLM’s plan from a plain text string into structured JSON we can route on.”

This is a data transformation step.
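LLMs sometimes wrap their JSON in markdown code fences, so a slightly more defensive version of this parse step might look like this (a sketch, not the exact node code):

```javascript
// Strip optional markdown fences, then parse; fall back to an "unknown"
// plan so the Switch node can route to the error branch instead of crashing.
function parseToolJson(raw) {
  const cleaned = raw
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/, "")
    .trim();
  try {
    return JSON.parse(cleaned);
  } catch {
    return { tool: "unknown", params: {} };
  }
}
```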

4.5 SW – Switch on tool

Type: Routing / branching node
Role: Direct execution based on tool

  • It checks the value of the tool field.

  • If tool == "github.repo_status", it sends the flow down one branch (to GH).

  • If tool is something else (like unknown), it can send the flow to a different branch (e.g. error handling via EA).

In your Mermaid:

SW -->|github repo status| GH
SW -->|unknown tool| EA

Conceptually:

“Depending on which tool the planner chose, go down the appropriate branch.”

This is the “Route” part of the agent loop.

4.6 GH – HTTP Request Repo Status

Type: HTTP integration node
Role: Call the GitHub API (tool execution)

Conceptually:

“Execute the chosen tool by calling GitHub’s API with the given parameters and get real data back.”

This is the “Act” / “Tool Use” part of the loop.
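The node itself is just configuration, but under the hood it hits GitHub's standard GET /repos/{owner}/{repo} REST endpoint. Sketched in plain JavaScript (Node 18+ for the global fetch; no token is needed for public repos):

```javascript
// Build the GitHub REST API URL for repository metadata.
function repoStatusUrl(owner, repo) {
  return `https://api.github.com/repos/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

// Usage (Node 18+ global fetch):
// const res = await fetch(repoStatusUrl("kmunson007", "RAG-Workflow"), {
//   headers: { Accept: "application/vnd.github+json" },
// });
// const data = await res.json(); // includes visibility, private, html_url, ...
```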

4.7 SUM – LLM Summarize

Type: LLM node (interpreter)
Role: Turn raw JSON into a friendly answer

  • Input includes:

    • The original user_query (via expression back to Edit Fields)

    • The tool used (via expression back to Parse Tool JSON)

    • The raw GitHub JSON (via {{$json}} or JSON.stringify($json, null, 2))

  • The prompt tells the LLM:

    • Use this JSON.

    • Don’t claim you can’t answer if visibility/private are present.

    • Answer the original question in 2–4 sentences.

    • Mention if the repo is public/private and include the URL.

Output is an LLM completion object with content like:

“Yes, your RAG-Workflow repository is public. The JSON data indicates that the visibility field is set to public. You can view it here: https://github.com/kmunson007/RAG-Workflow.”

Conceptually:

“Read the raw GitHub JSON and explain the answer in clear English.”

This is the “Interpret + Answer” step.
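Putting those pieces together, the prompt assembly might look roughly like this. The wording is illustrative, not the exact prompt in the workflow:

```javascript
// Assemble the summarizer prompt from the user question, the tool name,
// and the raw GitHub JSON. The instructions mirror the bullets above.
function buildSummarizerPrompt(userQuery, tool, toolJson) {
  return [
    "You are an ops assistant. Use ONLY the JSON below to answer.",
    "Do not claim you cannot answer if visibility/private fields are present.",
    "Answer the original question in 2-4 sentences.",
    "Mention whether the repo is public or private and include its URL.",
    `Question: ${userQuery}`,
    `Tool used: ${tool}`,
    `Tool JSON:\n${JSON.stringify(toolJson, null, 2)}`,
  ].join("\n\n");
}
```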

4.8 EA – Code Extract Answer

Type: Code node
Role: Simplify the LLM output into a clean API response

The summarizer’s output is an OpenAI-style object. This node extracts just the text:

let content = $json?.choices?.[0]?.message?.content;

if (!content) {
  content = $json?.message?.content ?? $json?.content ?? "No answer content found.";
}

return [
  {
    json: {
      answer: content
    }
  }
];

So the final output becomes:

{
  "answer": "Yes, your RAG-Workflow repository is public. ..."
}

Conceptually:

“Strip the OpenAI wrapping and return just the human-friendly answer in a stable, simple JSON shape.”

This defines your API contract.

4.9 OUT – API Response answer

Not a separate node in n8n, but conceptually the final step.
Role: Webhook returns the last node’s output

Because your Webhook is configured with Response Mode = Last Node, whatever EA outputs is sent back as the HTTP response.

So a client calling your endpoint sees:

{
  "answer": "Yes, your RAG-Workflow repository is public. ..."
}

Conceptually:

“Return the final answer to the caller in a simple JSON object.”

 
 
 
