
Converting disparate analytics data into actionable insights, accessible to everyone in one place

Summary

2025

AI

AI Agent

Visual Development

n8n Workflow

After repositioning the Adaki brand from gaming to blockchain-authenticated streetwear and collectibles, we needed a way to consolidate data from GA, Plausible, Shopify, X, and other platforms. Tracking what worked and what didn’t was essential to shaping our next steps.

With a two-person team and a startup budget of zero, I built an n8n workflow capable of handling both automated API calls and manual uploads. The data was processed through multiple agents to create an intuitive, chat-based interface that could answer questions, surface insights, and suggest actions, all through Telegram. This gave us a holistic view of everything happening across the brand.

IMPACT

Connecting all our data sources gave us an unprecedented view of the brand’s performance. Real-time insights meant we could quickly learn from wins and failures in a way that was concise and easy to understand.

5

data sources connected with automatic daily fetching

2

agents working together to process large volumes of data

2,088

individual data points accessible through the agents

224

actionable insights saved to memory for future reference

Client

Adaki

Role

AI Architect

  • n8n Workflow development
  • Backend development
  • Prompt engineering
  • Testing

Step 1

Kick off

The Challenge

After repositioning the Adaki brand and launching our first products, we needed a way to see all our data in one place. This meant creating a holistic view from multiple, very different data sources, each with its own schema and language. We needed to consolidate everything into a format that could be accessed through a single, simple chat interface.

The Concept

Our plan was to use n8n to connect multiple AI agents capable of interpreting and processing data in a consistent, intuitive way. By integrating with the Telegram API, the interface would allow the agents to not only report on the data, but also generate actionable insights that we could implement, test, and refine over time.

Considerations

With a potentially massive dataset over time and limitations on AI context windows, there were several key challenges to solve:

  • How do we create consistency in the model’s responses?
  • Some data is richer from manual exports than from API calls. How do we handle both manually and automatically ingested data?
  • How do we avoid hitting rate limits with a large dataset?
  • How do we save insights for future reference to avoid re-running large datasets?
  • How do we access the agent without building a dedicated chat infrastructure?

Location

Local deployment for Adaki

Step 2

The Workflows

To make the agent effective, we started by giving it access to data in a simple, intuitive way. This began with a Supabase SQL database. We needed both manual and automated data fetching, so we built a manual upload process through Google Drive alongside scheduled cron jobs to pull data from multiple APIs, including Google Analytics, Plausible, X, Shopify, and Sender.
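
As a rough illustration of one of those scheduled jobs, the sketch below shows a daily Plausible fetch written as a standalone script rather than the n8n Schedule Trigger, HTTP Request and Supabase nodes we actually used; the daily_metrics table and credential names are placeholders.

```typescript
import { createClient } from "@supabase/supabase-js";

// Placeholder table and credential names; in the real workflow this logic lives
// in n8n's Schedule Trigger, HTTP Request and Supabase nodes.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

async function fetchPlausibleDaily(siteId: string): Promise<void> {
  // Plausible Stats API v1: aggregate the day's visitors, pageviews and bounce rate.
  const url =
    `https://plausible.io/api/v1/stats/aggregate?site_id=${siteId}` +
    `&period=day&metrics=visitors,pageviews,bounce_rate`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.PLAUSIBLE_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Plausible request failed: ${res.status}`);
  const { results } = await res.json();

  // One row per source per day in a shared metrics table ("daily_metrics" is illustrative).
  const { error } = await supabase.from("daily_metrics").insert({
    source: "plausible",
    captured_at: new Date().toISOString(),
    visitors: results.visitors.value,
    pageviews: results.pageviews.value,
    bounce_rate: results.bounce_rate.value,
  });
  if (error) throw error;
}

fetchPlausibleDaily("adaki.example").catch(console.error);
```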

Since each platform recorded data in a different schema, we normalised everything before inserting it into the database. This was done without AI to keep costs down and ensure consistent results.
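
The normalisation step amounts to small, deterministic mappers per platform. A simplified sketch (field names are illustrative, not the production schema):

```typescript
// Common row shape shared by every source (field names are illustrative).
interface MetricRow {
  source: string;
  metric: string;
  value: number;
  recorded_on: string; // ISO date, e.g. "2025-03-01"
}

// Each platform gets a small, deterministic mapper; no model calls are involved,
// so normalisation costs nothing and always gives the same output for the same input.
function normaliseShopifyOrder(order: { total_price: string; created_at: string }): MetricRow {
  return {
    source: "shopify",
    metric: "order_value",
    value: parseFloat(order.total_price),
    recorded_on: order.created_at.slice(0, 10),
  };
}

function normalisePlausibleDay(day: { date: string; visitors: number }): MetricRow {
  return {
    source: "plausible",
    metric: "visitors",
    value: day.visitors,
    recorded_on: day.date,
  };
}
```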

Sending all data to the model for every query wasn’t practical, so we split the work across multiple agents. The master agent had access to three core tools:

  • SQL Tool (a secondary agent)
  • Add to Memory (inserts insights into a separate database)
  • Recall from Memory (retrieves stored long-term insights)
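
Expressed in the OpenAI function-calling format, those tools look roughly like this; the names and parameter shapes are illustrative, since in n8n they are configured as tools attached to the AI Agent node:

```typescript
// Illustrative tool definitions for the master agent (names and parameters are
// stand-ins for the real n8n tool configuration).
const tools = [
  {
    type: "function",
    function: {
      name: "query_sql",
      description: "Ask the SQL expert agent to run a query against the analytics database.",
      parameters: {
        type: "object",
        properties: {
          request: { type: "string", description: "Plain-language description of the data needed." },
        },
        required: ["request"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "add_to_memory",
      description: "Save an actionable insight to the long-term memory table.",
      parameters: {
        type: "object",
        properties: { insight: { type: "string" } },
        required: ["insight"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "recall_from_memory",
      description: "Retrieve previously saved insights relevant to a topic.",
      parameters: {
        type: "object",
        properties: { topic: { type: "string" } },
        required: ["topic"],
      },
    },
  },
];
```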

The SQL tool was essential for keeping the context length manageable. SQL queries allowed most of the heavy lifting to happen inside the database, where calculations could run outside of the workflow. We created a dedicated SQL expert agent to generate and execute these queries based on the master agent’s requests.
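
A condensed sketch of that SQL expert agent: the cheaper model drafts one read-only query and the database executes it. The run_readonly_sql RPC and the daily_metrics schema are stand-ins for the real setup:

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// The cheaper model drafts one read-only query; the database does the heavy lifting.
// "run_readonly_sql" stands in for a Postgres function exposed through Supabase RPC.
async function querySql(request: string): Promise<unknown> {
  const completion = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4.1-mini",
      messages: [
        {
          role: "system",
          content:
            "Translate the request into a single read-only Postgres query over " +
            "daily_metrics(source, metric, value, recorded_on). Return SQL only.",
        },
        { role: "user", content: request },
      ],
    }),
  });
  const sql = (await completion.json()).choices[0].message.content.trim();

  // Execute inside the database so large aggregations never enter the model's context.
  const { data, error } = await supabase.rpc("run_readonly_sql", { query: sql });
  if (error) throw error;
  return data;
}
```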

When the SQL tool returned results, the master agent evaluated whether it had enough information to answer the user’s request. If not, it ran additional queries. Once it had the necessary data, it produced a report with actionable insights tailored to the query.
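
Put together, the master agent behaves like a simple tool-calling loop. The sketch below assumes the tool definitions and the querySql() helper from the previous snippets and omits the memory tools for brevity:

```typescript
// Sketch of the master agent loop: keep calling tools until the model stops
// requesting them, then treat the final message as the report.
async function runMasterAgent(question: string): Promise<string> {
  const messages: any[] = [
    {
      role: "system",
      content:
        "You are an analytics assistant for the Adaki brand. " +
        "Use your tools to gather data, then answer with concise, actionable insights.",
    },
    { role: "user", content: question },
  ];

  for (let step = 0; step < 5; step++) {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model: "gpt-4.1", messages, tools }),
    });
    const message = (await res.json()).choices[0].message;
    messages.push(message);

    // No tool calls means the model judged it has enough data to answer.
    if (!message.tool_calls) return message.content;

    for (const call of message.tool_calls) {
      const args = JSON.parse(call.function.arguments);
      // Only the SQL tool is handled here; memory tools are omitted for brevity.
      const result = call.function.name === "query_sql" ? await querySql(args.request) : null;
      messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(result) });
    }
  }
  return "Could not gather enough data within the step limit.";
}
```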

Step 3

Prompt Engineering

The models we used at the time were the first generation optimised for agent workflows. We ran GPT-4.1 mini for the SQL tool to keep costs low, and GPT-4.1 for the master agent. Through extensive trial and error, we reduced hallucinations and achieved consistent, reliable responses.
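
As a hypothetical example of the kind of guardrail that helped keep the smaller SQL model consistent, a system prompt can pin it to the known schema and give it an explicit way to refuse rather than guess:

```typescript
// Hypothetical example only, not the production prompt: the kind of constraints
// that push a small model toward consistent output instead of guessing.
const sqlAgentSystemPrompt = `
You write one read-only Postgres query for the daily_metrics table.
Rules:
- Only reference columns that exist: source, metric, value, recorded_on.
- Never invent metric or source names; if the request cannot be answered, return "INSUFFICIENT DATA".
- Return SQL only, with no commentary and no markdown.
`;
```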

Step 4

The Interface

To make accessing a complex tool as simple as possible, we used the Telegram API to create a straightforward bot for interacting with the agents. There was no need for a custom-built interface for our specific use case, as Telegram already handled the interactions elegantly.
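
For illustration, the entire interface layer reduces to two Bot API calls, getUpdates and sendMessage. In our case this was handled by the n8n Telegram Trigger node, but a raw sketch (assuming the runMasterAgent function from the earlier snippet) shows how little glue is involved:

```typescript
// Sketch of the interface layer using the raw Telegram Bot API. In the actual
// workflow this is the n8n Telegram Trigger node; runMasterAgent() is assumed
// from the earlier sketch.
const TG = `https://api.telegram.org/bot${process.env.TELEGRAM_BOT_TOKEN}`;

async function pollTelegram(): Promise<void> {
  let offset = 0;
  while (true) {
    // Long-poll for new messages.
    const res = await fetch(`${TG}/getUpdates?timeout=30&offset=${offset}`);
    const { result } = await res.json();

    for (const update of result) {
      offset = update.update_id + 1;
      if (!update.message?.text) continue;

      // Every incoming message becomes a question for the master agent.
      const report = await runMasterAgent(update.message.text);
      await fetch(`${TG}/sendMessage`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ chat_id: update.message.chat.id, text: report }),
      });
    }
  }
}

pollTelegram().catch(console.error);
```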

Step 5

Results

Although the tool was still in development, its insights became increasingly consistent and accurate as prompts were refined and the dataset grew. With 2,088 data entries across five data sources, we generated 208 insights during its use for the Adaki brand.

Due to the recent closure of the Adaki brand, we couldn’t continue applying these insights in a way that would allow us to measure their long-term impact. However, the tool will now be used for the upcoming PHF and JRS.Studio brands launching with this website. This case study will be updated as new data comes in and as the tool is used to support future projects.

Project

Up Next

Engaging global healthcare professionals through the first in-person generative AI experience in pharma

Find out how

2024

UX Design

UI Design

AI

Project Lead