2025

Periskope AI Agent

An AI agent that can be trained to auto-respond to important queries on WhatsApp, raise tickets, and communicate with internal team members

Node
Express
Openrouter
Gemini
BullMQ
RabbitMQ

1. Overview

The Periskope AI Agent is an AI-powered automation layer embedded into the Periskope WhatsApp-based customer management platform. Its primary purpose is to assist or fully handle inbound customer conversations, reduce manual agent workload, and ensure consistent, policy-aligned responses at scale.

The agent operates in real-time WhatsApp conversations, with deep integration into Periskope’s internal systems such as:

  • Ticketing
  • Private internal notes
  • Customer context
  • Business knowledge bases
  • Message history and metadata

Unlike a generic chatbot, the AI Agent is context-aware, stateful, and action-capable, meaning it can:

  • Understand ongoing conversation history
  • Retrieve relevant organizational knowledge
  • Decide when to act vs escalate
  • Perform internal actions (ticket creation, notes, tagging) via tools

2. Customer Problem Being Solved

Before the AI Agent:

  • Human agents had to manually respond to repetitive questions
  • Context switching between chats caused delays and errors
  • Internal notes and tickets were inconsistently created
  • Knowledge reuse across conversations was limited
  • Scaling support teams directly increased cost

The AI Agent addresses these by:

  • Handling repetitive and first-level queries automatically
  • Maintaining consistent answers using shared organizational knowledge
  • Creating internal artifacts (tickets, notes) without human intervention
  • Escalating only when confidence is low or rules require it
  • Reducing response time and operational load

3. Core Capabilities

3.1 Context-Aware Conversation Handling

  • Processes full WhatsApp chat history
  • Understands message order, sender roles, and prior resolutions
  • Maintains short-term conversational memory per chat
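The per-chat memory described above can be sketched as a simple sliding window over recent messages. This is an illustrative shape, not the production implementation; the `ChatMessage` and `ShortTermMemory` names and the 20-message cap are assumptions.

```typescript
// Sketch: per-chat short-term memory keeping only the most recent
// messages (capped at 20 here) so the model sees order and sender roles.
type ChatMessage = {
  sender: "customer" | "agent" | "ai";
  text: string;
  timestamp: number;
};

class ShortTermMemory {
  private messages: ChatMessage[] = [];
  constructor(private readonly maxMessages = 20) {}

  add(message: ChatMessage): void {
    this.messages.push(message);
    // Drop the oldest entries once the window is full.
    if (this.messages.length > this.maxMessages) {
      this.messages = this.messages.slice(-this.maxMessages);
    }
  }

  // Chronological view handed to the prompt assembler.
  history(): ChatMessage[] {
    return [...this.messages];
  }
}
```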

3.2 Knowledge-Augmented Responses (RAG)

  • Retrieves relevant documents, FAQs, and self-learned contexts
  • Uses similarity search instead of keyword matching
  • Grounds responses strictly in retrieved context when required

Users can upload documents or add FAQs to train the AI on how to respond to common queries.
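The similarity search above can be sketched as cosine similarity over pre-embedded document vectors. This is a minimal sketch assuming documents are already embedded into numeric vectors; the embedding model itself and the `EmbeddedDoc` shape are out of scope and illustrative.

```typescript
// Sketch: rank documents by cosine similarity to the query embedding
// instead of keyword matching, and return the top matches.
type EmbeddedDoc = { text: string; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function retrieve(query: number[], docs: EmbeddedDoc[], topK = 3): EmbeddedDoc[] {
  return [...docs]
    .map((d) => ({ doc: d, score: cosineSimilarity(query, d.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK)
    .map((x) => x.doc);
}
```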

3.3 Tool-Based Actions

The agent can invoke internal tools, including:

  • Ticket creation
  • Private internal notes for human agents
  • Context tagging and metadata updates

Each tool has:

  • Explicit schema
  • Strict usage rules
  • Validation at execution time
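The schema-plus-validation pattern can be sketched as below. The `create_ticket` field names and the flat schema shape are illustrative assumptions; the real tool schemas live in the Periskope codebase.

```typescript
// Sketch: a tool with an explicit schema, validated at execution time
// before the agent is allowed to run it.
type FieldType = "string" | "number" | "boolean";

type ToolSchema = {
  name: string;
  required: Record<string, FieldType>;
};

// Illustrative schema; real field names may differ.
const createTicketSchema: ToolSchema = {
  name: "create_ticket",
  required: { subject: "string", priority: "string", chatId: "string" },
};

function validateToolArgs(schema: ToolSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [field, type] of Object.entries(schema.required)) {
    if (!(field in args)) errors.push(`missing field: ${field}`);
    else if (typeof args[field] !== type) errors.push(`wrong type for ${field}`);
  }
  return errors; // empty array means the call may execute
}
```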


4. Evolution & Progression

Phase 1: Static AI Replies and Basic RAG

  • Initial version was a simple LLM prompt + response
  • No actions, no memory, no tools
  • Limited usefulness beyond basic replies
  • I then implemented a Retrieval-Augmented Generation (RAG) system:
    • Business documents and FAQs are vectorized
    • Chat context is embedded and queried
    • Relevant context is fetched (cosine + semantic similarity) and then injected into the prompt
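The "inject into the prompt" step can be sketched as assembling a grounded prompt that carries both the retrieved snippets and an explicit instruction to answer only from them. The exact wording and function name are illustrative, not the production prompt.

```typescript
// Sketch: build a grounded prompt from retrieved context snippets so the
// model answers from organizational knowledge, not from its priors.
function buildGroundedPrompt(contexts: string[], customerMessage: string): string {
  const contextBlock = contexts
    .map((c, i) => `[Context ${i + 1}] ${c}`)
    .join("\n");
  return [
    "You are a support agent. Answer ONLY from the context below.",
    "If the context does not cover the question, say you are not sure.",
    "",
    contextBlock,
    "",
    `Customer: ${customerMessage}`,
  ].join("\n");
}
```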

Limitations identified:

  • Hallucinations
  • No alignment with internal workflows
  • No escalation mechanism

Phase 2: Tool-Enabled Agent

I introduced function calling / tool invocation, allowing the AI to:

  • Create tickets
  • Add private notes
  • Follow predefined operational rules

Key improvements built:

  • Explicit tool schemas
  • Rule-based instructions for when tools can be used
  • Separation between customer-visible responses and internal actions

This transformed the AI from a chatbot into an operational agent.
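The separation between customer-visible responses and internal actions can be sketched as a dispatcher over a single model turn. The `ModelTurn` shape loosely mirrors function-calling responses from OpenRouter-style chat APIs; the handler wiring and action strings are illustrative.

```typescript
// Sketch: route the model's text to WhatsApp and its tool calls to
// internal handlers, so internal actions never leak into the reply.
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelTurn = { reply: string | null; toolCalls: ToolCall[] };

type Dispatched = { customerMessage: string | null; internalActions: string[] };

function dispatch(turn: ModelTurn): Dispatched {
  const internalActions: string[] = [];
  for (const call of turn.toolCalls) {
    if (call.name === "create_ticket") internalActions.push("ticket created");
    else if (call.name === "add_private_note") internalActions.push("note added");
    // Unknown tools are rejected rather than executed.
    else internalActions.push(`unknown tool rejected: ${call.name}`);
  }
  return { customerMessage: turn.reply, internalActions };
}
```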

Phase 3: Improved RAG & Self-Learning

In this iteration I substantially improved context retrieval by:

  • Condensing longer conversation histories so the AI can parse more context
  • Combining vector search and cosine similarity, with a fallback to semantic search for retrieval

I added self-learned context generation, where:

  • A training AI listens for messages being unflagged by internal team members and schedules a training job after 5 minutes
  • The training job fetches the last 10 messages in the conversation and determines how the internal team member resolved the customer's query
  • It then generates high-quality Q&A pairs or modifies existing ones
  • These are stored and reused in future chats
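The "generate or modify existing pairs" step can be sketched as an upsert into the learned-context store, keyed by a normalized form of the question so near-duplicate phrasings update one entry instead of piling up. The normalization and store shape are illustrative assumptions.

```typescript
// Sketch: self-learned Q&A pairs either replace an existing answer for
// the same (normalized) question or are appended as a new entry.
type QAPair = { question: string; answer: string };

function normalize(q: string): string {
  return q.toLowerCase().replace(/[^a-z0-9 ]/g, "").trim();
}

function upsertLearnedPair(store: QAPair[], learned: QAPair): QAPair[] {
  const key = normalize(learned.question);
  const existing = store.findIndex((p) => normalize(p.question) === key);
  if (existing >= 0) {
    // Modify the existing pair rather than duplicating it.
    const next = [...store];
    next[existing] = learned;
    return next;
  }
  return [...store, learned];
}
```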
Figure: Storing self-learned context

Challenges identified:

  • Self-learned contexts could be incorrect
  • Overconfidence in low-quality learned data

Phase 4: Confidence, Re-ranking & Guardrails

To address reliability issues, I worked on:

a) Context Re-ranking

  • Multiple retrieved contexts are scored
  • Only the most relevant are injected
  • Reduces noise and hallucination risk
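The re-ranking step can be sketched as a score floor plus a cap on how many contexts get injected. The threshold values (0.6 floor, 3 max) are illustrative, not the production tuning.

```typescript
// Sketch: keep only contexts scoring above a floor, sorted by relevance,
// capped at maxInject, so noisy retrievals never reach the prompt.
type ScoredContext = { text: string; score: number };

function rerank(contexts: ScoredContext[], minScore = 0.6, maxInject = 3): string[] {
  return contexts
    .filter((c) => c.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxInject)
    .map((c) => c.text);
}
```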

b) Confidence Scoring

  • Agent responses are evaluated post-generation
  • Confidence thresholds decide:
    • Respond automatically
    • Ask a clarification
    • Escalate to a human
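The threshold decision above can be sketched as a simple gate. The cutoff values (0.8 and 0.5) are illustrative assumptions, not the production tuning.

```typescript
// Sketch: map a post-generation confidence score to one of three outcomes.
type Decision = "respond" | "clarify" | "escalate";

function decide(confidence: number): Decision {
  if (confidence >= 0.8) return "respond";  // answer automatically
  if (confidence >= 0.5) return "clarify";  // ask a clarifying question
  return "escalate";                        // hand off to a human
}
```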

c) Human Review Loop

  • Self-learned contexts can be reviewed before activation
  • Prevents polluted knowledge bases
  • Adds accountability and trust

Phase 5: Evaluation & Scoring Framework

I built an AI evaluation pipeline:

  • Responses are judged against system rules
  • Scored across multiple dimensions (granular scoring out of 100)
  • Helps compare prompt versions and agent behaviors
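Granular scoring out of 100 can be sketched as a weighted sum over per-dimension scores. The dimension names and weights below are illustrative assumptions.

```typescript
// Sketch: each dimension gets a 0-1 score and a weight; the weighted
// average is scaled to a 0-100 overall score for comparing prompt versions.
type DimensionScore = { name: string; score: number; weight: number };

function overallScore(dimensions: DimensionScore[]): number {
  const totalWeight = dimensions.reduce((s, d) => s + d.weight, 0);
  const weighted = dimensions.reduce((s, d) => s + d.score * d.weight, 0);
  return Math.round((weighted / totalWeight) * 100);
}
```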

This enabled:

  • Regression detection
  • Safer prompt changes
  • Objective quality tracking over time

5. Current Architecture (High-Level)

Figure: Periskope AI Agent architecture

Input

  • WhatsApp message
  • Chat history
  • Customer metadata

Processing

  1. Context retrieval (RAG)
  2. Prompt assembly with rules and constraints
  3. Tool availability injection
  4. Model generation
  5. Tool execution (if applicable)
  6. Post-response evaluation
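The six processing steps can be sketched as one async pipeline with each stage injected as a function. This shows the shape of the flow only; the stage signatures are illustrative, not the real implementation.

```typescript
// Sketch: orchestrate retrieval, prompt assembly, generation, tool
// execution, and evaluation as one message-handling pipeline.
type Pipeline = {
  retrieveContext: (message: string) => Promise<string[]>;
  assemblePrompt: (message: string, contexts: string[]) => string;
  generate: (prompt: string) => Promise<{ reply: string; confidence: number }>;
  executeTools: (reply: string) => Promise<string[]>;
  evaluate: (reply: string) => Promise<number>;
};

async function handleMessage(p: Pipeline, message: string) {
  const contexts = await p.retrieveContext(message);      // 1. RAG
  const prompt = p.assemblePrompt(message, contexts);     // 2-3. rules + tools
  const generation = await p.generate(prompt);            // 4. model call
  const actions = await p.executeTools(generation.reply); // 5. tool execution
  const score = await p.evaluate(generation.reply);       // 6. post-evaluation
  return {
    reply: generation.reply,
    actions,
    score,
    confidence: generation.confidence,
  };
}
```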

Output

  • Customer-facing message
  • Optional internal actions
  • Confidence score
  • Logs for audit and training

6. What I Built & Owned

I was directly responsible for:

  • Designing the AI Agent architecture
  • Implementing tool-based actions with strict schemas
  • Building the RAG pipeline and self-learning system
  • Identifying and fixing hallucination and overconfidence issues
  • Designing re-ranking and confidence-scoring strategies
  • Creating the evaluation and scoring framework
  • Iteratively refining prompts based on real customer feedback
  • Bridging AI behavior with real operational workflows

I shaped how the AI reasons, acts, and fails safely.

7. Current State

As of now, the Periskope AI Agent:

  • Handles real customer conversations reliably
  • Acts within well-defined operational boundaries
  • Learns cautiously with human oversight
  • Is measurable, debuggable, and improvable
  • Reduces support workload while maintaining quality

It has moved from a simple LLM feature to a production-grade AI system embedded deeply into Periskope’s core product.

8. Impact

  • Faster response times for customers
  • Reduced manual effort for support teams
  • Consistent answers across conversations
  • Lower onboarding cost for new agents
  • A scalable foundation for future AI-driven workflows