Optimizing Semantic Search: A Two-Stage Reranker

Synapse already has a core feature: an agentic semantic search system that allows users to ask natural language questions about their activities. (I’m still not sure what actually makes a RAG a RAG and how it differs from semantic search; the two clearly overlap. And as Andrew Ng asked in one of his interviews, how do you decide whether an agent is truly autonomous? Maybe the definition doesn’t matter, as long as the agent does its job.) The initial version was a standard three-step pipeline: a Planner agent to generate keywords, a Retrieval step that ran those keywords against the database, and a Synthesizer agent to generate a summary.
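Roughly, that original flow can be sketched as three chained functions. The names and the naive stand-in bodies below are mine, not the actual Synapse code; in the real pipeline the Planner and Synthesizer steps are LLM calls.

```python
# Minimal sketch of the original three-step pipeline. Function names and the
# naive stand-in bodies are illustrative only, not the actual Synapse code.

def plan_keywords(question: str) -> list[str]:
    # Planner agent: in Synapse an LLM turns the question into keywords;
    # a naive tokenizer stands in for it here.
    return [w.lower().strip("?.,") for w in question.split() if len(w) > 3]

def retrieve_entries(keywords: list[str], entries: list[str]) -> list[str]:
    # Retrieval step: keyword match against the stored journal entries.
    return [e for e in entries if any(k in e.lower() for k in keywords)]

def synthesize_summary(question: str, hits: list[str]) -> str:
    # Synthesizer agent: in Synapse an LLM writes the summary; joining the
    # matches here just marks where that call slots in.
    return f"{question}\n" + "\n".join(f"- {h}" for h in hits)

def answer(question: str, entries: list[str]) -> str:
    keywords = plan_keywords(question)
    hits = retrieve_entries(keywords, entries)
    return synthesize_summary(question, hits)
```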

This worked, but it suffered from the classic limitation of keyword search—it lacked true semantic understanding. A query for “time spent on backend improvements” might miss an entry described as “refactored the authentication service.”

Read More >>


Building an Agentic RAG Pipeline for Journal Analysis

Today, I implemented one of the new features proposed on the roadmap for my personal journal analysis app: a semantic search and synthesis engine. Instead of a traditional keyword search, I opted for an “Agentic RAG” (Retrieval-Augmented Generation) pipeline. This approach leverages a local LLM (gemma3n:latest) not just for generating text, but for orchestrating the entire search process. This post outlines the design thinking behind this implementation and its future potential.
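To make the “orchestration” part concrete, here is a minimal sketch of the planner step, assuming a local Ollama server on its default port; the prompt wording and the keyword-list format are my own illustration, not the app’s actual prompts.

```python
# Sketch of the planner step: the local LLM produces the retrieval plan rather
# than only the final answer. Assumes a local Ollama server on its default
# port; the prompt and the JSON keyword-list format are illustrative.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def plan_search(question: str) -> list[str]:
    prompt = (
        "You are a search planner for a personal journal. "
        "Return a JSON list of 3-5 search keywords for this question:\n"
        f"{question}"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "gemma3n:latest", "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    text = resp.json()["response"]
    try:
        return json.loads(text)                       # model returned valid JSON
    except json.JSONDecodeError:
        return [w.strip() for w in text.split(",") if w.strip()]  # lenient fallback
```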

Read More >>


From .txt to Web App: Building a Journaling System with Gemini's Help

For a while now, I’ve maintained a daily journal in a simple text file. It’s a straightforward system: I write the date, then list my activities under WORK: and LIFE: headers. This approach is simple and low-friction, but as the file grew, its limitations became obvious. Searching for specific activities was a grep-and-pray operation, and any form of analysis was purely manual. The data had no structure.
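For context, the parsing problem looks roughly like this; the sample entry and the ISO date format are assumptions, since the real file only needs a date line followed by activities under WORK: and LIFE: headers.

```python
# Sketch of turning the plain-text journal into structured records. The sample
# entry and the ISO date format are assumptions; the real file just has a date
# line followed by activities under WORK: and LIFE: headers.
import re
from collections import defaultdict

SAMPLE = """\
2024-05-02
WORK:
refactored the authentication service
LIFE:
30 min run
"""

DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def parse_journal(text: str) -> dict:
    entries: dict = defaultdict(lambda: defaultdict(list))
    date, section = None, None
    for raw in text.splitlines():
        line = raw.strip()
        if DATE_RE.match(line):
            date, section = line, None           # a new day starts
        elif line in ("WORK:", "LIFE:"):
            section = line.rstrip(":")           # switch section
        elif line and date and section:
            entries[date][section].append(line.lstrip("- ").strip())
    return entries

print(parse_journal(SAMPLE))
```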

Read More >>


Containerization: A Case Study in Environment and Configuration Management

Today’s objective was to containerize the SEC filing analysis application to ensure a consistent and reproducible runtime environment. This process involved not only creating the necessary Docker artifacts but also refactoring the application’s configuration to adhere to best practices for handling secrets and environment-specific variables.
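As an illustration of the configuration side, the refactor can be sketched as below; the variable names (SEC_API_KEY, DB_PATH, LOG_LEVEL) are hypothetical stand-ins for whatever the app actually reads.

```python
# Sketch of the configuration refactor: secrets and environment-specific values
# come from environment variables injected at runtime instead of being
# hard-coded. The variable names here are hypothetical stand-ins.
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    sec_api_key: str
    db_path: str
    log_level: str

def load_settings() -> Settings:
    try:
        api_key = os.environ["SEC_API_KEY"]       # required secret: no default
    except KeyError as exc:
        raise RuntimeError("SEC_API_KEY must be set in the environment") from exc
    return Settings(
        sec_api_key=api_key,
        db_path=os.environ.get("DB_PATH", "/data/filings.db"),  # per-environment override
        log_level=os.environ.get("LOG_LEVEL", "INFO"),
    )
```

In the container, these values are then supplied at runtime with `docker run -e` flags or an `--env-file`, so the image itself never bakes in a secret.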

Read More >>


The Illusion of Quarterly Data: Correctly Calculating Financials from SEC Filings

When building financial analysis tools, one of the most common and dangerous assumptions is that the financial data you receive—whether from an API or directly from SEC filings—represents discrete, isolated time periods. A “Q2” report, for instance, should contain data only for the second quarter. Right?

Unfortunately, this is often not the case. Raw SEC filings, specifically the quarterly 10-Q and annual 10-K reports, follow reporting rules that can be misleading if taken at face value. In this post, I’ll walk through the challenges of parsing these documents and present a robust Python solution to derive true, discrete quarterly financial figures.
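As a preview of the approach, the core arithmetic is just differencing cumulative values: a 10-K covers the full fiscal year, some 10-Q figures are reported year-to-date, and Q4 is never filed on its own. The function below is a simplified sketch with made-up numbers, not the full solution from the post.

```python
# Sketch of deriving discrete quarterly values from cumulative filings.
# Core idea: the 10-K covers the full fiscal year and some 10-Q figures are
# year-to-date, so each discrete quarter is a difference of cumulative values.
# The input dict shape and the numbers are illustrative.

def discrete_quarters(ytd_by_quarter: dict[str, float], full_year: float) -> dict[str, float]:
    """ytd_by_quarter maps 'Q1'..'Q3' to cumulative (year-to-date) values;
    full_year is the annual figure from the 10-K."""
    q1 = ytd_by_quarter["Q1"]
    q2 = ytd_by_quarter["Q2"] - ytd_by_quarter["Q1"]
    q3 = ytd_by_quarter["Q3"] - ytd_by_quarter["Q2"]
    q4 = full_year - ytd_by_quarter["Q3"]        # Q4 is never filed separately
    return {"Q1": q1, "Q2": q2, "Q3": q3, "Q4": q4}

# Made-up example: year-to-date revenue of 10, 22, 35 and a 50 full-year total
print(discrete_quarters({"Q1": 10.0, "Q2": 22.0, "Q3": 35.0}, 50.0))
# -> {'Q1': 10.0, 'Q2': 12.0, 'Q3': 13.0, 'Q4': 15.0}
```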

Read More >>


Mess in SEC Financial Filings: A New Challenge in Data Extraction

In the world of financial data extraction, the challenge of parsing and interpreting complex documents is ever-present. As I continue to refine my financial analysis tool, I’ve encountered a new set of challenges that highlight the messy nature of financial filings. This post delves into these issues and how they impact the accuracy and reliability of data extraction.

Read More >>


Refactoring for Resilience: Introducing a Database Caching Layer

For our financial analysis tool, the latest series of updates focuses on an architectural enhancement: the integration of a persistent database layer for caching, performance tracking, and data retention. This post details the changes, the rationale behind them, and how they set the stage for future development.
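As a taste of the design, a caching layer like this can be sketched with nothing more than the standard library’s sqlite3 module; the table schema and helper names here are illustrative, not the tool’s actual code.

```python
# Sketch of a persistent cache layer, assuming SQLite and a simple
# key -> JSON-payload table; schema and helper names are illustrative.
import json
import sqlite3
import time

def open_cache(path: str = "cache.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS llm_cache ("
        " cache_key TEXT PRIMARY KEY,"
        " payload    TEXT NOT NULL,"
        " created_at REAL NOT NULL)"
    )
    return conn

def get_or_compute(conn: sqlite3.Connection, key: str, compute):
    row = conn.execute(
        "SELECT payload FROM llm_cache WHERE cache_key = ?", (key,)
    ).fetchone()
    if row is not None:                       # cache hit: skip the expensive work
        return json.loads(row[0])
    result = compute()                        # cache miss: do the expensive work
    conn.execute(
        "INSERT OR REPLACE INTO llm_cache VALUES (?, ?, ?)",
        (key, json.dumps(result), time.time()),
    )
    conn.commit()
    return result

conn = open_cache()
analysis = get_or_compute(conn, "AAPL:10-K:2023:business",
                          lambda: {"summary": "expensive LLM call goes here"})
```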

Read More >>


From Sequential to Supersonic: A Developer's Journey into Parallel LLM Queries

When I first started building this application, my focus was on a simple goal: use a Large Language Model (LLM) to read dense SEC filings and extract structured, easy-to-digest insights. The initial prototype was magical. I could feed it the “Business” section of a 10-K filing, and it would return a beautiful JSON object with competitive advantages, key products, and more.

But then I found that each analysis takes a noticeable amount of time, especially when I wanted to analyze multiple sections like Business, Management’s Discussion and Analysis (MD&A), Risk Factors, and Financials. Each of these sections required a separate LLM API call, and I was making those calls one after another in a synchronous loop.

That’s when I hit the wall, compounded by the previous ‘cache’ implementation that wasn’t actually caching anything. The user experience was far from ideal, and I knew I had to do something about it. In this post I’ll show how I transformed a sequential script into a multi-layered concurrent application that feels responsive and powerful, cutting the wait time from a couple of minutes to just seconds.
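As a preview, the core of the change is moving the per-section calls from a synchronous loop onto a thread pool, since each call is independent and I/O-bound; analyze_section below is a stand-in for the real LLM call, not the app’s actual function.

```python
# Sketch of the sequential-to-concurrent change, assuming each section is
# analyzed by an independent, I/O-bound LLM call (analyze_section is a
# stand-in for the real call).
from concurrent.futures import ThreadPoolExecutor
import time

SECTIONS = ["Business", "MD&A", "Risk Factors", "Financials"]

def analyze_section(section: str) -> dict:
    time.sleep(2)                      # stand-in for a slow LLM API call
    return {"section": section, "insights": "..."}

# Before: one call after another, total time ~ the sum of all calls
results_sequential = [analyze_section(s) for s in SECTIONS]

# After: calls run concurrently, total time ~ the slowest single call
with ThreadPoolExecutor(max_workers=len(SECTIONS)) as pool:
    results_parallel = list(pool.map(analyze_section, SECTIONS))
```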

Read More >>