Aperilex Infrastructure Layer Completion: An Intense Learning and Development Journey

I’m a week into rebuilding Aperilex with Claude Code, focusing on the domain and infrastructure layers to build a solid foundation for the app. The experience has been intense, full of learning and development challenges. I never thought this kind of speed was possible, but here we are. Today marks the completion of both phases, and I’m excited to see how it all comes together.

Read More >>


Aperilex Architecture Update: Integrating EdgarTools and a Hierarchical LLM Pipeline in the Infrastructure Layer

On the fourth day of rebuilding with Claude Code, I started work on the edgartools integration and the LLM pipeline in the infrastructure layer, overhauling the core of Aperilex. The main thrust was to pivot from generalized analysis to a highly structured, data-driven workflow, which involved significant refactoring of our data ingestion and LLM orchestration layers.
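As a taste of the new ingestion layer’s direction, here is a minimal sketch of fetching a filing with edgartools; the identity string and ticker are placeholders, and the actual Aperilex wiring is more involved:

```python
from edgar import Company, set_identity

# EDGAR requires a declared identity (name + email) on every request
set_identity("Your Name your.email@example.com")

# Pull the latest 10-K for a ticker and extract its text for downstream LLM analysis
company = Company("AAPL")
filing = company.get_filings(form="10-K").latest()
print(filing.form, filing.filing_date)
text = filing.text()  # raw filing text, ready to be chunked and analyzed
```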

Read More >>


Domain Layer Implementation: Building and Revising Aperilex's Core with Claude Code

On the third night of coding with Claude Code, I focused on implementing the domain layer of Aperilex as outlined in the project's PHASE.md, refining the core logic and data structures that will drive our SEC Filing Analysis Engine. This post details the process and the decisions made during this phase.

Read More >>


Learning with Claude Code: Planning the Rewrite of the SEC Analysis Project

I knew this moment would come at some point while developing the SEC Analysis Project. Last weekend I started refactoring the codebase to improve code quality and planned a roadmap for better architecture and security. Instead of applying band-aid fixes, I decided today on a complete rewrite, though this time, of course, not alone.

Read More >>


From Static to Dynamic - A Foundational Refactor for Synapse

Technical debt often manifests as rigidity. Our application’s journal feature was a prime example, built on a hardcoded assumption that all activities were either “Work” or “Life”. This was simple to implement initially but impossible to scale. Every user’s life is more nuanced than that binary choice.

This post outlines a foundational refactor to move from this static system to a dynamic, user-driven one.
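To make the shape of the change concrete, here is a minimal sketch of the target data model, assuming a SQLAlchemy-style schema; the table and column names are illustrative, not Synapse’s actual schema:

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)

# Before: each activity carried a hardcoded "Work" or "Life" string.
# After: categories live in their own table and are created by each user.
class Category(Base):
    __tablename__ = "categories"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"), nullable=False)
    name = Column(String(50), nullable=False)  # "Work", "Life", "Health", ...

class Activity(Base):
    __tablename__ = "activities"
    id = Column(Integer, primary_key=True)
    description = Column(String, nullable=False)
    category_id = Column(Integer, ForeignKey("categories.id"), nullable=False)
    category = relationship(Category)
```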

Read More >>


Building AI-Powered Weekly Reports in Synapse

Raw data is valuable, but insights are transformative. Synapse has become good enough at capturing daily activities; the next logical step has always been to synthesize that data into high-level understanding. Today’s task is to implement the next feature on the roadmap: AI-Powered Weekly Reports.

This feature moves beyond simple data retrieval and uses an LLM to analyze a week’s worth of activities, generating a concise, structured summary of your time allocation, key accomplishments, and emerging trends.
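A minimal sketch of the core call, assuming Synapse reaches its local model through the ollama Python client; the prompt wording and model tag here are illustrative:

```python
import ollama

def weekly_report(activities: list[str]) -> str:
    """Summarize a week of journal activities into a structured report."""
    prompt = (
        "Summarize the following week of journal activities. Return three "
        "sections: Time Allocation, Key Accomplishments, Emerging Trends.\n\n"
        + "\n".join(f"- {a}" for a in activities)
    )
    response = ollama.chat(
        model="gemma3n:latest",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]

print(weekly_report([
    "Mon: refactored the authentication service",
    "Tue: gym session, then fixed the CI pipeline",
]))
```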

Read More >>


Optimizing Semantic Search: A Two-Stage Reranker

Previously, we built a core feature into Synapse: an agentic semantic search system that allows users to ask natural language questions about their activities. (I’m still not sure what actually makes a RAG a RAG, or how it differs from semantic search; the two clearly intersect. And as Andrew Ng asked in one of his interviews, how do you decide whether an agent is truly autonomous? Maybe the definition doesn’t matter, as long as the agent does its job.) The initial version was a standard three-step pipeline: a Planner agent to generate keywords, a Retrieval step that queries the database with those keywords, and a Synthesizer agent to generate a summary.

This worked, but it suffered from the classic limitation of keyword search—it lacked true semantic understanding. A query for “time spent on backend improvements” might miss an entry described as “refactored the authentication service.”
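The fix is a second stage that reranks the keyword hits by embedding similarity before synthesis. A minimal sketch of that reranking stage, assuming a local embedding model served by Ollama; the model name and function shapes are illustrative:

```python
import numpy as np
import ollama

def embed(text: str) -> np.ndarray:
    # Any local embedding model works; nomic-embed-text is a common choice.
    result = ollama.embeddings(model="nomic-embed-text", prompt=text)
    return np.array(result["embedding"])

def rerank(query: str, candidates: list[str], top_k: int = 5) -> list[str]:
    """Stage two: reorder keyword-retrieved entries by cosine similarity."""
    q = embed(query)
    q /= np.linalg.norm(q)
    scored = []
    for entry in candidates:
        v = embed(entry)
        scored.append((float(np.dot(q, v / np.linalg.norm(v))), entry))
    scored.sort(reverse=True)  # highest similarity first
    return [entry for _, entry in scored[:top_k]]
```

With this in place, “time spent on backend improvements” can surface “refactored the authentication service” even though the two share no keywords.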

Read More >>


Building an Agentic RAG Pipeline for Journal Analysis

Today I implemented one of the new features proposed on the roadmap for my personal journal analysis app: a semantic search and synthesis engine. Instead of a traditional keyword search, I opted for an “Agentic RAG” (Retrieval-Augmented Generation) pipeline. This approach leverages a local LLM (gemma3n:latest) not just to generate text, but to orchestrate the entire search process. This post outlines the design thinking behind this implementation and its future potential.
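To show what “orchestrating” means here, a minimal sketch of the Planner step, assuming the ollama Python client; the prompt and parsing are illustrative, not the app’s exact code:

```python
import json
import ollama

def plan_keywords(question: str) -> list[str]:
    """Planner agent: have the local LLM turn a question into search keywords."""
    response = ollama.chat(
        model="gemma3n:latest",
        messages=[{
            "role": "user",
            "content": (
                "Generate 3-5 database search keywords for this question. "
                f"Reply with a JSON array of strings only.\nQuestion: {question}"
            ),
        }],
        format="json",  # constrain the model to machine-parseable output
    )
    data = json.loads(response["message"]["content"])
    # Tolerate models that wrap the array in an object
    return data if isinstance(data, list) else data.get("keywords", [])

keywords = plan_keywords("How much time did I spend on backend improvements?")
# The keywords drive the Retrieval step; a Synthesizer call then summarizes the matches.
```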

Read More >>