Building Better Filters: A Case Study in Performance Optimization

A user request to add filtering to a location search system seemed straightforward. The existing UI offered only city filtering, forcing users to parse long, unorganized lists. We planned to add intuitive filtering by state, improve the UI, and do some refactoring to keep the code simple and maintainable as the list of search filters grows. This simple feature request surfaced a deep performance issue, one we caught before it ever reached production.

Technical Context: The client’s application is built on Node.js with PostgreSQL, using TypeScript and an ORM for database interactions. The location search handles thousands of records with complex relationships across multiple tables.

Feature Design

Let me walk you through what we built and the critical performance issue we discovered during development.

We redesigned the filter experience by grouping locations logically, such as “Preferred Location” (State/City) and “Current Location” (State/City). Selecting a state dynamically filters the city dropdown to only the relevant cities. City selection takes precedence over state: if a user selects a state incompatible with a chosen city, the city field clears.
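
In code, this dependency comes down to a small piece of state logic. The sketch below is a minimal illustration with invented names (LocationFilter, citiesByState, and applyStateSelection are not from the actual codebase):

```ts
// Hypothetical types and names, for illustration only.
type LocationFilter = {
  state?: string;
  city?: string;
};

// Cities available per state; in the real app this comes from the database.
const citiesByState: Record<string, string[]> = {
  Texas: ['Austin', 'Dallas'],
  Ohio: ['Columbus'],
};

// Applying a state selection: narrow the city options and clear the city
// if it no longer belongs to the selected state.
function applyStateSelection(filter: LocationFilter, state: string): LocationFilter {
  const validCities = citiesByState[state] ?? [];
  const city = filter.city && validCities.includes(filter.city) ? filter.city : undefined;
  return { state, city };
}
```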

We reordered the filters by priority and implemented several UX improvements:

  • using full state names instead of abbreviations
  • adding a single clear button per filter group
  • using visual separators between filter sections
  • syncing all filter parameters to the URL for shareability (sketched below)
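
For the URL syncing, the idea is simply to serialize the active filters into query parameters and parse them back on load. A rough sketch, with hypothetical parameter names rather than the ones we actually used:

```ts
// Hypothetical filter shape and parameter names, for illustration only.
type UrlFilter = {
  preferredState?: string;
  preferredCity?: string;
  currentState?: string;
  currentCity?: string;
};

// Serialize only the filters that are set, so shared URLs stay short and readable.
function filtersToSearchParams(filter: UrlFilter): URLSearchParams {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(filter)) {
    if (value) params.set(key, value);
  }
  return params;
}

// Parse them back when a shared URL is opened.
function searchParamsToFilters(params: URLSearchParams): UrlFilter {
  return {
    preferredState: params.get('preferredState') ?? undefined,
    preferredCity: params.get('preferredCity') ?? undefined,
    currentState: params.get('currentState') ?? undefined,
    currentCity: params.get('currentCity') ?? undefined,
  };
}
```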

Catching the Problem Early

During development, we integrated performance monitoring tools into our local and staging environments. Using PostgreSQL query analyzers and Node.js memory profilers, we tested the new filtering feature against production-sized datasets. The results were concerning: memory usage spiked sharply and scaled poorly as the dataset grew, and query execution times increased non-linearly with the number of records.
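
The checks themselves do not need to be elaborate. The sketch below shows the general shape of such a measurement: wrap the search call, record heap growth and elapsed time, and run it against a production-sized dataset. The function name and signature here are placeholders, not the project's real data-access API:

```ts
import { performance } from 'node:perf_hooks';

// The search function is passed in; searchLocations is a placeholder name.
async function profileSearch(
  searchLocations: (filters: { state?: string; city?: string }) => Promise<unknown[]>,
) {
  // Run node with --expose-gc for a cleaner heap baseline.
  (globalThis as { gc?: () => void }).gc?.();
  const heapBefore = process.memoryUsage().heapUsed;
  const start = performance.now();

  const rows = await searchLocations({ state: 'Texas' });

  const elapsedMs = performance.now() - start;
  const heapDeltaMb = (process.memoryUsage().heapUsed - heapBefore) / 1024 / 1024;
  console.log(
    `rows=${rows.length} time=${elapsedMs.toFixed(0)}ms heapDelta=${heapDeltaMb.toFixed(1)}MB`,
  );
}
```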

Profiling quickly revealed the root cause: the query was aggregating location data for every matching record in the dataset. The subqueries doing the filtering were correct, but some joins were executed too early and too broadly, a recipe for memory exhaustion at scale.

Without our performance testing harness, this would have been a critical production incident. Instead, we caught it during development.

The Fix: Separating Concerns

The solution was to decompose the query logic into smaller, purpose-specific steps, each optimized for its own task. This refactor also included general code cleanup that revealed additional opportunities for simplification and optimization.
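
The exact queries are client-specific, but the shape of the change looks roughly like this: filter and paginate down to a small set of IDs first, then join and aggregate related data only for that page. Here is a minimal sketch using parameterized SQL via node-postgres for clarity (the project itself goes through an ORM, and the table and column names are invented):

```ts
import { Pool } from 'pg';

const pool = new Pool(); // connection settings come from PG* environment variables

// Step 1: apply filters and pagination cheaply, returning only IDs.
// Step 2: aggregate related data for just that page of IDs.
async function searchLocations(state: string | null, city: string | null, limit = 25, offset = 0) {
  const { rows: idRows } = await pool.query<{ id: number }>(
    `SELECT l.id
       FROM locations l
      WHERE ($1::text IS NULL OR l.state = $1)
        AND ($2::text IS NULL OR l.city = $2)
      ORDER BY l.name
      LIMIT $3 OFFSET $4`,
    [state, city, limit, offset],
  );
  const ids = idRows.map((r) => r.id);
  if (ids.length === 0) return [];

  // Aggregation now touches only the current page instead of every matching row.
  const { rows } = await pool.query(
    `SELECT l.id, l.name, json_agg(c.name) AS contacts
       FROM locations l
       LEFT JOIN contacts c ON c.location_id = l.id
      WHERE l.id = ANY($1::int[])
      GROUP BY l.id, l.name
      ORDER BY l.name`,
    [ids],
  );
  return rows;
}
```

Because the aggregation step only ever sees one page of IDs, its memory footprint no longer depends on how many records match the filters.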

The result was dramatic:

  • memory usage dropped by roughly 95%
  • query performance improved by 30–50%

Most importantly, memory consumption is now constant, independent of dataset size.

We also added targeted code comments to document subtle aspects of the implementation, ensuring future developers understand the reasoning behind these changes and avoid similar pitfalls.

Beyond the core fix, we implemented complementary optimizations. We replaced a notInArray helper with NOT EXISTS subqueries, enabling more efficient query plans and correct handling of NULL values. We also added a handful of strategic database indexes to eliminate sequential scans during filtering.
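
The ORM is not the point of this post, so treat the following as an assumption-laden sketch: it uses a Drizzle-style query builder (where a notInArray helper typically lives), an invented schema, and illustrative indexes. Unlike NOT IN, which returns no rows when the excluded set contains a NULL, NOT EXISTS behaves as an anti-join and is usually easier for the planner to optimize:

```ts
import { drizzle } from 'drizzle-orm/node-postgres';
import { pgTable, serial, text, integer } from 'drizzle-orm/pg-core';
import { eq, and, notExists, sql } from 'drizzle-orm';
import { Pool } from 'pg';

// Illustrative schema, not the client's actual tables.
const locations = pgTable('locations', {
  id: serial('id').primaryKey(),
  state: text('state'),
  city: text('city'),
});

const excludedLocations = pgTable('excluded_locations', {
  id: serial('id').primaryKey(),
  locationId: integer('location_id').notNull(),
});

const db = drizzle(new Pool());

// Instead of notInArray(locations.id, <list of excluded ids>), which first
// materializes the excluded ids and trips over NOT IN's NULL semantics,
// the anti-join lets PostgreSQL pick a more efficient plan.
async function visibleLocations(state: string) {
  return db
    .select({ id: locations.id, city: locations.city })
    .from(locations)
    .where(
      and(
        eq(locations.state, state),
        notExists(
          db
            .select({ one: sql`1` })
            .from(excludedLocations)
            .where(eq(excludedLocations.locationId, locations.id)),
        ),
      ),
    );
}

// Supporting indexes so the filter and the anti-join avoid sequential scans
// (illustrative; applied via a migration in practice):
//   CREATE INDEX idx_locations_state ON locations (state);
//   CREATE INDEX idx_excluded_locations_location_id ON excluded_locations (location_id);
```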

Code Quality and Maintainability

With the performance issue resolved, we focused on improving code organization and reducing technical debt. We moved the main fetch function from a monolithic data.ts access file into a dedicated item-data.ts, significantly reducing the original file’s size and grouping all item-related data logic together. We relocated a helper function for sort conditions to follow DRY principles, and we cleaned up the codebase by deleting obsolete filter components that were no longer in use.

Ensuring Stability Through Testing

Our testing strategy validated the new implementation at every level. Unit tests verified specific business logic: filter parameter parsing, priority rules, pagination calculations, and date range handling. Component tests confirmed UI interactions, including button clicks, clear functionality, and role-based visibility. E2E tests using Playwright served as smoke tests for page loads, validated filter results, and used visual regression checks to catch unexpected layout changes.
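
As a flavor of the E2E layer, a Playwright smoke test for the filter page might look roughly like the following; the route, labels, and screenshot name are invented for the example:

```ts
import { test, expect } from '@playwright/test';

// Route, option values, and accessible labels are hypothetical.
test('state filter narrows results and syncs to the URL', async ({ page }) => {
  await page.goto('/locations');

  // Selecting a state should filter the results and update the query string.
  await page.getByLabel('Preferred state').selectOption('Texas');
  await expect(page).toHaveURL(/preferredState=Texas/);
  await expect(page.getByRole('row')).not.toHaveCount(0);

  // Visual regression check to catch unexpected layout changes.
  await expect(page).toHaveScreenshot('locations-filtered-by-state.png');
});
```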

Results and Lessons Learned

With these changes in place, the results were immediate and dramatic. We delivered a feature-rich filtering system with dynamic, intuitive filters. Performance testing showed a 95% memory reduction and a 30–50% speed increase compared to the original implementation. We added dozens of new tests across the unit, component, and E2E layers. We refactored a large, monolithic function, added necessary database indexes, and removed obsolete files, paying down technical debt while shipping new features.

This project reinforced a critical lesson about modern software development: proactive performance testing is non-negotiable for data-intensive features. A query that performs well with a dozen test records can easily fail at production scale, and memory, not just CPU, is often the bottleneck. By integrating performance monitoring tools into our development workflow, we caught and fixed a critical scalability issue before users ever encountered it.