Avi Sardana
AI WORKFLOW · AI AGENT · AUTOMATION · Built 2026

Job Search AI Agent

Job searching is broken for people who care about fit. I built an agent that collects 8,000+ daily postings, scores each one for relevance, and delivers a ranked digest to my inbox every morning.

[Digest preview: top three job matches, 8,204 postings scanned today]

The Problem

I was spending two to three hours a day manually checking job boards, seeing the same stale postings over and over, and struggling to find roles that actually matched what I was looking for. The pressure to apply quickly makes it worse. Early applications have a better shot at being seen, so you end up moving fast and casting a wide net just to stay competitive, even when you know the fit isn't perfect. I wanted a way to stay on top of new postings without sacrificing the selectivity that actually makes applications worth sending.

What I Built

The Job Agent runs automatically every day, collecting job postings from 9 sources simultaneously: a mix of targeted company watchlists on Ashby, Greenhouse, and Lever, plus RSS feeds and open APIs that surface roles from companies I would not have thought to monitor directly.

Every run pulls over 8,000 raw postings, filters out international roles, intern titles, and roles requiring senior-level experience, then scores each passing job against a custom weighted rubric covering title match, recency, location, and company signal. A ranked digest of up to 20 roles lands in my inbox with a full score breakdown per job, and roles seen in previous runs are automatically skipped, so the digest only surfaces what's actually new.

Every result logs to a Google Sheets CRM with the score, source, and link already filled in; I just update the applied and response columns as I move through the process.

The part I'm most proud of is the on-demand application packet generator. One command with any job ID searches my resume and STAR stories for the content most relevant to that specific role and returns the top matching sections with an explanation of why each was surfaced. The system finds the most relevant content, but what I actually do with it stays my call.
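The filter-then-score step reduces to a small pure function. A minimal sketch; the weights, target titles, exclusion list, and watchlist below are illustrative stand-ins, not my real tuned values:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Posting:
    title: str
    company: str
    location: str
    posted: date
    url: str

# Hypothetical rubric weights -- the real ones are tuned to my search.
WEIGHTS = {"title": 0.40, "recency": 0.25, "location": 0.20, "company": 0.15}
TARGET_TITLES = ("product manager", "program manager")    # illustrative
EXCLUDE_WORDS = ("intern", "senior", "staff", "principal")
WATCHLIST = {"ExampleCo"}                                 # hypothetical company

def passes_filters(p: Posting) -> bool:
    """Drop intern titles, senior-level roles, and non-US postings."""
    t = p.title.lower()
    if any(w in t for w in EXCLUDE_WORDS):
        return False
    loc = p.location.lower()
    return "united states" in loc or "remote" in loc

def score(p: Posting, today: date) -> float:
    """Weighted 0-1 score across the four rubric dimensions."""
    parts = {
        "title": 1.0 if any(t in p.title.lower() for t in TARGET_TITLES) else 0.0,
        # full recency credit fades out over a week
        "recency": max(0.0, 1.0 - (today - p.posted).days / 7),
        "location": 1.0 if "remote" in p.location.lower() else 0.5,
        "company": 1.0 if p.company in WATCHLIST else 0.3,
    }
    return sum(WEIGHTS[k] * parts[k] for k in WEIGHTS)

def rank(postings, today, limit=20):
    """Filter, score, and return the top `limit` roles for the digest."""
    kept = [p for p in postings if passes_filters(p)]
    return sorted(kept, key=lambda p: score(p, today), reverse=True)[:limit]
```

Keeping the per-dimension parts in a dict is also what makes the per-job score breakdown in the digest cheap to produce.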

How I'd Evolve It

The most obvious next step is closing the feedback loop. Right now I can see what scores high, but I don't have a systematic way to track whether those results are actually useful. How often do I apply to something in the top results? How many of those move forward? What am I consistently skipping even when the score is high? Answering those questions would let me tune the scoring weights based on what actually predicts a good fit rather than on what I assumed when I built the rubric.

I'd also surface a short explanation per job showing why it ranked where it did, so the system stays transparent and easy to override. Longer term, I'd let the user adjust preferences directly, things like which titles to prioritize or how flexible the location requirement is, so the agent adapts as your search evolves over time.
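Those feedback questions reduce to a couple of rates computed over the CRM rows. A rough sketch, assuming a flat export of the sheet with hypothetical column names:

```python
from dataclasses import dataclass

# Hypothetical flat export of the Google Sheets CRM -- field names are a guess.
@dataclass
class CrmRow:
    score: float
    applied: bool
    response: bool   # any positive signal: recruiter reply, interview, etc.

def feedback_stats(rows, top_n=20):
    """Apply rate and response rate within the top-N scored rows --
    the numbers that would drive re-tuning of the rubric weights."""
    top = sorted(rows, key=lambda r: r.score, reverse=True)[:top_n]
    applied = [r for r in top if r.applied]
    apply_rate = len(applied) / len(top) if top else 0.0
    response_rate = sum(r.response for r in applied) / len(applied) if applied else 0.0
    return {"apply_rate": apply_rate, "response_rate": response_rate}
```

A low apply rate among top-scored roles would point at overweighted dimensions; a low response rate among applications would point at the rubric rewarding the wrong signal entirely.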