Claude Code in the Wild

From a Luma event to inboxed outreach

Kushagra ran a 5-skill Claude Code pipeline live: scraped a Luma event, enriched LinkedIn profiles, qualified 40 GTM engineers, published live profile pages, found verified emails, and sent personalized outreach. End to end in one session.

📅 April 22, 2026 ⏰ 64 minutes 🚀 Intermediate
▶️ Watch the Full Recording

Clay Bootcamp • Claude Code Cohort

Want to build outbound systems like this yourself?

The Clay Bootcamp Claude Code cohort is a hands-on program where you learn to build production-grade GTM workflows. Scraping, enrichment, qualification, copywriting, send. No coding background required.

View the cohort →
Coached by GTM engineers like Kushagra • Clay Bootcamp

Meet Kushagra Tiwari

Kushagra Tiwari

Principal GTM Engineer at Starbridge • Coach at Clay Bootcamp

Kushagra ran his own GTM engineering agency for two years before going in-house at Starbridge as a principal GTM engineer. He coaches at Clay Bootcamp and was a Clay Cup competitor last year. His thing is end-to-end Claude Code pipelines for outbound, the kind that take you from a list source to a sent, personalized email without you touching anything in between.

The side project he showed in this session is gtme.jobs, a job board and directory for GTM Engineers. Every page on it was published by the same Claude Code skills he ran live.

What we covered

🎯

The full outbound loop, in one terminal

List build, enrichment, qualification, copy, and send. Five skills chained together so one prompt can run the whole pipeline.

🧩

Skills as Lego pieces

Every step was its own slash command. The profile writer can fit any landing page, the qualifier can become an LLM step, the email finder can sit inside any sequencer flow.

📥

Stream data, do not hoard it

API calls run in background scripts and write straight to a database (Convex) or a local CSV. The terminal never sees the payload, so you do not eat your context window.

🌐

A live directory that writes itself

40 attendees got their own published landing page on gtme.jobs in one go. The site updates the moment a profile lands, no deploy, no refresh.

The 5 skills, exactly as Kushagra ran them

Each step is its own skill. Each skill calls a script that runs in the background and writes to Convex or a local CSV. You can run them one at a time the way Kushagra did, or chain all 5 into a single command.

1 /luma-scraper Scrape every attendee from a Luma event

Paste a Luma URL. Claude Code spins up a dedicated Chrome session via the Chrome DevTools MCP, opens the attendee list, scrolls the full roster, then opens each attendee profile to grab the LinkedIn and Twitter URLs they linked. Not everyone connects their LinkedIn, so this is also where you discover who is reachable. Built on top of Andrew Teasdale's open-source Luma scraper.

~11s
60 attendees
~22s
133 attendees
~50%
have LinkedIn linked
2 /linkedin-enricher One LinkedIn URL, a full career snapshot

Takes the LinkedIn URLs from step 1 and enriches each one via RapidAPI (Vikrant's Professional Network Data endpoint). You get headline, current role + company, every past position, top skills, location, certifications, and bio. Runs 10 in parallel with polite pacing to stay under the rate limit. People without a LinkedIn are skipped, so you only spend credits on profiles that can actually become emails.

10×
parallel workers
~2-3s
per profile
1 credit
per person enriched
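The concurrency pattern here (run up to 10 lookups at once, pace each call, skip anyone without a LinkedIn URL) can be sketched in a few lines of Python. This is a minimal illustration of the pattern, not Kushagra's actual script: `enrich_profile` is a placeholder for the real RapidAPI call, and the pacing value is an assumption.

```python
import time
from concurrent.futures import ThreadPoolExecutor

MAX_WORKERS = 10      # parallel enrichment calls, as in the live run
PACE_SECONDS = 0.25   # polite per-call gap to stay under the rate limit (assumed value)

def enrich_profile(url: str) -> dict:
    # Placeholder for the real RapidAPI request. A real version would
    # GET the profile and parse headline, roles, skills, location, bio.
    time.sleep(PACE_SECONDS)
    return {"linkedin_url": url, "headline": "stub"}

def enrich_all(attendees: list[dict]) -> list[dict]:
    # Skip anyone without a LinkedIn URL so no credits are wasted.
    reachable = [a["linkedin_url"] for a in attendees if a.get("linkedin_url")]
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        return list(pool.map(enrich_profile, reachable))

attendees = [{"name": "A", "linkedin_url": "https://linkedin.com/in/a"},
             {"name": "B", "linkedin_url": None}]
results = enrich_all(attendees)
print(len(results))  # 1 — only the reachable profile was enriched
```

The filter-before-spend step is what keeps the credit count equal to the number of reachable people, not the number of attendees.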
3 /gtme-profile-writer Qualify, then write, then publish

First, qualify. Each enriched profile gets scanned for GTM signals (GTM Engineer, Clay, RevOps). Anyone who does not match is dropped before a single OpenAI token gets spent. Then write. The survivors get a one-liner, a 2-sentence career narrative, and credentials, all grounded in their LinkedIn data so nothing is invented. The prompt enforces tense, bans clichés (impressive, passionate, driven), and blocks repetition. The moment a profile lands in Convex, the live directory at gtme.jobs/directory updates. No deploy, no refresh.

40
qualified
34
disqualified
< 1 min
to publish all 40
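The qualify-before-write gate is cheap by design: a keyword scan over the enriched profile runs before any OpenAI call. A minimal sketch of that gate, assuming the signal list from the session (the exact matching logic in the real skill is not shown):

```python
# GTM signals scanned for, per the session: GTM Engineer, Clay, RevOps.
GTM_SIGNALS = ("gtm engineer", "clay", "revops")

def qualifies(profile: dict) -> bool:
    # Cheap keyword pass over headline + skills; no LLM tokens are spent
    # on profiles that would be dropped anyway.
    haystack = " ".join(
        [profile.get("headline", "")] + profile.get("skills", [])
    ).lower()
    return any(signal in haystack for signal in GTM_SIGNALS)

profiles = [
    {"headline": "Principal GTM Engineer at Starbridge", "skills": []},
    {"headline": "Barista", "skills": ["latte art"]},
]
qualified = [p for p in profiles if qualifies(p)]
print(len(qualified))  # 1 — the barista is dropped before any token is spent
```

Only the survivors of this gate reach the copywriting pass, which is why 34 disqualified profiles cost nothing.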
4 /enrich-email Verified emails only. No bounces, no guesses.

Only runs on the 40 already-qualified profiles. Two-step lookup via Findymail: first try is name + current company, fallback is LinkedIn URL. Catches people that single-path approaches miss. Every address gets a separate delivery check. Anything that fails is flagged and never flows to the outreach step, so your sender reputation stays clean.

~67%
found
~52%
verified & kept
0
bounces sent
5 /launch-campaign Personalized outreach, written and sent

Two AI passes. First, research: read the person's full career and the open roles on gtme.jobs, find the non-obvious angle, surface the top 3 job matches and the specific reason each one fits. Second, copy: turn that research into one peer-voice sentence with strict rules and banned phrases, with automatic rewrites if anything trips them. For the demo, send goes out via Gmail through the Google Workspace CLI. In production, route this through a real sequencer (EmailBison is Kushagra's pick).

2-step
research + copy
3 jobs
matched per email
< $1
total run cost
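The "strict rules with automatic rewrites" part of the copy pass is a simple loop: check the draft against the banned list, re-prompt if it trips. A minimal sketch; `generate` stands in for the OpenAI call, and the retry count is an assumption.

```python
# Banned clichés named in the session: impressive, passionate, driven.
BANNED = ("impressive", "passionate", "driven")

def passes_copy_rules(line: str) -> bool:
    lowered = line.lower()
    return not any(phrase in lowered for phrase in BANNED)

def write_line(research: dict, generate, max_rewrites: int = 3) -> str:
    # Ask the copy model for a draft, re-prompting automatically
    # whenever a draft trips a banned phrase.
    draft = generate(research)
    for _ in range(max_rewrites):
        if passes_copy_rules(draft):
            return draft
        draft = generate(research, avoid=BANNED)
    return draft  # best effort after max_rewrites

drafts = iter(["You have an impressive background.",
               "You shipped the Clay integration at Acme last quarter."])
fake_generate = lambda research, avoid=None: next(drafts)
print(write_line({}, fake_generate))  # second draft passes the check
```

In the real skill the rewrite prompt would also carry the tense and repetition rules, not just the banned list.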
🔗

See the actual published profiles

Every attendee who passed qualification got a live landing page. The directory updates the moment a profile lands in Convex.

Open gtme.jobs →

How Kushagra builds Claude Code pipelines

Skills are Lego pieces

Each skill should do one job and do it well. The profile writer can fit any landing page. The qualifier can be deepened into an LLM step. Build them so they snap into other pipelines without rewriting.

Off-the-shelf primitives, zero custom infra

Claude Code orchestrates. Chrome DevTools MCP scrapes. RapidAPI enriches. Findymail verifies. OpenAI writes. Convex stores. Gmail sends. Nothing was custom-built that an existing primitive already does.

Heavy work runs in background scripts

Never let an API call dump its full response into the terminal. Write a skill that calls a script, and have the script push results straight to your database or a local CSV. Your terminal context stays clean and your runs scale to thousands of rows.

Stream, do not batch-and-hope

Each row gets saved the moment it is processed, both to Convex and to a local CSV. Long-running background work can lose data. Streaming protects you, and gives you two places to audit.
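The save-as-you-go pattern can be sketched in a few lines: write each row to disk the moment it finishes, then mirror it to the database. This is an illustrative sketch, not the actual script; `enrich` and `db_insert` are placeholders (the latter standing in for a Convex mutation).

```python
import csv

def process_rows(rows, enrich, db_insert, csv_path="run_output.csv"):
    # Stream, do not batch-and-hope: each row lands in the CSV and the
    # database the moment it is processed, so a crash mid-run loses
    # nothing already done, and you have two places to audit.
    with open(csv_path, "w", newline="") as f:
        writer = None
        for row in rows:
            result = enrich(row)
            if writer is None:
                writer = csv.DictWriter(f, fieldnames=result.keys())
                writer.writeheader()
            writer.writerow(result)
            f.flush()          # row is on disk before the next call starts
            db_insert(result)  # stands in for the Convex mutation

saved = []
process_rows([{"name": "A"}, {"name": "B"}],
             enrich=lambda r: {**r, "ok": True},
             db_insert=saved.append)
print(len(saved))  # 2 — both rows persisted as they were processed
```

The `flush` after every row is what makes the local CSV a reliable audit trail even if the background script dies halfway through.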

Test 10 rows like you would in Clay

Run 10 rows multiple times. Look at the output. Once you trust it, run 5,000. Same instinct as Clay, just a different surface. The work of validation does not go away because you switched tools.

Do not send from Gmail in production

Gmail was used in the demo for speed. Real outbound goes through a dedicated sequencer (EmailBison, Instantly, Smartlead). Warmup, inbox rotation, reply detection, deliverability. That is their job, not Gmail's.

Questions Kushagra answered

Did you have to be the Luma event admin to scrape it?
No. The skill works off the public Luma URL. It reads what is rendered on the attendee list and the public attendee profiles. You do not need admin access to the event.
Where did the Luma scraper come from? Can we get our hands on it?
Originally built by Andrew Teasdale, who released it for free. Kushagra took that as a starting point, paired it with the Chrome DevTools MCP, and wrapped it in a Claude Code skill so it runs as part of the wider pipeline. He will share the spec so you can rebuild a similar version yourself.
Why Convex instead of Supabase?
For a small-to-medium side project where you want the directory to feel reactive (a profile lands in the database, the page updates immediately), Convex needs less plumbing than Supabase. Function logic lives inside the code, not in SQL queries. For larger production datasets, Supabase still wins on scale. Either choice works. Kushagra just liked Convex for this one.
Why Opus 4.7? Could you run this on a smaller model?
No specific reason. Once the skills are built, the model is mostly just triggering scripts, so a smaller model (even Haiku) would handle the runs fine. Kushagra defaults to Opus 4.7 while building because the workflow has to work end-to-end, but downgrades for execution.
How does the LinkedIn skill know someone uses HubSpot, Salesforce, Clearbit?
It reads it from the LinkedIn profile data: experience entries, the about section, headline, skills. Nothing is invented. If a tool is not on the profile, it does not show up in the editorial copy.
For people without a LinkedIn URL, can you find one?
Yes. Add a step that takes their name plus any company info you have and runs a SERP query (cheap), or use a dedicated researcher like Parallel AI / Perplexity (more expensive but more accurate). LLMs with their own web search work too, just slower and pricier than running a SERP query and parsing the result with a small model.
Do you not blow through your token budget at scale?
The whole live run cost under $1 for 40 profiles. The cost-intensive steps are LinkedIn enrichment, email lookup, and copywriting research, and all of those use cheaper models and cheaper APIs by design. The narrowing also helps: jobs are filtered to the past 30 days, profiles are pre-summarized, so the matching step processes summaries, not full LinkedIn dumps.
How do you audit results visually, like you would in Clay?
This is where Claude Code is harder than a SaaS UI. The fix: keep VS Code open alongside the terminal, drop into the local CSV or Convex dashboard, and dig into the data directly. Same instinct as Clay (run 10 rows, validate output, scale up), different surface for the validation.
Could the same pipeline handle a recruiting use case (ATS → candidate matching)?
Yes, and probably very well. Same skeleton: scrape a candidate list, enrich profiles, qualify, match against open roles, write outreach. Recruiting is one of the most natural fits for this pattern. Cost stays low because the heavy steps use cheap models. The depth (podcast mentions, news, company context) is up to you.

Every tool that came up

Everything from the session

Session recording
Full 64-minute Fathom recording
Watch on Fathom →
Slide deck
The 8-slide walkthrough Kushagra used in the session
Open deck →
gtme.jobs (the live directory)
See the actual profile pages the pipeline published live
Open site →
Andrew Teasdale's Luma scraper
The original open-source scraper Kushagra built on top of
View on LinkedIn →
5-skill spec (coming soon)
Kushagra is putting together a detailed spec you can feed straight into Claude Code to rebuild this
Coming soon
Claude Code cohort page
One-week hands-on program. Build pipelines like this one yourself.
View cohort →
Clay Bootcamp

Ready to build your own outbound pipeline?

The next Claude Code cohort runs for one week. You build real pipelines, get real feedback, and walk out with skills you can run the next day.

Join the Cohort →

Clay Bootcamp • claudecode.claybootcamp.com

Questions about the cohort?

Reach out to either of us on LinkedIn. Always happy to chat.

Heather Melton
Head of Community Strategy, Clay Bootcamp
Connect on LinkedIn
Kushagra Tiwari
Principal GTM Engineer, Starbridge • Coach, Clay Bootcamp
Connect on LinkedIn