Marketplaces are one of the hardest products to build. You have two sets of users with different needs — in our case, worker publishers and API consumers — and you need to create enough value for both that neither side ever has a reason to leave.
We’ve been building Seek API for the past year: a marketplace where developers publish callable worker functions, set prices per run, and earn from usage. Here’s what we’ve learned about making it work.
Why we built this
The origin story is simple: we were building automation tools and kept running into the same friction. Good scrapers, good AI wrappers, good data processors existed on GitHub — but they weren’t callable. You had to clone the repo, understand the dependencies, run it locally, build your own API wrapper, and deploy your own infra.
The gap wasn’t code quality. It was distribution and productization.
There was no standard way to say: “Here is my function. Call it via HTTP with an API key. Pay per execution.” That’s what we set out to build.
The cold start problem
Every marketplace faces the cold start problem: you need supply to attract demand, and demand to attract supply. Without workers, buyers leave. Without buyers, publishers don’t bother.
We solved this in three ways:
1. Eat your own cooking
We published the first 12 workers ourselves. Not minimum viable demos — genuinely useful workers we needed for our own tooling: an email validator, a LinkedIn scraper, a PDF text extractor, a website tech detector.
This served two purposes. First, it primed the marketplace with real inventory. Second, it forced us to use our own developer experience — which revealed dozens of painful edges before we had public users.
2. Invite early creators with guaranteed upside
We onboarded 20 early creators under a “launch partner” program. They got:
- Direct Slack support
- Early access to the analytics dashboard
- 80% revenue share (versus the standard 70%) for the first 6 months
The stronger revenue share was the key incentive. Creators who were on the fence became enthusiastic when they saw their revenue share was near the top of any comparable platform.
3. Treat the first 50 consumers as design partners
We personally onboarded the first 50 API consumers. For each of them, we understood their use case in detail, helped them find the right worker, and told them directly when something was missing. Those conversations shaped our search, filtering, and worker card UI more than any user research session would have.
Architecture decisions that mattered
Async-first
We decided early that all jobs are asynchronous, even for workers that complete in 500ms. This was controversial — “just make it sync for fast workers” was a common early suggestion.
We held the line, and it was right. Async-first means:
- No timeout footguns. Workers can run up to 60 minutes without any change to the API contract.
- Retry safety. Callers retry polls, not expensive work.
- Consistent API shape. `POST /jobs` returns `job_uuid`. Always. No special-casing fast workers.
- Better observability. Every job is a discrete record with a start time, end time, duration, cost, and status.
The price is a small amount of polling overhead. Worth it.
Credits as the unit of value
We use credits instead of dollars as the billing primitive. One credit = $0.001. Workers set their price in fractions of USD; the platform converts to credits.
Why credits?
- They abstract currency. International users don’t have to think in USD cents. “This costs 5 credits” reads more clearly than “$0.005.”
- They decouple price perception from cost changes. We can adjust the dollar-to-credit conversion in response to costs without exposing raw price changes to users.
- They enable platform fees cleanly. Deducting 30% is a simple multiplication on credits, not a currency conversion.
- They support gifting and promotions. “100 free credits on signup” is cleaner than “$0.10 free.”
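The arithmetic is simple enough to show directly. A sketch of the conversion and fee split, using the figures above (1 credit = $0.001, 70% standard creator share, 80% for launch partners); the function names are ours, not the platform's:

```python
CREDIT_USD = 0.001  # 1 credit = $0.001, per the conversion above

def usd_to_credits(usd):
    """Convert a dollar price into whole credits."""
    return round(usd / CREDIT_USD)

def split_run(price_usd, creator_share_pct=70):
    """Split one run's price into (total, creator, platform) credits.
    The platform fee is a simple integer multiplication on credits."""
    credits = usd_to_credits(price_usd)
    creator = credits * creator_share_pct // 100
    return credits, creator, credits - creator
```

Keeping the split in integer credits (rather than fractional dollars) is what makes the 30% deduction a plain multiplication with no currency rounding.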
Deployment as a first-class feature
Many API platforms treat deployment as an afterthought. You upload code and hope. We invested early in making deployment fast, predictable, and debuggable.
Our deployment pipeline:
- CLI zips the worker directory
- Uploads to S3 (presigned URL)
- Triggers a CodeBuild job
- Installs dependencies in the Lambda layer
- Deploys to Lambda with a versioned alias
- Runs a smoke test (`HEAD /v1/workers/:id`)
- Updates the worker record to ACTIVE
Total time: 60–90 seconds. Failures surface build logs instantly.
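The client side of the first two steps can be sketched in a few lines. The zip-and-upload flow follows the list above; the callables standing in for the platform API and the S3 PUT are hypothetical:

```python
import io
import pathlib
import zipfile

def zip_worker_dir(path):
    """Zip a worker directory in memory (step 1 of the pipeline above)."""
    buf = io.BytesIO()
    root = pathlib.Path(path)
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(root.rglob("*")):
            if f.is_file():
                zf.write(f, f.relative_to(root))
    return buf.getvalue()

def deploy(path, get_presigned_url, put_bytes, trigger_build):
    """Client-side sketch: zip, upload via the presigned URL, start the build.
    The three callables stand in for HTTP calls to the platform and S3."""
    archive = zip_worker_dir(path)
    url = get_presigned_url()   # platform issues a presigned S3 URL
    put_bytes(url, archive)     # HTTP PUT of the zip to that URL
    return trigger_build()      # platform kicks off the CodeBuild job
```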
The key insight: deployment confidence is a prerequisite to creator retention. If creators don’t trust that their code will run exactly as they deployed it, they’ll stop deploying.
Incentive design: where most marketplaces fail
The hardest part of building a marketplace isn’t the technology. It’s designing incentives that keep both sides engaged over time.
For creators: the quality signal problem
In a marketplace without quality signals, race-to-the-bottom pricing wins. Cheap, low-quality workers crowd out well-built ones. Buyers have no way to distinguish.
Our approach:
- Worker ratings (coming Q2): callers can rate workers after successful runs
- Run count visibility: social proof that a worker is trusted
- Error rate tracking: workers with high error rates get a warning badge
- Featured workers: editors curate a “verified” collection of high-quality workers
We treat creator quality as a compounding asset. A well-maintained worker accumulates reviews and runs. A broken worker accumulates failure counts and eventually gets deprioritized in search.
For consumers: the discovery problem
With hundreds of workers, search quality is critical. Our search is based on:
- Exact name match (highest weight)
- Category match
- Tag overlap
- Run count (popularity signal)
- Error rate inverse (quality signal)
We deliberately don’t weight revenue share or creator status into search — that would create a pay-to-rank dynamic that destroys consumer trust.
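A toy version of this ranking makes the weighting concrete. The five signals are the ones listed above; the weights and field names are illustrative, not our production values:

```python
import math

def score_worker(query, worker):
    """Rank a worker against a query using only the listed signals:
    exact name match, category, tag overlap, run count, error rate."""
    q = query.lower()
    score = 0.0
    if worker["name"].lower() == q:
        score += 10.0                        # exact name match: highest weight
    if q in worker.get("category", "").lower():
        score += 3.0                         # category match
    score += 2.0 * len(set(q.split()) & set(worker.get("tags", [])))
    score += math.log1p(worker.get("runs", 0))       # popularity signal
    score *= 1.0 - worker.get("error_rate", 0.0)     # quality discount
    return score
```

Note there is no term for revenue share or creator status — exactly the pay-to-rank dynamic the ranking is designed to avoid.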
For both: the trust problem
Marketplaces live and die on trust. Consumers need to trust that workers are safe (no malware, no data exfiltration). Creators need to trust that they’ll be paid correctly.
Our trust infrastructure:
- Workers run in isolated Lambda environments with no cross-worker access
- Outbound network is allowed but logged
- We scan worker code for obvious exfiltration patterns at deploy time
- Creator payouts are computed from immutable job records (not creator-reported)
- Payout reconciliation is auditable via the `/v1/me/payouts` API
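Computing payouts from immutable job records reduces to an aggregation over those records. A sketch with assumed field names (`creator_id`, `status`, `cost_credits`), using integer credit math so the 70% share never depends on float rounding:

```python
from collections import defaultdict

def compute_payouts(jobs, creator_share_pct=70):
    """Aggregate each creator's earned credits from immutable job records.
    Only SUCCEEDED jobs pay out; the result is in credits (multiply by
    $0.001 per credit to get USD). Field names are assumptions."""
    earned = defaultdict(int)
    for job in jobs:
        if job["status"] == "SUCCEEDED":
            earned[job["creator_id"]] += job["cost_credits"]
    # Floor each creator's share in integer credits; the platform keeps the rest.
    return {c: credits * creator_share_pct // 100
            for c, credits in earned.items()}
```

Because the input is the job ledger itself rather than creator-reported figures, any payout can be re-derived and audited from first principles.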
What we’d do differently
We underestimated SDKs
We launched with a pure REST API and assumed developers would “just use curl.” Some did. But most wanted client libraries. We lost creators who bounced because they didn’t want to write their own Node.js wrapper.
We now maintain official SDKs for Node.js and Python. In retrospect, we should have shipped them before the public launch.
We over-complicated the pricing model
Our first pricing had 6 plans with 12 feature dimensions. Decision fatigue was killing conversions. We cut it to 4 plans with 5 meaningful limits. Conversions improved 34%.
The lesson: in B2B developer tools, simplicity is a feature. If you can’t explain your pricing in two sentences, it’s too complicated.
We should have launched the blog earlier
We launched the product before the blog. Big mistake. Our organic traffic was near zero for the first 4 months. We wrote 20 posts in month 5 and saw a 3x increase in signups from organic within 8 weeks.
Technical content — tutorials, architecture guides, use case walkthroughs — is the highest-ROI marketing for developer tools. Ship it early, ship it consistently.
The metrics that matter
After a year of running this marketplace, here are the metrics we watch closely:
| Metric | Why it matters |
|---|---|
| Worker publish rate | Health of the supply side |
| Worker error rate | Quality of the supply side |
| Consumer second run rate | Did the first run succeed? Did they find value? |
| Revenue share per creator | Are creators earning enough to keep publishing? |
| Search-to-job conversion | Is discovery working? |
| Time-to-first-successful-job | Activation metric for new consumers |
The metric we watch most closely is time-to-first-successful-job for new API consumers. If a developer signs up and can’t run a job successfully within 5 minutes, something is broken — the docs, the onboarding, the available workers, or the API itself.
Getting this metric below 2 minutes was one of the highest-leverage improvements we made.
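Measuring this is straightforward once signup and job timestamps are logged. A sketch with assumed record shapes, reporting the median and the share of new consumers under the 2-minute bar:

```python
import statistics

def time_to_first_success(signups, jobs):
    """Per consumer: seconds from signup to their first SUCCEEDED job.
    `signups` maps consumer_id -> signup time (epoch seconds);
    `jobs` are (consumer_id, timestamp, status) tuples."""
    first = {}
    for consumer, ts, status in sorted(jobs, key=lambda j: j[1]):
        if status == "SUCCEEDED" and consumer not in first:
            first[consumer] = ts - signups[consumer]
    return first

def activation_summary(ttfs, threshold=120):
    """Median time-to-first-successful-job, and share under the threshold."""
    vals = list(ttfs.values())
    under = sum(v <= threshold for v in vals) / len(vals)
    return statistics.median(vals), under
```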
What’s next
We’re focused on three things for the next 6 months:
Webhooks on job completion. Polling is fine but webhooks are better for long-running jobs. We’ll add `callback_url` support to the jobs API.
Worker versioning and rollbacks. Currently, deploying a new version is destructive. We’re building a versioned deployment system with instant rollback.
Batch job submissions. A single API call that submits N jobs and returns N `job_uuid`s. Critical for data enrichment pipelines processing thousands of records.
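Until a native batch endpoint ships, the same effect can be approximated client-side by fanning out individual submissions. A sketch, where `submit` is any wrapper around `POST /jobs`:

```python
from concurrent.futures import ThreadPoolExecutor

def submit_batch(submit, payloads, max_workers=8):
    """Fan out N job submissions concurrently. `submit` wraps POST /jobs
    and returns a job_uuid; results preserve the input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        return list(ex.map(submit, payloads))
```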
The goal is consistent: make it easier to publish workers that work, and easier for consumers to find and trust them. The marketplace takes care of the rest.
Building a marketplace is slow, iterative work. You rarely get it right the first time. But if you get the incentive design right — creators earn enough to keep building, consumers find enough value to keep paying — the compounding effects are remarkable. Every new worker makes the marketplace more useful for consumers. Every consumer adds revenue that attracts more creators.
That flywheel is the whole game.