AI Automation & Agent Engineer (Early Career)
zipBoard
Remote | Early-stage SaaS | Long-term role
Location: Remote (India preferred; some overlap with North America)
Type: Full-time, long-term
Experience: 0–2 years, or a strong recent graduate
Compensation: $8,000 – $15,000 USD + equity
About the Company
We’re an early-stage SaaS startup building tools that help teams review, collaborate on, and manage complex documents and digital assets. Our customers span engineering, design, construction, and content-heavy teams.
We’re now exploring AI-powered workflows and standalone agents to improve how reviews, documentation, and approvals happen, with the intent to integrate the best ideas into our core product over time.
This role works closely with the founder and plays a direct part in experimentation, evaluation, and early product validation.
About the Role
This is a builder + experimenter role for someone early in their career who wants hands-on exposure to AI, automation, and real product problems.
You’ll help:
- Build automation workflows
- Prototype standalone AI agents
- Evaluate how well AI performs in real review scenarios
- Turn experiments into clear insights for product decisions
Your focus is on systems, agents, and evaluation.
What You’ll Work On
AI Automation (Documentation, Marketing, Ops)
- Build automation workflows using n8n / APIs / webhooks
- Connect systems like GitHub, Confluence, Figma, support tools, CRMs, and internal trackers
- Use AI (LLMs) to generate structured outputs (JSON, summaries, analyses)
- Automate internal processes across documentation, marketing ops, and reporting
Standalone AI Agent Prototyping
You’ll help prototype small, focused AI agents such as:
- Documentation gap detection agents
- Review summarization agents
- QA or review-readiness agents
- Marketing asset review agents
These agents will initially live outside the core product as experiments.
AI Review Agent Evaluation (Key Responsibility)
You’ll play a critical role in evaluating AI review agents, including:
- Designing test cases using real-world review data
- Running agents against sample inputs (documents, comments, feedback)
- Comparing AI outputs to human reviews
- Identifying:
  - Where AI performs well
  - Where it fails or hallucinates
  - What guardrails or workflows are needed
- Documenting findings clearly so product decisions can be made
This evaluation work directly informs what gets built into the product later.
Systems & Experimentation
- Write lightweight Python scripts or services when needed
- Iterate quickly: test → learn → refine
- Document experiments and findings for future reuse
- Work closely with the founder to decide what to pursue vs. drop
What We’re Looking For
We care more about curiosity, learning ability, and systems thinking than formal experience.
You’re a good fit if you:
- Are comfortable with basic programming concepts (Python, JS, APIs, JSON)
- Enjoy experimenting and testing ideas
- Are curious about how AI behaves in real-world scenarios
- Like solving messy problems with structured thinking
- Want long-term growth in a startup environment
Nice to have (not required):
- Experience with automation tools (n8n, Zapier, Make)
- Interest in AI evaluation, QA, or tooling
- Familiarity with SaaS products or workflows
Why This Role
- You’ll work directly with the founder
- You’ll help shape how AI is used in a real product
- You’ll gain rare experience in AI evaluation and product discovery
- You’ll build systems that actually get used
- You’ll have room to grow into a senior or specialist role over time
- Flexible working hours
- Strong mentorship and learning opportunities
How to Apply
Please include:
- A short note on why this role excites you
- Any projects, experiments, or systems you’ve worked on (academic or personal is fine)
- One example of something you tried to build or test, even if it didn’t work
