
2 posts tagged with "automation"


Your Backlog Can’t Keep Up With Your Agents

· 8 min read

We're seeing an explosion in AI coding agents that enable people to write software faster than ever. That shifts the bottleneck from writing code to choosing and validating what to build next.

Here is the issue: When code is cheap, you’re forced to make more product bets, more often. The limiting factor becomes selecting the right thing to build and validating it quickly, not implementing it.

At the same time, the mechanical artefacts of product management are getting cheap too. An agent can increasingly draft the roadmaps, feedback syntheses, and usage reports that used to take real time and context switching. That helps, but it doesn’t make the call.

In this essay, I argue that as execution gets cheaper, a product engineer role becomes more common. This is an engineer who owns a product outcome, not just its delivery. They don't just ship features, they write explicit product bets (if we build X, we expect Y to move, because Z is true). They instrument the product before they ship, so learning isn't optional, it's baked into the workflow.


Take a look at your backlog. If you're anything like me, you'll see that what once took you a week to finish might now only take a couple of days with an agent's help. Features, bugs and refactors are being pushed and resolved far faster. As a result, you pick up new work more often than ever.

If this doesn't make you nervous, it should. Your feedback cycles likely haven't scaled with the speed of your execution. Solo engineers used to code at a pace that allowed feedback to surface naturally, mature, and be refined long before they ever acted on it. Now, though, you may be looking for new things to build long before that feedback has a chance to surface.

And it's not because you're careless. Feedback is structurally slower than execution. Users need time to discover a change, try it in their own context, hit edge cases, and decide it’s worth talking about. Analytics lags too: you need enough volume for signal, cohorts to mature, and time for effects (retention, churn, referrals) to show up.

Because we are engineers, we find security in productivity, and coding feels productive. But that productivity becomes a trap if you end up building more things that people just don't need.

Product decisions are happening more frequently and with less feedback, and poor feature judgements compound. Feature A leads to B and then C, long before A has been validated. Here's a lived example: I've seen an education startup add search because "people can't find things". Then they built indexing, synonyms, ranking, and filters. Soon they were deep into maintaining an ingestion pipeline, reindex jobs, and relevance tuning dashboards, all before confirming users even want to browse that content in the first place. That's the trap: when shipping gets cheap, it becomes easier to build B and C than to validate A, so AI coding agents don't just speed up output, they amplify unvalidated decisions.

Product decisions will therefore have to be made with more precision, and with better systems in place for validation, testing and exploration. If you're a solo engineer, that change is on you.


Solo engineers often treat product management processes as optional. You know what you've been working on recently. You know what you should reasonably do next. And there's nobody you need to sync with before you do it. The traditional artefacts are time-consuming to produce, and they don't contribute much if they only reaffirm what you were thinking in the first place. At solo scale, coordination costs disappear, so PM artefacts feel redundant. Enterprise processes like stakeholder engagement and quarterly planning cycles simply don't map to a solo engineer shipping from their laptop.

When execution was slower, skipping these artefacts and relying on instinct was sufficient. Now we're entering a time where it's not. The volume of decisions increases and the surface area of impact grows in tandem. For the engineer's processes to keep pace with the increased execution, the cost of product thinking must fall.

Happily, the increasing effectiveness of LLMs grants an opportunity to improve our decision processes. Given the correct context, agents like Claude Code can reliably produce the artefacts that are the current remit of PMs. Artefacts like roadmaps, prioritisation reports, and analytics or user feedback summaries are the mechanical side of product management: they consume data from multiple sources, perhaps do some clustering and simple synthesis, and produce an end report. That sounds like ideal work for an AI agent, and these automated mechanics can be increasingly embedded in the engineer's workflow.

But it's important to draw a hard line: these outputs don't make product judgements on behalf of the product engineer. They change the context decisions are made in by making tradeoffs, constraints, and signals more legible, but they don't decide what "good" means, what to ignore, or which bet to place next. That decision is where the value lies, and it's also where accountability sits: you still own the call, the sequencing, and the outcome. In larger teams this doesn't remove the product manager; it shifts time away from manual synthesis and toward judgement and alignment. For a solo engineer or small dev team, it backfills context you previously didn't have at hand.

This is where the shift becomes structural. AI does not eliminate product management; it redistributes it. It is easier for an engineer to absorb product decision-making than it is for a product manager to absorb engineering. The engineer is already embedded in the reality of the code, the constraints and the distribution mechanisms. Engineers will have access to tools which provide insights, reduce ambiguity and maintain long-term alignment, all while tightening feedback loops and without leaving their development environment. Such a future leads to a compression of the two roles.

What emerges is not more product managers and fewer engineers, but the product engineer. A builder who writes code and designs the decision systems that guide what code gets written. In an era where judgement compounds faster than implementation, this hybrid role is no longer optional. It is the natural evolution of the solo builder.


If you've heard of platforms like Productboard, Cascade & DoubleLoop, I'd guess you've spent time in an enterprise product team. Modern product management software is, quite understandably, created for enterprise customers. Browse the landing page of Productboard and you'll notice the assumptions: communication overhead, reports for senior stakeholders, alignment of multiple teams, docs & dashboards for non-technical team members. In short, it's not built for the solo engineer or small dev team.

Solo engineers live in a different world. Work can happen without moving cards from one board to another. They don't need sprint rituals to synchronise with their team and the rest of the org. What they want is not communication software but decision software: systems that align product vision and execution. The bottleneck isn't stakeholder alignment, it's clarity. Yet the landscape of product thinking software for small dev teams and solo engineers is incredibly thin; the cognitive load and bloat of unnecessary enterprise features kill adoption.

As execution accelerates through AI coding agents, the demand for higher-quality product judgement increases along with it. At the same time, LLMs are dramatically reducing the cost of compiling the operational outputs of product management: collating feedback, clustering themes, summarising analytics and maintaining up-to-date structured artefacts. These are the types of structured synthesis that language models excel at. And as the cost of compiling the inputs to structured decision-making falls, it becomes feasible to embed product thinking into the engineer's workflow rather than layering it on top as a manual ritual.

What will emerge is a new class of tools, tailored to and optimised for the individual. Systems which can actively maintain long-term clarity, close and shorten feedback loops, and automate the mechanics of product judgement. All contributing to an engineer making faster, more accurate product decisions, more regularly.

Solo engineers don't need more communication software. They need cognition software.


With AI collapsing the cost of execution, long the main bottleneck of the software development lifecycle, it is inevitable that the roles in software engineering will change. What will emerge is not merely an engineer who codes faster, nor a product manager who can write scripts, but a product engineer: a builder who integrates strategy, execution and feedback into faster, tighter loops.

With faster execution through AI coding agents comes an increased volume of product decisions. To prevent the compounding of poor judgement from this increased volume, the engineer will have to absorb and automate the operational mechanics of product management. A new class of AI tooling will embed synthesis, validation and alignment, close to code and within the existing workflow.

For the first time, individuals can operate with the leverage once reserved for teams. The solo builder equipped with disciplined product systems and AI-native tooling will not simply move faster; they will compete at scale.

User feedback automation

· 4 min read

While the world focuses on AI coding agents, the true impact of AI on solo engineers is not only faster code generation but the automation and compression of feedback loops.

We're going to see a new class of tools that raise our ability to explore and validate ideas, then iterate on them, at speeds previously reserved for teams.

These tools will partially automate the operational processes currently undertaken by product managers in the enterprise world. In larger teams, that frees PMs up for the more valuable work of talking to stakeholders and speeding up decisions. But for solo engineers and small teams it can transform thought processes, opening up a world in which they make faster, more informed decisions.

Take, for example, the highly valuable but work-intensive process of collecting and evaluating user feedback into something that can easily be used for roadmap analysis and prioritisation.

Below we'll present a system which you can set up in under an hour to deliver reliable reports for your prioritisation sessions.

Why collect feedback

Collecting feedback is not the hard part. The hard part is keeping it usable over time.

Common failure modes:

  • Feedback is scattered across tools and impossible to review in one place.
  • Important patterns are hidden because every note is free-form.
  • Decisions are made from memory, not from a stable historical record.

What we want instead:

  • One lightweight system for capturing raw input continuously.
  • A repeatable process for converting text into tags and themes.
  • Outputs that both humans and LLMs can consume directly.

Turning feedback into something useful

The pipeline:

  1. Collect free-form feedback in Tally.so.
  2. Pull submissions directly from the Tally API to a CSV in your Git repo.
  3. Produce structured outputs (tagged-feedback.csv + weekly summary markdown).
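A minimal sketch of step 2, assuming a payload shape of `{"submissions": [{"responses": [...]}]}` — the endpoint, response structure and helper names below are assumptions for illustration; check Tally's current API docs for the real schema before relying on them:

```python
import csv
import io


def submissions_to_rows(payload):
    """Flatten a (hypothetical) Tally submissions payload into dict rows.

    The real payload comes from something like
    GET https://api.tally.so/forms/{formId}/submissions
    with an Authorization: Bearer $TALLY_API_KEY header — verify the
    shape against Tally's docs; this mirrors the assumed structure only.
    """
    rows = []
    for sub in payload.get("submissions", []):
        answers = {r["label"]: r["answer"] for r in sub.get("responses", [])}
        rows.append({
            "email": answers.get("email", ""),
            "feedback": answers.get("feedback", ""),
        })
    return rows


def rows_to_csv(rows):
    """Serialise rows to CSV text, ready to commit to the repo."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["email", "feedback"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Committing the raw CSV snapshot (step 2) before tagging keeps an untouched record you can re-tag later if your classification scheme changes.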

Core fields to generate per entry:

  • feedback (free text field)
  • email (some field to tie the feedback to a user)
  • theme (UX, bug, feature request, confusion)
  • tags (LLM generated tags)
  • sentiment (positive, neutral, negative)
  • urgency (low, medium, high)
  • repeat_signal (single mention vs repeated pattern)
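Because several of these fields are enumerations, it's worth rejecting any LLM output that falls outside them before it reaches the CSV. A small validator sketch — the allowed values mirror the field list above; the function name is illustrative:

```python
# Allowed values per enumerated field, taken from the field list above.
ALLOWED = {
    "theme": {"UX", "bug", "feature request", "confusion"},
    "sentiment": {"positive", "neutral", "negative"},
    "urgency": {"low", "medium", "high"},
    "repeat_signal": {"single mention", "repeated pattern"},
}


def validate_entry(entry):
    """Return the names of fields whose values fall outside the schema."""
    return [
        field for field, allowed in ALLOWED.items()
        if entry.get(field) not in allowed
    ]
```

Entries that fail validation can be retried with the model or flagged for manual review, rather than silently polluting the tagged CSV.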

Output artifacts:

  • reports/weekly-feedback-summary.md for fast human review.
  • reports/tagged-feedback.csv for filtering, trend checks, and LLM context windows.
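The weekly summary can be a simple rollup of the tagged rows. A sketch of what `generate_report.py` might do — the function and keys assume the tagged-feedback columns above:

```python
from collections import Counter


def weekly_summary(entries):
    """Render a minimal markdown summary from tagged feedback entries.

    Each entry is a dict with at least 'theme' and 'sentiment' keys,
    matching the columns of tagged-feedback.csv.
    """
    themes = Counter(e["theme"] for e in entries)
    sentiments = Counter(e["sentiment"] for e in entries)
    lines = ["# Weekly feedback summary", "", "## Themes"]
    for theme, count in themes.most_common():
        lines.append(f"- {theme}: {count}")
    lines += ["", "## Sentiment"]
    for sentiment, count in sentiments.most_common():
        lines.append(f"- {sentiment}: {count}")
    return "\n".join(lines)
```

Writing the result to reports/weekly-feedback-summary.md and committing it gives you a diffable history of how themes shift week over week.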

The free 30-minute setup

Tool stack (all low-cost or free)

  • Tally.so for intake form.
  • GitHub repo for versioned CSV snapshots and reports.
  • Small Python script using an LLM call for tagging.
  • Optional cron job or GitHub Actions for weekly runs.

Setup checklist

  1. Create a short feedback form (what happened, what they expected, impact).
  2. Add hidden fields to your form (email, theme, tags, sentiment, urgency, repeat_signal).
  3. Create a Tally API key and keep it in your environment (for example TALLY_API_KEY).
  4. Capture your formId from Tally.
  5. Run the Python script download_feedback.py to pull submissions into a local CSV snapshot.
  6. Run tag_feedback.py to classify each row by theme/sentiment/urgency using the Claude Code CLI.
  7. Run generate_report.py to produce a weekly markdown summary.
  8. Commit generated outputs to keep an auditable product feedback history.
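Step 6 can lean on the Claude Code CLI's non-interactive print mode (`claude -p`), which prints a single response and exits. A sketch of the tagging call — the prompt wording and the `parse_tags` helper are illustrative assumptions, not part of any official interface:

```python
import json
import subprocess

# Illustrative prompt; tune the wording and add few-shot examples as needed.
PROMPT_TEMPLATE = (
    "Classify this user feedback. Reply with JSON only, using the keys "
    "theme, sentiment, urgency, repeat_signal.\n\nFeedback: {feedback}"
)


def tag_with_claude(feedback):
    """Ask the Claude Code CLI (assumed on PATH) to tag one entry."""
    result = subprocess.run(
        ["claude", "-p", PROMPT_TEMPLATE.format(feedback=feedback)],
        capture_output=True, text=True, check=True,
    )
    return parse_tags(result.stdout)


def parse_tags(raw):
    """Extract the first JSON object from the model's reply.

    Models sometimes wrap JSON in prose, so slice from the first '{'
    to the last '}' before parsing.
    """
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object in model output")
    return json.loads(raw[start:end + 1])
```

Running this per row from tag_feedback.py, then validating the result against your allowed field values, keeps malformed model output out of the committed CSV.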

What this unlocks immediately

  • Faster weekly review loops without manual sorting.
  • Better product conversations grounded in tagged evidence.
  • Reusable context for AI agents working on roadmap or implementation decisions.

Closing

You do not need a PM org or expensive tooling to get real signal from feedback. You need a simple, repeatable pipeline that turns raw text into structured inputs you can act on.