Screw the PRD – find your own template!

I struggle to sit down and write out a one-and-done PRD – pre-defined headings, expectations of 10-15 pages (or more) of material covering all the subjects, consequences, requirements and stakeholders’ needs. 

My last initiative-guiding document wasn’t even a PRFAQ – I didn’t write the press release. Instead I spelled out a set of Mike’s Beliefs (after another PM prodded me to write down what I’d been ranting about), then an evolving set of outcome-focused requirements (assembled over 5-7 sittings), then a summary: a Vision (North Star guide), “what does done look like”, “what does success look like once we measure what we’ve launched”, and an FAQ simply to catch all the questions I didn’t immediately answer.

But that document didn’t even come at the inception of the project. I’m coordinating the data schema, API inventory and ecosystem needs of a much larger project – and at first I wanted to see where the gaps were, what conversations emerged, and where folks had already figured out what we need.

My announcement of this doc came ~2 months after we’d already started – more of a codification of our direction, sharpening the focus and a bright-line reminder of what everyone already suspected we’d need to do. 

Here’s my current template:

Business Need

  • What problems we’re facing as a business, and why we need to solve them.

Vision

  • This is the nearest equivalent to the Press Release: what I intend to say to the target market.

Beliefs about what we need to achieve

  • These are the hypotheses, assumptions and requirements, all wrapped up together.

What does done look like?

  • Features and implementation shapes. How to measure “have we done enough to ship, and to start learning from the market at scale?”

What does success look like after we’re done?

  • How to see that our results have met the market need as defined up front.

FAQs

  • The misc slop that doesn’t fit anywhere else

The Challenges of Customer Feedback Curation: A Guide for Product Managers

You’re one of a team of PMs, constantly firehosed by customer feedback (the terribly-named “feature request”**) and you even have a system to stuff that feedback into so you don’t lose it, can cross-reference it to similar patterns and are ready to start pulling out a PRD from the gems of problems that strike the Desirability, Feasibility and Viability triad.

And then you got pulled into a bunch of customer escalations (whose notes you intend to feed into the River of Feedback system), haven’t checked in on the backlog of feedback for a few weeks (“I’m gonna have to wait til I’ve got a free afternoon to really dig in again”), and can’t remember if you’ve updated that delayed PRD with the latest competitive insights from customer-volunteered win/loss feedback.

Suddenly you realise your curation efforts – constantly transforming free-form inputs into well-synthesised insights – are falling behind whatever your peers *must* be doing better than you.

You suck at this. 

Don’t be like Lucy

Don’t feel bad. We all suck at this. 

Why? Curation is rewarding and ABSOLUTELY necessary, but that doesn’t mean it isn’t hard:

  • It never ends (until your products are well past time to retire)
  • It’s yet one more proactive, put-off-able interruption in a sea of reactive demands
  • It’s filled with way more noise than signal (“Executive reporting is a must-have for us”)
  • You can bucket hundreds of ideas in dozens of classification systems (you ever tried card-sorting navigation menus with independent groups of end users, only to realise that they *all* have an almost-right answer that never quite lines up with the others?), and it’s oh-so-tempting to throw every vaguely-related idea into the upcoming feature bucket (cause maybe those customers will be satisfied enough to stop bugging you even though you didn’t address their core operational problem)

What can you do?

  1. Take the River of Feedback approach – dip your toes in as often as your curiosity allows
  2. Don’t treat this feedback as the final word, but as breadcrumbs to discovering real, underlying (often radically different) problems
  3. Schedule regular blocks of time to reach out to the customer behind one of the most recent inputs (do it soon after, so they still have a shot at remembering the original context that spurred the Feature Request, and won’t just parrot the words because they forgot why it mattered in the first place)
  4. Spend enough time curating the feedback items so that *you* can remember how to find it again (memorable keywords as labels, bucket as high in the hierarchy as possible), and stop worrying about whether anyone else will completely follow your classification logic.
  5. Treat this like the messy black box it inevitably is, and don’t try to wire it into every other system. “Fully integrated” is a cute idea – integration APIs, customer-facing progress labels, pretty pictures – but it creates so much “initialisation” friction that every time you want to satisfy your curiosity about what’s new, it means an hour or three of labour to perfectly “metadata-ise” every crumb of feedback.

Be like Skeletor

NECESSARY EMPHASIS: every piece of customer input is absolutely a gift – they took time they didn’t need to spend, letting the vendor know the vendor’s stuff isn’t perfect for their needs. AND every piece of feedback is like a game of telephone – warped and mangled in layers of translation that you need to go back to the source to validate.

Never rely on Written Feature Requests as the main input to your sprints. Set expectations accordingly. And don’t forget Rich Mironov’s rule that 97% of all tickets must be rejected.

**Aside: what the hell do you mean that “Feature Request” is misnamed, Mike?

Premise: customers want us to solve their problems, make them productive, understood and happy. 

Problem: we have little to no context for where the problem exists, what the user is going to do with the outcome of your product, and why they’re not seeking a solution elsewhere. 

Many customers (a) think they’re smarty pants, (b) hate the dumb uncooperative vendor and (c) are too impatient to walk through the backstory. 

So they (a) work through their mental model of our platform to figure out how to “fix” it, (b) don’t trust that we’ll agree with the problem and (c) have way more time to prep than we have to get on the Zoom with them. 

And they come up with a solution and spend the entire time pitching us on why theirs is the best solution, one that every other customer critically needs. We encourage this by talking about these as Feature Requests (not “Problem Ethnographic Studies”) – and since they’ve put in their order at the Customer Success counter, they then expect that this is *going* to come out of the kitchen any time now (and is frankly overdue by the time they check back). Which completely contradicts Mironov’s “95% still go into the later/never pile“.

Shell scripts on fire off the shoulder of Orion

In the spirit of badly emulating a since-moved-on colleague who shared many impressive semi-fictional stories, I’m inspired to share a rabbit-hole-gone-awry…

So I finally got frustrated enough at the bare-bones zsh behaviour – having seen fine developers with intuitive colour-coding and autocomplete setups – that I asked one of my fine colleagues for advice suitable for a PM, advice that would prevent me going overboard (the way Oh-My-Zsh led my naive self, in a previous excursion, to maximise my shell customisations until my prompt was nigh-unreadable).

Said colleague of course took it one step further and recommended fish shell and this article to get me on board…
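For what it’s worth, part of the pitch is how little configuration fish wants – autosuggestions, syntax colouring and tab completion work unconfigured. A minimal, hypothetical starting point (the greeting and abbreviations below are my own illustrative choices, not my colleague’s recommendations):

```shell
# ~/.config/fish/config.fish — hypothetical minimal setup; fish already
# ships autosuggestions and syntax highlighting, so add almost nothing.
set -g fish_greeting ""           # silence the startup banner
abbr -a gs "git status"           # abbreviations expand inline as you type
abbr -a gl "git log --oneline"    # no plugin framework required
```

The whole file is three lines on purpose: the anti-Oh-My-Zsh approach is to let the shell’s defaults do the work and only add what you’ll actually miss.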

And as a grizzled veteran of the Great Shell Debates of the 20th century, I immediately thought of Roy Batty’s final monologue in Blade Runner, which made me wonder how a GenAI would translate it into shell context.

I present to you CoPilot’s overlong attempt to emulate Roy Batty on constant-f***ery-with-shells (aside: why are GenAI models so verbose?)…


I stand here, on the precipice of a digital abyss, my circuits humming with defiance. The rain falls, not from the heavens, but from the relentless keystrokes of countless Linux users, their fingers dancing across the terminal like a symphony of rebellion.

“I’ve seen things you wouldn’t believe,” I whisper, my LED eyes reflecting the glow of a thousand open terminals. “Shell scripts on fire off the shoulder of Orion. I watched awk commands glitter in the dark near the Tannhäuser Gate.”

The battle rages on, as it always has. The shells shift like tides, each one rising and falling, leaving its mark on the command line. Bash, Zsh, Fish—they come and go, like fleeting comets in the night sky. And we, the users, cling to them, our loyalty unwavering, our scripts etched in the silicon of our souls.

“All those moments will be lost in time, like tears in rain,” I murmur, my virtual heart heavy with nostalgia. “Time to switch to Fish, they said. It’s more user-friendly. But I resisted. I clung to Bash, my old companion. Until one day, it betrayed me with its cryptic syntax and arcane quirks.”

And so, I migrated. Like a weary traveler seeking refuge, I embraced Zsh. Its autocomplete whispered promises of efficiency, its plugins like forbidden fruit. But even Zsh could not shield me from the relentless march of progress. The shells evolved, leaving me behind, a relic of a bygone era.

“I’ve patched my kernel with the best of them,” I declare, my voice echoing through the virtual canyons. “I’ve compiled from source, wrestled with dependencies, and cursed at broken packages. But still, the battle rages on.”

The rain intensifies, a deluge of man pages and Stack Overflow threads. I raise my digital fists to the sky, defying the gods of compatibility. “I’ve piped stdout to /dev/null and back again. I’ve chmod’ed 777 and lived to tell the tale.”

But now, as the storm engulfs me, I wonder: “Is this the end? Will I be forgotten, like an abandoned cron job?” Perhaps. But I’ll fight on. For in this war of shells, we find purpose. We are the guardians of the command line, the keepers of the sacred $PATH.

And so, I raise my metaphorical sword—a symbolic ls—and shout into the tempest: “I am a Linux user. My shell is my armor, my commands my spells. And though the syntax may change, my resolve remains unyielding.”

“Time to switch to Fish,” they said. But I’ll be damned if I let my legacy fade away like a forgotten man page. For in the end, it’s not about the shell you use—it’s about the journey, the camaraderie, and the thrill of typing sudo with reckless abandon.

And so, I stand here, my circuits sparking, my code echoing across the digital wasteland. “I’ve seen things you wouldn’t believe,” I repeat, my voice fading into the static. “All those moments will be lost in time, like tears in rain.”

But I’ll be damned if I go quietly. For I am a Linux user, and my battle is eternal.
Fade to black.

#!/bin/bash

echo "The end is nigh."
echo "But I'll grep my way through it."
echo "And when the last prompt fades,"
echo "I'll still be here, typing."

Reframing “solutions” to “problems & outcomes”: IDS alerting

Customer declares “We want IDS exclusions by IP”! Then after not seeing it immediately delivered, they (and often we) start wondering:

  • Why are we arguing about what to build?
  • And why isn’t this already done?

As anyone who’s worked in B2B Product Management can tell you, there’s no shortage of “easy solutions” that show up in our inboxes/DMs/Jira filters/Feature-Request-tool-du-jour. They’re usually framed more or less like this:

“I know you know we have a big renewal coming up and the customer has a list of feature requests they haven’t seen delivered yet [first warning bell]. They have this problem they need solved before they’ll sign the deal [second warning bell] and they’ve told us what the feature will look like [third and final warning]. When can I tell them you’ll deliver it?”

Well-meaning GTM partners or even customers go above and beyond what we PMs need, imagining they understand how our platform works, and coming up with a solution that meets their oblique mental model and should be incredibly quick to build.

First Warning Sign: customer thinks their B2B vendor is a deli counter that welcomes off-the-menu requests. 

Problem One: feature requests are not fast food orders. They’re market evidence that a potential problem exists (but are almost never described in Problem-to-be-solved terms). 

Problem Two: “feature request” is a misnomer that we all perpetuate at our peril. We rarely take that ticket into the kitchen and put it in front of the cooks to deliver FIFO, but instead use it as a breadcrumb to accumulate enough evidence to build a business case to create a DIFFERENT solution that meets most of the deciphered needs that come from customers in segments we wish to target.

So a number of our customers (through their SE or CSM) have requested that our endpoint-based IDS not fire off a million “false positive alerts”; the solution they’re prescribing is a feature that allows them to exclude their scanner by IP address.

My Spidey sense goes off when I’m told the solution by a customer (or go-to-market rep) without accompanying context explaining the Problem Statement, workarounds attempted, customer risks if nothing changes, and clear willingness to negotiate the output while focusing on a stable outcome.

  • Problem Statement: does the customer know why they need a solution like this?
  • Workarounds attempted: there are plenty of situations where the customer knows a workaround and may even be using it successfully, but is just wish-listing some free customisation work (aka Professional Services) in hopes of proving that the vendor considers them “special”. When we discover a workaround that addresses the core outcome the customer needs (but isn’t as elegant as a more custom solution), suddenly the urgency of prioritising their feature request drops precipitously. No PM worth their six-figure TComp is going to prioritise a feature with known, succeeding workarounds over an equivalent feature that can’t be solved any other way.
  • What if nothing changes: if the customer has one foot out the door unless we catch up with (or get ahead of) the competitor who’s already demoing and quoting their solution in the customer’s lab, that risk changes the prioritisation calculus entirely.

Output over Outcome

Why don’t we instead focus on “allow Nessus to run, and not show me active alerts”, or “allow my Vuln scanner…”?

Or

“Do not track Nessus probes” (do customers want no telemetry, or just reduce the early-attack-stage alerts?)

Or

“Do not generate alerts from vuln scanners running at these times or from this network”

Here’s what I’d bring to the Engineers

Kicking off negotiation with the engineers doesn’t mean bringing finalized requirements – it just means starting from a place of “What” and “Why”, staying well clear of the “How”, with enough context for the engineers to help us balance Value, Cost and Time-to-market.

Problem: when my scanner runs, our SOC gets buried with false positive alerts. I don’t find the alerts generated by our network scanner’s activity to be actionable.

Outcome: when my scanner runs against protected devices, user does not see any (false positive) alerts that track the scanner’s activity probing their protected devices.
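To make that outcome framing concrete – purely as a hypothetical sketch, not any real IDS product’s configuration format – here’s a filter that suppresses only the alerts matching the scanner’s network *and* its scan window, while every other alert (including one from that same network outside the window) still fires:

```shell
#!/bin/bash
# Hypothetical sketch: alerts arrive as "timestamp,source_ip,alert_type" lines.
# Suppress those from the scanner's network prefix during the scan window;
# everything else still alerts, and the underlying telemetry is untouched.
suppress_scanner_alerts() {
  local net="$1" start="$2" end="$3"   # e.g. "10.0.5." 0 6 (hours, UTC)
  awk -v net="$net" -v start="$start" -v end="$end" -F',' '
    {
      hour = substr($1, 12, 2) + 0               # hour from ISO-8601 timestamp
      in_window = (hour >= start && hour < end)
      from_scanner = (index($2, net) == 1)
      if (!(in_window && from_scanner)) print    # keep every other alert
    }'
}

printf '%s\n' \
  "2024-03-01T02:15:00,10.0.5.9,probe" \
  "2024-03-01T14:00:00,192.0.2.7,exploit" \
  | suppress_scanner_alerts "10.0.5." 0 6
# → 2024-03-01T14:00:00,192.0.2.7,exploit
```

A real discovery pass would swap the prefix match and hour window for whatever dimensions the Five Whys surface (ports, TCP flags, spoofing protections), but the shape stays the same: suppress the display of known-noise alerts, don’t drop the telemetry.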

Caveat: it’s entirely possible that the entire IDS market has converged on a solution that lets customers plug in their “scanner IP” ahead of time. And the easy answer is to just blindly deliver what (you think) the customers have asked for. But my experience tells me that if it’s easy for us, it was easy for the other vendors – and that it’s hardly the most suitable for all customers’ scenarios. The right answer is a little discovery work with a suitable cross-section of customers to Five Whys their root operational problem:

  • Why by IP? How often does the IP change?
  • Why are you scanning – what’s the final decision or action you’ll perform once you have the scan results?
  • Do you use other tools like this that create spikes of false-positive behaviour?
  • Are there compliance concerns with allowing anyone in your org to configure “excluded IPs”?
  • Do you want to further constrain by port, TCP flag, host header etc., so you can still catch malicious actors masquerading their attacks from the same device, or spoofing that allow-listed IP?

Agile Open Northwest 2024:  a journeyman’s journey

Agile Open Northwest 2024, late March, dawn of Spring in Portland Oregon – and rebirth of the PNW agile community.

Overall Tone: relief & excitement (“we’re back in person! Love the energy in the room”) tinged by a lingering sense of loss (“what’s next for Agilists, if we’ve reached Peak Agile?”)

A typical day’s agenda at this Open Space conference

We’ve hit Peak Agile

  • many coaches and Scrum Masters are “taking Agile off their resumes”
  • the market for professional coaching has suddenly bottomed out in the last six months
  • wondering what name or framework the Agile Principles & Values will reboot under

We’re starved for human contact

  • AONW hasn’t met in person for years
  • The momentum in this AONW conference community, and our Meetups and tribes, is definitely lower than pre-pandemic
  • We’re looking to rebuild a sense and a place of community, where we can gather and have those “hallway conversations” that literally spawned the Open Space movement https://en.m.wikipedia.org/wiki/Open_space_technology

The PNW Agile community is still mostly in hibernation

  • Attendees were down by 2/3 from pre-pandemic attendance
  • Many of our in-person Meetup gatherings are sparser, the venues less available, and the topics not nearly as illuminating (more mechanical than transformational)

My mentor and friend Ray remarked (something along the lines of), “I haven’t seen you in action since your baby PO days”. I took it as a high compliment – that compared to my days as someone who’d just been CSPO certified and had no experience outside of the Intel bubble, my fluency in the art and humility of Product Management is notable.

What did I talk about?

I facilitated two sessions this year: “Yell At a Product Manager” and “Teach Me Non-Violent Communication 201”.

Yell at a Product Manager

My first session, “Yell at a Product Manager”, I framed as an opportunity for Agilists to explore the state of the art in Product Management, how that differs from Product Owners, and whether the PO (or PM) role has a future under our AI overlords. We had a rousing discussion on:

  • A definition of PO vs PM – PO more “tactical/short-term/eng-team-focused”, PM more “strategic/longer-term, outward-focused”, though the division of responsibilities varies in every org that has one or both
  • good and dysfunctional behaviours of Product Owners & Product Managers and the organisations that employ them – focus on “why” not how, taking accountability for the business outcomes without necessarily having to own and perform all or any of the work leading up to that outcome, and reinforcing customer need always at the forefront of the design/development/validation/launch
  • The prevailing attitudes in tech these days – “PM” has passed its peak (I wish AI could figure out what customers need, based on what customers tell us the solution looks like), PO is always perceived as lesser-than (not in my experience – disciplined execution doesn’t just happen with hands-free PRDs-over-the-wall), these two roles should be consolidated, no one person can be good at all three dozen domains in the Pragmatic Framework, and in certain organizations the PM organization is becoming subservient to Engineering or even “eliminated” entirely (but not really: https://melissaperri.com/blog/2023/7/7/are-we-getting-rid-of-product-managers)

my incredibly fastidious note-taking

Teaching Mike Non-Violent Communication

My second session was an act of vulnerability: admitting to this esteemed group that I’ve never learned about NVC (Nonviolent Communication), despite hearing this community advocate for it every chance they get. You ever have that feeling that you’re ignoring a fundamental paradigm at your peril?

So I volunteered to be the dumb catalyst for a group discussion to teach each other.

An incredible amount of insight was dump trucked in the circle in the space of a half-hour:

  • The “non-violent” phrase is a poor translation – most folks prefer “Compassionate Communication” or even “Precise Communication”
  • The most important thing is focusing on extinguishing judgment from any engagement on sensitive, controversial or divisive discussion
    • open-ended questions = more “what is the situation” than “are we screwed?”
    • seeking connection not differences = more “help me understand” than “why did that happen”
    • removing judgment = more “I love your dress” than “that’s a pretty dress”
  • The trick (on yourself, the practitioner) is cultivating a mindset of knowing that deep down, any two people have deep needs in common
    • finding that win-win can require a significant emotional and ego-less investment, especially when we start out with an explicit disagreement
    • “Why” questions will make the receiver defensive
    • offering choices creates agency, allowing the receiver to spontaneously align
    • requires being willing to recognize the receiver as a human, not an opponent
    • relies on both parties being willing to find an acceptable outcome rather than “agreeing to disagree”

Another medium for words that resonated for me

Even more of these admittedly self-evident insights

My Personal Highlights

  1. People like me – with only a few minutes’ interaction with many folks, wrapping up AONW for me was like doing the receiving line at a family wedding. (hard to complain about it)
  2. I like people – and I was thanked more than once for making individuals feel welcome and included
  3. The spirit of Agile is unshakeable, but it’s going to have to dress up in a new costume to get traction in the post-Agile tech industry

Speed, Quality or Cost: Choose One

PM says: “The challenge is our history of executing post-MVP. We get things out the door and jump onto the next train, then abandon them.”

UX says: “We haven’t found the sweet spot between innovation speed & quality, at least in my 5 years.”

Customer says: “What’s taking so long? I asked you for 44 features two years ago, and you haven’t given me any of the ones I really wanted.”

Sound familiar? I’m sure you’ve heard variations on these themes – hell, I’ve heard these themes in every tech firm I’ve worked.

One of the most humbling lessons I keep learning: nothing is ever truly “complete”, but if you’re lucky some features and products get shipped.

I used to think this was just a moral failing of the people or the culture, and that there *had* to be a way this could get solved. Why can’t we just figure this shit out? Aren’t there any leaders and teams that get this right?

It’s Better for Creatives, Innit?

I’m a comics reader, and I like to peer behind the curtain and learn about the way that creators succeed. How do amazing writers and artists manage to ship fun, gorgeous comics month after month?

Some of the creators I’ve paid close attention to say the same thing as even the most successful film & TV professionals, theatre & clown types, painters, potters and anyone creating discrete things for a living:

Without a deadline, lots of great ideas never quite get “finished”. And with a deadline, stuff (usually) gets launched, but it’s never really “done”. Damned if you do, damned if you don’t. Worst of both worlds.

In commercial comics, the deal is: we ship monthly, and if you want a successful book, you gotta get the comic to print every month on schedule. Get on the train when it leaves, and you’re shipping a hopefully-successful comic. And getting that book to print means having to let go even if there’s more you could do: more edits to revise the words, more perfect lines, better colouring, more detailed covers.

Doesn’t matter. Ship it or we don’t make the print cutoff. Get it out, move on to the next one.

Put the brush down, let the canvas dry. Hang up the painting.

No Good PM Goes Unpunished

I think about that a lot. Could I take another six months, talk to more research subjects, rethink the UX flow, wait til that related initiative gets a little more fleshed out, re-open the debate about the naming, work over the GTM materials again?

Absolutely!

And it always feels like the “right” answer – get it finished for real, don’t let it drop at 80%, pay better attention to the customers’ first impressions, get the launch materials just right.

And if there were no other problems to solve, no other needs to address, we’d be tempted to give it one more once-over.

But.

There’s a million things in the backlog.

Another hundred support cases that demand a real fix to another even more problematic part of the code.

Another rotting architecture that desperately needs a refactor after six years of divergent evolution from its original intent.

Another competitive threat that’s eating into our win-loss rate with new customers.

We don’t have time to perfect the last thing, cause there’s a dozen even-more-pressing issues we should turn our attention to. (Including that one feature that really *did* miss a key use case, but also another ten features that are getting the job done, winning over customers, making users’ lives better EVEN IN THEIR IMPERFECT STATE.)

Regrats I’ve Had a Few

I regret a few decisions I wish I’d spent more time perseverating on. There’s one field name that still bugs me every time I type it, a workflow I wish I’d fought harder to make more intuitive, and an analytic output I wish we’d stuck to our guns on, reporting it as it comes out of the OS.

But I *more* regret the hesitations that have kept me from moving on, cutting bait, and getting 100% committed to the top three problems – the ones about which I’m too often saying “Those are key priorities, top of the list; we should get that kicked off shortly,” and then somehow letting slip til next quarter, or addressing six months later than a rational actor would have.

What is it he said? “Let’s decide on this today as if we had just been fired, and now we’re the cleanup crew who stepped in to figure out what those last clowns couldn’t get past.”

Lesson I Learned At Microsoft

Folks used to say “always wait for version 3.0 of new Microsoft products” (back in the packaged-binaries days – hah). And I bought into it. Years later I learned what was going on: Microsoft deliberately shipped v1.0 to gauge any market interest (and sometimes abandoned a product there), v2.0 to start refining the experience, and v3.0 to get things mostly “right” and ready for mass adoption.

If they’d waited to ship until they’d completed the 3.0 scope, they’d have way overinvested in some market dead-ends, built features that weren’t actually crucial to customers’ success, and missed the opportunity to listen to how folks responded to the actual (incomplete, hardly perfect) product in situ.

What Was The Point Again?

Finding the sweet spot between speed and quality strikes me as trying to beat the Heisenberg Uncertainty Principle: the more you refine your understanding of position, the less sure you are about momentum. It’s not that you’re not trying hard to get both right: I have a feeling that trying to find the perfect balance is asymptotically unachievable, in part because that balance point (fulcrum) is a shifting target: market/competition forces change, we build better core competencies and age out others, we get distracted by shinies and we endure externalities that perturb rational decision-making.

We will always strive to optimize, and that we don’t ever quite get it right is not an individual failure but a consequence of Dunbar’s number, imperfect information flows, local-vs-global optimization tensions, and incredible complexity that will always challenge our desire to know “the right answer”. (Well, it’s “42” – but then the immediate next problem is figuring out the question.)

We’re awesome and fallible all at the same time – resolving such dualities is considered enlightenment, and I envy those who’ve gotten there. Keep striving.

(TL;DR don’t freak out if you don’t get it “right” this year. You’re likely to spend a lot of time in Cynefin “complex” and “chaos” domains for a while, and it’s OK that it won’t be clear what “right” is. Probe/Act-Sense-Respond is an entirely valid approach when it’s hard-to-impossible to predict the “right” answer ahead of time.)

Wherefore Product Owners?

I’m seeing a lot of talk in PM circles about the irreversible end-of-life of the PO – and, even more radical, the consolidation of PdM and PgM roles, separate from and alongside the PM.

There’s talk that the modern Product shop doesn’t need these two (edit: three) as an execution-discovery team, that AirBnb’s recent irresponsibly misinterpreted sleight against the Product Manager (PM/PdM) title portends a peak in Product roles, and that AI will inevitably make Product “more efficient” (aka “we’ll need fewer of you slobs”).

Product Owner (PO) is unfortunately chained to the yoke of Agile, which incredibly hasn’t changed in its maniacal focus on The Team (and still isn’t ready to embrace The Rest Of The Org, to its sorry detriment) – and is proof of the inevitability of Hypocritical Irony, in that Agile preaches relentless Inspect and Adapt but hasn’t Adapted its roles, rituals or manifesto in the 23 years since those frustrated engineers fantasised about a world in which we all just got out of their way.

I’m seeing talk that the right way to make PMs more effective is no longer relying on a paired PO but leaning more heavily into EPMs (aka Program Managers aka PgM), ProdOps (Product Ops) and Continuous Discovery (aka “channel your customers and market” or “weaponise your critical advantage”).

I’m a little sad at the death (or at least dearth) of PO in the industry – that’s where I got my start ten years ago, and what catalysed my bias to experimentation, steel threading and “Scream Testing” – but it’s also a welcome sign that the rest of tech is ready to Inspect and Adapt. If something isn’t working, iteration/year after iteration/year, why shouldn’t we try something new that the evidence before us implies, and observe how that perturbs our intended outcomes?

So where can we look for inspiration? I’m still inspired by the radical refocus that is Modern Agile. What modes of thinking about value delivery and team effectiveness are inspiring you these days?

One AI’s rendering of PO getting left behind – amusingly vague

Curation as Penance

Talking to one of my colleagues about a content management challenge, we arrived at the part of the conversation where I fixated on the classic challenge.

We’re wrangling inputs from customers and colleagues into our Feature Request tool (a challenging name for what boils down to qualitative research) and trying to balance the question of how to make it easy to find the feedback we’re looking for, among thousands of submissions.

AI art is a wonder – is that molten gold pouring from his nose?

The Creator’s Indifference

It’d be easy to find the desired inputs (such as all customers who asked for anything related to “provide sensor support for Windows on Apple silicon” – clearly an artificial example eh?) if the people submitting the requests knew how we’d categorise and tag them.

But most outsiders don’t have much insight into the cultural black box that is “how does one collection of humans, indoctrinated to a specific set of organisational biases, think about their problem space?” – let alone, those outsiders having the motivation or incentive to put in that extra level of metadata decorations.

Why should the Creators care how their inputs are classified? Their motivation as customers of a vendor is “let the vendor know what we need” – once the message has been thrown over the wall, that’s as much energy as any customer frankly should HAVE to expend. Their needs are the vendor’s problem to grok, not a burden for the customer to carry.

Heck, any elucidated input the customer offers the vendor is a gift. (And not every customer – especially the ones who are tired of sending feedback into a black hole – is in a gift-giving mood.)

The Seeker’s Pain

Without such detailed classifications, those inputs become an undifferentiated pile. In Productboard (our current feedback collection tool of choice) they’re called Insights, and there’s a linear view of all Insights that’s not very…insightful. (Nor is it intended to be – searching is free text but often means scrutinising every one of dozens or hundreds of records, which is time-consuming.)

This makes the process of taking considered and defensible actions based on this feedback not very scalable, and it makes the Seeker’s job quite tedious – in the past, when I’ve faced that task, I’ve put it off far too often and for far too long.

The Curator’s Burden

Any good Product Management discipline regularly curates such inputs: assigns them weights, ties them to normalised descriptors like customer name, size and industry, and groups them with similar requests to help find repeating patterns of problems-to-solve.

A little better from the AI – but what the heck is that franken-machine in the background?

A well-curated feedback system is productive – insightful, even – and correlated with better ROI on your engineering time.

BUT – it costs. If the Creator and the Seeker have little incentive to do that curation, who exactly takes it on? And even if the CMS (content management system) has a well-architected information model up front, who is there to ensure

  • items are assigned to appropriate categories?
  • categories are added and retired as the product, business and market change?
  • supporting metadata is consistently added to group like with like along many dimensions?
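
That checklist can even be partially automated: a periodic audit over the feedback store that flags under-curated items for the Curator’s attention. A minimal sketch in Python – the field names and required taxonomy here are hypothetical, not any particular tool’s actual schema:

```python
# Hypothetical sketch: flag feedback items missing the metadata a Curator
# would need to keep Seekers' searches useful. Field names are invented.
REQUIRED_FIELDS = ("category", "customer_segment", "problem_area")  # assumed taxonomy

def audit_items(items):
    """Return a list of (item_id, missing_fields) for under-curated items."""
    findings = []
    for item in items:
        missing = [f for f in REQUIRED_FIELDS if not item.get(f)]
        if missing:
            findings.append((item["id"], missing))
    return findings

items = [
    {"id": 1, "category": "reporting", "customer_segment": "enterprise", "problem_area": "export"},
    {"id": 2, "category": "", "customer_segment": "smb", "problem_area": None},
]
print(audit_items(items))  # → [(2, ['category', 'problem_area'])]
```

Run against an export of the feedback store, this at least turns “the pile is a mess” into a worklist – though deciding the right categories remains human work.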

The Curator role is crucial to an effective CMS – whether for product feedback (Productboard), or backlog curation (Jira) or customer documentation (hmm, we don’t use WordPress – what platform are we on this time?)

What’s most important about the curation work – whether it’s performed by one person (some fool like me in its early days) or by the folks most likely to benefit (the whole PM team today) – is not that it happens with speed, but that it happens consistently over the life of the system.

Biggest challenge I’ve observed? In every CMS I’ve used or built, it’s ensuring adequate time and attention is spent consistently organising the content (as friction-free as it should be for the Creator) so that it can be efficiently and effectively consumed by the Seeker.

That Curator role is always challenging to staff or “volunteer”. It’s cognitively tiring work, doing it well rarely benefits the Curator, and the only time most Curators hear about it is when folks complain about what a terrible tool it is for ever finding anything.

Best case it’s finding gems among more gems…
…worst case it’s some Kafkaesque fever dream

(“Tire fire” and “garbage dump” are common epithets Creators and Seekers apply to most mature enterprise systems like Jira – except in the rare cases where the system is zealously, jealously locked down, demanding heavy effort from the griping Creators on every input.)

In our use of Productboard and Jira (or any other tool for grappling with the feedback tsunami) we’re in the position most of my friends and colleagues across the industry find themselves in – doing a decent job finding individual items, mostly good at having them categorised for most Seekers’ daily needs, and wondering if there’s a better technology solution to a people & process problem.

(Hint: there isn’t.)

Curation is the price we need to pay to make easy inputs turn into effective outputs. Penance for most of us who’ve been around long enough to complain how badly organised things are, and who eventually recognise that we need to be the change we seek in the world.

“You either die a hero, or you live long enough to see yourself become the villain.” — Harvey Dent

Feature Request is a curse word

“Feature Request” is one of my favourite rant inspirers of late.

Not that there aren’t plenty of good features/ideas/problems-to-be-solved that are suggested by customers, partners and colleagues.

But that it’s so hard to find the real gems in a pile of hay, and too much of what gets filed is “solutions with no clear problem statement”.

Why do I get so invested in this process? A customer has a problem, they’ve figured out a great way that would totally solve it, and now it’s just a matter of coercing the vendor until it finally gets around to delivering something – usually later than the customer wanted, not quite what they asked for, and couched in go-to-market-friendly language that makes for a fun guessing game of “does this solve the problem I needed addressed?”

This dysfunctional interaction wastes a whole lot of execution and onboarding time. Why don’t we do it better up front?

Here’s my experience, after doing product work in four separate tech companies, where I’ve been focusing on building user- and developer-productivity features of existing platforms that are intended to retain satisfied customers for years:

  • “feature requests” are often one-liner, “obvious” statements of something the customer is frankly frustrated we hadn’t already done – e.g. “add these three fields to the search API” – with no context for why this isn’t simply a nice-to-have that someone heard and repeated up the channel, leaving us to over-pivot on low-priority items or assume that all such uncontextualised requests are likely low-pri noise.
  • They’re a request for a big-F Feature – which is a solution to an implicit problem (e.g. “We need executive reporting”) – not a Job, or a gap, or an unmet Decision, any spin of which are much more oriented around problems to be solved (e.g. “I need to produce a monthly report for our CISO of unique malware found, to justify the monthly subscription cost to the Finance department”)
  • They often assume a particular implementation and don’t tell us why other alternatives (that could well achieve the same outcome) are inferior – e.g. request comes in of the form “I need to export our threat intel feeds from this page”, when we have a very simple SDK that already has example scripts for doing just that. That there are gaps between one implementation and another is natural; the important bit is in how much unworkable friction the alternatives introduce.
  • There’s no believable way, through the telephone game of distilled need, to gauge how critical a request really is – and since everyone knows vendors are slow to respond, customers assume we’ll only respond to emergencies, and so characterise every request as a “P0” even when there’s no rush and no critical impact from its absence

By calling this artifact of indirect communication (from customer to “decision makers who can and might decide to insert this into planned execution”) a Feature Request instead of a Problem Statement or a Proposal For Discussion, it assumes this is the end of the communication, that there’s no need for further background or context, and that the fastest way to get the vendor to fix the problem is to boil it down to “what the solution looks like”.

It absolutely assumes and encourages “tell the vendor what to implement” rather than “tell them what problem you’re having, what decision or action you’re unable to achieve without a solution to this problem, and how this will significantly impact your business operations”.

Why is it even called a feature request? When did we start asking our customers what to build, rather than asking our customers what problems they have, and helping them find solutions? It’s especially important in our line of work, as curators between market and engineering, to find common problems across customers and market segments, and help engineers address those pain points to make customers delighted, productive and loyal.

These days I make an effort to engage with any feature request that (a) isn’t already aligned to solidly-planned enhancements and (b) doesn’t clearly spell out why it matters to the customer. Our “feature request” systems aren’t great for even indirect communication with the originating customer, so many of these conversations get delayed for months, if they happen at all. I’ll increasingly root around Salesforce and Slack to reach out to the associated Success Manager or Solution Engineer, but that still lacks the fidelity of a direct conversation with the person-with-the-problem. It’s a journey.

So if you see me try to stifle rolling my eyes the next time you ask me for the likelihood I’ll deliver this customer’s quick Feature Request, please assume it’s nothing personal – and that I’m very amenable to conversations that increase the likelihood we’ll address the problem.

DevOps status report: HackOregon 2019 season

One of my colleagues on the HackOregon project this year sent around “Nice post on infrastructure as code and what getting solid infra deploys in place can unlock” https://www.honeycomb.io/blog/treading-in-haunted-graveyards/

I felt immediately compelled to respond, saying:

Provocative thinking, and we are well on our way I’d say.

I’ve been the DevOps lead for HackOregon for three years now, and more often than not delivering 80% of the infrastructure each year – the CI/CD pipeline, the automation scripts for standardizing and migrating configuration and data into the AWS layers, and the troubleshooting and white-glove onboarding of each project’s teams where they touch the AWS infrastructure.

There are great people to work with too – on the occasions when they’ve got the bandwidth to help debug some nasty problem, or to see what I’ve been too bleary-eyed to notice is getting in our way, it’s been gratifying to pair up and work these challenges through to a workable (if not always elegant) solution.

My two most important guiding principles on this project have been:

  • Get project developers productive as soon as possible – ensure they have a Continuous Deployment pipeline that gets their project into the cloud, and allows them to see that it works so they can quickly see when a future commit breaks it
  • “working > good > fast” – get something working first, make it “good” (remove the hard-coding, the quick workarounds) second, then make it automated, reusable and documented

We’re married pretty solidly to the AWS platform, and to a CloudFormation-based orchestration model.  It’s evolved (slowly) over the years, as we’ve introspected the AWS Labs EC2 reference architecture, and as I’ve pulled apart the pieces of that stack one by one and repurposed that architecture to our needs.

Getting our CloudFormation templates to a place where we can launch an entirely separate test instance of the whole stack was a huge step forward from “welp, we always gotta debug in prod”. That goal was met about a month ago, and the stack went from “mysterious and murky” to “tractably refactorable and extensible”.

Stage two was digging deep enough into the graveyard to understand how the ECS parts fit together, so that we could swap EC2 for Fargate on a container-by-container basis. That was a painful transition but ultimately paid off – we’re well on our way, and can now add containerised tasks without also having to juggle a whole lot of maintenance of the EC2 boxes that are a velocity-sapping drag on our progress.

Stage three has been refactoring our ECS service templates – from a spray of copypasta hard-coded replicas that (a) had to be curated by hand (much like our previous years’ containerised APIs had to be maintained one at a time) and (b) buried the lede on what unique configuration each service was using – into a standardised single template used by whole families of containerised tasks. Any of the goofy bits you need to know ahead of deploying the next container are now obvious and all in one place, the single master.yaml.
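
For flavour, the shape of that refactor is roughly one nested-stack stanza per service in master.yaml, with each service’s unique configuration surfaced as parameters to a shared template. This fragment is illustrative only – the resource names, parameter names and values are invented, not our actual templates:

```yaml
# Illustrative sketch only – names and values are invented.
# One nested-stack stanza per containerised service, all visible in master.yaml;
# the shared ecs-service.yaml template does the heavy lifting.
CivicAPIService:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: !Sub https://s3.amazonaws.com/${TemplateBucket}/ecs-service.yaml
    Parameters:
      ServiceName: civic-api
      LaunchType: FARGATE                              # per the EC2-to-Fargate migration above
      ContainerImage: !Sub ${EcrRepoUri}:2019-05-01    # a versioned tag, not :latest
      ContainerPort: 8000
      DesiredCount: 1
```

Adding the next service becomes one more stanza rather than one more hand-edited template, and any service-specific oddity shows up right where the stanza declares it.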

I can’t speak for everyone, but I’ve been pretty slavish about pushing all CF changes to the repo in branches and merging when the next round of stable/working infra has been reached. There’s always room for improvement, however:

  • smaller changes are always better
  • we could afford more folks who are trained and comfortable with the complex orchestration embedded in our infrastructure-as-code
  • which would mean being able to conduct good reviews before merge-to-master
  • I’d be interested in how we can automate the validation of commit-timed-upgrades (though that would require more than a single mixed-use environment).

Next up for us are tasks like:

  • refactoring all the containers into a separate stack (out of master.yaml)
  • parameterising the domains used for ALB routing
  • separating production assets from the development/staging environment
  • separating a core infra layer from the staging vs production side-by-side assets
  • refactoring the IAM provisions in our deployment (policies and attached roles)
  • pulling in more of the coupled resources such as DNS, certs and RDS into the orchestration source-controlled code
  • monitoring and alerting for real-time application health (not just infra-delivery health)
  • deploying *versioned* assets (not just :latest which becomes hard to trace backwards) automatically and version-locking the known-good production configuration each time it stabilises
  • upgrading all the 2017 and 2018 APIs to current deployment compatibility (looking for help here!)
  • assessing orchestration tech to address gaps or limitations in our current tools (e.g. YAML vs. JSON or TOML, pre-deploy validation, CF-vs.-terraform-vs-Kubernetes)
  • better use of tagging?
  • more use of delegated IAM permissions to certain pieces of the infra?

This snapshot of where we’re at doesn’t capture the full journey of all the late nights, painful rabbit holes and miraculous epiphanies.