The Challenges of Customer Feedback Curation: A Guide for Product Managers

You’re one of a team of PMs, constantly firehosed by customer feedback (the terribly-named “feature request”**), and you even have a system to stuff that feedback into so you don’t lose it, can cross-reference it against similar patterns, and are ready to start pulling a PRD out of the gems of problems that hit the Desirability, Feasibility and Viability triad.

And then you get pulled into a bunch of customer escalations (whose notes you intend to transform into the River of Feedback system), haven’t checked in on the backlog of feedback for a few weeks (“I’m gonna have to wait til I’ve got a free afternoon to really dig in again”), and can’t remember whether you’ve updated that delayed PRD with the latest competitive insights from that customer-volunteered win/loss feedback.

Suddenly you realise your curation efforts – constantly transforming free-form inputs into well-synthesised insights – are falling behind whatever your peers *must* be doing better than you.

You suck at this. 

Don’t be like Lucy

Don’t feel bad. We all suck at this. 

Why? Curation is rewarding and ABSOLUTELY necessary, but that doesn’t mean it isn’t hard:

  • It never ends (until your products are well past time to retire)
  • It’s yet one more proactive, put-off-able interruption in a sea of reactive demands
  • It’s filled with way more noise than signal (“Executive reporting is a must-have for us”)
  • You can bucket hundreds of ideas in dozens of classification systems (you ever tried card-sorting navigation menus with independent groups of end users, only to realise that they *all* have an almost-right answer that never quite lines up with the others?), and it’s oh-so-tempting to throw every vaguely-related idea into the upcoming feature bucket (cause maybe those customers will be satisfied enough to stop bugging you even though you didn’t address their core operational problem)

What can you do?

  1. Take the River of Feedback approach – dip your toes in as often as your curiosity allows
  2. Don’t treat this feedback as the final word, but as breadcrumbs to discovering real, underlying (often radically different) problems
  3. Schedule regular blocks of time to reach out to the customer behind one of the most recent inputs (do it soon after submission, so they still have a shot at remembering the original context that spurred the Feature Request, and won’t just parrot the words because they forgot why it mattered in the first place)
  4. Spend enough time curating the feedback items so that *you* can remember how to find them again (memorable keywords as labels, bucket as high in the hierarchy as possible), and stop worrying about whether anyone else will completely follow your classification logic.
  5. Treat this like the messy black box it inevitably is, and don’t try to wire it into every other system. “Fully integrated” is a cute idea – integration APIs, customer-facing progress labels, pretty pictures – but it creates so much “initialisation” friction that every time you want to satisfy your curiosity about what’s new, it means an hour or three of labour to perfectly “metadata-ise” every crumb of feedback.
Be like Skeletor

NECESSARY EMPHASIS: every piece of customer input is absolutely a gift – they took time they didn’t need to spend, letting the vendor know the vendor’s stuff isn’t perfect for their needs. AND every piece of feedback is like a game of telephone – warped and mangled in layers of translation, so you need to go back to the source to validate it.

Never rely on Written Feature Requests as the main input to your sprints. Set expectations accordingly. And don’t forget the “95% of all tickets must be rejected” rule coined by Rich Mironov.

**Aside: what the hell do you mean that “Feature Request” is misnamed, Mike?

Premise: customers want us to solve their problems, make them productive, understood and happy. 

Problem: we have little to no context for where the problem exists, what the user is going to do with the outcome of your product, and why they’re not seeking a solution elsewhere. 

Many customers (a) think they’re smarty pants, (b) hate the dumb uncooperative vendor and (c) are too impatient to walk through the backstory. 

So they (a) work through their mental model of our platform to figure out how to “fix” it, (b) don’t trust that we’ll agree with the problem and (c) have way more time to prep than we have to get on the Zoom with them. 

And they come up with a solution and spend the entire time pitching us on why theirs is the best solution that every other customer critically needs. Which we encourage by talking about these as Feature Requests (not “Problem Ethnographic Studies”) – and since they’ve put in their order at the Customer Success counter, they then expect that this is *going* to come out of the kitchen any time now (and is frankly overdue by the time they check back). Which completely contradicts Mironov’s “95% still go into the later/never pile“.

Reframing “solutions” to “problems & outcomes”: IDS alerting

Customer declares “We want IDS exclusions by IP!” Then, after not seeing it immediately delivered, they (and often we) start wondering:

  • Why are we arguing about what to build?
  • And why isn’t this already done?

As anyone who’s worked in B2B Product Management can tell you, there’s no shortage of “easy solutions” that show up in our inboxes/DMs/Jira filters/Feature-Request-tool-du-jour. They’re usually framed more or less like this:

“I know you know we have a big renewal coming up, and the customer has a list of feature requests they haven’t seen delivered yet [first warning bell]. They have this problem they need solved before they’ll sign the deal [second warning bell] and they’ve told us what the feature will look like [third and final warning]. When can I tell them you’ll deliver it?”

Well-meaning GTM partners or even customers go above and beyond what we PMs need, imagining they understand how our platform works, and coming up with a solution that meets their oblique mental model and should be incredibly quick to build.

First Warning Sign: customer thinks their B2B vendor is a deli counter that welcomes off-the-menu requests. 

Problem One: feature requests are not fast food orders. They’re market evidence that a potential problem exists (but are almost never described in Problem-to-be-solved terms). 

Problem Two: “feature request” is a misnomer that we all perpetuate at our peril. We rarely take that ticket into the kitchen and put it in front of the cooks to deliver FIFO, but instead use it as a breadcrumb to accumulate enough evidence to build a business case to create a DIFFERENT solution that meets most of the deciphered needs that come from customers in segments we wish to target.

So a number of our customers (through their SE or CSM) have requested that our endpoint-based IDS not fire off a million “false positive alerts”, and the solution they’re prescribing is a feature that allows them to exclude their scanner by IP address.

My Spidey sense goes off when I’m told the solution by a customer (or go-to-market rep) without accompanying context explaining the Problem Statement, workarounds attempted, customer risks if nothing changes, and clear willingness to negotiate the output while focusing on a stable outcome.

  • Problem Statement: does the customer know why they need a solution like this?
  • Workarounds attempted: there’s plenty of situations where the customer knows a workaround and may even be using it successfully, but is just wish-listing some free customisation work (aka Professional Services) in hopes of proving that the vendor considers them “special”. When we discover a workaround that addresses the core outcome the customer needs (but isn’t as elegant as a more custom solution), suddenly the urgency of prioritising their feature request drops precipitously. No PM worth their six-figure TComp is going to prioritise a feature with known, succeeding workarounds over an equivalent feature that can’t be solved any other way.
  • What if nothing changes: if the customer has one foot out the door, we need to know whether we can catch up to (or get ahead of) the competitor who’s already demoing and quoting their solution in the customer’s lab.

Outcome over Output

Why don’t we instead focus on “allow Nessus to run, and not show me active alerts” or “allow my Vuln scanner…”

Or

“Do not track Nessus probes” (do customers want no telemetry, or just reduce the early-attack-stage alerts?)

Or

“Do not generate alerts from vuln scanners running at these times or from this network”

Here’s what I’d bring to the Engineers

Kicking off negotiation with the engineers doesn’t mean bringing finalized requirements – it just means starting from a place of “What” and “Why”, staying well clear of the “How”, with enough context for the engineers to help us balance Value, Cost and Time-to-market.

Problem: when my scanner runs, our SOC gets buried with false positive alerts. I don’t find the alerts generated by our network scanner’s activity to be actionable.

Outcome: when my scanner runs against protected devices, the user does not see any (false positive) alerts that track the scanner’s activity probing their protected devices.

Caveat: it’s entirely possible that the entire IDS market has converged on a solution that lets customers plug in their “scanner IP” ahead of time. And the easy answer is to just blindly deliver what (you think) the customers have asked for. But my experience tells me that if it’s easy for us, it was easy for the other vendors, and that it’s hardly the most suitable for all customers’ scenarios. The right answer is a little discovery work with a suitable cross-section of customers to Five Whys their root operational problem – why by IP? Why are you scanning – what’s the final decision or action you’ll perform once you have the scan results? How often does the IP change? Do you use other tools like this that create spikes of FP behaviour? Are there compliance concerns with allowing anyone in your org to configure “excluded IPs”? Do you want to further constrain by port, TCP flag, host header etc. so that you can still catch malicious actors masquerading their attacks from the same device or spoofing that allow-listed IP?
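
Just to make those dimensions concrete, here’s a hypothetical sketch of what a richer suppression rule might capture – every field name and value below is invented for illustration, not a spec:

// Hypothetical shape for a scanner-suppression rule – fields invented to
// illustrate the dimensions those Five Whys surface.
const scannerSuppressionRule = {
  label: 'Quarterly Nessus sweep',
  sourceCidr: '10.20.30.0/24',                             // why by IP? how often does it change?
  ports: [443, 8443],                                      // constrain by port to shrink the blind spot
  schedule: { days: ['Sat'], from: '01:00', to: '05:00' }, // only suppress during the scan window
  action: 'suppress-alerts',                               // keep the telemetry, drop the alert noise
  reviewBy: '2026-01-01'                                   // expiry forces a periodic compliance re-review
};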

Speed, Quality or Cost: Choose One

PM says: “The challenge is our history of executing post-MVP. We get things out the door and jump onto the next train, then abandon them.”

UX says: “We haven’t found the sweet spot between innovation speed & quality, at least in my 5 years.”

Customer says: “What’s taking so long? I asked you for 44 features two years ago, and you haven’t given me any of the ones I really wanted.”

Sound familiar? I’m sure you’ve heard variations on these themes – hell, I’ve heard these themes at every tech firm I’ve worked for.

One of the most humbling lessons I keep learning: nothing is ever truly “complete”, but if you’re lucky some features and products get shipped.

I used to think this was just a moral failing of the people or the culture, and that there *had* to be a way this could get solved. Why can’t we just figure this shit out? Aren’t there any leaders and teams that get this right?

It’s Better for Creatives, Innit?

I’m a comics reader, and I like to peer behind the curtain and learn about the way that creators succeed. How do amazing writers and artists manage to ship fun, gorgeous comics month after month?

Some of the creators I’ve paid close attention to say the same thing as even the most successful film & TV professionals, theatre & clown types, painters, potters and anyone creating discrete things for a living:

Without a deadline, lots of great ideas never quite get “finished”. And with a deadline, stuff (usually) gets launched, but it’s never really “done”. Damned if you do, damned if you don’t. Worst of both worlds.

In commercial comics, the deal is: we ship monthly, and if you want a successful book, you gotta get the comic to print every month on schedule. Get on the train when it leaves, and you’re shipping a hopefully-successful comic. And getting that book to print means having to let go even if there’s more you could do: more edits to revise the words, more perfect lines, better colouring, more detailed covers.

Doesn’t matter. Ship it or we don’t make the print cutoff. Get it out, move on to the next one.

Put the brush down, let the canvas dry. Hang up the painting.

No Good PM Goes Unpunished

I think about that a lot. Could I take another six months, talk to more research subjects, rethink the UX flow, wait til that related initiative gets a little more fleshed out, re-open the debate about the naming, work over the GTM materials again?

Absolutely!

And it always feels like the “right” answer – get it finished for real, don’t let it drop at 80%, pay better attention to the customers’ first impressions, get the launch materials just right.

And if there were no other problems to solve, no other needs to address, we’d be tempted to give it one more once-over.

But.

There’s a million things in the backlog.

Another hundred support cases that demand a real fix to another even more problematic part of the code.

Another rotting architecture that desperately needs a refactor after six years of divergent evolution from its original intent.

Another competitive threat that’s eating into our win-loss rate with new customers.

We don’t have time to perfect the last thing, cause there’s a dozen even-more-pressing issues we should turn our attention to. (Including that one feature that really *did* miss a key use case, but also another ten features that are getting the job done, winning over customers, making users’ lives better EVEN IN THEIR IMPERFECT STATE.)

Regrats I’ve Had a Few

I regret a few decisions I wish I’d spent more time perseverating on. There’s one field name that still bugs me every time I type it in, a workflow I wish I’d fought harder to make more intuitive, and an analytic output where I wish we’d stuck to our guns and reported it as it comes out of the OS.

But I *more* regret the hesitations that have kept me from moving on, cutting bait, and getting 100% committed to the top three problems – the ones about which I too often say “Those are key priorities, top of the list, we should get that kicked off shortly”, and then somehow let slip til next quarter, or end up addressing six months later than a rational actor would have.

What was it he said? “Let’s decide on this today as if we had just been fired, and now we’re the cleanup crew who stepped in to figure out what those last clowns couldn’t get past.”

Lesson I Learned At Microsoft

Folks used to say “always wait for version 3.0 of new Microsoft products” (back in the packaged-binaries days – hah). And I bought into it. Years later I learned what was going on: Microsoft deliberately shipped v1.0 to gauge market interest (and sometimes abandoned the product right there), shipped 2.0 to start refining the experience, and got things mostly “right” and ready for mass adoption by 3.0.

If they’d waited to ship until they’d completed the 3.0 scope, they’d have way overinvested in some market dead-ends, built features that weren’t actually crucial to customers’ success, and had no opportunity to listen to how folks responded to the actual (incomplete, hardly perfect) product in situ.

What Was The Point Again?

Finding the sweet spot between speed and quality strikes me as trying to beat the Heisenberg Uncertainty Principle: the more you refine your understanding of position, the less sure you are about momentum. It’s not that you’re not trying hard to get both right: I have a feeling that trying to find the perfect balance is asymptotically unachievable, in part because that balance point (fulcrum) is a shifting target: market/competition forces change, we build better core competencies and age out others, we get distracted by shinies and we endure externalities that perturb rational decision-making.

We will always strive to optimize, and that we don’t ever quite get it right is not an individual failure but a consequence of Dunbar’s number, imperfect information flows, local-vs-global optimization tensions, and incredible complexity that will always challenge our desire to know “the right answer”. (Well, it’s “42” – but then the immediate next problem is figuring out the question.)

We’re awesome and fallible all at the same time – resolving such dualities is considered enlightenment, and I envy those who’ve gotten there. Keep striving.

(TL;DR don’t freak out if you don’t get it “right” this year. You’re likely to spend a lot of time in Cynefin “complex” and “chaos” domains for a while, and it’s OK that it won’t be clear what “right” is. Probe/Act-Sense-Respond is an entirely valid approach when it’s hard-to-impossible to predict the “right” answer ahead of time.)

Wherefore Product Owners?

I’m seeing a lot of talk in PM circles about the irreversible end-of-life of the PO – and even more radical, the consolidation of the PdM and PgM roles that sit separate from and alongside the PM.

There’s talk that the modern Product shop doesn’t need these two (edit: three) as an execution-discovery team, that Airbnb’s recent (irresponsibly misinterpreted) slight against the Product Manager (PM/PdM) title portends a peak in Product roles, and that AI will inevitably make Product “more efficient” (aka “we’ll need fewer of you slobs”).

Product Owner (PO) is unfortunately chained to the yoke of Agile, which incredibly hasn’t changed in its maniacal focus on The Team (and still isn’t ready to embrace The Rest Of The Org, to its sorry detriment) – and is proof of the inevitability of Hypocritical Irony, in that Agile preaches relentless Inspect and Adapt but hasn’t Adapted its roles, rituals or manifesto in the 23 years since those frustrated engineers fantasised about a world in which we all just got out of their way.

I’m seeing talk that the right way to make PMs more effective is no longer relying on a paired PO but leaning more heavily into EPMs (aka Program Managers aka PgM), ProdOps (Product Ops) and Continuous Discovery (aka “channel your customers and market” or “weaponise your critical advantage”).

I’m a little sad at the death (or at least dearth) of PO in the industry – that’s where I got my start ten years ago, and what catalysed my bias to experimentation, steel threading and “Scream Testing” – but it’s also a welcome sign that the rest of tech is ready to Inspect and Adapt. If something isn’t working, iteration/year after iteration/year, why shouldn’t we try something new that the evidence before us implies, and observe how that perturbs our intended outcomes?

So where can we look for inspiration? I’m still inspired by the radical refocus that is Modern Agile. What modes of thinking about value delivery and team effectiveness are inspiring you these days?

One AI’s rendering of PO getting left behind – amusingly vague

Curation as Penance

While talking to one of my colleagues about a content management challenge, we arrived at the part of the conversation where I fixated on a classic challenge.

We’re wrangling inputs from customers and colleagues into our Feature Request system (a challenging name for what boils down to qualitative research) and trying to balance the question of how to make it easy to find the feedback we’re looking for among thousands of submissions.

AI art is a wonder – is that molten gold pouring from his nose?

The Creator’s Indifference

It’d be easy to find the desired inputs (such as all customers who asked for anything related to “provide sensor support for Windows on Apple silicon” – clearly an artificial example eh?) if the people submitting the requests knew how we’d categorise and tag them.

But most outsiders don’t have much insight into the cultural black box that is “how does one collection of humans, indoctrinated to a specific set of organisational biases, think about their problem space?” – let alone the motivation or incentive to put in that extra level of metadata decoration.

Why should the Creators care how their inputs are classified? Their motivation as customers of a vendor is “let the vendor know what we need” – once the message has been thrown over the wall, that’s as much energy as any customer frankly should HAVE to expend. Their needs are the vendor’s problem to grok, not a burden for the customer to carry.

Heck, the very fact of any elucidated input the customer offers to the vendor is a gift. (And not every customer is in a gift-giving mood – especially the ones who are tired of sending feedback into a black hole.)

The Seeker’s Pain

Without such detailed classifications, those inputs become an undifferentiated pile. In Productboard (our current feedback collection tool of choice) they’re called Insights, and there’s a linear view of all Insights that’s not very…insightful. (Nor is it intended to be – searching is free text but often means scrutinising every one of dozens or hundreds of records, which is time-consuming.)

This makes the process of taking considered and defensible actions based on this feedback not very scalable, and the Seeker’s job quite tedious – in the past when I’ve faced that task, I put it off far too often and for far too long.

The Curator’s Burden

Any good Product Management discipline regularly curates such inputs: assigns them weights, ties them to normalised descriptors like customer name, size and industry, and groups them with similar requests to help find repeating patterns of problems-to-solve.

A little better from the AI – but what the heck is that franken-machine in the background?

A well-curated feedback system is productive – insightful – even correlated to better ROI on your spend of engineering time.

BUT – it costs. If the Creator and the Seeker have little incentive to do that curation, who exactly takes it on? And even if the CMS (content management system) has a well-architected information model up front (sketched just after this list), who is there to ensure

  • items are assigned to appropriate categories?
  • categories are added and retired as the product, business and market change?
  • supporting metadata is consistently added to group like with like along many dimensions?
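
Here’s a hypothetical sketch of the kind of information model I mean – a single feedback item decorated with the metadata a Curator maintains. Every field name and value is invented for illustration, not Productboard’s actual schema:

// Hypothetical curated feedback item – all names and values are invented.
const feedbackItem = {
  id: 'fb-1042',
  verbatim: 'We want IDS exclusions by IP',   // the Creator's words, untouched
  problemStatement: 'Vuln scans bury the SOC in false-positive alerts', // the Curator's synthesis
  customer: { name: 'Acme Corp', segment: 'Enterprise', industry: 'Finance' },
  categories: ['alerting', 'ids'],            // added and retired as the product shifts
  weight: 3,                                  // Curator-assigned signal strength
  relatedItems: ['fb-0877', 'fb-0991']        // grouped to surface repeating patterns
};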

The Curator role is crucial to an effective CMS – whether for product feedback (Productboard), or backlog curation (Jira), or customer documentation (hmm, we don’t use WordPress – what platform are we on this time?).

What’s most important about the curation work – whether it’s performed by one person (some fool like me in the system’s early days), or by the folks most likely to benefit (the whole PM team today) – is not that it happens with speed, but that it happens consistently over the life of the system.

Biggest challenge I’ve observed? In every CMS I’ve used or built, it’s ensuring adequate time and attention is spent consistently organising the content (as friction-free as it should be for the Creator) so that it can be efficiently and effectively consumed by the Seeker.

That Curator role is always challenging to staff or “volunteer”. It’s cognitively tiring work, doing it well rarely benefits the Curator, and the only time most Curators hear about it is when folks complain about what a terrible tool it is for ever finding anything.

Best case it’s finding gems among more gems…
…worst case it’s some Kafkaesque fever dream

(“Tire fire” and “garbage dump” are common epithets Creators and Seekers apply to most mature, enterprise systems like Jira – except in the rare cases where the system is zealously, jealously locked down and demands heavy effort on every input from the griping Creators.)

In our use of Productboard and Jira (or any other tool for grappling with the feedback tsunami), we’re in the position most of my friends and colleagues across the industry find themselves – doing a decent job finding individual items, mostly good at having them categorised for most Seekers’ daily needs, and wondering if there’s a better technology solution to a people & process problem.

(Hint: there isn’t.)

Curation is the price we need to pay to make easy inputs turn into effective outputs. Penance for most of us who’ve been around long enough to complain how badly organised things are, and who eventually recognise that we need to be the change we seek in the world.

“You either die a hero, or you live long enough to see yourself become the villain.” — Harvey Dent

WhoDidITalkTo: working ReactJS code!

You ever take a very long time to birth something small but ultimately, personally meaningful?

Me neither, but what I’m calling stage 1 of my ReactJS app is working to my liking.

WhoDidITalkTo is a personal work of love to help me remember all the wonderful encounters I have at Meetups and other such networking events.  It’s painful for me to keep forgetting the awesome conversations I’ve had with people, and to have to confess I don’t remember someone on whom I very clearly made an impression.  As someone with superhuman empathy, it’s crushing to see those hurt microexpressions cross their faces when they realize I’m no better than Leonard Shelby:

A little less dirty than him, usually

So I’m trying to remedy that, by giving myself a tool I can use from my phone to capture and review salient details from each new personal encounter I have at all the events I slut around to.

It’s prototype stage, and I have no dreams of monetizing this (so many startups have tried and failed to make this kind of “personal CRM lite” work), and it’s a long ways from being fully functional in the field.  Still, I’m having fun seeing just how far I can stretch my rusty front-end skills *and* treat this like a real Product Management project for myself.

If you’d like to peer inside my jumbled mind, this isn’t a bad place to see for yourself:
https://github.com/MikeTheCanuck/WhoDidITalkTo/projects/1

WhoDidITalkTo prototype v1

Occupied Neurons, September release

I’ve been scratching the itch of building an app for myself that solves a Job-to-be-done: when I’m networking, I want a tool to remind myself who are the weak ties in my network I’ve talked to, and what I’ve learned about them.  I want visual refreshers (photos I may have of them) and textual reminders of topics and things an otherwise non-porous memory would retain about people whose company I have previously enjoyed.
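
To make that Job-to-be-done concrete, here’s a hypothetical shape for a single encounter record – these field names are my working guesses for illustration, not the app’s final schema:

// Hypothetical encounter record for WhoDidITalkTo – all fields illustrative.
const encounter = {
  name: 'Jane',
  event: 'Portland ReactJS meetup',          // where we met
  date: '2016-09-14',
  photoUrl: 'https://example.com/jane.jpg',  // visual refresher (placeholder URL)
  topics: ['GraphQL', 'hiking'],             // textual reminders of what we talked about
  followUp: 'Send her that Firebase article' // the nudge my porous memory needs
};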

Using Firebase with ReactJS

In all the research I’m doing on prototyping a front end for my app, I’ve struggled to find something that’s more than “assemble every bespoke tag, class and id by hand” but less than “spend the next six months learning AngularJS”.  Focusing on the front-end to explore my user needs, I didn’t want to get stuck developing a big-ass (and probably unnecessary) back-end stack – even just adapting some well-defined pattern – so I started to explore Firebase [which is all front-end coding with a back-end data layer – to approximate it horribly].

And with a couple more explorations of the territory, I stumbled on the ReactJS “getting started” guide via the Hello World app, and finally understood how cool it is to have a pseudo-object-oriented approach to assembling the “V” in MVC.  (Who knows – for all I know, this is just vanilla ES6 now, and I’m just that far behind the times.)

Still, it is strikingly familiar in basic construction, and with the promise of integrating a Firebase “backend” to give me a lightweight stack that will more than adequately perform for me as a single user, I’m finally willing to wade through the React Tutorial and see if that’s enough for me to piece together a working prototype.
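
As a concrete (hedged) illustration of that lightweight stack, here’s a minimal sketch of a React component subscribing to a Firebase Realtime Database list – the config values and the “encounters” path are placeholders, and this approximates the pattern rather than reproducing my actual app code:

import React from 'react';
import firebase from 'firebase';

// Placeholder config – the real values come from the Firebase console
firebase.initializeApp({
  apiKey: 'YOUR_API_KEY',
  databaseURL: 'https://your-app.firebaseio.com'
});

class EncounterList extends React.Component {
  constructor(props) {
    super(props);
    this.state = { encounters: [] }; // local UI state, hydrated from Firebase
  }

  componentDidMount() {
    // Subscribe once; Firebase pushes every change to us – no REST plumbing
    this.ref = firebase.database().ref('encounters');
    this.ref.on('value', snapshot => {
      const val = snapshot.val() || {};
      this.setState({
        encounters: Object.keys(val).map(key =>
          Object.assign({ id: key }, val[key]))
      });
    });
  }

  componentWillUnmount() {
    this.ref.off(); // detach the listener when the component goes away
  }

  render() {
    return (
      <ul>
        {this.state.encounters.map(e => <li key={e.id}>{e.name}</li>)}
      </ul>
    );
  }
}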

Props vs State in React

This is one of the more striking subtleties of React – how similar props and state are, and how it appears [at least to me] that the distinction is more a convention for others to understand how to use your React code, than anything that is required by the React compiler.
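
A quick sketch of that convention as I read it – props flow in from the parent and are treated as read-only, while state is data the component owns and mutates via setState (the component and prop names are invented for illustration):

import React from 'react';

class Greeting extends React.Component {
  constructor(props) {
    super(props);
    this.state = { clicks: 0 }; // state: owned and mutated by this component
  }

  render() {
    // this.props.name is handed down by the parent; by convention (and
    // React's dev-mode guards) we read it but never assign to it
    return (
      <button onClick={() => this.setState({ clicks: this.state.clicks + 1 })}>
        Hi {this.props.name}, clicked {this.state.clicks} times
      </button>
    );
  }
}

// Usage: the parent chooses the prop; the child manages its own state
// <Greeting name="Mike" />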


And on the Product Side of my mental tesseract…

I’ve also been refreshing my knowledge of the Product Management practices I haven’t had an opportunity to practice lately.  Amongst which:

How does a Product Manager perform competitive analysis?

This is the clearest-eyed explanation I’ve seen yet about “understanding your competition”.  I’ve worked with too many Product Marketing folks who get spun up about the checklist war, and making sure that we have feature parity in the product, and it’s always seemed like a lot of sound and fury, signifying nothing.

Focusing on “what problems does the competition solve for *YOU* dear customer, and why are those important to your core business?” is a whole lot more genuine *and* believable to me.  I’ve never thought of this line of questioning as “competitive analysis”, just part of doing my job to suss out what I can do to help my customers.

How Do I Know What Success Looks Like?

I was asked recently what I do to ensure my team knows what success looks like.  I generally start with a clear definition of done, then factor usage and satisfaction into my evaluation of success-via-customers.

Evaluation Schema

Having a clear idea of what “done” looks like means having crisp answers to questions like:

  • Who am I building for?
    • Building for “everyone” usually means it doesn’t work well for anyone
  • What problem is it fixing for them?
    • I normally evaluate problems-to-solve based on the new actions or decisions the user can take *with* the solution that they can’t take *without* it
  • Does this deliver more business value than other work we’re considering?
    • Delivering value we can believe in is great, and obviously we ought to have a sense that this has higher value than the competing items on our backlog

What About The Rest?

My backlog of “ideas” is a place where I often leave things to bake.  Until I have a clear picture in my mind who will benefit from this (and just as importantly, who will not), and until I can articulate how this makes the user’s life measurably better, I won’t pull an idea into the near-term roadmap let alone start breaking it down for iteration prioritization.

In my experience there are lots of great ideas people have that they’ll bring to whoever they believe is the authority for “getting shit into the product”.  Engineers, sales, customers – all have ideas they think should get done.  One time my Principal Engineer spent an hour talking me through a hyper-normalized data model enhancement for my product.  Another time, I heard loudly from many customers that they wanted us to support their use of MongoDB with a specific development platform.

I thanked them for their feedback, and I earnestly spent time thinking about the implications – how do I know there’s a clear value prop for this work?

  • Is there one specific user role/usage model that this obviously supports?
  • Would it make users’ lives demonstrably better in accomplishing their business goals & workflows with the product as they currently use it?
  • Would the engineering effort support/complement other changes that we were planning to make?
  • Was this a dealbreaker for any user/customer, and not merely an annoyance or a “that’s something we *should* do”?
  • Is this something that addresses a gap/need right now – not just “good engineering that should become useful in the future”?  (There’s lots of cool things that would be fun to work on – one time I sat through a day-long engineering wish list session – but we’re lucky if we can carve out a minor portion of the team’s capacity away from the things that will help right now.)

If I don’t get at least a flash of sweat and “heat” that this is worth pursuing (I didn’t with the examples mentioned), then these things go on the backlog and they wait.  Usually the important items will come back up, again and again.  (Sometimes the unimportant things too.)  When they resurface, I test them against product strategy, currently-prioritized (and sized) roadmap and our prioritization scoring model, and I look for evidence that shows me this new idea beats something we’re already planning on doing.

If I have a strong impression that I can say “yes” to some or all of these, then it also usually comes along with a number of assumptions I’m willing to test, and effort I’m willing to put in to articulate the results this needs to deliver [usually in a phased approach].

Delivery

At that point we switch into execution and refinement mode – while we’ve already had some roughing-out discussions with engineering and design, this is where backlog grooming hammers out the questions and unknowns that bring us to a state where (a) the delivery team is confident what they’re meant to create and (b) estimates fall within a narrow range of guesses [i.e. we’re not hearing “could take a day, could take a week” – that’s a code smell].

Along the way I’m always emphasizing what result the user wants to see – because shit happens, surprises arise, priorities shift, and the delivery team needs a solid defender of the result we’re going to deliver for the customer.  That doesn’t mean don’t flex on the details, or don’t change priorities as market conditions change, but it does mean providing a consistent voice that shines through the clutter and confusion of all the details, questions and opinions that inevitably arise as the feature/enhancement/story gets closer to delivery.

It also means making sure that your “voice of the customer” is actually informed by the customer.  As we develop the definition of Done, mockups, prototypes and alpha/beta versions, I’ve made a point of taking the opportunity where it exists to pull in a customer or three for a usability test, or a customer proxy (TSE, consultant, success advocate), to give me their feedback, reaction and thinking in response to whatever deliverables we have available.

The most important part of putting in this effort to listen, though, is learning and adapting to the feedback.  It doesn’t mean rip-sawing in response to any contrary input, but it does mean absorbing it and making sure you’re not being pig-headed about the up-front ideas you generated that are more than likely wrong in small or big ways.  One of my colleagues has articulated this as Presumptive Design, whereby your up-front presumptions are going to be wrong, and the best thing you can do is to put those ideas in front of customers, users, proxies as fast and frequently as possible to find out how wrong you are.

Evaluating Success

Up front and along the way, I develop a sense of what success will look like when it’s out there, and that usually takes the form of quantity and quality – usage of the feature, and satisfaction with the feature.  Getting instrumentation of the feature in place is a brilliant but low-fidelity way of understanding whether it was deemed useful – if numbers and ratios are high in the first week and then steadily drop off the longer folks use it, that’s a signal to investigate more deeply.  On the user-satisfaction side, post-hoc surveys and customer calls that get a sense of NPS-like confidence and “recommendability” are higher-fidelity means of validating how it’s actually impacting real humans.
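
To illustrate the low-fidelity quantity side, here’s a hypothetical sketch of the simplest instrumentation I mean – the event name, fields and /events endpoint are all invented for illustration, not any particular analytics product’s API:

// Record each use of a feature so we can compare week-1 enthusiasm against
// steady-state usage. Fire-and-forget: losing an event is fine, blocking
// the user's flow is not.
function trackFeatureUse(featureName, userId, firstSeenDate) {
  const msPerWeek = 7 * 24 * 3600 * 1000;
  const weeksSinceAdoption =
    Math.floor((Date.now() - firstSeenDate.getTime()) / msPerWeek);
  fetch('/events', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      event: 'feature_used',
      feature: featureName,
      userId: userId,
      weeksSinceAdoption: weeksSinceAdoption // watch for the week-1 spike, then the drop-off
    })
  });
}

// e.g. trackFeatureUse('report-export', 'u123', new Date('2016-05-01'));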

Occupied Neurons, late May 2016

Understanding Your New Google Analytics Options – Business 2 Community

Here’s where the performance analytics and “business analytics” companies need to keep an eye or two over their shoulder. This sounds like a serious play for the high-margin customers – a big capital “T” on your SWOT analysis, if you’re one of the incumbents Google’s threatening.

10 Revealing Interview Questions from Product Management Executives

Prep’ing for a PM/PO job interview? Here are some thought-provoking questions you should think about ahead of time.

When To Decline A Job Offer

The hardest part of a job search (at least for me) is trying to imagine how I would walk away from a job offer, even if it didn’t suit my needs or career aspirations. Beyond the obvious red flags (dark/frantic mood around the office, terrible personality fit with the team/boss), it feels ungrateful to say “no” based on a gut feel or “there’s something better”. Here are a few perspectives to bolster your self-worth algorithm.

The Golden Ratio: Design’s Biggest Myth

I’m one of the many who fell for this little mental sleight-of-hand. Sounds great, right? A magic proportion that will make any design look “perfect” without being obvious, and will help elevate your designs to the ranks of all the other design geeks who must also be using the golden ratio.

Except it’s crap, as much a fiction and a force-fit as vaccines and autism or oat bran and heart disease (remember that old saw?). Read the well-researched discussion.

Agile Is Dead

This well-meaning dude fundamentally misunderstands Agile, and yet is so expert that he knows how to improve on it. “Shuffling Trello cards” and “shipping often” doesn’t even begin…

Not even convinced *he* has read the Manifesto. Gradle is great, CD is great, but if you have no strategy for Release Management or you’re so deep in the bowels of a Microservices forest that you don’t have to worry about Forestry Management, then I’d prefer you step back and don’t confuse those chainsaw-wielders who I’m trying to keep from cutting off their limbs (heh, this has been brought to you by the Tortured Analogies Department).

Occupied Neurons, April 2016

https://medium.com/@sproutworx/six-templates-for-aspiring-product-managers-a568d3115cfe#.swkk52f58
So many Product Managers are making it up as they go along – generating whatever kinds of artifacts will get them past the next checkpoint and keep all the spinning plates from veering off into ether. This is the first time in a long time I’ve seen someone propose some viable, useable and not totally generic tools for capturing their PM thinking. Well worth a look.

https://medium.com/swlh/mvpm-minimum-viable-product-manager-e1aeb8dd421
The “MVPM” (Minimum Viable Product Manager) model is a hot topic, and there’s a number of folks taking a kick at deciphering it in their context. I’ve got a spin on it that I’ll write about soon, but this is a great take on the model too.

https://schloss.quora.com/Design-doesnt-deserve-a-seat-at-the-table
Captures all my feelings about the complaint from Designers (and Security reviewers, and all others in the “product quality” disciplines) that they get left out of discussions they *should* be part of. My own rant on the subject doesn’t do this subject justice, but I’m convinced that we *earn* our right to a seat by helping steer, working through the messy quagmire that is real software delivery (not just throwing pixel-perfect portfolio fodder over the wall).

http://www.eventbrite.com/e/resilience-and-the-future-of-work-responsiveorg-un-conference-tickets-24045089510
An unconference to expand awareness of a movement among leading thinkers on how to organize work in the 21st century. Looks fascinating – unconference format is dense and high-learning, the subject is still pretty fresh and new (despite the myriad of books building up to this over the last decade), and the energy in the Portland community is bursting.