In which I refuse to accept “working as designed”

TL;DR: I spent an afternoon interrogating an AI agent about why my media server’s subtitle backlog wasn’t clearing. Turns out it wasn’t one thing – it was four. And I only found all four because I kept pushing back on explanations that didn’t fully hold up.


I run Bazarr on a Synology NAS. If you don’t know Bazarr, it’s an open-source tool that automatically downloads subtitles for your TV shows and movies. It’s genuinely excellent – the kind of “set it and forget it” software that mostly just works.

Mostly.

For months I had hundreds of accumulated episodes sitting in the “Wanted” list – episodes Bazarr knew existed, knew needed subtitles, and apparently couldn’t or wouldn’t do anything about. I’d subscribed to an OpenSubtitles.com VIP account (1,000 downloads per day instead of 20). I’d fixed some bugs in the codebase. I’d run “Search All” repeatedly. Nothing moved.

So I sat down with Claude Code and started asking questions.

What followed was one of the more instructive afternoons I’ve had working with an AI agent – not because the agent was brilliant, but because it wasn’t, and I kept noticing.


False lead #1: “729 episodes probably have no available subtitles”

Early in the investigation, after we’d established that Bazarr’s adaptive searching was throttling the bulk search (every single wanted episode had a failedAttempts timestamp, so Search All was skipping everything instantly), Claude offered this:

“For many: genuinely no results (older/obscure shows, score threshold, whatever).”

I pushed back. I’d gone to OpenSubtitles.com directly and checked Duckman – a 1994 animated show, not exactly mainstream – and found subtitles with thousands of downloads. The agent backed off: “You’re right. I was hedge-talking.”

(I appreciated the honesty. But I’d had to earn it.)


False lead #2: “The quota issue stamped all 729 episodes as failed”

The theory was that one particular movie had been eating up my 20-downloads-per-day free quota in an infinite retry loop, leaving nothing for the backlog. When that movie finally got fixed and I upgraded to VIP, the damage was done – 729 episodes had been marked as “failed attempts” and were sitting in an adaptive search holding pen.

Plausible story. But when I pushed on the mechanism – how exactly does hitting the download quota cause 729 episodes to all get stamped as failures? – the answer got more complicated. Claude had overstated it. Hitting DownloadLimitExceeded breaks the search loop after the current episode; it doesn’t retroactively stamp everything that follows. The 729 stamps had to come from something else.

The more likely explanation: one bulk search run, probably during a period when my provider configuration was broken or incomplete, in which Bazarr searched all 729 episodes, found nothing (for config reasons, not because subs don’t exist), and dutifully stamped every one of them.


The real design bug (and why I pushed hard on this)

Here’s where it got interesting. In the Bazarr codebase, failedAttempts is written to the database before generate_subtitles is called. Before the provider is contacted. Before anything is found or not found.

The consequence: if a search runs, a subtitle is found, and then the download fails – due to quota exhaustion, a network error, a 410 response from the provider – the episode gets stamped as a “failed attempt.” Adaptive searching then throttles it for weeks, even though the subtitle was right there.

To me, that’s a meaningful design gap. The stamp should only be written when the search actually runs and finds nothing. Download failures are provider-side problems, not signals that subtitles don’t exist.
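To make the ordering concrete, here’s a minimal sketch in JavaScript – hypothetical function and field names, not Bazarr’s actual Python code – of the difference between stamping before and after the search:

```javascript
// A sketch (not Bazarr's actual code or names) of the two orderings.
// searchFn simulates a provider search; downloadFn simulates fetching the
// subtitle, which can fail for provider-side reasons (quota, network, 410).

// Buggy ordering: the episode is stamped before the outcome is known, so a
// failed download still leaves it throttled by adaptive searching.
function searchThenDownloadBuggy(episode, searchFn, downloadFn) {
  episode.failedAttempts = Date.now();   // stamped up front, unconditionally
  const found = searchFn(episode);
  if (found && downloadFn(found)) {
    episode.failedAttempts = null;       // cleared only on complete success
    return true;
  }
  return false;
}

// Fixed ordering: stamp only when providers were reachable and genuinely
// returned nothing. A download failure leaves the episode eligible next run.
function searchThenDownloadFixed(episode, searchFn, downloadFn) {
  const found = searchFn(episode);
  if (!found) {
    episode.failedAttempts = Date.now(); // a real "nothing available"
    return false;
  }
  return downloadFn(found);              // failure here writes no stamp
}
```

With the buggy ordering, a found-but-undownloadable subtitle and a genuinely missing subtitle become indistinguishable afterwards; the fixed ordering keeps those two outcomes separate, which is the substance of the PR.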

I asked Claude directly: “Isn’t that bad logic? Shouldn’t we try again next run, not wait 1-3 weeks?”

The answer, eventually: “Yes. You’re absolutely right. This is a genuine design bug, not a corner case.”

We filed a PR. (morpheus65535/bazarr#3276, if you’re curious. The fix moves the stamp to after the search completes, and only writes it when providers were available but genuinely returned nothing.)


Verifying the damage

Before applying any fix, I wanted to confirm what we were actually dealing with. A quick sqlite3 query on the Bazarr database on my Synology:

SELECT
  COUNT(CASE WHEN failedAttempts IS NOT NULL THEN 1 END) AS stamped,
  COUNT(CASE WHEN failedAttempts IS NULL THEN 1 END) AS clean
FROM table_episodes
WHERE missing_subtitles != '[]' AND missing_subtitles IS NOT NULL;

Result: 729 | 0. Every single wanted episode was stamped. None were clean.

The fix:

UPDATE table_episodes
SET failedAttempts = NULL
WHERE missing_subtitles != '[]'
  AND missing_subtitles IS NOT NULL;

After that, “Search All” ran for real – taking minutes instead of completing in seconds. Progress. But still no downloads.


The actual fix that finally cleared the backlog

Quota: 1 of 1,000 used. Providers: not throttled. Configuration health check: clean. And yet nothing was downloading.

We dug into the OpenSubtitles.com provider config. “Use Hash” was on.

When Use Hash is enabled, Bazarr computes a hash of the video file and sends it to the provider looking for an exact file match. If no subtitle has been uploaded for that exact release, the search returns nothing – even if perfectly good subtitles exist for the episode by name, season, and episode number.

For popular, well-seeded releases, hash matching works great. For a 1994 animated series about a sentient duck, a missing hash match is hardly a surprise.

Turn off Use Hash. Search All. Watch the queue drain.


What this was really about

I’m a PM. A technical one, but a PM. My job is not to write the code – it’s to ask the right questions until I understand whether the system is actually behaving correctly, or whether someone (or something) is telling me a story that’s plausible but incomplete.

Claude gave me five or six explanations today that were each partially right and meaningfully wrong. Not through any bad faith – just through the same pattern I see in engineers who are smart and moving fast: the first explanation that fits the visible evidence gets offered, and if the person asking doesn’t push, that’s where it ends.

I kept pushing. Not combatively – I apologised once for pushing too hard on a point that turned out to be wrong – but persistently. Show me the code. Walk me through the mechanism. What does the stamp actually record? Does this explain all 729, or just some?

To me, that’s the job. Not “accept the answer that sounds right” – but “accept the answer that accounts for all the evidence.”

The backlog is draining now. Four things needed fixing. I found all four.


We also shipped two code fixes to the upstream Bazarr project along the way. morpheus65535 has been a gracious maintainer – accepting PRs without fuss from an unknown contributor who showed up in his GitHub with opinions about his subtitle retry logic. I assume he has opinions of his own. I’d love to know them.

Fixing the double-tap, Agentic style

I was sitting on my couch trying to add a show to Sonarr on my phone. Searched for something, did the thing, then tapped the × to clear the search and add another. The keyboard dismissed. I had to tap the input box again to get it back.

Two taps instead of one. To be clear, this wasn’t life-threatening – not a crash, not wrong data – just the kind of friction that compounds quietly across every session until you stop noticing it, or stop using the app on mobile because it feels like it’s working against you.

I went looking for who had filed a bug before me, because surely someone had. No one had. So I filed it. Reproducible, irritating, worth my time.

Why it was actually hard

The fix seemed obvious: when the user clears the search, call .focus() on the input. Except on mobile Safari (and Chrome on iOS, per my testing), .focus() only raises the software keyboard when it’s called synchronously inside a direct user gesture. Defer it – with a useEffect, a setTimeout, anything async – and the browser silently ignores it. Input gets focus in the DOM sense, but the keyboard stays down.

(A maintainer later asked whether e.preventDefault() on the button would be simpler. That’d work on desktop – blocks the mousedown before the input loses focus. On mobile, focus is already gone during touchstart, which fires earlier in the event sequence. preventDefault has nothing to prevent by then.)

So the fix required calling .focus() synchronously inside the tap handler, which meant the input component needed to expose a focus() method – a React pattern already used elsewhere in the codebase, thankfully.
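Stripped of the React plumbing, the handler contract looks like this (hypothetical names – the real PR wires focus() through the project’s existing input component pattern):

```javascript
// Sketch of the clear-search tap handler. The one rule that matters: the
// focus() call must run synchronously, while we're still inside the user
// gesture, or iOS Safari/Chrome will leave the software keyboard down.
function makeClearSearchHandler(input, setQuery) {
  return function onClearTap() {
    setQuery('');    // clear the search text
    input.focus();   // synchronous: keyboard stays up on mobile

    // Deferring the focus in any way is silently ignored on iOS - the input
    // gains DOM focus, but the keyboard never appears:
    //   setTimeout(() => input.focus(), 0);
    //   useEffect(() => input.focus(), [query]);
  };
}
```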

Being a guest

This is my first potential contribution to a widely-used open source project with real maintainers who have opinions (I assume they have opinions, having built a damn useful and pretty usable app). Didn’t seem right to blunder in.

Before branching: read the contribution guidelines, confirmed the pattern I was using existed elsewhere in their code, verified their gitflow. Opened the issue first and waited for triage before readying the PR.

When I did open the Draft PR, I called out the one glaring thing upfront: the diff looks alarming – 280+ lines changed – but almost all of it is re-indentation from the refactor. Here’s the whitespace-ignoring view. Here’s why the approach is valid. Don’t make the reviewer work to figure out what you actually changed, especially as an unknown Internet goon throwing them a drive-by.

A maintainer asked if a simpler one-liner would do. I explained why it wouldn’t work on mobile, politely and with specifics, and offered to collaborate if they had insights I didn’t.

Where it sits

The PR is Ready for Review. The issue was triaged and labelled the next day. Keyboard will pop up on the first tap – at least on my couch, on my phone.

What I want to emphasise isn’t that I can write React – hell, with Agentic tools that’s the easy part. It’s that I noticed the friction, understood it before touching the code, and approached the fix in a way that respected the people who’d built the thing I was trying to improve. Standing on the shoulders of giants, the least I could do is wash the mud off my shoes.

Two taps to one. It’s a small thing. I filed a bug over it anyway.

Speed, Quality or Cost: Choose One

PM says: “The challenge is our history of executing post-mvp. We get things out the door and jump onto the next train, then abandon them.”

UX says: “We haven’t found the sweet spot between innovation speed & quality, at least in my 5 years.”

Customer says: “What’s taking so long? I asked you for 44 features two years ago, and you haven’t given me any of the ones I really wanted.”

Sound familiar? I’m sure you’ve heard variations on these themes – hell, I’ve heard them in every tech firm I’ve worked at.

One of the most humbling lessons I keep learning: nothing is ever truly “complete”, but if you’re lucky some features and products get shipped.

I used to think this was just a moral failing of the people or the culture, and that there *had* to be a way this could get solved. Why can’t we just figure this shit out? Aren’t there any leaders and teams that get this right?

It’s Better for Creatives, Innit?

I’m a comics reader, and I like to peer behind the curtain and learn about the way that creators succeed. How do amazing writers and artists manage to ship fun, gorgeous comics month after month?

Some of the creators I’ve paid close attention to say the same thing as even the most successful film & TV professionals, theatre & clown types, painters, potters and anyone creating discrete things for a living:

Without a deadline, lots of great ideas never quite get “finished”. And with a deadline, stuff (usually) gets launched, but it’s never really “done”. Damned if you do, damned if you don’t. Worst of both worlds.

In commercial comics, the deal is: we ship monthly, and if you want a successful book, you gotta get the comic to print every month on schedule. Get on the train when it leaves, and you’re shipping a hopefully-successful comic. And getting that book to print means having to let go even if there’s more you could do: more edits to revise the words, more perfect lines, better colouring, more detailed covers.

Doesn’t matter. Ship it or we don’t make the print cutoff. Get it out, move on to the next one.

Put the brush down, let the canvas dry. Hang up the painting.

No Good PM Goes Unpunished

I think about that a lot. Could I take another six months, talk to more research subjects, rethink the UX flow, wait til that related initiative gets a little more fleshed out, re-open the debate about the naming, work over the GTM materials again?

Absolutely!

And it always feels like the “right” answer – get it finished for real, don’t let it drop at 80%, pay better attention to the customers’ first impressions, get the launch materials just right.

And if there were no other problems to solve, no other needs to address, we’d be tempted to give it one more once-over.

But.

There’s a million things in the backlog.

Another hundred support cases that demand a real fix to another even more problematic part of the code.

Another rotting architecture that desperately needs a refactor after six years of divergent evolution from its original intent.

Another competitive threat that’s eating into our win-loss rate with new customers.

We don’t have time to perfect the last thing, cause there’s a dozen even-more-pressing issues we should turn our attention to. (Including that one feature that really *did* miss a key use case, but also another ten features that are getting the job done, winning over customers, making users’ lives better EVEN IN THEIR IMPERFECT STATE.)

Regrets, I’ve Had a Few

I regret a few decisions I wish I’d spent more time perseverating on. There’s one field name that still bugs me every time I type it, a workflow I wish I’d fought harder to make more intuitive, and an analytic output I wish we’d stuck to our guns on, reporting it as it comes out of the OS.

But I *more* regret the hesitations that have kept me from moving on, cutting bait, and getting 100% committed to the top three problems – the ones about which I too often say “Those are key priorities, top of the list; we should get that kicked off shortly,” and then somehow let slip til next quarter, or end up addressing six months later than a rational actor would have.

What is it he said? “Let’s decide on this today as if we had just been fired, and now we’re the cleanup crew who stepped in to figure out what those last clowns couldn’t get past.”

Lesson I Learned At Microsoft

Folks used to say “always wait for version 3.0 of a new Microsoft product” (back in the packaged-binaries days – hah). And I bought into it. Years later I learned what was going on: Microsoft deliberately shipped v1.0 to gauge market interest (and sometimes abandoned a product right there), v2.0 to start refining the experience, and v3.0 to get things mostly “right” and ready for mass adoption.

If they’d waited to ship until they’d completed the 3.0 scope, they’d have way overinvested in some market dead-ends, built features that weren’t actually crucial to customers’ success, and missed the opportunity to hear how folks responded to the actual (incomplete, hardly perfect) product in situ.

What Was The Point Again?

Finding the sweet spot between speed and quality strikes me as trying to beat the Heisenberg Uncertainty Principle: the more you refine your understanding of position, the less sure you are about momentum. It’s not that you’re not trying hard to get both right: I have a feeling that trying to find the perfect balance is asymptotically unachievable, in part because that balance point (fulcrum) is a shifting target: market/competition forces change, we build better core competencies and age out others, we get distracted by shinies and we endure externalities that perturb rational decision-making.

We will always strive to optimize, and that we don’t ever quite get it right is not an individual failure but a consequence of Dunbar’s number, imperfect information flows, local-vs-global optimization tensions, and incredible complexity that will always challenge our desire to know “the right answer”. (Well, it’s “42” – but then the immediate next problem is figuring out the question.)

We’re awesome and fallible all at the same time – resolving such dualities is considered enlightenment, and I envy those who’ve gotten there. Keep striving.

(TL;DR don’t freak out if you don’t get it “right” this year. You’re likely to spend a lot of time in Cynefin “complex” and “chaos” domains for a while, and it’s OK that it won’t be clear what “right” is. Probe/Act-Sense-Respond is an entirely valid approach when it’s hard-to-impossible to predict the “right” answer ahead of time.)

Linus rants at the security community again – bravo

https://lkml.org/lkml/2017/11/17/767

Linus goes off on the security community, who keep trying to make sweeping, under-tested, destabilizing changes to the kernel. While his delivery leaves something to be desired, the message is welcome and apparently remains necessary. Making radical changes that do nothing to help system operators and users know what’s going on – or to control or even just report the issues – is, shall we say, frustrating.


It’s this kind of flagrant power play by security mavens that irks the rest of us to a homicidal degree. It punishes the user in the hope that the user will push the pain uphill to the originator of the buggy code.

Except that no typical user (i.e. 99% of the computing end user population) even *recognises* that the problem is with the calling code (app, driver) rather than the OS (“computer”, “CPU”, “crap phone”) that is merely trained to enforce these extreme behaviours.

I find after a couple of decades in infosec land that this is motivated by the disregard security folks have for the end user victims of this whole tug-of-war, which seems so often to break down to “I’m sick of chasing software developers to convince them to fix their bugs, so instead let’s make the bug ‘obvious’ to the end users and then the users will chase down the software developers for me”.

Immediate kernel panic may have been an appropriate response decades ago when operators, programmers and users were closely tied in space and culture. It may even still be an appropriate posture for some mission-critical and highly-sensitive systems, if you favour “protection” over stability.

It is increasingly ridiculous to expect the user of most other systems to have any idea how to tell the powers that be what happened, and to have that report turned into a fix in a viable timeframe – let alone to rely on instrumented, aggregated, anonymized crash reports being fed en masse to the few vendors who know how (let alone have the time) to request, retrieve and paw through millions of such reports looking for the few needles in the haystacks.

Punish the victim and offload the *real* work of security (i.e. getting bugs fixed) to people least interested and least expert at it? Yeah, good luck.

It is entirely appropriate in an increasing number of circumstances to soften the approach and try warning the user and trusting them with a little power to make some decisions themselves (rather than arbitrarily punish them for mistakes not their own).

I love many of my colleagues in the security community dearly, and wouldn’t tell them to quit their jobs, but goddamn do we quickly forget that the options are not just “PREVENT” but also “DETECT” and “CORRECT”. I’m glad to see that Kees Cook’s followup clarifies that he’s already looking into this, and learning that such violent change to a kernel can’t be swallowed whole.

Bug Reports: hoopla + comics

An occasional series of the bugs I attempt to report to vendors of software I enjoy using.

Bug #1: re-borrow, can’t read

I borrow a comics title on Hoopla, and it eventually expires. I re-borrow it, but when I try to read it, the app reports a “There was an error loading Ex Machina Book Two.” error.

I tried a half-dozen times to Read it. I killed the app and restarted it, then tried to Read, still the same error.  I am unable to find a delete feature in the app, so I cannot delete and re-download the content.

This same error has happened to me twice with two different comics titles.  I only read comics via hoopla, so I cannot yet report if this happens for non-comics content.

Repro steps

  • Open Hoopla app on my device, browse to the title Ex Machina Book Two
  • Tap the Borrow button, complete the Downloading phase
  • Tap the Read button – result: content loads fine
  • Wait 21+ days for DRM license to expire
  • Browse to the same title, tap Borrow
    (Note: it takes no time at all to switch to the Read button, which implies it just downloads a fresh DRM license file)
  • Tap the Read button

Expected Result

Book opens, content is readable.

Actual Result

App reports Error “There was an error loading…”, content does not load:

hoopla error re-borrowing comic.png

User Environment

iPad 3, iOS 9.3.5, hoopla app version 4.10.2

Bug #2: cannot re-sort comics

I browse the “Just added to hoopla” section of Comics, and no matter which sorting option I choose, the list of comics appears in the exact same order. Either this is a coincidence, or the sorting feature doesn’t work (at least in this particular scenario).

Repro steps

  • Open the hoopla app on my device, tap the Books tab
  • Tap the Comics selector across the top of the app window, then tap the Genres link at the top-right corner
  • Select the option Just added to hoopla
  • Scroll the resulting comics titles in the default popular view, noting that [at time of writing] three Jughead titles appear before Superman, Betty & Veronica and The Black Hood
  • Tap the new arrivals and/or A-Z view selectors along the top

Expected Result

The sort order of the displayed comics would change under one or both views (especially under the A-Z view, where Jughead titles would be listed after Betty & Veronica). The included titles may or may not change (perhaps some added, some removed in the new arrivals view, if this is meant to show just the most recently-added titles).

Actual Result

The sort order of the displayed comics appears identical to the naked eye.  Note that in the A-Z view, the Jughead comics continue to appear at the top, ahead of the Betty & Veronica comic:

hoopla sort order in A-Z view.png

User Environment

iPad 3, iOS 9.3.5, hoopla app version 4.10.2

Occupied Neurons, October edition

Melinda Gates Asked For Ideas to Help Women in Tech: Here They Are

https://backchannel.com/an-open-letter-to-melinda-gates-7c40d8696b63#

I am psyched that a powerhouse like Gates is taking up the cause, and I sincerely hope she reads this article (and many others) to get a sense of the breadth of the problem (and how few working solutions there are).  The overlap with race, the attempts to bring more women into classrooms, the tech industry’s bias towards the elite schools and companies (and not the wealth of other experiences). It’s a target-rich environment.

Building a Psychologically Safe Workplace: Amy Edmondson at TEDxHGSE

https://m.youtube.com/watch?feature=youtu.be&v=LhoLuui9gX8

I am super-pleased to see that the concept of Psychological Safety is gaining traction in the circles and organizations I’m hanging with these days.  I spend an inordinate amount of time in my work making sure that my teammates and colleagues feel like it’s OK to make a mistake, and to own up to dead ends and unknowns – and it will sure make the work easier when I’m not the only one fighting the tide of mistrust/worry/fear that creates an environment where learning, risks and mistakes are discouraged.

Three Books That Influenced CorgiBytes Culture

http://corgibytes.com/blog/2016/09/15/three-influential-books/

Andrea and Scott are two people who have profoundly changed my outlook on what’s possible to bring to the workplace, and how to make a workplace that truly fits what you want (and sometimes need) it to be. Talking about empathy as a first-class citizen, bringing actual balance to the day and the communications, and treating your co-workers better than you treat yourself – and doing it in a fun line of business with real, deep impact for individual customers.

This is the kind of organization that I could see myself in. And which would draw in the kinds of people I enjoy working with each day.

So after meeting them earlier this year in Portland, I’ve followed their adventures via their blog and twitter accounts. This article is another nuanced look at what has shaped their workplace, and I sincerely hope I can do likewise someday.

Reducing Visual Noise for a Better User Experience

https://medium.com/@alitorbati/reducing-visual-noise-for-a-better-user-experience-ae3407ff9c99


These days I find myself apprehensively clicking on Design articles on Medium.  While there’s great design thinking being discussed out there, I seem to be a magnet for finding the ones that complain why users/managers/businesses don’t “get it”.

As I’d hoped, this was an honest and detailed discussion of the inevitable design overload that creeps into most “living products”, and the factors that drove them to improve the impact for non-expert users.

(I am personally most interested in improving the non-expert users’ experience – experts and enthusiasts will always figure out a way to make shit work, even if they don’t like having to beat down a new door; the folks I care to feed are those who don’t have the energy/time/inclination/personality for figuring out something that should be obvious but isn’t.  Give me affordances, not a learning experience: when you’ve got clickable/tappable controls on your page, give me lines/shadows/shading to signify “this isn’t just text”, not just subtle whitespace that cues the well-trained UI designer that there’s a button around that otherwise-identically-styled text.)

Occupied Neurons, early May 2016

The continuing story of the intriguing ideas and happenings that I can’t shake off…


(Have you ever seen an episode of Pigs In Space?  If not, go sample one now, and you’ll get my droll reference)

Infinite Scrolling, Pagination or “Load More” Buttons? Usability Findings in eCommerce

https://www.smashingmagazine.com/2016/03/pagination-infinite-scrolling-load-more-buttons/

Summary (and something I plan to bias towards in future designs, under similar conditions): The “Load More” design pattern is the most well-received by users and creates a minimum of friction while still enabling access to the page footer.

How Spotify’s Poor API Hygiene Broke a Bunch of Hardware and Software

http://www.programmableweb.com/news/how-spotifys-poor-api-hygiene-broke-bunch-hardware-and-software/analysis/2016/02/23

This is a pretty epic rant on the fallout for independent Spotify developers from a haphazard approach to managing the APIs offered over the years by this consumer entertainment service. Having worked on the other side of these kinds of decisions, I can well imagine how this came to be: thin staffing levels keeping the team from putting adequate attention on developer communications and engineering maintenance, plus distracted attention from PMs (or possibly even frequent PM turnover), such that late in the game no one even remembers, let alone still believes in, the original value prop behind the original APIs.

It doesn’t excuse the broken promises behind the APIs, and especially not the lack of communication in obvious channels when APIs were changed or eliminated, but I’ve been in such positions as a Product guy and found myself making decisions that felt just as compromised – trading off one disappointment for a better-mitigated disappointment elsewhere. It happens, especially when the product being extended through those APIs has a pretty low profit margin, and when the staff devoted to managing those concerns are stretched terribly thin (higher priorities and all).

Theory of Constraints

https://en.m.wikipedia.org/wiki/Theory_of_constraints

At the Intel-sponsored Accelerate Results gathering, a few themes/durable concepts kept coming up (and have come up in this community repeatedly over the years). One is the Theory of Constraints, which seems popular among all systems thinkers, even in big software design (at least in concept if not in execution).

I firmly believe we have a duty to consider outside perspectives on our industry, even when they appear to have no direct applicability – myopia, tools bias and fad-driven design/execution are traps I make a deliberate effort to resist in my own practices.

Standing on the Shoulders of Giants

http://www.business-improvement.eu/toc/Goldratt_Standing_On_The_Shoulders_Of_Giants.php

Eliyahu Goldratt is a huge influence on the thought leaders at the Accelerate Results conference, and many made reference to his seminal essay that seems to have kicked off this whole revolution. Worth a skim, even if it’s only to be able to nod thoughtfully when others keep talking about this.

Everyday Internet Users Can Stand Up for Encryption — Here’s How

https://blog.mozilla.org/blog/2016/03/30/everyday-internet-users-can-stand-up-for-encryption-heres-how/

I worked with Mark Surman a long time ago back in Toronto for a non-profit Internet Service Provider. It’s more than a little amazing to me to see how our paths have diverged and yet how he’s speaking about issues today that are very near and dear to my heart.

The “-ity” Echo Chamber

What Kicked Off This Rant

I watch a blog at work that lectures about all the ways everyone else is wrong about the blogger’s pet subjects – design, UX, research, many of the secondary aspects of quality in a piece of software (much as security and privacy are secondary quality characteristics of technology projects). Overlong weekly screeds with tons of footnoted research to “prove” the points.

Footnotes.

Like a dozen per post.

No, seriously.

Then the fawning praise comes in from the people in the same field who all already agree with the points being made, and feel like their voice is being amplified and broadcast.

Only it ain’t. When your readership is the Echo Choir, I’m sure the adulation and affirmation that you’re “right” feels great, but does any of that advocacy translate into changing the minds of the folks who actually hold the power to implement (or ignore) your demands?


How I do UX, partial thoughts: the no bullshit edition

Don’t expect a masters treatise, much in the way of theory, or anything resembling proof that UX Is Right.

I’m not interested in changing minds right here, or finding out if you’re a design bigot.  (I already know.)

I’m also not going to pretend I’m something I’m not.  I’m not going to use a lot of flowery language, cryptic metaphor or industry jargon.  It is what it is.  A rose is a rose.

The important thing for me right here and now is to spell out what I do when I’m applying user experience principles to the stuff I create.  If you look closely, you’ll notice the topics are ordered according to where I spent most of my energy and attention. 

Interaction Design: identify what tasks a user needs to accomplish, understand why they need to accomplish it one way and not the others, and figure out how to provide an obvious/efficient/effective path through the software to successfully complete the task. 

Usability Engineering: identify the trouble spots, understand why that causes problems for people, and figure out how to make it better.

User Research: listen, ask questions, observe, ask more questions, offer unfinished ideas for early feedback, and thank them for their time and input. 

Information Architecture: spell words properly, choose words that users are familiar with, don’t use more words than you need to. 

Visual Design: choose colours that aren’t too garish, use colours and fonts consistently throughout the application(s), make sure things are aligned, don’t make users hunt for the affordances and cues.

Nielsen’s Ten Usability Heuristics (including my favourite)

These simple-sounding but powerful principles keep resurfacing in my work, and a quick reminder never hurts.

http://www.nngroup.com/articles/ten-usability-heuristics/

My current favourite is "aesthetic and minimalist design":

Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.

I keep running into the proponents of "what’s the harm in a little more info?", and I find this principle of "relative visibility" compelling. I’ll see how well this works as an argument for not overloading the user with "just in case" information.