Screw the PRD – find your own template!

I struggle to sit down and write out a one-and-done PRD – pre-defined headings, expectations of 10-15 pages (or more) of material covering all the subjects, consequences, requirements and stakeholders’ needs. 

My last initiative-guiding document wasn’t even a PRFAQ – I didn’t write the press release, but I did spell out a set of Mike’s Beliefs (after another PM prodded me to write down what I’d been ranting about), then an evolving set of outcome-focused requirements (assembled over 5-7 sittings), then a summarising Vision (a North Star guide), “what does done look like”, “what does success look like once we measure what we’ve launched”, and an FAQ simply to catch all the questions I didn’t immediately answer.

But that document didn’t even come at the inception of the project. I’m coordinating the data schema, API inventory and ecosystem needs of a much larger project – and at first I wanted to see where the gaps were, what conversations emerged, and where folks already had figured out what we need. 

My announcement of this doc came ~2 months after we’d already started – more of a codification of our direction, sharpening the focus and a bright-line reminder of what everyone already suspected we’d need to do. 

Here’s my current template:

Business Need

  • What problems we’re facing as a business, and why we need to solve them.

Vision

  • This is the nearest equivalent to the Press Release. It’s what I intend to say to the intended market. 

Beliefs about what we need to achieve

  • These are the hypotheses, assumptions and requirements, all wrapped up together

What does done look like?

  • Features and implementation shapes. How to measure “have we done enough to ship, and to start learning from the market at scale?”

What does success look like after we’re done?

  • How to see that our results have met the market need as defined up front.

FAQs

  • The misc slop that doesn’t fit anywhere else

The Challenges of Customer Feedback Curation: A Guide for Product Managers

You’re one of a team of PMs, constantly firehosed by customer feedback (the terribly-named “feature request”**) and you even have a system to stuff that feedback into so you don’t lose it, can cross-reference it to similar patterns and are ready to start pulling out a PRD from the gems of problems that strike the Desirability, Feasibility and Viability triad.

And then you get pulled into a bunch of customer escalations (whose notes you intend to transform into the River of Feedback system), you haven’t checked in on the backlog of feedback for a few weeks (“I’m gonna have to wait til I’ve got a free afternoon to really dig in again”), and you forget whether you’ve updated that delayed PRD with the latest competitive insights from that customer-volunteered win/loss feedback.

Suddenly you realise your curation efforts – constantly transforming free-form inputs into well-synthesised insights – are falling behind what your peers *must* be doing better than you. 

You suck at this. 

Don’t be like Lucy

Don’t feel bad. We all suck at this. 

Why? Curation is rewarding and ABSOLUTELY necessary, but that doesn’t mean it isn’t hard:

  • It never ends (until your products are well past time to retire)
  • It’s yet one more proactive, put-off-able interruption in a sea of reactive demands
  • It’s filled with way more noise than signal (“Executive reporting is a must-have for us”)
  • You can bucket hundreds of ideas in dozens of classification systems (you ever tried card-sorting navigation menus with independent groups of end users, only to realise that they *all* have an almost-right answer that never quite lines up with the others?), and it’s oh-so-tempting to throw every vaguely-related idea into the upcoming feature bucket (cause maybe those customers will be satisfied enough to stop bugging you even though you didn’t address their core operational problem)

What can you do?

  1. Take the River of Feedback approach – dip your toes in as often as your curiosity allows
  2. Don’t treat this feedback as the final word, but as breadcrumbs to discovering real, underlying (often radically different) problems
  3. Schedule regular blocks of time to reach out to one of the most recent input’s customers (do it soon after, so they still have a shot at remembering the original context that spurred the Feature Request, and won’t just parrot the words because they forgot why it mattered in the first place)
  4. Spend enough time curating the feedback items so that *you* can remember how to find it again (memorable keywords as labels, bucket as high in the hierarchy as possible), and stop worrying about whether anyone else will completely follow your classification logic.
  5. Treat this like the messy black box it inevitably is, and don’t try to wire it into every other system. “Fully integrated” is a cute idea – integration APIs, customer-facing progress labels, pretty pictures – but it creates so much “initialisation” friction that every time you want to satisfy your curiosity about what’s new, it means an hour or three of labour to perfectly “metadata-ise” every crumb of feedback.

Be like Skeletor

NECESSARY EMPHASIS: every piece of customer input is absolutely a gift – they took time they didn’t need to spend, letting the vendor know the vendor’s stuff isn’t perfect for their needs. AND every piece of feedback is like a game of telephone – warped and mangled in layers of translation that you need to go back to the source to validate.

Never rely on Written Feature Requests as the main input to your sprints. Set expectations accordingly. And don’t forget the “97% of all tickets must be rejected” rule coined by Rich Mironov.

**Aside: what the hell do you mean that “Feature Request” is misnamed, Mike?

Premise: customers want us to solve their problems, make them productive, understood and happy. 

Problem: we have little to no context for where the problem exists, what the user is going to do with the outcome of your product, and why they’re not seeking a solution elsewhere. 

Many customers (a) think they’re smarty pants, (b) hate the dumb uncooperative vendor and (c) are too impatient to walk through the backstory. 

So they (a) work through their mental model of our platform to figure out how to “fix” it, (b) don’t trust that we’ll agree with the problem and (c) have way more time to prep than we have to get on the Zoom with them. 

And they come up with a solution and spend the entire time pitching us on why theirs is the best solution, one that every other customer critically needs. Which we encourage by talking about these as Feature Requests (not “Problem Ethnographic Studies”) – and since they’ve put in their order at the Customer Success counter, they then expect that this is *going* to come out of the kitchen any time now (and is frankly overdue by the time they check back). Which completely contradicts Mironov’s “95% still go into the later/never pile”.

Reframing “solutions” to “problems & outcomes”: IDS alerting

Customer declares “We want IDS exclusions by IP!” Then, after not seeing it immediately delivered, they (and often we) start wondering:

  • Why are we arguing about what to build?
  • And why isn’t this already done?

As anyone who’s worked in B2B Product Management can tell you, there’s no shortage of “easy solutions” that show up in our inboxes/DMs/Jira filters/Feature-Request-tool-du-jour. They’re usually framed more or less like this:

“I know you know we have a big renewal coming up and the customer has a list of feature requests they haven’t seen delivered yet [first warning bell]. They have this problem they need solved before they’ll sign the deal [second warning bell] and they’ve told us what the feature will look like [third and final warning]. When can I tell them you’ll deliver it?”

Well-meaning GTM partners or even customers go above and beyond what we PMs need, imagining they understand how our platform works, and coming up with a solution that fits their oblique mental model and (they imagine) should be incredibly quick to build.

First Warning Sign: customer thinks their B2B vendor is a deli counter that welcomes off-the-menu requests. 

Problem One: feature requests are not fast food orders. They’re market evidence that a potential problem exists (but are almost never described in Problem-to-be-solved terms). 

Problem Two: “feature request” is a misnomer that we all perpetuate at our peril. We rarely take that ticket into the kitchen and put it in front of the cooks to deliver FIFO, but instead use it as a breadcrumb to accumulate enough evidence to build a business case to create a DIFFERENT solution that meets most of the deciphered needs that come from customers in segments we wish to target.

So a number of our customers (through their SE or CSM) have requested that our endpoint-based IDS not fire off a million “false positive alerts”, and the solution they’re prescribing is a feature that allows them to exclude their scanner by IP address.

My Spidey sense goes off when I’m told the solution by a customer (or go-to-market rep) without accompanying context explaining the Problem Statement, workarounds attempted, customer risks if nothing changes, and clear willingness to negotiate the output while focusing on a stable outcome.

  • Problem Statement: does the customer know why they need a solution like this?
  • Workarounds attempted: there are plenty of situations where the customer knows a workaround and may even be using it successfully, but is just wish-listing some free customisation work (aka Professional Services) in hopes of proving that the vendor considers them “special”. When we discover a workaround that addresses the core outcome the customer needs (but isn’t as elegant as a more custom solution), suddenly the urgency of prioritising their feature request drops precipitously. No PM worth their six-figure TComp is going to prioritise a feature with known, succeeding workarounds over an equivalent need that can’t be addressed any other way.
  • What if nothing changes: if the customer has one foot out the door unless we can catch up to (or get ahead of) the competitor who’s already demoing and quoting their solution in the customer’s lab, that’s a very different urgency than a nice-to-have they’d happily wait a few releases for.

Outcome over Output

Why don’t we instead focus on “allow Nessus to run, and not show me active alerts” or “allow my Vuln scanner…”

Or

“Do not track Nessus probes” (do customers want no telemetry, or just reduce the early-attack-stage alerts?)

Or

“Do not generate alerts from vuln scanners running at these times or from this network”

Here’s what I’d bring to the Engineers

Kicking off negotiation with the engineers doesn’t mean bringing finalized requirements – it just means starting from a place of “What” and “Why”, staying well clear of the “How”, with enough context for the engineers to help us balance Value, Cost and Time-to-market.

Problem: when my scanner runs, our SOC gets buried with false positive alerts. I don’t find the alerts generated by our network scanner’s activity to be actionable.

Outcome: when my scanner runs against protected devices, the user does not see any (false positive) alerts that merely track the scanner probing those devices.

Caveat: it’s entirely possible that the entire market of IDS vendors has converged on a solution that lets customers plug in their “scanner IP” ahead of time. And the easy answer is to just blindly deliver what (you think) the customers have asked for. But my experience tells me that if it’s easy for us, it was easy for the other vendors, and that it’s hardly the most suitable answer for all customers’ scenarios. The right answer is a little discovery work with a suitable cross-section of customers to Five Whys their root operational problem – why by IP? Why are you scanning – what’s the final decision or action you’ll perform once you have the scan results? How often does the IP change? Do you use other tools like this that create spikes of FP behaviour? Are there compliance concerns with allowing anyone in your org to configure “excluded IPs”? Do you want to further constrain by port, TCP flag, host header etc., so that you can still catch malicious actors masquerading their attacks from the same device or spoofing that allow-listed IP?

Feature Request is a curse word

“Feature Request” is one of my favourite rant inspirers of late.

Not that there aren’t plenty of good features/ideas/problems-to-be-solved that are suggested by customers, partners and colleagues.

But that it’s so hard to find the real gems in a pile of hay, and too much of what gets filed are “solutions with no clear problem statement”.

Why do I get so invested in this process? The customer has a problem, they’ve figured out a great way that would totally solve their problem, and now it’s just their challenge of coercing the vendor until the vendor finally gets around to delivering it. Usually later than they wanted, not quite what they asked for, and couched in go-to-market-friendly language that makes for a fun guessing game of “does this solve the problem I needed addressed?”

This dysfunctional interaction wastes a whole lot of execution and onboarding time. Why don’t we do it better up front?

Here’s my experience, after doing product work in four separate tech companies, where I’ve been focusing on building user- and developer-productivity features of existing platforms that are intended to retain satisfied customers for years:

  • “feature requests” are often one-liner, “obvious” statements of something the customer is frankly frustrated we hadn’t already done – e.g. “add these three fields to the search API” – with no context on why this isn’t simply a nice-to-have that someone heard and repeated up the channel, leaving us to either over-pivot on low-priority items or assume that all such uncontextualised requests are likely low-pri noise.
  • They’re a request for a big-F Feature – which is a solution to an implicit problem (e.g. “We need executive reporting”) – not a Job, or a gap, or an unmet Decision, any spin of which are much more oriented around problems to be solved (e.g. “I need to produce a monthly report for our CISO of unique malware found, to justify the monthly subscription cost to the Finance department”)
  • They often assume a particular implementation and don’t tell us why other alternatives (that could well achieve the same outcome) are inferior – e.g. a request comes in of the form “I need to export our threat intel feeds from this page”, when we have a very simple SDK that already has example scripts for doing just that. That there are gaps between one implementation and another is natural; the important bit is how much unworkable friction the alternatives introduce.
  • There’s no believable way, through the telephone game of distilled need, to gauge how critical this need is – and since everyone knows vendors are slow to respond, customers assume we’ll only respond to emergencies and so characterise the request as a “P0” even if there’s no rush and no critical impact from its absence

Calling this artifact of indirect communication (from customer to “decision makers who can and might decide to insert this into planned execution”) a Feature Request, instead of a Problem Statement or a Proposal For Discussion, assumes this is the end of communication, that there’s no need for further background or context, and that the fastest way to get the vendor to fix the problem is to boil it down to “what the solution looks like”.

It absolutely assumes and encourages “tell the vendor what to implement” rather than “tell them what problem you’re having, what decision or action you’re unable to achieve without a solution to this problem, and how this will significantly impact your business operations”.

Why is it even called a feature request? When did we start asking our customers what to build, rather than asking our customers what problems they have, and helping them find solutions? It’s especially important in our line of work, as curators between market and engineering, to find common problems across customers and market segments, and help engineers address those pain points to make customers delighted, productive and loyal.

These days I make an effort to engage with any feature request that (a) isn’t already aligned to solidly-planned enhancements and (b) doesn’t clearly spell out why this matters to the customer. Our “feature request” systems aren’t great for even indirect communication with the originating customer, so many of these conversations get delayed for months, if they happen at all. I’ll increasingly root around Salesforce and Slack to reach out to the associated Success Manager or Solution Engineer, but that still lacks the fidelity of a direct conversation with the person-with-the-problem. It’s a journey.

So if you see me try to stifle rolling my eyes the next time you ask me for the likelihood I’ll deliver this customer’s quick Feature Request, please assume it’s nothing personal – and that I’m very amenable to conversations that increase the likelihood we’ll address the problem.

Mike, what books do you suggest for getting up to speed on the Lean approach?

Recent question to me from a newbie to my Lean Coffee meetup:

Hey, I was wondering if you could suggest any books for getting up to speed on the LEAN approach? I’m reading ‘The Lean Startup’ by Eric Ries, and like it a lot. Anything that you’ve really enjoyed? Thanks 🙂

Thought I’d share the thoughts I passed to David:

Hi David, I’m not much of a prose guy – unconferences, meet-ups, lean coffee and experiential training work better for me. I’ve skimmed Lean Startup and that was interesting in parts; I’ve also read the Phoenix Project (devops focus and pretty terrible plotting and characters but a weirdly compelling outlook on incremental, value-based organisations – and it’s apparently a rewrite of the plot of The Goal which I’ve heard is also good in this vein).

Neal Peterson (who’s a regular attendee and runs a parallel Lean Coffee of the North) is our resident Lean Guru. I’d reach out to him too.

If you haven’t already, I’d strongly recommend getting involved with Agile PDX – many meet-up opportunities each month, and an invite-only (only because we haven’t automated self-signup) Slack group you’d be welcome to join.

AWS wrangling, round 3: simplest possible, manually-configured static website

Our DevOps instructor Dan asked us to host a static HelloWorld page in S3.  After last week’s over-scoped tutorial, I started digging around in the properties of an S3 bucket, and discovered it was staring me in the face (if only I’d stared back).

Somehow I’d foolishly gotten into the groove of AWS tutorials and assumed that if I found official content, it must be the most well-constructed approach based on the minimal needs of most of their audience, so I didn’t question the complexity and poorly-constructed lessons until long after I was committed to seeing it through.  [Thankfully I was able to figure out at least one successful path through those vagaries, or else I’d probably still be stubborn-through-seething and trying to debug the black box that is IAM policy attachments.]

Starting Small: S3 bucket and nothing else

Create Bucket

Modify the bucket-level Permissions

  • Select the new bucket and click Properties
  • Expand the Permissions section, click Add more permissions
  • In the Grantee selector, choose Everyone
  • check the List checkbox
  • click Save
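
If you’d rather script these first two steps than click through the console, the AWS CLI has rough equivalents. This is just a sketch – my-helloworld-bucket and us-east-1 are placeholders, and the public-read canned ACL is my stand-in for the “Everyone / List” checkbox:

# create the bucket (us-east-1 shown; other regions also need --create-bucket-configuration LocationConstraint=<region>)
aws s3api create-bucket --bucket my-helloworld-bucket --region us-east-1

# let everyone list the bucket, while the owner keeps full control
aws s3api put-bucket-acl --bucket my-helloworld-bucket --acl public-read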

Enable static website hosting

  • In the Bucket Properties, expand the Static Website Hosting section
  • Select Enable website hosting
  • In the Index Document textbox, enter your favoured homepage name (e.g. index.html)
  • click Save
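
The CLI version of this step (same placeholder bucket name as above):

# turn on static website hosting and nominate the index document
aws s3 website s3://my-helloworld-bucket/ --index-document index.html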

Upload your web content

  • From the Actions button (at the top-left of the page), select Upload
  • Click Add files, select a simple HTML file, and click Start Upload
    • If you don’t have a suitable HTML file, copy the following to a text editor on your computer and save it (e.g. as helloworld.html)
      <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
      <html>
      <head>
       <title>Hello, World!</title>
       <style>
        body {
         color: #ffffff;
         background-color: #0188cc;
         font-family: Arial, sans-serif;
         font-size: 14px;
        }
       </style>
      </head>
      <body>
       <h1>Hello, World!</h1>
       <p>You have successfully uploaded a static web page to AWS S3</p>
      </body>
      </html>

Modify the content-level Permissions

  • Select the newly-uploaded file, then click Properties
  • expand the Permissions section and click Add more permissions
  • In the Grantee selector, choose Everyone
  • check the Open/Download checkbox and click Save
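
Scripted, the upload and the public-read grant collapse into one AWS CLI command (again a sketch; helloworld.html and the bucket name are the placeholders from above):

# upload the file and make it world-readable in one step
aws s3 cp helloworld.html s3://my-helloworld-bucket/ --acl public-read

If you went the CLI route, the page will be served from the bucket’s website endpoint, which looks something like http://my-helloworld-bucket.s3-website-us-east-1.amazonaws.com/helloworld.html.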

Now to confirm that your web page is available to web users, find the Link in the Properties for the file and click it – here’s my test file:

[Screenshot: the file’s Properties pane, showing its Link]

If you’ve done everything correctly, you should see something like this:

[Screenshot: the Hello World page rendered in the browser]

If one or more of the Permissions aren’t correct, you’ll see this (or at least, that’s what I’m getting in Chrome):

[Screenshot: the error page returned when the Permissions aren’t set correctly]

How Do I Know What Success Looks Like?

I was asked recently what I do to ensure my team knows what success looks like.  I generally start with a clear definition of done, then factor usage and satisfaction into my evaluation of success-via-customers.

Evaluation Schema

Having a clear idea of what “done” looks like means having crisp answers to questions like:

  • Who am I building for?
    • Building for “everyone” usually means it doesn’t work well for anyone
  • What problem is it fixing for them?
    • I normally evaluate problems-to-solve based on the new actions or decisions the user can take *with* the solution that they can’t take *without* it
  • Does this deliver more business value than other work we’re considering?
    • Delivering value we can believe in is great, and obviously we ought to have a sense that this has higher value than the competing items on our backlog

What About The Rest?

My backlog of “ideas” is a place where I often leave things to bake.  Until I have a clear picture in my mind who will benefit from this (and just as importantly, who will not), and until I can articulate how this makes the user’s life measurably better, I won’t pull an idea into the near-term roadmap let alone start breaking it down for iteration prioritization.

In my experience there are lots of great ideas people have that they’ll bring to whoever they believe is the authority for “getting shit into the product”.  Engineers, sales, customers – all have ideas they think should get done.  One time my Principal Engineer spent an hour talking me through a hyper-normalized data model enhancement for my product.  Another time, I heard loudly from many customers that they wanted us to support their use of MongoDB with a specific development platform.

I thanked them for their feedback, and I earnestly spent time thinking about the implications – how do I know there’s a clear value prop for this work?

  • Is there one specific user role/usage model that this obviously supports?
  • Would it make users’ lives demonstrably better in accomplishing their business goals & workflows with the product as they currently use it?
  • Would the engineering effort support/complement other changes that we were planning to make?
  • Was this a dealbreaker for any user/customer, and not merely an annoyance or a “that’s something we *should* do”?
  • Is this something that addresses a gap/need right now – not just “good engineering that should become useful in the future”?  (There’s lots of cool things that would be fun to work on – one time I sat through a day-long engineering wish list session – but we’re lucky if we can carve out a minor portion of the team’s capacity away from the things that will help right now.)

If I don’t get at least a flash of sweat and “heat” that this is worth pursuing (I didn’t with the examples mentioned), then these things go on the backlog and they wait.  Usually the important items will come back up, again and again.  (Sometimes the unimportant things too.)  When they resurface, I test them against product strategy, currently-prioritized (and sized) roadmap and our prioritization scoring model, and I look for evidence that shows me this new idea beats something we’re already planning on doing.

If I have a strong impression that I can say “yes” to some or all of these, then it also usually comes along with a number of assumptions I’m willing to test, and effort I’m willing to put in to articulate the results this needs to deliver [usually in a phased approach].

Delivery

At that point we switch into execution and refinement mode – while we’ve already had some roughing-out discussions with engineering and design, this is where backlog grooming hammers out the questions and unknowns that bring us to a state where (a) the delivery team is confident what they’re meant to create and (b) estimates fall within a narrow range of guesses [i.e. we’re not hearing “could take a day, could take a week” – that’s a code smell].

Along the way I’m always emphasizing what result the user wants to see – because shit happens, surprises arise, priorities shift, and through it all the delivery team needs a solid defender of the result we’re going to deliver for the customer.  That doesn’t mean don’t flex on the details, or don’t change priorities as market conditions change, but it does mean providing a consistent voice that shines through the clutter and confusion of all the details, questions and opinions that inevitably arise as the feature/enhancement/story gets closer to delivery.

It also means making sure that your “voice of the customer” is actually informed by the customer. As we develop the definition of Done, mockups, prototypes and alpha/beta versions, I make a point of taking the opportunity where it exists to pull in a customer or three for a usability test, or a customer proxy (TSE, consultant, success advocate), to give me their feedback, reaction and thinking in response to whatever deliverables we have available.

The most important part of putting in this effort to listen, though, is learning and adapting to the feedback.  It doesn’t mean rip-sawing in response to any contrary input, but it does mean absorbing it and making sure you’re not being pig-headed about the up-front ideas you generated that are more than likely wrong in small or big ways.  One of my colleagues has articulated this as Presumptive Design, whereby your up-front presumptions are going to be wrong, and the best thing you can do is to put those ideas in front of customers, users, proxies as fast and frequently as possible to find out how wrong you are.

Evaluating Success

Up front and along the way, I develop a sense of what success will look like when it’s out there, and that usually takes the form of quantity and quality – usage of the feature, and satisfaction with the feature.  Getting instrumentation of the feature in place is a brilliant but low-fidelity way of understanding whether it was deemed useful – if numbers and ratios are high in the first week and then steadily drop off the longer folks use it, that’s a signal to investigate more deeply.  The user satisfaction side – post-hoc surveys, customer calls to get a sense of NPS-like confidence and “recommendability” – is a higher-fidelity means of validating how it’s actually impacting real humans.

This time, success: Flask-on-AWS tutorial (with advanced use of virtualenv)

Last time I tried this, I ended up semi-deliberately choosing to use Python 3 for a tutorial that (I didn’t realize quickly enough) was built around Python 2.

After cleaning up my experiment I remembered that the default python on my MacBook was still python 2.7.10, which gave me the idea I might be able to re-run that tutorial with all-Python 2 dependencies.  Or so it seemed.

Strangely, the first step both went better and no better than last time:

Mac4Mike:flask-aws-tutorial mike$ virtualenv flask-aws
Using base prefix '/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python3.5
Also creating executable in /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python
Installing setuptools, pip, wheel...done.

Yes, it didn’t throw any errors, but no, it didn’t use the base Python 2 that I’d hoped for.  Somehow the Python 3 I’ve installed on my system was still getting picked up by virtualenv, so I needed to dig further into how virtualenv can be used to truly insulate from Python 3.

Found a decent article here that gave me hope, and even though they punted to using the virtualenvwrapper scripts, it still clued me in to the virtualenv parameter “-p”, so this seemed to work like a charm:

Mac4Mike:flask-aws-tutorial mike$ virtualenv flask-aws -p /usr/bin/python
Running virtualenv with interpreter /usr/bin/python
New python executable in /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python
Installing setuptools, pip, wheel...done.
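
To sanity-check which interpreter the new environment actually picked up (not part of the tutorial, just a quick check I’d suggest):

source flask-aws/bin/activate
python --version    # should now report 2.7.x rather than 3.5.x
which python        # should point into flask-aws/bin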

This time?  The requirements install worked like a charm:

Successfully installed Flask-0.10.1 Flask-SQLAlchemy-2.0 Flask-WTF-0.10.3 Jinja2-2.7.3 MarkupSafe-0.23 PyMySQL-0.6.3 SQLAlchemy-0.9.8 WTForms-2.0.1 Werkzeug-0.9.6 argparse-1.2.1 boto-2.28.0 itsdangerous-0.24 newrelic-2.74.0.54

Then (since I still had all the config in place), I ran pip install awsebcli and skipped all the way to the bottom of the tutorial and tried eb deploy:

INFO: Deploying new version to instance(s).                         
ERROR: Your requirements.txt is invalid. Snapshot your logs for details.
ERROR: [Instance: i-01b45c4d01c070555] Command failed on instance. Return code: 1 Output: (TRUNCATED)...)
  File "/usr/lib64/python2.7/subprocess.py", line 541, in check_call
    raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1. 
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03deploy.py failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Unsuccessful command execution on instance id(s) 'i-01b45c4d01c070555'. Aborting the operation.
ERROR: Failed to deploy application.

This kept barfing over and over until I remembered that the target environment was still configured for Python 3.4.  Fortunately or not, you can’t change major versions of the platform – so back to eb init I go (with the -i parameter to re-initialize).
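
For the record, that re-initialisation looked roughly like this (a sketch – the exact platform names the EB CLI offers vary by version):

eb init -i    # interactive re-init: pick the same application, then choose the Python 2.7 platform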

This time around?  The command eb deploy worked like a charm.

Lesson: be *very* explicit about your Python versions when messing with someone else’s code.  [Duh.]

Manifesto

Get to know Mike. The Tech Ambassador, the Empathizer, the hairy Dog Fur-bearer, the comics-inspired Dude and the Hatter.

Something I’m tired of doing to myself, every time I want to write my thoughts out to the world around me, is deciding halfway through a rant or a confessional that the people I’m aiming at probably wouldn’t give the full rat’s ass to make it through the ninth paragraph.

So starting in 2015 I’m mustering the nerve to just write what I need to get out of my multi-layered (fractured?) brain. Is there anyone out there reading what I write (other than the Google index spider and the parasitic content-scrapers [hi there bastards!])? Fucked if I know. And as far as this pressurized-anxiety release valve is concerned, don’t really matter. Nope, it don’t.

Got something to say to me? Take your best shot (and not your laziest one). I’ll give as good (but not as bad) as I get.