Tuesday Toolkit 12/01/2020

Good Morning. Let’s get this puppy off the ground.

Welcome to new subscribers. The Tuesday Toolkit is a newsletter covering tools for improving work & life and sharing a bit of fun. Thanks for being here. 

Tool of the Week
This week we are discussing pilot programs. A pilot program is the smallest iteration of something that can be run to gather data from a real-world scenario. Modeling experiments on paper or in simulation programs is nice, but as the wise George Box once said, “All models are wrong, but some are useful.” Enter pilot programs. 

A pilot program is useful when a large change is being considered, but the outcomes are not obvious. A few examples might be launching a new product, running a new marketing campaign, changing the production process of a manufacturing plant, or even creating a new TV show. 

Great, but what is a pilot program? It’s an experiment, done in the real world, with the minimum resources required to gain relevant, useful data. Let’s stick with the new-product example. Designing a product, figuring out how to make a bunch of them, figuring out how to make people aware of it so it can be sold, and handling all the accompanying details is a risky endeavor. The product might flop, you could lose all your money, and your business might even collapse if too many resources are consumed launching something unproven. 

So prove it. 

In our example, the design stage might be reduced to creating a single item instead of a whole production run. The launch phase might be talking to existing customers about the new product and inviting them in for a demonstration (or doing a virtual demonstration, because Covid). Gathering their feedback is an art in its own right, but that feedback is key to iterating on the design. Other options might be asking strangers in a coffee shop to review your product, or finding the product’s target audience in their natural environment and asking them to test it out. Understanding how users interact with the product, and how they feel about that interaction, is key. 

The pilot program is an experiment meant to gather data. Some things to consider while running a pilot: 
– Know what questions you want answered, but be prepared for surprises
– Know how you plan to test
– Don’t look for confirmation that you were right; look for where you are wrong
– Spend as little money as possible
– Iterate as fast as possible
– Minimize the number of unknowns
– Maximize the number of iterations that can be done with as little money as possible

Using the information gathered during the pilot, you can make tweaks or changes to emphasize whatever the audience cared about most. Maybe the clearest example is painting a room a new color. Most people will paint a small section of the wall to see how the new color looks. That’s a pilot program. Seeing the color in real life provides the data you need, e.g., how it looks and feels, and tells you what changes to make. 

Next time your company has a big change planned or a new product launch coming, make sure a pilot has been run to test it in the real world before unleashing all the resources. 

PS: Did you know the first episode of a TV show is called a pilot, because producers want it to take off? 

Where do you use pilot programs? Let me know on Twitter: @Quinn_Hanson22
Now, the fun stuff
Holiday Gift Guide
As we get ready for the holidays, gifts are on the mind. If you’re struggling to find the right things, take a peek at these items, hand-picked by the team at Morning Brew. 

Tweet of the Week
Sage advice.

Article of the Week 
How Complex Systems Fail by Richard Cook. This quick read breaks down 18 reasons that complex systems fail. Some of my favorite insights are below: 

3. Catastrophe requires multiple failures – single point failures are not enough.
The array of defenses works. System operations are generally successful. Overt catastrophic failure occurs when small, apparently innocuous failures join to create opportunity for a systemic accident. Each of these small failures is necessary to cause catastrophe but only the combination is sufficient to permit failure. Put another way, there are many more failure opportunities than overt system accidents. Most initial failure trajectories are blocked by designed system safety components. Trajectories that reach the operational level are mostly blocked, usually by practitioners.

5. Complex systems run in degraded mode.
A corollary to the preceding point is that complex systems run as broken systems. The system continues to function because it contains so many redundancies and because people can make it function, despite the presence of many flaws. After accident reviews nearly always note that the system has a history of prior ‘proto-accidents’ that nearly generated catastrophe. Arguments that these degraded conditions should have been recognized before the overt accident are usually predicated on naïve notions of system performance. System operations are dynamic, with components (organizational, human, technical) failing and being replaced continuously. 

8. Hindsight biases post-accident assessments of human performance.
Knowledge of the outcome makes it seem that events leading to the outcome should have appeared more salient to practitioners at the time than was actually the case. This means that ex post facto accident analysis of human performance is inaccurate. The outcome knowledge poisons the ability of after-accident observers to recreate the view of practitioners before the accident of those same factors. It seems that practitioners “should have known” that the factors would “inevitably” lead to an accident. Hindsight bias remains the primary obstacle to accident investigation, especially when expert human performance is involved.

13. Human expertise in complex systems is constantly changing.
Complex systems require substantial human expertise in their operation and management. This expertise changes in character as technology changes but it also changes because of the need to replace experts who leave. In every case, training and refinement of skill and expertise is one part of the function of the system itself. At any moment, therefore, a given complex system will contain practitioners and trainees with varying degrees of expertise. Critical issues related to expertise arise from (1) the need to use scarce expertise as a resource for the most difficult or demanding production needs and (2) the need to develop expertise for future use.



Thanks for tuning in this week! If you found value in this, please share it with your friends, colleagues, associates, acquaintances, family members, bowling leagues, partners, Tinder dates, and strangers. The larger we grow this audience, the more greatness can be shared. 
