Highlights from Product Camp London 2014

Product Camp is an unconference for product managers. There is no schedule, no keynote speaker, no list of hot topics prepared in advance. Instead, those who want to speak claim a spot, write their title on the board, and see who comes along.

Photo by @simoncast

Talks run simultaneously, so it’s impossible to see everything. These are my highlights of Product Camp London 2014.

Measurable Outcomes Using the Mobius Model, led by Gabrielle Benefield

The Mobius Model links strategy to delivery in a clear and measurable way. It complements the existing delivery method by bringing a focus to estimating and tracking the effectiveness of a team’s work.

Mobius Canvas thumbnail

The loop on the left links business information to the objectives of the organisation. For me, this was a familiar working process: data-led decision making characterised my time as a crime prevention analyst and as a traffic safety policy officer. The loop on the right portrays an ongoing cycle of iteration and continued improvement. Again this was familiar to me: deliver, measure, adapt is not far from the build, measure, learn cycle of the Lean Startup. The crossover between the loops is the strength of the Mobius Model. This is the point where a decision is taken: research further, or iterate again.

Find out more about the Mobius Model here.

Thirteen A/B and Split Testing Errors by Craig Sullivan

Craig spoke with humour and passion, and occasionally with the fury of one who has tolerated fools once too often. The insights he offered were simple and direct, and generated a real buzz around the room. My background is in data-driven policy and market research, so I was fascinated by some of the mistakes Craig discussed. I expect technology businesses to be fluent in data, and skilled at applying it to real life. I don’t expect sample size errors, or findings reported below the 95% confidence level, from organisations operating in a data-heavy environment. There’s a real opportunity for skilled market researchers in the technology sector.
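To illustrate the kind of check Craig was talking about, here is a minimal sketch of a two-proportion z-test for an A/B test result, using only the Python standard library. The function name and the conversion numbers are hypothetical, chosen just to show how a 95% confidence threshold is applied:

```python
import math

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for a difference in conversion rates between variants."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: variant A converts 200/4000, variant B converts 250/4000
z, p = two_proportion_z_test(200, 4000, 250, 4000)
significant = p < 0.05  # only call a winner at the 95% confidence level
```

With samples this small, a difference that looks large on a dashboard can still fail the 0.05 threshold, which is exactly the sample-size trap Craig warned about.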

Manipulation and Coercion in Product Management, led by Benjamin Mitchell

Is it okay to manage a difficult team member in a sneaky way? This session was a discussion of the following situation from Mike Cohn’s blog.

“You are a ScrumMaster for a team. You’ve noticed that one team member, Jeff, is domineering and no one is willing to stand up to him. This team has self-organized—it has chosen to let Jeff make all key decisions. As the ScrumMaster for this team, though, you recognize that if Jeff continues to make all the decisions on his own it will impede the team’s efforts to improve. You consider having a private conversation with Jeff, but that is unlikely to change much. You contemplate stepping in and overruling some decisions he makes, but if you do it once the team will expect you to continue to do so, which won’t be good. Then you begin thinking about the agile principles of subtle control and influence. Perhaps you decide to change the team’s dynamics by asking management to add someone new to the agile team, someone who is likely to stand up to Jeff.”

I find influence such a fascinating area, and I’m a big fan of nudge theory, behavioural economics, habit formation, and persuasion techniques. However, the discussion that followed took a different direction entirely. The group considered the moral subtleties of handling Jeff by adding a new team member to stand up to him. Is it wrong to act in this indirect and somewhat deceptive manner? “Fundamentally, the role of the product manager is to influence others,” said someone around the table, “but ethically this approach is somewhat problematic.”

Photo via Visual Bonds and Mind the Product

Rolling Cohort Analysis by Andy Young

Andy Young’s talk was my highlight of the Product Tank event in January (video here), and his talk at Product Camp was of a similar calibre. Andy presented three different sales graphs, and asked the audience to decide whether the monthly, weekly or daily graphs showed the most positive results. After a short discussion, he revealed they were all showing the exact same data. Monthly and daily views of the same sales data give quite different impressions, so a good way to gain actionable insights is to use a 28-day rolling total. For each day, the value displayed is the sum of the sales from that day and the previous 27 days. This reduces the noise, but still shows you the trends and the relevant spikes. I can vouch for this approach, as I used a similar rolling-cohort technique to sample crime pattern data in my work for Manchester.
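The calculation Andy described is simple to sketch in code. This is a minimal illustration, not his implementation; the function name and the toy sales data (flat daily sales with a weekly spike) are my own assumptions:

```python
from collections import deque

def rolling_28_day_totals(daily_sales):
    """For each day, sum sales over that day and the previous 27 days."""
    window = deque(maxlen=28)  # automatically drops the oldest day past 28
    totals = []
    for day_sales in daily_sales:
        window.append(day_sales)
        totals.append(sum(window))
    return totals

# Hypothetical data: 10 sales per day, with a spike of 50 every 7th day
sales = [50 if day % 7 == 0 else 10 for day in range(60)]
smoothed = rolling_28_day_totals(sales)
# Once the window is full, every 28-day span contains exactly 4 spike days,
# so the rolling total settles at 4*50 + 24*10 = 440 and the noise vanishes.
```

The daily series jumps between 10 and 50, but the rolling total flattens out, which is the smoothing effect Andy demonstrated with the three graphs.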

You can learn more about Rolling Cohorts at this write-up from Seedcamp and Popcorn Metrics.

The Neuromarketing Toolkit by Craig Sullivan

In typically sweary and humorous fashion, the Malcolm Tucker of Testing took us through twenty minutes of insights into how to better understand customers and their relationships with products. The Neuromarketing Toolkit is a rich presentation, with a vast array of links to explore. I’m only part way through checking them out as I write this.

I had a great time at Product Camp. I learned a lot about data-driven decision making, split testing and how to apply rolling cohort analysis. Great speakers, brilliant event.

Product Camp London is organised by Mind the Product, whose recap of the event is here.
