Georgi Georgiev did a major overhaul of AnalyticsToolkit.com. The site offers solutions to many of the hard questions practitioners face when planning and analyzing A/B tests.
New CXL course by Ben Labay on experimentation program management. If your experimentation program could use a big push, consider this course: you'll get a clear understanding of the pillars of experimentation programs and how to scale them to your needs.
Che Sharma (founder of Eppo) talks about how seamless end-to-end experimentation workflows supercharge product development.
The role of data product manager is becoming more popular. Eric Weber explains why a specific background in data is necessary for this role.
Connor Joyce created a framework describing the pillars of the applied behavioral scientist role. It has three main components:
Behavioral Science domain knowledge
Mixed-method research experience
Communication and translation of research insights
(see also image below)
The concepts of experimentation can also be applied to yourself. Craig Sullivan explains how he used them (or should have) to figure out why he was having migraine attacks.
Amazon released CloudWatch Evidently, a tool for experiments and feature management. An interesting move for anyone with their stack on AWS.
One way that culture affects innovation is by encouraging or discouraging innovators – the people willing to experiment and take risks. People have a natural desire to invent, but innovators have more freedom to develop and disseminate inventions when they’re in a culture that embraces innovation.
Are you looking for a new opportunity? These are this week's featured jobs:
Or check the other 22 open roles from companies like VodafoneZiggo, IKEA, Netflix, Apple, Adidas, Gitlab, Stripe, Just Eat, Skyscanner, Lego and Spotify.
Are you hiring? Post your open roles to the job board. Readers of this newsletter can post for free by using this coupon code at the last step: SUBSCRIBER200
This is a running list of upcoming events:
Listen to the Experimentation Masters Podcast to be a more confident and successful leader. Learn practical tips, methods and techniques from world-leading experts in experimentation. Experimentation is the fastest way to figure out what works, and what doesn’t. Let me help you to beat the odds.
"Make a list of the assumptions in a business sector, and then find the ones which aren't true or which won't be true in two years' time. You've now got a business." — Rory Sutherland
Policy-makers are increasingly turning to behavioural science for insights about how to improve citizens' decisions and outcomes. Typically, different scientists test different intervention ideas in different samples using different outcomes over different time intervals. The lack of comparability of such individual investigations limits their potential to inform policy. Here, to address this limitation and accelerate the pace of discovery, we introduce the megastudy—a massive field experiment in which the effects of many different interventions are compared in the same population on the same objectively measured outcome for the same duration. In a megastudy targeting physical exercise among 61,293 members of an American fitness chain, 30 scientists from 15 different US universities worked in small independent teams to design a total of 54 different four-week digital programmes (or interventions) encouraging exercise. We show that 45% of these interventions significantly increased weekly gym visits by 9% to 27%; the top-performing intervention offered microrewards for returning to the gym after a missed workout. Only 8% of interventions induced behaviour change that was significant and measurable after the four-week intervention. Conditioning on the 45% of interventions that increased exercise during the intervention, we detected carry-over effects that were proportionally similar to those measured in previous research. Forecasts by impartial judges failed to predict which interventions would be most effective, underscoring the value of testing many ideas at once and, therefore, the potential for megastudies to improve the evidentiary value of behavioural science.
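The core idea of a megastudy—many intervention arms compared against one shared control on the same outcome, with testing many hypotheses at once—can be sketched in a few lines. The sketch below uses simulated data (all arm names, sample sizes, and effect sizes are hypothetical, not from the paper) and a simple two-sample z-test with a Bonferroni correction for multiple comparisons:

```python
# Hypothetical megastudy-style analysis: several intervention arms compared
# against one shared control on the same outcome (weekly gym visits).
# All data is simulated; arm names and effects are made up for illustration.
import math
import random
import statistics

random.seed(42)

def simulate_arm(n, mean_visits):
    """Simulate weekly gym visits per participant (truncated normal)."""
    return [max(0.0, random.gauss(mean_visits, 1.0)) for _ in range(n)]

def z_stat(control, treatment):
    """Two-sample z statistic for the difference in mean visits."""
    m1, m2 = statistics.fmean(control), statistics.fmean(treatment)
    v1, v2 = statistics.variance(control), statistics.variance(treatment)
    se = math.sqrt(v1 / len(control) + v2 / len(treatment))
    return (m2 - m1) / se

control = simulate_arm(2000, 1.0)

# Five hypothetical arms: some with a real lift in mean visits, some without.
arms = {"micro_rewards": 1.15, "planning_prompt": 1.10,
        "social_norms": 1.00, "reminder_only": 1.00, "gain_framing": 1.05}

# Bonferroni: divide alpha by the number of tests; with alpha = 0.05 and
# five one-sided tests, the per-test threshold is 0.01 (critical z ~ 2.326).
z_crit = 2.326

for name, mean in arms.items():
    z = z_stat(control, simulate_arm(2000, mean))
    print(f"{name}: z = {z:.2f}, significant = {z > z_crit}")
```

With 54 arms instead of five, the multiplicity correction bites much harder, which is part of why megastudies need very large samples per arm.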
Something I learned this week ...
I'd like to hear what you think of this newsletter. Share your feedback; it takes less than a minute.
If you're enjoying the Experimental Mind newsletter, you can buy me a coffee or beer (in case you'd rather use PayPal, you can send it to 'kevin@experimentalmind.com'). Thanks to the many people who already have. Much appreciated.
And share this email with a friend or colleague. I would love this community of experimenters to grow. You can point them here to subscribe.
Have a great week — and keep experimenting.
Kevin