Experimental Mind #283
Your weekly overview of interesting reads, events and jobs for the experimental mind.
In today’s edition: what OpenAI’s acquisition of Statsig means, a new course on building experimentation cultures, five fresh jobs, upcoming events + another personal update.
Thanks to Convert and Sitespect for their continued support.
What OpenAI’s acquisition of Statsig means
Big news in our space last week: OpenAI announced it will acquire Statsig for $1.1B. I know Statsig well, as a previous customer and through many conversations with their team, and their ambition and product quality have been clear from the start.
In the days since, I have read every post and comment I could find on this topic. The reactions cover the full spectrum:
Investors like Soma Somasegar (Madrona) called it a natural fit and validation of Statsig’s culture of velocity and customer obsession.
Competitors quickly positioned themselves: Optimizely’s CEO stressed their ongoing commitment to customers, while GrowthBook launched a “migration kit.”
Industry experts like Ben Labay framed it as another sign that experimentation is shifting from conversion optimization to core product risk management.
Others were more cautious: some questioned whether this spells the end of Statsig as a standalone product, while others doubted customers will remain a priority when OpenAI’s focus is elsewhere. Some even suggested the acquisition is more about bringing Vijaye Raji and his team into OpenAI than about Statsig as a product.
Beyond the public debate, I also spoke with dozens of people one-on-one: customers, prospects, and competitors. The recurring theme is excitement about what is happening in the space, but also some nervousness. Customers and prospects wonder what will happen to support and the roadmap. Competitors see opportunities and, in some cases, can’t hide their envy of Statsig’s trajectory.
It is too early to know exactly how this will play out (the deal still needs regulatory approval). But one thing is clear to me: experimentation is and will remain integral to product development. With AI-native companies like OpenAI embedding experimentation at the core, it may become even more central.
🔎 Interesting things you might have missed
Experimentation and thinking at the level of a program of experiments
Dean Eckles argues that instead of judging single tests, organizations should think at the program level, where value comes from iterating, learning across many experiments, and refining decision rules. He calls for more methods that reflect the messy, creative reality of innovation. LINK
____
“A/B testing can't keep up with AI”
This piece makes a compelling case for the rise of evals, but it overstates the idea that they replace A/B testing rather than complement it. Evals are great for fast iteration and proxy measurement, yet only randomised experiments can show causal impact on real user and business outcomes. I would frame it as: the future is evals plus A/B, not evals instead of A/B. LINK
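To make that “evals plus A/B” framing concrete, here is a minimal Python sketch of how the two can complement each other rather than compete. Everything in it is illustrative and my own: the function names, the 0.90 eval threshold, and the t-test are assumptions, not anything from the piece.

```python
import numpy as np
from scipy import stats

def offline_eval_score(model, eval_set):
    """Cheap proxy metric: fraction of eval items the model gets right."""
    return sum(model(x) == y for x, y in eval_set) / len(eval_set)

def ship_decision(model, eval_set, control, treatment,
                  eval_threshold=0.90, alpha=0.05):
    # Step 1: fast offline eval -- iterate until the proxy looks good,
    # without spending any experiment traffic.
    if offline_eval_score(model, eval_set) < eval_threshold:
        return "iterate"
    # Step 2: randomised A/B test on real user outcomes -- only this
    # step demonstrates causal impact on the business metric.
    _, p_value = stats.ttest_ind(treatment, control)
    improved = np.mean(treatment) > np.mean(control)
    return "ship" if improved and p_value < alpha else "hold"
```

The design point: evals are cheap enough to run on every iteration, while the randomised experiment is reserved for the final causal check before rollout.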
____
Course: Building Award-Winning Experimentation Cultures
Join this 8-week cohort-based course covering process, data, psychology, and UX to strengthen your testing culture. Led by the amazing Ruben de Boer and Kelly Wortham. LINK (get $300 off)
____
The silent killer of innovation
The absence of psychological safety is the silent killer of innovation, where employees feel safer staying silent than voicing ideas or concerns. Leaders can and should rebuild safety, and with it innovation, by modelling curiosity, rewarding candour, normalising vulnerability, and listening without defensiveness. LINK
____
“It Takes Too Long To Experiment”
Zach Flynn says that experimentation should not be seen as slowing teams down, because while speed is about moving quickly, velocity is about moving in the right direction. Most product changes create small, hard-to-detect impacts, making experiments essential for reliable learning and avoiding wasted effort. LINK
____
Last week’s most clicked item:
Paper: Variance reduction in online marketplace A/B testing
This study evaluates four variance reduction methods: outlier capping, CUPED, CUPAC, and doubly robust estimation. Using historical data from Vinted, a large online marketplace, the authors find that CUPAC and outlier capping deliver the largest confidence-interval reductions (35%+), improving sensitivity. LINK
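If CUPED is new to you, here is a minimal, self-contained sketch of the core idea (the data below is simulated by me, not Vinted’s): subtract the part of the in-experiment metric that a pre-experiment covariate already explains, which shrinks variance without biasing the treatment effect.

```python
import numpy as np

def cuped_adjust(y, x):
    """CUPED: remove the variation in metric y explained by
    pre-experiment covariate x, using theta = cov(x, y) / var(x)."""
    theta = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())

rng = np.random.default_rng(42)
pre = rng.normal(100, 20, 10_000)             # pre-period metric (covariate)
post = 0.8 * pre + rng.normal(0, 10, 10_000)  # correlated in-experiment metric
adjusted = cuped_adjust(post, pre)

# The adjusted metric has much lower variance, which is what tightens
# confidence intervals and improves experiment sensitivity.
print(f"raw var: {np.var(post, ddof=1):.1f}, "
      f"CUPED var: {np.var(adjusted, ddof=1):.1f}")
```

The stronger the correlation between covariate and metric, the bigger the reduction; CUPAC extends the same idea by using a machine-learned prediction as the covariate.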
🚀 Job opportunities
Find 100+ open roles on ExperimentationJobs.com. This week’s featured roles:
Experimentation Delivery Manager at Creative CX (London, United Kingdom)
Marketing CRO Developer at Mindbody (Kuala Lumpur, Malaysia)
Director of Web Strategy & CRO at Eventbrite (USA)
Senior Software Engineer, Message Optimization & Experimentation at Attentive (San Francisco, USA)
CRO Specialist at PM Digital Design (United Kingdom)
📅 Upcoming events
A running list of upcoming events. Subscribe here.
18 Sep: Berlin Experimentation Meetup #10 (Berlin, Germany)
23 Sep: AI search optimization panel by Convert (online)
25 Sep: Statsig's Annual Conference: Sigsum (Seattle, USA)
13-18 Oct: Experiment Nation Conference (online)
📢 Personal update
This week I had two good conversations with vendors about the challenges in the experimentation space and how they are positioning themselves, spoke with a head of analytics at a fintech company about their experimentation challenges and ambitions, and caught up with Tom to discuss new plans for Experimentation Jobs.
To be honest, the real highlight was coffee and cake to mark my son’s first day at high school (even though the first day turned out to be a day off for him … what a life). A good moment to reflect on his time in primary school and how he’s feeling about what’s ahead.
Looking forward to seeing what I’ll be learning next week.
👂How was this edition?
Let me know by clicking one of these options: Excellent | Great | Good | OK | Meh
Or even better, simply hit reply. I read every email.
Have a great week — and keep experimenting.
Thanks, Kevin



The OpenAI + Statsig acquisition is a strange arrangement, and I don’t foresee good outcomes, even though Altman is finally acknowledging that exclusively betting on inductive reasoning over large data sets has run out of gas.
Good for the Statsig team; the validation is real, and I would certainly take the money. But given that AI has no concept of causality, the deal is a bit strange and smells of a speculative way for OpenAI to quickly capitalize on its oversized valuation.
Perhaps the best move would be to position Statsig as something the broader OpenAI organization can learn from, given generative AI’s endemic hallucination rates relative to the disciplines of causality, independent variables, and test power.
Except I expect the opposite: Statsig’s expertise would be the tail trying to wag the dog. Any gospel of sound statistical methods will fall on deaf ears, because good experimentation doesn’t scale exponentially with more and bigger data centers.