The Great Canadian #NoEstimates Puzzle Experiment: SDEC13 Results

Tags: noestimates, experiments, games, agile

This past Tuesday I presented my third running of the #NoEstimates Puzzle Experiment in Winnipeg, Manitoba to attendees of the Software Development Evolution Conference 2013. This was by far the most interesting, entertaining and informative iteration I've seen to date, and true to form, it generated a lot of conversation during and after the event.

Assisting me as the facilitator for Puzzle Team A (Scrum) was SDEC13 chair Steve Rogalsky (@srogalsky), who delighted in taking on the role given his "penchant" for Scrum - and by "penchant" I mean "disinterest". He was a good sport, though, and while overshadowed by Product Owner Dave Sharrock's enthusiasm, he did manage to ensure his team stuck to their guns and saw things through.

Everyone who participated did so with full enthusiasm and did exceptionally well given the exercise I put before them: Thank you to the sixteen brave souls who comprised Puzzle Teams A and B! You've added more to my knowledge than you can imagine!


Both teams completed approximately the same amount of their puzzle images, within a few pieces of each other. As my notes below will show, I think there are some contributing factors as to why and how this occurred.

Notes and Observations

  • For the first time, both team POs independently chose the same portion of the puzzle to solve: the three rightmost houses, which devolved into the rightmost house due to time constraints. This made the results the two teams achieved more comparable - and made for some interesting interactions between the teams, i.e. "smack-talk". ;-)

  • Puzzle Team A (Scrum rules) really surprised me with the way they self-organized to plan their work and estimate it, employing a visual T-shirt sizing technique that I believe allowed them to make dramatic progress in learning about their problem domain.

  • The Product Owner for Puzzle Team B made an interesting and controversial decision to allow his team to ship their product with a known defect when they were unable to find a puzzle piece for a 2nd-story window in one of their houses. This could have been an opportunity for them to pivot to another area for more value, but they chose to stick to their original plan - an interesting analog to what we see happen in the real world.
  • I introduced a rule change for Puzzle Team B to roughly align them with a "mob" team and to try to slow them down from what I had observed in previous events. I stipulated that a single team member be responsible for the puzzle assembly (a "driver") while the others helped to sort the pieces for them. Every four minutes or so, the "driver" would rotate out and a "navigator" would rotate in. This introduced more of a constraint than I intended and confused the team members as they rotated around the table, continually shifting their context. This became a bottleneck, and the team felt continually "hobbled" in their efforts. What was interesting, however, is that the team didn't adapt their process in situ to work around this constraint, or change it entirely.

  • In an interesting example of optimism bias, at the half-way point both POs expressed the same level of confidence (80%) that their team would meet its respective initial Definition of Value, i.e. the three rightmost houses. This seemed ambitious, but the POs were also careful to mention that the teams would be able to provide some completed value that they could use, i.e. a "complete" portion of the puzzle as opposed to a fragmented one. Interestingly, their teams shared the same confidence levels.
  • A noticeable difference between Puzzle Teams A and B was the intensity of the effort: Puzzle Team A seemed more relaxed in the exercise, with some team members even pushing away from the table. In contrast, Puzzle Team B seemed totally focused, with very few people taking any time to step back. Some of Puzzle Team B's members seemed to thrive on this; others weren't so happy.
  • Puzzle Team A also showed an interesting tack in choosing to work on a single user story at a time, with everyone contributing in some way to building the puzzle, even if only to point out a single piece that was previously overlooked. They also had a good number of stories that were closely-sized, enabling them to lower variance in their cycle times. In a way, they were adopting some of the strategies that Puzzle Team B was set up to do from the start.
  • Each team did retrospectives, but it seems that Puzzle Team A embraced them more fully and made changes to their process that enabled them to do more for their PO. This was an interesting development that I've not seen happen in other iterations of the experiment - often the Scrum team will just keep going without making any changes. The key takeaway is that retrospectives work when you apply what you learn from them.
  • Bryan Beecham and others suggested that there may have been too many people on each team, and given that they had to work around a banquet table, he's probably right. This said, during the first run of the experiment we had about the same number of people, but they were distributed along a long, rectangular table and in more cramped confines. I think this is something that could be adjusted.


At the conclusion of the experiment I asked both POs and teams to relate their experiences and challenges. Puzzle Team A was really pleased with their progress (as they should have been!) and almost immediately walked away from the table. Puzzle Team B was very different, perhaps owing to Bryan Beecham's participation: after they related their results, Bryan asked the team if they could do a post-experiment retrospective to share what they thought worked against their ability to deliver well.

This was another interesting and unexpected development - usually both teams walk away and continue their conversations elsewhere. Puzzle Team B knew they could have done better and were quick to point to the elements of the experiment they thought worked against them. A chief criticism was directed at my facilitation, which they felt was too intrusive: By telling them to rotate drivers and navigators, I impeded their ability to self-organize around their tasks. It was a fair cop, in my opinion.

This also demonstrated something really interesting to me: even though this was an abstract exercise among some strangers, they actually became invested in what they were doing. They wanted to do well and were bothered when they didn't meet their own expectations of themselves. ME == MIND BLOWN.

Closing Thoughts

While running the experiment I had running conversations with several observers, including session speakers Janet Gregory and Jason Little. Both were really intrigued by what they were seeing and saw the potential for adapting the experiment in different ways for their own purposes, and shared some really great ideas for how it could be improved for future iterations - this was enormously helpful.

Afterward I had further, deeper conversations with other speakers about what they saw and how else they could envision the experiment being adapted and used in different scenarios and to different effects. Through these dialogues it began to dawn on me that while Alexei Zheglov and I created the experiment to observe interactions between individuals on teams that estimate their work and those that do not, it's a better canvas for exploring many other team and system interactions than I anticipated. This is the value of taking the risk of putting an idea out in front of others to interact with and critique: you risk rejection and ridicule, but you also stand to be rewarded with new insights and ideas.

In this regard, Puzzle Team A's PO, Dave Sharrock (who was also a session speaker), asked me afterward about what I intended the learning point of the experiment to be as it didn't seem evident to him. My response was that we want to learn something about teams and how process can either impede or accelerate their learning-to-implementation cycle times. In support, I referred to a 2010 post that Dan North wrote where he related a conversation with Liz Keogh on a thought experiment that revealed how, in knowledge work, learning is the constraint. I firmly agree, and the experiment is my attempt to find ways to surface and exploit it.

Point taken, however: Dave is right that this isn't as obvious as it could be, and I will be making changes to improve the quality of the experiment and make the learning outcomes more evident. This could mean some significant changes prior to our upcoming showing at Agile Tour Toronto 2013 next week. See you there!
