Extreme Programming For Modern Start-Ups

gannon - 11 Apr 2014

Extreme Programming (XP) can be a very effective way to build software, but out of the box, it is poorly suited to many teams. It requires that the team be small, co-located, and working on a single product at any given time. It also assumes that suitable designs can be arrived at by working in micro-increments (e.g. TDD cycles) without up-front design. In this post, I'll discuss how XP can be adapted to suit modern start-ups.

XP is rad

XP works very well for quickly building a new software system and adapting it to meet customer needs. Iterative development, customer value, automated testing, bug-free code and energized work are all a great fit for any innovative team. It is simple to get started with (assuming your team is prepared to dive right in), simple to manage, and designed to be adapted over time.

What about Kanban?

Since the Kanban methodology doesn't have XP's constraints, it has become a popular choice in recent years. Kanban, however, is more complicated to get started with (you have to define workflow phases and WIP limits) and more complex to manage (adjusting classes of service and SLAs). And despite that complexity, it doesn't cover technical practices at all. Additionally, the intense focus that is fostered by using iterations is lost.

Also, Kanban (as originally designed) seems to eschew estimation, which doesn't work well for fine-grained project tracking. All items (stories or tasks) in a Kanban system are treated as equivalent in size, but on the teams I've worked on, tasks/stories usually vary in size quite a bit. While splitting large stories is a good practice with either methodology, it doesn't completely solve the problem. Significant functionality can't really be broken into pieces equivalent in size to a small task (like re-arranging widgets on a page or fixing a small bug).

The variance in item size means that cycle time (the primary metric for setting expectations) is only predictive to the extent that the mix of task sizes stays consistent over time. On most teams, product work seems to come in waves, so the cycle time for a task during construction of a major new product is going to be significantly longer than when the team is working on maintenance and nominal enhancements.

Although some teams use T-shirt size estimation with Kanban (where each size has its own cycle time calculation), the mix of item sizes being worked on by a team will influence the cycle times of all items. In other words, if at one point 5 XL items and 2 Small items are in progress, the cycle time for all items is likely to be higher than at a time when 2 XL items and 5 Small items are in progress. Accordingly, the cycle time per size will still fluctuate significantly, although there is probably some mitigating effect compared to not using estimation at all. (We just started doing this, and haven't analyzed the results yet.)
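
To make the per-size bookkeeping concrete, here is a minimal sketch (in Java, with a made-up WorkItem type and sample data) of the kind of calculation involved: average cycle time grouped by T-shirt size. Watching these per-size averages over several periods is what exposes the fluctuation described above.

```java
import java.util.*;

// Hypothetical completed work item: T-shirt size plus measured cycle time in days.
class WorkItem {
  final String size;
  final double cycleTimeDays;
  WorkItem(String size, double cycleTimeDays) {
    this.size = size;
    this.cycleTimeDays = cycleTimeDays;
  }
}

public class CycleTimeBySize {
  // Average cycle time per T-shirt size over a set of completed items.
  static Map<String, Double> averageBySize(List<WorkItem> done) {
    Map<String, double[]> totals = new HashMap<>();  // size -> {sum of days, count}
    for (WorkItem item : done) {
      double[] acc = totals.get(item.size);
      if (acc == null) {
        acc = new double[2];
        totals.put(item.size, acc);
      }
      acc[0] += item.cycleTimeDays;
      acc[1] += 1;
    }
    Map<String, Double> averages = new HashMap<>();
    for (Map.Entry<String, double[]> e : totals.entrySet()) {
      averages.put(e.getKey(), e.getValue()[0] / e.getValue()[1]);
    }
    return averages;
  }

  public static void main(String[] args) {
    List<WorkItem> done = Arrays.asList(
        new WorkItem("S", 2), new WorkItem("S", 3),
        new WorkItem("XL", 15), new WorkItem("XL", 22));
    System.out.println(averageBySize(done));  // e.g. {S=2.5, XL=18.5}
  }
}
```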

Times have changed

The boom

These days, start-ups (especially in SF/Silicon Valley) are operating in a different environment than when XP flourished a decade ago. We are in a boom, which makes it very difficult to hire qualified engineers in any start-up hub. This has motivated start-ups to hire quality engineers anywhere they can find them: in parts of the US that are not start-up hubs, or even abroad. As a result, many teams at modern start-ups are geographically distributed, often across time zones, as is the case here at Bizo.

Big Data

Also, the number of people active on the web is much larger than it used to be, and the amount of data required to build compelling applications is exploding. Accommodating that volume of data, especially in an era where users expect applications to respond instantaneously, requires carefully choosing efficient algorithms and data structures (e.g. HyperLogLog, Bloom filters, P-Square, etc.), selecting appropriate data stores, and choosing the right model of concurrency. Designing software using the typical approach to TDD won't lead smoothly to a design that performs well under these conditions.
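
As one small illustration of the space-efficient structures mentioned above, here is a hedged sketch using Guava's BloomFilter for approximate set membership over a large number of user ids. The SeenUsers wrapper and the sizing numbers are illustrative assumptions, not a description of our actual stack.

```java
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.nio.charset.StandardCharsets;

// Illustrative wrapper: "have we seen this user id before?" without storing every id.
public class SeenUsers {
  // Sized for ~100 million expected ids at a ~1% false-positive rate,
  // which takes on the order of 100 MB rather than a full in-memory set.
  private final BloomFilter<CharSequence> seen =
      BloomFilter.create(Funnels.stringFunnel(StandardCharsets.UTF_8), 100_000_000L, 0.01);

  public void markSeen(String userId) {
    seen.put(userId);
  }

  // May return false positives (~1%), but never false negatives.
  public boolean probablySeen(String userId) {
    return seen.mightContain(userId);
  }
}
```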

In an era where distributed teams and Big Data are the new norm, we need a refresh of XP to suit our needs.

Distributed teams

XP insists on co-located teams because high-bandwidth communication happens almost automatically. Also, XP espouses Pair Programming, which has traditionally been done in-person. Let's look at some alternatives for distributed teams.

Prolonged stand-ups as a stand-in for in-person communication

The next best thing to in-person communication seems to be video chat. At Bizo, we hold our stand-ups using Google+ Hangouts. XP encourages very quick (~5 minute) stand-up meetings (hence the name) where each team member says just 3 things: what they did yesterday, what they're going to do today, and what they're blocked on, if anything. We have naturally tended toward longer stand-ups (15-20 minutes) on our team, with team members disseminating all kinds of information and asking general questions of the group. When I was on a previous team that was co-located, I would try to encourage team members to keep it brief and take any additional discussion 'offline'. I am coming to the conclusion that on a distributed team, however, a slightly longer stand-up is actually good: the additional time spent is a small price to pay for high-value ad hoc communication, similar to what happens throughout the day on co-located teams.

Inviting the Project Owner to the daily stand-up is a great way to maintain XP's focus on Real Customer Involvement. Having regular demos with the Project Owner (e.g. the Product Manager, the client, or whoever is the project sponsor) is also a good idea.

Informative workspace

An informative workspace can be accomplished by having a monitor in every company office with engineers that constantly displays a dashboard and/or an up-to-date view of the project/task tracking app. (Such views should also be readily accessible to remote employees on demand.) Pivotal Tracker is an awesome web app for XP-style project tracking - its UI is well-suited to just such a purpose.

Pair Programming and real-time-ish code reviews

Although Pair Programming is normally done in-person, some great tools have been developed that make it much more feasible to do remotely. Screen Hero is one example: it provides highly performant screen-sharing where both parties can type and mouse around, along with built-in voice chat. (Although it's awesome, we don't use it very often because it doesn't have Linux support - just Mac and Windows.)

For teams where members have significantly different schedules (e.g. due to time zone differences), Pair Programming isn't feasible. In those cases, a near real-time code review process is a decent substitute. Personally, I think code reviews should have a single person who is responsible for giving a thumbs up/down on a change set. (Additional reviewers are best included only as an FYI.) The author can email a specific person asking them to review the code at their earliest convenience (or email their team asking for a volunteer). In my opinion, the ideal is for the reviewer to actually call the author and keep them on the phone while they review the code. Code reviews can sometimes involve an in-depth back-and-forth, and it's best if that can happen in real-time. The key here is turn-around time, so that integration and deployment can be done nearly continuously.

Large teams

Kanban-XP interop

XP is designed only for teams of up to 10 people, all working on the same project at any given time. The obvious way to scale is to break teams up as they grow, so no single team ever exceeds that limit. That's also a great way to ensure that each team is only working on a single project while the organization progresses on several projects simultaneously. XP isn't really suited to managing the flow of work across teams and projects, so having an overarching Kanban process with XP implemented within individual teams may offer the best of both worlds.


Having teams work on a single project enhances the team's focus and synergy. However, since an organization's project portfolio changes over time, this approach requires continual re-organization of teams. Having teams be completely ephemeral necessitates standardization of tools and processes throughout the engineering department to avoid excessive re-training, which can lead to bureaucracy over time. One way to mitigate this is to limit such standardization to pods of related ephemeral teams, where team members generally stay within a pod for the duration of several projects.

Scalable systems

Although comprehensive up-front design is a very straightforward way to build scalable systems, there are techniques for taking a more agile approach.

Spike Solutions

First, spike solutions can be used to validate basic performance characteristics. Some very limited up-front design (back-of-the-envelope performance calculations for various potential infrastructure pieces) can yield a prospective stack for a system. That stack can be used in a prototype solution (with little to no business logic) that validates whether the stack can stand up to the expected load. The most effective technique for validating scalability at this stage (when feasible) is performance tests that use a unit testing framework (e.g. JUnit or RSpec) and invoke the application using in-process calls. The logistics are easier than true end-to-end load testing, although the results are less conclusive.
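
For example (and only as a sketch under assumptions), a spike-level performance test might look like the JUnit test below, where EventIngester is a hypothetical facade over whatever the prototype stack wires together; the test drives it with in-process calls and asserts a rough throughput floor taken from the back-of-the-envelope sizing.

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class IngestThroughputSpikeTest {

  // Hypothetical facade over the candidate data store / queue / concurrency model.
  static class EventIngester {
    void ingest(String event) {
      // Call the real prototype components here.
    }
  }

  @Test
  public void sustainsRoughTargetThroughput() {
    EventIngester ingester = new EventIngester();  // prototype stack, little to no business logic
    int events = 100_000;

    long start = System.nanoTime();
    for (int i = 0; i < events; i++) {
      ingester.ingest("event-" + i);
    }
    double seconds = (System.nanoTime() - start) / 1e9;
    double eventsPerSecond = events / seconds;

    // The threshold comes from the back-of-the-envelope sizing, not a precise benchmark.
    assertTrue("too slow: " + eventsPerSecond + " events/s", eventsPerSecond > 10_000);
  }
}
```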

If the spike is successful in verifying adequate performance, subsequent stories can be implemented with a normal TDD cycle utilizing the tested stack. Maintain the performance tests so that they can be run against the real system as it develops, although you probably do not want to run them as part of your unit test suite (which should be very fast).
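
One way to keep them out of the fast suite (a suggestion, not the only option) is JUnit's categories: tag the performance tests with a marker interface and let the build include or exclude that category.

```java
import org.junit.experimental.categories.Category;

// Marker interface used only for tagging; it would live in its own file in practice.
interface PerformanceTest {}

// The spike test from the previous sketch, tagged so the fast suite can exclude it.
@Category(PerformanceTest.class)
public class IngestThroughputSpikeTest {
  // ... tests as above ...
}
```

Maven's Surefire plugin, among other JUnit runners, can include or exclude categories, so the performance tests only run when explicitly requested.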

Leveraging the cloud

It's important to note that the initial solution may not be all that efficient in terms of infrastructure cost. If you're hosting your system in the cloud (and not provisioning capacity for it ahead of time), that's often OK, as long as the system can scale horizontally. Keep an eye on infrastructure costs, and schedule performance stories to improve efficiency over time.

Load testing and related hackery

Prior to fully launching, it may be wise to do a true end-to-end load test. The logistics of these tests are often difficult (and potentially expensive).

If your system is already running at scale, but you want to load test a new piece of infrastructure (or algorithm, etc.), one hack to consider is embedding an isolated load test within your running system. The key to this hack is taking care to limit the impact of the test on your production system. At a minimum, use short timeouts around the experimental code and ensure that errors are caught. Also, if you can't roll back painlessly, you may want to build in some kind of kill switch for the test.
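
Here is a minimal sketch of that isolation, assuming a hypothetical ExperimentalStore interface for the infrastructure under test: the experimental call runs on its own small thread pool, is bounded by a short timeout, swallows any errors, and sits behind a volatile kill-switch flag.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class EmbeddedLoadTest {
  // Hypothetical interface for the new infrastructure being evaluated.
  interface ExperimentalStore {
    void put(String key, String value);
  }

  private final ExecutorService pool = Executors.newFixedThreadPool(4);
  private volatile boolean experimentEnabled = true;  // kill switch, e.g. flipped via config

  private final ExperimentalStore experimentalStore;

  public EmbeddedLoadTest(ExperimentalStore experimentalStore) {
    this.experimentalStore = experimentalStore;
  }

  // Called from the live request path; must never hurt the production system.
  public void shadowWrite(final String key, final String value) {
    if (!experimentEnabled) {
      return;  // kill switch tripped: skip the experiment entirely
    }
    Future<?> result = pool.submit(new Runnable() {
      public void run() {
        experimentalStore.put(key, value);
      }
    });
    try {
      // Short timeout so the experiment can never hold up a live request for long.
      result.get(50, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      result.cancel(true);  // abandon the slow call rather than wait
    } catch (Exception e) {
      // Swallow (and log) everything else: experiment failures must not reach users.
    }
  }

  public void disable() {
    experimentEnabled = false;
  }
}
```

If even a bounded wait in the request path is too much, a fire-and-forget variation (not waiting on the Future at all) keeps the production impact closer to zero.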

Everything else is the same

With the tweaks mentioned here, the remaining XP practices and principles are feasible without significant modification. Those are (as outlined in The Art of Agile Development, James Shore & Shane Warden, 2007): Vision, Release Planning, Iteration Planning, Test-Driven Development, Energized Work, Root-Cause Analysis, Retrospectives, Ubiquitous Language, Coding Standards, Reporting, Slack, Stories, Estimating, Risk Management, Customer Tests, Refactoring, Incremental Design and Architecture, Simple Design, Spike Solutions, Exploratory Testing.


Much of what is described here was gleaned from experience, but some of it is conjecture, so YMMV. I'm interested to hear feedback from those who have tried to scale XP.
