A sprint in the life of a Lead Developer at Movebubble

Earlier this month, I started writing a post-a-day account of a two-week development sprint at Movebubble – a proptech startup where I work as a lead developer. My motivation for doing this is somewhat fuzzy. Something about practising my writing and Getting Back In The Habit. I think when I started I imagined myself as a sort of nerdier Jack Kerouac writing a seminal work on the dreams of a generation, but I can’t help feeling that what I’ve ended up with is more Pilgrim’s Progress in C# than On The Road 2.0.

Either way, we’re here now. I do hope there is some value to the exercise. I’ve read a lot of theory on how to do Agile and how to do Lean and how to do Data Driven, but few real world examples that delved beyond superficial details necessary to support an author’s point. I wanted to write an account that was as unadulterated as possible; this is how it is, not just how the textbook says it should be. In particular, I wanted to highlight some of the compromises which get made and where conflicts occur between the need for quality and the need for speed.

Of course, I don’t expect you to read all of my 15,000 word epic. I mean, who really has the time? If you want to read the full account, then here are the links:

Otherwise, what follows is a highlight reel: a bit of context, some narrative, some outcomes and finally some observations.

I’ll try and keep it short. Ish.

Meet the Team

Movebubble is a proptech startup building a mobile-first platform connecting renters and agents with the intention of making the process of letting a property as pain-free as possible. We have two apps, one for each side of the marketplace, and we’ve had around 50,000 downloads in the last 12 months despite zero marketing spend and only covering London zones 1 to 3. Despite that success, however, we currently operate at close to 100% loss and need to prove our revenue model before we scale and get further capital investment.

There are about 20 employees, including two delivery teams which are, somewhat mysteriously, named the Water Camels and Space Monkeys (or, if you want to wind them up, Spunkeys). The official reason for this is mutter mutter Jordi came up with it; we just count ourselves lucky that we don’t have a team dance or handshake (yep, been there too).

I’m the lead developer on the Water Camel team. I’m also the de facto project manager, as we don’t currently have anybody else performing that role. Other Camels are Jordi (developer), Laura (developer) and Sunil (designer) and we also get some help from the leadership team but it’s generally a bonus rather than something we factor into our sprint planning.

Historically, the Space Monkeys have tended to look after the renter app and the Water Camels have looked after the agent app, but recently that distinction has started to blur as the two have become more integrated through our ‘chat’ feature. For this sprint, we were charged with building a single new feature which would be integrated into both applications; namely, the ability for an agent to suggest a property to a renter.

This would entail adding functionality to both apps and also to two services in our backend; our core API monolith and the chat microservice which is part of an ongoing project to break up the functionality into more rational domains. And we wanted to deliver this to a strict deadline due to holidays over the Easter period; that would mean building, testing and releasing the feature in a single two week sprint while the Space Monkeys worked in parallel on a separate set of improvements to the renter app.

How we work

Over the last few months we’ve tried to become as data driven as possible. As the number of users in the app has grown we’ve been able to gather more and more information on user behaviour and use that to inform decisions about what we should do next.

Our decision making process is something like:

1. What metric do we most need to move to achieve business success?
2. What are the key factors which influence that metric?
3. What do we think we can do to influence those factors?
4. Of those ideas, which of them will deliver the maximum impact for the minimum effort?
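To make step 4 concrete, here’s a toy sketch of the kind of impact-vs-effort ranking we’re talking about. This is illustrative only – the idea names and scores are made up, and we don’t literally run a script; it just shows the shape of the trade-off:

```javascript
// Hypothetical example: rank candidate ideas by impact per unit of effort.
// The ideas and scores here are invented for illustration.
const ideas = [
  { name: "suggested properties", impact: 8, effort: 5 },
  { name: "push notification copy", impact: 3, effort: 1 },
  { name: "redesign onboarding", impact: 6, effort: 8 },
];

const ranked = ideas
  .map(i => ({ ...i, score: i.impact / i.effort }))
  .sort((a, b) => b.score - a.score);

console.log(ranked.map(i => i.name));
// Small, cheap wins float to the top; big, expensive bets sink.
```

In practice the scores are gut feel informed by data rather than anything this precise, but the sorting principle is the same.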

On the back of this process, we end up with a list of possible development tasks – some of which are new features and some of which are us rethinking existing features – each tied to a hypothesis and in a rough priority order and these get scheduled into our roadmap accordingly. That roadmap is in a constant state of flux – as we get new data or become aware of new opportunities, we consistently review our strategy – so it hasn’t been unusual for us not to know what we’ll be working on the week before we actually start work on it.

We do try and make sure that Product and Design are ready to sign off on a feature before we start development on it, however. In previous sprints we started working on things that weren’t really ready for development and ended up wasting valuable time building things that weren’t quite right.

We also try and ensure that technical pre-planning takes place a couple of days before a sprint. This is partly to distinguish the process of working out what the stories are from working out how long they will take, as we found that it’s easier to do the second separately from the first and it leads to much more bearable meetings. It is also so that developers have a chance between seeing the designs and putting estimates on the cards to go away and think about whether there might be any issues that weren’t surfaced in the initial meeting.

So, in this case, the feature (Suggested Properties) was backed by a hypothesis that (what is the hypothesis)? We received signed off designs on the morning of the Thursday before we started the sprint, held a pre-planning whiteboard session on the Thursday afternoon to write up the stories, and started the sprint on Monday with a full set of stories but no estimates and no final commitment as to how much of the feature we would be able to implement inside the two weeks.

As it happened

The above is a burn down chart generated by Jira. You will notice that it’s lumpy. One thing that we could be better at is moving our tickets through the lanes – something that has been noted in retros is that none of us is really owning the ‘ready for sign off’ column so things tend to sit in there until they get moved over in a batch by Head of Product or a delegate.

When we started, we had 63 points on the board. We had estimated the full scope of the designs at around 85, but given that our previous performance had been around 45 points per week not including testing and release, and that we had a hard deadline of two weeks for this, we negotiated some scope changes and committed to the 63.

The division of labour is roughly such that I end up doing most of the backend work, Jordi handles the negotiator app and Laura handles the renter app. This is not something that we have formally agreed, but when the pressure is on (which is always) we tend to revert to the things we do best.

As I’m also doing a fair bit of project and requirements management, not to mention mentoring and dev support, it’s rare that I get a full day to do development but Tuesday turns out to be totally clear so I’m able to knock a fair chunk out of the work that’s on me. By Wednesday morning I’m done with 80% of the backend work and I decide that it might actually be a good time for me to get set up to do React Native development on Windows – and this is where the trouble starts.

Most of our React work is done on a Mac. Jordi works on one, Laura works on one, I used to work on one. But as .NET Core is not quite ready (at least, not at the time of writing) and remote-desktopping into a Windows machine for C# development proved impractical (due to the lack of debugging support) I was left with the choice of migrating to Windows or running a dual boot. Since React Native is supposed to be cross-platform and I had by this point decided that I hated Macs, I chose to move to a Windows machine. This was my first chance to get set up to do React Native Android-first.

It takes me a day and I’m still not there – some version incompatibilities with a couple of our plugins which aren’t a problem in iOS but are a problem in Windows. I want to upgrade but that causes a host of other issues, which we don’t have time to get to the bottom of. Plus I’m coming down with something. The week doesn’t really recover from there.

Monday I come in refreshed but discover there are a number of minor bugs and dev support issues awaiting me, unrelated to current work. I also have a number of meetings, so while I work through them relatively quickly I don’t have a lot of time at my desk. It’s 2.30 before I can get stuck into any sprint work and then there are issues raised by both Jordi and Laura relating to the new relationship between viewings and chats. This isn’t any cause to panic – yet – as we’ve allowed time for this kind of thing, so I sit down and try to understand the problems and make suggestions as best I can. I have another meeting then with Sunil to talk about some upcoming work and then I’m finally able to do a bit of dev before heading off to play football.

Tuesday starts with more informal meetings – a discussion about indexing in MongoDB, a discussion about the badging issue raised by Laura the previous day – and I get started on development by around 11. But there’s always more stuff flying in and I am off to the investors meeting in the afternoon, so I still get very little done. Time pressure is starting to tell, but I still think we’re on course to deliver.

Wednesday I’m working on a hangover accrued after the investors meeting. I hadn’t had a drink in 5 weeks beforehand, but in the bar after the meeting I found myself completely at a loss when surrounded by all the rich, suited men talking about the companies they’d bought and sold; then, with the company awards dinner afterwards, that first pint turned into 6 or 7 more.

Still, a few others are in the same boat and it turns out that cycling in really helps burn some of it off. And we’re at a crunch point in the sprint with the bug hunt scheduled for that day so I really need to just get on with things.

Which I do. I finish the last pieces of my backend work and then I’m free to help out with other bits and pieces. We’re pretty much there, the only major issue still hanging over us is the message badging which just refuses to work sensibly. We end up needing Valerio’s help with this as neither Laura nor I have enough experience with the inner workings of the renter app to make good decisions and we need a solution in the next 24 hours.

The bug hunt goes reasonably well – there are quite a few issues but they are relatively small. By late on Friday, we’re ready to release – from the pub!

Outcomes

So, how did it all go? Well, in the strictest sense of the word, we delivered. The feature worked and people were using it within a few hours of the official release notes being sent round.

But the process wasn’t perfect and there were a number of problems, particularly in the week post-release. We ended up holding a retro a week late due to holidays, which meant that some things were less fresh in our minds but also meant that we could assess the release in conjunction with the fallout and post-release support.

And probably the biggest issue was the fact that none of the agents seemed to be able to find the feature we’d built, because one of the things we had descoped was the tooltip which showed them where it was. If anything, this was a lesson in decision-making.

What we did well

We predicted a capacity of around 80 points and, allowing for about 20% scope creep, scheduled 63 into the sprint. Both the amount of work we did and the amount of scope creep we were allowed were pretty much exactly as we expected – we ended up doing 77.5 points worth.
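As a back-of-envelope check on those numbers (the variable names here are mine, just restating the arithmetic in the text):

```javascript
// Sprint arithmetic from the retro: we committed 63 points and
// delivered 77.5 including scope creep.
const scheduled = 63;    // points committed at sprint start
const delivered = 77.5;  // points actually completed
const creep = (delivered - scheduled) / scheduled;

console.log(`actual scope creep: ${(creep * 100).toFixed(1)}%`);
// ≈ 23%, close to the ~20% allowance we'd planned for
```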

The planning went smoothly, meetings were pretty painless and everybody on the team seemed to be pulling in the right direction. We were better at testing during the sprint – the burn down chart wasn’t perfect, but it was better than in some previous sprints where it felt like everything was unfinished until right near the very end.

There were signs of stress but the level of teamwork was high and everybody took responsibility for their work. The team feels cohesive and ultimately we delivered what we planned to deliver in the time we planned to deliver it. Within a few hours of the official release notes, people were using the new feature that we built with no reported problems.

There were some concerns about the discoverability of the feature (one of the things that we descoped was a tool tip pointing the new feature out), but evidence from the database suggested that a not insignificant number of agents had engaged with it and the resultant support load was small to none. It is still early days, but we have to be happy with that (and we are).

What we did badly

We’ve improved our testing but it could still be better. My suggestion of a formal test plan didn’t really work out – it ended up being a couple of wasted hours for Will since it was never actually used. Under time pressure, we cut corners, particularly close to release time, and missed out on the second bug hunt that we should have done before submitting the apps.

Laura and Jordi were ultimately the ones who paid for this. In the week following release, when I was on holiday, they found a lot of time taken up dealing with issues after the event, resubmitting the app and running through release checklists. A bit more care taken over testing earlier in the sprint and some explicit time scheduled in for managing this process would have helped a lot.

The same time pressure that led us to cut corners also led us to make mistakes. In some senses, we expect and account for this – rushing through features at high velocity at the expense of a higher support load is a conscious decision and something that would be a false economy in a larger more stable company but is part of our experimentation and data-gathering philosophy. At the same time, there are lessons to be learned from requirements we missed and some of the bugs that were released. These are things that we need to factor into future planning sessions.

Actions

We decided that we needed to rejig the way that we test. First off, we’ll add explicit constraints to our Agile Board to force people to think more about testing during the sprint – a pop-up input on moving an item into Test, prompting for a test plan, and another pop-up on moving into Accepted, prompting the user to record the outcome of testing.

We’ll also factor in explicit time for testing as a percentage of sprint points – finger in the air numbers to start with are 25% pre-release but also 25% post-release, to account for support load and the fact that we expect a certain number of bugs to be encountered in the wild.
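Assuming those percentages come out of the total sprint points (my reading – we haven’t formalised it yet), the budget for a sprint like this one would carve up roughly as follows:

```javascript
// Illustrative only: how the proposed testing budget might split a 63-point
// sprint. The 25% figures are the finger-in-the-air numbers from the retro.
const sprintPoints = 63;
const preReleaseTesting = sprintPoints * 0.25;   // testing before release
const postReleaseSupport = sprintPoints * 0.25;  // bugs found in the wild
const featureWork = sprintPoints - preReleaseTesting - postReleaseSupport;

console.log({ preReleaseTesting, postReleaseSupport, featureWork });
// roughly 16 + 16 points of testing/support, ~31 points of feature work
```

Halving the feature budget sounds drastic, but it’s really just making visible the time we were already spending reactively.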

Regarding planning, we added a couple of considerations to our planning checklist. First off, we’ll be inviting an additional stakeholder from our operations team to planning meetings, just to provide additional insight on business requirements that perhaps we have missed. Secondly, we’ll be explicitly asking the question ‘how do we track this?’ in each session, to make sure that we can measure success more easily after release.

Summary

There’s a lot to read, particularly if you go through the day by day accounts, and I suspect that this will be buried at the bottom of the internet before long. But writing everything down was at least a valuable exercise for me as it forced me to think harder about some of the things we could be doing better and where my time was going.
