Tag archive

metrics

Five Simple Guidelines to Agile Metric Bliss

in Agile/Agile Engineering/Project Management

Over the past couple of years, I’ve had the opportunity to work with many great teams in many industries. I often work with teams and their managers to generate reports, and in doing so I quickly realize that although a team may be working to adapt and leverage one of the frameworks that fall under the Agile moniker, they have not yet adopted or clearly understood the Agile Principles and Values. This comes through clearly when we look at the kinds of reports people are using.

I’ve seen dashboards that compare multiple teams’ velocities. Or the classic utilization report that shows time worked by each team member. Or quality reporting that focuses on who rather than what. And then there is the singular snapshot of the percentage of the backlog done, which doesn’t sound that bad until a manager has kittens because the percentage is not what was expected.

Now, I have to admit: I’ve made all of these reports before and used them myself. Each report had an intended outcome, but my best intentions sometimes resulted in gamesmanship of the numbers or fear within the team, and in one case a team member pulled me aside, worried that he might be culled.

Let’s face it, any metric or report can be used for bad. So what do you need to do to create good metrics? There are some great resources all over the internet that help answer this question, but let me give you my top five things you can do to make your metrics effective while fostering an agile environment:

  1. Make them Transparent. This is obvious, but I often see people create reports and not share them. I get that some reports are “for their eyes only”; however, in most cases, if not all, unless a report contains salary information, make it visible to everyone: the Team, Stakeholders, Managers, and even Customers.
  2. Make them Visual. Use charts, shapes, colors, and pictures over a table of figures. We do this for three reasons: visuals are easy to read, they reduce the likelihood that people focus solely on outlier values, and in many cases they create conversations. By the way, use colors wisely; just like words, colors mean things.
  3. Follow the Trends. This goes hand-in-hand with making them visual: a good metric provides indicators that make it easy to see whether the needle is pointing up or down. Trends generally let you decide whether what you are doing is working, and they reduce snap decisions (see the sketch after this list).
  4. Make them Relevant and Timely. The out-of-the-box metrics (burn downs, burn ups, cumulative flow, cycle time, and velocity) should be maintained on a daily, weekly, or per-iteration basis; updating them in arrears does no good. The same holds for all agile metrics. Since the goal of any metric should be to continuously improve in some way, a report or metric created or updated weeks or months after the fact does us no good. And couldn’t we better use that time and brain power on something current?
  5. Have a Purpose. Every report or metric needs to earn its keep. If you cannot answer two questions about a report or metric, maybe you should stop spending the time and money to create it. First, why are we creating this report? And second, what will we do if the report indicates a need to adjust or change?
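
To make guideline 3 concrete, here is a minimal sketch in Python of one way to follow a trend: smooth per-sprint velocity with a trailing rolling average so the direction of the needle, rather than any single outlier sprint, drives the conversation. All figures are invented for illustration.

```python
# A rolling average smooths per-sprint velocity so the trend, not any
# single outlier sprint, drives decisions. All numbers here are made up.

def rolling_average(values, window=3):
    """Trailing average over the last `window` values at each point."""
    result = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        result.append(sum(chunk) / len(chunk))
    return result

velocities = [21, 18, 25, 19, 23, 26, 24]  # story points per sprint (invented)
trend = rolling_average(velocities)

for sprint in range(1, len(velocities) + 1):
    raw, smoothed = velocities[sprint - 1], trend[sprint - 1]
    direction = "up" if sprint > 1 and smoothed > trend[sprint - 2] else "flat/down"
    print(f"Sprint {sprint}: velocity={raw}, trend={smoothed:.1f} ({direction})")
```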

Now, these are my five simple (and in some cases, general) guidelines. What are yours? Do you have any suggestions?

Always keep in mind …

Working software is the primary measure of progress.

Continuous attention to technical excellence and good design enhances agility.

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
– Agile Manifesto

Agile Metrics Resources

in Resources

My last post talked about using Agile Metrics. I reflected mostly on my experiences at nuBridges and the metrics we put into place. These metrics were good at establishing heartbeats and, in general, monitoring the process. There are plenty more practical metrics that help organizations assess value, identify and measure quality, and better understand where things can improve. While writing the last post, I found some great websites and resources on agile metrics that are worth sharing:

  • Earned Value for Agile Development by John Rusk of Optimization, Ltd., found on www.agilewiki.com. This PDF contains a solid explanation of how earned value can be applied to any agile project. The document is well written and easy to understand, even if you don’t have a practical project management background.
  • Agile Metrics page on Dave Nicollete’s wiki/blog site (davenicollete.wikispaces.com). Dave assembles some of his own presentations as well as some links to other valuable sites. His presentations are excellent and make for great training materials.

Another, Another Post About Agile Metrics

in Agile Engineering

Even if you are on the right track, you will get run over if you just sit there.
– Will Rogers

As you can see from the title of this blog post (or shall we get fancy and call it an article), it is about Agile Metrics. And as you can also glean from the title, this is yet another blog post about the topic. However, I plan to be different: I’ll come at the topic purely from personal experience and reflection. Now, I’m not saying other writers do not reflect upon their personal experiences, but over the last eighteen months the teams I’ve worked with have done an excellent job of forming and tracking metrics. These metrics do two things:

  • Provide the stakeholders with insight into what the teams are doing, how well they are doing it, and what is coming when.
  • Provide the team with measurements that can be monitored to identify and understand how the process of software development and deployment is working.

These both lead to consistency, trust, and continuous process improvement. Now, don’t get me wrong, metrics alone don’t provide these things; but capturing metrics, measuring, and then taking stock of the measurements does help establish consistency, foster trust, and drive continuous improvement.

We chose six metrics to capture:

  • Roadmap Effort Achieved. This measures the enhancements, new features, and other business-value activities performed during the iteration versus defects and other technical tasks, which, although required for sustaining the product, don’t carry the value agreed upon with the Product Owner.
  • Utilization. This is broken out into two metrics: (1) Ideal Utilization, which is actual hours booked against all project activities divided by the team’s ideal hour capacity; and (2) Planned Utilization, which is actual hours booked divided by the team’s planned capacity. The first measure generally runs from the mid 60s to the low 70s percent, depending on the roles and the level of the team (see the sketch after this list).
  • Defects Raised. If this isn’t clear, then nothing is (just kidding). This metric tracks defects born out of all touch points, including level 3 support, testing, and those raised by developers that should have been caught during continuous integration cycles.
  • Sprint Task Complete. This metric is similar to a sprint task burn down; however, it measures how close the team came to completing the tasks they signed up for. It does not include tasks introduced during the sprint, only those the team originally committed to. The challenge we find with this measure is that the number can be skewed for a small team. Still, it clearly indicates either too many cross-winds hitting the team, a lack of focus, or simply stories/tasks that are too large.
  • Sprint Effort Complete. Effort relates to hours, so this gives a pretty good barometer of how close the team came to finishing its tasks. We measure this for two reasons: (1) a small team that completed only 1 of 3 tasks looked bad, when the reality was that it had finished 95% of the work and just needed another day or two to get it across the finish line; and (2) it allows us to understand the impact of estimates and cross-winds (a.k.a. support). The team still got recognized for the effort, along with the recognition that more focus or a better breakdown of tasks might help complete what was committed.
  • Estimate Accuracy. This type of metric is not unusual in most traditional PMOs; here, however, the goal was to drive the behavior of breaking tasks down further and not estimating something you don’t understand. The resulting awareness helped the organization build features consistently and follow best practices surrounding analysis and design. We also found it valuable for prototyping, in that the team was able to break down the tasks in a fairly repeatable way, and the estimates only got better during these periods.
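
For the arithmetic-minded, here is a minimal sketch of how the utilization and completion measures above might be computed for a single sprint. The field names, numbers, and exact formulas are assumptions for illustration, not the data model we actually used.

```python
# One sprint's worth of hypothetical numbers, run through simple versions
# of the measures described above. Field names and exact formulas are
# assumptions; the post doesn't spell out the underlying data model.

def pct(numerator, denominator):
    return round(100.0 * numerator / denominator, 1) if denominator else 0.0

sprint = {
    "actual_hours": 312,      # hours booked against all project activities
    "ideal_capacity": 480,    # team size x days x ideal hours per day
    "planned_capacity": 340,  # capacity the team planned the sprint against
    "estimated_hours": 300,   # sum of the original task estimates
    "committed_tasks": 20,    # tasks signed up for at sprint planning
    "completed_tasks": 17,    # committed tasks actually finished
}

print("Ideal utilization:   ", pct(sprint["actual_hours"], sprint["ideal_capacity"]), "%")
print("Planned utilization: ", pct(sprint["actual_hours"], sprint["planned_capacity"]), "%")
print("Sprint task complete:", pct(sprint["completed_tasks"], sprint["committed_tasks"]), "%")
# One simple reading of estimate accuracy: estimated hours vs. hours booked.
print("Estimate accuracy:   ", pct(sprint["estimated_hours"], sprint["actual_hours"]), "%")
```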

How have these metrics worked out?

These metrics have served us well; however, some of them can be frowned upon since they appear too focused on performance. What we’ve found is that you adapt the metrics as the teams mature and the inevitable organizational change occurs. For instance, Utilization was always a product of ideal hours, so you generally saw 60-68% utilization for teams that performed project tasks and also played a role in level 3 support. By plotting utilization over time, we arrived at a median utilization rate of 67%. Once we felt this stayed level, we turned our focus to planned utilization, meaning: are we planning our sprint capacity accurately and therefore filling the buckets properly? We are an organization that uses story points; however, we’ve found velocity to be cyclical based on the phase of the project and the opportunities brought forth by the business (e.g., several large deals sign, and we find ourselves involved in implementations that are not managed via Scrum, so velocity collapses). Nonetheless, now that we’ve moved to looking at planned utilization, we are able to establish a level of consistency and trust with the business: when we say we have X capacity and can deliver by Y, the business is lock-step saying the same thing, because they too see the metrics.
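
As a small illustration of the “plot utilization over time” step, the median falls out directly from the per-sprint figures; the numbers below are invented to land near the 67% mentioned above.

```python
# Median ideal utilization across several sprints; the per-sprint figures
# are invented to land near the ~67% median mentioned above.
from statistics import median

ideal_utilization = [0.62, 0.66, 0.71, 0.67, 0.64, 0.69, 0.68]  # one value per sprint

print(f"Median ideal utilization: {median(ideal_utilization):.0%}")
```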

Lessons Learned

Well, we’ve learned that the data wasn’t useful in short cycles; it became useful once we had five or six sprints’ worth of data points, at which point adjustments to the process could be made. Also, by that time, the reasons you would expect things to go sideways were right there on the charts (e.g., the defect rate went up prior to the release cycle, as airspeed slowed on feature development and the focus turned to technical debt). We also learned that tracking estimate accuracy and utilization altered the teams’ behaviors, and not in a good way. The teams started finding ways to work the metrics in their favor. They did so even after being told that we weren’t measuring their performance with these numbers, but using them to identify opportunities for improvement and demonstrate successful delivery. We had folks who would sandbag their estimates, or underestimate, work late, and then log work for the exact hours they estimated.

We also learned the obvious and, frankly, what we wanted to see: once a team started to understand the metrics, they matured as a team. Part of this was simply time and the improvement of the Scrum/Agile processes. The other part was the fact that we chose specific metrics to focus on, especially those surrounding productivity and quality. We started conducting group code review meetings and pairing up more on tasks we deemed higher risk.

What to Look at Next

We need to start looking at tracking business value better, perhaps by using earned value. The challenge we have is that we generally don’t put a cost on projects; we are a small software firm that runs with department budgets only and assesses costs along the line of business. It seems we would make better decisions along the way if we kept costs and revenues in plain view during the development process.
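
For reference, the basic earned-value arithmetic is standard (the John Rusk paper in the resources above walks through applying it to agile projects); here is a minimal sketch with hypothetical figures.

```python
# Standard earned-value arithmetic. Figures are hypothetical; as noted,
# we don't currently put a cost on projects, so treat this purely as
# an illustration.
planned_value = 120_000  # budgeted cost of work scheduled to date
earned_value = 100_000   # budgeted cost of work actually completed
actual_cost = 110_000    # what the completed work actually cost

spi = earned_value / planned_value  # schedule performance index (<1: behind)
cpi = earned_value / actual_cost    # cost performance index (<1: over budget)
print(f"SPI={spi:.2f}, CPI={cpi:.2f}")
```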

We also need to look at measurements surrounding quality automation, such as unit test coverage and LOC test coverage.
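
As a sketch of what the LOC measure boils down to, line coverage is simply covered executable lines over total executable lines, per module and overall; the counts below are invented.

```python
# LOC (line) coverage: covered executable lines over total executable
# lines, per module and overall. The per-module counts are invented.
modules = {
    "billing":   {"covered": 420, "executable": 500},
    "reporting": {"covered": 180, "executable": 300},
}

for name, counts in modules.items():
    print(f"{name}: {100.0 * counts['covered'] / counts['executable']:.1f}% line coverage")

total_covered = sum(m["covered"] for m in modules.values())
total_executable = sum(m["executable"] for m in modules.values())
print(f"overall: {100.0 * total_covered / total_executable:.1f}% line coverage")
```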

I’m sure there are other areas, but now that we have a good handle on consistency and trust in delivering what we said we would, it is time to focus on business value and look deeper into quality. Both of these will help make the products and the business more successful.
