Velocity is a much-misunderstood concept, and the purpose of this note is to help clarify what it is. The Scrum Guide does not mention velocity, but we know that it plays an important role in helping most Scrum Teams keep process variance in check and create a basis for short-term forecasting. We have six patterns that discuss velocity in a focused way:
Velocity also plays a strong supporting role in many other patterns, such as Release Plan, Kaizen Pulse and Teams That Finish Early Accelerate Faster. Velocity in itself is not a pattern (it doesn’t really build anything, but rather is a property of the Sprint and the team) and this note is here to help you understand the term as the patterns use it and to understand how you should use it with your team.
A Development Team’s velocity is a (usually unitless) number that indicates the team’s capacity to complete potentially shippable work in a given Sprint. Velocity is intended to be a measure of the team’s efficiency: how much work the team completes per unit time, where Sprints are the units of time (e.g., two weeks).
Velocity is not a measure of effort: the effort that a Stable Team expends during a Sprint is constant because the Sprint duration is constant. Rather, velocity represents the total amount of work Done (see Definition of Done) during a Sprint, as the sum of the estimates that the team created for the Product Backlog Items (PBIs) at the beginning of the Sprint. Each of those estimates forecasts merely how much work it will take to complete a given PBI relative to the amount of work for other PBIs (see Estimation Points). Velocity varies from Sprint to Sprint as a consequence of natural variance and of the many factors affecting efficiency (loss of a team member, manufacturing equipment problems, variance in requirements quality, and so on).
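The arithmetic just described can be sketched as follows. The item names, point values, and function name here are hypothetical, for illustration only:

```python
# Velocity for one Sprint: the sum of the estimates of the PBIs that met
# the Definition of Done. Unfinished items contribute nothing, no matter
# how much effort they absorbed. (Illustrative data; points are unitless.)

def sprint_velocity(pbis):
    """pbis: list of (estimate_in_points, done) tuples for one Sprint."""
    return sum(points for points, done in pbis if done)

# Hypothetical Sprint: four items forecast, three finished.
sprint = [(5, True), (3, True), (8, False), (2, True)]
print(sprint_velocity(sprint))  # 10 -- the unfinished 8-point item doesn't count
```

Note that a nearly finished item counts for zero: velocity rewards only work that is Done.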
The velocity is the Development Team’s number. The team derives the number from historic data, based on recent Sprints. The team calls the items that add up to its velocity — the PBIs or Sprint Backlog Items (SBIs) it forecasts it will deliver — its forecast for the given Sprint. In the same sense that yesterday’s weather is a good predictor of today’s weather, performance in recent Sprints is a good indicator of what the team will achieve in the Sprint at hand. The team can use velocity as a forecast of the work they will complete, but it is not a target or a guarantee for stakeholders. By analogy, if you forecast tomorrow’s weather as reaching 77°F and sunny, that isn’t a target for the weather, but only an informed guess of what the weather may be.
It is good practice to use recent history to inform the forecast for the upcoming Sprint: that is the essence of the pattern Yesterday’s Weather. At any given time the team’s velocity is a reasonable forecast of the team’s future performance based on an average of the team’s recent velocities. Because there is variance in any process, some Sprints will exhibit a higher-than-average velocity and some a lower-than-average velocity. When the Development Team uses velocity to size the amount of work to take into a Sprint (the Sprint Backlog) — a recommended practice — then the team should expect to finish all items on that backlog only 50 percent of the time. Note that the Scrum Team commits to the Sprint Goal and not their forecast delivery.
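A minimal sketch of the Yesterday’s Weather idea, assuming a simple mean over a recent window (the window size of three Sprints is our assumption for illustration, not a rule from the pattern):

```python
# Yesterday's Weather sketch: forecast the next Sprint's capacity as the
# mean of the last few Sprints' velocities. Because of natural variance,
# roughly half of Sprints will come in above this number and half below.
from statistics import mean

def yesterdays_weather(velocities, window=3):
    """Forecast next Sprint's velocity from the most recent Sprints."""
    recent = velocities[-window:]
    return mean(recent)

history = [18, 22, 20, 19, 21]  # hypothetical velocities, most recent last
print(yesterdays_weather(history))  # mean of 20, 19, 21 -> 20
```

Using the mean as the Sprint Backlog size is exactly why the team should expect to finish everything only about half the time: the forecast sits in the middle of the distribution, not at its edge.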
Velocity has a mean and a variance, and both are important to forecasting. A velocity with low variance leads to more precise forecasts, and makes it possible to assess whether a given kaizen (see Kaizen and Kaikaku) improved (or diminished) the team’s performance in the Sprints where they applied it.
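To make the mean-and-variance point concrete, here is a small sketch with hypothetical before-and-after velocity data around a kaizen; the variable names and numbers are ours, not from the pattern:

```python
# Mean and sample standard deviation of recent velocities. A smaller
# spread means tighter forecasts; comparing means before and after a
# kaizen gives a rough read on whether it helped. (Hypothetical data.)
from statistics import mean, stdev

before = [18, 22, 17, 23, 20]   # velocities before the kaizen
after = [21, 22, 23, 21, 23]    # velocities after the kaizen

print(mean(before), round(stdev(before), 2))  # mean 20, wider spread
print(mean(after), round(stdev(after), 2))    # mean 22, tighter spread
```

With only a handful of Sprints per sample, such a comparison is suggestive rather than conclusive; a lower variance is what makes the comparison meaningful at all.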
Teams can reduce their velocity’s variance through attentiveness to sound Scrum practices. Here are a few of the core practices that are rooted in Development Team autonomy:
Current broad practice bases velocity on the Development Team’s estimates rather than on either measured effort or measured results. In theory, we could make the velocity more precise by re-estimating the actual amount of work (in relative units) each PBI took after the Sprint is over, and summing those re-estimates into a velocity. It’s rarely worth the trouble. Estimates tend to converge over time, and the pessimism of one estimate usually offsets the optimism of another. If the team is using Estimation Points, the estimates are unitless, so there is no absolute scale against which the average could be systematically optimistic or pessimistic. Feedback and Yesterday’s Weather (as just described) drive both optimism and pessimism out of the forecast. In practice, teams find they can regularly use their velocity to predict what they will deliver in the upcoming Sprint with a precision of plus or minus 20 percent, or even much better.
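The plus-or-minus-20-percent figure can be checked against history with a few lines of arithmetic; the function name and data below are hypothetical:

```python
# Rough check of forecast precision: did each Sprint's actual velocity
# land within plus-or-minus 20 percent of the forecast? (Hypothetical data.)

def within_tolerance(forecast, actual, tolerance=0.20):
    """True if actual is within tolerance * forecast of the forecast."""
    return abs(actual - forecast) <= tolerance * forecast

forecast = 20  # forecast velocity; the 20% band is 16..24
for actual in (17, 20, 23, 25):
    print(actual, within_tolerance(forecast, actual))
```

A team whose actuals regularly fall outside the band has a signal to investigate in the Sprint Retrospective rather than a reason to pad estimates.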
Many teams (even Scrum teams) measure velocity in terms of absolute time. However, most people can’t even tell you how many hours they “work” a day (work on actual PBIs), let alone how much time they spend on a given item. We recommend relative estimation instead — and velocity is hence unitless.
Other teams use sizing instead of estimation: for example, partitioning the Sprint Backlog into tasks of equal size and using the number of Done tasks as the velocity. However, how such a velocity converts back into Product Backlog estimates is unclear.
Many managers we have worked with believe they can increase a team’s output by impressing on them the need to “work harder.” But lack of developer effort is rarely the problem. Velocity isn’t so much an indicator of how many coffee breaks the team is or isn’t taking, but rather of how smartly and effectively the team is working. We want to focus on efficiency rather than brute force.
We offer these other guidelines for proper interpretation of velocity:
The term velocity started becoming au courant in the year 2000, replacing load factor, which was too complex, as discussed by Kent Beck, Don Wells, and Robert C. Martin.
 Jeff Sutherland and Ken Schwaber. “The Scrum Guide.” Scrumguides.org, http://www.scrumguides.org, July 2016 (accessed 19 June 2017).
Kent Beck, Don Wells, and Robert C. Martin. “Velocity vs [sic] Load Factor.” C2.com, http://wiki.c2.com/?VelocityVsLoadFactor, 16 Feb. 2000 (accessed 19 June 2017).