
Single Leg Hop Test Protocol: The Complete Clinical Guide
Team Meloq
Author

An athlete finishes rehab, clears the hop test, posts a comfortable symmetry score, and gets back into training. Then the knee flares up again, or worse, a new injury appears within weeks. Most clinicians have seen some version of that sequence.
The problem usually isn't that hop testing was the wrong choice. The problem is that the single leg hop test protocol was reduced to a tape measure and a pass mark. Distance matters. Symmetry matters. But neither tells you enough about how the athlete creates force, absorbs load, or controls the landing.
A modern return-to-sport assessment has to do more than ask, “How far did they hop?” It has to ask, “What strategy did they use, and can I measure it reliably?”
Beyond the Tape Measure: Why Return-to-Sport Needs an Upgrade
The single leg hop test remains one of the most useful field tests in lower-limb rehabilitation because it compresses strength, confidence, coordination, and power into a simple task. That simplicity is also why clinicians trust it. It is quick to run, easy to repeat, and easy to document.
But a simple test can create false reassurance when it is interpreted too narrowly. An athlete can produce an acceptable hop distance while still landing stiffly, limiting knee flexion, shifting trunk position, or protecting one side in ways that the naked eye only partly captures. If the clinician records distance alone, the chart may show progress while the movement strategy still shows compensation.

Why the old pass/fail mindset falls short
Distance-based limb symmetry became popular because it offers a clean number. Numbers feel objective, and in many ways they are. The problem starts when one number carries more decision-making weight than it should.
Clinicians often see this in late-stage ACL rehab. The athlete can jump far enough. They may even look “good enough” at normal speed. Yet the landing is noisy. The knee doesn't flex well. The body unloads the involved side immediately after contact. Those details often sit outside a traditional paper protocol.
Clinical reality: A hop that looks acceptable from the front of the clinic can still be mechanically poor on landing.
That gap matters because return-to-sport decisions are rarely undermined by one obvious deficit. They are undermined by small, repeated errors in load acceptance and control that subjective observation struggles to quantify consistently between clinicians and between sessions.
What a better standard looks like
The upgrade isn't abandoning the hop test. It is using the hop test properly. That means three things:
- Standardize the task so the result is repeatable.
- Measure performance objectively so change over time is believable.
- Assess movement quality so distance doesn't hide poor control.
The single leg hop test protocol should function as a structured measurement process, not a casual screen. That shift changes how the data are used. Instead of treating hop distance as the answer, clinicians can treat it as one part of a broader readiness profile.
A tape measure still has value. It just shouldn't work alone. When return-to-sport decisions carry real consequences, subjective impressions need support from reproducible metrics and better documentation. That is where modern practice has moved, and rightly so.
The Evidence-Based Foundation of the Hop Test Battery
The hop test battery has lasted in clinical practice because it solves a real problem. It gives clinicians a practical way to challenge unilateral function under speed, distance, repetition, and landing demands without needing a full laboratory setup. Used well, it provides a strong bridge between isolated strength testing and unrestricted sport.
The cornerstone remains the single hop for distance. According to Physiopedia's overview of hop testing, test-retest reliability for the single leg hop test for distance shows ICC values from 0.92 to 0.97, which is strong enough to support longitudinal clinical use. The same source reports collegiate normative values of 192±20 cm for males and 149±17 cm for females, and notes that healthy subjects average 100% limb symmetry index, with recommendations shifting toward 95 to 100% rather than relying on a lower passing threshold in return-to-sport decisions.[1]
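As a rough illustration of how those norms can anchor interpretation, a limb's distance can be expressed as a z-score against the quoted collegiate means. The sketch below is illustrative only: the (mean, SD) pairs come from the figures cited above, but the z-score framing and the example distances are not part of any cited protocol.

```python
# Where does a measured hop sit against the quoted collegiate norms?
# (mean, SD) in cm, from the normative values cited above.
NORMS_CM = {"male": (192.0, 20.0), "female": (149.0, 17.0)}

def hop_z_score(distance_cm: float, sex: str) -> float:
    """Standardized distance relative to the collegiate norm for that sex."""
    mean, sd = NORMS_CM[sex]
    return (distance_cm - mean) / sd

# Example: a 132 cm hop for a female athlete sits one SD below the norm.
print(round(hop_z_score(132.0, "female"), 2))  # -1.0
```

A check like this is most useful on the uninvolved limb, where it flags whether the comparison side is itself performing at a credible level.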

The four tests and what each one contributes
The standard battery is stronger than any single hop alone because each test stresses the limb differently.
| Test | Primary demand | What it helps reveal |
|---|---|---|
| Single hop for distance | Maximal horizontal propulsion in one effort | Unilateral power and confidence |
| Triple hop for distance | Repeated force production with continued control | Power maintenance and transition between landings |
| Crossover hop for distance | Repeated hopping with frontal plane challenge | Dynamic control during directional change |
| 6 m timed hop | Fast cyclic hopping over a set distance | Speed, rhythm, and reactive control |
The battery works because it captures different expressions of lower-limb function. A patient may perform well in a single maximal effort but lose quality when asked to repeat the task. Another may look strong in straight-line hopping but struggle once a frontal-plane demand appears.
Reliability matters more than convenience
A test is clinically useful only if the result is stable enough to trust. That sounds obvious, but many field-based decisions are still made from loosely delivered tests with inconsistent cueing, inconsistent trial counts, or unclear validity rules. Reliable tests protect against that.
The single hop test's reliability gives it real clinical value, but only when the protocol is standardized. That includes where the athlete starts, how the landing is judged, how many attempts are allowed, and exactly where distance is measured. Once those details drift, the test stops being a measurement and becomes a rough impression.
Reliable tests don't just reduce error. They improve communication between clinicians, surgeons, coaches, and the athlete.
The wider hop battery also helps reduce overconfidence in any one result. A patient who clears one hop task but underperforms in another hasn't “failed” the process. The battery has revealed that function is more specific than a single score suggests. That is one reason many clinicians now lean on batteries instead of isolated tests when clearance decisions matter.
For a broader view of how hop tests sit within performance assessment, Meloq's overview of physical performance tests is a useful companion read.
Norms are helpful, but context still rules
Normative values are useful anchors, particularly when the clinician wants to know whether the uninvolved side is functioning at a credible level rather than merely serving as a weak comparison point. But norms are not a substitute for context. Sport demands, athlete size, training age, and injury history still shape interpretation.
That is why the hop battery works best when it is treated as a decision framework, not a checklist item. The protocol gives structure. The reliability gives confidence. The battery gives breadth. Clinical judgment still decides what the pattern means.
A Standardized Protocol for Reliable and Repeatable Testing
Good hop testing starts before the first jump. If the setup is inconsistent, the athlete adapts to the environment rather than revealing their true capacity. In practice, most unreliable results come from ordinary errors. The start line changes. The cue changes. One clinician allows a recovery step, another doesn't. By the time you compare sessions, the number looks precise but the process wasn't.
A defensible single leg hop test protocol should control the parts of testing that are under the clinician's influence. That is what protects inter-rater and intra-rater reliability.

Set the environment before you test the athlete
Use the same surface each session, the same footwear rules, and the same start line. Remove unnecessary variability. A tape line on the floor, a clear landing zone, and a written validity rule do more for data quality than is generally recognized.
For the single hop, standard protocols described in the clinical literature use a start close to the line on one foot, one practice trial, then three scored trials, with the heel landing position used for distance measurement on valid attempts.[1] That detail matters. If one assessor measures from the toe and another from the heel, progress can be an artifact of the method rather than the athlete.
A simple pre-test checklist helps:
- Warm-up first: Use a consistent dynamic warm-up so the first scored trial isn't also the movement rehearsal.
- Explain validity rules: The athlete should know what counts as a failed attempt before the testing starts.
- Keep cueing stable: Use the same verbal instruction across limbs and sessions.
- Document pain or apprehension: A good score reached through guarded behavior needs context in the notes.
Administer each hop with clear validity criteria
Each task needs a slightly different emphasis.
For the single hop for distance, instruct the athlete to start on one foot close to the line, hop forward as far as possible, and stick the landing. Measure the valid attempt from the start line to the heel at landing.[1]
For the triple hop, momentum is the point. The athlete performs three consecutive forward hops without pausing, and the final landing must be held for at least 2 seconds without balance loss. Loss of momentum between hops invalidates the trial, and that matters clinically because repeated hopping can discriminate injured from uninjured limbs better than a single effort in some contexts, as described in the triple hop protocol summary from Moticon.[2]
For the crossover hop, the same principle applies, except the athlete must repeatedly cross the center line while maintaining rhythm and control. The task adds frontal-plane demand and often exposes athletes who can generate distance but don't control redirection well.
For the 6 m timed hop, speed tends to tempt compensation. Athletes may rush into noisy contacts or lose positional control to save time. That is why timing must be paired with clear observation of strategy, not used in isolation.
Practical rule: If you can't state exactly why a trial was valid, you shouldn't record it.
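Those validity rules are easier to enforce when every trial is recorded with an explicit reason. The sketch below is a minimal illustration built on the criteria described above; the field and function names are invented for this example, and it applies the 2-second hold to every task, which is a simplifying assumption.

```python
from dataclasses import dataclass

@dataclass
class HopTrial:
    task: str            # e.g. "single_hop", "triple_hop"
    distance_cm: float   # measured from start line to heel at landing
    stuck_landing: bool  # landing held without balance loss
    hold_seconds: float  # how long the final landing was held
    lost_momentum: bool = False  # relevant for repeated-hop tasks

def trial_is_valid(trial: HopTrial) -> tuple:
    """Return (valid, reason) so every recorded trial has a stated reason."""
    if not trial.stuck_landing:
        return False, "landing not stuck"
    if trial.task == "triple_hop" and trial.lost_momentum:
        return False, "momentum lost between hops"
    if trial.hold_seconds < 2.0:
        return False, "final landing held < 2 s"
    return True, "met all validity criteria"

print(trial_is_valid(HopTrial("triple_hop", 410.0, True, 1.4)))
```

The design point is the tuple: the reason travels with the verdict, so the chart never contains an invalid trial without a stated cause.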
Common errors that corrupt the data
The mistakes below are common enough to deserve active prevention:
- Too much learning during scored trials: One practice trial per side is part of standard protocols for a reason. Without rehearsal, later trials may improve simply because the athlete finally understands the task.
- Inconsistent trial numbers: Keep the same number of scored trials each session. "Best of three" only means something if it remains best of three.
- Loose landing standards: If one therapist accepts a touch-down of the opposite foot and another doesn't, your database isn't comparable.
- Changing the order casually: A standardized order, or randomization, reduces order effects and fatigue bias.
Documentation that supports decisions
Reliable testing also means reliable charting. Record the task, side, valid distance or time, failed-trial reason, symptoms, and observable movement notes. That sounds basic, but sparse documentation is one reason hop test data often become hard to use later in rehab.
A strong note doesn't just say the athlete “passed.” It records what was measured, how it was measured, and what movement pattern accompanied the score. That is what lets another clinician repeat the test with confidence and compare like with like.
Interpreting the Data: From Limb Symmetry to True Readiness
Most clinicians calculate limb symmetry index (LSI) by dividing the involved limb score by the uninvolved limb score and multiplying by 100. It is a useful summary because it converts raw performance into an easily understood comparison. For years, that made LSI the center of hop test interpretation.
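The calculation reads directly into code. One caveat: for the 6 m timed hop, where a lower time is better, some clinicians invert the ratio; the sketch below covers the distance-based case described above.

```python
def limb_symmetry_index(involved: float, uninvolved: float) -> float:
    """LSI (%) = involved / uninvolved * 100, for distance-based hop tests."""
    return involved / uninvolved * 100

# Example: 158 cm involved vs. 171 cm uninvolved single hop.
print(round(limb_symmetry_index(158.0, 171.0), 1))  # 92.4
```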
The problem isn't LSI itself. The problem is treating it as a complete answer.
Why symmetry can look better than function
Distance-based hop testing tells you how much horizontal output the athlete produced. It doesn't fully tell you how the output was achieved. That distinction becomes important when an athlete has regained enough propulsion to meet a benchmark but still lacks clean landing mechanics.
The qualitative gap has been described clearly in the literature. The review on hop-test movement quality in the ACL rehabilitation article indexed on PMC notes that standard protocols often focus on LSI >90%, while poor biomechanics such as stiff landings, limited knee flexion, and high vertical ground reaction forces can still persist in athletes who pass those symmetry criteria.[3] The same review highlights the practical problem many clinicians already feel in the clinic. There is no widely adopted standardized rubric for quantifying those qualitative factors.[3]
That leaves clinicians in an awkward position. They know movement quality matters, but they are often forced to describe it with language that isn't easy to reproduce between raters.
Passing LSI means the limbs performed similarly in the tested outcome. It does not prove the movement was well controlled.
What clinicians should look for around the number
A useful interpretation process asks three questions:
- Was the score acceptable? The raw distance or time still matters.
- Was the comparison acceptable? Symmetry remains clinically relevant, especially when tracked over time.
- Was the strategy acceptable? Strategy is what makes many return-to-sport decisions safer or riskier.
In practice, a hop with immediate trunk correction, short landing absorption, or visible unloading after contact should prompt caution even if the spreadsheet shows a pass. The number is real. It is just incomplete.
A related issue is structural asymmetry that changes how you interpret distance and landing mechanics. If you need a concise refresher on how baseline asymmetry can influence lower-limb testing, PosturaZen's essential guide to leg length discrepancy is a helpful contextual resource.
The decision error clinicians need to avoid
The common error is to overvalue what is easiest to count and undervalue what is hardest to standardize. That is understandable, but it is not good enough for return-to-sport decisions.
Hop testing should sit alongside strength data, symptom response, sport demands, and movement analysis. That is especially true in knee rehabilitation, where isolated functional tests can hide deficits that emerge under higher-load tasks. Clinicians working in ACL pathways will find this particularly relevant in Meloq's discussion of ACL rehabilitation strength testing protocols, benchmarks and return-to-sport criteria.
The takeaway is simple. Symmetry is necessary, but it isn't sufficient. If the athlete passes the number but fails the landing, the test has not granted clearance. It has identified the next question.
Integrating Objective Measurement in Modern Hop Testing
Once movement quality becomes part of the decision, the next problem appears immediately. Observation alone is limited. Two experienced clinicians can watch the same landing and agree broadly that it “doesn't look right,” yet disagree on the size of the deficit, the phase where it occurs, or whether it has improved since last month.
That is where objective tools change the role of hop testing. They don't replace clinical reasoning. They make it more defensible.

What force plates add to a hop test
A tape measure records the endpoint. A force plate records what happened during contact. That difference is enormous in late-stage rehab.
Portable force plates let the clinician examine loading asymmetry, impact strategy, and stabilization behavior during single-leg tasks. If the athlete reaches a respectable hop distance but lands with a sharp, poorly controlled force profile, the clinician no longer has to rely on a vague note like “stiff landing observed.” The landing can be quantified and tracked.
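As an illustration of the kind of metric a force plate makes available, the sketch below compares peak vertical ground reaction force between limbs from two force-time traces. The toy trace values and the simple percent-difference metric are assumptions for this example; commercial force-plate software reports far richer variables (loading rate, impulse, time to stabilization).

```python
# Illustrative landing-asymmetry metric from vertical force-time data (N).

def peak_force_asymmetry(involved_n, uninvolved_n):
    """Percent difference in peak vertical ground reaction force."""
    peak_inv, peak_uninv = max(involved_n), max(uninvolved_n)
    return (peak_inv - peak_uninv) / peak_uninv * 100

# Toy traces: a sharper spike on the involved side suggests a stiffer,
# less absorbed landing even when hop distance looked symmetric.
involved = [0, 400, 1900, 2600, 1500, 900, 700]
uninvolved = [0, 500, 1400, 2000, 1700, 1100, 800]
print(round(peak_force_asymmetry(involved, uninvolved), 1))  # 30.0
```

A quantified 30% peak-force difference turns "stiff landing observed" into a number that can be tracked across sessions.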
This is one reason force data fit naturally into return-to-sport testing. They make visible the part of performance that distance tends to hide. Meloq discusses this broader shift well in its article on objective outcome measurement in physiotherapy.
Strength and angle measurement still matter
Hop performance also depends on capacities outside the hop test itself. If the athlete cannot produce or absorb force well, the hop pattern often becomes a compensation strategy rather than a display of restored function.
A handheld dynamometer gives the clinician a quantifiable way to assess the force-producing capacity that manual muscle testing often overestimates. In landing-focused rehab, that matters most when you are examining the athlete's ability to control deceleration rather than survive it.
A digital inclinometer or digital goniometer adds another layer. If the clinician is concerned about reduced knee flexion at landing, angle measurement turns that concern into a reproducible value. Instead of writing “looked shallow,” the chart can document the landing position with a standardized hardware-based measurement process.
Applied point: The better the measurement method, the less the return-to-sport decision depends on memory and opinion.
One practical example is a setup combining a portable force plate for landing data, a digital dynamometer for unilateral strength profiling, and a digital inclinometer for confirming knee flexion position during landing drills. In that kind of workflow, Meloq tools such as EasyBase, EasyForce, and EasyAngle can sit inside a standardized testing pathway without changing the underlying clinical logic. They provide objective force, strength, and angle data that support repeatable decisions.
A short clinical example
A field-sport athlete in late ACL rehabilitation completes the single hop and triple hop with acceptable distance symmetry. On paper, the testing day looks successful. But the landing appears abrupt on the involved side, and the athlete seems to exit the contact quickly.
The clinician repeats the task on a portable force plate and sees an asymmetric landing strategy with poor load acceptance on the involved limb. Handheld dynamometry shows that the athlete can generate force reasonably well, but the eccentric control profile remains the concern. A digital angle measure during single-leg landing drills confirms limited knee flexion relative to the opposite side. Clearance is delayed, not because the athlete failed to hop far enough, but because the data show the involved limb still avoids normal landing behavior.
The point isn't that technology made the decision stricter. It made the decision more accurate.
What doesn't work in modern practice
Some habits don't hold up well anymore:
- Relying on manual observation alone: useful for screening, weak for longitudinal comparison.
- Using manual muscle testing as the primary strength decision tool: too subjective for high-stakes return-to-sport judgments.
- Recording only a final hop distance: incomplete when landing quality is the concern.
- Treating tech as decoration: data only help if they are built into a repeatable protocol.
The best clinical systems aren't necessarily the most complicated. They are the ones that reduce ambiguity. In hop testing, that means measuring the jump, the landing, and the capacities that support both.
Advanced Considerations and Clinical Nuances in Hop Testing
Real athletes rarely fit textbook assumptions. Youth athletes are still developing. Some sports reward linear stiffness and speed, while others expose the athlete to repeated cutting and frontal-plane load. Bilateral history, pain behavior, and confidence all complicate interpretation. The protocol has to stay standardized, but the reasoning around it has to stay flexible.
When the opposite leg is still a reasonable benchmark
A frequent concern is whether the non-injured leg has become too deconditioned during rehab to serve as a valid reference. That concern is sensible. If the comparison limb has declined substantially, LSI may flatter recovery.
Current evidence offers an important nuance. A 2022 study in the International Journal of Sports Physical Therapy examined 169 return-to-sport athletes after lower-extremity injury and found no significant difference in hop performance between their non-injured limb and that of healthy matched controls.[4] For many high-impact sport clearance scenarios, that supports continued use of the contralateral limb as a practical benchmark.[4]
That does not mean the opposite leg is always trustworthy. It means clinicians shouldn't dismiss it automatically.
Cases that need extra caution
Some situations still demand a broader reference framework:
- Bilateral injuries or symptoms: a side-to-side comparison loses value quickly.
- Youth athletes: growth and maturation can complicate interpretation.
- Very long rehab timelines: the clinical picture may call for more external benchmarks.
- Sport-specific demands: a field athlete who cuts repeatedly may need more than straight-line hop competence.
Use the contralateral limb as a reference when it is credible. Verify credibility rather than assuming it or rejecting it.
Matching the test to the task
Experienced clinicians also adjust emphasis according to the sport. A linear sport may tolerate a clearer focus on propulsion and repeated straight-line hopping. A cutting sport often demands closer attention to crossover tasks, landing control, and frontal-plane strategy.
In those cases, the hop battery becomes more informative when paired with objective force profiling rather than expanded endlessly with more field tests. Meloq's overview of the portable force plate is relevant here because it reflects how many clinicians now bring laboratory-style force assessment into routine rehab and field settings.
The key nuance is this. Standardization does not mean rigidity. The protocol stays stable so the measurement remains reliable. The interpretation adapts to the athlete in front of you.
Conclusion: The New Standard for Return-to-Sport Decisions
The single leg hop test still deserves its place in modern rehabilitation. It is practical, clinically meaningful, and supported by strong reliability when the protocol is standardized. But the old version of hop testing, where distance alone drives the decision, no longer matches what clinicians know about re-injury risk and persistent movement deficits.
A better standard combines three layers of information.
First, measure performance. Distance and time still matter.
Second, assess movement quality. A symmetrical result is not the same thing as a safe or efficient strategy.
Third, validate what you see with objective measurement. Force plates, handheld dynamometry, and digital angle assessment give clinicians a way to quantify what subjective observation often misses or describes inconsistently.
That combination changes hop testing from a checkbox into a decision framework. It improves documentation. It improves repeatability between clinicians. It makes longitudinal tracking more credible. Above all, it helps protect athletes from being cleared on a number that looked good while the movement underneath it was still incomplete.
The new standard isn't tape measure versus technology. It is tape measure plus technology, inside a strict protocol, interpreted with clinical judgment.
Meloq supports this shift toward objective rehabilitation and performance testing with portable tools for force, strength, and range-of-motion measurement. If you're building a more reproducible hop testing workflow, Meloq is one option to explore for integrating quantified data into everyday clinical decisions.
[1] Physiopedia, Hop Test.
[2] Moticon, standardized triple hop protocol summary.
[3] PMC review on qualitative movement assessment in hop tests.
[4] International Journal of Sports Physical Therapy, contralateral limb reference study.
