Safety Metrics: How to Use Leading and Lagging Indicators That Actually Predict Risk

Safety metrics like TRIR and DART tell you what happened. Learn how to pair them with leading indicators to predict where your next incident is coming from.

Updated February 27, 2026 · 8 min read

Reviewed by: SafetyRegulatory Editorial Team

Regulation check: February 27, 2026

Next scheduled review: August 27, 2026

Your TRIR was 0.8 last year. That’s below the industry average. Management is pleased. And then someone gets hurt in week three of the new year, and you’re scrambling to explain how your metrics didn’t predict it.

That’s the problem with lagging indicators. They’re accurate. They’re auditable. They tell you exactly what happened. But they look backward, and backward-looking data can’t tell you where your next incident is going to come from.

A good safety metrics program uses both. Lagging indicators confirm your historic performance. Leading indicators tell you whether the conditions for an incident are building right now.

The Standard Lagging Metrics and How to Calculate Them

Lagging indicators measure incidents that have already occurred. The four standard metrics that most safety professionals track are TRIR, DART rate, LTIR, and fatality rate. All four use 200,000 employee-hours as the base, which represents 100 workers working 40 hours per week for 50 weeks.

TRIR is Total Recordable Incident Rate. The formula:

(Number of OSHA recordable incidents x 200,000) / Total hours worked

If your facility had 4 recordable incidents last year and worked 500,000 hours, your TRIR is 1.6. A recordable incident under OSHA’s definition includes any work-related injury or illness that results in medical treatment beyond first aid, restricted work, days away from work, loss of consciousness, or diagnosis of a significant condition.
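The arithmetic is simple enough to script. A minimal Python sketch of the TRIR formula above (the function name is illustrative, not from any standard library):

```python
def trir(recordable_incidents: int, hours_worked: float) -> float:
    """Total Recordable Incident Rate per 200,000 employee-hours."""
    return recordable_incidents * 200_000 / hours_worked

# The worked example from the text: 4 recordables over 500,000 hours.
print(trir(4, 500_000))  # 1.6
```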

DART rate is Days Away, Restricted, or Transferred. The formula:

(Number of DART incidents x 200,000) / Total hours worked

DART only counts incidents that resulted in at least one day away from work, restricted duty, or job transfer. It’s a subset of TRIR and measures incident severity more than frequency.

LTIR is Lost Time Incident Rate, sometimes called LWDIR (Lost Workday Incident Rate). It narrows further, counting only incidents with days away from work:

(Number of lost time incidents x 200,000) / Total hours worked

Fatality rate is calculated the same way, using confirmed work-related fatalities in the numerator. Most facilities operate at a zero fatality rate for most years, which makes it a poor routine metric for ongoing program management. It matters for trend analysis across larger organizations and industries.
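Because all four lagging metrics share the same 200,000-hour base, one helper covers them all. A sketch, with illustrative case counts and hours:

```python
def incidence_rate(cases: int, hours_worked: float) -> float:
    """OSHA-style incidence rate per 200,000 employee-hours.

    The same formula serves TRIR, DART, LTIR, and fatality rate --
    only the definition of a countable 'case' changes.
    """
    return cases * 200_000 / hours_worked

hours = 500_000  # illustrative annual hours for one facility
print(incidence_rate(4, hours))  # TRIR if 4 cases were recordable: 1.6
print(incidence_rate(2, hours))  # DART if 2 involved days away/restriction/transfer: 0.8
print(incidence_rate(1, hours))  # LTIR if 1 involved days away: 0.4
```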

Benchmarking Against Your Industry

The Bureau of Labor Statistics publishes annual incidence rates by industry NAICS code. Find your NAICS code, pull the BLS incidence rate tables for the most recent year, and compare your TRIR and DART rate to your subsector average.

A TRIR of 2.0 means something very different in logging than it does in finance and insurance. Context is everything.

If your TRIR is above the industry average, you have documented underperformance relative to peers. If it's below, you're doing better than average, but that still doesn't mean you're safe.

Below-average TRIR feels like a success. It might be. Or it might mean low reporting rates, a younger workforce that hasn’t accumulated enough exposure hours yet, or a run of luck that’s about to end.

Why TRIR Alone Is Not Enough

Zero-injury periods don’t prove a safe workplace. They may prove that hazards haven’t triggered an injury yet.

A facility with 50 workers and a zero TRIR for three years sounds excellent. But if that facility has degrading machine guarding and a supervisor who discourages reporting minor incidents, the zero TRIR is hiding a hazard profile building toward a serious event.

Small workforces also produce volatile TRIR. With 50 workers, one recordable incident in a 100,000-hour year produces a TRIR of 2.0. Two incidents is 4.0. Year-over-year comparison is almost meaningless at small sites. At a 5,000-worker facility, the TRIR stabilizes because the denominator absorbs statistical noise.
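The small-denominator effect is easy to see numerically. A quick sketch using the hour totals from the paragraph above (headcounts and hours are illustrative):

```python
BASE = 200_000  # employee-hours base used by all OSHA incidence rates

# 50-worker site (~100,000 hours/year): each recordable swings TRIR by 2.0.
print([n * BASE / 100_000 for n in (0, 1, 2)])      # [0.0, 2.0, 4.0]

# 5,000-worker site (~10,000,000 hours/year): each moves it by only 0.02.
print([n * BASE / 10_000_000 for n in (0, 1, 2)])   # [0.0, 0.02, 0.04]
```

The same one-incident difference that doubles a small site's TRIR is statistical noise at the large site, which is why year-over-year comparison only works once the denominator is big enough.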

Using TRIR as the sole basis for safety bonuses makes this worse. When managers know their review depends on staying at zero, the incentive not to report goes up.

Leading Indicators: What They Are and Why They Work

Leading indicators measure activities and conditions that predict future incident rates. They look forward.

Near-miss reports are the most direct leading indicator. A near-miss is an unplanned event that didn’t cause injury or damage but had the potential to. Every near-miss that gets documented and investigated is a free lesson about where your hazard controls are failing. Near-misses that aren’t reported are the same lesson that nobody learns.

Safety observation completion rates measure whether supervisors and workers are actively looking at conditions and behaviors before something goes wrong. If you set a target of 40 observations per month and you’re completing 12, that gap tells you something real about supervision engagement.

Job Hazard Analysis (JHA) completion rates track whether your pre-job hazard review process is actually happening. If JHAs are required for specific high-risk tasks and completion data shows they’re being skipped, that’s a leading indicator of elevated risk.

Training completion rates matter most for competency-sensitive tasks. If 30% of your forklift operators are past their certification renewal date, that’s a leading indicator. The incident hasn’t happened, but the conditions for it are present.

Corrective action close-out time measures how long it takes to complete action items generated from inspections, near-miss investigations, and audits. Long close-out times mean your corrective action process is a paper trail, not a hazard elimination program. OSHA’s Recommended Practices for Safety and Health Programs specifically identifies corrective action tracking as a core program element.

Safety committee participation rates track whether your safety committee is functioning as a genuine program input or just as a compliance checkbox. Low or declining participation often signals that management isn’t acting on committee recommendations, which erodes the whole purpose of having a committee.

Building a Balanced Scorecard

Keep it small. Three to five lagging indicators and three to five leading indicators, reviewed monthly at the management level.

More metrics don’t produce more safety. They produce more time in spreadsheets and less time on the floor. Pick metrics that match where your actual risk lives.

A balanced scorecard for a mid-size manufacturing operation might look like this. Lagging: TRIR, DART rate, and LTIR. Leading: near-miss reporting rate, JHA completion rate for high-risk tasks, safety observation completion rate, and corrective action close-out time (target under 30 days).

Post these somewhere visible to the people whose work affects them. A safety scorecard buried in a monthly email to management is a management report, not a safety tool.

Presenting Metrics to Management

Safety rates don’t move executives the way financial numbers do. Translate them.

The National Safety Council publishes annual cost data per medically consulted injury and per death in its Injury Facts report. Use those figures with your own workers' compensation (WC) claim history to put a dollar value on your incident rate.

Frame DART cases in cost terms. If each DART case at your facility averages $38,000 in direct costs (verifiable with your WC carrier and NSC data), and you had 6 DART cases last year, that’s $228,000 in direct costs. Indirect costs often run two to five times that.
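As a sketch of that translation (the $38,000 per-case figure is the article's illustrative number; verify it against your own WC claim history before presenting it):

```python
direct_cost_per_dart = 38_000  # illustrative; confirm with your WC carrier / NSC data
dart_cases = 6

direct = direct_cost_per_dart * dart_cases
print(f"Direct costs: ${direct:,}")                           # Direct costs: $228,000
print(f"Indirect (2-5x): ${2 * direct:,} to ${5 * direct:,}")
```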

A safety program audit that costs $15,000 and prevents two DART cases at that $38,000 average pays for itself roughly five times over in direct costs alone. That's the math that gets budget approved.

Common Mistakes

Tracking too many metrics is one of the most common ways safety programs fail. When a monthly safety report has 25 line items, nobody reads it carefully, nobody owns the trends, and the metrics become a reporting burden rather than a management tool.

Tracking metrics that nobody reviews is the same problem from a different angle. If leading indicator data goes into a database that one person looks at once a quarter, those metrics aren’t functioning as safety inputs. They’re compliance documentation.

Using TRIR as the sole basis for safety bonuses corrupts the data. When supervisors know their bonus depends on zero recordables, incidents get reclassified as first aid and workers get pressured not to see the clinic. Your TRIR stops measuring safety performance and starts measuring how motivated people are to hide injuries. OSHA inspectors look for this specifically during inspections.


Frequently Asked Questions

  • What is a good TRIR for a manufacturing facility? That depends entirely on your NAICS code. BLS publishes industry-average TRIR by sector annually. For manufacturing broadly, the 2023 BLS average TRIR was around 2.7. For specific subsectors like motor vehicle parts manufacturing, it runs higher. For pharmaceutical manufacturing, it runs lower. Always benchmark against your specific industry code, not manufacturing as a whole.

  • How do I calculate DART rate? DART rate formula: (Number of DART incidents x 200,000) divided by total hours worked. A DART incident is any recordable case that results in at least one day away from work, restricted duty, or job transfer. Count the number of cases that meet any of those criteria, multiply by 200,000, divide by total hours worked for the period.

  • What leading indicators should a small business track? For a small facility with limited resources, track three things: near-miss reports per month, percentage of corrective actions closed on time, and training completion rate for safety-critical tasks. These three metrics are low-cost to track, directly actionable, and tied to the most common failure points in small-facility safety programs.

  • Can I use safety metrics for OSHA compliance purposes? OSHA 300 log recordkeeping is a regulatory requirement, not optional. TRIR and DART calculations come from your 300 log data. OSHA's Voluntary Protection Programs (VPP) and Safety and Health Achievement Recognition Program (SHARP) both use TRIR thresholds as eligibility criteria. See the OSHA 300 log guide for recordkeeping requirements.

  • How do I get workers to report near-misses? The most effective approach is removing any negative consequence from reporting and visibly closing the loop. When a near-miss gets reported, investigate it, fix the hazard, and communicate what changed because of the report. Workers stop reporting near-misses when they see that reports go nowhere. They start reporting when they see reports produce action.


At most organizations that track it, the near-miss rate sits at zero. Not because near-misses aren't happening, but because the reporting culture isn't there. The near-miss rate is the single most valuable leading indicator you can track, and a zero near-miss rate at any active facility is almost certainly a measurement failure, not a safety success.