How to Use CAM Benchmarks Without Overfitting: A Data Methodology Guide
Industry CAM benchmarks are useful for identifying outliers in your portfolio — but treating them as performance targets or budget ceilings leads to bad decisions. This guide explains where benchmark data comes from, what it actually measures, and how to build your own portfolio-specific reference points.
By Angel Campa, Founder, CapVeri · Updated April 2026
Quick Answer
CAM benchmarks provide useful context for identifying outliers in your portfolio but should not be used as performance targets. Too many property-specific factors drive legitimate variation: building vintage, local labor costs, amenity level, and lease structure all create differences that have nothing to do with whether the reconciliation is accurate. The right use of benchmarks is to trigger investigation of specific line items — not to accept or reject a reconciliation based on whether total CAM falls within a published range.
Sources of CAM Benchmark Data and Their Limitations
Understanding where published benchmark data comes from is essential to interpreting it correctly. The major sources each have structural biases that limit their applicability in specific contexts.
BOMA Experience Exchange Report (BOMA EER)
The most widely cited source for commercial office operating expense benchmarks. Published annually, the BOMA EER covers hundreds of office buildings and breaks down expenses by category (janitorial, HVAC, security, administration, management fee, insurance, taxes) and by building class and size cohort.
Key limitations: Self-reported data from participating buildings — properties that choose not to participate (often those with anomalous expense profiles) are excluded. Coverage skews toward institutional-grade office in major markets. The survey captures total operating expenses, not just CAM-recoverable expenses, so comparison to a specific lease's CAM pool requires adjustment.
CBRE and JLL Annual Market Reports
Brokerage market reports provide broad coverage across property types and markets but are less granular on individual expense line items. They are more useful for directional market context (overall operating expense trends, market rent levels) than for specific CAM category benchmarking.
Key limitations: Methodology varies year-to-year; figures may represent median, mean, or market survey responses. Not audited. Better for macroeconomic context than property-level CAM comparison.
IREM Income/Expense Analysis (OEMA)
The Institute of Real Estate Management publishes operating expense data across property types, with particularly strong coverage of apartment and smaller commercial properties. Useful for retail and smaller office portfolios.
Key limitations: Less coverage of Class A institutional office and large industrial portfolios. Useful for community retail and strip centers; less useful for regional malls or large distribution centers.
Survivor bias note: All self-reported benchmark surveys share a structural limitation: properties with unusual expense profiles (very high or very low) are less likely to participate. The published ranges describe the properties that opted in to the survey, which tend to be better-managed, institutional-quality stock. Your 1972 building with deferred HVAC maintenance may legitimately sit outside the reported range for reasons that have nothing to do with overbilling.
Building Your Own Portfolio Benchmarks
The most actionable benchmarks for detecting CAM errors are portfolio-internal — comparing your own properties against each other after controlling for the factors that drive legitimate variation. Here is a practical methodology:
1. Normalize to cost per rentable SF
Convert all expense figures to $/RSF/year. This eliminates size as a confounding variable. A $2M CAM pool in a 200,000 SF building ($10/SF) is not directly comparable to a $500,000 CAM pool in a 100,000 SF building ($5/SF) without this step.
2. Segment by property type and vintage
Do not mix office and industrial in the same benchmark group. Within office, separate pre-2000, 2000–2015, and 2015+ vintage cohorts. Building age is a strong predictor of HVAC, roofing, and elevator maintenance costs — mixing vintages creates misleading averages.
3. Apply a market cost index
Adjust each property's expenses by a market-level labor cost index before comparing across geographies. CBRE and JLL publish market cost indices; alternatively, use Bureau of Labor Statistics regional wage data for building maintenance occupations. Without this step, a San Francisco building will always look expensive compared to a comparable Dallas building regardless of management quality.
4. Identify outliers, then investigate
Flag properties that are more than 20–25% above the adjusted group mean for specific expense categories. Investigate outliers by drilling to the GL detail rather than accepting or rejecting the variance based on the benchmark alone. The goal is to understand whether the variance has a structural explanation (the building has a loading dock that others do not) or a process explanation (an expense was miscoded).
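The four steps above can be sketched in code. This is a minimal illustration, not a production tool: the property names, expense figures, market index values, and the 25% threshold are all hypothetical placeholders you would replace with your own portfolio data and a published cost index.

```python
# Portfolio-internal CAM benchmark sketch. All figures are illustrative
# assumptions, not published benchmark data.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Property:
    name: str
    prop_type: str      # e.g. "office", "industrial" -- never mix types in a group
    year_built: int
    rsf: int            # rentable square feet
    cam_expense: float  # annual CAM pool, $
    market_index: float # labor-cost index, 1.0 = baseline market (hypothetical values)

def vintage_cohort(year: int) -> str:
    # Step 2: vintage cohorts from the methodology above
    if year < 2000:
        return "pre-2000"
    return "2000-2015" if year <= 2015 else "2015+"

def flag_outliers(portfolio: list[Property], threshold: float = 0.25):
    """Flag properties more than `threshold` above their adjusted group mean."""
    groups = defaultdict(list)
    for p in portfolio:
        # Step 1: normalize to $/RSF; Step 3: divide by the market cost index
        adj_psf = (p.cam_expense / p.rsf) / p.market_index
        # Step 2: group by property type and vintage cohort
        groups[(p.prop_type, vintage_cohort(p.year_built))].append((p.name, adj_psf))
    flags = []
    for key, members in groups.items():
        mean = sum(v for _, v in members) / len(members)
        for name, v in members:
            # Step 4: an outlier triggers a GL-level investigation, not a conclusion
            if v > mean * (1 + threshold):
                flags.append((name, key, round(v, 2), round(mean, 2)))
    return flags

portfolio = [
    Property("Tower A", "office", 1998, 200_000, 2_000_000, 1.20),
    Property("Tower B", "office", 1995, 150_000, 1_200_000, 1.00),
    Property("Tower C", "office", 1990, 100_000, 1_400_000, 1.00),
]
print(flag_outliers(portfolio))
```

With these sample numbers, Tower C's adjusted $14.00/RSF exceeds the group mean of roughly $10.11/RSF by more than 25%, so it alone is flagged; the output tells you which property and cohort to drill into at the GL level, not whether the reconciliation is wrong.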
When to Use Benchmarks — and When Not To
Use benchmarks to flag outliers for deeper review. If your insurance expense is running 3x the benchmark for comparable properties, that is a signal to investigate — not a conclusion that you have overbilled tenants. The investigation might reveal that your building is in a wind/flood zone and legitimately carries higher premiums, or it might reveal that the renewal was not competitively bid.
Do not use benchmarks as a budget ceiling. Telling your property manager that janitorial "should be" $1.20/SF because the benchmark is $1.20/SF ignores the specific service level required by your tenant leases. A lease that requires twice-daily cleaning of common areas will produce a janitorial expense above benchmark — and that cost is fully recoverable.
Do not use benchmarks as a substitute for lease review. A reconciliation is accurate or inaccurate based on whether it complies with the lease — not based on whether expenses fall within a published range. The benchmark is the starting point for questions, not the ending point for conclusions.
What Can Go Wrong
Comparing a 1985-vintage building to a Class A benchmark
A property manager who benchmarks a 40-year-old building against BOMA Class A EER data will conclude that every maintenance category is over-budget. Older buildings have higher HVAC maintenance costs, more frequent elevator callbacks, higher plumbing repair frequency, and less energy efficiency. These are structural characteristics of the asset — not evidence of expense mismanagement. The correct benchmark for a 1985 building is a cohort of comparable-vintage buildings in the same market.
Treating the benchmark as a budget limit rather than an outlier flag
When property management teams are evaluated against benchmark targets rather than outlier detection, they have an incentive to defer legitimate maintenance to stay within budget — or to negotiate contracts that appear below benchmark while delivering lower service quality. Benchmarks work well as investigation triggers; they create perverse incentives when used as performance targets.
Using a single-market benchmark for a national portfolio
A national portfolio that uses a single benchmark — often pulled from the largest market in the portfolio — creates systematic comparison errors for every other market. A Denver office property benchmarked against New York BOMA data will always look lean; a Phoenix property benchmarked against San Francisco data will always look expensive. Build market-specific reference ranges or apply explicit geographic adjustments before drawing any conclusions.
Frequently Asked Questions
What are the main sources of CAM benchmark data?
The primary sources are: (1) BOMA Experience Exchange Report — the most widely cited, covering office buildings by class and metro area; (2) CBRE and JLL annual market reports — broad coverage, less granular; (3) IREM OEMA — strong on retail and smaller commercial. Each source has coverage biases: BOMA EER skews toward institutional-quality office; IREM covers smaller commercial portfolios better.
Can I use a CAM benchmark to dispute a reconciliation?
Benchmarks can support a dispute investigation but should not be the primary basis. A lease governs what is recoverable — not what is typical. A landlord can lawfully bill above benchmark if the lease permits it and the expenses are legitimate. Use benchmarks to flag specific line items for closer examination, then trace those back to the lease language, invoices, and GL.
How do I build a portfolio benchmark across different markets?
Segment by property type, normalize to cost per RSF, apply a market cost index for labor, and group by building vintage cohort (pre-2000, 2000–2015, 2015+). Identify outliers at more than 20–25% above the adjusted group mean for specific categories. Investigate outliers by drilling to GL detail rather than accepting or rejecting based on the benchmark alone.
How often should I update my portfolio benchmarks?
At minimum, update benchmarks annually using prior-year actuals. For high-volatility categories — insurance premiums, utility rates, janitorial contracts — track monthly to catch mid-year deviations. In 2025–2026, commercial property insurance in coastal markets has increased 15–30% year-over-year in some cases; a benchmark anchored on 2022 actuals will flag legitimate increases as anomalies.
Related Resources
CAM Benchmarks by Property Type
2026 operating expense ranges for office, retail, and industrial
GL Export QA for CAM
How to validate your GL export before building a reconciliation
Why ERPs Still Leak CAM Revenue
The structural reasons Yardi and MRI export reconciliations contain errors
CAM Reconciliation Software
How CapVeri automates reconciliation and finds billing errors
Turn Benchmarks Into Actionable Findings
CapVeri compares your CAM expense pool against property-type and market benchmarks — then traces outliers back to specific GL entries so you can see exactly which line items warrant investigation.
Get Started Free