What AI Can and Cannot Do in CAM Reconciliation

By Angel Campa, Founder, CapVeri

Drawing the Line

Property controllers are fielding more vendor pitches for "AI-powered CAM reconciliation" than ever. Some of these tools are genuinely useful. Others are dangerous. The difference comes down to one question: is the AI doing extraction or calculation?

That distinction matters because the consequences of getting it wrong are not theoretical. A tenant's auditor who cannot reproduce your reconciliation math has grounds for a dispute. A court that cannot trace your calculation methodology has grounds to void the charge. And an AI model that produces a different number on Tuesday than it did on Monday gives both of them exactly what they need.

This article maps out where AI adds real value in the CAM workflow, where it creates risk, and how to structure a system that uses each tool for what it does well.

Where AI Adds Real Value

1. PDF and Document Extraction

The highest-value AI application in CAM reconciliation is reading documents that were never designed to be read by software.

A 180-page commercial lease contains the CAM provisions you need — buried in Section 8.3, cross-referenced in Exhibit D, and modified by an amendment signed three years after execution. Manually abstracting that lease takes 2–4 hours for an experienced paralegal. AI extraction cuts that to 15–30 minutes of review time.

The same applies to vendor invoices, property tax bills, insurance certificates, and utility statements. AI reads the PDF, extracts the relevant fields (amount, date, vendor, GL category), and presents them for human verification.

Practical example: A 12-property portfolio generates roughly 3,600 vendor invoices per year. At 8 minutes per invoice for manual data entry, that is 480 hours of staff time — about $36,000 at $75/hour fully loaded. AI extraction with human spot-checking reduces that to roughly 120 hours of review time, saving $27,000 annually.
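The arithmetic above can be sanity-checked in a few lines. This is a back-of-envelope sketch using the article's own assumptions (3,600 invoices, 8 minutes each, $75/hour fully loaded, 120 review hours), not measured data:

```python
from decimal import Decimal

invoices_per_year = 3600
minutes_per_invoice = 8
hourly_rate = Decimal("75")

# Manual data entry: 3,600 invoices x 8 min = 480 hours
manual_hours = Decimal(invoices_per_year * minutes_per_invoice) / 60
manual_cost = manual_hours * hourly_rate

# AI extraction with human spot-checking: ~120 hours of review
review_hours = Decimal("120")
review_cost = review_hours * hourly_rate

print(manual_hours, manual_cost, manual_cost - review_cost)  # 480 36000 27000
```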

2. Pattern Recognition and GL Classification

Every property management system exports GL data in its own format. Yardi account 5110 might be "Janitorial Services" for one management company and "Building Cleaning — Common Area" for another. MRI uses a completely different chart of accounts.

AI classification maps incoming GL codes to standardized expense categories based on the account description, historical patterns, and the amounts involved. It handles the 90% of entries that are straightforward and flags the 10% that need human judgment.

Scenario                               | Manual Approach              | AI-Assisted Approach
New property onboarding (200 GL codes) | 4–6 hours to map manually    | 30 minutes to review AI suggestions
Quarterly GL import (500 entries)      | 2–3 hours for classification | 20 minutes to review flagged items
Cross-system migration (Yardi to MRI)  | 8–12 hours for remapping     | 1–2 hours for verification
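The "90% auto, 10% flagged" split comes down to confidence gating. A minimal sketch, assuming a hypothetical classifier that returns a standardized category plus a confidence score (the threshold and field names are illustrative, not any vendor's actual schema):

```python
from typing import NamedTuple

class Suggestion(NamedTuple):
    gl_code: str
    description: str
    category: str      # standardized expense category proposed by the model
    confidence: float  # model confidence, 0.0-1.0

REVIEW_THRESHOLD = 0.90  # illustrative cutoff for routing to a human

def triage(suggestions: list[Suggestion]) -> tuple[list[Suggestion], list[Suggestion]]:
    """Split AI mapping suggestions into auto-accepted vs. needs-human-review."""
    auto = [s for s in suggestions if s.confidence >= REVIEW_THRESHOLD]
    flagged = [s for s in suggestions if s.confidence < REVIEW_THRESHOLD]
    return auto, flagged

auto, flagged = triage([
    Suggestion("5110", "Janitorial Services", "Cleaning - Common Area", 0.97),
    Suggestion("5240", "Misc Bldg Svcs", "Repairs & Maintenance", 0.62),
])
# 5110 is auto-accepted; 5240 goes to a human reviewer
```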

3. Anomaly Detection

AI is effective at identifying patterns that humans miss in large datasets. Year-over-year expense spikes, unusual vendor payment patterns, duplicate invoice detection, and seasonal anomalies all benefit from statistical pattern recognition.

Consider a building where janitorial costs increased 34% year-over-year. A human reviewing 200 GL line items might catch it. AI flags it automatically, along with the fact that landscaping at the same property dropped 28% — suggesting the janitorial vendor may have absorbed landscaping scope at a higher combined price.

What anomaly detection catches:

  • Year-over-year expense increases exceeding 15% without a corresponding explanation
  • Duplicate invoices from the same vendor within 30 days
  • Expense categories that suddenly appear or disappear
  • Seasonal patterns that break from historical norms (e.g., HVAC costs spiking in February)
  • Pro-rata share calculations that don't match the lease square footage

A 200,000 SF office building with $2.1M in operating expenses might have 15–25 anomalies worth investigating per year. Manual review catches maybe half of them. AI detection catches nearly all of them, then a human decides which ones are actual errors versus explainable changes.
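Two of the checks listed above are simple enough to sketch directly. The 15% threshold and 30-day window come from the text; the data shapes are illustrative:

```python
from datetime import date, timedelta

def yoy_spikes(prior: dict[str, float], current: dict[str, float],
               threshold: float = 0.15) -> list[str]:
    """Flag categories that moved more than `threshold` year-over-year."""
    flagged = []
    for category, prior_amt in prior.items():
        curr_amt = current.get(category, 0.0)
        if prior_amt and abs(curr_amt - prior_amt) / prior_amt > threshold:
            flagged.append(category)
    return flagged

def duplicate_invoices(invoices: list[tuple[str, float, date]],
                       window: timedelta = timedelta(days=30)) -> list[tuple]:
    """Flag same-vendor, same-amount invoices dated within `window` of each other."""
    dupes = []
    ordered = sorted(invoices, key=lambda inv: inv[2])
    for i, (vendor, amount, when) in enumerate(ordered):
        for vendor2, amount2, when2 in ordered[i + 1:]:
            if vendor == vendor2 and amount == amount2 and when2 - when <= window:
                dupes.append((vendor, amount, when, when2))
    return dupes

# The janitorial/landscaping pattern from the example above: both legs get flagged
print(yoy_spikes({"Janitorial": 100_000, "Landscaping": 50_000},
                 {"Janitorial": 134_000, "Landscaping": 36_000}))
# ['Janitorial', 'Landscaping']
```

The point is not the code, which any analyst could write; it is that these checks run on every line item, every year, without fatigue.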

Where AI Creates Risk

1. Financial Math

This is the bright line. AI should never perform the actual CAM calculation.

The reason is not that AI gets the math wrong most of the time. It might get it right 95% of the time. The problem is that you cannot prove it is right, and you cannot guarantee it will produce the same answer twice.

A gross-up calculation on a $2.1M expense pool at 73.4% occupancy, grossed up to 95%, has exactly one correct answer under the lease's stated methodology, down to the cent. A deterministic engine produces that answer every time. An LLM might land within a dollar of it, or miss by thousands, and either way you have no calculation ledger showing how it got there.
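Determinism here is a property of the implementation, not a promise. A minimal sketch of one common gross-up convention, using exact Decimal arithmetic; the inputs are illustrative, and the actual methodology is always dictated by the lease:

```python
from decimal import Decimal, ROUND_HALF_UP

def gross_up(variable_expenses: Decimal, occupancy: Decimal,
             gross_up_to: Decimal) -> Decimal:
    """Gross variable expenses up to the stated occupancy level,
    applied only when actual occupancy is below that level."""
    if occupancy >= gross_up_to:
        return variable_expenses
    grossed = variable_expenses * gross_up_to / occupancy
    # Round once, at the end, with an explicit rounding mode
    return grossed.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Illustrative: $1.5M variable expenses at 75% occupancy, grossed up to 95%
result = gross_up(Decimal("1500000"), Decimal("0.75"), Decimal("0.95"))
print(result)  # 1900000.00 -- identical on every run, by construction
```

Float arithmetic would usually get this right too; Decimal with an explicit rounding mode gets it right provably, which is what an auditor cares about.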

When a tenant's auditor asks to see the gross-up methodology, "the AI calculated it" is not an answer that survives a deposition.

The math that must be deterministic:

  • Gross-up calculations (occupancy adjustment)
  • Pro-rata share allocation (SF numerator / SF denominator)
  • Expense cap application (cumulative and non-cumulative)
  • Base year stop calculations
  • Administrative fee computations
  • Year-end true-up settlement amounts
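The cap item in the list above hides a distinction worth making concrete: a cumulative cap compounds off the prior cap, so unused headroom carries forward, while a non-cumulative (year-over-year) cap compounds off what was actually billed. A sketch with illustrative figures:

```python
from decimal import Decimal

def capped_expense(actual: Decimal, prior_cap: Decimal, rate: Decimal,
                   cumulative: bool) -> tuple[Decimal, Decimal]:
    """Return (billable amount, next year's cap base) for one reconciliation year."""
    cap = prior_cap * (Decimal("1") + rate)
    billable = min(actual, cap)
    # Cumulative: next year's cap compounds off this year's cap.
    # Non-cumulative: next year's cap compounds off this year's billed amount.
    return billable, (cap if cumulative else billable)

# Base year billed $100,000 under a 5% cap; actuals of $102,000 then $115,000
b1, c1 = capped_expense(Decimal("102000"), Decimal("100000"), Decimal("0.05"), cumulative=True)
b2, _ = capped_expense(Decimal("115000"), c1, Decimal("0.05"), cumulative=True)
# Cumulative: year-2 cap compounds to 110,250, so 110,250 is billable

nb1, nc1 = capped_expense(Decimal("102000"), Decimal("100000"), Decimal("0.05"), cumulative=False)
nb2, _ = capped_expense(Decimal("115000"), nc1, Decimal("0.05"), cumulative=False)
# Non-cumulative: year-2 cap is 107,100 (5% over year-1 billed), so less is recoverable
```

Over a ten-year term the two conventions diverge by real money, which is exactly why the logic must be explicit code, not a model's best guess.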

2. Lease Interpretation

A lease clause that reads "Tenant shall pay its proportionate share of Operating Expenses, excluding capital expenditures with a useful life in excess of five years" requires legal and business judgment to apply. What counts as a capital expenditure? Who determines useful life? Does a roof repair with a 7-year useful life qualify as recoverable?

AI can extract that clause from the lease document. It should not decide what it means. Lease interpretation involves context, precedent, negotiation history, and sometimes litigation risk assessment. These are human decisions.

Real-world example: A tenant's lease excludes "structural repairs." The building needs $85,000 in parking garage waterproofing. Is the parking deck structural? In some jurisdictions, yes. In others, it depends on the specific work performed. An AI model will give you a confident-sounding answer — but it is not practicing law, and treating its output as a legal determination is a liability trap.

3. Sign-Off Authority

No AI system should authorize the release of a CAM reconciliation statement to tenants. The sign-off decision involves:

  • Verifying that the calculation matches the lease terms (human judgment)
  • Confirming that exclusions are properly applied (legal interpretation)
  • Deciding whether to adjust borderline items in the tenant's favor (business judgment)
  • Approving the timing of delivery relative to lease deadlines (risk management)

These decisions carry financial and legal consequences. They require a human who can be held accountable.

CapVeri's Architecture: AI for Extraction, Deterministic Code for Math

CapVeri was built around the principle that AI and deterministic calculation each have a role — and mixing them up creates the exact problems that CAM reconciliation platforms are supposed to solve.

The workflow separates cleanly into four stages:

Stage 1 — AI Extraction: Lease documents, vendor invoices, and GL exports are processed by AI to extract structured data. OCR reads scanned documents. Classification models map GL codes to expense categories. The output is a set of structured data fields, not calculations.

Stage 2 — Human Verification: Every AI-extracted value is presented for human review before it enters the calculation pipeline. A controller confirms that the lease square footage is correct, that the expense categories are properly mapped, and that the extracted cap provisions match the actual lease language.

Stage 3 — Deterministic Calculation: The verified data feeds into a Python calculation engine using exact Decimal arithmetic. Gross-up, pro-rata share, expense caps, base year stops, and settlement amounts are all computed deterministically. The same inputs produce the same outputs every time, with a full step-by-step calculation ledger.
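The "step-by-step calculation ledger" can be as simple as recording every operation's inputs and output alongside the result. A minimal sketch; the field names are illustrative, not CapVeri's actual schema:

```python
from decimal import Decimal

class Ledger:
    """Append-only record of every calculation step, for later replay."""

    def __init__(self) -> None:
        self.steps: list[dict] = []

    def record(self, step: str, formula: str, inputs: dict,
               result: Decimal) -> Decimal:
        self.steps.append({
            "step": step,
            "formula": formula,
            "inputs": {k: str(v) for k, v in inputs.items()},
            "result": str(result),
        })
        return result

ledger = Ledger()
pool = Decimal("2100000")
share = Decimal("12500") / Decimal("200000")  # tenant SF / building SF
tenant_share = ledger.record(
    "pro-rata share",
    "expense_pool * (tenant_SF / building_SF)",
    {"expense_pool": pool, "tenant_SF": 12500, "building_SF": 200000},
    pool * share,
)
# ledger.steps now holds the full, replayable calculation path
```

Because every step stores its own inputs, reproducing a three-year-old reconciliation means replaying the ledger, not reconstructing it from memory.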

Stage 4 — Audit Trail Storage: Every calculation, every input, and every intermediate step is stored and retrievable. Three years from now, when a tenant's auditor asks how you arrived at their $47,312 CAM charge, you can reproduce the entire calculation path.

Stage        | Technology                  | Why
Extraction   | AI (OCR + classification)   | Speed and scale on unstructured documents
Verification | Human review                | Judgment, context, accountability
Calculation  | Deterministic Python engine | Reproducibility, audit defensibility
Storage      | Immutable audit trail       | Long-term retrievability for disputes

How to Evaluate AI Claims from Vendors

When a CAM software vendor says "AI-powered reconciliation," ask these three questions:

1. Does the AI perform the actual calculation, or just the extraction? If the AI performs the calculation, ask for the reproducibility guarantee. Can you run the same inputs twice and get the identical result? If not, the audit trail is broken.

2. Is there a human verification step before extracted data enters the calculation? AI extraction without human verification means you are trusting a probabilistic model to correctly interpret every lease clause, every GL code, and every invoice amount. The error rate on extraction is typically 3–7% — acceptable when a human reviews the output, unacceptable when it feeds directly into billing.

3. Can you export the full calculation ledger? A defensible reconciliation requires a step-by-step ledger showing every input, every formula applied, and every intermediate result. If the platform produces only a final number and a summary, you cannot defend it in a dispute.

The Bottom Line

AI is not the enemy of accurate CAM reconciliation. Misapplied AI is. The technology is genuinely transformative for the extraction, classification, and anomaly detection stages of the workflow. It has no business performing financial calculations that must be reproducible, auditable, and legally defensible.

The controllers who get this right will process reconciliations faster, catch more errors, and produce cleaner statements than their peers. The ones who hand the entire workflow to an AI model will discover the problem the first time a tenant's auditor asks to see the math.
