This project benchmarks New York State local governments using bulk data from official government sources. The goal is to publish comparable metrics with clear provenance so that readers can evaluate claims, reproduce results, and improve the dataset.

For a more detailed technical discussion, see the Methodology page in the app.

Data sources

Office of the New York State Comptroller (OSC)

The OSC collects standardized Annual Financial Reports from local governments across New York State. This is an invaluable public resource — the Comptroller’s office does the hard work of defining reporting standards and gathering data from thousands of jurisdictions.

NY Benchmark imports OSC data for all 61 cities that file with the Comptroller (NYC has its own Comptroller — see below), plus 57 counties, 933 towns, 558 villages, and 689 school districts. The dataset spans 1995 through the present and includes revenue, expenditure, and balance sheet data at the fund and account-code level — over 9.7 million individual observations.

U.S. Census Bureau

Demographic data (population, median household income, poverty rates, and more) is imported from the Census Bureau’s American Community Survey 5-year estimates, covering 2010 through the present — over 63,000 observations across cities, counties, and school districts.

NYC Comptroller ACFR

New York City has its own Comptroller and is not part of the OSC reporting system. NYC financial data comes from the Annual Comprehensive Financial Report (ACFR) published by the NYC Comptroller’s office. We import the Ten Year Trend statistical tables covering FY 2016-2025, including expenditures by functional category and agency, revenue by source, and fund balance classifications — approximately 760 observations.

Key differences from OSC data:

  • Fiscal year — NYC's fiscal year runs July 1 to June 30; most other NY cities report on the calendar year.
  • Fund balance classification — NYC's General Fund balance is entirely Restricted and Committed (including a ~$2B Revenue Stabilization Fund); there is no Unassigned General Fund balance.
  • Scale — NYC's $117B budget is roughly 100x that of the next largest NY city, so per-capita and percentage metrics are more meaningful for comparison than absolute dollar amounts.

NYS Comptroller — Fiscal Stress Monitoring System (FSMS)

The OSC’s Fiscal Stress Monitoring System evaluates the fiscal health of local governments using composite scoring. Each entity receives a fiscal stress score (0-100) based on financial indicators and an environmental stress score (0-100) based on external factors like poverty, population change, and property values. Entities are designated as “Significant Fiscal Stress,” “Moderate Fiscal Stress,” “Susceptible to Fiscal Stress,” or “No Designation.”

NY Benchmark imports FSMS scores for cities, counties, towns, and villages (2012-2024) and school districts (2013-2025) from the OSC’s published Excel workbooks — over 166,000 observations. The scoring methodology changed in 2017; we normalize pre-2017 scores to the 100-point scale for trend consistency. NYC, Big Five school districts, and Union Free special-act districts are exempt from FSMS. The Stress Analysis page visualizes these scores interactively.
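The pre-2017 normalization can be sketched as a linear rescale. The `old_max` value below is purely illustrative — the actual conversion depends on the old rubric's point scale, which is documented on the Methodology page, not here:

```python
def normalize_pre2017(score, old_max=80.0):
    """Linearly rescale a pre-2017 FSMS score onto the 100-point scale.

    `old_max` (the top of the old rubric) is an assumed placeholder for
    illustration, not the published conversion factor.
    """
    return 100.0 * score / old_max

# A score of 40 on a hypothetical 80-point rubric maps to 50 on the new scale.
rescaled = normalize_pre2017(40.0, old_max=80.0)
```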

What NY Benchmark adds

The raw data from OSC and Census is publicly available, but it is not designed for benchmarking. OSC publishes annual CSVs organized by year and fund — useful for auditors and researchers, but not structured for the kind of cross-city, multi-year comparisons that residents and policymakers need.

NY Benchmark transforms this data into something comparable:

  • Derived metrics — Fund Balance as a % of Expenditures, Debt Service as a % of Expenditures, Per-Capita Spending — that distill complex financial statements into numbers you can compare across cities of different sizes.
  • Fund normalization — An all-fund approach that includes spending from General, Water, Sewer, Highway, and other funds while excluding custodial pass-throughs (Trust & Agency fund) and interfund transfers that would otherwise double-count or inflate totals.
  • Trend charts — Data since 1995 visualized per city, so you can see trajectories, not just snapshots.
  • Rankings — Cities ranked on key fiscal health metrics, surfacing both best practices and outliers.
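The derived metrics and the all-fund exclusions above reduce to simple ratios over filtered sums. A minimal sketch — the fund codes, row layout, and dollar amounts are hypothetical, not the real OSC schema:

```python
# Custodial pass-through funds excluded from all-fund totals.
# "TA" (Trust & Agency) is the example named in the text; the code is illustrative.
EXCLUDED_FUNDS = {"TA"}

def total_expenditures(rows):
    """Sum expenditures across all funds, excluding custodial funds and
    interfund transfers, which would otherwise double-count spending."""
    return sum(
        r["amount"]
        for r in rows
        if r["fund"] not in EXCLUDED_FUNDS and not r["is_transfer"]
    )

def fund_balance_pct(fund_balance, expenditures):
    """Fund Balance as a % of Expenditures."""
    return 100.0 * fund_balance / expenditures

def per_capita(amount, population):
    """Per-capita spending."""
    return amount / population

# Hypothetical city: General, Water, an interfund transfer, and a custodial fund.
rows = [
    {"fund": "GEN", "amount": 40_000_000, "is_transfer": False},
    {"fund": "WAT", "amount": 8_000_000, "is_transfer": False},
    {"fund": "GEN", "amount": 2_000_000, "is_transfer": True},   # excluded
    {"fund": "TA",  "amount": 5_000_000, "is_transfer": False},  # excluded
]
spend = total_expenditures(rows)                # 48,000,000
fb_pct = fund_balance_pct(12_000_000, spend)   # 25.0
pc = per_capita(spend, 30_000)                 # 1,600 per resident
```

Excluding the transfer and the custodial fund is what keeps the all-fund total from inflating: only the $48M of actual spending counts.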

Coming soon

  • Side-by-side comparison — Select two or more cities and compare them on any metric, with population-adjusted context.
  • Metric leaderboards — Rank all cities on specific spending categories (e.g., police spending per capita, fire department costs, debt service burden).
  • Category drill-downs — Break down broad categories (Public Safety, Debt Service) into their components (Police, Fire, Interest, Principal) across cities.
  • Cross-entity-type analysis — Compare cities vs. villages vs. towns on comparable metrics, once those entity types are imported.
  • Demographic context — Understand spending differences in light of poverty rates, population density, and other factors that affect what local governments need to provide.

Data principles

1) Official sources first

Data is imported from official government databases (OSC, Census Bureau) rather than secondary summaries or news reports. Individual city ACFRs are consulted for quality assurance and validation — see the Audit Time blog post for an example of this verification process.

2) Provenance is mandatory

Each data point traces to a source: the OSC dataset and year, the Census survey and variable, or (for manually-entered data) a specific document and page number.
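One way to make that traceability concrete is to attach source fields to every observation. This is a hypothetical schema for illustration, not the project's actual data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    """A single data point with mandatory provenance (illustrative schema)."""
    entity: str          # e.g. "City of Albany"
    fiscal_year: int
    metric: str
    value: float
    source: str          # "OSC", "ACS", "ACFR", or "manual"
    source_detail: str   # dataset + year, survey + variable, or document + page

# Every record carries enough detail to re-locate it at the source.
obs = Observation(
    entity="City of Albany",          # placeholder entity
    fiscal_year=2023,
    metric="total_expenditures",
    value=210_000_000.0,              # placeholder value
    source="OSC",
    source_detail="Annual Financial Report bulk export, 2023",
)
```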

3) Conservative interpretation

When reporting practices differ across jurisdictions, the project prefers conservative, well-documented approaches over aggressive normalization. For example, the all-fund approach was chosen specifically to handle the fact that cities organize their funds differently — some run water and sewer through the General Fund, others use separate Enterprise funds.

4) Definitions are explicit

Derived metrics like Fund Balance % and Debt Service % have specific, documented formulas. When definitions are revised (as happened when custodial pass-throughs and interfund transfers were excluded), the change is documented and applied consistently across all cities and years.

5) Comparisons require context

Raw numbers can mislead. Per-capita normalization is a start, but meaningful benchmarking will ultimately require demographic, economic, and service-level context. This is an ongoing effort.

Known limitations

  • Late and non-filing cities — Four cities (Mount Vernon, Ithaca, Rensselaer, Fulton) have not filed recent financial reports with the OSC. Their data ends at their last filing year. See the Non-Filers page for details.
  • NYC data from ACFR — New York City is not in the OSC system. NYC data comes from the NYC Comptroller’s ACFR (FY 2016-2025). NYC’s fund balance classification differs from other cities (no Unassigned General Fund balance), which affects Fund Balance % comparisons.
  • Reporting differences — Cities organize their funds and accounts differently. The all-fund approach handles most of this, but edge cases exist (see the Methodology page for specifics on custodial pass-throughs and the Plattsburgh debt service case).
  • Census margins of error — ACS 5-year estimates for smaller cities can have wide confidence intervals, particularly for income and poverty metrics.

Corrections and contributions

If you believe a metric is incorrect or ambiguous, please email [email protected] with the city, fiscal year, specific metric, and a citation to the relevant source.

Methods will evolve as the dataset grows; changes will be documented publicly.