OpenChargeback

I looked at the enterprise FinOps platforms. The demos were slick — dashboards, anomaly detection, multi-cloud cost optimization. They wanted $50K a year, which is roughly what we spend on a mid-size research allocation, and they’d need a dedicated person to run them. That person would also be me.

Research computing groups are not enterprises. We don’t have billing teams. We have one person — sometimes half a person — who tracks what 40 PIs spent on infrastructure last month and reports it back in whatever format the finance office currently demands. That person, at Drexel, is also me.

So I built the tool I actually needed.

The core insight

Show the list price even when the bill is zero.

If a PI’s HPC allocation is fully subsidized by a departmental grant, they still see: List Price $2,400 / Discount -$2,400 / Amount Due $0. That line — the one most billing systems would skip entirely — is the whole point. Because the moment IT goes to the provost asking for $2M to refresh the storage array, someone’s going to ask “does anyone even use this?” The zero-dollar statement is the answer. It makes the value of institutional infrastructure legible — for grant budgets, for annual reports, for the conversation that decides whether the cluster gets funded next year.
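The zero-dollar line is just arithmetic, but it's worth making the shape explicit. A minimal sketch, with invented names (this is not OpenChargeback's actual schema):

```python
# Hypothetical sketch: a fully subsidized statement line still carries
# the list price, so the value of the infrastructure stays visible.
def statement_line(list_price: float, subsidy_pct: float) -> dict:
    """Build a statement line that preserves list price even at 100% subsidy."""
    discount = round(list_price * subsidy_pct, 2)
    return {
        "list_price": list_price,
        "discount": -discount,
        "amount_due": round(list_price - discount, 2),
    }

# A fully subsidized HPC allocation: the bill is zero, the value is not.
line = statement_line(2400.00, 1.0)
print(line)  # {'list_price': 2400.0, 'discount': -2400.0, 'amount_due': 0.0}
```

The point is in what the function refuses to do: it never drops the list price, no matter how deep the discount goes.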

Numbers as narrative. Not as armor.

How it works

Everything flows through the FOCUS format — the FinOps Foundation’s open billing standard. AWS billing exports, Azure cost data, Slurm job records, storage utilization reports — if it can produce a CSV, OpenChargeback can ingest it. FOCUS is the common interchange; everything normalizes into the same shape before a single charge gets calculated.
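The normalization step can be sketched in a few lines. The column names (`ChargePeriodStart`, `SubAccountName`, `ConsumedQuantity`, `ListCost`, `ServiceName`) come from the FOCUS spec; the Slurm-style input columns and the mapping itself are illustrative, not OpenChargeback's actual ingest code:

```python
import csv
import io

# Hypothetical mapping from a Slurm-style usage CSV into FOCUS columns.
SLURM_TO_FOCUS = {
    "job_start": "ChargePeriodStart",
    "account": "SubAccountName",
    "cpu_hours": "ConsumedQuantity",
}

def normalize(row: dict, rate_per_cpu_hour: float) -> dict:
    """Reshape one provider-specific row into the common FOCUS shape."""
    focus = {SLURM_TO_FOCUS[k]: v for k, v in row.items() if k in SLURM_TO_FOCUS}
    focus["ListCost"] = round(float(row["cpu_hours"]) * rate_per_cpu_hour, 2)
    focus["ServiceName"] = "HPC Compute"
    return focus

raw = "job_start,account,cpu_hours\n2025-06-01,smith_lab,1200\n"
rows = [normalize(r, 0.05) for r in csv.DictReader(io.StringIO(raw))]
print(rows[0]["ListCost"])  # 60.0
```

Every source gets its own small mapper; everything downstream only ever sees FOCUS.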

Charges go through a two-stage review workflow. Close a billing period and it soft-locks: no new charges land, but reviewers can still adjust and PIs can still dispute. Finalize it and the numbers are permanent. That safety window between close and finalize is where institutional trust gets built. Nobody wants to argue about a charge they saw for the first time on a finalized statement.
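The two-stage workflow is a small state machine. A sketch with assumed state names (not OpenChargeback's actual model):

```python
from enum import Enum, auto

class PeriodState(Enum):
    OPEN = auto()       # charges accumulate
    CLOSED = auto()     # soft-locked: adjustments and disputes still allowed
    FINALIZED = auto()  # permanent: the numbers never change again

class BillingPeriod:
    def __init__(self):
        self.state = PeriodState.OPEN
        self.adjustments = []

    def close(self):
        self.state = PeriodState.CLOSED

    def adjust(self, note: str):
        # The safety window: adjustments are legal any time before finalization.
        if self.state is PeriodState.FINALIZED:
            raise ValueError("period is finalized; numbers are permanent")
        self.adjustments.append(note)

    def finalize(self):
        self.state = PeriodState.FINALIZED

p = BillingPeriod()
p.close()
p.adjust("PI dispute: idle-node hours waived")  # still allowed after close
p.finalize()
# p.adjust("anything") would now raise ValueError
```

The whole trust argument lives in that one `if` statement: disputes are cheap before finalization and impossible after.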

The output is template-driven. PDF statements for PIs. GL journal entries for finance. CSV exports for whatever else. Jinja2 templates, so whatever format your institution’s systems eat, you write a template for it. It fits into whatever you already have; it doesn’t try to replace it.
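What a statement template looks like in practice, roughly. The fields and layout below are invented; the pattern (one Jinja2 template per output format, rendered from the same charge data) matches the design:

```python
from jinja2 import Template

# Illustrative plain-text statement template. A GL export or CSV would
# just be another template over the same data.
tmpl = Template(
    "Statement for {{ pi }}\n"
    "{% for c in charges %}"
    "{{ c.desc }}: list ${{ c.list_price }}, discount ${{ c.discount }}, due ${{ c.due }}\n"
    "{% endfor %}"
)

text = tmpl.render(pi="Dr. Smith", charges=[
    {"desc": "HPC allocation (June)", "list_price": 2400, "discount": -2400, "due": 0},
])
print(text)
```

Swapping output formats means writing a new template, not touching the billing logic.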

Right-sized by design

SQLite instead of Postgres, because I didn’t want to build a multi-tenant platform before anyone asked for one. SQLAlchemy underneath, so if someone does want to scale it up, that path exists. Python, Flask, no build step, no external services, no enterprise contract. 239 tests. The kind of tool a single research computing person can deploy, configure, and actually maintain — not the kind that requires a team to babysit.
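The scale-up path is nothing more than a connection string, which is the whole reason to put SQLAlchemy underneath. The URLs below are illustrative:

```python
from sqlalchemy import create_engine

# Zero-setup default: a single file on disk, nothing to administer.
engine = create_engine("sqlite:///chargeback.db")

# If someone ever asks for the multi-tenant platform, the path exists:
# engine = create_engine("postgresql://user@host/chargeback")
```

Nothing else in the application needs to know which line is uncommented.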

Once the cloud providers were figured out, it became clear that the same approach works for anything with usage metadata. Kubernetes clusters, Docker hosts, file storage, GPU allocations. If you can attach tags to a resource, you can generate usage invoices from it. The whole application is metadata-driven; the governance model is up to you.
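"Tags in, invoices out" reduces to a group-by. The records and the tag key below are invented; the pattern is just group usage by an ownership tag, then sum cost:

```python
from collections import defaultdict

# Hypothetical usage records from three very different sources. All that
# matters is that each one carries an ownership tag and a cost.
usage = [
    {"resource": "gpu-node-01", "tags": {"pi": "smith"}, "cost": 310.0},
    {"resource": "k8s-ns/lab",  "tags": {"pi": "smith"}, "cost": 42.5},
    {"resource": "store/vol7",  "tags": {"pi": "jones"}, "cost": 120.0},
]

invoices = defaultdict(float)
for rec in usage:
    invoices[rec["tags"]["pi"]] += rec["cost"]

print(dict(invoices))  # {'smith': 352.5, 'jones': 120.0}
```

The tagging discipline is the hard part, and it's a governance problem, not a software one.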

The other half

OpenChargeback answers “here’s what it cost.” Its companion, OpenResearchDataPlanner, answers the question that comes first: “what will this cost?” One tool for planning, one for accountability. Both built to be forked and owned — clone it, customize the config for your institution, run it.

The spreadsheet, finally, stays in the box.