The Inflated Numbers That Unlock Billions

How known modeling errors keep the federal expansion machine running.

Federally funded transportation projects are often justified using modeling tools that are known to be flawed, opaque, and structurally biased toward expansion. These tools are not designed to predict the future accurately, but to produce a plausible narrative that unlocks federal funding. The modeling error is not an accident; it is a feature of the system.

The consequences are enormous: wasteful spending, local economic harm, professional distortion and the continued expansion of infrastructure that many communities lack the tax base to support. As long as these models go unchallenged, they will perpetuate federal incentives that reward scale over financial productivity and saddle local governments with long-term liabilities.

Background Context

The federal government continues to fund transportation infrastructure as though we are still building the initial Interstate Highway System. That system was completed long ago, and the Highway Trust Fund that financed it has been insolvent for years.

Even so, the money made available through federal transportation programs draws states and local governments toward projects they might not otherwise pursue. These projects are selected not because they are necessary, but because that’s where the funding is.

Traffic modeling is the institutionalized fiction at the center of this pursuit, a mechanism used not to understand the future, but to fabricate a justification for accessing federal dollars. These models create the illusion of future congestion and unmet capacity.

Their projections — rarely tested, often absurd — become the linchpin for justifying expansion projects. In this way, traffic modeling doesn't just support the system, it perpetuates it, driving decision-making that is detached from actual mobility outcomes or local economic benefit.

The Mechanics of the Fallacy

The Static Traffic Assignment (STA) model is the most commonly used method for projecting future traffic. Its basic structure treats each road segment as an isolated unit, applies a volume-to-capacity ratio to that segment, and calculates delay from a curve that assumes more volume means more congestion.

Of course, this simplified model isn’t how traffic works. Real-world traffic is a dynamic system, constantly influenced by bottlenecks, rerouting, and shifting demand. Travelers change behavior in response to congestion. Road segments don’t function independently, and network effects are powerful and non-linear.

STA fails to account for:

  • Queuing and bottlenecks.
  • Driver route shifting in response to congestion.
  • The increased driving that often follows added capacity, as people take trips they previously avoided or choose driving over other modes.
  • Non-linear or behavioral feedback loops.

The STA model is an oversimplified representation of reality. That simplification is an advantage for those seeking the kind of precise calculations needed to create a compelling project narrative, but it often leads to absurd results. This is especially true for mature systems.

For example, STA models routinely suggest that traffic volumes will exceed the physical capacity of the roadway. This result is physically impossible; no real-world roadway sustains volumes above capacity without some type of dynamic response by drivers. Even so, these impossible numbers are treated as planning inputs, the kind that compel a response.
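To make this concrete, here is a minimal sketch of the volume-delay curve commonly used inside STA models, the BPR (Bureau of Public Roads) function. The free-flow time and capacity below are illustrative assumptions, not figures from any actual project. Note the structural point: nothing in the formula prevents volume from exceeding capacity, so the model simply returns a longer (but finite) travel time for a condition no real roadway can sustain.

```python
# Illustrative sketch of the BPR volume-delay function used in many
# STA models: t = t0 * (1 + alpha * (v/c)^beta), typically with
# alpha = 0.15 and beta = 4. The inputs below are hypothetical.

def bpr_travel_time(free_flow_minutes: float, volume: float,
                    capacity: float, alpha: float = 0.15,
                    beta: float = 4.0) -> float:
    """Segment travel time under the BPR curve.

    Nothing here constrains volume <= capacity: the model will
    happily report delay for a physically impossible v/c ratio.
    """
    return free_flow_minutes * (1 + alpha * (volume / capacity) ** beta)

# A hypothetical segment: 10-minute free-flow time, capacity 2,000 veh/hr.
for volume in (1000, 2000, 3000, 4000):
    t = bpr_travel_time(10.0, volume, 2000)
    print(f"v/c = {volume / 2000:.1f} -> {t:.1f} min")
```

At twice the segment's capacity, a flow no roadway can carry, the curve still produces an orderly-looking 34-minute travel time. That is how "impossible" volumes end up as routine planning inputs.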

In other words, the models assert conditions that could never physically exist, and then build entire projects around them.

The result is a system that overstates congestion in "No Build" scenarios and exaggerates benefits in "Build" scenarios. Because the models are mathematically complicated and inaccessible to most outsiders, they rarely face serious scrutiny. Their flaws are hidden behind a wall of technical language, proprietary software, and assumed expertise.

That shield of complexity not only protects the system from public accountability, it has encouraged every participant in the process to adapt to the incentives of this flawed modeling framework.

Engineers, under pressure to avoid the consequences of under-design, default to oversizing. Project sponsors benefit from rosy projections because inflated need justifies expanded scope and helps secure funding. Consultants and agencies, whose business models depend on project size and complexity, rarely push back.

The incentives all point in the same direction: say yes to the model, even if you know it's wrong.

Case Examples of Modeling-Based Failure

A. I-405 (California)

The $1.9 billion widening project on I-405 in Orange County was justified by a model that predicted a 13-mile morning southbound commute would balloon from 37 minutes in 2009 to 2 hours and 43 minutes by 2040 in the "No Build" scenario.

These projections implied an average speed of just under 5 mph, absurd even under severe congestion. Despite the implausibility, the numbers were accepted and used to justify the expansion, one of the most expensive freeway widening efforts in California's recent history.
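The implied speed is easy to verify from the forecast's own figures. A quick back-of-the-envelope check:

```python
# Sanity-checking the "No Build" forecast cited above:
# a 13-mile commute taking 2 hours 43 minutes.
distance_miles = 13
trip_minutes = 2 * 60 + 43  # 163 minutes
avg_speed_mph = distance_miles / (trip_minutes / 60)
print(f"Implied average speed: {avg_speed_mph:.1f} mph")  # about 4.8 mph
```

Sustaining 4.8 mph over 13 miles, for every driver, every morning, for decades, is the kind of condition the model asserts but the real world never produces.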

The foundational traffic model, based on STA, forecast volumes that exceeded physical capacity and ignored downstream bottlenecks. The resulting Environmental Impact Statement overstated the benefits of the expansion and downplayed realistic congestion outcomes.

Post-construction analyses have shown that travel times did not improve meaningfully, and congestion persists. According to post-project evaluations by local news and transportation analysts, segments of the corridor still experience daily delays of 30 minutes or more during peak hours, similar to, or in some cases worse than, pre-expansion conditions.

What the model did accomplish was unlocking federal funding. What it did not accomplish was producing a forecast that reflected how traffic actually behaves. The extreme projections used to justify expansion relied on assumptions that transportation professionals openly acknowledge as unrealistic. Yet during the approval process, those projections were treated as if they were reliable representations of the future.

B. I-495 Express Lanes (Virginia)

The Virginia I-495 Express Lanes project involved the construction of high-occupancy toll (HOT) lanes alongside existing general-purpose lanes on the Capital Beltway. It was presented to the public as a solution to growing traffic congestion in the northern Virginia suburbs of Washington, D.C., with STA-based modeling projecting smoother flow and reduced delays across the system.

The project's justification leaned heavily on these forecasts, which claimed significant travel time savings and improved reliability for both the express and general-purpose lanes. The total cost of the project exceeded $2 billion.

In reality, the project shifted congestion rather than alleviating it. STA-based forecasts failed to account for the bottleneck created at the northern terminus of the express lanes. By focusing on isolated segments instead of full-network dynamics, the model missed how funneling vehicles from tolled lanes into a reduced number of general-purpose lanes would create severe congestion. This is now one of the worst congestion points on the corridor.

This choke point became so severe after project completion that it required an emergency retrofit, illustrating how ignoring real-world merging behavior and physical constraints leads to significant planning errors. General-purpose lane travel times actually increased post-construction.

These failures illustrate how STA’s structural limitations generate forecasts that look precise but fail under real-world conditions. The bottleneck at the northern terminus was not an unforeseeable surprise; it was a predictable outcome of modeling isolated segments instead of full-network behavior. Yet the projections that justified the project were treated as authoritative, and once funding was secured, those projections quietly receded from scrutiny.

C. I-5 Rose Quarter (Oregon)

The I-5 Rose Quarter widening in Portland is another case where outdated modeling distorted the project rationale. The proposal involves adding auxiliary lanes to a 1.7-mile stretch of I-5 through the Rose Quarter area, with the stated goal of improving safety and reducing congestion in one of the most crash-prone segments of highway in Oregon.

The projected cost of the project has escalated to over $1.9 billion, up from early estimates near $500 million, reflecting both inflation and growing skepticism about its complexity and effectiveness. These cost increases have not been met with a corresponding update in the modeling approach used to justify the project.

That modeling, now a decade old, is deeply flawed. ODOT justified the I-5 Rose Quarter project using traffic forecasts derived from a 2015 Traffic Operations and Safety (TOAS) report, an approach that mirrors the outdated assumptions of STA models.

These projections treated each roadway segment in isolation, assumed that more lanes would naturally lead to faster flow, and failed to incorporate newer modeling techniques that reflect how traffic behaves on a congested urban network. Critical elements such as feedback loops, real-time travel behavior, and the impacts of tolling and mode shifts were excluded, despite the fact that ODOT had access to studies and tools that could have incorporated them.

Their inclusion would have materially altered the projected benefits of the project and likely changed its funding calculus. More realistic modeling would have shown diminished benefits and raised questions about whether the project was worth pursuing at all.

As a result, the project’s Environmental Assessment understated congestion in the No Build scenario and overstated the benefits of expansion. These exaggerated forecasts have helped the Rose Quarter project qualify for federal funding and remain on the state’s priority list, despite mounting public opposition and rising costs.

By overstating future congestion and framing expansion as the only reasonable remedy, project advocates have used these flawed projections to justify continued investment and position the project as an urgent need. The model's assumptions, rather than on-the-ground conditions or observed demand, have effectively become the foundation for funding and political support.

As costs escalated from $500 million to nearly $2 billion, the modeling framework remained largely untouched. The same projections that justified the project at its inception continued to anchor the case for expansion, despite being outdated and methodologically limited. The project grew more expensive, but the forecasts were never meaningfully reexamined.

A System Designed to Be Wrong

The models are not flawed in a way that encourages reform. They are flawed in a way that makes reform inconvenient.

Projections are rarely validated post-construction. When they are, the inconsistencies between what was forecast and what actually happened are ignored. Errors are accepted as the cost of doing business, a tradeoff in a system designed to justify rather than interrogate outcomes. The inertia of institutional process makes it easier to keep using bad models than to confront what better analysis might reveal.

This is because the modeling system has become a gatekeeping mechanism for federal funds. Agencies and consultants that understand how to work within the modeling conventions are rewarded with access to funding pipelines. Projects that conform to the assumptions and outputs of these models are more likely to qualify for federal dollars, while more adaptive, incremental or locally grounded approaches struggle to gain support.

The system rewards the appearance of rigor — large spreadsheets, complex simulations, volumes of technical documentation — regardless of whether those projections reflect reality. What gets punished is modesty, humility and any methodology that might reduce scope or question expansion.

In this way, the modeling system has become detached from observed transportation behavior and insulated from meaningful accountability.

What Should We Be Asking Instead?

This is not a call for a better model. It is a call for better questions.

Questions like:

  • Why do we accept models that routinely forecast physically impossible traffic volumes?
  • Who benefits when projections justify unnecessary expansion?
  • Why aren’t models required to be validated after a project is built?
  • Why do we design our entire funding system around a tool that engineers themselves admit is always wrong?
  • What would a system look like that assumes uncertainty rather than tries to suppress it?

These are not engineering questions. They are civic questions — questions of public trust, accountability and the future of our communities.

Reform is unlikely to come from the federal government, not because it's impossible, but because meaningful change would require embracing uncertainty. The kind of modeling that reflects reality must account for ranges of outcomes, dynamic behaviors and feedback loops. These are inherently difficult to translate into rigid formulas for funding distribution, particularly in a system built to construct interstate highways quickly and at scale.

As a result, federal institutions are structurally unsuited to lead this reform. Real change is more likely to come from state and local governments, the ones who experience the consequences of bad investments most directly and who have the most to gain by getting it right.

Unfortunately, these are also the actors who believe they have the most to lose. The prospect of losing federal funding discourages hard questions. Local officials and planners are often reluctant to challenge the modeling assumptions, fearing that doing so could disqualify them from the dollars they need. As a result, they continue to participate in a system they privately distrust, reinforcing its authority instead of questioning its foundation.

Conclusion

Traffic modeling is not a neutral input to federal transportation policy; it is the mechanism that sustains a funding framework built around expansion. As long as projections remain untethered from observed behavior, and as long as federal dollars are allocated based on modeled congestion rather than financial return, we will continue building larger systems without strengthening the places they serve.

The models are wrong, but more importantly, the incentives surrounding them are misaligned. We have built a funding structure that rewards certainty over humility, scale over productivity and expansion over durability.

Reform will not come from marginal adjustments to equations. It will come when state and local leaders begin asking whether the projects being justified actually improve their long-term financial position. Until then, modeling error will continue to function as policy, and communities will continue inheriting obligations they did not truly choose.

Written by:
Charles Marohn

Charles Marohn (known as “Chuck” to friends and colleagues) is the founder and president of Strong Towns and the bestselling author of “Escaping the Housing Trap: The Strong Towns Response to the Housing Crisis.” With decades of experience as a land use planner and civil engineer, Marohn is on a mission to help cities and towns become stronger and more prosperous. He spreads the Strong Towns message through in-person presentations, the Strong Towns Podcast, and his books and articles. In recognition of his efforts and impact, Planetizen named him one of the 15 Most Influential Urbanists of all time in 2017 and 2023.