
  • Andre Lucas
  • Mon Jul 07 2025

Quantitative Architecture

The goal of Quantitative Architecture is to maximize system performance and scalability while reducing technical and financial bottlenecks, using real data from operational and business metrics.

High-impact technical decisions should be evidence-based. That means systems need to be instrumented with proper telemetry, offering reliable data for continuous analysis.
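
For instance, a service can publish a handful of business and operational metrics that dashboards and alerts can read from. The sketch below is a minimal example, assuming the Python prometheus_client library and invented metric names; any telemetry stack your team already uses would work just as well.

```python
# Minimal instrumentation sketch. Assumes prometheus_client and invented metric
# names (orders_processed_total, order_processing_seconds); not a prescribed stack.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

ORDERS = Counter("orders_processed_total", "Orders processed", ["status"])
LATENCY = Histogram("order_processing_seconds", "Order processing latency in seconds")

@LATENCY.time()
def process_order():
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real business logic
    ORDERS.labels(status="ok").inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus/Grafana to scrape
    while True:
        process_order()
```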

Getting insights from your code repository

A code repository is like a living timeline of your architecture. It reveals not just changes and authors, but also signs of instability, technical debt, and architectural hotspots.

Even a basic Git analysis can reveal:

  • Artifacts with the highest number of changes in the last 3 months
  • Developers with the most commits
  • Developers with the highest number of lines changed
  • Files with low code ownership (too many different authors)
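
A short script is enough to pull these numbers straight from the Git history. The sketch below is a minimal Python example, assuming a local clone, git on the PATH, and the same three-month window; the output format is only illustrative.

```python
# Minimal Git-mining sketch: change hotspots, top committers, and code ownership.
import subprocess
from collections import Counter

def git_log_numstat(since="3 months ago"):
    """Raw `git log` output: one '--author' marker per commit, then per-file line counts."""
    result = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--format=--%an"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def mine(log_text):
    changes_per_file = Counter()    # how often each file shows up in a diff
    commits_per_author = Counter()  # commits per developer
    lines_per_author = Counter()    # lines added + removed per developer
    authors_per_file = {}           # distinct authors per file (code ownership)
    author = None
    for line in log_text.splitlines():
        if line.startswith("--"):
            author = line[2:]
            commits_per_author[author] += 1
        elif line.strip():
            added, removed, path = line.split("\t")
            changes_per_file[path] += 1
            authors_per_file.setdefault(path, set()).add(author)
            if added.isdigit() and removed.isdigit():  # numstat prints '-' for binary files
                lines_per_author[author] += int(added) + int(removed)
    return changes_per_file, commits_per_author, lines_per_author, authors_per_file

if __name__ == "__main__":
    files, commits, lines, owners = mine(git_log_numstat())
    print("Most-changed files:", files.most_common(10))
    print("Most commits:", commits.most_common(10))
    print("Most lines changed:", lines.most_common(10))
    print("Low ownership:", sorted(((f, len(a)) for f, a in owners.items()),
                                   key=lambda x: x[1], reverse=True)[:10])
```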

Changes vs Commits

  • Commit – every explicit checkpoint saved to the repository (git commit -m)
  • Changes – the number of lines added or removed in each file (diff)

This data helps identify maintenance hotspots and detect areas with instability or low shared ownership. Artifacts with high change rates require more attention. Statistically, they are more likely to suffer from instability or poor separation of concerns, which can hurt system reliability in production.

How to deal with these critical areas

  • Prioritize these flows for better automated test coverage (ideally over 80%)
  • Consider incremental refactorings driven by metrics (controlled refactorings)
  • Involve the top committers, who are often the ones with the most domain knowledge

Hotspots may represent high business value (if under active evolution) or architectural problems like excessive coupling or low cohesion.

To validate your assumptions, you can:

  • Link PRs to tracking tools (Jira, Azure DevOps, etc.)
  • Check delivery types (bug, feature, refactoring)
  • Analyze test coverage versus frequency of changes
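
When PR links are not available, a lightweight approximation is to classify commits by their message prefix. The sketch below assumes Conventional Commits-style subjects (feat:, fix:, refactor:); in practice, joining on issue ids from Jira or Azure DevOps gives more reliable delivery types.

```python
# Hedged sketch: classify each commit by its subject prefix and count, per file,
# how many of its changes were bug fixes, features, or refactorings.
import subprocess
from collections import Counter, defaultdict

PREFIXES = {"fix": "bug", "feat": "feature", "refactor": "refactoring"}

def delivery_types_per_file(since="3 months ago"):
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--format=--%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    per_file = defaultdict(Counter)
    delivery = "other"
    for line in log.splitlines():
        if line.startswith("--"):
            subject = line[2:].lower()
            delivery = next((label for prefix, label in PREFIXES.items()
                             if subject.startswith(prefix)), "other")
        elif line.strip():
            per_file[line][delivery] += 1
    return per_file

# A hotspot dominated by "bug" entries is being patched, not evolved.
```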

Detecting Coupling (Change Coupling)

Files frequently changed together in the same commit or PR:

  • Show signs of Change Coupling
  • May hide implicit dependencies
  • Indicate opportunities for modularization and better cohesion
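
Co-change is cheap to mine from the same history. The sketch below counts, for each pair of files, how often they appear in the same commit; the minimum threshold of five shared commits is an arbitrary starting point, not a rule.

```python
# Minimal change-coupling sketch: count how often each pair of files is committed together.
import subprocess
from collections import Counter
from itertools import combinations

def change_coupling(since="3 months ago", min_shared=5):
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--format=--commit"],
        capture_output=True, text=True, check=True,
    ).stdout
    pairs = Counter()
    files = []
    for line in log.splitlines() + ["--commit"]:  # sentinel flushes the last commit
        if line.startswith("--commit"):
            for a, b in combinations(sorted(set(files)), 2):
                pairs[(a, b)] += 1
            files = []
        elif line.strip():
            files.append(line)
    return [(pair, n) for pair, n in pairs.most_common() if n >= min_shared]

# Pairs with a high co-change count are candidates for modularization or a
# closer look at hidden dependencies.
```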

Watch out for multiple teams in the same repo

If multiple teams are editing the same files:

  • Investigate the reason
  • It may indicate missing bounded contexts, leading to overlapping responsibilities
  • This increases the risk of conflicts and production bugs

If files are tightly coupled and hard to test:

  • Revisit the architecture
  • Plan a controlled refactoring

Metrics need context: Quantitative + Qualitative

Quantitative metrics are useful, but they must be read alongside qualitative signals like:

  • Code reviews
  • Feedback from developers
  • Domain knowledge

Numbers alone don’t tell the full story.

Example: an artifact with many changes may be evolving (good) or being constantly patched (bad).

MTTR vs MTBF

  • MTTR (Mean Time to Repair)
    Average time to recover after a failure.
    MTTR = Total downtime / Number of failures
  • MTBF (Mean Time Between Failures)
    Average time the system runs before failing.
    Example: 1,000 hours of uptime and 3 failures → MTBF = 1000 / 3 = 333.3 hours
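
Putting both formulas into code, assuming each incident records when the failure started and when service was restored (the incident data and observation window below are purely illustrative):

```python
# Small sketch of both formulas over an illustrative incident log.
from datetime import datetime, timedelta

incidents = [  # (failure_start, recovered_at)
    (datetime(2025, 7, 1, 10, 0), datetime(2025, 7, 1, 10, 30)),
    (datetime(2025, 7, 3, 14, 0), datetime(2025, 7, 3, 15, 0)),
    (datetime(2025, 7, 6, 9, 0),  datetime(2025, 7, 6, 9, 45)),
]
observation_window = timedelta(days=7)

downtime = sum((end - start for start, end in incidents), timedelta())
mttr = downtime / len(incidents)                          # total downtime / number of failures
mtbf = (observation_window - downtime) / len(incidents)   # uptime / number of failures

print(f"MTTR: {mttr}, MTBF: {mtbf}")  # MTTR: 0:45:00, MTBF: 2 days, 7:15:00
```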

In modern systems with frequent releases, MTBF becomes less useful due to continuous change.

MTBF vs Continuous Deployment

If teams focus too much on MTBF, they may deploy less often to avoid failures. This leads to:

  • Larger releases with more risk
  • Harder root cause analysis
  • Riskier and more complex rollbacks

How to adopt continuous deployment

  • Strong test automation for critical paths (not necessarily 100% coverage)
  • Stable CI/CD pipelines and reliable rollback strategies

The goal is frequent, low-risk releases.

Continuous deployment helps teams deliver value faster. It’s a challenge, but possible with maturity in testing and automation.

Disclaimer

These indicators should never be used to measure developer performance.

They are tools to identify architectural risks and improvement opportunities.

How to bring this visibility to business stakeholders

One of the biggest architectural challenges is to translate technical improvements into business impact.

Tactics:

  • Translate tech metrics into value:
    • “Lower CFR (Change Failure Rate)” → “Fewer bugs in production”
    • “Shorter Lead Time” → “Faster delivery of business ideas”
  • Use visual dashboards:
    • Google Sheets, Grafana — updated weekly or monthly
  • Build a communication routine:
    • Add a "tech impact" section in sprint reviews or product meetings

Quantitative Architecture in action – 5 practical steps

  1. Instrument services with business and operational metrics
  2. Use Git scripts to identify change hotspots
  3. Correlate changes with delivery types (bug, feature)
  4. Detect change coupling (files that frequently change together)
  5. Prioritize tests/refactorings based on risk and impact

This post reflects lessons and ideas from my mentorship journey with Elemar Junior. I’m sharing notes, learnings, and hands-on insights about Quantitative Architecture, focusing on using real data to support architectural decisions.

If this content was helpful, share it with your team. If you'd like to discuss it further, feel free to connect on LinkedIn.

Tags: Software Architecture, Software Engineering