Observability Is 30% of Your Codebase — Part 2

Evgeny Potapov, CEO and co-founder


This is Part 2 of our observability series. Read Part 1 here.

Most companies now understand that observability matters. But roughly 80% still confuse monitoring with observability — they set up dashboards and alerts and call it done. The real problem: observability instrumentation is treated as something you tack on after development, not something you build alongside your features.

Even in organizations where DevOps, QA, and security silos have broken down, observability work stays isolated from cross-functional processes and isn't allocated time in development schedules. Teams view tracing, metrics, and logging as tasks to complete after the code is "finished." This leads to rushed, incomplete instrumentation.

The underlying misconception: organizations rely on observability tools to provide observability, treating code instrumentation as secondary. They assume the tool's features will offer insights and prevent outages. This doesn't work because it skips the hard part — embedding observability into the code itself.
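To make "embedding observability into the code itself" concrete, here is a minimal, dependency-free sketch in Python. The in-process metric store, the decorator, and the metric names are all hypothetical stand-ins for a real exporter (such as the OpenTelemetry SDK); the point is only that the instrumentation lives next to the business logic, not in a tool bolted on later.

```python
import time
from collections import defaultdict
from functools import wraps

# Illustrative in-process metric store. A real service would export these
# values to a backend (Prometheus, an OpenTelemetry collector, etc.).
METRICS = defaultdict(list)

def instrumented(operation):
    """Record call count, latency, and errors for a business operation."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                METRICS[f"{operation}.success"].append(1)
                return result
            except Exception:
                METRICS[f"{operation}.error"].append(1)
                raise
            finally:
                METRICS[f"{operation}.latency_ms"].append(
                    (time.perf_counter() - start) * 1000.0
                )
        return wrapper
    return decorator

@instrumented("checkout.total")  # hypothetical metric name
def compute_total(items):
    return sum(price for _, price in items)
```

Even in this toy form, the observability code is a visible fraction of the total: it has to be designed, reviewed, and tested like any other code.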

How much code is observability?

The OpenTelemetry Demo is an excellent reference. Built by the OpenTelemetry community, it demonstrates proper instrumentation across microservices in Go, Java, .NET, C++, Python, and other languages. A code analysis shows that 30–45% of the codebase is observability instrumentation.

That number is striking. It doesn't translate to the same percentage of development time, but it shows observability is a major part of the development process, not an afterthought.

What that means across the development lifecycle:

  • Onboarding and Standards: Teams need to standardize how metrics are provided, profiling is established, and logging is done — just like they standardize service architecture and data structures.
  • Development and QA: With observability at 30–45% of the codebase, mistakes will happen. QA must include the observability code. Reviewers need experience evaluating metrics, tracing, and logging implementations.
  • Observability QA: Beyond application logic, the data flowing into observability tools needs its own verification. Does it show up correctly? Is it actionable?
  • Security: Ensure no sensitive data is accidentally sent to logging systems. If such data is needed, verify compliance with regulations.
  • DevOps and SRE Integration: Alerting and response practices must be designed with the operations team. Define critical metrics and what information developers need to locate issues during outages.
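The "Observability QA" point above can be made concrete with an ordinary unit test: capture what the service actually emits and assert that an on-call engineer would find it usable. This sketch uses only the standard library; the event name, field names, and units are invented team conventions, not a spec.

```python
import json
import logging

class InMemoryHandler(logging.Handler):
    """Capture formatted log lines so tests can assert on telemetry output."""
    def __init__(self):
        super().__init__()
        self.lines = []
    def emit(self, record):
        self.lines.append(self.format(record))

logger = logging.getLogger("orders")
logger.setLevel(logging.INFO)
handler = InMemoryHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def place_order(order_id, amount):
    # Structured event; "order.placed" and "amount_usd" are hypothetical
    # names chosen by the team, enforced by tests like the one below.
    logger.info(json.dumps({"event": "order.placed",
                            "order_id": order_id,
                            "amount_usd": amount}))

def test_order_event_is_actionable():
    place_order("o-123", 49.99)
    event = json.loads(handler.lines[-1])
    # Verify the data an engineer would need during an outage is present.
    assert event["event"] == "order.placed"
    assert event["order_id"] == "o-123"
    assert isinstance(event["amount_usd"], float)
```

Tests like this run in CI alongside application tests, so broken telemetry fails the build instead of being discovered during an incident.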

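On the security side, one common pattern is to redact known-sensitive fields before anything reaches a logging backend. The deny-list below is a hypothetical example; a real one would come from the team's data-classification policy and compliance requirements.

```python
# Hypothetical deny-list of sensitive field names.
SENSITIVE_KEYS = {"password", "card_number", "ssn", "email"}

def redact(payload: dict) -> dict:
    """Return a copy of the payload that is safe to log."""
    return {k: "[REDACTED]" if k in SENSITIVE_KEYS else v
            for k, v in payload.items()}

# Usage: always log redact(data), never the raw dict.
safe = redact({"user": "alice", "password": "hunter2"})
```

This is deliberately simplistic (it ignores nested structures and free-text messages), but it shows the shape of the audit work the security line item pays for.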
Development plan comparison

A realistic comparison for an application similar to the OpenTelemetry Demo:

Without Observability — 270 hours

  1. Architecture Design (40h): Service definitions/data flow (20h), API contracts (10h), database schema (10h)
  2. Development (150h): Microservices coding (120h), API gateway setup (20h), database integration (10h)
  3. QA and Testing (40h): Unit testing (15h), integration testing (15h), load testing (10h)
  4. Security (20h): Security protocols (15h), vulnerability assessments (5h)
  5. Deployment (20h): Infrastructure provisioning (15h), final deployment and docs (5h)

With Observability — 345 hours

  1. Architecture Design (50h): Same as above + observability architecture (10h)
  2. Development (180h): Same as above + observability instrumentation (30h)
  3. QA and Testing (60h): Same as above + observability data validation (20h)
  4. Security (25h): Same as above + sensitive data audit for instrumentation (5h)
  5. Deployment (30h): Same as above + observability tools configuration and alerting (10h)

That's 75 additional hours — a 28% increase.

This isn't overhead. It's an investment in faster incident detection, reduced downtime, and a more stable product.

What to do about it

If you're a business leader: observability takes significant development time and must be part of the continuous process. Treat it as a long-term investment in reliability, not a checklist item.

If you're a manager or developer: allocate the time, communicate the cost to the business, and build observability into daily workflows. Establish standards for logging, metrics, and tracing early. QA should treat observability data as first-class output, verifying its accuracy alongside application behavior.
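"Establish standards early" can be partly automated. As one small example, a team might enforce a metric-naming convention in CI with a validator like this; the regex encodes a hypothetical house rule (lowercase, dot-separated segments such as `checkout.latency_ms`), not any official standard.

```python
import re

# Hypothetical house rule: lowercase segments separated by dots,
# e.g. "checkout.latency_ms". Adjust to your team's convention.
METRIC_NAME = re.compile(r"^[a-z]+(\.[a-z_]+)+$")

def validate_metric_name(name: str) -> str:
    """Raise if a metric name violates the team naming standard."""
    if not METRIC_NAME.match(name):
        raise ValueError(f"metric name {name!r} violates naming standard")
    return name
```

Running such checks in code review or CI keeps the 30–45% of observability code consistent enough to be queried and alerted on reliably.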
