Enterprise observability is starting to feel heavy. Telemetry volumes are growing with higher workloads. Costs are rising faster than many teams expected. Observability now consumes around 11–20% of total cloud infrastructure spend in many organizations. This is enough to trigger real scrutiny from finance teams.
At the same time, APM has grown into full-stack observability, and the telemetry it collects may contain sensitive data. What was once a tooling choice owned by engineering is now a shared responsibility across platform, finance, security, and compliance teams.
Organizations are moving to self-hosted APM mostly due to cost predictability, compliance requirements, and control. In this article, we’ll explore these aspects in detail.
What Is a Self-Hosted APM in Modern Enterprise Environments?

Self-hosted APM is an application performance monitoring (APM) model in which you store, process, and control your observability data within your enterprise's infrastructure, instead of sending the data entirely to a third-party SaaS vendor.
In modern environments, this can mean on-premise data centers, private clouds, customer-managed cloud accounts, or bring-your-own-cloud (BYOC) setups. The key here is ‘who’ controls the data, deployment boundaries, and operational policies, not ‘where’ the software runs.
Self-Hosted APM vs SaaS-Managed APM
SaaS APMs: In SaaS-managed APM, vendor agents collect telemetry data (logs, metrics, traces, etc.). From there, it’s shipped to a vendor-owned backend. SaaS models are fast to set up and need minimal operational effort, but they keep data outside the organization’s environment, sometimes on a different continent altogether.
This may not be the best choice for enterprises operating at high scale or under heavy regulatory constraints. Such organizations need to make an informed decision before choosing a SaaS APM.
Self-hosted APMs: Customers have the option to deploy a platform in their on-premise or cloud infrastructure. Many think self-hosted APMs always require heavy do-it-yourself operations. Well, that was often true a decade ago, when teams had to deploy and maintain every component themselves.
Things have changed now. Many modern platforms support customer-controlled deployments while the vendor manages upgrades, scaling, and operational complexity. This means organizations can control data location and cost, even without a large in-house observability operations team.
Today, self-hosted APM allows you to balance control and responsibility. You can choose:
- How much of the observability stack you want to own
- Which parts you want managed
- How to balance costs and compliance risks over time
That flexibility is why self-hosted APM has become relevant again for large, cloud-native organizations.
Why Self-Hosted APM Is Making a Comeback
Self-hosted APM existed long before modern SaaS observability platforms. Enterprises historically ran monitoring systems inside their own environments because there was no alternative.
The concept of self-hosting has not changed, but the reasons enterprises are returning to it have. It’s more due to concerns around cost, governance, and long-term control, and less due to infrastructure limitations. Here are some reasons in detail:
Ease Over Control

Early SaaS-based APM focused on fast onboarding, dashboards, and minimal operational overhead. That way, teams could easily instrument services and start collecting telemetry data without worrying much about the underlying infrastructure.
That convenience drove adoption mostly in smaller teams and early cloud migrations. But as organizations scaled, the simplicity of SaaS came with hidden trade-offs:
- Pricing that grows unpredictably with telemetry volumes
- Limited control over where data lives
- Dependency on vendor infrastructure for compliance
These trade-offs matter not only to developers but also to finance, security, and engineering leadership.
Cost vs Growing Telemetry
Modern systems are more complex and distributed, which is why telemetry volumes are exploding. Although telemetry growth varies by environment, most enterprises report that logs, traces, and high-cardinality metrics grow faster than the core infrastructure itself.
For many organizations, observability starts to consume 10-25% of total IT operations budgets, and annual spend on observability tooling can range from $1 million to $10 million at scale.
This makes it difficult to predict costs. SaaS observability platforms commonly use usage-based pricing models, and when billing is tied to metrics, logs, indexed queries, or retention duration, predicting costs becomes harder as telemetry volume rises. On top of that, sending data out of the cloud incurs charges from cloud providers, often called public egress or data-out costs, which can account for 20-30% of observability spend.
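As a rough back-of-the-envelope sketch, here's how these dimensions compound. Every volume and rate below is an assumed figure for illustration, not any specific vendor's actual pricing:

```python
# Back-of-the-envelope observability cost model. All volumes and rates are
# hypothetical assumptions for illustration, not a real vendor's price list.
daily_ingest_gb = 500      # assumed telemetry ingested per day
ingest_rate = 0.10         # assumed $/GB for raw ingestion
index_fraction = 0.4       # assumed share of logs/traces that gets indexed
index_rate = 0.50          # assumed $/GB premium for indexed, searchable data
egress_rate = 0.09         # assumed cloud data-out $/GB to reach a SaaS backend

monthly_gb = daily_ingest_gb * 30
ingest_cost = monthly_gb * ingest_rate
index_cost = monthly_gb * index_fraction * index_rate
egress_cost = monthly_gb * egress_rate   # telemetry leaving the customer's cloud

total = ingest_cost + index_cost + egress_cost
print(f"Ingestion: ${ingest_cost:,.0f}  Indexing: ${index_cost:,.0f}  "
      f"Egress: ${egress_cost:,.0f}  Total: ${total:,.0f}")
print(f"Egress alone is {egress_cost / total:.0%} of the bill")  # ~23% here
```

Doubling telemetry volume doubles every line of this bill at once, which is exactly why finance teams find usage-based observability spend hard to forecast.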
Compliance and Governance
Data privacy regulations impose strict requirements on how data is handled, including:
- Where the data is located
- How it is accessed
Telemetry datasets may contain sensitive operational and customer information. To meet security and compliance requirements, teams must consider the location and control of their observability data when choosing tools, and evaluate whether sending all telemetry off-premises is compatible with internal risk policies.
These governance pressures are another reason organizations are taking more interest in deployment models where data control is not entirely outsourced.
OpenTelemetry Support for Faster Operations
One of the biggest barriers to self-hosted APM in the past was operational complexity. Teams had to manage custom agents, proprietary protocols, and tightly coupled backends. OpenTelemetry has removed that friction. It provides:
- The ability to instrument metrics, logs, and traces once, and choose where telemetry is processed
- Data collection without vendor lock-in (see the sketch below)
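As a minimal sketch of that idea (the service name and collector endpoint are placeholders), here's a Python service instrumented with the OpenTelemetry SDK. Pointing it at a self-hosted collector instead of a SaaS backend is a one-line configuration change, not a re-instrumentation:

```python
# Minimal OpenTelemetry tracing setup in Python
# (pip install opentelemetry-sdk opentelemetry-exporter-otlp).
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# "checkout-service" and the endpoint are hypothetical placeholders.
provider = TracerProvider(
    resource=Resource.create({"service.name": "checkout-service"})
)
# Swap this endpoint between a SaaS backend and a self-hosted collector
# without touching any instrumentation code below it.
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(endpoint="http://otel-collector.internal:4317", insecure=True)
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("process-order"):
    ...  # business logic; spans are exported wherever the endpoint points
```

Because the exporter endpoint is the only backend-specific detail, moving telemetry in-house or switching vendors does not require re-instrumenting the application.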
Now, let’s find out the top reasons why enterprises are moving to self-hosted APM: cost, compliance, and control.
Reason 1: Cost at Scale for Enterprises
Observability costs and infrastructure size are no longer proportional for many enterprises. Modern teams run many systems, applications, and tools, and telemetry grows faster than storage, compute, or network usage.
For deeper visibility, traces span dozens of services, logs become more verbose, and metrics gain dimensions. So, monitoring spend can accelerate even when application traffic grows modestly. Modern environments routinely generate hundreds of gigabytes to a few terabytes of telemetry data per day, making the cost question impossible for teams to ignore.
In the 2025 Observability Survey, organizations reported that, on average, observability spend represents about 17% of total compute infrastructure costs. This ratio can vary widely, but the data shows that observability tooling is now a material line item in enterprise budgets. Let's understand this next.
Cost Concerns with SaaS APMs

Cloud-native APM platforms, such as Datadog and New Relic, typically rely on multi-dimensional pricing models. Instead of a single predictable cost model, spend can grow across several dimensions: hosts or containers, data ingestion, indexed telemetry, retention duration, and advanced analytics usage.
Common reasons for higher costs are:
- Indexed logs and traces are priced separately from raw ingestion
- Extended retention and rehydration fees for historical analysis
- High-cardinality metrics multiply storage and query costs
- Cloud data egress when telemetry crosses regions or accounts
- Custom metrics ingestion in some providers (e.g., Datadog) further increases cost and reduces predictability
Studies on cloud cost optimization highlight that data movement and retention are underestimated contributors to long-term spend in observability pipelines. So, teams often struggle to explain why observability costs increase even when infrastructure usage remains stable, which makes budgeting harder.
At this point, many organizations begin comparing pricing and deployment approaches more closely. They may start evaluating matchups such as Datadog vs CubeAPM or New Relic vs CubeAPM to understand how different models behave with the same telemetry growth.
How Self-Hosted APM Changes Cost Predictability
Self-hosted APM puts cost control back into an organization's hands, tying spend to its own infrastructure and operational decisions. Organizations no longer have to rely on vendor-defined pricing models, and they can manage how they collect, store, and retain telemetry based on their internal policies and constraints.
Some common benefits are:
- Direct control over telemetry ingestion volume and sampling configuration (see the sketch after this list)
- Flexibility to adjust data retention to meet regulatory, audit, or operational needs
- Cost planning that aligns with internal infrastructure and capacity models (and not vendor-set pricing tiers)
- Significant savings on public egress costs as data is kept inside the customers’ environment
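For instance, here's a minimal head-based sampling sketch using the OpenTelemetry Python SDK. The 10% ratio is an arbitrary assumption; teams tune it per service:

```python
# Head-based sampling sketch: keep ~10% of traces at the source to cap
# ingestion volume. The 0.10 ratio is an assumed value, tuned per service.
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Sample 10% of new root traces; child spans inherit their parent's
# decision, so sampled traces remain complete end to end.
sampler = ParentBased(root=TraceIdRatioBased(0.10))
provider = TracerProvider(sampler=sampler)
```

Because the sampler lives in configuration the team owns, ingestion volume becomes a policy decision rather than a billing surprise.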
The goal is to spend smarter. With predictable costs, engineering and finance leaders can plan together, test cost estimates against real usage, and reduce budget surprises.
Reason 2: Compliance and Data Residency
Compliance and security teams now play a central role in observability decisions for many enterprises. For regulated industries and multi-regional companies, this is non-negotiable.
Regulations (regional, federal, and industry-specific), such as HIPAA, GDPR, and DPDP, demand strict data control. Organizations must carefully handle their operational and customer data, and failing to comply can have significant consequences. For example, in 2024, Meta agreed to a US$1.4 billion settlement with the State of Texas over allegations of unlawful biometric data capture.
Let’s understand this aspect in detail.
Observability Data Can Include Sensitive Information

Observability data may include:
- Service and infrastructure identifiers
- Internal architecture details
- Customer or tenant IDs
- Request payloads and error context
In distributed systems, telemetry data can reveal how applications behave, where sensitive data flows, and how customers interact with services. Because of this, observability data is increasingly classified alongside other sensitive operational datasets rather than treated as harmless technical data, and organizations now subject it to the same security and compliance controls as application data.
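As an illustrative sketch (the key list, pattern, and helper below are hypothetical, not part of any particular SDK), many teams scrub sensitive attributes from telemetry before it leaves their environment:

```python
# Hypothetical attribute-scrubbing helper: redact sensitive keys and tokens
# from span/log attributes before telemetry is exported off-premises.
import re

SENSITIVE_KEYS = {"customer.id", "user.email", "http.request.body"}  # assumed keys
TOKEN_PATTERN = re.compile(r"(?i)bearer\s+\S+")  # assumed credential format

def scrub_attributes(attributes: dict) -> dict:
    """Return a copy of telemetry attributes with sensitive values masked."""
    clean = {}
    for key, value in attributes.items():
        if key in SENSITIVE_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, str):
            clean[key] = TOKEN_PATTERN.sub("[REDACTED]", value)
        else:
            clean[key] = value
    return clean

print(scrub_attributes({"user.email": "a@b.com", "http.status_code": 200}))
# {'user.email': '[REDACTED]', 'http.status_code': 200}
```

In self-hosted deployments, this kind of scrubbing can run inside the customer's own pipeline, so raw sensitive values never reach a third party at all.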
Third-Party SaaS Data Handling Is Under Scrutiny
Full SaaS observability platforms route telemetry data via vendor-controlled infrastructure by default. This simplifies the setup but may create questions around data privacy, such as cross-border data transfer and data ownership in the long term.
Multinational organizations must ensure that telemetry generated in one region does not violate residency or access policies in another. This becomes more complex when observability data is processed outside customer-controlled environments.
In response, many enterprises are reassessing observability platforms based on whether telemetry can remain within customer-managed infrastructure or cloud accounts. They want to ensure the data is not being exclusively processed in shared SaaS backends.
Stricter Security Reviews & Vendor Assessments
In 2024, third-party or supply chain compromises accounted for 30% of all data breaches, up from 15% the previous year. This could be why enterprises now face longer and more detailed vendor assessments, audits, and compliance reviews. These include specific questions about:
- Where telemetry data is stored and processed
- Who has access to that data
- How data residency is enforced across regions
- Whether telemetry can be isolated per customer or account
Reason 3: Control Over Data and Long-Term Strategy
Organizations need better control over their data as their observability scales. Rather than focusing on which tool has the most features, they need to carefully assess cost, flexibility, and risk over the next 5-10 years. Here are the reasons why:
Risk of Proprietary Stacks

Traditional APM platforms often rely on proprietary agents and data models that work well inside a single ecosystem. But during a change, such as switching tools, these platforms can create difficulties for teams. If a vendor has tightly coupled instrumentation, dashboards, and alerts, switching can be complex and expensive, and retraining teams on new technologies and migrating data can consume significant cost and time.
In a 2024 CNCF survey, 64% of organizations said that avoiding vendor lock-in is important when selecting cloud-native tools and observability platforms.
OpenTelemetry Support Is a Must
OpenTelemetry (OTel) is one of the fastest-growing technologies and the second most popular CNCF project after Kubernetes. Enterprises are adopting it heavily to standardize their observability pipelines. It gives teams a standard for collecting and exporting observability data, such as metrics, logs, and traces, no matter where that data is located or processed. This separates instrumentation from backend ownership. The battle-tested OTel agent also consumes very few system resources, so it doesn't choke your servers, which was a big issue with proprietary agents.
This way, teams don’t have to depend on a single vendor. They can instrument their applications once and then have the freedom to evaluate different backends, deployment models, or storage strategies over time. This also reduces the cost of future changes. OpenTelemetry adoption has indirectly become a part of risk management, not just a trend everyone is following.
Flexibility
No team wants to be locked in with a single provider or tool. This matters even more in complex environments with many tools and systems. Teams need the option to migrate to other tools for better billing, functionality, or other needs. They also look for the flexibility to change deployment models or storage backends without hassles such as rewriting applications or re-instrumenting services.
Observability data (metrics, logs, and traces) has become part of incident response, security investigations, capacity planning, and even business analytics. Losing portability over that data limits future choices. Long or multi-year billing contracts also keep organizations locked into one tool, even when they are unhappy with its performance or their needs have changed.
Better Operations

Data control is important in day-to-day operations. Enterprises want the ability to decide:
- How much telemetry they collect, and when
- Which data is retained long-term versus sampled or summarized
- How observability integrates with internal security, compliance, and analytics systems
Systems evolve, traffic patterns change, and regulatory requirements tighten. Sampling strategies, retention policies, and integration options help teams adjust to those changes. This is why many teams now evaluate observability solutions on control over data flow, architecture, and long-term strategy, not just visibility depth.
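A minimal sketch of that idea (all signal names, retention windows, and ratios below are hypothetical) is to express retention and sampling policy as versioned data that platform, security, and finance teams can review together:

```python
# Hypothetical retention/sampling policy expressed as data, so it can be
# versioned, reviewed, and audited like any other configuration.
from dataclasses import dataclass

@dataclass
class SignalPolicy:
    retain_days: int      # how long to keep raw, queryable data
    sample_ratio: float   # fraction of data kept at ingestion
    archive_days: int     # cheap cold-storage tier after raw retention

POLICIES = {
    "metrics": SignalPolicy(retain_days=395, sample_ratio=1.0, archive_days=0),
    "traces":  SignalPolicy(retain_days=14,  sample_ratio=0.1, archive_days=90),
    "logs":    SignalPolicy(retain_days=30,  sample_ratio=1.0, archive_days=365),
}

def retained_volume_gb(signal: str, raw_gb: float) -> float:
    """Estimate retained volume for a signal after sampling is applied."""
    return raw_gb * POLICIES[signal].sample_ratio

print(retained_volume_gb("traces", 1000.0))  # 100.0 GB retained
```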
Where CubeAPM Fits in the Enterprise Shift
Observability conversations are changing across large organizations. Teams are evaluating tool ownership, cost behavior, and how today’s decisions will hold up years down the line. Here’s why CubeAPM is one of the best observability platforms in the market with a self-hosting option.
Self-Hosted APM
Teams want to retain control over their observability data, where it's stored, and how it's governed, instead of being locked into rigid vendor policies.
For this, CubeAPM supports self-hosted deployments via Bring-Your-Own-Cloud (BYOC) and on-premise hosting. This means enterprises can choose the model that aligns best with their security and compliance requirements.
Deployment Without Overhead

For many organizations, self-hosting used to feel like a step backwards: running and maintaining observability infrastructure requires time, technical expertise, and constant attention. Because of this, many enterprises look for a model where they control data location and deployment but do not have to operate every moving part themselves.
CubeAPM follows a vendor-managed, customer-controlled approach. It allows teams to retain data control while offloading operational complexity.
OpenTelemetry-Native
CubeAPM can ingest metrics, logs, and traces natively through OpenTelemetry. This decoupling of instrumentation from backend ownership gives teams breathing room. It reduces long-term risk and makes future changes feel more manageable, instead of being disruptive.
Predictable Pricing
Industry research shows that cost predictability is one of the top criteria for enterprises when evaluating observability platforms. Many teams have had to endure cycles of unexpected cost increases due to higher telemetry growth.
Example: Datadog charged Coinbase, a cryptocurrency company, a whopping US$65 million. The news went viral quickly, prompting other users to share similar stories. It highlights how unpredictable SaaS billing can be.
CubeAPM solves this with simple ingestion-based pricing of $0.15/GB, not per host, per user, or per feature. Any number of engineers can access dashboards without paying extra, so teams can predict costs even as usage grows. Because telemetry stays inside the customer's environment, CubeAPM also saves the data-out (egress) cost (usually $0.10/GB) that cloud providers (e.g., GCP, AWS, Azure) charge.
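As a quick worked example using those rates (the monthly telemetry volume is an assumed figure for illustration):

```python
# Worked example of ingestion-based pricing using the rates cited above.
# The monthly telemetry volume is an assumed figure for illustration.
monthly_ingest_gb = 10_000               # assumed: ~330 GB/day of telemetry

cube_cost = monthly_ingest_gb * 0.15     # $0.15/GB ingestion; no per-host,
                                         # per-user, or per-feature fees
egress_saved = monthly_ingest_gb * 0.10  # ~$0.10/GB data-out avoided by keeping
                                         # telemetry inside the customer's cloud

print(f"Predicted monthly cost: ${cube_cost:,.0f}")
print(f"Cloud egress avoided:   ${egress_saved:,.0f}")
```

With a single pricing dimension, the bill scales linearly with ingestion, so finance teams can forecast spend directly from telemetry volume.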
Long-Term Observability
Modern teams want an observability platform that scales with them efficiently in the long run. They look for characteristics, such as support for flexible deployment, open standards, and predictable costs.
At its core, CubeAPM is designed for organizations that view observability as a long-term platform choice rather than a quick tooling decision. That perspective matters when systems evolve, compliance requirements change, and telemetry volumes grow year after year.
If you’re exploring self-hosted APM without taking on the full operational burden, CubeAPM is an excellent choice.
Self-Hosted vs SaaS APM: Which Model Fits Which Enterprise?

Let's understand which of the two models, self-hosted or SaaS APM, fits which organizations and teams better:
When SaaS APM Works Well
SaaS APM platforms are a strong fit for many organizations, including those earlier in their growth journey. They tend to work best when:
- Teams are small or moderately sized
- Telemetry volume is still manageable
- Speed of setup and time-to-value matter most
- There’s a limited operational capacity for running observability infrastructure
For these teams, SaaS APM reduces cognitive and operational load. Instrumentation is quick, dashboards are ready out of the box, and there is little infrastructure to manage.
When Self-Hosted APM Becomes Necessary
The context of decision changes as an organization scales. Self-hosted APM becomes relevant when:
- Telemetry volumes grow faster with more microservices and Kubernetes
- Observability costs become difficult to predict monthly
- Compliance, data residency, or internal security policies apply
- Multiple teams and regions rely on the same observability platform
- The company operates in a regulated domain, such as healthcare, finance, or banking, irrespective of its size and stage
At this stage, observability applies organization-wide rather than to a single team, and cost efficiency and deployment flexibility matter as much as feature depth.
When evaluating self-hosted APMs, some teams look at platforms that combine OpenTelemetry-native design with customer-controlled deployment with little to no operational overhead. Solutions such as CubeAPM reflect this model.
What Enterprises Should Consider Before Migrating to Self-Hosted APM
Here’s what enterprises should consider when moving to a self-hosted APM from a fully SaaS APM model:
- Deployment responsibility vs vendor support: Enterprises need clarity on who owns which parts of the observability stack. Some models place nearly all responsibility on the vendor, while others require teams to operate and scale the backend themselves. Many organizations now look for self-hosted APMs where they can control data, and the vendor manages the upgrades, scaling, and day-to-day operations.
- OpenTelemetry support: Teams must find out whether a platform is OpenTelemetry-first; being compatible on paper isn't enough. The tool must natively ingest MELT data (metrics, events, logs, and traces) and route data to different backends without requiring teams to re-instrument applications again and again.
- Cost efficiency: Enterprises should also consider how observability costs behave at current scale and as they grow. This includes understanding how pricing responds to higher telemetry volume, longer retention periods, and higher query activity. Ingestion-based (e.g., per-GB) pricing is usually easier to predict than per-host, per-feature, or multi-dimensional usage pricing.
- Compliance: Teams should find out where their telemetry will be stored and processed. They should also check whether the tool has capabilities to meet data residency and privacy requirements, and how it enforces access controls to avoid compliance risks.
- Sampling, retention, and storage: Find out if the platform provides effective sampling strategies to control costs. Also, inspect their data retention policies and storage tiers. This enables teams to balance cost, performance, and analytical depth.
- Ease of migration: Check how the platform integrates with existing tools in your tech stack during the transition. Find out how you can reuse current instrumentation, and how much effort it takes to migrate dashboards, alerts, or historical data.
Conclusion
For many teams, moving to self-hosted APM is about controlling their data, costs, and deployment. Organizations want the option to change vendors easily and meet compliance needs, without vendor lock-in.
If you’re looking for a self-hosted APM with predictable cost but without the DIY hassle, CubeAPM is an excellent option for you. Schedule a free demo to explore the platform.
Disclaimer: The information in this article reflects the latest details available at the time of publication and may change as technologies and products evolve.
FAQs
1. What is a self-hosted APM?
Self-hosted APM is a monitoring approach where application telemetry, such as metrics, logs, and traces, is stored and processed in an organization’s own infrastructure, including private cloud or on-prem environments. This gives enterprises more control over cost, data residency, and observability architecture than fully SaaS-managed APM tools.
2. Why are enterprises moving to self-hosted APM now?
Enterprises are moving to self-hosted APM due to rising observability costs at scale, stricter compliance and data residency requirements, and the need for greater architectural control as telemetry volumes continue to grow.
3. Is self-hosted APM cheaper than SaaS APM for enterprises?
Self-hosted APM can be more cost-predictable for enterprises operating at large scale, especially when SaaS pricing increases with usage, retention, or data egress. While infrastructure costs still exist, many teams prefer the budgeting control that self-hosted APM provides.
4. What compliance benefits does self-hosted APM offer?
Self-hosted APM helps enterprises meet compliance and data residency requirements by keeping observability data within controlled environments. This is especially important for regulated industries or organizations subject to regional data-sovereignty laws.
5. Does self-hosted APM require heavy operational overhead?
Modern self-hosted APM does not always require heavy operational effort. Many platforms support managed or hybrid deployment models that reduce day-to-day maintenance while preserving data ownership and control.