
Elastic Announces Smarter Tail-Based Sampling for APM in Modern Cloud-Native Environments

New Elastic Observability Features Maximize Visibility While Fine-Tuning the Performance of Data Collection

  • Eliminating blind spots by providing fine-grain control over data collection and storage
  • Accelerating troubleshooting for AWS Lambda with the ability to natively collect serverless traces
  • Enhancing visibility across AWS cloud services with new integrations that speed data ingestion

MOUNTAIN VIEW, Calif.--(BUSINESS WIRE)--Elastic (NYSE: ESTC) (“Elastic”), the company behind Elasticsearch, today announced new features and enhancements across the Elastic Observability solution to support modern cloud-native environments, including smarter tail-based sampling for application performance monitoring (APM) and enhanced visibility across AWS cloud services.

Eliminating blind spots with Elastic tail-based sampling

Tail-based sampling can help DevOps and site reliability engineering (SRE) teams eliminate application performance blind spots by providing finer-grain control over trace sampling conditions in high-volume systems with millions of transactions.

While head-based sampling, which applies a fixed sampling rate up front, can be efficient in low-volume application server environments, tail-based sampling is better suited to more complex, cloud-native applications. With Elastic tail-based sampling, the decision to keep or discard a sample is made after a trace has been completed and observed. As a result, tail-based sampling can help customers maximize visibility and reduce their data storage costs by capturing only the most critical transactions.

“As more organizations adopt cloud-native technologies and microservices-based architectures, application troubleshooting is becoming increasingly complex,” said Alvaro Lobato, Vice President, Observability, Elastic. “We built Elastic tail-based sampling to help customers avoid tradeoffs between full application visibility and cost. As a result, Elastic Observability provides maximum visibility while enabling the type of fine-grain control needed when working in complex, cloud-native environments.”

In addition, Elastic tail-based sampling enables DevOps and SRE teams to easily adjust sampling rates to gain greater insight into application performance by evaluating each trace against a set of rules or policies and transaction outcomes. The resulting APM insights can accelerate root-cause analysis for faster time to resolution.
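The general flow of a tail-based sampling decision can be illustrated with a short sketch. The Python example below is purely illustrative and is not Elastic's implementation; the Trace structure, policy conditions, and sample rates are hypothetical, chosen only to show how a keep-or-discard decision might be evaluated against policies and transaction outcomes after a trace completes.

# Illustrative sketch of the tail-based sampling idea; NOT Elastic's implementation.
# The Trace class, policy thresholds, and sample rates are hypothetical.
import random
from dataclasses import dataclass

@dataclass
class Trace:
    service_name: str
    duration_ms: float
    outcome: str  # "success" or "failure"

# Hypothetical policies, evaluated in order after the trace has completed.
POLICIES = [
    # Keep every failed transaction.
    {"match": lambda t: t.outcome == "failure", "sample_rate": 1.0},
    # Keep most unusually slow traces from a critical service.
    {"match": lambda t: t.service_name == "checkout" and t.duration_ms > 2000,
     "sample_rate": 0.8},
    # Default: retain a small fraction of everything else.
    {"match": lambda t: True, "sample_rate": 0.05},
]

def keep_trace(trace: Trace) -> bool:
    """Decide whether to keep a completed trace (tail-based sampling)."""
    for policy in POLICIES:
        if policy["match"](trace):
            return random.random() < policy["sample_rate"]
    return False

# Example: a failed checkout transaction is always kept.
print(keep_trace(Trace("checkout", 350.0, "failure")))  # True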

Enhancing visibility and accelerating troubleshooting across AWS cloud services

Now generally available, the ability to natively collect serverless traces from AWS Lambda functions provides customers with detailed, end-to-end visibility into distributed transactions to accelerate troubleshooting. Development teams can collect serverless application traces from Lambda functions written in Node.js, Python, and Java with a new AWS Lambda APM agent. Elastic additionally supports native cloud monitoring with the ability to collect Lambda traces via OpenTelemetry (Java and Python only).
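As a rough illustration of what instrumenting a Python Lambda function with the Elastic APM agent can look like, the sketch below assumes the elasticapm package's serverless support; the exact Lambda layer, environment variables, and configuration should be taken from Elastic's AWS Lambda documentation rather than from this example.

# Minimal sketch of a traced AWS Lambda handler using the Elastic APM Python agent.
# Agent/extension setup and configuration are assumed to be in place.
from elasticapm import capture_serverless

@capture_serverless()
def handler(event, context):
    # The handler body runs inside a traced transaction; trace data,
    # including cold-start information, is reported through the APM agent.
    return {"statusCode": 200, "body": "ok"}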

“We're excited to start using Elastic's AWS Lambda APM agent for our cloud-native applications,” said Jose Navarro, Software Engineer, Accolade, a healthcare company. “Our team at Accolade especially likes the fact that it is possible to see whether a particular invocation of the Lambda function involved a cold start directly in the trace waterfall chart. The availability of Lambda-specific metrics, such as cold start rate, at the service and transaction group levels are also very helpful.”

In addition, customers can now ingest custom logs from Amazon S3 and CloudWatch into Elasticsearch and optionally set up index templates, ingest pipelines, and output specifications. With Elastic 8.2, the Elastic Serverless Forwarder also supports CloudWatch, Kinesis Data Streams, and direct SQS as additional input sources for log ingestion. These enhancements give customers further flexibility by providing ingest options that meet their existing operating procedures and architectural preferences.
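For teams that choose to set up index templates and ingest pipelines themselves, a minimal sketch using the official Elasticsearch Python client might look like the following; the endpoint, credentials, pipeline name, and template name are placeholders, not values from this release.

# Hypothetical sketch: prepare an ingest pipeline and index template for custom logs
# with the elasticsearch Python client. All names and connection details are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="<api-key>")

# Ingest pipeline that stamps each custom log document at ingest time.
es.ingest.put_pipeline(
    id="custom-logs-pipeline",
    description="Enrich custom logs forwarded from S3/CloudWatch",
    processors=[
        {"set": {"field": "event.ingested", "value": "{{_ingest.timestamp}}"}},
    ],
)

# Index template that routes matching indices through the pipeline above.
es.indices.put_index_template(
    name="custom-logs-template",
    index_patterns=["logs-custom-*"],
    template={
        "settings": {"index.default_pipeline": "custom-logs-pipeline"},
        "mappings": {"properties": {"message": {"type": "text"}}},
    },
)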

For more information on these and additional feature updates, read the Elastic blog about what’s new in Elastic Observability 8.2.

About Elastic:

Elastic is a search company built on a free and open heritage. Anyone can use Elastic products and solutions to get started quickly and frictionlessly. Elastic offers three solutions for enterprise search, observability, and security, built on one technology stack that can be deployed anywhere. From finding documents to monitoring infrastructure to hunting for threats, Elastic makes data usable in real time and at scale. Thousands of organizations worldwide, including Cisco, eBay, Goldman Sachs, Microsoft, The Mayo Clinic, NASA, The New York Times, Wikipedia, and Verizon, use Elastic to power mission-critical systems. Founded in 2012, Elastic is a distributed company with Elasticians around the globe and is publicly traded on the NYSE under the symbol ESTC. Learn more at elastic.co.

The release and timing of any features or functionality described in this document remain at Elastic’s sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

Elastic and associated marks are trademarks or registered trademarks of Elastic N.V. and its subsidiaries. All other company and product names may be trademarks of their respective owners.

Contacts

Chloe Guillemot
Elastic Public Relations
PR-Team@elastic.co
