OpenTelemetry, own your telemetry data

Your telemetry data has more business value than you might think.

I’ve been excited about OpenTelemetry since its inception. OpenTelemetry was born out of the merger of two competing open-source projects: OpenCensus and OpenTracing. The union of two projects was itself unusual in the open-source world. Thanks to the merger, experts from both camps could start from a clean slate while learning from past mistakes. OpenCensus covered both metrics and tracing, while OpenTracing tackled only tracing. The new project was more ambitious still: go for all three telemetry pillars, tracing, metrics, and logs.

The project started with a specification first, not an implementation. At its core is a binary wire protocol built on top of Protobuf. Protobuf has built-in backward- and forward-compatibility features, which have helped the spec evolve while keeping everything stable for early adopters. Because telemetry is potentially collected on different machines and may pass through several processing hubs, a stable wire protocol means you can upgrade each component in the chain at different times, without downtime. That protocol, OTLP (the OpenTelemetry Protocol), is how we tap into the stream to extract interesting facts about your telemetry data.

Semantic conventions

The next important part is the semantic conventions. Unlike the wire protocol, which is declared stable, the semantic conventions are still marked experimental, so be aware of that. The semantic conventions give meaning to your telemetry data. The resource semantic conventions, for a start, describe the origin of your data: which process, container, operating system, cloud provider, … did the data originate from?
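To show what resource conventions look like in practice, here is a sketch that assembles a few resource attributes by hand using only the standard library. The attribute names (service.name, host.name, os.type, process.pid, process.runtime.*) come from the OpenTelemetry semantic conventions; in a real setup the SDK's resource detectors gather these for you, and the service name "checkout" is a made-up example.

```python
import os
import platform
import socket

# Resource attributes describe where telemetry originates; the keys follow
# the OpenTelemetry resource semantic conventions.
resource_attributes = {
    "service.name": "checkout",                    # hypothetical service name
    "host.name": socket.gethostname(),
    "os.type": platform.system().lower(),
    "process.pid": os.getpid(),
    "process.runtime.name": platform.python_implementation().lower(),
    "process.runtime.version": platform.python_version(),
}
print(resource_attributes)
```

Because every SDK emits these same well-known keys, a backend can group, filter, and join telemetry from wildly different sources without per-source configuration.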
