This question came up recently and I struggled to articulate my reasoning for not putting the config in the collector image. In short, “it depends” is always the answer, so you should weigh the advantages and disadvantages and make your own choice. What does it depend on? You’ve asked the right person.
Should I run a custom Collector image?
First of all, for almost all production scenarios, you should be building your own custom collector image. There are two published images from the Collector team, but neither is actually great for a production deployment.
The Core image (otel/opentelemetry-collector) is the base image and provides the most basic functionality. It will get you most of the way there if you’re using basic features like receiving/exporting OTLP, Jaeger, or Zipkin, plus some basic filtering or adding of attributes. You can see all the included receivers/exporters/processors here.
The Contrib image (otel/opentelemetry-collector-contrib) is the “kitchen sink” version that includes almost every receiver, extension, processor, connector, and exporter that exists in the contrib repository. It’s great for testing things out, as you don’t need to pick and choose packages when you’re starting out.
I would say that for a lot of getting-started scenarios, the Core image is more than enough. It does have a few notable missing components, such as the Redaction Processor and the Kubernetes Attributes Processor; however, you can mostly run without those in simpler environments.
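As a sketch of the kind of pipeline the Core image can handle, here is a minimal config that receives OTLP, stamps an attribute onto spans, and exports over OTLP/HTTP. The exporter endpoint is a placeholder for your backend:

```yaml
# Minimal Collector config: receive OTLP, add an attribute, export OTLP/HTTP.
# The endpoint below is a placeholder, not a real backend.
receivers:
  otlp:
    protocols:
      grpc: {}

processors:
  attributes:
    actions:
      - key: deployment.environment
        action: insert
        value: production

exporters:
  otlphttp:
    endpoint: https://otlp.example.com:4318

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes]
      exporters: [otlphttp]
```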
The main reason for building your own collector image, in my opinion, is reducing the attack surface in the event of vulnerabilities in the packages. The Collector maintainers do an amazing job of keeping on top of vulnerabilities as they come in (weekly dependency updates, releases every 2-3 weeks, vulnerability checkers on PRs). However, vulnerabilities do occur, and reducing the chance you’ll be impacted is important. As an example, at the time of writing, the current images are presenting a vulnerability in the Jaeger packages that can’t be fixed without an upgrade. If you’re not using Jaeger, your built package is still showing as vulnerable. The reality is that if your config file doesn’t specify the usage of those components, it’s highly unlikely to be exploitable. But your security scanning tools will complain.
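In practice, this means using the Collector team’s builder (ocb), which takes a manifest listing only the components you want compiled in. The sketch below is a minimal OTLP-only manifest; the distribution name and module versions are illustrative placeholders, so pin them to whichever Collector release you’re actually targeting:

```yaml
# builder-config.yaml -- manifest for the OpenTelemetry Collector Builder (ocb).
# Only the components listed here are compiled into the binary, so unused
# packages (and their vulnerabilities) never ship in your image.
# Version numbers are placeholders; match them to your target release.
dist:
  name: otelcol-custom
  description: Minimal OTLP-only Collector
  output_path: ./dist

receivers:
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.100.0

processors:
  - gomod: go.opentelemetry.io/collector/processor/batchprocessor v0.100.0

exporters:
  - gomod: go.opentelemetry.io/collector/exporter/otlphttpexporter v0.100.0
```

Running the builder against this manifest produces a binary you can copy into a minimal base image, giving you a collector containing nothing your pipelines don’t use.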
On top of all that, relying on a third-party container image hosting platform like DockerHub or GitHub Container Registry may not be tolerable within your organisation’s risk profile. Therefore, you’re likely going to need to copy the image and host it on your container registry of choice anyway.
What about the config file?
So now that we’re building our own image, why don’t we add the config file into it? It seems like the path of least resistance, since we won’t need to mount any files at runtime. But there are a few issues that lead me to recommend against it.
- Multiple collector configs
As you grow, you’ll start to think about scaling your observability pipeline. If you essentially hard-code your pipeline into the image, you end up maintaining a separate image for each use case.
- New Releases
The Collector releases every few weeks, so you’re always going to be updating the images. Keeping them to just the core functionality means you’ll have less to maintain.
- Config change cycles
You should be constantly updating your telemetry pipelines to match the changing demands of your applications. If you have to build a new collector image each time, push it to a registry, and then deploy it, that feedback loop is long and hard to iterate on.
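One common way to keep the config out of the image, assuming you’re deploying on Kubernetes, is to ship it as a ConfigMap and mount it into the container at deploy time. The image name, registry, and `otelcol-custom` binary below are hypothetical placeholders for your own build:

```yaml
# Config lives in a ConfigMap, decoupled from the image build.
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
data:
  config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: collector
          # Placeholder image reference -- your custom build in your registry.
          image: registry.example.com/otelcol-custom:latest
          args: ["--config=/etc/otelcol/config.yaml"]
          volumeMounts:
            - name: config
              mountPath: /etc/otelcol
      volumes:
        - name: config
          configMap:
            name: otel-collector-config
```

With this shape, a pipeline change is a ConfigMap update and a pod restart; no image build, push, or release cycle required.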
Further to separating the config file from the collector build, you should also separate output-specific variables like API keys so that they can be rotated by security teams without needing a new build. This can be done using the environment variable replacement, details here.
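As a sketch, assuming an OTLP/HTTP exporter and an `API_KEY` variable injected from your secret store, the Collector’s `${env:VAR}` substitution looks like this (the endpoint and header name are placeholders for whatever your backend expects):

```yaml
exporters:
  otlphttp:
    # Placeholder endpoint for your vendor or backend.
    endpoint: https://otlp.example.com:4318
    headers:
      # API_KEY is resolved from the environment when the Collector starts,
      # so rotating the key means swapping the secret and restarting --
      # no new image build required.
      api-key: ${env:API_KEY}
```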
So DO build your own collector image; DON’T include the configuration file in it. Decouple these because they change for different reasons.