Monitoring Docker containers is crucial, not just in production, but right from the development phase. Early monitoring helps catch performance issues, resource bottlenecks, and unexpected behaviors before they escalate. While tools like Grafana and Prometheus are powerful, they might be overkill for smaller projects or initial stages. Let's explore some lightweight alternatives to keep your containers in check without the overhead.
🛠️ The Power of Built-in Tools
Docker provides built-in commands that offer valuable insights into container performance:
docker stats
This command displays real-time metrics for your running containers, including CPU usage, memory consumption, and network I/O.
docker logs
Access the logs of a container to monitor its output and diagnose issues.
These commands are straightforward and require no additional setup, making them ideal for quick checks during development.
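For example, you can trim the `docker stats` output down to the columns you care about with a Go-template format string, and follow a container's logs live with `-f`. A quick sketch (the container name `mycontainer` is just a placeholder):

```bash
# One-shot snapshot showing only selected columns for all running containers
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}"

# Follow a container's log output live, starting from its last 50 lines
docker logs -f --tail 50 mycontainer
```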
🧪 Testing with docker.io/spkane/train-os:latest
To see these tools in action, let’s use the docker.io/spkane/train-os:latest image, which simulates system stress and is perfect for testing monitoring setups.
Run the container:
$ docker container run --rm -d --name stress docker.io/spkane/train-os:latest stress -v --cpu 2 --io 1 --vm 2 --vm-bytes 128M --timeout 60s
Unable to find image 'spkane/train-os:latest' locally
latest: Pulling from spkane/train-os
d4df0db66c89: Pull complete
19c5d5a1e2b2: Pull complete
2b25593057c7: Pull complete
0355d914b0bb: Pull complete
Digest: sha256:5acc35b4325d348c8ce6843f6751f62de6e83e518f94f5abe29d0f3ac0fb54be
Status: Downloaded newer image for spkane/train-os:latest
45e7f21918af3000a67d8f78bdfc6601d059160af9429304fca616b75e6036ac
Monitor with docker stats:
$ docker container stats stress --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
b75e1302b035 stress 429.59% 119.9MiB / 31.07GiB 0.38% 5.82kB / 126B 0B / 0B 6
You'll observe metrics like CPU and memory usage updating in real time. We pass `--no-stream` to get a single snapshot of the current state; otherwise the command keeps running and refreshes the values every few seconds.
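If you prefer machine-readable output, the same Go-template mechanism can emit JSON: `{{ json . }}` should print each row as a JSON object, which pipes nicely into `jq` (assuming `jq` is installed):

```bash
# One JSON object per container row, pretty-printed with jq
docker stats --no-stream --format "{{ json . }}" stress | jq .
```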
View logs:
$ docker logs stress
stress: info: [1] dispatching hogs: 2 cpu, 1 io, 2 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 15000us
stress: dbug: [1] setting timeout to 60s
stress: dbug: [1] --> hogcpu worker 2 [7] forked
stress: dbug: [1] --> hogio worker 1 [8] forked
stress: dbug: [1] --> hogvm worker 2 [9] forked
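When a container is noisy, the built-in filters of `docker logs` keep the output manageable, for example:

```bash
# Only the last 20 lines, with timestamps
docker logs --timestamps --tail 20 stress

# Only what was logged during the last 30 seconds
docker logs --since 30s stress
```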
📊 Accessing Metrics via the Docker API
For more advanced monitoring or integration with custom tools, you can access container stats directly through the Docker API:
$ curl --no-buffer -X GET --unix-socket /var/run/docker.sock http://docker/containers/stress/stats | head -n 1 | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{
"name": "/stress",
"id": "54370040079f3f7c3c6fd8608968050569b1141412c255949a0a72161f4a326a",
"read": "2025-06-04T19:45:21.220812324Z",
"preread": "0001-01-01T00:00:00Z",
"pids_stats": {
"current": 6,
"limit": 37968
},
"blkio_stats": {
"io_service_bytes_recursive": [
{
"major": 259,
"minor": 0,
"op": "read",
"value": 0
},
{
"major": 259,
"minor": 0,
"op": "write",
"value": 0
}
],
This command fetches real-time statistics for the stress container in JSON format, which can be parsed and consumed by any monitoring solution. It also makes it easy to build your own tooling, which is handy if you run on a tight budget or simply want full control over your stack and over how your containers behave.
Note that curl is not making a TCP/IP call here: it talks directly to the Unix socket that Docker exposes, via `--unix-socket /var/run/docker.sock`. The Docker API served over that socket exposes all the metrics we need through its /containers/<name>/stats endpoint.
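As a sketch of what you can build on top of this, the snippet below grabs a single snapshot (the `stream=false` query parameter asks recent Docker Engine versions for one sample instead of a continuous stream) and computes a CPU percentage from the delta between `cpu_stats` and `precpu_stats`, the same way `docker stats` does; the field names follow the Docker Engine API stats format:

```bash
curl -s --unix-socket /var/run/docker.sock \
  "http://docker/containers/stress/stats?stream=false" | jq '
    # CPU time consumed by the container since the previous sample
    (.cpu_stats.cpu_usage.total_usage - .precpu_stats.cpu_usage.total_usage) as $cpu_delta
    # total host CPU time over the same window
    | (.cpu_stats.system_cpu_usage - .precpu_stats.system_cpu_usage) as $sys_delta
    # scale by the number of online CPUs to match what docker stats reports
    | if $sys_delta > 0
      then ($cpu_delta / $sys_delta) * (.cpu_stats.online_cpus // 1) * 100
      else 0 end'
```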
🧐 Choosing the Right Monitoring Approach
| Scenario | Recommended Approach |
|---|---|
| Development & Testing | `docker stats` and `docker logs` |
| Custom Integrations | Docker API via `curl` |
| Production & Large Deployments | Grafana, Prometheus, etc. |
For small-scale applications or during development, Docker’s built-in tools are often sufficient. They provide immediate insights without the complexity of setting up external monitoring systems. However, as your application scales, integrating more robust solutions like Grafana and Prometheus becomes beneficial for long-term monitoring and alerting.
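If you need something between `docker stats` and a full Grafana/Prometheus stack, a few lines of shell are often enough. This is only a sketch; the file name `stats.csv` and the 30-second interval are arbitrary choices:

```bash
#!/usr/bin/env bash
# Minimal metrics logger: append one timestamped CSV line per container every 30 seconds.
while true; do
  ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
  docker stats --no-stream --format "${ts},{{.Name}},{{.CPUPerc}},{{.MemUsage}},{{.NetIO}}" >> stats.csv
  sleep 30
done
```

A file like this is easy to graph later or grep during an incident, and it runs anywhere Docker and bash are available.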
🚀 Conclusion
Monitoring doesn’t have to be complex. Starting with Docker’s native tools allows for quick and effective oversight of your containers. As your needs grow, you can seamlessly transition to more comprehensive solutions. Remember, the key is to implement monitoring early to ensure smooth and efficient container operations.