[GH-ISSUE #776] [Bug]: Docker: ffmpeg process leak in bambuddy causing memory growth over time #518

Closed
opened 2026-05-06 12:30:32 +02:00 by BreizhHardware · 7 comments

Originally created by @ChrisTheDBA on GitHub (Mar 21, 2026).
Original GitHub issue: https://github.com/maziggy/bambuddy/issues/776

Originally assigned to: @maziggy on GitHub.

Bug Description

Over time, the bambuddy container grows to several GB of RAM usage. After investigating the host and the container, the memory growth appears to come from many long-lived ffmpeg child processes spawned under the main uvicorn process.

Restarting the container immediately drops memory usage by about 4 GB, which strongly suggests the container is accumulating stale ffmpeg processes over time instead of cleaning them up.

Environment
Image: ghcr.io/maziggy/bambuddy:latest
Container name: bambuddy
Host OS: Linux
App process inside container:
uvicorn backend.app.main:app --host 0.0.0.0 --port 8011 --loop asyncio
What I observed

On the host, memory usage looked like this before and after restarting the container:

Before restart:

              total        used        free      shared  buff/cache   available
Mem:           15Gi       9.4Gi       354Mi       110Mi       5.7Gi       5.6Gi
Swap:          2.0Gi       1.0Mi       2.0Gi

After restart:

              total        used        free      shared  buff/cache   available
Mem:           15Gi       5.5Gi       4.2Gi       110Mi       5.7Gi       9.5Gi
Swap:          2.0Gi       1.0Mi       2.0Gi

So restarting bambuddy freed roughly 3.9 to 4.0 GB of RAM.

Docker also showed the container using about 4.1 GB of memory before restart.

Process inspection

Inside the container, docker top bambuddy showed a large number of ffmpeg processes running as children of the main uvicorn process.

Example:

sh -c uvicorn backend.app.main:app --host 0.0.0.0 --port ${PORT:-8000} --loop asyncio
/usr/local/bin/python3.13 /usr/local/bin/uvicorn backend.app.main:app --host 0.0.0.0 --port 8011 --loop asyncio
/usr/bin/ffmpeg -rtsp_transport tcp -rtsp_flags prefer_tcp -timeout 30000000 -buffer_size 1024000 -max_delay 500000 -i rtsps://<printer>/streaming/live/1 -f mjpeg -q:v 5 -r 15 -an -
/usr/bin/ffmpeg -rtsp_transport tcp -rtsp_flags prefer_tcp -timeout 30000000 -buffer_size 1024000 -max_delay 500000 -i rtsps://<printer>/streaming/live/1 -f mjpeg -q:v 5 -r 15 -an -
/usr/bin/ffmpeg -rtsp_transport tcp -rtsp_flags prefer_tcp -timeout 30000000 -buffer_size 1024000 -max_delay 500000 -i rtsps://<printer>/streaming/live/1 -f mjpeg -q:v 5 -r 15 -an -
...

There were many such ffmpeg processes, with start times spanning multiple days. Several appeared to be for the same RTSP stream and remained present long after they should likely have exited.

From the host process list, many of these ffmpeg processes were each using roughly 150 to 200 MB RSS; at that size, a couple dozen leaked processes is already enough to account for the ~4 GB difference.
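
For anyone wanting to quantify this on their own host, here is a rough diagnostic sketch (assuming psutil is installed; ffmpeg_children_report is just an illustrative name, and the PID comes from docker top or pgrep):

```python
import time
import psutil

def ffmpeg_children_report(uvicorn_pid: int) -> None:
    """Print every ffmpeg child of the given process, with its age and RSS."""
    parent = psutil.Process(uvicorn_pid)
    ffmpegs = [c for c in parent.children(recursive=True) if c.name() == "ffmpeg"]
    total_rss = 0
    now = time.time()
    for proc in ffmpegs:
        rss = proc.memory_info().rss
        total_rss += rss
        age_days = (now - proc.create_time()) / 86400
        print(f"pid={proc.pid}  age={age_days:.1f}d  rss={rss / 2**20:.0f} MiB")
    print(f"{len(ffmpegs)} ffmpeg processes, {total_rss / 2**30:.2f} GiB total RSS")
```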

Log observations

The container logs showed:

normal /health responses
repeated FTP retries for locating .3mf files
successful snapshot captures using RTSP or chamber image protocol
no obvious container crash/restart loop

Example patterns seen in logs:

repeated retries for .3mf download paths until a path succeeds
snapshot capture messages such as:
Capturing camera frame bytes from <printer-ip> using RTSP
Successfully captured camera frame bytes: ...

This makes me suspect the issue is not a general crash loop, but rather that camera-related ffmpeg workers are being created and not reliably reaped.

Suspected cause

My guess is one of the following:

a new ffmpeg process is being started per snapshot/view/request and not always cleaned up
reconnect logic is leaving behind old ffmpeg children
multiple viewers or polling events create duplicate RTSP workers for the same printer
process termination/reaping is not happening correctly in some code path (see the sketch below for the cleanup I would expect)
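
For context on that last point, a hypothetical sketch of the cleanup I would expect around each ffmpeg child, assuming the backend spawns them via asyncio.create_subprocess_exec (stop_ffmpeg is an illustrative name, not actual Bambuddy code):

```python
import asyncio

async def stop_ffmpeg(proc: asyncio.subprocess.Process, timeout: float = 5.0) -> None:
    """Terminate an ffmpeg child and always reap it, escalating to SIGKILL."""
    if proc.returncode is not None:
        return  # already exited and reaped
    proc.terminate()  # SIGTERM first, so ffmpeg can shut down cleanly
    try:
        await asyncio.wait_for(proc.wait(), timeout)
    except asyncio.TimeoutError:
        proc.kill()  # escalate if ffmpeg ignores SIGTERM
        await proc.wait()  # reap, so no zombie or stray child is left behind
```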

Expected Behavior

I would expect ffmpeg helper processes to be short-lived or reused in a controlled way, and not accumulate over multiple days.

Steps to Reproduce

Leave the container running for several days; memory usage creeps up over time.

Printer Model

Multiple printers

Bambuddy Version

2.1.1

Printer Firmware Version

01.11.02

Installation Method

Docker

Operating System

Docker

Relevant Logs / Support Package

No response

Screenshots

No response

Additional Context

No response

Checklist

  • I have searched existing issues to ensure this bug hasn't already been reported
  • I am using the latest version of Bambuddy
  • My printer is set to LAN Only mode
  • My printer has Developer Mode enabled
BreizhHardware (2026-05-06 12:30:32 +02:00) closed this issue and added the bug label.

@maziggy commented on GitHub (Mar 21, 2026):

What Bambuddy version are you using? 2.1.1 is not valid.


@ChrisTheDBA commented on GitHub (Mar 21, 2026):

Sorry, v0.2.1.1. I see that version 0.2.3b1-daily.20260321 is available, which I assume is a beta, so I'm on a stable version.


@maziggy commented on GitHub (Mar 21, 2026):

Please update to latest release 0.2.2.


@peter-k-de commented on GitHub (Mar 23, 2026):

I can confirm the same behavior as @ChrisTheDBA reported. I’m currently running v0.2.2.1.

For example, right now there is an ffmpeg process consuming almost an entire CPU core, even though no one is using Bambuddy or watching any stream.

Screenshot: https://github.com/user-attachments/assets/9473079c-52bc-4adb-9759-fe6da2e6f081 (process list showing an ffmpeg process near 100% CPU)

I never had such performance issues prior to (I guess) v0.2.2 or so, but at some point after updating to a newer version I noticed the host PC's fans spinning up for extended periods, often for hours, while both the printer and Bambuddy were idle.

I’m using a plain (non-Docker) installation on Ubuntu 24.04.3 (x86).

I’d be happy to provide logs or any additional information if that helps.


@maziggy commented on GitHub (Mar 24, 2026):

Root cause: When you close the camera viewer, the frontend sends a stop signal that kills the ffmpeg process. But the backend stream generator wasn't told to stop reconnecting — it just saw "ffmpeg died" and interpreted it as a dropped RTSP session, so it spawned a new ffmpeg process. This could repeat up to 30 times per stream view. The orphan cleanup couldn't catch these because they were still tracked as "active" streams.

Fix:

  • The stop endpoint now signals the generator to stop before killing the process
  • The generator checks if it was explicitly stopped before attempting reconnection (see the sketch after this list)
  • Stale stream detection now tracks timestamps per-stream instead of per-printer, so old orphaned streams can't hide behind a newer active stream's activity
  • Stale thresholds reduced from 120s/60s to 60s/30s
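
For illustration, the guarded reconnect loop might look roughly like this. This is a minimal sketch of the pattern, not Bambuddy's actual code; mjpeg_stream, stop_event, and max_reconnects are hypothetical names:

```python
import asyncio

async def mjpeg_stream(url: str, stop_event: asyncio.Event, max_reconnects: int = 30):
    """Yield MJPEG chunks from ffmpeg; reconnect only while not explicitly stopped."""
    for _ in range(max_reconnects):
        if stop_event.is_set():
            break  # the viewer closed the stream: do not respawn ffmpeg
        proc = await asyncio.create_subprocess_exec(
            "ffmpeg", "-rtsp_transport", "tcp", "-i", url,
            "-f", "mjpeg", "-q:v", "5", "-r", "15", "-an", "-",
            stdout=asyncio.subprocess.PIPE,
        )
        try:
            while chunk := await proc.stdout.read(65536):
                yield chunk  # EOF here means ffmpeg exited or was killed
        finally:
            if proc.returncode is None:
                proc.terminate()
            await proc.wait()  # always reap before deciding whether to reconnect
```

The stop endpoint then sets stop_event before killing the process, so when the generator sees EOF it re-checks the flag and exits instead of treating the death as a dropped RTSP session.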

@peter-k-de — the ffmpeg eating a full CPU core while idle is the same bug. After the camera viewer was closed, the orphaned ffmpeg kept decoding the RTSP stream with nobody consuming the output.

Fixed in the dev branch; available with the next release or daily build.

Please let me know if it works for you now.


@peter-k-de commented on GitHub (Mar 25, 2026):

After some hours of testing: Works perfectly now, CPU usage of bambuddy dropped to almost zero again. Thank you so much for your incredible work!


@maziggy commented on GitHub (Mar 25, 2026):


If you find Bambuddy useful, please consider giving it a ⭐ on GitHub (https://github.com/maziggy/bambuddy) — it helps others discover the project!
