mirror of
https://github.com/maziggy/bambuddy.git
synced 2026-05-09 05:35:30 +02:00
[GH-ISSUE #776] [Bug]: Docker: ffmpeg process leak in bambuddy causing memory growth over time #518
Originally created by @ChrisTheDBA on GitHub (Mar 21, 2026).
Original GitHub issue: https://github.com/maziggy/bambuddy/issues/776
Originally assigned to: @maziggy on GitHub.
Bug Description
Over time, the bambuddy container grows to several GB of RAM usage. After investigating the host and the container, the memory growth appears to come from many long-lived ffmpeg child processes spawned under the main uvicorn process.
Restarting the container immediately drops memory usage by about 4 GB, which strongly suggests the container is accumulating stale ffmpeg processes over time instead of cleaning them up.
Environment
Image: ghcr.io/maziggy/bambuddy:latest
Container name: bambuddy
Host OS: Linux
App process inside container:
uvicorn backend.app.main:app --host 0.0.0.0 --port 8011 --loop asyncio
What I observed
On the host, memory usage looked like this before and after restarting the container:
Before restart:
total used free shared buff/cache available
Mem: 15Gi 9.4Gi 354Mi 110Mi 5.7Gi 5.6Gi
Swap: 2.0Gi 1.0Mi 2.0Gi
After restart:
total used free shared buff/cache available
Mem: 15Gi 5.5Gi 4.2Gi 110Mi 5.7Gi 9.5Gi
Swap: 2.0Gi 1.0Mi 2.0Gi
So restarting bambuddy freed roughly 3.9 to 4.0 GB of RAM.
Docker also showed the container using about 4.1 GB of memory before restart.
Process inspection
Inside the container, docker top bambuddy showed a large number of ffmpeg processes running as children of the main uvicorn process.
Example:
sh -c uvicorn backend.app.main:app --host 0.0.0.0 --port ${PORT:-8000} --loop asyncio
/usr/local/bin/python3.13 /usr/local/bin/uvicorn backend.app.main:app --host 0.0.0.0 --port 8011 --loop asyncio
/usr/bin/ffmpeg -rtsp_transport tcp -rtsp_flags prefer_tcp -timeout 30000000 -buffer_size 1024000 -max_delay 500000 -i rtsps:///streaming/live/1 -f mjpeg -q:v 5 -r 15 -an -
/usr/bin/ffmpeg -rtsp_transport tcp -rtsp_flags prefer_tcp -timeout 30000000 -buffer_size 1024000 -max_delay 500000 -i rtsps:///streaming/live/1 -f mjpeg -q:v 5 -r 15 -an -
/usr/bin/ffmpeg -rtsp_transport tcp -rtsp_flags prefer_tcp -timeout 30000000 -buffer_size 1024000 -max_delay 500000 -i rtsps:///streaming/live/1 -f mjpeg -q:v 5 -r 15 -an -
...
There were many such ffmpeg processes, with start times spanning multiple days. Several appeared to be for the same RTSP stream and remained present long after they should likely have exited.
From the host process list, many of these ffmpeg processes were each using roughly 150 to 200 MB RSS, which adds up to the multi-GB memory usage.
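As a rough sanity check, the ~4.1 GB Docker reported is consistent with a few dozen stale workers at that RSS range (the process count below is derived, not observed; I only know the per-process RSS):

```python
# Illustrative arithmetic only: how many stale ffmpeg workers at
# 150-200 MB RSS each would account for the ~4.1 GB Docker reported.
per_proc_mb = (150 + 200) / 2      # midpoint RSS per process, in MB
procs_needed = 4100 / per_proc_mb  # processes needed to reach ~4.1 GB
print(round(procs_needed))         # roughly two dozen stale workers suffice
```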
Log observations
The container logs showed:
normal /health responses
repeated FTP retries for locating .3mf files
successful snapshot captures using RTSP or chamber image protocol
no obvious container crash/restart loop
Example patterns seen in logs:
repeated retries for .3mf download paths until a path succeeds
snapshot capture messages such as:
Capturing camera frame bytes from using RTSP
Successfully captured camera frame bytes: ...
This makes me suspect the issue is not a general crash loop, but rather that camera-related ffmpeg workers are being created and not reliably reaped.
Suspected cause
My guess is one of the following:
a new ffmpeg process is being started per snapshot/view/request and not always cleaned up
reconnect logic is leaving behind old ffmpeg children
multiple viewers or polling events create duplicate RTSP workers for the same printer
process termination/reaping is not happening correctly in some code path
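For reference, the pattern I would expect for the last point is something like the following minimal sketch (this is not bambuddy's actual code; `snapshot_frame` is a hypothetical helper, and a sleeping Python child stands in for ffmpeg): terminate the child in a `finally` block and always `await proc.wait()` so it is reaped.

```python
import asyncio
import sys

async def snapshot_frame():
    """Hypothetical sketch: spawn a short-lived helper process and
    guarantee it is terminated and reaped even if the caller errors.
    A sleeping Python child stands in for the real ffmpeg worker."""
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "import time; time.sleep(60)",
        stdout=asyncio.subprocess.DEVNULL,
    )
    try:
        # ... read one frame from proc.stdout here ...
        pass
    finally:
        if proc.returncode is None:
            proc.terminate()   # ask the child to exit
        await proc.wait()      # reap it so no zombie/stale process accumulates
    return proc.returncode

rc = asyncio.run(snapshot_frame())
print(rc is not None)  # True: the child was reaped, not left behind
```

If any code path returns or raises before the `finally` runs, the child keeps decoding the RTSP stream with nobody consuming the output, which matches the symptoms above.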
Expected Behavior
I would expect ffmpeg helper processes to be short-lived or reused in a controlled way, and not accumulate over multiple days.
Steps to Reproduce
Leave the container running for several days and memory creeps up.
Printer Model
Multiple printers
Bambuddy Version
2.1.1
Printer Firmware Version
01.11.02
Installation Method
Docker
Operating System
Docker
Relevant Logs / Support Package
No response
Screenshots
No response
Additional Context
No response
@maziggy commented on GitHub (Mar 21, 2026):
Which Bambuddy version are you using? 2.1.1 is not a valid version.
@ChrisTheDBA commented on GitHub (Mar 21, 2026):
Sorry, v0.2.1.1. I see a version 0.2.3b1-daily.20260321 is available, which I assume is a beta, so I'm on a stable version.
@maziggy commented on GitHub (Mar 21, 2026):
Please update to latest release 0.2.2.
@peter-k-de commented on GitHub (Mar 23, 2026):
I can confirm the same behavior as @ChrisTheDBA reported. I’m currently running v0.2.2.1.
For example, right now there is an ffmpeg process consuming almost an entire CPU core, even though no one is using Bambuddy or watching any stream.
I never had such performance issues prior to (I guess) v0.2.2 or so, but at some point after updating to a newer version I noticed the host PC's fans spinning up for extended periods, often for hours, while both the printer and bambuddy were idle.
I’m using a plain (non-Docker) installation on Ubuntu 24.04.3 (x86).
I’d be happy to provide logs or any additional information if that helps.
@maziggy commented on GitHub (Mar 24, 2026):
Root cause: When you close the camera viewer, the frontend sends a stop signal that kills the ffmpeg process. But the backend stream generator wasn't told to stop reconnecting — it just saw "ffmpeg died" and interpreted it as a dropped RTSP session, so it spawned a new ffmpeg process. This could repeat up to 30 times per stream view. The orphan cleanup couldn't catch these because they were still tracked as "active" streams.
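The failure mode described above can be illustrated with a minimal, hypothetical sketch (names like `StreamWorker` are invented for illustration, not taken from the codebase): a reconnect loop that only sees "child died" keeps respawning, while one guarded by an explicit stop event exits when the viewer closes.

```python
import threading

class StreamWorker:
    """Hypothetical sketch of a reconnect loop guarded by a stop event.

    Without the stop_event check, an intentional kill of the child
    (viewer closed) looks identical to a dropped RTSP session, so the
    loop keeps respawning ffmpeg-like workers up to MAX_RECONNECTS times.
    """
    MAX_RECONNECTS = 30

    def __init__(self, spawn):
        self.spawn = spawn                  # starts one child; stubbed here
        self.stop_event = threading.Event()
        self.spawn_count = 0

    def run(self):
        for _ in range(self.MAX_RECONNECTS):
            if self.stop_event.is_set():    # viewer closed: do NOT reconnect
                break
            self.spawn_count += 1
            self.spawn()                    # child runs, then "dies"

    def stop(self):
        self.stop_event.set()               # frontend's stop signal

# Simulate: the frontend's stop arrives while the first child is running.
worker = StreamWorker(spawn=lambda: worker.stop())
worker.run()
print(worker.spawn_count)  # 1: the loop exits instead of respawning 30 times
```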
Fix:
@peter-k-de — the ffmpeg eating a full CPU core while idle is the same bug. After the camera viewer was closed, the orphaned ffmpeg kept decoding the RTSP stream with nobody consuming the output.
Fixed in the dev branch; available with the next release or daily build.
Please let me know if it works for you now.
@peter-k-de commented on GitHub (Mar 25, 2026):
After some hours of testing: Works perfectly now, CPU usage of bambuddy dropped to almost zero again. Thank you so much for your incredible work!
@maziggy commented on GitHub (Mar 25, 2026):
If you find Bambuddy useful, please consider giving it a ⭐ on GitHub — it helps others discover the project!