Mirror of https://github.com/maziggy/bambuddy.git (synced 2026-05-09 05:35:30 +02:00)
[GH-ISSUE #172] [Feature]: Obico AI monitoring integration #109
Originally created by @hennott on GitHub (Jan 29, 2026).
Original GitHub issue: https://github.com/maziggy/bambuddy/issues/172
Originally assigned to: @maziggy on GitHub.
Problem or Use Case
Bambuddy is nearly perfect and has only one thing missing (for me). I'm running a little farm of nearly 30 printers, and because I use more technical filaments, some of the jobs struggle: they come out 95% perfect and then fail.
Proposed Solution
Integrate a service like Obico that analyses the current job via camera, to prevent damage to the printer during failed jobs. SimplyPrint is also testing something like this, but there I would need to replace one cloud with another.
Alternatives Considered
No response
Feature Category
Print Archiving
Priority
Nice to have
Mockups or Examples
No response
Contribution
Checklist
@maziggy commented on GitHub (Jan 29, 2026):
Certainly a nice feature, but we would need a contributor who has the required equipment to test and is willing to work on it.
@maziggy commented on GitHub (Feb 20, 2026):
Before we commit to building this, I'd like to gauge community interest. If you'd find this feature useful, please give this issue a thumbs up (👍) reaction so we can prioritize accordingly.
@maziggy commented on GitHub (Apr 12, 2026):
Will pick this up in the next few days. My plan is to support just a local Obico Docker install.
@maziggy commented on GitHub (Apr 13, 2026):
Available/Fixed in branch dev and available with the next release or daily build.
May I ask you guys please to give it a try and let me know how it works?
Docs -> https://wiki.bambuddy.cool/features/failure-detection/
@fblix commented on GitHub (Apr 14, 2026):
I tried setting this up by building my own Obico ML API container.
Here's my compose file:
When connecting everything in Bambuddy it looks fine, the connectivity test succeeds, but shortly after I get this error.
Debug log only shows:

ML API call failed for printer 1: Client error '400 BAD REQUEST' for url 'http://<Obiko-IP>:3333/p/?img=http%3A%2F%2F<BAMBUDDY-IP>%3A8000%2Fapi%2Fv1%2Fprinters%2F1%2Fcamera%2Fsnapshot' For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400

In the Obico container I get this error msg:

[2026-04-14 09:11:17,304] ERROR in server: Failed to get image ImmutableMultiDict([('img', 'http://<BAMBUDDY-IP>:8000/api/v1/printers/1/camera/snapshot')]) - 401 Client Error: Unauthorized for url: http://<BAMBUDDY-IP>:8000/api/v1/printers/1/camera/snapshot

@maziggy commented on GitHub (Apr 14, 2026):
Available/Fixed in branch dev and available with the next release or daily build. Please let me know if it works for you now.
Root cause: the Obico detection service handed the ML API a bare /camera/snapshot URL, and Bambuddy's auth returned 401 on that unauthenticated GET. The ML API then surfaced it as "Failed to get image", which Bambuddy reported back as a 400.
Fix: the snapshot endpoint already accepts a reusable camera-stream token (the same mechanism used by `<img>`-based camera consumers, since browsers can't send auth headers on image loads). The detection service now appends that token to the URL it gives the ML API. The token is cached on the service, refreshed 5 min before its 60-min expiry, and simply ignored when Bambuddy auth is disabled, so there is no behavior change for users without auth.
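The token caching and URL building described above can be sketched roughly like this. This is a minimal illustration under stated assumptions: `SnapshotTokenCache`, `issue_token`, and `snapshot_url` are hypothetical names, not Bambuddy's actual API, and the 60-minute lifetime and 5-minute refresh margin are taken from the comment above.

```python
import time
from urllib.parse import urlencode

TOKEN_TTL = 60 * 60       # token valid for 60 minutes (per the fix description)
REFRESH_MARGIN = 5 * 60   # refresh 5 minutes before expiry

class SnapshotTokenCache:
    """Caches a camera-stream token and refreshes it before it expires.

    `issue_token` stands in for whatever call actually mints a
    camera-stream token; `now` is injectable for testing.
    """
    def __init__(self, issue_token, now=time.monotonic):
        self._issue = issue_token
        self._now = now
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Reissue when missing or within the refresh margin of expiry.
        if self._token is None or self._now() >= self._expires_at - REFRESH_MARGIN:
            self._token = self._issue()
            self._expires_at = self._now() + TOKEN_TTL
        return self._token

def snapshot_url(base, printer_id, token=None):
    """Build the snapshot URL handed to the ML API.

    When auth is disabled there is no token, and the bare URL is used,
    matching the 'ignored when auth is disabled' behavior above.
    """
    url = f"{base}/api/v1/printers/{printer_id}/camera/snapshot"
    if token:
        url += "?" + urlencode({"token": token})
    return url
```

The injectable clock makes the refresh-before-expiry behavior easy to unit-test without waiting an hour.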
@fblix commented on GitHub (Apr 14, 2026):
I'll pull the next daily build as soon as its released and give it a try.
Thanks for the rapid response!
@fblix commented on GitHub (Apr 14, 2026):
Seems to work fine now. No more errors!
@maziggy commented on GitHub (Apr 15, 2026):
If you find Bambuddy useful, please consider giving it a ⭐ on GitHub — it helps others discover the project!
@fblix commented on GitHub (Apr 16, 2026):
Hey y'all back with another issue.
It seems Bambuddy takes too long to take a snapshot. I get this error msg after a while:

[2026-04-16 06:29:55,717] ERROR in server: Failed to get image ImmutableMultiDict([('img', 'http://10.0.4.138:8000/api/v1/printers/1/camera/snapshot?token=MYTOKEN')]) - HTTPConnectionPool(host='BAMBUDDY-IP', port=8000): Read timed out. (read timeout=5)

Upon testing, it takes my Bambuddy instance roughly 10 s to reply to a curl on the snapshot endpoint.
The timeout seems to be hardcoded in Obico's server.py. I have overwritten the timeout value (just for testing) in my container; the yaml looks like this now:
@maziggy commented on GitHub (Apr 16, 2026):
Thanks for the detailed repro with the exact timeout numbers — that was enough to fix it properly instead of just documenting the workaround.
Your instinct was right that raising the timeout helps, but it's a workaround: Obico's server.py has timeout = (0.1, 5) hardcoded, so every stock ML API container has the same 5s ceiling racing our snapshot pipeline (TLS proxy + ffmpeg + RTSP keyframe wait regularly pushes that past 5s on cold calls).
Fixed on dev by flipping the flow around: Bambuddy's detection loop now captures the JPEG locally with a 20 s timeout we control, stashes the bytes under a one-shot random nonce, and hands Obico's ML API a new /api/v1/obico/cached-frame/{nonce} URL that returns the pre-captured bytes in <50 ms. Obico's 5 s timeout no longer races the capture: its fetch is just an in-memory lookup.
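The one-shot nonce store at the heart of that flow can be sketched as follows. `FrameCache` is a hypothetical name for illustration, not Bambuddy's actual class; the point is that the cached-frame endpoint's handler reduces to a dictionary lookup, so the ML API's 5 s read timeout can no longer race the RTSP capture.

```python
import secrets
import threading

class FrameCache:
    """One-shot store for pre-captured JPEG frames, keyed by a random nonce.

    The detection loop captures the frame itself (with a timeout it
    controls), calls put() to stash the bytes, and hands the ML API a
    /cached-frame/{nonce} URL whose handler just calls pop().
    """
    def __init__(self):
        self._frames = {}
        self._lock = threading.Lock()

    def put(self, jpeg_bytes: bytes) -> str:
        # An unguessable nonce doubles as a single-use access credential.
        nonce = secrets.token_urlsafe(16)
        with self._lock:
            self._frames[nonce] = jpeg_bytes
        return nonce

    def pop(self, nonce: str):
        """Return the frame once; the nonce is consumed and cannot be replayed."""
        with self._lock:
            return self._frames.pop(nonce, None)
```

Consuming the nonce on first read also keeps the cache from growing: every successful fetch removes its entry.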
Available/Fixed in branch dev and available with the next release or daily build.
You can drop the sed patch and the --timeout 120 gunicorn override from your compose — stock Obico ML API containers should work out of the box now. Please give it a try and let me know.
@fblix commented on GitHub (Apr 16, 2026):
I'll wait for the next daily build and provide an update.
@fblix commented on GitHub (Apr 16, 2026):
The cached-frame fix from the newest daily version resolves the original timeout issue, the sed workaround is no longer needed and I've removed it from my compose. The Obico ML API now gets the frames. Thanks for the quick fix!
However, I'm seeing a separate (minor) issue on the same build: the periodic camera stream cleanup task occasionally kills an ffmpeg process that is actively being used by the Obico detection service for snapshot capture.
This results in this warning being permanently shown in the GUI:
Log pattern (IPs redacted):
The cleanup scans /proc for ffmpeg processes, assumes they're orphaned, and sends SIGKILL (exit code -9) — but the process was actively owned by the Obico detection loop's capture_camera_frame_bytes() call. The detection service recovers on the next poll interval (~5s), so it's non-blocking, but it produces unnecessary errors in the UI ("Failed to capture snapshot for printer 1") and causes missed detection frames.
I was able to observe it 3 times within ~8 minutes of uptime during an active print.
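A common way to avoid the race described above is to make capture calls register the PIDs they own, so the periodic cleanup only kills truly orphaned processes. This is a minimal sketch of that pattern under assumed names (`FfmpegRegistry` is hypothetical), not necessarily how Bambuddy actually fixed it:

```python
import threading

class FfmpegRegistry:
    """Tracks ffmpeg PIDs actively owned by a capture call, so a
    periodic orphan-cleanup task can skip them instead of SIGKILLing
    a process that the detection loop is still reading from.
    """
    def __init__(self):
        self._active = set()
        self._lock = threading.Lock()

    def claim(self, pid: int):
        """Called by the capture path right after spawning ffmpeg."""
        with self._lock:
            self._active.add(pid)

    def release(self, pid: int):
        """Called when the capture finishes (success or failure)."""
        with self._lock:
            self._active.discard(pid)

    def cleanup_candidates(self, ffmpeg_pids):
        """Given ffmpeg PIDs found in /proc, return only the orphaned ones."""
        with self._lock:
            return [p for p in ffmpeg_pids if p not in self._active]
```

The capture path would wrap its ffmpeg lifetime in claim()/release() (ideally via try/finally), and the cleanup task would kill only what `cleanup_candidates()` returns.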
@maziggy commented on GitHub (Apr 17, 2026):
Available/Fixed in branch dev and available with the next release or daily build. Please let me know if it works for you now.
@fblix commented on GitHub (Apr 17, 2026):
Tested on latest daily build (post-ef37ffa). The ffmpeg cleanup issue is resolved — no more SIGKILL/exit -9 errors. However, the ML API calls now fail silently on every poll cycle:
The camera frame capture works reliably now (no more orphan kills), but every ML API call fails with an empty error message. The warning text after the colon is blank — no HTTP status, no exception detail. Pattern repeats every ~5s poll cycle without recovery.
The Obico container shows zero logs for this.
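A blank message after the colon is typical when the exception being logged stringifies to an empty string (many exception classes do when raised without arguments). A hedged sketch of defensive log formatting that avoids this, as a hypothetical helper rather than Bambuddy's actual fix:

```python
def describe_error(exc: BaseException) -> str:
    """Format an exception so a log line is never blank after the colon.

    Falls back to the exception's type name when str(exc) is empty, and
    appends the chained cause's type if one exists, so 'ML API call
    failed for printer 1: <nothing>' becomes at least a class name.
    """
    msg = str(exc).strip() or exc.__class__.__name__
    if exc.__cause__ is not None:
        msg += f" (caused by {exc.__cause__.__class__.__name__})"
    return msg
```

Logging `describe_error(exc)` (or `repr(exc)`) instead of `str(exc)` would have shown the HTTP status or exception class here.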
@maziggy commented on GitHub (Apr 18, 2026):
Available/Fixed in branch dev and available with the next release or daily build. Please let me know how it goes.
@fblix commented on GitHub (Apr 19, 2026):
Tested on the latest v0.2.3 release (upgraded from daily dev build).
The issue seems to be largely resolved; however, one thing I noticed: on startup/restart the very first Obico capture sometimes hits the 20 s timeout:
This only happens once, right after startup; subsequent polls (~every 8 s) capture and serve reliably (typically in ~1.2 s). Likely it's the first RTSP connection after a cold start taking longer than 20 s. So not a super big deal, but the error message stays persistent in the UI even though subsequent calls succeed again:
@maziggy commented on GitHub (Apr 20, 2026):
Good catch — the cold-start RTSP capture can genuinely exceed the 20 s timeout on the very first frame after a restart, but the real bug here is that the red Status banner stayed up even though every subsequent poll succeeded. The service was writing _last_error on failure and never clearing it on success. Fixed on dev: a successful capture + ML call + classification now clears the banner. Configuration errors (missing external_url / ml_url) still persist because they abort before the clearing line.
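The set-on-failure / clear-on-success logic described above, as a minimal sketch (class and method names are hypothetical; Bambuddy's actual implementation lives in the detection service):

```python
class DetectionStatus:
    """Tracks the last error shown in the UI status banner.

    Transient failures set the banner; any fully successful cycle
    (capture + ML call + classification) clears it. A configuration
    error would abort before run_cycle() is reached, so it persists,
    matching the behavior described above.
    """
    def __init__(self):
        self._last_error = None

    def run_cycle(self, capture, classify):
        try:
            frame = capture()    # may raise, e.g. cold-start RTSP timeout
            classify(frame)      # ML call + classification
        except Exception as exc:
            self._last_error = str(exc) or exc.__class__.__name__
            return
        self._last_error = None  # the fix: success clears the banner

    @property
    def banner(self):
        return self._last_error
```

The original bug was equivalent to omitting the `self._last_error = None` line: a single cold-start timeout would then pin the banner forever.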
Available in the next daily build — please let me know if the banner behaves for you now.
@fblix commented on GitHub (Apr 20, 2026):
Looks good to me!
UI looks good, the error msg in the UI now disappears after the next successful call.
Thanks for your quick fixes on this topic!