[GH-ISSUE #172] [Feature]: Obico AI monitoring integration #109

Closed
opened 2026-05-06 12:25:55 +02:00 by BreizhHardware · 19 comments

Originally created by @hennott on GitHub (Jan 29, 2026).
Original GitHub issue: https://github.com/maziggy/bambuddy/issues/172

Originally assigned to: @maziggy on GitHub.

Problem or Use Case

Bambuddy is nearly perfect and has only one thing missing (for me). I'm running a small farm of nearly 30 printers, and because I use more technical filaments, some of the jobs struggle: they come out 95% perfect and then fail.

Proposed Solution

Integrate a service like Obico that analyses the current job by camera to prevent damage to the printer during failed jobs. SimplyPrint is also testing something like this, but there I would need to replace one cloud with another.

Alternatives Considered

No response

Feature Category

Print Archiving

Priority

Nice to have

Mockups or Examples

No response

Contribution

  • I would be willing to help implement this feature

Checklist

  • I have searched existing issues to ensure this feature hasn't already been requested
BreizhHardware 2026-05-06 12:25:55 +02:00
Author
Owner

@maziggy commented on GitHub (Jan 29, 2026):

Certainly a nice feature, but we would need a contributor who has the required equipment to test and is willing to work on it.


@maziggy commented on GitHub (Feb 20, 2026):

Before we commit to building this, I'd like to gauge community interest. If you'd find this feature useful, please give this issue a thumbs up (👍) reaction so we can prioritize accordingly.


@maziggy commented on GitHub (Apr 12, 2026):

Will pick this up in the next few days. My plan is to support just a local Obico Docker install.


@maziggy commented on GitHub (Apr 13, 2026):

Available/Fixed in branch dev and available with the next release or daily build.

May I ask you guys please to give it a try and let me know how it works?

Docs -> https://wiki.bambuddy.cool/features/failure-detection/

Image

@fblix commented on GitHub (Apr 14, 2026):

I tried setting this up by building my own obico ml api container.

Here's my compose file:

  obico-ml-api:
    container_name: obico-ml-api
    build:
      context: https://github.com/TheSpaghettiDetective/obico-server.git#release:ml_api
    environment:
      - DEBUG=True
      - FLASK_APP=server.py
    command: bash -c "gunicorn --bind 0.0.0.0:3333 --workers 1 wsgi"
    ports:
      - "3333:3333"
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3333/hc/ || exit 1"]
      start_period: 30s
      interval: 30s
      timeout: 10s
      retries: 3

When connecting everything in Bambuddy it looks fine and the connectivity test succeeds, but shortly after I get this error:

Image

Debug log only shows
ML API call failed for printer 1: Client error '400 BAD REQUEST' for url 'http://<Obiko-IP>:3333/p/?img=http%3A%2F%2F<BAMBUDDY-IP>%3A8000%2Fapi%2Fv1%2Fprinters%2F1%2Fcamera%2Fsnapshot' For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400

In the Obico container I get this error msg:
[2026-04-14 09:11:17,304] ERROR in server: Failed to get image ImmutableMultiDict([('img', 'http://<BAMBUDDY-IP>:8000/api/v1/printers/1/camera/snapshot')]) - 401 Client Error: Unauthorized for url: http://<BAMBUDDY-IP>:8000/api/v1/printers/1/camera/snapshot


@maziggy commented on GitHub (Apr 14, 2026):

Available/Fixed in branch dev and available with the next release or daily build. Please let me know if it works for you now.

Root cause: the Obico detection service handed the ML API a bare /camera/snapshot URL, and Bambuddy's auth returned 401 on that unauthenticated GET. The ML API then surfaced it as "Failed to get image", which Bambuddy reported back as a 400.

Fix: the snapshot endpoint already accepts a reusable camera-stream token (the same mechanism used by `<img>`-based camera consumers, since browsers can't send auth headers on image loads). The detection service now appends that token to the URL it gives the ML API. The token is cached on the service, refreshed 5 min before its 60-min expiry, and is simply ignored when Bambuddy auth is disabled — so no behavior change for users without auth.
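The token handling described here could be sketched roughly as follows. This is a minimal illustration assuming a `fetch_token` callable that mints a camera-stream token; all names (`CameraTokenCache`, `fetch_token`) are hypothetical, not the actual Bambuddy code:

```python
import time

TOKEN_TTL = 60 * 60       # 60-minute token expiry, per the comment above
REFRESH_MARGIN = 5 * 60   # refresh 5 minutes before expiry

class CameraTokenCache:
    """Caches a reusable camera-stream token and refreshes it early."""

    def __init__(self, fetch_token, now=time.monotonic):
        self._fetch = fetch_token   # callable that mints a fresh token
        self._now = now             # injectable clock, for testability
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Refresh when missing or within 5 minutes of expiry.
        if self._token is None or self._now() >= self._expires_at - REFRESH_MARGIN:
            self._token = self._fetch()
            self._expires_at = self._now() + TOKEN_TTL
        return self._token

    def snapshot_url(self, base_url, auth_enabled=True):
        # With auth disabled the token would simply be ignored anyway,
        # so leave the URL untouched for those users.
        if not auth_enabled:
            return base_url
        return f"{base_url}?token={self.get()}"
```

Refreshing ahead of expiry avoids the edge case where a token expires between the detection service building the URL and the ML API fetching it.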


@fblix commented on GitHub (Apr 14, 2026):

I'll pull the next daily build as soon as it's released and give it a try.

Thanks for the rapid response!


@fblix commented on GitHub (Apr 14, 2026):

Seems to work fine now. No more errors!


@maziggy commented on GitHub (Apr 15, 2026):


If you find Bambuddy useful, please consider giving it a ⭐ on GitHub (https://github.com/maziggy/bambuddy) — it helps others discover the project!


@fblix commented on GitHub (Apr 16, 2026):

Hey y'all back with another issue.

It seems Bambuddy takes too long to take a snapshot. I get this error message after a while:

[2026-04-16 06:29:55,717] ERROR in server: Failed to get image ImmutableMultiDict([('img', 'http://10.0.4.138:8000/api/v1/printers/1/camera/snapshot?token=MYTOKEN')]) - HTTPConnectionPool(host='BAMBUDDY-IP', port=8000): Read timed out. (read timeout=5)

Upon testing, it takes my Bambuddy instance roughly 10s to reply to a curl request on the snapshot endpoint.

The timeout seems to be hardcoded in Obico's server.py (https://raw.githubusercontent.com/TheSpaghettiDetective/obico-server/release/ml_api/server.py). I have overwritten the timeout value (just for testing) in my container. The yaml looks like this now:


  obico-ml-api:
    container_name: obico-ml-api
    build:
      context: https://github.com/TheSpaghettiDetective/obico-server.git#release:ml_api
    environment:
      - DEBUG=True
      - FLASK_APP=server.py
    command: bash -c "sed -i 's/timeout = (0.1, 5)/timeout = (5, 30)/' server.py && gunicorn --bind 0.0.0.0:3333 --workers 1 --timeout 120 wsgi"
    ports:
      - "3333:3333"
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3333/hc/ || exit 1"]
      start_period: 30s
      interval: 30s
      timeout: 10s
      retries: 3


@maziggy commented on GitHub (Apr 16, 2026):

Thanks for the detailed repro with the exact timeout numbers — that was enough to fix it properly instead of just documenting the workaround.

Your instinct was right that raising the timeout helps, but it's a workaround: Obico's server.py has timeout = (0.1, 5) hardcoded, so every stock ML API container has the same 5s ceiling racing our snapshot pipeline (TLS proxy + ffmpeg + RTSP keyframe wait regularly pushes that past 5s on cold calls).

Fixed on dev by flipping the flow around: Bambuddy's detection loop now captures the JPEG locally with a 20s timeout we control, stashes the bytes under a one-shot random nonce, and hands Obico's ML API a new /api/v1/obico/cached-frame/{nonce} URL that returns the pre-captured bytes in <50ms. Obico's 5s timeout no longer races the capture — its fetch is just an in-memory lookup.
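A minimal sketch of that cached-frame flow. The class, function names, and wiring here are illustrative assumptions, not Bambuddy's actual implementation:

```python
import secrets
import threading

class FrameCache:
    """One-shot store for pre-captured JPEG frames, keyed by random nonce."""

    def __init__(self):
        self._frames = {}
        self._lock = threading.Lock()

    def put(self, jpeg_bytes: bytes) -> str:
        # Unpredictable nonce so cached-frame URLs can't be guessed.
        nonce = secrets.token_urlsafe(16)
        with self._lock:
            self._frames[nonce] = jpeg_bytes
        return nonce

    def pop(self, nonce: str):
        # pop() makes each nonce serve exactly once.
        with self._lock:
            return self._frames.pop(nonce, None)

cache = FrameCache()

def detection_cycle(capture, ml_api_call, base_url):
    # Capture locally under a timeout we control (e.g. an RTSP/ffmpeg
    # grab), then hand the ML API a URL whose fetch is only an
    # in-memory lookup, so its 5s ceiling never races the capture.
    frame = capture()
    nonce = cache.put(frame)
    return ml_api_call(f"{base_url}/api/v1/obico/cached-frame/{nonce}")
```

The key property is that the slow step (camera capture) happens before Obico is ever involved; the endpoint Obico fetches only returns bytes that already exist.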

Available/Fixed in branch dev and available with the next release or daily build.

You can drop the sed patch and the --timeout 120 gunicorn override from your compose — stock Obico ML API containers should work out of the box now. Please give it a try and let me know.


If you find Bambuddy useful, please consider giving it a ⭐ on GitHub (https://github.com/maziggy/bambuddy) — it helps others discover the project!


@fblix commented on GitHub (Apr 16, 2026):

I'll wait for the next daily build and will provide an update


@fblix commented on GitHub (Apr 16, 2026):

The cached-frame fix from the newest daily version resolves the original timeout issue, the sed workaround is no longer needed and I've removed it from my compose. The Obico ML API now gets the frames. Thanks for the quick fix!

However, I'm seeing a separate (minor) issue on the same build: the periodic camera stream cleanup task occasionally kills an ffmpeg process that is actively being used by the Obico detection service for snapshot capture.

This results in this warning being permanently shown in the GUI:

Image

Log pattern (IPs redacted):

15:07:40 INFO  Capturing camera frame bytes from <PRINTER-IP> using RTSP (model: H2S)  
15:07:49 INFO  Killing orphaned ffmpeg process found via /proc (pid=483)  
15:07:49 INFO  Cleaned up 1 orphaned camera stream(s)  
15:07:49 ERROR ffmpeg frame bytes capture failed (code -9): ffmpeg version 7.1.3-0+deb13u1 ...  
15:07:49 WARN  Failed to capture snapshot for printer 1  
15:07:54 INFO  Capturing camera frame bytes from <PRINTER_IP> using RTSP (model: H2S)  
15:07:56 INFO  Successfully captured camera frame bytes: 429248 bytes   

The cleanup scans /proc for ffmpeg processes, assumes they're orphaned, and sends SIGKILL (exit code -9) — but the process was actively owned by the Obico detection loop's capture_camera_frame_bytes() call. The detection service recovers on the next poll interval (~5s), so it's non-blocking, but it produces unnecessary errors in the UI ("Failed to capture snapshot for printer 1") and causes missed detection frames.
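One common way to fix this kind of race is to let the capture path claim the PID it owns before the cleanup task scans /proc, and have the cleanup skip anything claimed. A sketch with hypothetical names, not the actual Bambuddy fix:

```python
import threading
from contextlib import contextmanager

class FfmpegRegistry:
    """Tracks ffmpeg PIDs that are deliberately in use by a capture."""

    def __init__(self):
        self._active = set()
        self._lock = threading.Lock()

    @contextmanager
    def owned(self, pid):
        # Register before the capture runs, unregister when it finishes,
        # even if the capture raises.
        with self._lock:
            self._active.add(pid)
        try:
            yield
        finally:
            with self._lock:
                self._active.discard(pid)

    def is_owned(self, pid):
        with self._lock:
            return pid in self._active

registry = FfmpegRegistry()

def cleanup_orphans(proc_pids, kill):
    """Kill only the ffmpeg processes nobody has claimed."""
    killed = []
    for pid in proc_pids:
        if registry.is_owned(pid):
            continue  # active capture: leave it alone
        kill(pid)
        killed.append(pid)
    return killed
```

With this shape, the SIGKILL of pid 483 in the log above would have been skipped because the detection loop held the registration for the duration of its capture.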

I was able to observe it 3 times within ~8 minutes of uptime during an active print.


@maziggy commented on GitHub (Apr 17, 2026):

Available/Fixed in branch dev and available with the next release or daily build. Please let me know if it works for you now.


If you find Bambuddy useful, please consider giving it a ⭐ on GitHub (https://github.com/maziggy/bambuddy) — it helps others discover the project!


@fblix commented on GitHub (Apr 17, 2026):

Tested on latest daily build (post-ef37ffa). The ffmpeg cleanup issue is resolved — no more SIGKILL/exit -9 errors. However, the ML API calls now fail silently on every poll cycle:

20:02:50 INFO  Successfully captured camera frame bytes: 357931 bytes
20:02:50 WARN  ML API call failed for printer 1: 

20:02:55 INFO  Capturing camera frame bytes from <PRINTER_IP> using RTSP (model: H2S)
20:02:56 INFO  Successfully captured camera frame bytes: 360913 bytes
20:02:56 WARN  ML API call failed for printer 1: 

20:03:01 INFO  Capturing camera frame bytes from <PRINTER_IP> using RTSP (model: H2S)
20:03:03 INFO  Successfully captured camera frame bytes: 357769 bytes
20:03:03 WARN  ML API call failed for printer 1: 

20:03:08 INFO  Capturing camera frame bytes from <PRINTER_IP> using RTSP (model: H2S)
20:03:09 INFO  Successfully captured camera frame bytes: 356624 bytes
20:03:09 WARN  ML API call failed for printer 1: 

20:03:14 INFO  Capturing camera frame bytes from <PRINTER_IP> using RTSP (model: H2S)
20:03:15 INFO  Successfully captured camera frame bytes: 335281 bytes
20:03:15 WARN  ML API call failed for printer 1: 

20:03:20 INFO  Capturing camera frame bytes from <PRINTER_IP> using RTSP (model: H2S)
20:03:21 INFO  Successfully captured camera frame bytes: 324936 bytes
20:03:21 WARN  ML API call failed for printer 1: 

20:03:26 INFO  Capturing camera frame bytes from <PRINTER_IP> using RTSP (model: H2S)
20:03:28 INFO  Successfully captured camera frame bytes: 373976 bytes
20:03:28 WARN  ML API call failed for printer 1: 

The camera frame capture works reliably now (no more orphan kills), but every ML API call fails with an empty error message. The warning text after the colon is blank — no HTTP status, no exception detail. Pattern repeats every ~5s poll cycle without recovery.
The obico container shows zero logs for this.
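For what it's worth, a blank message after the colon usually means the handler logged `str(exc)` and the exception carried no message: in Python, `str()` of an argument-less exception is the empty string. A small self-contained demonstration (the `MLAPIError` name and the helper are hypothetical, not Bambuddy code):

```python
class MLAPIError(Exception):
    pass

err = MLAPIError()
# str() of an exception with no args is empty, which produces exactly
# the blank "ML API call failed for printer 1: " pattern above.
assert str(err) == ""
# repr() still identifies the failure by type.
assert repr(err) == "MLAPIError()"

def format_ml_failure(printer_id, exc):
    # Fall back to repr() when the exception message is empty, so the
    # log line always names at least the exception class.
    msg = str(exc) or repr(exc)
    return f"ML API call failed for printer {printer_id}: {msg}"
```

Logging `repr(exc)` (or the exception type) in the warning would make this class of failure diagnosable from the Bambuddy log alone.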


@maziggy commented on GitHub (Apr 18, 2026):

Available/Fixed in branch dev and available with the next release or daily build. Please let me know how it goes.


@fblix commented on GitHub (Apr 19, 2026):

Tested on the latest v0.2.3 release (upgraded from daily dev build).
The issue seems to be largely resolved; however, one thing I noticed: on startup/restart the very first Obico capture sometimes hits the 20s timeout:

15:08:10 ERROR Camera frame bytes capture timed out after 20s
15:08:10 WARN  Failed to capture snapshot for printer 1
15:08:15 INFO  Capturing camera frame bytes from <PRINTER_IP> using RTSP (model: H2S)
15:08:17 INFO  Successfully captured camera frame bytes: 462956 bytes

This only happens once right after startup — subsequent polls (~every 8s) capture and serve reliably (typically in ~1.2s). Likely the first RTSP connection after a cold start takes longer than 20s. So not a big deal, but the error message stays in the UI even after subsequent calls succeed:

Image

@maziggy commented on GitHub (Apr 20, 2026):

Good catch — the cold-start RTSP capture can genuinely exceed the 20s timeout on the very first frame after a restart, but the real bug here is that the red Status banner stayed up even though every subsequent poll succeeded. The service was writing _last_error on failure and never clearing it on success. Fixed on dev: a successful capture + ML call + classification now clears the banner. Configuration errors (missing external_url / ml_url) still persist, because they abort before the point where the banner would be cleared.
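The set-on-failure / clear-on-success pattern described here, as a minimal sketch (illustrative names, not the actual service code):

```python
class DetectionStatus:
    """Tracks the last error shown in the UI status banner."""

    def __init__(self):
        self.last_error = None   # None means no banner is shown

    def run_cycle(self, capture, classify):
        try:
            frame = capture()        # e.g. RTSP snapshot grab
            result = classify(frame) # e.g. ML API call + classification
        except Exception as exc:
            # Failure sets the banner text...
            self.last_error = str(exc) or repr(exc)
            return None
        # ...and a fully successful cycle clears it, so a one-off
        # cold-start timeout no longer sticks in the UI.
        self.last_error = None
        return result
```

Configuration errors that return before `run_cycle` is ever entered would, as the comment notes, still leave a persistent banner.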

Available in the next daily build — please let me know if the banner behaves for you now.


If you find Bambuddy useful, please consider giving it a ⭐ on GitHub (https://github.com/maziggy/bambuddy) — it helps others discover the project!


@fblix commented on GitHub (Apr 20, 2026):

Looks good to me!
UI looks good, the error msg in the UI now disappears after the next successful call.

Thanks for your quick fixes on this topic!
