[PR #295] [MERGED] bug/spoolman-creates-dublicate-spools #991

Closed
opened 2026-05-06 12:34:23 +02:00 by BreizhHardware · 0 comments

📋 Pull Request Information

Original PR: https://github.com/maziggy/bambuddy/pull/295
Author: @bambuman
Created: 2/7/2026
Status: Merged
Merged: 2/8/2026
Merged by: @maziggy

Base: 0.1.9b ← Head: bug/spoolman-creates-dublicate-spools


📝 Commits (2)

  • deab812 Optimize AMS Spoolman sync performance with spool caching
  • c04df8c Add retry logic and connection resilience to Spoolman sync

📊 Changes

4 files changed (+378 additions, -24 deletions)

View changed files

📝 backend/app/api/routes/spoolman.py (+54 -4)
📝 backend/app/main.py (+32 -1)
📝 backend/app/services/spoolman.py (+89 -18)
📝 backend/tests/unit/services/test_spoolman_service.py (+203 -1)

📄 Description

Description

This error creates duplicate spools (see screenshot) on application start. I haven't seen it on other occasions.

2026-02-07 22:38:46,761 DEBUG [httpcore.http11] receive_response_headers.failed exception=ReadError(ClosedResourceError())
2026-02-07 22:38:46,762 ERROR [backend.app.services.spoolman] Failed to get spools from Spoolman:

This PR optimizes AMS-to-Spoolman sync performance and adds connection resilience for large spool databases (300+ spools).

Problem: Each AMS tray independently fetched all spools from Spoolman, causing redundant API calls per sync operation. This caused timeouts and poor performance with large spool databases.

Solution: Fetch all spools once and reuse cached data across all tray operations. Added retry logic (3 attempts, 500ms delay) with connection recreation to handle transient network errors, ensuring the optimization remains reliable.
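The fetch-once pattern can be sketched as follows. This is a minimal illustration, not the actual bambuddy code: `get_spools`, `sync_tray`, and `sync_all_trays` are stand-ins, and the Spoolman call is simulated so the effect on API-call count is visible.

```python
import asyncio

FETCH_COUNT = 0  # counts simulated Spoolman API calls


async def get_spools():
    """Stand-in for the Spoolman 'list all spools' request."""
    global FETCH_COUNT
    FETCH_COUNT += 1
    return [{"id": i, "filament": f"PLA-{i}"} for i in range(300)]


async def sync_tray(tray, cached_spools=None):
    # Backward compatible: fall back to a fresh fetch when no cache is given.
    spools = cached_spools if cached_spools is not None else await get_spools()
    return next((s for s in spools if s["id"] == tray), None)


async def sync_all_trays(trays):
    cached_spools = await get_spools()  # fetch once, reuse for every tray
    return [await sync_tray(t, cached_spools=cached_spools) for t in trays]


results = asyncio.run(sync_all_trays(list(range(12))))
print(FETCH_COUNT)  # 1: a single fetch serves all 12 trays
```

With 12 trays and 300 spools, the per-tray fetch pattern would have pulled the full spool list 12 times per sync; the cache reduces that to one request.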

Type of Change

  • [x] Bug fix (non-breaking change that fixes an issue)
  • [ ] New feature (non-breaking change that adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to change)
  • [ ] Documentation update
  • [ ] Code refactoring
  • [x] Performance improvement
  • [x] Test addition or update

Changes Made

  • Added optional cached_spools parameter to sync functions (backward compatible)
  • Fetch spools once before loops in on_ams_change, sync_single_printer, and sync_all_printers
  • Added retry logic to get_spools() with connection recreation on errors
  • Configured httpx.Limits to prevent stale connection reuse
  • Added 9 new unit tests (all 1018 tests passing)
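The retry behaviour listed above (3 attempts, 500 ms delay, fresh connection per attempt) can be sketched like this. It is a simplified illustration under assumptions: `FlakyConnection` simulates a stale connection that fails on first use, and the names are not the real bambuddy API.

```python
import asyncio

MAX_ATTEMPTS = 3
RETRY_DELAY = 0.5  # seconds between attempts


class FlakyConnection:
    """Simulated client whose first request dies like a stale keep-alive."""

    calls = 0

    async def fetch(self):
        FlakyConnection.calls += 1
        if FlakyConnection.calls == 1:
            raise ConnectionError("ReadError(ClosedResourceError())")
        return ["spool-1", "spool-2"]


async def get_spools_with_retry():
    last_exc = None
    for _attempt in range(MAX_ATTEMPTS):
        client = FlakyConnection()  # recreate the client, never reuse a dead one
        try:
            return await client.fetch()
        except ConnectionError as exc:
            last_exc = exc
            await asyncio.sleep(RETRY_DELAY)
    raise last_exc


spools = asyncio.run(get_spools_with_retry())
print(spools)  # succeeds on the second attempt
```

In real httpx code, connection reuse can additionally be limited via `httpx.Limits` (for example, `max_keepalive_connections=0` disables keep-alive reuse entirely), which is the kind of configuration the PR applies to avoid handing a stale pooled connection to the next request.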

Screenshots

(screenshot: duplicate spool entries created at application start)

Testing

  • I have tested this on my local machine
  • I have tested with my printer model: P1S

Test Environment:

  • Spoolman instance with 300 spools
  • 2 × P1S printers:
    • Printer 1: 1 AMS unit (4 trays)
    • Printer 2: 2 AMS units (8 trays)
  • Total: 12 AMS trays being synced

Checklist

  • My code follows the project's coding style
  • I have commented my code where necessary
  • I have updated the documentation (if needed)
  • My changes generate no new warnings
  • I have tested my changes thoroughly

Additional Notes

Future improvement: The ideal solution would be enhancing Spoolman's API to support querying by extra data/tag, eliminating the need for client-side caching. Spoolman has a new release and appears actively maintained again, making this a viable contribution opportunity. I'll look into that.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
