Mirror of https://github.com/maziggy/bambuddy.git (synced 2026-05-09 05:35:30 +02:00)
[GH-ISSUE #1150] [Bug]: MQTT print command acknowledgment timeout (15s) too short for P1P — causes 0500-4003 parse error #826
Originally created by @d3ni3 on GitHub (Apr 28, 2026).
Original GitHub issue: https://github.com/maziggy/bambuddy/issues/1150
Originally assigned to: @maziggy on GitHub.
Component
Bambuddy
Bug Description
When sending a print job from Bambuddy's File Manager or Archive (reprint), the printer occasionally returns error `0500-4003` ("unable to parse print file"). The FTP upload completes successfully (STOR 226 confirmed), but the P1P takes significantly longer than 15 seconds to acknowledge the MQTT print command.

The print-dispatch verifier treats the unacknowledged `project_file` command as a broken MQTT connection and forces a fresh MQTT session. However, the P1P is not in a zombie state; it is genuinely processing the command, just slowly. It starts the print ~2 minutes 15 seconds after the original command was sent, well after Bambuddy has already given up and forced a reconnect. That reconnect mid-initialization appears to be what causes the `0500-4003` parse error on the printer side.

In other words: the 15 s timeout is a reasonable safeguard against zombie MQTT connections, but it fires too aggressively for the P1P, which has a legitimate but slow acknowledgment latency under normal conditions.
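The failure mode described above can be sketched roughly as follows. All names here (`dispatch_print`, `wait_for_ack`, the constant) are illustrative assumptions, not Bambuddy's actual code:

```python
import time

# Assumed constant: the v0.2.3.2 acknowledgment limit this report hits.
ACK_TIMEOUT_S = 15

def dispatch_print(send_command, wait_for_ack, force_reconnect,
                   ack_timeout_s=ACK_TIMEOUT_S):
    """Send the MQTT print command, then treat a missed acknowledgment
    within the timeout window as a dead connection (hypothetical sketch)."""
    send_command("project_file")
    deadline = time.monotonic() + ack_timeout_s
    while time.monotonic() < deadline:
        if wait_for_ack(timeout=0.5):
            return "acked"  # printer confirmed the command in time
    # A P1P that is merely slow (~135 s to start the print) lands here:
    # the forced reconnect mid-initialization is what appears to trigger
    # the 0500-4003 parse error.
    force_reconnect()
    return "reconnected"
```

The point of the sketch: the only signal the verifier consults is the acknowledgment itself, so a slow-but-healthy printer is indistinguishable from a dead session.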
Printing the same file freshly sliced and sent directly via BambuStudio works without issues.
Expected Behavior
The MQTT acknowledgment timeout should be long enough to accommodate the P1P's response latency. A forced MQTT reconnect should not be triggered when the FTP upload was confirmed successfully (STOR 226) and the printer is still processing a valid print command.
Suggested fix: increase the timeout for P1P/P-series printers, make it configurable per printer model, or suppress the forced reconnect when a successful STOR 226 was already received.
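The three suggestions could be combined into a single decision function. This is a hypothetical sketch; the timeout values and every name in it are assumptions, not Bambuddy configuration keys:

```python
# Hypothetical per-model acknowledgment timeouts (values illustrative).
ACK_TIMEOUTS_S = {"P1P": 180, "P1S": 180, "default": 15}

def should_force_reconnect(model, ack_received, stor_226_confirmed, waited_s):
    """Decide whether an unacknowledged print command justifies a forced
    MQTT reconnect, combining the report's three suggestions."""
    if ack_received:
        return False  # the printer answered; nothing to do
    if stor_226_confirmed:
        # Upload already confirmed: the printer is most likely just
        # parsing slowly, so suppress the reconnect entirely.
        return False
    # Otherwise fall back to a per-model (configurable) timeout.
    return waited_s >= ACK_TIMEOUTS_S.get(model, ACK_TIMEOUTS_S["default"])
```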
Steps to Reproduce
Printer Model
P1P
Bambuddy Version
0.2.3.2
SpoolBuddy Version
No response
Printer Firmware Version
01.10.00.00
Installation Method
Docker
Operating System
Linux (Other)
Relevant Logs / Support Package
Screenshots
No response
Additional Context
Checklist
@maziggy commented on GitHub (Apr 28, 2026):
Never heard of a printer that takes so long to start a print....weird.
The 15-second timeout you hit was a v0.2.3.2 limit. Commit `9d041868` already bumped it to 90 seconds in v0.2.4b1-daily.20260427. On its own, 90 seconds still wouldn't have caught your case (your printer took ~135 seconds to start the print). So the bigger fix: the watchdog now checks whether MQTT telemetry is still arriving before it force-reconnects. If `push_status` was received within the last 30 seconds, the session is healthy and the printer is just slow to parse; we leave MQTT alone instead of reconnecting and triggering `0500-4003`. The original half-broken-session protection (#887/#936/#1136) still kicks in when telemetry has actually stopped.
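The telemetry-aware watchdog check described in the comment can be sketched as below. Names and the exact comparison are assumptions based on this comment, not the actual dev-branch code:

```python
# Assumed constants, taken from the numbers quoted in the comment above.
ACK_TIMEOUT_S = 90             # bumped from 15 s by commit 9d041868
TELEMETRY_FRESH_WINDOW_S = 30  # recent push_status means the session is alive

def watchdog_should_reconnect(now, ack_pending_since, last_push_status_at):
    """Force-reconnect only when the ack has timed out AND telemetry has
    actually stopped; a slow-but-chatty printer is left alone."""
    ack_timed_out = (now - ack_pending_since) >= ACK_TIMEOUT_S
    telemetry_fresh = (now - last_push_status_at) < TELEMETRY_FRESH_WINDOW_S
    return ack_timed_out and not telemetry_fresh
```

With this shape, the 135-second P1P case is safe as long as `push_status` keeps arriving, while a genuinely half-broken session (silent telemetry) still gets reconnected.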
Fixed in branch `dev` and available with the next release or daily build. Please let me know if it works for you now.
If you find Bambuddy useful, please consider giving it a ⭐ on GitHub — it helps others discover the project!
@d3ni3 commented on GitHub (Apr 28, 2026):
Okay, thanks for the explanation. I'll wait for the daily build and give it a try.
@d3ni3 commented on GitHub (May 1, 2026):
The issue no longer occurs in the latest daily build :-)
Thanks!