mirror of https://github.com/maziggy/bambuddy.git synced 2026-05-09 08:25:54 +02:00

[GH-ISSUE #481] [Bug]: Queue error 500 #302

Closed
opened 2026-05-07 00:08:38 +02:00 by BreizhHardware · 16 comments

Originally created by @agroezinger on GitHub (Feb 21, 2026).
Original GitHub issue: https://github.com/maziggy/bambuddy/issues/481

Originally assigned to: @maziggy on GitHub.

Bug Description

After updating to 0.2.1b today, my queue display stopped working. Same with version 0.2.1b2.

Image

If you need any more information for debugging, please tell me what exactly; the error message is not helping a lot.
I already cleared the queue and re-added the items. If I filter for a specific printer or status, the information is displayed.

Image

Expected Behavior

show all

Steps to Reproduce

...

Printer Model

Multiple printers

Bambuddy Version

0.2.1b2

Printer Firmware Version

...

Installation Method

Docker

Operating System

Linux (Ubuntu/Debian)

Relevant Logs / Support Package


Screenshots

No response

Additional Context

No response

Checklist

  • I have searched existing issues to ensure this bug hasn't already been reported
  • I am using the latest version of Bambuddy
  • My printer is set to LAN Only mode
BreizhHardware closed this issue and added the bug label (2026-05-07 00:08:38 +02:00).

@maziggy commented on GitHub (Feb 21, 2026):

Sorry, cannot reproduce. Please upload a support package -> https://wiki.bambuddy.cool/features/system-info/?h=debug#enable-debug-logging


@agroezinger commented on GitHub (Feb 21, 2026):

bambuddy-support-20260221-160209.zip


@maziggy commented on GitHub (Feb 21, 2026):

Looking at your logs, I found the root cause: your container is experiencing SQLite database lock contention, which causes the queue endpoint to return 500.

The chain of events in your logs:

  1. When you added a model-based ("Any [Model]") queue item, the scheduler started crash-looping every 30 seconds with a '>=' not supported between instances of 'str' and 'int' error
  2. This crash-loop caused SQLite connection pool exhaustion (98 leaked connections in your logs)
  3. With the pool exhausted, the queue endpoint can't acquire a database connection → 500 error
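
The crash-loop in step 1 boils down to an ordinary Python type error: comparing a str to an int with `>=` raises TypeError. A minimal sketch of that failure mode, independent of Bambuddy's actual scheduler code (the function name and the defensive fix shown here are illustrative assumptions, not the real patch):

```python
# Hypothetical scheduler-style check: if the slot count arrives as a
# string (e.g. from JSON or a TEXT column), the comparison crashes with
# the exact error class seen in the logs.
def slots_available(free_slots, required):
    return free_slots >= required

try:
    slots_available("3", 1)
except TypeError as exc:
    print(f"crash: {exc}")  # '>=' not supported between instances of 'str' and 'int'

# Defensive variant: normalize both operands before comparing.
def slots_available_safe(free_slots, required):
    return int(free_slots) >= int(required)

print(slots_available_safe("3", 1))  # True
```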

The good news: the specific scheduler bug that triggers this was already fixed in 0.2.1b2 (commit 732ab8c). However, your logs still show the old error, which suggests your container may be running a stale image layer despite reporting version 0.2.1b2.

Can you try:

  1. Force pull the image: docker pull ghcr.io/maziggy/bambuddy:0.2.1b2
  2. Recreate the container (not just restart): docker compose down && docker compose up -d

This should resolve both the scheduler crash-loop and the queue 500. The database lock / pool issues will clear up on their own once the crash-loop stops.

Let me know if the issue persists after a clean redeploy!


If you find Bambuddy useful, please consider giving it a ⭐ on GitHub (https://github.com/bambuman/bambuddy) to help others discover the project!


@agroezinger commented on GitHub (Feb 21, 2026):

Hi,
If I'm not mistaken, you're from Germany, right?
I did the following:

  • docker stop,
  • docker rm,
  • re-pulled the image and restarted the container via the Unraid template.

Still getting the database error :/

@maziggy commented on GitHub (Feb 21, 2026):

Yes.

What does "reloaded the image" mean? I don't know Unraid, but I could imagine the Unraid template pins a specific version, and latest doesn't match because it's a beta version.

@agroezinger commented on GitHub (Feb 21, 2026):

Yes, I did; I explicitly specified the beta version. In principle, in the background this shouldn't do anything other than: docker compose up -d

Here I can see that the correct image is being used:

Image Image

@maziggy commented on GitHub (Feb 21, 2026):

Upload new debug logs please.


@agroezinger commented on GitHub (Feb 21, 2026):

bambuddy-support-20260221-160209.zip


@maziggy commented on GitHub (Feb 22, 2026):

These are the same logs you sent before. Furthermore, your container was not updated to the latest beta version.


@agroezinger commented on GitHub (Feb 22, 2026):

You are right, I've selected the wrong zip file ... sorry

bambuddy-support-20260221-225739.zip


@maziggy commented on GitHub (Feb 22, 2026):

Still running on old Docker image.


@agroezinger commented on GitHub (Feb 22, 2026):

Where in the log file can I see that? Then I can continue debugging on my own. Thanks.


@maziggy commented on GitHub (Feb 22, 2026):

As long as you see

  "developer_mode": null,

in support.json, it is the old version.


@agroezinger commented on GitHub (Feb 22, 2026):

I have now set up Bambuddy completely from scratch: deleted the Docker container, deleted the image, deleted appdata (where the data is stored), and restored from a backup (which I had previously created in Bambuddy). The error persists. It looks like I either have to rebuild everything from zero, or specifically delete whatever is corrupt in the database.

I opened the SQLite database with DBeaver and ran a PRAGMA integrity_check; Result => OK.
So the database actually shouldn't have any errors :/
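
Note that PRAGMA integrity_check only verifies the physical file structure; a stored value the application schema no longer accepts still passes it. A sqlite3 sketch of both checks, run against an in-memory stand-in (the table and column names are assumptions, not Bambuddy's real schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Stand-in for the real queue table; names are hypothetical.
con.execute("CREATE TABLE print_queue (id INTEGER PRIMARY KEY, status TEXT)")
con.executemany(
    "INSERT INTO print_queue (status) VALUES (?)",
    [("pending",), ("completed",), ("aborted",)],
)

# 1. Physical integrity: passes even though 'aborted' is invalid to the app.
(result,) = con.execute("PRAGMA integrity_check").fetchone()
print(result)  # ok

# 2. Logical integrity: find status values the API's literal set rejects.
allowed = ("pending", "printing", "completed", "failed", "skipped", "cancelled")
placeholders = ",".join("?" * len(allowed))
rows = con.execute(
    f"SELECT id, status FROM print_queue WHERE status NOT IN ({placeholders})",
    allowed,
).fetchall()
print(rows)  # [(3, 'aborted')]
```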

I'm still getting the error 500:

INFO:     192.168.10.10:51769 - "GET /api/v1/queue/ HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.13/site-packages/starlette/middleware/base.py", line 151, in call_next
    message = await recv_stream.receive()
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/anyio/streams/memory.py", line 132, in receive
    raise EndOfStream from None
anyio.EndOfStream

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.13/site-packages/uvicorn/protocols/http/httptools_impl.py", line 416, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        self.scope, self.receive, self.send
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/local/lib/python3.13/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/fastapi/applications.py", line 1134, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.13/site-packages/starlette/applications.py", line 107, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.13/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/usr/local/lib/python3.13/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.13/site-packages/starlette/middleware/base.py", line 191, in __call__
    with recv_stream, send_stream, collapse_excgroups():
                                   ~~~~~~~~~~~~~~~~~~^^
  File "/usr/local/lib/python3.13/contextlib.py", line 162, in __exit__
    self.gen.throw(value)
    ~~~~~~~~~~~~~~^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/starlette/_utils.py", line 87, in collapse_excgroups
    raise exc
  File "/usr/local/lib/python3.13/site-packages/starlette/middleware/base.py", line 193, in __call__
    response = await self.dispatch_func(request, call_next)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/app/main.py", line 3408, in auth_middleware
    return await call_next(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/starlette/middleware/base.py", line 168, in call_next
    raise app_exc from app_exc.__cause__ or app_exc.__context__
  File "/app/backend/app/main.py", line 3405, in auth_middleware
    return await call_next(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/starlette/middleware/base.py", line 168, in call_next
    raise app_exc from app_exc.__cause__ or app_exc.__context__
  File "/usr/local/lib/python3.13/site-packages/starlette/middleware/base.py", line 144, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "/usr/local/lib/python3.13/site-packages/starlette/middleware/exceptions.py", line 63, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/usr/local/lib/python3.13/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.13/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.13/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.13/site-packages/starlette/routing.py", line 716, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.13/site-packages/starlette/routing.py", line 736, in app
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.13/site-packages/starlette/routing.py", line 290, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.13/site-packages/fastapi/routing.py", line 119, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/usr/local/lib/python3.13/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.13/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.13/site-packages/fastapi/routing.py", line 105, in app
    response = await f(request)
               ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/fastapi/routing.py", line 424, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<3 lines>...
    )
    ^
  File "/usr/local/lib/python3.13/site-packages/fastapi/routing.py", line 312, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/app/api/routes/print_queue.py", line 304, in list_queue
    return [_enrich_response(item) for item in items]
            ~~~~~~~~~~~~~~~~^^^^^^
  File "/app/backend/app/api/routes/print_queue.py", line 204, in _enrich_response
    response = PrintQueueItemResponse(**item_dict)
  File "/usr/local/lib/python3.13/site-packages/pydantic/main.py", line 250, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for PrintQueueItemResponse
status
  Input should be 'pending', 'printing', 'completed', 'failed', 'skipped' or 'cancelled' [type=literal_error, input_value='aborted', input_type=str]
    For further information visit https://errors.pydantic.dev/2.12/v/literal_error

This line makes me suspicious, though:

 Input should be 'pending', 'printing', 'completed', 'failed', 'skipped' or 'cancelled' [type=literal_error, input_value='aborted', input_type=str]

Where does the "aborted" come from?

My support-info.json also still says developer-mode: null ... but I don't know what else I could do differently when pulling the Docker image ...


@agroezinger commented on GitHub (Feb 23, 2026):

My error definitely travels with the backup!! I just set up a fresh Ubuntu Server installation on a test system, installed Docker, and installed the latest beta via docker compose. Then I restored my backup: same error.

Do you have any other ideas?


@maziggy commented on GitHub (Feb 24, 2026):

Then the database appears to be corrupt.
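
If the file itself passes the integrity check, the "corruption" may just be the single legacy 'aborted' status from the traceback above, which the current API schema no longer accepts. A hedged repair sketch, shown against an in-memory database (the table/column names and the 'aborted'-to-'cancelled' mapping are guesses, so back up the real database and verify the schema first):

```python
import sqlite3

# In practice, open the real database file instead of :memory:.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE print_queue (id INTEGER PRIMARY KEY, status TEXT)")
con.execute("INSERT INTO print_queue (status) VALUES ('aborted')")

# Rewrite the legacy value to one the API's literal set accepts;
# 'cancelled' is a guess at the closest semantic match.
con.execute(
    "UPDATE print_queue SET status = 'cancelled' WHERE status = 'aborted'"
)
con.commit()

print(con.execute("SELECT status FROM print_queue").fetchall())  # [('cancelled',)]
```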
