[GH-ISSUE #956] Thousands and thousands of defunct ssl_client orphans... #671
Originally created by @emigrating on GitHub (Nov 21, 2023).
Original GitHub issue: https://github.com/binwiederhier/ntfy/issues/956
🐞 Describe the bug
Not really sure, TBH. I just noticed this when doing my monthly system updates.
💻 Components impacted
Dockerized ntfy server, running behind Traefik, which in turn is behind a Cloudflare proxy (it was a bitch to get running properly at first, but it's been running fine for ages now).
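For context, such a setup would look roughly like the sketch below; the hostname, router names, and labels are illustrative, not the reporter's actual file:

```yaml
# Illustrative sketch only: ntfy behind Traefik's Docker provider, with the
# public hostname proxied through Cloudflare. Names and ports are assumptions.
services:
  ntfy:
    image: binwiederhier/ntfy
    command: serve
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.ntfy.rule=Host(`ntfy.example.com`)"
      - "traefik.http.routers.ntfy.entrypoints=websecure"
      - "traefik.http.routers.ntfy.tls=true"
      - "traefik.http.services.ntfy.loadbalancer.server.port=80"
```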
💡 Screenshots and/or logs
🔮 Additional context
Not really sure what I'm expecting from this post; perhaps someone else has run into something similar, or perhaps I just want to log my issue in case it happens time and time again.
But basically, I have a few ntfy servers running here and there; this has happened on one of them, and I don't think there are any config differences between them.
After the initial headache of getting them to run properly behind Cloudflare's proxied DNS and Traefik, this one was running fine. I then initiated an upgrade a few weeks ago (i.e. `docker compose pull && docker compose down ntfy && docker compose up ntfy -d`) and made sure the service spun up again properly.
I have since left it alone, as it has seemingly been working just fine: my Android client app is still receiving notifications and has been doing so throughout. I only noticed this today while doing the monthly system updates. When I did notice the million or so defunct ssl sessions, I immediately tried the web UI, only to be greeted by a completely blank page, which makes no sense, as the Android client uses the web to connect, no? Either way, the web UI is no longer showing me data, whereas the Android app received a notification as recently as this morning.
I have since rebooted the entire server, as there were kernel upgrades and the like, but...
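One quick way to verify the symptom described above is to count the defunct `ssl_client` entries on the host; a minimal sketch (not from the original report):

```sh
# Count defunct (zombie) ssl_client processes; ps marks zombies with
# "<defunct>", and the [s] trick keeps grep from matching its own entry.
ps aux | grep -c '[s]sl_client.*defunct'
```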
@emigrating commented on GitHub (Nov 21, 2023):
Just logging this for future reference.
@DatDucati commented on GitHub (Jun 8, 2025):
This issue is popping up on my machine.
2025-06-06: 225 threads
2025-06-07: 1655 threads
After 1.5 days it runs into the WARN territory of my CheckMK monitoring, which is very annoying.
docker-compose:
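(The attached compose file did not survive the mirror. Assuming it followed the example shipped with ntfy, the relevant part presumably looked something like this; the hostname is illustrative.)

```yaml
# Sketch of the kind of healthcheck under discussion (not the verbatim file):
# busybox wget fetches the health endpoint over https, which spawns an
# ssl_client helper process inside the container on every check.
services:
  ntfy:
    image: binwiederhier/ntfy
    command: serve
    healthcheck:
      test: ["CMD-SHELL", "wget -q --tries=1 https://ntfy.example.com/v1/health -O - | grep -Eo '\"healthy\"\\s*:\\s*true' || exit 1"]
      interval: 60s
      timeout: 10s
      retries: 3
      start_period: 40s
```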
nginx config:
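(The nginx snippet is also missing; below is a minimal reverse-proxy sketch in the spirit of the ntfy docs. The upstream port and hostname are assumptions.)

```nginx
# Illustrative reverse-proxy sketch for ntfy (not the reporter's actual file).
# HTTP/1.1 and the Upgrade headers are needed so that WebSocket and long-lived
# subscription connections survive the proxy.
server {
    listen 443 ssl;
    server_name ntfy.example.com;

    location / {
        proxy_pass http://127.0.0.1:2586;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 3m;
        proxy_send_timeout 3m;
    }
}
```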
Have you noticed that issue?
I could implement a restart cron job, but that is also not the best solution.
@wunter8 commented on GitHub (Jun 8, 2025):
I'm pretty sure this is the result of the healthcheck command, but I'm not sure why it's staying around and not cleaning itself up. You could maybe try adding `&& exit` at the end of the command, so that if the grep fails it will `exit 1`, and if the grep succeeds it will `exit`.
@DatDucati commented on GitHub (Jun 9, 2025):
I changed the health check to the following:
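(The pasted healthcheck did not survive the mirror either. Going by the suggestion above, it was presumably the original command with an explicit exit appended, along these lines:)

```yaml
# Assumed reconstruction, not the verbatim paste: the same wget|grep check
# with an explicit exit appended so the healthcheck shell terminates either way.
healthcheck:
  test: ["CMD-SHELL", "wget -q --tries=1 https://ntfy.example.com/v1/health -O - | grep -Eo '\"healthy\"\\s*:\\s*true' || exit 1 && exit"]
```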
@wunter8 commented on GitHub (Jun 9, 2025):
At least in OP's case, the leftover process was from `ssl_client`, which would seem to be a result of checking the ntfy health status using https. I'm pretty sure the healthcheck runs inside the container (and not on the Docker host), right? If so, you should be able to just change the URL to `http://localhost/v1/health` to avoid using `ssl_client`. Maybe that will make a difference 🤷‍♂️
@DatDucati commented on GitHub (Jun 10, 2025):
Okay, the change to localhost in the healthcheck does appear to help. Threads have been stable at ~230 for 5 hours now. I'll be monitoring (ha) the behavior in the long run. Thanks for your help! The example docker-compose file should be changed, then, since I got the health check from there :)
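For reference, the shape of the change that stabilized the thread count, assuming the stock example healthcheck with the URL switched to plain http inside the container:

```yaml
# Fetching the health endpoint over plain http inside the container means
# busybox wget never spawns an ssl_client helper, so none can be left behind.
healthcheck:
  test: ["CMD-SHELL", "wget -q --tries=1 http://localhost:80/v1/health -O - | grep -Eo '\"healthy\"\\s*:\\s*true' || exit 1"]
  interval: 60s
  timeout: 10s
  retries: 3
```

With the https variant, each check forked an `ssl_client` helper that was apparently never reaped; over plain http no helper is spawned, so nothing accumulates.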