mirror of
https://github.com/binwiederhier/ntfy.git
synced 2026-05-09 08:26:00 +02:00
[PR #543] [CLOSED] Accumulate incoming messages in a buffered channel #1329
📋 Pull Request Information
Original PR: https://github.com/binwiederhier/ntfy/pull/543
Author: @nicois
Created: 12/12/2022
Status: ❌ Closed
Base: main ← Head: nicois/use-buffered-channels-for-incoming-messages

📝 Commits (2)
09e8fb8 Accumulate incoming messages in a buffered channel
9f2311b Avoid blocking incoming messages.

📊 Changes
7 files changed (+151 additions, -171 deletions)
📝 server/config.go (+1 -1)
📝 server/message_cache.go (+71 -24)
📝 server/message_cache_test.go (+75 -1)
📝 server/server_test.go (+1 -0)
📝 test/server.go (+3 -1)
➖ util/batching_queue.go (+0 -86)
➖ util/batching_queue_test.go (+0 -58)

📄 Description
Instead of using a deque, store incoming messages in a native buffered channel if buffering is enabled.
In addition, modify the batching algorithm so the enforced delay between consecutive addMessages invocations is applied after all pending messages are processed. This acts as a "cooldown" rather than a "warmup". It avoids the need for more complex timing logic to dispatch batches, removes latency when messages are received infrequently, and natively blocks the goroutine until messages arrive. Because the message processing loop always performs a blocking read first, it is just as appropriate for low-throughput environments as for high-throughput ones.
The default value of batchSize has been changed to 10, with a zero cooldown. This means that when messages arrive faster than they can be inserted into SQLite, they are automatically batched in groups of up to 10. It also means requests complete more quickly, slowing down only when the buffer fills, at which point they block for a short period, similar to the legacy behaviour.
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.