IcingaDB: add changes queue & RedisConnection enhancements #10619
(force-pushed 200080d to 2099e59)
(force-pushed 2099e59 to b15a5fb)
jschmidt-icinga left a comment:
I haven't tested extensively yet, but from a quick test everything seems to work as expected. I also can't say much about the logic and performance implications of when to send which events to Redis, since I've barely touched that so far. I'll continue to look at this in the coming days and see if I can test it more thoroughly.
lib/icingadb/redisconnection.hpp (outdated)

```cpp
String m_CipherList;
double m_ConnectTimeout;
DebugInfo m_DebugInfo;
ObjectImpl<IcingaDB>::ConstPtr m_IcingaDB; // The IcingaDB object this connection belongs to.
```
This introduces some tight bidirectional coupling between the IcingaDB object (even though it's just the .ti file here) and RedisConnection objects. It would be nice if this could be avoided, though copying all the members doesn't seem elegant either.
I personally think that this is 100% better than the previous version, and I don't see a problem with the bidirectional coupling either, since the RedisConnection class is meant to be used by the IcingaDB class only. Obviously, this isn't perfect either, but if you have a better approach in mind, feel free to suggest it.
I had already thought about it and couldn't come up with anything good to suggest, other than designing it that way from the start. One option would be to make the members string_views, which would be fine here, since the connection object has a longer lifetime than the icingadb object, but that might become a leaky implementation detail when/if other classes want to use this class. Or just suck up the (small) memory cost and leave things as they were.

It's not the worst thing in the world either way. We have this kind of coupling in many places, like (JsonRpc|HttpServer)Connection<->ApiListener, but it is kind of ugly, design-wise.

Regarding RedisConnection being used only by IcingaDB: recently I was briefly looking at caching Perfdata in Redis for persistence in case the target services go offline. It's always nice to at least keep the option open of using a class like that for something else in the future.
> Or just suck up the (small) memory cost and leave things as they were.

It's not just about the memory cost but also about the ridiculously long parameter lists I would have to use. On the master branch there are two places like this, and this PR adds another one. So I decided to simply squash them with this approach:
icinga2/lib/icingadb/icingadb.cpp, lines 84-86 @ 35fdea8
icinga2/lib/icingadb/icingadb.cpp, lines 94-96 @ 35fdea8
> Recently I was briefly looking at caching Perfdata in Redis for persistence in case the target services go offline. It's always nice to at least keep the option open of using a class like that for something else in the future.

If we ever end up using RedisConnection for other purposes, we would definitely have to move it somewhere else, and there will be other design decisions to make at that point. I would consider this tiny bit of coupling acceptable for now; it can be improved further later.
Please see the updated code now. I've introduced a helper struct for all the parameters instead, which is copied only once.
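For illustration, a minimal sketch of what such a parameter struct could look like; every name below is hypothetical and only shows the shape of the idea, not the PR's actual definition:

```cpp
#include <string>
#include <utility>

// Hypothetical parameter bundle; the real struct in the PR may differ.
struct RedisConnectionInfo
{
	std::string Host;
	int Port = 6380;
	std::string Path; // Unix socket path, used instead of Host/Port if set
	std::string Username;
	std::string Password;
	int DbIndex = 0;
	bool TlsEnabled = false;
	std::string CertPath, KeyPath, CaPath, CrlPath, CipherList;
	double ConnectTimeout = 15;
};

// Instead of a dozen constructor parameters, the connection takes one struct
// and stores a single copy:
//
//   RedisConnection(boost::asio::io_context& io, RedisConnectionInfo info)
//       : m_ConnectionInfo(std::move(info)) { /* ... */ }
```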
(force-pushed b15a5fb to c8274a0)
At the moment the integration tests from the Icinga DB repository are the only way to stress-test this PR thoroughly. I've been running them ever since the initial implementation, and I have almost gone crazy over a subtle race condition that only showed up when running those tests.
Can you describe this in a bit more detail so I know what to look out for, i.e. the symptoms of the race condition?
Well, the obvious symptom is that the integration tests (specifically the redundancy group ones) will sporadically fail because Icinga 2 either didn't send a delete command when it was supposed to, or deleted something that it shouldn't have. Generally speaking, if the tests don't succeed then there is a bug in here.
julianbrost left a comment:
Don't consider this a full review, just what I noticed at first glance.
(force-pushed c8274a0 to 3247cb4)
lib/icingadb/redisconnection.cpp (outdated)

```cpp
// Wait up to 5 seconds for ongoing operations to finish.
asio::deadline_timer waiter(m_Strand.context(), boost::posix_time::seconds(5));
waiter.async_wait(yc);

m_QueuedWrites.Set(); // Wake up write loop
m_QueuedReads.Set(); // Wake up read loop
```
Wait up to 5 seconds? Doesn't this just always wait 5 seconds?
And why are the read/write loops only woken after that?
> Wait up to 5 seconds? Doesn't this just always wait 5 seconds?

Yes, it always waits 5 seconds.

> And why are the read/write loops only woken after that?

It doesn't make any difference, but I've moved these above the timer now.
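Based on the snippet above, the reordered shutdown path would look roughly like this (the surrounding coroutine context is assumed):

```cpp
// Wake both loops first so they can observe the shutdown immediately...
m_QueuedWrites.Set(); // Wake up write loop
m_QueuedReads.Set();  // Wake up read loop

// ...then grant ongoing operations a grace period. Note that a plain
// async_wait on a deadline_timer always sleeps the full duration; it does
// not return early once the loops have finished.
asio::deadline_timer waiter(m_Strand.context(), boost::posix_time::seconds(5));
waiter.async_wait(yc);
```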
lib/icingadb/icingadb-worker.cpp (outdated)

```cpp
// Limits the number of pending queries the Rcon can have at any given time to reduce the memory overhead to
// the absolute minimum necessary, since the size of the pending queue items is much smaller than the size
// of the actual Redis queries. Thus, this will slow down the worker thread a bit from generating too many
// Redis queries when the Redis connection is saturated.
constexpr size_t maxPendingQueries = 512;
```
That number should be just big enough that there are enough queries in flight that throughput isn't limited by latency. For any other queries, it's more desirable to stay in the worker queue than in the Redis query queue, because in the former they can still be combined with other operations. My intuition is that the number is bigger than it needs to be.

Alternatively, could it be feasible to check the state of the Redis connection so that we write queries only until the write operations would block, so that we don't have (many) more queries in flight than the send and receive queues can hold?
I've reduced the number to 128 now, but I'll have to think more thoroughly about the alternative approaches you suggested.
> Alternatively, could it be feasible to check the state of the Redis connection so that we write queries only until the write operations would block, so that we don't have (many) more queries in flight than the send and receive queues can hold?

Now, after thinking about it some more, there's simply no way to do that reliably without implementing a full-blown backpressure mechanism, which is not worth the effort here and also way out of scope for this PR. The easiest way to limit the number of in-flight queries is to further limit the number of pending queries if 128 is still too much for you. If that's not sufficient, we can think about implementing a fine-grained backpressure mechanism in a separate PR, but simply limiting the number of pending queries seems sufficient for now.
Actually, I think even a dead-simple backpressure mechanism would magically solve all kinds of queue growth (OOM) problems. (This also applies to perfdata.) Just think about it! If the action causing a backend (e.g. Redis) write is blocked until the write is sent (or, ideally, processed), as with e.g. RedisConnection::Sync, it delays the next action, so a jam doesn't even occur in our software.

OK... your hosts might be checked effectively every 5.5m instead of 5m – 🤷♂️.
> Actually, I think even a dead-simple backpressure mechanism would magically solve all kinds of queue growth (OOM) problems.

There are no OOM problems here. The Redis queue is currently limited to 128 queries, and it's still TBD whether that's too many.

> Just think about it! If the action causing a backend (e.g. Redis) write is blocked (as with e.g. RedisConnection::Sync) until sent (or, ideally, processed), it delays the next action, so a jam doesn't even occur in our software.

And when should the producer trigger the RedisConnection::Sync function? After each processed item? After n processed items? Doing it after each processed item would be overkill and completely unnecessary, so you'd have to figure out when it's appropriate to perform such a blocking call.
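For what it's worth, a dead-simple producer-blocking scheme like the one suggested could look roughly like this; the class and its API are made up for illustration and are not the actual RedisConnection interface:

```cpp
#include <cstddef>
#include <semaphore>
#include <string>
#include <vector>

// Abstract sketch of the "dead-simple backpressure" idea under discussion:
// the producer blocks once maxInFlight queries are pending, and each fully
// processed reply frees a slot again.
class BoundedRedisWriter
{
public:
	explicit BoundedRedisWriter(std::ptrdiff_t maxInFlight) : m_Slots(maxInFlight) {}

	// Called by the producer (e.g. the worker thread).
	void Enqueue(std::vector<std::string> query)
	{
		m_Slots.acquire(); // Blocks here once maxInFlight queries are pending.
		// ... hand the query to the async write loop ...
		(void)query;
	}

	// Called by the read loop once the corresponding reply was processed.
	void OnReplyProcessed()
	{
		m_Slots.release(); // Frees a slot, unblocking a waiting producer.
	}

private:
	std::counting_semaphore<> m_Slots;
};
```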
(force-pushed 1916e7a to ac841b3)
(force-pushed ac841b3 to a247654)
```diff
-if (checkable->GetEnableActiveChecks()) {
-	m_Rcon->FireAndForgetQuery(
+if (checkable->GetEnableActiveChecks() && checkable->IsActive()) {
+	m_RconWorker->FireAndForgetQuery(
 		{
 			"ZADD",
 			dynamic_pointer_cast<Service>(checkable) ? "icinga:nextupdate:service" : "icinga:nextupdate:host",
 			Convert::ToString(checkable->GetNextUpdate()),
 			GetObjectIdentifier(checkable)
 		},
 		Prio::CheckResult
 	);
-} else {
-	m_Rcon->FireAndForgetQuery(
+} else if (!checkable->GetEnableActiveChecks() || checkable->GetExtension("ConfigObjectDeleted")) {
+	m_RconWorker->FireAndForgetQuery(
 		{
 			"ZREM",
 			dynamic_pointer_cast<Service>(checkable) ? "icinga:nextupdate:service" : "icinga:nextupdate:host",
 			GetObjectIdentifier(checkable)
 		},
 		Prio::CheckResult
 	);
 }
```
Can you please explain why you changed the conditions here like this, in particular why the else branch is no longer unconditional. Is there actually a situation where this function is supposed to do nothing, and what is it? And why is it even run/enqueued then?
Previously, as long as GetEnableActiveChecks returned true, it always sent a ZADD command even if the object had just been deleted, which resulted in zombie icinga:nextupdate:* entries until the next full Redis dump. Now, I've changed it to additionally consider the object's liveness, so no ZADD commands are sent if the object has already been deactivated. Consequently, the else branch also shouldn't send any ZREM commands just because the object isn't active anymore, hence the additional checks.
(force-pushed a247654 to 6695294)
Rebased + cherry-picked commit dc9d40f + addressed some of the still open discussions.
I've pushed an updated version, ad85daf. The changes (…)

Please have a look and feel free to incorporate it into the PR if you're happy with it.
(force-pushed 51273c2 to adfbce6)
```cpp
/**
 * The primary Redis connection used to send history and heartbeat queries.
 *
 * This connection is used exclusively for sending history and heartbeat queries to Redis. It ensures that
 * history and heartbeat operations do not interfere with other Redis operations. Also, it is the leader for
 * all other Redis connections including @c m_RconWorker, and is the only source of truth for all IcingaDB Redis
 * related connection statistics.
 *
 * Note: This will still be shared with the icingadb check command, as that command also sends
 * only XREAD queries which are similar in nature to history/heartbeat queries.
 */
RedisConnection::Ptr m_Rcon;
```
The comment doesn't match the current implementation, as heartbeats are sent to m_RconWorker:

icinga2/lib/icingadb/icingadb.cpp, lines 175-185 @ adfbce6

(Yes, it says icinga:stats and not explicitly heartbeat, but that's what Icinga DB considers to be the heartbeat here.)

Though I'd probably switch that write over to m_Rcon, i.e. change the code, not the comment.
That was an oversight, otherwise I'd also have changed the check on m_Rcon at the start of the function. Done.
```cpp
/**
 * A Redis connection for general queries.
 *
 * This connection is used for all non-history and non-heartbeat related queries to Redis.
 * It is a child of @c m_Rcon, meaning it forwards all its connection stats to @c m_Rcon as well.
 */
RedisConnection::Ptr m_RconWorker;
```
Wouldn't "config and state updates" better describe what that connection is used for than just "general queries"?
lib/icingadb/icingadb-worker.cpp (outdated)

```cpp
if (std::holds_alternative<queue::RelationsDeletionItem>(it->Item)) {
	// We don't know whether the previous items are related to this deletion item or not,
	// thus we can't just process this right now when there are older items in the queue.
	// Otherwise, we might delete something that is going to be updated/created.
	break;
}
```
I know that we talked about this in person before and you also reminded me of it recently, but I neither found a mention in the PR, nor does it look like this has already been addressed: shouldn't there be something similar for config objects that are deleted? Those shouldn't be processed out of order for similar reasons. Deleting and recreating an object with the same name will give different pointers, but the Icinga DB object ID stays the same, so an old delete could possibly delete a new object.
> but neither did I find a mention in the PR, nor does it look like this has already been addressed

I didn't address it yet because you didn't give me a clear answer when I explicitly asked whether I should add a similar check for that case as well. Instead, you sounded like you weren't even satisfied with the existing check at all, so I did nothing.

Btw, now that there's also another case where we acquire an olock, I'm not sure whether it makes sense to skip any items at all. So I've changed the implementation to never process items out of order, and instead to always wait for some time if we can't acquire the olock immediately. See 006e380 for the details.
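A rough sketch of that retry behaviour; the generic mutex and the try-lock API are illustrative stand-ins, not Icinga 2's actual ObjectLock code:

```cpp
#include <chrono>
#include <mutex>
#include <thread>

// "Never skip, just retry": only ever process the head of the queue, and if
// its olock can't be acquired, back off briefly instead of peeking at younger
// queue items (which risks out-of-order create/delete processing).
template<typename Mutex, typename Process>
void ProcessHeadInOrder(Mutex& objectMutex, Process&& process)
{
	for (;;) {
		if (std::unique_lock<Mutex> lock(objectMutex, std::try_to_lock); lock.owns_lock()) {
			process(); // The head of the queue is processed strictly in order.
			return;
		}

		std::this_thread::sleep_for(std::chrono::milliseconds(10));
	}
}
```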
lib/icingadb/icingadb-worker.cpp (outdated)

```cpp
if (auto age = now - it->EnqueueTime; 1000ms > age) {
	if (it == seqView.begin()) {
		retryAfter = 1000ms - age;
	}
	break;
}
```
Actually, a second is quite a long time on a computer, so I'd consider making this interval shorter. After all, this ends up being a delay showing up everywhere. My intuition says that something in the low hundreds of milliseconds should already be plenty, like 200ms.
(force-pushed adfbce6 to 0ba83f5)
Rebased and addressed all the requested changes.
…al sync: Don't send nextupdate as part of the initial sync. Instead, enqueue them to the background worker to be sent after the initial dump is done.
This reverts commit f6f7d9b and all its other new users.
As opposed to the previous version, which used a complex data structure to correctly manage the query priorities, this version uses two separate queues for the high and normal priority writes. All high priority writes are processed in FIFO order but overtake all queries from the normal priority queue. The latter queue is only processed when the high priority queue is empty.
… update to Icinga DB" This reverts commit e9b8c67.
We can't drop the `OnNextCheckUpdated` signal entirely yet, as IDO still relies on it.
Previously, the checkable was locked while processing all the dependency registration stuff, so the worker thread should also do the same to avoid any potential race conditions.
This commit restructures the queue items so that each one now has a method `GetQueueLookupKey()` that is used to derive which elements of the queue are considered to be equal. For this, there is a key extractor for the `multi_index_container` that takes the `variant` from the queue item, calls that method on it, and puts the result in a second variant type. The types in that variant type are automatically deduced from the return types of the individual methods.
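A condensed sketch of that commit's idea, assuming Boost.MultiIndex; the item types are simplified stand-ins, and unlike the real code, the key variant here is written out by hand rather than deduced automatically:

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <variant>

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/hashed_index.hpp>
#include <boost/multi_index/sequenced_index.hpp>

// Simplified stand-ins for the real queue item types.
struct ConfigUpdateItem
{
	std::string ObjectId;
	std::string GetQueueLookupKey() const { return ObjectId; }
};

struct RelationsDeletionItem
{
	std::uint64_t RelationId;
	std::uint64_t GetQueueLookupKey() const { return RelationId; }
};

using QueueItem = std::variant<ConfigUpdateItem, RelationsDeletionItem>;

// Spelled out by hand here; the commit deduces these alternatives
// automatically from the GetQueueLookupKey() return types.
using LookupKey = std::variant<std::string, std::uint64_t>;

struct QueueEntry
{
	QueueItem Item;
};

// Key extractor: visit the item variant, call GetQueueLookupKey() on
// whichever alternative it holds, and wrap the result in LookupKey.
struct LookupKeyExtractor
{
	using result_type = LookupKey;

	result_type operator()(const QueueEntry& entry) const
	{
		return std::visit(
			[](const auto& item) -> LookupKey { return item.GetQueueLookupKey(); },
			entry.Item);
	}
};

using ChangesQueue = boost::multi_index_container<
	QueueEntry,
	boost::multi_index::indexed_by<
		boost::multi_index::sequenced<>, // FIFO processing order
		boost::multi_index::hashed_unique<LookupKeyExtractor, std::hash<LookupKey>> // equality by lookup key
	>
>;
```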
Now, if we can't acquire the olock on the config object, we immediately sleep for 10ms instead of trying to process the next item in the queue. This is way easier than having to deal with the potential out-of-order processing of items in the queue in both directions, i.e., we don't want to send delete events for objects whose create events haven't been processed yet, and vice versa.
(force-pushed 0ba83f5 to 3d7e0c4)
Just fixed a minor style issue and a doc comment. See the diff by clicking on the compare button.
This pull request introduces a new runtime changes queue to IcingaDB, along with several enhancements to the RedisConnection class. These changes aim to improve the memory footprint and reduce the number of duplicate (and thus superfluous) Redis queries. The problem of duplicate queries has been a long-standing issue in IcingaDB, and some hacky workarounds have been implemented in the past to mitigate it. This PR takes a more systematic approach, as Julian described in #10186, to address the root cause. I will try to summarize the key changes below.

Changes Queue
A new changes queue has been introduced to IcingaDB, which allows for batching of all runtime updates for a given object in an efficient manner. The changes queue works as outlined below:
Before going into more detail, we should clarify what we mean by "changes". In this context, changes refer to any event that requires a Redis write operation. This includes, but is not limited to:

- state changes (e.g. `OnStateChange`)
- config object updates, creations, and deletions
- next check (`icinga:nextupdate:*`) updates
- dependency/relations changes
This new queue does not cover any history related writes; those types of events follow a different path and are not affected by this change. The focus here is solely on runtime object changes that affect the normal, non-historical operation of IcingaDB. Consequently, history and heartbeat related writes use their own dedicated Redis connection and do not interfere with any of the changes described here.
Now, here is how the changes queue operates:
When an object is modified, instead of immediately writing the changes to Redis, the object pointer is pushed onto the queue with a corresponding flag indicating the type of change required. As long as the object remains in the queue, any subsequent Redis write requests concerning that object are merged into the existing queued dirty bits. This means that no matter how many times e.g. an `OnStateChange` is triggered for a given object, only a single write operation will be performed when it is finally popped from the queue. Do note that an object can have multiple dirty bits set, so if both its attributes and state are modified while in the queue, a state and a config update will be sent when it is processed (see the sketch at the end of this section).

The consumer of the changes queue is a new background worker that pops objects from the queue and performs the necessary Redis write operations. This worker doesn't immediately process objects as they are enqueued; instead, it waits for a short period (currently set to `1000ms`) to allow for more changes to accumulate and be merged. After this wait period, the worker serializes the queued objects according to their dirty bits and sends the appropriate Redis commands. There's also another restriction in place: when the used `RedisConnection` reaches a certain number of pending commands (currently set to `512`), the worker won't dequeue any more objects from the changes queue until the pending commands drop below that threshold. This ensures that we don't unnecessarily waste memory by serializing too many objects in advance if the Redis server isn't able to keep up.

To accommodate this new changes queue, quite a lot of existing code has been refactored so that we no longer perform immediate writes to Redis. Additionally, the `RedisConnection` class has been enhanced to support this new workflow.
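To make the merging described above concrete, here is a minimal sketch of the dirty-bit idea; the flag names and the map are illustrative only, not the PR's actual data structures:

```cpp
#include <cstdint>
#include <unordered_map>

// Each queue entry carries a set of dirty bits describing what to send.
enum class DirtyBits : std::uint8_t
{
	None         = 0,
	StateUpdate  = 1 << 0,
	ConfigUpdate = 1 << 1,
	NextUpdate   = 1 << 2,
};

inline DirtyBits operator|(DirtyBits a, DirtyBits b)
{
	return static_cast<DirtyBits>(static_cast<std::uint8_t>(a) | static_cast<std::uint8_t>(b));
}

// Keyed by object; enqueueing the same object again just ORs in more bits,
// so any number of signals collapses into a single pending entry.
std::unordered_map<const void*, DirtyBits> pending;

void Enqueue(const void* object, DirtyBits change)
{
	auto& bits = pending[object]; // value-initialized to None on first insert
	bits = bits | change;
}
```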
RedisConnection Enhancements

Several enhancements have been made to the `RedisConnection` class to better support the changes queue and improve overall efficiency.

As a consequence of the changes queue, the internal Redis query queue has been significantly simplified. As opposed to the previous version, which used a complex data structure to correctly manage the query priorities, this version uses a
`std::deque` for the write queue and a simple mechanism to insert high-priority items at the front. By default, items are processed in FIFO order, but if someone wants to immediately send a high-priority query, it will be placed at the front of the queue (remember, `std::deque` allows efficient insertion at both ends) and will overtake any normal-priority items already queued. However, if there are already high-priority items in the queue, the new high-priority item will be inserted after them but still before any normal-priority items, ensuring that all high-priority items are processed in the order they were enqueued. A rough sketch of this insertion logic follows below.
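This sketch assumes a plain `std::deque` plus a counter of high-priority items at the front; `WriteQueueItem` is left opaque here and all names are illustrative:

```cpp
#include <cstddef>
#include <deque>
#include <utility>

struct WriteQueueItem { /* opaque for this sketch */ };

std::deque<WriteQueueItem> writeQueue;
std::size_t highPrioCount = 0; // How many items at the front are high priority.

void Push(WriteQueueItem item, bool highPriority)
{
	if (highPriority) {
		// Insert after any existing high-priority items (keeping their FIFO
		// order), but before all normal-priority ones.
		writeQueue.insert(writeQueue.begin() + highPrioCount, std::move(item));
		++highPrioCount;
	} else {
		writeQueue.push_back(std::move(item));
	}
}

WriteQueueItem Pop() // Precondition: the queue is not empty.
{
	WriteQueueItem item = std::move(writeQueue.front());
	writeQueue.pop_front();
	if (highPrioCount > 0) {
		--highPrioCount;
	}
	return item;
}
```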
WriteQueueItemtype by replacing the previously used ridiculously verbose query types by a more compactstd::variantbased approach. This not only reduces memory usage but also makes clearer that each item represents exactly one of a defined set of query types and nothing else.Now, IcingaDB is subscribed to the
OnNextCheckChangedsignal and not the dummyOnNextCheckUpdatedsignal anymore. Though, that dummy signal is still there since the IDO relies on it. The only behavioural change in IcingaDB as opposed to before is that the oldest pending Redis query is determined only on the primary Redis connection (the one used for history and heartbeats). If you guys think this is a problem, I can look into a way to have IcingaDB consider all connections when determining the oldest pending query.resolves #10186