Write checkable create & delete sla history events #566
I just had an idea how we could call that type of SLA history, after we didn't really come up with a good name for it initially: lifecycle
Al2Klimov
left a comment
Please don’t force-push for now.
Now that I'm taking a fresh look at this after some time, I'm wondering: what was the reason for updating rows instead of inserting separate rows for create and delete events?
I've updated the PR description with a summary of what queries are performed when (section "Implementation").

This also gave sort of an answer to this question: having that database structure, with the delete time being part of the primary key, prevents duplicate rows from being inserted for runtime updates.
Admittedly not the most beautiful concept, but it solves problems we'd have with 1 event = 1 row 👍
While looking at this after over a year (🙈), I had an idea: what if, during the schema upgrade, we just insert rows for all currently existing hosts and services into the new table? Shouldn't that allow us to get rid of two special cases (emphasis by me)?
How do you plan to perform this hashing with pure SQL? (see `icingadb/pkg/icingadb/sla_lifecycle.go`, line 84 at 5a2396e)
That's introduced by this PR, so it could also be done differently if this makes things easier. Which brings me right to my next ideas, which I have plenty of. 😅 Please read them, ask any question that may come up and let me know if you think that the approach I'm suggesting in the following may indeed significantly simplify this PR.

Doesn't much of the new code for selecting the host ID for a service primarily exist just because of the choice to compute the …

Going one more step further: host and service IDs already include the environment ID as well, so even simply using:

```sql
insert into sla_lifecycle (id, environment_id, host_id, create_time, delete_time)
select id, environment_id, id, UNIX_TIMESTAMP() * 1000, 0
from host;
```

That wasn't the last step/idea of mine, this could be taken even further: one could add a where clause so that instead of all hosts, only those that don't yet have a row in `sla_lifecycle` are inserted:

```sql
insert into sla_lifecycle (id, environment_id, host_id, create_time, delete_time)
select id, environment_id, id, UNIX_TIMESTAMP() * 1000, 0
from host
where id not in (select id from sla_lifecycle where service_id is null and delete_time = 0);
```

So this gives a simple query that could just be executed each time after the initial delta sync. That would take care of created objects; deleted objects could be handled similarly by updating all open `sla_lifecycle` rows whose ID no longer appears in `host`:

```sql
update sla_lifecycle
set delete_time = UNIX_TIMESTAMP() * 1000
where service_id is null
and delete_time = 0
and id not in (select id from host);
```

So with 4 queries in total², this should take care of all the updates needed during the initial delta sync, leaving only the runtime updates. Those should be possible to reduce to a single insert/update each, as potentially missing rows were already created by the previous delta sync.
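As a rough illustration of the proposal above, the effect of the two host queries can be simulated on an in-memory model. This is only a sketch with assumed, simplified types (the real implementation runs SQL against the database, and services are handled analogously):

```go
package main

import "fmt"

// row models an sla_lifecycle row for a host (service_id omitted for brevity).
type row struct {
	id         string
	createTime int64
	deleteTime int64 // 0 means "not deleted yet"
}

// syncHosts mimics the two proposed delta-sync queries for hosts:
// insert a create row for every host without an open lifecycle row,
// then close the row of every lifecycle entry whose host no longer exists.
func syncHosts(hosts map[string]bool, lifecycle []row, now int64) []row {
	open := map[string]bool{}
	for _, r := range lifecycle {
		if r.deleteTime == 0 {
			open[r.id] = true
		}
	}
	// "insert ... where id not in (select id from sla_lifecycle ...)"
	for id := range hosts {
		if !open[id] {
			lifecycle = append(lifecycle, row{id: id, createTime: now})
		}
	}
	// "update sla_lifecycle set delete_time = ... where id not in (select id from host)"
	for i, r := range lifecycle {
		if r.deleteTime == 0 && !hosts[r.id] {
			lifecycle[i].deleteTime = now
		}
	}
	return lifecycle
}

func main() {
	// "a" still exists, "b" is new, "c" was deleted from the config.
	hosts := map[string]bool{"a": true, "b": true}
	lifecycle := []row{{id: "a", createTime: 1}, {id: "c", createTime: 1}}
	for _, r := range syncHosts(hosts, lifecycle, 100) {
		fmt.Printf("%s create=%d delete=%d\n", r.id, r.createTime, r.deleteTime)
	}
}
```

Running the sync is idempotent: a second call with the same inputs neither inserts nor closes anything, which is what makes it safe to execute after every initial delta sync.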
I have reworked this as suggested by Julian in #566 (comment) and also updated the PR description accordingly.

The proposals from #566 (comment) have now been fully implemented, and of course the PR description is also updated again.
oxzi
left a comment
First of all, thanks for writing the documentation. It makes it easier for both users and me to understand the motivation for this change.
I just made a first pass through it and added some comments.
oxzi
left a comment
I have played with this PR in a testing environment and have not encountered any errors so far.
However, I have a more general question: How will the `sla_lifecycle` table be used later on? The `get_sla_ok_percent` function does not honor it and, unless I am missing something, cannot even do so for deleted checkable objects.
The schema excerpt under discussion:

```sql
    delete_time biguint NOT NULL DEFAULT 0,
    ...
    CONSTRAINT pk_sla_lifecycle PRIMARY KEY (id, delete_time)
);
```
I was thinking about creating some indices for the queries within the SyncCheckablesSlaLifecycle function.
In a huge environment, those queries may take some time. However, I cannot say how huge they must become to make those queries slow or if Icinga 2 will be the performance bottleneck before.
I have not tested this or anything, just wanted to put the idea out there.
> or if Icinga 2 will be the performance bottleneck before
If there's an issue, you could probably trigger it without Icinga 2 becoming an issue. You can create and delete many hosts and services over time, keeping the active config in Icinga 2 at a constant size but growing this table.
Valid point, I had not considered cloud-native, fast-moving infrastructures with a multitude of short-lived VMs.
That's right, and it will never make use of it either! As we discussed last time, we will only collect the data for the time being, until we get the time to finalise Icinga/icingadb-web#710, which will make use of this newly introduced table, delegate the SLA generation from the SQL procedure to PHP code, and mark the …
I've pushed the new transparent screenshots and also changed the main function a bit, so I won't be pushing anything from now on unless someone requests a change.
> the database without any interpretation. In order to generate and visualise SLA reports of specific hosts and services based on the accumulated events over time, [Icinga Reporting](https://icinga.com/docs/icinga-reporting/latest/doc/02-Installation/) is the optimal complement, facilitating comprehensive SLA report generation within a specific timeframe.
I would change the tone of the first two paragraphs to make them sound more like a technical document and less like a PR post. For example, the "legally binding" part could be dropped, as that need not be the case. Furthermore, Icinga Reporting is (currently) not the "optimal complement", but the only complementing UI.

Please don't take this the wrong way. If I read this as a technical user (and this is technical documentation), I would have the feeling that somebody wants to sell me something, not that I am going to find the details here.
> | Previous state | New state |
> | --- | --- |
> | PENDING (no check result yet) | OK |
> | OK | Warning |
> | Warning | Critical |
> | Critical | OK |
As discussed earlier, please include the states' magic numbers used within the database. This eases debugging and understanding when using this technical document together with one's own setup.
```
// clearly identifies its state. Consequently, such events become irrelevant for the purposes of calculating
// the SLA and we must exclude the duration of that PENDING state from the total time.
total_time = total_time - (event.event_time - last_event_time)
else if (previous_hard_state is greater than OK/UP
```
Without the mapping from states to their numeric representation, this is quite cryptic. Thus, either introduce that mapping beforehand or make it more explicit, e.g. by listing all acceptable `previous_hard_state` values.
I'm not sure if I understand you correctly, but I don't quite agree that this is cryptic; rather, it's the exact opposite. These are just normal Icinga 2 host/service states that have their own documentation, except the PENDING state. Apart from that, I have actually listed the possible values that the `previous_hard_state` column represents, and you want me to add another listing here?
Please try to put yourself in a reader's position, who knows a bit about Icinga, but definitely no implementation details. When reading the comparison in this line, it may be unclear what "greater than" means in this context, as no numerical representation or hierarchy of states was introduced.
> Apart from that, I have actually listed the possible values that the `previous_hard_state` column represents and you want me to add another listing here?

Based on this table, one would assume the following order: PENDING < OK < WARNING < CRITICAL. And what about UNKNOWN and the host states UP and DOWN?

Coming back to the if cases, there is `event.previous_hard_state is PENDING` and `previous_hard_state is greater than OK/UP AND previous_hard_state is not PENDING AND checkable is not in DOWNTIME` (this one misses the `event.` prefix, but I guess it was just forgotten?).

Shouldn't it be possible to rewrite the second clause to `event.previous_hard_state is not in {OK, UP} AND checkable is not in DOWNTIME`, as `event.previous_hard_state` cannot be PENDING due to the first check? Doing so eliminates any order which would otherwise have to be explained.
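To make the suggested equivalence concrete, here is a small standalone sketch. OK through UNKNOWN use the standard Icinga 2 service state numbers; the concrete value for PENDING is an assumption for illustration only, not taken from this PR:

```go
package main

import "fmt"

// Assumed numeric representation of the states discussed above.
const (
	OK       = 0
	WARNING  = 1
	CRITICAL = 2
	UNKNOWN  = 3
	PENDING  = 99 // placeholder value, for illustration only
)

// original models the clause as written in the pseudocode:
// "previous_hard_state is greater than OK/UP AND previous_hard_state is not PENDING".
func original(prev int) bool { return prev > OK && prev != PENDING }

// rewritten models the suggested form: "previous_hard_state is not in {OK, UP}".
// It only behaves the same because the first branch of the if/else chain
// has already handled PENDING.
func rewritten(prev int) bool { return prev != OK }

func main() {
	// For every state that can still reach the second branch (i.e. not
	// PENDING), both formulations agree, so no state ordering is needed.
	for _, prev := range []int{OK, WARNING, CRITICAL, UNKNOWN} {
		fmt.Printf("state %d: original=%v rewritten=%v\n", prev, original(prev), rewritten(prev))
	}
}
```

The point of the rewrite is exactly what the comment argues: once PENDING is filtered out by the earlier branch, no numeric order of states has to be introduced to explain the condition.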
Won't be ready any time soon, so I'm closing it.
Why do we need this?
Currently we generate SLA history events only when there are e.g. state change and downtime start/end events for the checkables. Under some circumstances (if a checkable is created once and never deleted) this should be sufficient. However, when you e.g. delete a host, create it again a couple of days later and want to generate SLA reports for this host at the end of the week, the result can vary depending on which state the host had before it was deleted. In order to be able to generate SLA reports as accurately as possible, we decided to track the checkable creation and deletion time on top of the existing info. And since Icinga 2 doesn't really know when an object has been deleted (at least not in a simple way), this PR should take care of it.
Though, Icinga DB doesn't know when an object has been deleted either; it just takes the time the delete event for that object arrived and puts it into the new table. Meaning, when you delete checkables while Icinga DB is stopped, the events Icinga DB writes after it is started won't reflect the actual delete/create time. However, there is no better way to handle this gracefully.
Config sync
The upgrade script for `1.3.0` generates a `created_at` sla lifecycle entry for all existing hosts and services once it is applied, as proposed in #566 (comment). Thus, all special cases, such as the implementation of a custom fingerprint type for services¹ and performing an extra query to retrieve host IDs from the database for runtime deleted services, have been removed.

Implementation
The new table `sla_history_lifecycle` has a primary key over `(id, delete_time)`, where `delete_time = 0` means "not deleted yet" (the column has to be `NOT NULL` due to being included in the primary key). `id` is either the service or host ID for which that sla lifecycle is being generated. This ensures that there can only be one row per object stating that the object is currently alive in Icinga 2.

Initial sync
Icinga DB performs a simple `INSERT` statement for `Host` and `Service` types after each initial config dump unconditionally, but matches on `hosts` and `services` that don't already have a `create_time` entry with `delete_time = 0` in the `sla_lifecycle` table, and sets their `create_time` timestamp to `now`. Additionally, it also updates the `delete_time` of each existing `sla_lifecycle` entry whose `host`/`service` ID cannot be found in the `Host`/`Service` tables. It's unlikely, but when a given Checkable doesn't already have a `create_time` entry in the database, the update query won't update anything. Likewise, the insert statements may also become a no-op if the Checkables already have a `create_time` entry with `delete_time = 0`.

Create
Nothing to be done here (all newly created objects will be covered by the bulk `INSERT ... SELECT FROM host/service` queries after the config dump).

Update
Nothing to be done here (object existed before and continues to exist).
Delete
Nothing to be done here (all deleted objects will be covered by the general bulk `sla_lifecycle` queries after the config dump).

Runtime updates
Upsert
Performs an `INSERT` with ignore for duplicate keys for both create and update events (these look identical in the runtime update stream). If the object is already marked as alive in `sla_history_lifecycle`, this will do nothing; otherwise it will mark it as created `now` (including when an object that was created before this feature was enabled is updated).

Delete
It assumes that there already exists a `created_at` sla_lifecycle entry for the checkable that is about to be deleted, and performs a simple `UPDATE` statement setting `delete_time = now` (i.e. updates the PK of the row), marking the alive row for the object as deleted. If, for whatever reason, there is no corresponding `created_at` entry for this checkable, that update statement becomes a no-op, as the upgrade script and/or the initial config dump should have generated the necessary entries for all existing objects that were created before this feature was available.

Footnotes
¹ Before @julianbrost proposed a change in https://github.com/Icinga/icingadb/pull/566#issuecomment-2273088195, services had to implement an additional custom fingerprint type `ServiceFingerprint`, which was used to also retrieve their host IDs when computing the config delta of the initial sync. By introducing this type, the necessity of always having to perform an extra `SELECT` query to additionally retrieve the host IDs was eliminated, as the host ID is always required for the sla lifecycles to work. ↩