Kuma checks should not fail kubernetes jobs #69
Reference: infrastructure/k8s-cluster#69
Currently our Kuma monitoring scripts fail the Kubernetes jobs whenever a check detects a problem. They should only fail the job if they could not post the result to Kuma itself.

For example, the up-to-date check currently fails entirely when Codeberg or forgejo-code is offline for some reason. The script should catch this and instead post a "down" state to Kuma.
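A minimal sketch of the intended behavior, assuming a hypothetical push-monitor URL and check function (Uptime Kuma push monitors accept a `status`/`msg` query string on their push URL): a failing check is reported as "down" rather than raised, so only a failure to reach Kuma itself propagates and fails the Kubernetes job.

```python
import urllib.parse
import urllib.request

# Hypothetical push-monitor URL; the real token lives in the cluster config.
KUMA_PUSH_URL = "https://kuma.example.org/api/push/TOKEN"


def push_to_kuma(status: str, msg: str) -> None:
    """Report a status to the Kuma push monitor.

    Network errors here are allowed to propagate -- this is the only
    failure that should fail the Kubernetes job.
    """
    query = urllib.parse.urlencode({"status": status, "msg": msg})
    urllib.request.urlopen(f"{KUMA_PUSH_URL}?{query}", timeout=10)


def run_check(check, push=push_to_kuma) -> None:
    """Run one monitoring check and report its outcome to Kuma.

    A failing check (e.g. Codeberg or forgejo-code being offline) is
    caught and reported as 'down' instead of crashing the job.
    """
    try:
        check()
    except Exception as exc:
        push("down", str(exc))  # report the outage, do not re-raise
    else:
        push("up", "OK")
```

The `push` parameter is injectable mainly so the reporting target can be swapped out in tests; the default posts to the configured Kuma monitor.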