WIP: feat(next-v7): extend anubis policy #934

Draft
viceice wants to merge 1 commit from feat/v7-anubis-policy into main
Based on this file: https://github.com/TecharoHQ/anubis/blob/v1.23.1/data/botPolicies.yaml
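
For reviewers who don't want to open the link: that upstream file is mostly a list of imports of rule sets shipped with Anubis, plus a few explicit rules. A rough sketch of its shape (the import paths and rule names here are illustrative, not copied from the pinned file):

bots:
  # "(data)/" imports resolve to rule sets embedded in the Anubis binary.
  - import: (data)/bots/_deny-pathological.yaml
  - import: (data)/meta/ai-block-aggressive.yaml
  # Always let .well-known endpoints through unchallenged.
  - name: well-known
    path_regex: ^/\.well-known/.*$
    action: ALLOW
  # Anything presenting a browser-like User-Agent gets challenged.
  - name: generic-browser
    user_agent_regex: Mozilla
    action: CHALLENGE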
feat(next-v7): extend anubis policy
Some checks failed
build / lint (push) Has been cancelled
build / lint (pull_request) Has been cancelled
/ test (pull_request) Has been cancelled
1e24f1ef11

This adds a lot of complexity, so I have to ask: how effective will this be against what is actually causing the problems?

@earl-warren wrote in https://invisible.forgejo.org/infrastructure/k8s-cluster/pulls/940#issuecomment-9458:

This adds a lot of complexity, so I have to ask: how effective will this be against what is actually causing the problems?

While it looks complex, the main thing we're doing here is using the recommended defaults of Anubis to decide when it should ramp up the difficulty based upon its own internal policies (https://github.com/TecharoHQ/anubis/tree/main/data).
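
To make the ramp-up concrete: Anubis accumulates a per-request weight and maps it to an action through thresholds. The names, cutoffs, and difficulties below are an illustrative sketch of that mechanism, not the values proposed in this PR:

thresholds:
  - name: minimal-suspicion
    # Clean-looking clients skip the interstitial entirely.
    expression: weight <= 0
    action: ALLOW
  - name: mild-suspicion
    # Slightly suspicious traffic gets a cheap challenge.
    expression: weight > 0 && weight < 10
    action: CHALLENGE
    challenge:
      algorithm: metarefresh
      difficulty: 1
      report_as: 4
  - name: extreme-suspicion
    # Clearly bot-like traffic has to solve real proof of work.
    expression: weight >= 10
    action: CHALLENGE
    challenge:
      algorithm: fast
      difficulty: 4
      report_as: 4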

A lot of this logic also just evaluates browsers based upon how "realistic" they appear to be, and allows browsers that look legitimate and come from clean IPs to bypass it entirely. All of this can be fine-tuned, but I do think it's worth trying for at least a short time to see if it remains effective at stopping crawlers while reducing the number of human complaints about being unable to solve Anubis.
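
The browser-realism part works through the same policy file: rules can add weight based on request signals. A hedged sketch of that kind of rule (the specific header check and weight are made up for illustration):

bots:
  - name: missing-accept-language
    # Real browsers virtually always send Accept-Language; its absence
    # nudges the request toward a harder challenge.
    expression: '!("Accept-Language" in headers)'
    action: WEIGH
    weight:
      adjust: 5

Weights from rules like this feed the thresholds above, so a single odd signal only makes the challenge harder instead of blocking the request outright.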

This pull request is marked as a work in progress.
This branch is out-of-date with the base branch
Command line instructions

Checkout

From your project repository, check out a new branch and test the changes.
git fetch -u origin feat/v7-anubis-policy:feat/v7-anubis-policy
git switch feat/v7-anubis-policy

Merge

Merge the changes and update on Forgejo, using whichever variant below matches the desired merge style.

Warning: The "Autodetect manual merge" setting is not enabled for this repository; you will have to mark this pull request as manually merged afterwards.

Merge with a merge commit:
git switch main
git merge --no-ff feat/v7-anubis-policy

Rebase, then fast-forward:
git switch feat/v7-anubis-policy
git rebase main
git switch main
git merge --ff-only feat/v7-anubis-policy

Rebase, then merge commit:
git switch feat/v7-anubis-policy
git rebase main
git switch main
git merge --no-ff feat/v7-anubis-policy

Squash merge:
git switch main
git merge --squash feat/v7-anubis-policy

Fast-forward only:
git switch main
git merge --ff-only feat/v7-anubis-policy

Plain merge:
git switch main
git merge feat/v7-anubis-policy

Finally, push the result:
git push origin main