[Wiki] Translations lost on submission [automatic “I’m not a robot” verification]

I decided to spend some time translating the manual to Danish.
After editing quite a lot I clicked [Save changes]. However, this apparently triggered the automatic “I’m not a robot” verification.
When I was returned to the editing page, I was back where I started, so I lost all the work.

Is there any way to suppress or prevent the “automatic verification” feature?


Just a thought - can it be browser related?
I’m using the Brave browser, which tries to minimise what web pages can store locally.

Try the “back” function; I have had some success with it and the wiki security. (No such luck when MantisBT does this evil behavior.)

Yes. It could be related to how that browser caches.

I am using Firefox. Try an experiment: do a one-word edit in Brave and another (on a different wiki page) in Firefox. Work on something else for an hour, then try committing the change in each browser. (Depending on how Cloudflare manages verification, this might not be a true test. If it validates by IP address, revalidating in one browser might restart the timeout clock. … Perplexity claims the Cloudflare cookies are fully isolated. Annoying, but the test should be good.)

Please let us know the results.

Investigated the traffic issue (the reason the Cloudflare “automatic verification” is being shown).

An ongoing external spike in website traffic started eight hours ago, which I’ve now partially blocked.

So far this time the peak has been eleven times the normal website traffic, once again related to some new AI bots!

Sorry, the “automatic verification” turns on automatically until traffic levels fall back to the normal range; please have patience.

Last intense bot traffic was back on August 8th.

The website is working and accessible apart from that.
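For anyone curious what “partially blocked” can mean in practice, one common server-side approach is filtering requests by User-Agent. The sketch below is only an illustration, not the actual configuration used here, and the crawler names are well-known AI-bot tokens rather than confirmed culprits in this incident:

```python
# Hypothetical sketch of User-Agent based bot filtering.
# The tokens below are examples of known AI crawlers; they are NOT
# confirmed to be the bots involved in this traffic spike.
AI_BOT_TOKENS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

def is_ai_bot(user_agent: str) -> bool:
    """Return True if the User-Agent string contains a known AI-crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_BOT_TOKENS)

# Example requests: a normal browser versus an AI crawler.
print(is_ai_bot("Mozilla/5.0 (X11; Linux x86_64) Firefox/128.0"))            # False
print(is_ai_bot("Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com)"))  # True
```

In reality a reverse proxy or Cloudflare firewall rule would do this matching before requests ever reach the wiki, which is why legitimate users only see the challenge page rather than errors.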



I wonder if posting wiki URLs on Facebook, here, or other social media triggers an uptick in AI-bot traffic?

Not as far as I can determine. Traffic from social media is focused on those specific URLs and is easy to trace back to its source, whereas the bots seem to be attempting to access all pages, files/images, etc., old pages and new alike, across each of the services here (MediaWiki / MantisBT / WordPress and static web pages)!


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.