Sometime in the last year or so, the limits in MediaWiki's Recent Changes options started behaving differently.
The change of calendar month started overriding the number-of-days selection: on the 2nd of the month, the list is clipped so that at most 2 days of changes are shown, even when 30 days is selected.
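To make the symptom concrete, here is a minimal sketch (plain Python, not MediaWiki code) comparing what a fixed 30-day cutoff should be with what the clipped behaviour appears to use, taking the 2nd of February as the example date:

```python
from datetime import datetime, timedelta

# Hypothetical illustration of the reported symptom, not MediaWiki code.
# Viewing Recent Changes on 2026-02-02 with "30 days" selected:
now = datetime(2026, 2, 2)

# What a fixed 30-day window should reach back to:
expected_cutoff = now - timedelta(days=30)   # 2026-01-03

# What the clipped behaviour appears to use (start of current month),
# which only covers 2 days of changes on the 2nd:
clipped_cutoff = now.replace(day=1)          # 2026-02-01

print(expected_cutoff.date())  # 2026-01-03
print(clipped_cutoff.date())   # 2026-02-01
```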
Is there a way to make it start ignoring calendar month & year boundaries again?
Not as far as I can tell, as no changes have been made to the MediaWiki settings on my side.
I was having a brain fart trying to think of a word for the boundary between calendar periods. So while discussing it with Perplexity.ai, the following came up:
In MediaWiki’s Recent Changes special page, “calendar boundary” aptly describes how a day-based limit (e.g., 30 days) gets truncated by short months, like February 2nd only showing data from January 31st/February 1st.
MediaWiki Context
This occurs due to the $wgRCMaxAge setting and timestamp logic in SpecialRecentChanges.php, where wfTimestamp(TS_MW, $cutoff) respects month ends over fixed offsets. Short months thus impose a natural cap, visible in queries like rc_timestamp >= $cutoff.
Workarounds
- Override via custom filters or extensions like HidePrefix to use DATE_SUB with INTERVAL normalized to month boundaries.
- Adjust rcdays parameter or patch ChangesList::preprocessChange for pro-rated display warnings.
Sure did, as the listed workaround using the HidePrefix extension (which hides prefixes in links and page titles) is a hallucination!
The second workaround looks a bit more promising. I'll experiment with rcdays, but I'll skip the third suggestion of patching the code.
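For reference while experimenting: on a stock MediaWiki install, the day window for Special:RecentChanges can usually be passed in the URL (e.g. `Special:RecentChanges?days=30&limit=500`), and the hard upper bound on how far back it can reach is the real `$wgRCMaxAge` setting. A minimal LocalSettings.php sketch, assuming defaults otherwise:

```php
// LocalSettings.php sketch — $wgRCMaxAge caps how far back
// Recent Changes can go, in seconds (the default is 90 days).
$wgRCMaxAge = 90 * 24 * 3600;
```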
Hallucinations seem likely for this response. It was already volunteering more information than was requested, as a conversation summary ("AI-splaining").
"AI-splaining" is a recent, pejorative slang term used to describe an artificial intelligence that explains something in a condescending, overconfident, and often inaccurate or oversimplified manner, similar to the social concept of "mansplaining".
The term gained popularity in the context of Google’s “AI Overviews” feature (formerly SGE), where the AI would confidently generate detailed, plausible-sounding explanations for entirely made-up or nonsensical user queries, a phenomenon also referred to as “hallucinating”.
Key characteristics of AI-splaining include:
- Condescension: The AI assumes the user has less knowledge and provides an overly basic explanation.
- Overconfidence: The AI presents information with high certainty, even when the data is insufficient or the information is fabricated.
- Inaccuracy/Oversimplification: The explanation might be factually incorrect, or it might collapse complex issues into an overly simple narrative.
- Justification of Errors: Instead of simply admitting a mistake, the AI might attempt to justify its incorrect output with the energy of a “teenager explaining a D on their report card”.
Tried rcdays and crashed the wiki: a blank site, with the following in the error log:
Internal error - Gramps
[****] 2026-01-08 00:58:02: Fatal exception of type MWException
Will try debugging on the weekend!
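For the debugging session, one common first step (a generic MediaWiki suggestion, not specific to this wiki) is to surface the full exception instead of the bare "Fatal exception of type MWException" line:

```php
// LocalSettings.php sketch — show exception details and stack traces
// on the error page while debugging (turn off again on a public wiki).
$wgShowExceptionDetails = true;

// Optional: write verbose debug output to a file (hypothetical path).
$wgDebugLogFile = "/var/log/mediawiki/debug.log";
```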