Recently, a contributor converted a portion of the wiki to make it more readily translatable into other languages. While essential for a “100% translatable” objective, I feel that abstractions are an impediment to grokking an interface: people have to make an extra cognitive jump to map the labels in the abstraction onto the GUI elements.
They changed the Gramps Main Window annotated screen capture (which introduces the terminology for the elements of the main window) from a PNG screen capture to an abstraction in MediaWiki table form.
This particular abstraction has a couple of extra drawbacks:
- it tries to integrate mutually exclusive variations (the Search Bar and the Sidebar; the bottom/side split bars and the Dashboard)
- the bottom bar is represented as full width when it is just a vertical partition of the Display Area; likewise, the Search Bar is just a vertical partition of the Display Area
What are other options?
Is there a display feature or image format that would let the labels be translatable text yet still show a REAL screen capture?
SVG could be an option: you can write both Java and Python code to control its behavior…
Try, e.g., Inkscape to see if it can convert the PNG to a functional SVG file first; then someone might be able to add code to control its behavior, e.g., changing the text in text boxes and labels.
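To make the idea concrete, here is a minimal stdlib-only Python sketch that swaps label text inside such an SVG. It assumes a hypothetical convention where each translatable label element carries an `id` like `label-sidebar`; nothing here is an existing Gramps or Inkscape API.

```python
# Minimal sketch: rewrite label text inside an SVG, e.g. one produced by
# annotating in Inkscape. Assumes each translatable label element has an
# id such as "label-sidebar" (a hypothetical naming convention).
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # keep the default namespace unprefixed

def relabel_svg(svg_path, out_path, translations):
    """Replace the text of every element whose id appears in translations."""
    tree = ET.parse(svg_path)
    for elem in tree.iter():
        if elem.get("id") in translations:
            elem.text = translations[elem.get("id")]
    tree.write(out_path, encoding="UTF-8", xml_declaration=True)
```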
However, the MediaWiki cataloging system for images is onerous; it doesn’t deal well with 3 OSes and 40-plus languages. I’ve tried sharing layered master annotated images for GIMP and Inkscape. It wasn’t successful: they were either ignored or overlooked.
The hope is to find a workflow that doesn’t require 40 translators to install graphics software on their machines.
I was wondering if there is a way to make a table with a screen-capture backdrop. Maybe we could leave the callouts (the yellow conversation bubbles in the Main Window illustration) blank and have table cells float above each callout for the label text?
Less ideally, a crosswalk table could have the English in one column and the translation in the other.
I was thinking the translations could live in text files that are picked up by the Python or Java code generating the SVG and inserted into variables for the different labels.
Another approach might be to use the text-based graphics of diagrams.net (draw.io), but I don’t know if that will work the way you need, and it most likely will not operate within any wiki variant or MediaWiki.
I was thinking that, e.g., a Python script could generate the “new screenshots” with updated labels in SVG from a single master SVG file for each screenshot you need, once per language, using the existing translation files (if possible), and then convert the SVG to any usable image format.
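A rough sketch of that per-language generation loop, assuming a hypothetical `key = value` translation-file format and `$name`-style placeholders in the master SVG (real Weblate files would need a proper parser):

```python
# Sketch of the one-master-SVG idea: fill $label-style placeholders in a
# master SVG once per language. The key=value translation-file format and
# the output file names are hypothetical, not an existing Gramps layout.
from pathlib import Path
from string import Template

def load_translations(lang_file):
    """Read 'key = value' lines into a dict (stand-in for real Weblate data)."""
    pairs = {}
    for line in Path(lang_file).read_text(encoding="utf-8").splitlines():
        if "=" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition("=")
            pairs[key.strip()] = value.strip()
    return pairs

def generate_localized_svgs(master_svg, lang_files, out_dir):
    """Write one SVG per language and return the paths written."""
    template = Template(Path(master_svg).read_text(encoding="utf-8"))
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for lang, lang_file in lang_files.items():
        svg_text = template.safe_substitute(load_translations(lang_file))
        target = out_dir / f"main_window_{lang}.svg"
        target.write_text(svg_text, encoding="utf-8")
        written.append(target)
    return written
```

The resulting SVGs could then be rasterized with, e.g., Inkscape’s command line (`inkscape --export-type=png file.svg` in Inkscape 1.x) or an SVG-rendering library.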
Ooh, maybe you can use ImageMagick to insert a layer with new text boxes/labels and then flatten the new image… you can script ImageMagick using Python, I think…
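For the ImageMagick route, a hedged sketch of driving the real `convert` CLI (`magick` in ImageMagick 7) from Python via `subprocess`; the coordinates, point size, and file names are invented for illustration:

```python
# Sketch of scripting ImageMagick from Python with subprocess: draw translated
# labels onto a clean (label-free) screen capture. "convert" is the classic
# ImageMagick CLI ("magick" in v7) and must be on PATH for the run step.
import subprocess

def build_annotate_command(clean_png, out_png, labels):
    """labels: iterable of (x, y, text) in pixel coordinates."""
    cmd = ["convert", clean_png, "-pointsize", "16", "-fill", "black"]
    for x, y, text in labels:
        cmd += ["-annotate", f"+{x}+{y}", text]  # -annotate is a real option
    cmd.append(out_png)
    return cmd

def annotate_screenshot(clean_png, out_png, labels):
    subprocess.run(build_annotate_command(clean_png, out_png, labels), check=True)
```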
I am just throwing out some wild alternatives here.
@Nick has already mentioned the possibility of writing code to re-generate the Addons table in the wiki from the registration data in the project. (And he broke the table out into an “addons” MediaWiki template to make the process more viable.) If he has automated that workflow to replace the MediaWiki file, the example might be adaptable to uploading the results of dozens of Weblate-driven ImageMagick scripts for localized variants.
And the first step in @GaryGriffin’s PDF Manual Generation 5.1 workflow is to make an HTML intermediary version. So such a tool might also be good for re-generating a set of GUI images specific to a user’s configuration (language, colors, dark theme) in a local HTML download.
And it could be useful for genealogical purposes too: the face-tag rectangles could be output as SVG rectangles for labeling photos. (We talked about a method using Picasa and Lua scripts for a similar labeling of images in 2021.)
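As a toy illustration of that last idea, a small Python function that wraps a photo and its face-tag rectangles into an SVG overlay, with a `<title>` per rectangle so viewers show the name on hover. The `(x, y, w, h, name)` tuple shape is hypothetical, not Gramps’ actual data model:

```python
# Toy sketch: wrap a photo plus face-tag rectangles into an SVG overlay.
# The (x, y, w, h, name) tuple shape is hypothetical, invented for this demo.
from xml.sax.saxutils import escape

def face_tags_to_svg(image_href, width, height, tags):
    rects = []
    for x, y, w, h, name in tags:
        rects.append(
            f'<rect x="{x}" y="{y}" width="{w}" height="{h}" '
            f'fill="none" stroke="yellow" stroke-width="2">'
            f'<title>{escape(name)}</title></rect>'  # name shows on hover
        )
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'xmlns:xlink="http://www.w3.org/1999/xlink" '
        f'width="{width}" height="{height}">'
        f'<image xlink:href="{escape(image_href)}" '
        f'width="{width}" height="{height}"/>'
        + "".join(rects) + "</svg>"
    )
```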