Strange, then, that more or less every large research project uses CSV as one of its main data formats, if that is the case.
I think you need to take into account what Excel, R, Python, and Perl can actually do with tabular data.
Out of tabular data you can create full network graphs, use fields for calculations, and even build 3D models if you like.
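For example, a few lines of pandas and networkx turn a two-column CSV into a full graph. A minimal sketch, with a made-up file name and column names chosen purely for illustration:

```python
import pandas as pd
import networkx as nx

# Hypothetical CSV: one row per parent-child relationship.
# The file and column names are assumptions, not any export spec.
df = pd.read_csv("relations.csv")

# Two columns are enough to get a full directed graph.
G = nx.from_pandas_edgelist(df, source="person", target="parent",
                            create_using=nx.DiGraph)

print(G.number_of_nodes(), "persons,", G.number_of_edges(), "relations")
```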
AND the most important thing is that nearly all research tools, including Python, have extensive support for that format.
It is also a lot easier to work with than a .sql file or an incorrectly formatted JSON file.
You can use VS Code to work with CSV data if you like, so no, there are no limitations on a tabular format in that way. But yes, it would be easier to have an import/export in a JSON-LD or GraphML file if you want to analyze the data in an RDF or graph tool.
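And writing GraphML is about this much code, which gives an idea of the size of the export addition I mean. A self-contained sketch with toy data standing in for a real Gramps export:

```python
import networkx as nx

# Toy graph standing in for data pulled out of a Gramps export.
G = nx.Graph([("I0001", "I0002"), ("I0002", "I0003")])

# GraphML export is a one-liner in networkx.
nx.write_graphml(G, "family.graphml")
```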
But analyzing tabular data and doing extracts of it is so easy that even I can manage it.
The problem is when people always insist on discussing why something is not helpful, when all they really want is to extract 3 fields out of a database with thousands of them, just so they can do that one simple task.
If you want to get some of the data out of the Gramps XML, it is easy to use either Excel with Power Query or Python with xml.etree (ElementTree) and numpy, or something similar.
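As a minimal sketch of the Python route — the namespace URI and element names below are assumptions that you should check against the root element of your own export, and note that .gramps files are usually gzip-compressed XML:

```python
import csv
import xml.etree.ElementTree as ET

# Assumption: the namespace depends on the Gramps version that
# wrote the file -- check your own export and adjust.
NS = {"g": "http://gramps-project.org/xml/1.7.1/"}

tree = ET.parse("family.gramps")  # assumes an uncompressed XML export
root = tree.getroot()

# Pull three fields per person and write them straight to CSV.
with open("persons.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "first", "surname"])
    for person in root.findall(".//g:person", NS):
        name = person.find("g:name", NS)
        first = name.findtext("g:first", "", NS) if name is not None else ""
        surname = name.findtext("g:surname", "", NS) if name is not None else ""
        writer.writerow([person.get("id"), first, surname])
```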
AND, if you create a report for every single thing someone wants, Gramps will become extremely bloated with reports.
Why is there so little will to make Gramps into software that is "interoperable" with other research tools through a few export/import additions, so that people can use ANY tool they like to do analyses on their data…?
This is starting to remind me more and more of a locked-down proprietary mindset, not an open source/open data mindset.
With my CSV conversion from the Gramps XML, I can import the data into Cytoscape and create a cluster or a node-size data field for exactly what you talked about in your first post, because that is what most network graph tools are made to do.
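For anyone who wants to try the same route, this is the shape of the two tables Cytoscape's file import works with — a sketch with made-up person IDs, where the column names are my own conventions rather than fixed requirements:

```python
import csv
from collections import Counter

# Hypothetical parent-child pairs; in practice these come out of
# the Gramps XML conversion step sketched above.
edges = [("I0001", "I0002"), ("I0001", "I0003"), ("I0002", "I0004")]

# Edge table: "Import Network from File" needs a source column
# and a target column.
with open("edges.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["source", "target"])
    w.writerows(edges)

# Node table: a degree count per person, usable as a node-size
# (or cluster) data field after "Import Table from File".
degree = Counter()
for s, t in edges:
    degree[s] += 1
    degree[t] += 1

with open("nodes.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["id", "degree"])
    w.writerows(degree.items())
```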