I am not sure whether I should be posting here or via the bug tracking facilities. I am choosing here because I do not know that my testing of the problem has been rigorous enough to show that a bug fix is required – it may just be user incompetence.
A trial run of my workflow – which involves a good deal of non-Gramps software and processes – can no longer produce a good sqlite extract. I have used this workflow for a long time, and indeed still maintain a version going back to 3.4.9, so I expect changes to be necessary when the schema changes internally between Gramps releases. Am I the only user getting errors when attempting to make the latest available 3rd-party ExportSql.py addon work?
You might have been the only one, because nobody had tried it yet in 5.2. And now that I have tried, I can see that it fails:
151720: ERROR: _exportassistant.py: line 629: Fout bij het exporteren van uw stamboom [in English: Error exporting your family tree]
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/gramps/gui/plug/export/_exportassistant.py", line 620, in save
    success = export_function(
    ^^^^^^^^^^^^^^^^
  File "/home/enno/.gramps/gramps52/plugins/Sqlite/ExportSql.py", line 1072, in exportData
    export_person(db, person.serialize())
  File "/home/enno/.gramps/gramps52/plugins/Sqlite/ExportSql.py", line 717, in export_person
    lookup(birth_ref_index, event_ref_list),
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/enno/.gramps/gramps52/plugins/Sqlite/ExportSql.py", line 79, in lookup
    (_private, _note_list, _attribute_list, ref, _role) = event_ref
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: too many values to unpack (expected 5)
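For context, the failure is a simple arity mismatch: in 5.2.x the serialized event_ref apparently carries a sixth item (the new citation_list), while the addon's lookup() still unpacks five. A minimal illustration, with made-up values:

    # 5.2.x adds citation_list to the serialized EventRef, so the tuple now
    # has six items; unpacking it into five names raises exactly the error
    # shown above.  The values here are illustrative only.
    event_ref = (False, [], [], [], "a1b2c3d4", (1, ""))
    (_private, _note_list, _attribute_list, ref, _role) = event_ref
    # ValueError: too many values to unpack (expected 5)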
@ennoborg How were you able to reproduce the error? I tried with my own db and the Gramps example family tree, but maybe I am not doing exactly what you are. Could you document the exact steps (including any options selected) to reproduce this bug? BTW, the OP has filed bug report 13208. Thanks.
I did an SQLite export of my own tree, with standard options, meaning including all persons, no filters, etc.
I did that on my laptop, with LMDE 6 and a Gramps 5.2.1 built and installed from source, and I can reproduce it on my desktop too, with Linux Mint 21.3, running 5.2.1 from source. I run that from source, because I have 5.1.7 installed, and I like to keep it that way, for now.
5.2.1 means the latest code on maintenance/gramps52.
Yes, exactly the same error at the same place. The developers have now acknowledged the bug. Thanks. 0013208: [Export Sqlite addon] Unable to export sqlite database: "ValueError: too many values to unpack (expected 5)"
If anyone following this issue would like to continue testing 5.2.0 with a workflow based on the sqlite export facility, I can provide, with one caveat, a two-line patch to the present version of ExportSql.py that circumvents the disconnect caused by the modified schema of the event_ref table.
The caveat, as you might have guessed, is that your present workflow must not need the citation_list attribute that was added between 5.1.x and 5.2.x. If you need the greater flexibility afforded by that change to the schema, of course, you should wait for a proper fix.
I’m not well acquainted with the project’s methodologies for change control and builds, so let me just annotate the changes I made in this text file for you or someone else to turn into whatever change seems appropriate. The work-around is to ‘silently consume’ the new, sixth attribute that is now found in the event_ref object.
The existing code at both locations in the file will be unaware of the intervention, so it will continue to operate correctly on the basis of the 5.1.x idea of the event_ref table. No more exceptions will be thrown, so you can get an exported sqlite.sql file as required for whatever the rest of your workflow entails.
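For readers who would rather see the idea than wait for the text file, here is a minimal sketch of the kind of change involved in lookup() (my illustration of the approach, not the exact patch). It assumes the 5.2.x tuple keeps ref and role in the last two positions and merely inserts the new citation_list earlier; verify that against EventRef.serialize() in your own install:

    def lookup(index, event_ref_list):
        """Return the ref handle at the given index, or None.

        Work-around sketch: silently consume the sixth attribute when a
        5.2.x-shaped tuple is seen, so the rest of the exporter keeps
        operating on the 5.1.x idea of event_ref.
        """
        if index < 0:
            return None
        for count, event_ref in enumerate(event_ref_list):
            if len(event_ref) == 6:
                # 5.2.x shape, assumed here to be:
                # (private, citation_list, note_list, attribute_list, ref, role)
                (_private, _citation_list, _note_list, _attribute_list, ref, _role) = event_ref
            else:
                # 5.1.x shape: (private, note_list, attribute_list, ref, role)
                (_private, _note_list, _attribute_list, ref, _role) = event_ref
            if count == index:
                return ref
        return None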
There is something weird here, because I was not aware of needing that attribute, and my test database was just an upgrade of the one I have in my 5.1.x work tree. What I’m trying to say is that I did not edit anything in that 5.2.x database that could trigger the need for this column; at least, not that I’m aware of.
I’m also running a good deal of non-Gramps software and processes that are based on the data in Gramps, and you may be interested in my experiences with that.
For quite some time, I also used the SQLite export as the starting point for my processes. A few months ago, I switched to PostgreSQL as my preferred SQL database. It would of course have been possible to build a data pipeline from SQLite to PostgreSQL, but I decided not to go that way. The most important reason was pure curiosity about how else this could be done, but I also had in mind that the SQL Export add-on is not a core component of Gramps, and I understood from some forum discussions here that it may be dropped at some point in the development cycle.
Therefore I built my solution directly on the XML file that Gramps produces as an automatic backup.
My process parses the XML file and fills tables in the PostgreSQL database that more or less mimic the Gramps views. Those tables are then the starting point for the downstream processes.
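To give an impression of that first step, here is a compressed sketch in Python, not my actual implementation; the namespace version, file name, and table names are assumptions to verify against your own backup file:

    import gzip
    from xml.etree import ElementTree as ET

    # The backup is gzipped Gramps XML; the namespace version below varies
    # by Gramps release - check the header of your own file.
    NS = {"g": "http://gramps-project.org/xml/1.7.1/"}

    with gzip.open("backup.gramps", "rb") as fh:
        root = ET.parse(fh).getroot()

    rows = []
    for person in root.findall("g:people/g:person", NS):
        name = person.find("g:name", NS)
        first = name.findtext("g:first", "", NS) if name is not None else ""
        surname = name.findtext("g:surname", "", NS) if name is not None else ""
        rows.append((person.get("handle"), person.get("id"), first, surname))

    # The rows can then be bulk-loaded into a de-normalized person table,
    # e.g. with psycopg2:
    #   cur.executemany("INSERT INTO person_tbl (handle, gramps_id, first_name,"
    #                   " surname) VALUES (%s, %s, %s, %s)", rows)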
Thank you for sharing information concerning your experiences and methodology.
I, too, worry that the software performing the sqlite export is not core to the project and does not seem to have a current ‘owner’ of the code. I would be forced to re-engineer my whole workflow if the sqlite export were no longer available. Because there have been releases that no longer provided a sqlite export mechanism I could make work, I have had to keep an older 3.4.9 version alive for just that reason.

When you wrote that there is an XML file produced as automatic backup, I had to pause, because I am far removed from working with the product. As far as I know, the .gramps file produced by automatic backup is binary, not text. I do see that there is a standard Export capability that would produce XML, and perhaps that is where you get the XML that you parse and load into your PostgreSQL database.

If the schema you have built results in the same tables, with the same names and the same attribute names and types as what the current ExportSql.py produces, I would hope that the SQL queries I run to produce the csv text files feeding the next part of my workflow would integrate seamlessly. Even if you have made modest schema changes that require modest changes to my SQL queries, it would be nice to contact you to see what you might share. I am getting too long in the tooth to have any enthusiasm for trying to replicate whatever code you have that handles the XML parsing and loading into a proper set of SQL tables.
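In case it helps the comparison, the query-to-csv step of my workflow looks roughly like this today (hypothetical table and column names, run against the exported file):

    import csv
    import sqlite3

    # Hypothetical file, table and column names: run one of the workflow's
    # queries against the ExportSql.py output and dump the result to CSV.
    con = sqlite3.connect("sqlite.sql")
    cur = con.execute("SELECT handle, gramps_id, surname FROM person")
    with open("person.csv", "w", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        writer.writerow([col[0] for col in cur.description])  # header row
        writer.writerows(cur)                                 # data rows
    con.close()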
This alone would be reason enough to re-engineer any workflow … But that’s my personal point of view and you are invited to ignore it completely
As Nick already said, this is a simple text file, but gzipped. So un-gzip the .gramps file and you get an XML file in UTF-8 encoding. My understanding is that the XML export and the automatic backup produce exactly the same file (probably the automatic backup just runs the export before shutting down Gramps).
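In other words, recovering the XML is a few lines in most environments; for instance, in Python (file names illustrative):

    import gzip
    import shutil

    # A .gramps backup is plain Gramps XML, gzip-compressed; this writes
    # the UTF-8 XML file out next to it.
    with gzip.open("backup.gramps", "rb") as src, open("backup.xml", "wb") as dst:
        shutil.copyfileobj(src, dst)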
No, this is not exactly the case. The SQL export produces an extremely normalized database that needs a lot of joins to work with. From a database point of view this is a good thing; from the point of view of a programmer interested in writing code for downstream applications, it is not so good. So my approach was to “de-normalize” the database and, as I said, try to mimic the basic Gramps views (person, event, family etc.) as tables. This is what I already did in SQLite, and I migrated this approach to PostgreSQL. (Another option would have been to stick with the highly normalized approach of the SQLite export and use SQL to build views in PostgreSQL that mimic the Gramps views; a sketch of that follows below.)
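To make that parenthetical option concrete, such a view could look roughly like this; the table and column names are entirely hypothetical, and the real ExportSql.py schema differs in detail:

    import psycopg2  # assumes a reachable PostgreSQL instance

    # A view that re-assembles a Gramps-like person view from normalized
    # tables, so downstream code needs no joins of its own.
    DDL = """
    CREATE VIEW person_view AS
    SELECT p.handle,
           p.gramps_id,
           n.first_name,
           n.surname
    FROM person p
    LEFT JOIN name n ON n.person_handle = p.handle;
    """

    with psycopg2.connect(dbname="gramps") as con:
        with con.cursor() as cur:
            cur.execute(DDL)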
I’ll be happy to share everything I have with you, but just one warning: you appear to work a lot with SQL. I also worked a lot with SQL in my “SQLite period”, but after migrating to PostgreSQL, I threw out nearly 100 % of my SQL code and replaced it with R code. R has some fantastic packages that let you write very efficient and readable code, and since I’m not really an SQL enthusiast, I prefer the R code over SQL in PostgreSQL, even though the SQL would probably be faster.
If you want a deeper discussion of what I’m doing, please send a private mail and we can set up that discussion.
The Sqlite export and import are now both fixed in the addons repository. I did a quick test with our example database which contains two event reference citations.