5.2 seems to be slower than 5.1

@UlrichDemlehner, which speed issues? Or are they discussed in another thread?

If @dsblank is doing optimizations for the upcoming 6.0 version, he probably wants to know about a very recent specific slowdown.


The performance issues have been discussed quite extensively in other threads, and BTW, you were part of those discussions :wink: But thanks for trying to clarify this issue (which, in my personal opinion, is currently the most serious obstacle for the future of Gramps).


Ok. Thanks for the reference. I did not remember the specifics of the 5.1 vs. 5.2 performance hit, which seems to come down to the BSDDB backend having much more cache allocated than the SQLite backend.

(Although the bogged-down discussion on Geneanet about trying to troubleshoot the massive memory use is troublesome. Expanding the caching would probably just aggravate their problem.)
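If the cache allocation really is the difference, the SQLite side at least has a knob for it: the per-connection page cache, which defaults to only a couple of megabytes. A minimal sketch (the database path is hypothetical, and this is not something Gramps does out of the box), and of course it spends exactly the kind of memory the Geneanet discussion is worried about:

```python
import sqlite3

# Hypothetical path; each Gramps family tree keeps its own SQLite file.
con = sqlite3.connect("example_tree.sqlite")

# SQLite's default page cache is only a couple of megabytes per connection.
# A negative cache_size is interpreted as KiB, so this requests roughly 64 MiB,
# trading memory for fewer disk reads.
con.execute("PRAGMA cache_size = -65536")

print(con.execute("PRAGMA cache_size").fetchone())  # -> (-65536,)
con.close()
```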

And @Nick-Hall suggested indexing could make an additional improvement.

Is that an accurate summary?

I’m not sure if it’s really a cache issue. All the other discussions I’ve had pointed to the extensive use of Python blobs as the root cause. BSDDB has better support for those blobs than SQLite and PostgreSQL (maybe because of the larger cache, I’m not familiar with the technical details), but this is only a symptom and not the root cause. The blobs basically prevent Gramps from leveraging the power of modern database engines, so at the end of the day the blobs have to disappear and be replaced by something modern database engines can deal with better. There was a discussion about JSON objects but again, I’m not familiar with the technical details.

Having said this, I doubt that indexing alone will be the solution. Indexing will probably help in specific situations, but if the data you want to read from the database engine are “wrapped” in blobs that cannot be indexed, an index will be quite useless.
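To make that concrete, here is a small self-contained sketch of the difference (the table layout and field names are invented for the example, not the real Gramps schema, and it needs an SQLite build with the JSON1 functions, which current Python builds include). A pickled blob is opaque to the engine, so filtering on a field inside it means unpickling every row in Python, whereas a JSON column can be queried, and indexed, inside the engine itself:

```python
import json
import pickle
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE person (handle TEXT PRIMARY KEY, blob_data BLOB, json_data TEXT)"
)

# A toy record standing in for a person object (illustrative field names only).
record = {"handle": "abc123", "gramps_id": "I0001", "surname": "Smith"}
con.execute(
    "INSERT INTO person VALUES (?, ?, ?)",
    (record["handle"], pickle.dumps(record), json.dumps(record)),
)

# With blobs, a search on surname has to read and unpickle every row in Python.
rows = [pickle.loads(b) for (b,) in con.execute("SELECT blob_data FROM person")]
hits = [r["handle"] for r in rows if r["surname"] == "Smith"]

# With JSON, the engine can evaluate the filter itself and use an expression index.
con.execute(
    "CREATE INDEX idx_surname ON person (json_extract(json_data, '$.surname'))"
)
cur = con.execute(
    "SELECT handle FROM person WHERE json_extract(json_data, '$.surname') = ?",
    ("Smith",),
)
print(hits, cur.fetchall())
con.close()
```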

Replacing the legacy blobs with JSON is part of the current evolutionary step. The apparent tradeoff for greater database-engine compatibility is less compression in storage.
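For a rough feel of that tradeoff, the two serializations can simply be measured on a sample record. A toy sketch (the record layout is invented; real Gramps objects are richer, and which format ends up smaller depends on the data and on whatever compression sits on top):

```python
import json
import pickle

# Invented stand-in for a serialized person record (illustrative only).
record = ("abc123", "I0001", [["Smith", "John"]], 1, [], {})

as_blob = pickle.dumps(record)         # roughly what a legacy BLOB column holds
as_json = json.dumps(record).encode()  # what a JSON text column would hold

print(f"pickle: {len(as_blob)} bytes, JSON: {len(as_json)} bytes")
```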

Great news, thanks! And I guess everybody is nowadays able to live with less compression in storage, since the days when we had to pay some DEM 1000 (at that time approx. USD 300–400) for 1 MB (MB, not GB!) of RAM in our Compaq 386 servers, and some DEM 2000 for a 300 MB (again, MB, not GB!) hard disk, are definitely gone. And we scratched our heads and kept asking ourselves how in the world somebody would ever be able to fill a 300 MB hard disk …

Ah yes, ‘The Good Olde Days™’, when I wrote my Master’s thesis using a PC that had no HDD, just a pair of 3.5" floppy disk drives.

Today, I’m using a PC that has a 10 TB HDD, steadily filling up with photos from my DSLR.

I wrote both my diploma and my dissertation on an electric (!) typewriter with an exchangeable type wheel for the Greek letters. Does anybody still know the name “Olivetti”? So please don’t mention ‘The Good Olde Days™’ – yours were the future then :wink:

But to add something on the sunny side of our discussion: when old guys talked about the past, maybe 30 or 50 or 100 years ago, they usually talked about their heroic deeds in this or that war. Now we talk about hard disk sizes and how they fill up. Maybe we should consider this an indicator of the progress of mankind, irrespective of all the confusion around us. Just a thought …


I started my genealogy when I got my first computer, an Apple IIe. Wanted to get everything my maternal grandmother knew into an organized format.
