Why is Gramps slow in some places?

After an 8-year break from Gramps, I’m looking at all of the changes and new things. Wow! But some things haven’t changed. Gramps is still slow in some places. Why?

In order to refamiliarize myself with the codebase, I started to look at filters. @emyoulation kindly shared a large family tree to explore some specific oddities in filtering and searching. And now it begins to come back to me.

In this thread, I’ll document some issues and ideas as I explore anew.

First, I wanted to get a feel for how often Gramps touches the database. The first thing I found was that on the Relationships page for a typical family (mom, dad, 3 siblings, and a spouse), the individual’s Person data was fetched from the database 19 times. Yikes!

Accessing the database and unpickling the data 19 times is expensive, but it is a result of how Gramps was designed.

I remembered that I had written a Cache proxy that is used in a few places in Gramps. So I wrapped any opened database with it. That helped somewhat, reducing this particular individual’s db access down to 6. (It also speeds up all of the other lookups.) But why didn’t it reduce the access down to 1?
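The idea behind such a proxy can be sketched in a few lines. This is a minimal read-through memoization layer, not the actual Gramps Cache proxy code; the class and method names here are illustrative:

```python
# Minimal sketch of a read-through cache proxy (hypothetical names,
# not the real Gramps implementation).

class CacheProxy:
    """Wrap a database and memoize person lookups by handle."""

    def __init__(self, db):
        self._db = db
        self._person_cache = {}

    def get_person_from_handle(self, handle):
        # Hit the real database (and unpickle) only once per handle.
        if handle not in self._person_cache:
            self._person_cache[handle] = self._db.get_person_from_handle(handle)
        return self._person_cache[handle]

    def __getattr__(self, name):
        # Delegate everything else to the wrapped database unchanged.
        return getattr(self._db, name)


class CountingDB:
    """Stand-in database that counts raw lookups."""

    def __init__(self):
        self.calls = 0

    def get_person_from_handle(self, handle):
        self.calls += 1
        return {"handle": handle}


db = CountingDB()
proxy = CacheProxy(db)
for _ in range(19):                     # the 19 accesses from above
    proxy.get_person_from_handle("I0001")
print(db.calls)                         # 1 raw lookup instead of 19
```

Wrapping the opened database this way means callers keep the same API and only the repeated lookups get cheaper.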

Ah, there are two levels at which we’d need to cache data: at the database level, and at the Gramps object level (objects like Person and Place). We do the second with the Cache proxy, but not the first. I’ll explore that possibility.

But why don’t we always cache? Currently, the Cache proxy is a read-only layer (which is why it is only used in a few places). But with a little engineering, we can fix that so that we always cache. This would help across the board, with regular access and filters.
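Making the cache safe for writes could look roughly like this: a commit passes through to the database and then refreshes (or drops) the cached copy, so readers never see stale data. Method names here are assumptions for illustration, not the real Gramps API:

```python
# Sketch of a write-through cache: writes keep the cache consistent,
# so it no longer has to be a read-only layer.

class WriteThroughCache:
    """Cache that stays valid across writes, not just reads."""

    def __init__(self, db):
        self._db = db
        self._cache = {}

    def get_person_from_handle(self, handle):
        if handle not in self._cache:
            self._cache[handle] = self._db.get_person_from_handle(handle)
        return self._cache[handle]

    def commit_person(self, person, trans=None):
        # Write through: update the store, then refresh the cached copy.
        self._db.commit_person(person, trans)
        self._cache[person["handle"]] = person

    def remove_person(self, handle, trans=None):
        # On delete, drop the cached copy so it cannot be served stale.
        self._db.remove_person(handle, trans)
        self._cache.pop(handle, None)


class DictDB:
    """Toy backing store used to exercise the cache."""

    def __init__(self):
        self.rows = {}

    def get_person_from_handle(self, handle):
        return self.rows[handle]

    def commit_person(self, person, trans=None):
        self.rows[person["handle"]] = person

    def remove_person(self, handle, trans=None):
        del self.rows[handle]


db = DictDB()
cache = WriteThroughCache(db)
cache.commit_person({"handle": "I0001", "name": "Ada"})
person = cache.get_person_from_handle("I0001")
```

The key design point is that every write path must update or invalidate the cache; miss one (e.g. a bulk transaction) and the read-only restriction comes back.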

I for one suspect and sincerely hope the use of pickled blobs of data is a dying if not dead technique and the next iterations of GRAMPS move away from this and in the meantime we all grin and bear the apparent



Indeed, pickled blobs have problems independent of speed issues, some of which I pointed out here: Collaborate on Optimizing a new Custom Rule - #22 by dsblank

Speed is another dimension, and I think that we can move away from pickled blobs without any slowdown.

In my experiments, I now have a cache on the db layer. I’ll next try to tease apart the speedups from eliminating building Python objects, database access, and de-pickling. Optimizing all three looks like it will make parts of Gramps orders of magnitude faster with very few changes.
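Teasing those three costs apart could be done with a toy micro-benchmark along these lines. The schema and record are illustrative only; a real Gramps Person is far richer than this stub, so the absolute numbers mean little, but the relative split is the interesting part:

```python
# Rough sketch for separately timing the three costs mentioned above:
# fetching a blob, unpickling it, and building a Python object.

import pickle
import sqlite3
import timeit

class Person:
    """Drastically simplified stand-in for a Gramps Person."""

    def __init__(self, data):
        self.handle, self.gramps_id, self.surname = data

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (handle TEXT PRIMARY KEY, blob BLOB)")
blob = pickle.dumps(("I0001", "p0001", "Smith"))
conn.execute("INSERT INTO person VALUES (?, ?)", ("I0001", blob))

def fetch():
    # Cost 1: database access.
    return conn.execute(
        "SELECT blob FROM person WHERE handle = ?", ("I0001",)
    ).fetchone()[0]

def unpickle():
    # Cost 2: de-pickling the stored blob.
    return pickle.loads(blob)

def build():
    # Cost 3: constructing the Python object.
    return Person(("I0001", "p0001", "Smith"))

for name, fn in [("fetch", fetch), ("unpickle", unpickle), ("build", build)]:
    print(name, timeit.timeit(fn, number=10_000))
```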

I commented a week or two ago that some screens, like Sources/Citations, are slow to load. Nick pointed out that in the search screens all the data is loaded each time one is opened. I would think there must be a better way to do this that would speed it up.


Yes, there are a couple of well-known solutions:

  1. Use paged views. Only load the first N rows, and load the next set as you go to the next page. Clunkier than the current Gramps views, but it scales to large databases. Most websites work that way.
  2. Use a virtual GUI table, loading the data for each cell as the rows or columns are exposed. I made a prototype of that with Gramps data a few years ago. Tricky to code, but a nice solution for databases with fewer than a million rows.
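Option 1 boils down to fetching one screenful at a time with SQL `LIMIT`/`OFFSET` instead of loading every row up front. A sketch, with a made-up table and column names:

```python
# Paged-view sketch: the view asks for one page of rows on demand
# rather than materializing the whole table at startup.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE citation (handle TEXT, page TEXT)")
conn.executemany(
    "INSERT INTO citation VALUES (?, ?)",
    [(f"C{i:04d}", f"page {i}") for i in range(1000)],
)

PAGE_SIZE = 50

def load_page(page_number):
    """Return one screenful of rows, in a stable order."""
    return conn.execute(
        "SELECT handle, page FROM citation ORDER BY handle LIMIT ? OFFSET ?",
        (PAGE_SIZE, page_number * PAGE_SIZE),
    ).fetchall()

first = load_page(0)    # 50 rows fetched, not 1000
```

A virtual GUI table (option 2) is the same idea at cell granularity: the widget calls back for row data only as rows scroll into view.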

I’m not certain how much my current experiments in db and object caching will affect Sources/Citations, but the effect should be significant—even without using either of the above view tricks. But they would be a tremendous speedup, I’m sure!


I said it privately before, but … welcome back home.

