GrampsChat Addon for 6.0

GrampsChat Addon for Gramps desktop 6.0 is now available. It is a configurable chatbot conversation Gramplet. You can use it with commercial LLM providers (like OpenAI) or with your own LLM server.

The addon will automatically install litellm if your Addon Manager settings are set appropriately.

To use a commercial provider, you need to set these two environment variables before starting Gramps. For example, on my Linux machine:

export GRAMPS_AI_MODEL_NAME="gpt-3.5-turbo-0125"
export OPENAI_API_KEY="sk-..."
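
For the curious, this is roughly what happens under the hood. A minimal sketch, not the addon's actual code, assuming only litellm's completion() API (litellm picks up OPENAI_API_KEY from the environment automatically):

import os
from litellm import completion

# The model name comes from the environment variable set above.
response = completion(
    model=os.environ["GRAMPS_AI_MODEL_NAME"],  # e.g. "gpt-3.5-turbo-0125"
    messages=[{"role": "user", "content": "What is Gramps?"}],
)
print(response.choices[0].message.content)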

You can get an account at openai.com; their available models are listed in their documentation.

WARNING: this could cost you money! Keep reading for a free, DIY server.

After starting Gramps, select the GrampsChat Addon (gramplet, developer, stable).

You can then add it to… ah, just the Dashboard. I guess I need to make sure it can be added on any View. TODO #1.

Oh, I guess I need to set a height. TODO #2. Let’s disconnect it (click the gears icon) to make it bigger.

And ask a question, like “What is Gramps?”

Talk to it, and try different models.

WARNING: as your conversation history gets longer, it uses up more tokens on your provider and can cost you money. Click “New Conversation” to clear the history.
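
To get a feel for how the cost grows, here is a rough back-of-the-envelope sketch. The per-token price and message sizes are assumptions for illustration only; check your provider's current rates:

# Rough illustration only; $0.50 per million input tokens is an assumed
# example rate, not a quote. Check your provider's current pricing.
price_per_token = 0.50 / 1_000_000
tokens_per_turn = 200  # assumed average size of one question + answer

total = 0
for turn in range(1, 51):
    # Each new question re-sends the whole history, so the cumulative
    # cost grows quadratically with the number of turns.
    history_tokens = turn * tokens_per_turn
    total += history_tokens * price_per_token
print(f"~${total:.2f} for 50 turns")  # ~$0.13 with these assumptions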

DIY server

(This section requires some computer expertise and may not be for everyone. It may also require a computer with some hefty memory, CPU, and optionally a GPU.)

If you are like me, you don’t want to send your data to some remote server, and don’t want to pay money for exploring these ideas. GrampsChat was designed to also allow you to use your own LLM server.

First, you’ll need ollama. This is a system for hosting your own LLM server.

Installing it should start a “server” automatically. If not, you can start one yourself (in another terminal/console window):

ollama serve

Then you need to pick a model. Let’s start with a small one and “pull” it:

ollama pull llama3.2

And finally, you need to stop Gramps and reset the environment variables:

export GRAMPS_AI_MODEL_URL=http://127.0.0.1:11434
export GRAMPS_AI_MODEL_NAME=ollama/llama3.2
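
Under the hood, litellm routes model names with an “ollama/” prefix to that URL. A minimal sketch, assuming a default ollama install at the address above:

import os
from litellm import completion

# litellm sends "ollama/..." models to api_base instead of a cloud API.
response = completion(
    model="ollama/llama3.2",
    api_base=os.environ.get("GRAMPS_AI_MODEL_URL", "http://127.0.0.1:11434"),
    messages=[{"role": "user", "content": "What is Gramps?"}],
)
print(response.choices[0].message.content)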

Note the “ollama/” prefix in front of the model name. Start Gramps again and chat as before.

That didn’t send any data to another server, and is completely free.

In this thread I’ll describe a bit about how this works, and what it might be useful for. But I encourage you to explore!


I just noticed that llama says that Gramps is the “Gnome Family Relations Manager” 🙂 Lucky gnomes!


A metered service probably needs ‘alert’ CSS applied to the dialog for pay services. Maybe a red titlebar background? (Using an RGBa transparency to make it pale and work with Light/Dark mode.) Maybe the gramplet background if the titlebar is only visible when undocked.

(The map services for the Geography view should probably be flagged that way too.)

The Place Cleanup addon gramplet pops up an initial Config where users sign up for an account, because that’s a metered service.

Is GrampsChat potentially more expensive (both cash and resource wise) if you have multiple instances open in different Categories?

SuperTool by @kku is an addon that (as a Tool rather than a Gramplet) has cached content sync’d to the category. His .json-fed expandable Help might be a good addition too.

If you are setting an environment variable to a service provider that costs money, then you’ll be aware of that. I don’t think we can say for certain whether you are using a no-cost model or not.

No. The only thing that costs money is the number of words (i.e., “tokens”) that you send to a commercial provider. And that just depends on your current chat history and your question.

When we start getting advanced, and seeding the history with text from your Gramps tree or Gramps documentation, the number of tokens is going to skyrocket. But that is why we need researchers to explore making a useful system.
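
As a sketch of what that seeding might look like, here is a minimal example using litellm and a local ollama model. The tree facts are hypothetical placeholders; a real system would have to select only the records relevant to the question, which is the hard part:

from litellm import completion

# Hypothetical facts that a future version might pull from a Gramps tree.
tree_context = (
    "Known facts from this family tree:\n"
    "- John Smith, b. 1872, Sheffield; m. Mary Jones, 1896.\n"
    "- Mary Jones, b. 1875, Cardiff.\n"
)

# Seeding the history: the context rides along with every question,
# which is exactly why the token count skyrockets.
messages = [
    {"role": "system", "content": "You are a genealogy assistant. " + tree_context},
    {"role": "user", "content": "Where was John Smith's wife born?"},
]
response = completion(
    model="ollama/llama3.2",
    api_base="http://127.0.0.1:11434",
    messages=messages,
)
print(response.choices[0].message.content)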

If you want to see some serious confusion, use:

ollama pull deepseek-r1:1.5b
export GRAMPS_AI_MODEL_NAME="ollama/deepseek-r1:1.5b"
export GRAMPS_AI_MODEL_URL=http://127.0.0.1:11434

This is a tiny model that will probably run on most laptops. It has a so-called “reasoning” mode such that it generates a bunch of text before it answers your question. Supposedly this gives better answers. But on small models, it is wacky.

Ask Gramps: “How many ancestors do I have?” deepseek will ramble on and on, arguing with itself about the proper mathematical formula to use. Of course, the correct answer is that it is unknowable. A better question may be “How many humans have ever lived?”. And even that is problematic; what counts as being human?
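
If you want to poke at it outside Gramps, here is a minimal sketch using litellm; as far as I can tell, the r1 models emit their chain of thought between <think> tags before the final answer, so you can watch the rambling happen:

from litellm import completion

response = completion(
    model="ollama/deepseek-r1:1.5b",
    api_base="http://127.0.0.1:11434",
    messages=[{"role": "user", "content": "How many ancestors do I have?"}],
)
# The <think>...</think> block holds the model's "reasoning" text,
# followed by its final answer.
print(response.choices[0].message.content)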

One other point for developers serious about making a useful chatbot for genealogy: if you want to keep track of all of your questions and the models’ responses, you can log them with Opik.

Get a free Opik API key from comet.com (create an account, then copy the API key). Then:

export OPIK_API_KEY=...

When you start Gramps you might see an error dialog; ignore it. All of your chats will then be logged to your Opik dashboard.
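
If you want the same logging in your own scripts, litellm ships an Opik callback. A minimal sketch, assuming the opik package is installed and OPIK_API_KEY is set (the callback name is litellm's and may change between versions):

import litellm
from litellm import completion

# Ask litellm to forward every call to Opik (requires the opik package
# and the OPIK_API_KEY environment variable).
litellm.callbacks = ["opik"]

response = completion(
    model="ollama/llama3.2",
    api_base="http://127.0.0.1:11434",
    messages=[{"role": "user", "content": "What is Gramps?"}],
)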

Full disclosure: Opik is an open source LLM tracking and evaluation tool created by the company I work for (Comet ML).


That might be an interesting limit to add to ancestor count projections…

When does 2^n exceed the total population of the earth in that generation?

After wrestling with Perplexity, here’s the prompt:

For a newborn in 2025, assuming 19 years per generation, calculate the generation number (n) at which the theoretical number of ancestors (2^n) exceeds the estimated world population for that generation. Use historical global population estimates for each generation’s birth year and determine the point where it becomes impossible for there to be no pedigree collapse. Assume that once this threshold is reached, pedigree collapse accelerates at a minimum doubling rate per prior generation (and likely faster). Provide the corresponding year for each significant generation and explain why pedigree collapse is inevitable and must have already occurred by that point.

Short answer: At Generation 29 (~1476 CE), the theoretical number of ancestors (2^29 = 536,870,912) exceeds the estimated world population (~400–500 million).
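
You can sanity-check that arithmetic with a few lines of Python. The population figures below are rough historical estimates, assumptions for illustration only:

# Rough world-population estimates by year (illustrative assumptions).
population = {1500: 460_000_000, 1400: 380_000_000, 1000: 300_000_000}

def estimate(year):
    # Crude nearest-year lookup; good enough for an order-of-magnitude check.
    return population[min(population, key=lambda y: abs(y - year))]

for n in range(1, 40):
    year = 2025 - 19 * n  # assumed 19 years per generation
    if 2 ** n > estimate(year):
        print(f"Generation {n} (~{year} CE): 2^{n} = {2 ** n:,} ancestors")
        break

With these numbers it lands at generation 29 (~1474 CE), in line with Perplexity’s answer.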