ChatWithTree gramplet addon

Based on earlier work from @dsblank, I have created a Gramplet called “ChatWithTree”.

  • ChatWithTree shows a chat-like interface
  • There are commands to select models from several different AI providers
  • Addition of a “search by name” tool
  • Extended the “get person” tool to deliver a bit more data
  • Apart from the free models that Google Gemini provides, you can also connect to OpenAI, Anthropic, OpenRouter etc., all thanks to the “llm” Python library
  • You have to create an API key for the model that you want to use
  • When the AI bot is thinking, it shows the thinking in the chat, revealing the strategy used to answer difficult questions
  • The Gramplet is built for Gramps version 6

I’m looking for feedback on the current state of the gramplet, and I am very curious which models you will try it with. I personally tried it mainly with gemini-2.0-flash and moonshot/kimi2:free; both can work with the provided tools reasonably well.

You can install the gramplet by adding a new project to the addon manager:

Name: ChatWithTree

Url: https://raw.githubusercontent.com/MelleKoning/addons/refs/heads/myaddon60/gramps60/

The tool is currently marked as “unstable”, so you have to set the plugin filter to unstable to find it.

Then in the Dashboard, you can add the ChatWithTree Gramplet.

When you select a certain chat model, you must have set the API key for that model as an environment variable before starting Gramps! Otherwise you will get an error like this:

Type “/help” for examples of how to set environment variables for keys for different providers.

On Linux, an environment variable can be set with a command such as export OPENROUTER_API_KEY=”sk…”
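A simple way the gramplet could guard against a missing key is to check the environment at startup. This is a minimal sketch, not the addon’s actual code; the function name is hypothetical, only the OPENROUTER_API_KEY variable name comes from the example above.

```python
import os

def has_api_key(env_var: str) -> bool:
    """Return True if the given provider API key is set and non-empty.

    Hypothetical startup check; the real gramplet may report a
    missing key differently (see its /help command).
    """
    return bool(os.environ.get(env_var, "").strip())

# Example: verify the OpenRouter key before selecting that provider.
if not has_api_key("OPENROUTER_API_KEY"):
    print("Set OPENROUTER_API_KEY before starting Gramps.")
```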

OpenRouter has a few free open models that can be tried out.

Here is an example screenshot of what it looks like:

  • The green balloons are what I have typed above.
  • The yellow balloons show the “thinking strategy”, i.e. how the AI model is going to use the provided tools. There will always be one balloon showing which local tools have been used to read information from the Gramps database.
  • The last, blue balloon is the final answer of the AI model.

As you can see, it did not provide a good answer. This sometimes happens because the number of “turns” the AI is allowed to take is currently quite limited: the addon sets a maximum number of turns to bound the search time.
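The turn limit described above can be pictured as a simple driver loop. This is a hedged sketch, not the addon’s implementation; `MAX_TURNS`, `run_chat_turns`, and the `model_step` callable are all hypothetical names.

```python
MAX_TURNS = 5  # hypothetical cap; the addon's actual limit may differ

def run_chat_turns(model_step, max_turns=MAX_TURNS):
    """Drive a tool-calling conversation until the model produces a
    final answer or the turn budget runs out.

    `model_step` is any callable taking the history so far and
    returning either ("tool", result) to continue or
    ("answer", text) to stop.
    """
    history = []
    for _ in range(max_turns):
        kind, payload = model_step(history)
        history.append((kind, payload))
        if kind == "answer":
            return payload, history
    # Budget exhausted: this is the "did not provide a good answer" case.
    return None, history
```

When the budget runs out, a follow-up question from the user (“a nudge”) simply re-enters the loop with the accumulated history, which is why nudging often recovers the answer.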

But we can give the model a nudge by asking for more information.

As you can see, we might have to give several nudges, but some of the found data is there.

It can happen that the chat history becomes too large for the remote model though.
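One way to cope with an oversized history is to trim it before sending it to the remote model. The sketch below is a hypothetical approach with made-up names, using a crude character budget as a stand-in for real token counting with the target model’s tokenizer.

```python
def trim_history(messages, max_chars, keep_system=True):
    """Keep the system prompt (if any) plus the most recent messages
    that fit within a rough character budget.

    `messages` are dicts like {"role": ..., "content": ...};
    a crude stand-in for proper token counting.
    """
    system = [m for m in messages if m["role"] == "system"] if keep_system else []
    rest = [m for m in messages if m["role"] != "system"]
    kept, used = [], sum(len(m["content"]) for m in system)
    # Walk backwards so the newest messages survive the cut.
    for msg in reversed(rest):
        if used + len(msg["content"]) > max_chars:
            break
        kept.append(msg)
        used += len(msg["content"])
    return system + list(reversed(kept))
```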

We can switch to another model with a larger context window in the middle of the chat:

We can also use the information that the model has itself:

(the chat is cut a bit here) - it shows that we can also tap into the general knowledge of the large language models themselves:

I hope the ChatWithTree Gramps addon finds its way to being useful for a nice chat with your own genealogy database!

Cheers,

Melle


Nice work, and glad you could build on some of my ideas and code!

I left some comments on the PR.

It would be nice to build a library of functions that are useful for chatbots. We can spend some time refining them. Using the SimpleAccess was a quick-and-dirty trick, but those functions are generally quite expensive. Let me propose some functions (when I get a chance).

I think this could become a rich ecosystem going forward.


Initial work here (not quite done with your existing functions, and need to write some more):

We can discuss some of the choices, but these were the rules I tried to follow:

  1. Use cached values when available
  2. Use raw data dicts to save time
  3. Don’t overwhelm the caller with too much information
  4. Be consistent in all functions
  5. WIP: Don’t use SimpleAccess or other expensive functions
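Rules 2 and 3 might look something like the sketch below. It is illustrative only: the field names are hypothetical and do not match Gramps’ actual raw data layout, which is version-dependent; the point is returning a compact dict built from already-extracted data rather than constructing full objects.

```python
def summarize_person(raw, max_events=3):
    """Return a compact dict for the LLM: enough to identify the
    person, not the full record (rule 3: don't overwhelm the caller).

    `raw` is assumed to be a plain dict already extracted from the
    database (rule 2: no expensive object construction).
    """
    return {
        "handle": raw["handle"],
        "name": raw.get("primary_name", "unknown"),
        "gender": raw.get("gender"),
        # Cap the event list so one person can't flood the context window.
        "events": raw.get("events", [])[:max_events],
    }
```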

I also added a SYSTEM_PROMPT so that it contains the general structures and the function definitions don’t need to repeat them.

Feedback welcomed!

PS - I haven’t tested this much yet


Nice!

I have not had time to test the updated tools yet, but I am very curious indeed about the performance improvements. I would also like to know how to get hold of other entities, such as a person’s notes.

Meanwhile I saw the post about MCP connected with an LLM on GrampsWeb - also interesting. Things are moving rapidly!

As you might have seen in the GitHub ticket, I have tried to run the addon on Windows, also via a Python VM, but have had no luck so far.

@dsblank

Actually I think the “memory” is something that we already have in two different ways:

  • The chat history itself
  • The gramps database

I do understand that memory access is often much faster than database access, but in the test chats that I had with several (open source) LLMs, I found that most of the models are capable of using the chat history to retrieve person identifiers.

Another issue I found is that the main Gramps application pops up a window saying that Gramps is taking too long, and asks the user to stop the entire application :sweat_smile:

This is because some of the interactions with the LLMs, including the tool calls, can indeed take longer than 10 seconds and thus block the main thread.

I have tried “AsyncThreading” and that seems to work fine in a console test version, see here: GitHub - MelleKoning/genealogychatbot
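The worker-thread idea can be sketched with standard-library primitives: the LLM/tool loop runs off the main thread and emits intermediate results over a queue. In the real Gramplet the GTK side would drain the queue via GLib.idle_add; the `drain` helper below simulates that for a console setting. All names here are hypothetical, not the AsyncThreadService code.

```python
import queue
import threading

def start_chat_worker(chat_fn, prompt):
    """Run `chat_fn(prompt, emit)` on a daemon worker thread.

    `chat_fn` calls `emit(event)` for every intermediate tool result
    and for the final answer; events arrive on the returned queue.
    In a Gramplet, a GLib.idle_add callback would drain this queue
    so the GTK main loop is never blocked.
    """
    events = queue.Queue()

    def run():
        try:
            chat_fn(prompt, events.put)
        finally:
            events.put(None)  # sentinel: worker finished

    threading.Thread(target=run, daemon=True).start()
    return events

def drain(events):
    """Console stand-in for the GUI-side queue consumer."""
    out = []
    while True:
        ev = events.get()
        if ev is None:
            return out
        out.append(ev)
```

Keeping all thread handling inside one service object like this is what lets the GTK GUI code stay single-threaded.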

However, I wonder how to open a second instance or session of a database in Gramps itself?

That way, we could have the LLM tooling do its work on a second session of the database (maybe a read-only version) and have a worker thread doing its job, without blocking the main thread.

I tried to implement the AsyncThreadService.py that works for the console test app in the Gramplet, but did not succeed, mainly because opening the database gives issues: getting hold of the “database name” does not seem easy from within Gramps. There is the Gramplet’s dbstate, but what is the database name needed to create a second session, such as a read-only instance?

I also thought about running only the LLM interaction on a dedicated thread, but then yielding results from intermediate tool calls would still have to interact with the main Gramps thread, and thread communication becomes an even harder problem. I therefore tried to let the AsyncChatService handle the Python worker thread and leave the GTK GUI code alone.

Anyway, my question is: is there a standard way to get a second, read-only session on the same opened database in Gramps? That would make threading a neat possibility for the addon.

I managed to get async opening of the database working on my Gramps 6.0.5 version: