[Release] Gramps [Web] MCP v1.0 - Connect AI Assistants to Your Family Tree

I’ve released the first version of Gramps MCP - a Model Context Protocol (MCP) server that enables AI assistants to interact directly with Gramps databases through the Gramps Web API.

Technical overview: Built using the MCP Python SDK, this server exposes 16 tools for comprehensive genealogy data interaction:

  • Universal search using Gramps Query Language (find_type)
  • Full-text search across all entities (find_anything)
  • CRUD operations for all major Gramps objects (person, family, event, place, source, etc.)
  • Tree analysis tools (descendants, ancestors, statistics)
  • Change tracking and relationship discovery
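
To give a feel for how such tools are wired up, here is a minimal sketch of registering one tool with the FastMCP interface from the MCP Python SDK. The tool name matches find_anything above, but the body and parameter names are purely illustrative - the actual implementation may be structured differently.

from mcp.server.fastmcp import FastMCP

# Minimal, illustrative tool registration - not the real gramps-mcp code.
mcp = FastMCP("gramps-mcp")

@mcp.tool()
async def find_anything(query: str, max_results: int = 10) -> str:
    """Full-text search across all Gramps entities via the Gramps Web API."""
    # The real server would call the Gramps Web API and format the hits;
    # here we just echo the request.
    return f"Searched for {query!r} (max {max_results} results)"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default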

Architecture: FastAPI-based HTTP server with stdio transport support, JWT authentication, Pydantic validation, and an async httpx client for Gramps Web API communication. Supports both Docker deployment and direct Python execution.
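
As a rough illustration of that communication flow (not the actual module), an async httpx client might obtain a JWT from the Gramps Web API and reuse it for authenticated calls roughly like this; the /api/token/ and /api/metadata/ paths are assumptions based on the Gramps Web API docs, and the helper itself is hypothetical.

import asyncio
import httpx

async def fetch_metadata(base_url: str, username: str, password: str) -> dict:
    """Hypothetical helper: obtain a JWT, then make an authenticated API call."""
    async with httpx.AsyncClient(base_url=base_url) as client:
        token_resp = await client.post(
            "/api/token/", json={"username": username, "password": password}
        )
        token_resp.raise_for_status()
        access_token = token_resp.json()["access_token"]

        headers = {"Authorization": f"Bearer {access_token}"}
        meta_resp = await client.get("/api/metadata/", headers=headers)
        meta_resp.raise_for_status()
        return meta_resp.json()

# Example
# print(asyncio.run(fetch_metadata("https://gramps.example.org", "user", "secret")))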

API integration: Leverages the existing Gramps Web API endpoints while adding intelligent query processing and natural language interfaces. All operations respect Gramps data validation and relationship constraints.
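
For example, a natural-language request like “find everyone named Smith” ultimately becomes a plain search call against the existing API. A hypothetical sketch follows; the /api/search/ path and its parameters are assumptions about the Gramps Web API, not a verified contract.

import httpx

def search_tree(base_url: str, token: str, query: str, max_results: int = 10):
    """Hypothetical sketch of what a tool like find_anything might do under the hood."""
    headers = {"Authorization": f"Bearer {token}"}
    resp = httpx.get(
        f"{base_url}/api/search/",
        params={"query": query, "pagesize": max_results},
        headers=headers,
    )
    resp.raise_for_status()
    return resp.json()

# Example
# hits = search_tree("https://gramps.example.org", token, "Smith")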

Development context: I maintain a 4400+ person tree in Gramps and wanted to bridge the gap between AI assistance and serious genealogy research. This preserves Gramps’ data integrity while enabling conversational interaction with family tree data.

Extensibility: Modular tool architecture makes it straightforward to add new genealogy-specific functionality. Each tool category is separated (search, data management, analysis) for easier maintenance and extension.
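
As a purely illustrative sketch of what adding a new tool module could look like (the file name and the register() convention are hypothetical; the real layout may differ):

# Hypothetical new module, e.g. src/gramps_mcp/tools/analysis_extra.py.
def register(mcp):
    """Attach an extra analysis tool to an existing MCP server instance."""

    @mcp.tool()
    async def count_generations(person_id: str) -> str:
        """Report how many ancestor generations are recorded for a person."""
        # A real implementation would walk the tree via the Gramps Web API.
        return f"Generation count for {person_id} is not implemented in this sketch"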

Would appreciate feedback from developers familiar with the Gramps codebase, especially around API usage patterns and potential integration opportunities. The repository includes detailed technical documentation and contribution guidelines.

Any thoughts on extending this approach or integrating more deeply with Gramps core functionality?

11 Likes

Very nice! It already covers quite a lot of functionality.

I’ve been working on a few other items that could be related:

  1. New libchatfuncs by dsblank · Pull Request #763 · gramps-project/addons-source · GitHub - a set of functions designed for LLMs. Focus on speed, limiting resources (don’t overwhelm the LLM), and utility. Currently designed for agentic tool use.
  2. Graph algorithms by dsblank · Pull Request #2111 · gramps-project/gramps · GitHub - ancestor and descendant algorithms. Designed for speed and consistency.
  3. addons-source/GrampsChat at maintenance/gramps60 · gramps-project/addons-source · GitHub - a Gramps Gramplet chatbot, built on top of litellm. Custom agentic framework. (There is also a spinoff chatbot with some nice features by Melle Konig.)
  4. GitHub - dsblank/gramps-web-desktop: Python package to view your gramps family tree with gramps-web - a way of using Gramps Web without the complexity, and using your SQLite database.
  5. GitHub - dsblank/object-ql: A Python query system for querying Objects - Gramps Object Query Language, a drop-in replacement for Gramps Query Language. Not fast. I’ll redo this someday using a faster method.

Do you imagine your MCP server working inside Gramps Web? I wonder if some of the returned data would overwhelm the LLM. Would love to see an example conversation with the Gramps example tree.

For clarity, should this be renamed Gramps Web MCP? Is this intended to work only with GQL, the Gramps Query Language by @DavidMStraub? And will those queries go to Gramps Web (servers), not to Gramps (desktop)?

Thanks for the feedback! I went the MCP route because I have a subscription to Claude Code and wanted to leverage this the easiest way without having to pay per use. If I ever change my mind, it would be easy to switch providers or tools.

I also think a chatbot embedded in an app isn’t the best way forward. From a research perspective, you want access to multiple tools and the ability to mix and match like web search or multimodal capabilities. That way I can have a conversation like: “Find John Smith in my family tree, then search online for more records about him, and if you find a death certificate, extract all the information and add it to his record” all seamlessly in one chat.

We could build all that within Gramps Web, as these are essentially all tool calls. For that I would look to something like https://ai.pydantic.dev/ which is a great agent framework. But we’d just be recreating something like https://docs.openwebui.com/ if we want it to work well. So in my opinion, MCPs are the way forward.

I looked through the other projects and will already incorporate some improvements proposed there. I’ve created GitHub issues for these: https://github.com/cabout/gramps-mcp/issues/5, https://github.com/cabout/gramps-mcp/issues/6, and https://github.com/cabout/gramps-mcp/issues/7. Especially good points on the documentation and not using the HTML report.

On the data-overloading issue, I’ve been trying my best to limit that, mostly by making decisions for the LLM inside this MCP layer - adding maximum-result caps and similar limits. But more is needed. Using capable (non-local) models with larger context windows helps too.
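
Concretely, the idea is just to clamp and truncate inside the MCP layer before anything reaches the model. A hypothetical helper along these lines (all names illustrative, not the actual code):

# Hypothetical result-limiting helper: cap the number of hits and the size of
# each entry so a single tool call can't flood the model's context window.
MAX_RESULTS = 10
MAX_CHARS_PER_RESULT = 500

def limit_for_llm(results: list[dict]) -> list[dict]:
    trimmed = []
    for item in results[:MAX_RESULTS]:
        text = str(item)
        if len(text) > MAX_CHARS_PER_RESULT:
            text = text[:MAX_CHARS_PER_RESULT] + " …[truncated]"
        trimmed.append({"summary": text})
    return trimmed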

For the web desktop, this MCP should already work, since it would just point to localhost:5000 instead of the URL of your Gramps Web instance. I still need to test this, though. I’ll reply back when I have the example conversation documentation in place.

Thanks again for the feedback and suggestions!

2 Likes

Good point, I will create an issue for this too, as it is a bit of a chore to change it everywhere: Clarify naming: Should this be 'Gramps Web MCP' to better reflect its purpose? · Issue #8 · cabout-me/gramps-mcp · GitHub

1 Like

Btw, I’ve argued that the branding should be “GrampsWeb” compound word rather than 2 separate words. (Simply to make it more unique and searchable.) It hasn’t been a popular suggestion.

1 Like

Nice.

Does it also work for an offline desktop? (That’s not meant as a joke.) Currently, I can test a Custom Connector via an online chat, and self-hosting (with few resources) might be possible too.

eg, GitHub - itisaevalex/mistr-agent: A MCP client that enables Mistral AI models to autonomously execute complex tasks across web and local environments through standardized agentic capabilities.

GitHub - dsblank/gramps-web-desktop: Python package to view your gramps family tree with gramps-web - I was talking about this. But like I said, I haven’t tested it yet.

Yes, this should work. I added examples in the README for connecting to https://docs.openwebui.com/, which is similar to this.

2 Likes

@emyoulation, I think there is a bit of confusion of layers in your question. You can use GrampsWeb with your local SQLite database, and you can use Gramps desktop with your PostgreSQL database.

@MPietrala’s MCP is a server that talks to gramps-web-api. But that doesn’t mean that there couldn’t be a desktop gramplet (for example) that talks to the MCP (which would talk to the gramps-web-api).

In general, I think the MCP is the right direction to go, as that makes the API useful for chatbots and other agentic systems (like Cursor).

2 Likes

Fantastic @MPietrala, I’ll have a detailed look in the next days!

Since it’s in Python, is there any reason we can’t just stick this behind the Flask Web API directly rather than standalone with FastAPI? I’m pretty sure we can have both.

At first glance, I think this basically implements Teaching the AI assistant to call tools · Issue #586 · gramps-project/gramps-web-api · GitHub (and more) and it would be silly not to be able to call this directly from the Gramps Web chat interface (which at the moment can do very little).

What do you think?

3 Likes

@MPietrala here is a conversation I had with ChatGPT about how one could use an MCP in the chatbots that we have been working on here: ChatGPT - MCP servers with chatbots

Basically, it would just need a /tools endpoint that provides the tools’ schemas. It would be nice to make gramps-web-api much easier to use from the desktop.
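
Something like this might be all that endpoint needs to be. A hypothetical Flask sketch follows; in practice the schemas would come straight from the MCP tool definitions rather than a hand-written list.

from flask import Flask, jsonify

app = Flask(__name__)

# Hand-written stand-in for the schemas the MCP SDK already generates.
TOOLS = [
    {
        "name": "find_anything",
        "description": "Full-text search across all Gramps entities",
        "inputSchema": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "max_results": {"type": "integer", "default": 10},
            },
            "required": ["query"],
        },
    },
]

@app.route("/tools")
def list_tools():
    # A chatbot or desktop client can fetch this once and build its tool calls from it.
    return jsonify({"tools": TOOLS})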

1 Like

On a slightly different note: I asked an AI about using this MCP for local self-hosting.

Attached some quick samples:

import requests
import json

def query_gramps_mcp(prompt):
    url = "http://localhost:8000/query"
    headers = {"Content-Type": "application/json"}
    data = {"prompt": prompt}
    response = requests.post(url, headers=headers, json=data)
    return response.json()

# Example
gramps_data = query_gramps_mcp("Find all ancestors of John Doe")
print(gramps_data)

via http

import subprocess
import json

def query_gramps_mcp_stdio(prompt):
    """Send a prompt to gramps-mcp over stdio and collect its output."""
    process = subprocess.Popen(
        ["uv", "run", "python", "-m", "src.gramps_mcp.server", "stdio"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
        cwd="/path/to/gramps-mcp"
    )

    process.stdin.write(json.dumps({"prompt": prompt}) + "\n")
    process.stdin.flush()
    process.stdin.close()  # signal end of input so the read loop below can reach EOF

    output = ""
    while True:
        line = process.stdout.readline()
        if not line:
            break
        output += line

    process.terminate()
    return json.loads(output)

# Example
gramps_data = query_gramps_mcp_stdio("Find all ancestors of John Doe")
print(gramps_data)

via Stdio

import subprocess
import json

def query_ollama(prompt):
    result = subprocess.run(
        ["ollama", "run", "mistral", prompt],
        capture_output=True,
        text=True
    )
    return result.stdout

# Example usage - query_gramps_mcp is defined in the HTTP example above
gramps_data = query_gramps_mcp("Find all ancestors of John Doe")
enriched_prompt = f"""
Using the following Gramps data:
{gramps_data}
Answer this question: Who are the parents of John Doe?
"""
llm_response = query_ollama(enriched_prompt)
print(llm_response)

Ollama

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def query_local_llm(prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=200)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example - query_gramps_mcp is defined in the HTTP example above
gramps_data = query_gramps_mcp("Find all ancestors of John Doe")
enriched_prompt = f"""
Using the following Gramps data:
{gramps_data}
Answer this question: Who are the parents of John Doe?
"""
llm_response = query_local_llm(enriched_prompt)
print(llm_response)

transformers

import requests
import subprocess
import json

def query_gramps_mcp(prompt):
    url = "http://localhost:8000/query"
    headers = {"Content-Type": "application/json"}
    data = {"prompt": prompt}
    response = requests.post(url, headers=headers, json=data)
    return response.json()

def query_ollama(prompt):
    result = subprocess.run(
        ["ollama", "run", "mistral", prompt],
        capture_output=True,
        text=True
    )
    return result.stdout

# 1. Query gramps-mcp
gramps_data = query_gramps_mcp("Find all ancestors of John Doe")

# 2. Enrich the prompt for the LLM
enriched_prompt = f"""
Using the following Gramps data:
{gramps_data}
Answer this question: Who are the parents of John Doe?
"""

# 3. Send to the local LLM
llm_response = query_ollama(enriched_prompt)
print(llm_response)

http + Ollama

etc.

""" Ce code est inspiré des exemples et documentations suivants :

- gramps-mcp : https://github.com/cabout-me/gramps-mcp (License: [to be verified])

- Mistral AI MCP : https://docs.mistral.ai/agents/mcp/ (License: MIT/Apache 2.0)

- Ollama : https://ollama.ai/ (License: MIT)

- Hugging Face Transformers : https://huggingface.co/docs/transformers/index (License: Apache 2.0)

For testing: A recent PC with 16 GB of RAM and a mid-range GPU (such as an RTX 3060) is sufficient for Mistral 7B.

For regular use: 32 GB of RAM and a GPU like the RTX 4090 are ideal.

For production or large models: A workstation with a professional GPU (A100/H100) and 64 GB+ of RAM is required.

Example setup for gramps-mcp + Mistral 7B

Hardware:

  • CPU: AMD Ryzen 7 5800X (8 cores).

  • RAM: 32 GB DDR4.

  • GPU: NVIDIA RTX 3060 (12 GB VRAM).

  • Storage: 1 TB NVMe SSD.

Software:

  • OS: Ubuntu 22.04 LTS.

  • LLM Tool: Ollama or transformers + bitsandbytes (for 8-bit quantization).

  • gramps-mcp: Running in HTTP or stdio mode.

Expected Performance:

  • Response time of ~5-10 seconds for simple queries.

  • Ability to handle multiple consecutive requests without saturation.

Adapted and combined for local use with LLMs and gramps-mcp.

Hi @romjerome, yes - the current MCP works great for self-hosting, and the current setup already supports this; no need to do anything extra beyond following the instructions in the README. You just have to connect the MCP to whatever local chat tool you use, like any other MCP.

1 Like

@dsblank you can definitely do it. I have been working on that in another project, using Pydantic AI (Client - Pydantic AI). There are two options here: either you bring all the individual tools over to the agent, or you give the agent (within your chatbot) access to the MCP. I am more a fan of the second option, as I am not a big fan of internal app chatbots (as discussed before) and would like to keep using the MCP separately in combination with others. That way both the internal chatbot and the external one use the same tool set and are maintained together.
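
For the second option, a rough sketch based on pydantic-ai’s documented MCP client support is below; the exact class and argument names (MCPServerStdio, mcp_servers, run_mcp_servers, result.output) vary between pydantic-ai versions, so treat the details as assumptions rather than the real integration.

import asyncio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

# Hypothetical: attach the gramps-mcp server (started over stdio) to an agent.
server = MCPServerStdio(
    "uv", args=["run", "python", "-m", "src.gramps_mcp.server", "stdio"]
)
agent = Agent("openai:gpt-4o", mcp_servers=[server])

async def main():
    # The context manager starts the MCP server and exposes its tools to the agent.
    async with agent.run_mcp_servers():
        result = await agent.run("Find all ancestors of John Doe")
        print(result.output)  # older pydantic-ai versions expose result.data instead

asyncio.run(main())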

Great idea, I didn’t think about that. To be honest, I tried the gramplet route and got stuck very fast, so I hope someone else with more Gramps experience can tackle that.

I am all for it, though I will need a bit of help doing that. My only request is that it should keep working as a standalone MCP too, and not just within the internal chatbot, which will by design be limited in model choice and access to other tools/MCPs. As it is now, I can give my AI assistant access to a folder of sources I still need to clean up and just work through them, using other MCPs (file system, web search).

1 Like

I read in your README.md#license, CONTRIBUTING.md and LICENSE files that you are releasing Gramps MCP under the GNU Affero General Public License v3.0 (a copyright license with strong copyleft and SaaS provisions). (Thank you!)

Typically, code contributed to the Gramps project also has a copyright declaration embedded at the beginning of each Python module. @Nick-Hall Is this a needed change?
https://github.com/cabout-me/gramps-mcp/tree/main/src/gramps_mcp

It is good practice for each file to contain the name of the project, a license notice and a copyright declaration. The GPL FAQ provides a helpful answer. This is true for all licenses and all projects, not just for Gramps.
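
For example, a header along these lines at the top of each module would cover it (adapt the project name, year, and copyright holder; this follows the common pattern rather than any required wording):

#
# Gramps MCP - a Model Context Protocol server for the Gramps Web API
#
# Copyright (C) <year>  <copyright holder>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.
#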

1 Like