GetMyAncestors: a standalone tool to export FamilySearch ancestors

It took a lot of iterations before the Gramps install and use became usable by mere mortals. We’ll have to be prepared to endure a bit of torture to evolve the install & access to this tool to such a point.

I’d expect a couple of weeks of experimenting… spread out to allow for getting your fingers burnt and a bit of recovery time.

Remember, never import a foreign format into your main Tree. Import into a fresh, blank Tree and defuse all the IEDs. It is easier to replace a “blowed up” sapling tree than a mature one.

Then import the defanged backup (in .gramps or .gpkg format) into your primary Tree.

I agree with all of that, but I cannot get this standalone program to even open, so no worries about messing up my tree. I may be able to figure it out eventually.

I’ve put an inquiry in their issue tracker too. I may do some experimenting with installing & command-line access under Windoze.

But I cannot afford the bandwidth to do lots of downloading tests. I’d hit my data-cap. (Yikes! I’ve already used 3/4 of this cycle’s allocation while only 1/2 thru!)


You put the entire folder with all the GetMyAncestors files in the Python39\Scripts folder.

Then you open the Windows PowerShell window and run pip install getmyancestors. This will create the application files in the Scripts folder.

The ReadMe file has all the command lines that you will actually run from PowerShell.

I have not taken it past creating all the application files; I have not experimented with actually downloading information yet.
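
For reference, a minimal sketch of that install, assuming pip for your Python 3.9 is on the PATH (run in PowerShell or a cmd window; the pip command is the one from the project page, while the --help call is just a sanity check I would add, not something from the thread):

pip install getmyancestors
getmyancestors --help

If the second command prints the available options, the console scripts were placed in the Scripts folder and are reachable from the shell.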


The other day I mentioned the following:

It would be great if GetMyAncestors could be made part of a single addon, with a drop-down list for each of the websites you want to import from directly into Gramps!


Did a test.

You can take the fstogedcom file and make a shortcut to it on your desktop (or anywhere). Running this file opens a GUI window.

It will ask you to log into FamilySearch. The program adds you to the download list, which you will want to delete. Then you add the center person by their FamilySearch ID and set how many generations and what to download.

Within 1.5 minutes I had a GEDCOM with 790 people as a test.

DO NOT import directly into your database. It would probably be best to download a single family or one generation up or down. It would take a lot of cleaning to bring it into your main tree.
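
For anyone who prefers the command line to the fstogedcom GUI, the same kind of limited download should look roughly like this; the flag names are my reading of the project README and the ID is a placeholder, so check getmyancestors --help before trusting them:

getmyancestors -u your_username -p your_password -i KWXX-XXX -a 1 -d 1 -o one_family.ged

Here -i is the center person’s FamilySearch ID, -a and -d are the number of ancestor and descendant generations, and -o is the output GEDCOM; keeping -a and -d at 1 matches the advice above about downloading only a single family.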


THANK YOU!!! working now!


You shouldn’t need to know where the folder is when running Python on Windows…

You don’t need to download any folders; just run pip (or pip3) install from a PowerShell or cmd window…

Installation
The easiest way to install getmyancestors is to use pip:

pip install getmyancestors
from pypi.org: https://pypi.org/project/getmyancestors/

If you run Python 3.10, you might need to upgrade babelfish to 0.6.0 to get it running…
You will get a warning, but it seems to work.
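
If you hit that on Python 3.10, the upgrade itself is a single pip command (the ==0.6.0 pin just mirrors the version mentioned above; pip may print the dependency warning referred to, presumably because getmyancestors still declares the older babelfish):

pip install --upgrade babelfish==0.6.0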


I am running a “stupid” download of data now…


I set 30 generations for both settings, and used myself as the starting individual (just for the fun of it).
I will post the statistics when it’s finished…


  • “I think those of you who want to try this should be careful with the settings, especially the Descendant generations… It can be a lot of “items” to download for those of you who have a lot of relations…”

EDIT (after 400-500 minutes):

It did not complete; it stopped with a Python “read chunk” error… but here are the statistics from when it stopped (excluding the timer, which still runs)…


Cross-posting from the XML thread:

I tried with 40 generations of ancestors, no descendants, and added family information.
The GEDCOM file was 27 MB…

The GEDCOM imported into Gramps with minor warnings, e.g. deleting empty notes etc.

But just be aware that you will get a lot of fiction if you download this many generations…
I got the “Ynglingesaga” from Snorre back to the year 275.

Here is a screenshot of some of the fun stuff… (and its sources…)


I gave this tool a try. Nothing but an error… This is on Fedora 35. It does the same thing for both root and a regular user. It never asks for a username and password.

[bgee@main2 Downloads]$ getmyancestors
Traceback (most recent call last):
  File "/usr/local/bin/getmyancestors", line 5, in <module>
    from getmyancestors.getmyancestors import main
  File "/usr/local/lib/python3.10/site-packages/getmyancestors/__init__.py", line 3, in <module>
    from . import getmyancestors
  File "/usr/local/lib/python3.10/site-packages/getmyancestors/getmyancestors.py", line 13, in <module>
    import babelfish
  File "/usr/local/lib/python3.10/site-packages/babelfish/__init__.py", line 20, in <module>
    from .converters import (LanguageConverter, LanguageReverseConverter, LanguageEquivalenceConverter, CountryConverter

Have you checked your version of babelfish?

This is either a getmyancestors problem or a babelfish problem…

I had to reinstall multiple times to get it to work on Windows, never tried on Linux…
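
A quick, generic way to see which babelfish that interpreter is actually using (nothing getmyancestors-specific here, just standard pip):

python3 -m pip show babelfish

If it reports a 0.5.x version under Python 3.10, the babelfish upgrade mentioned earlier in the thread is the likely fix.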

I have used GetMyAncestors to get a GEDCOM. For the tree itself it seems to work as expected. It did not have the source/citation hierarchy that I think in, and that I believe GEDCOM 5.5 and Gramps use; instead there was a source for each citation.
For my musings when I first installed it see

Given the changeable nature of FamilySearch, I have written a makefile with a generic ged rule and a bunch of specific GEDCOM targets, with the variable changes that make sense based on the people I start with. I had started down the path of adding it into the Gramps db from the command line; however, as I understand Gramps merge, the person IDs must be the same. For the time being my results are in separate databases.
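
As a rough illustration of that generic-rule idea, a plain shell loop does the same thing as the makefile: one GEDCOM per starting person, with only the FamilySearch ID varying. The flag names are my assumption from the getmyancestors README, and the IDs and credentials are placeholders:

FS_USER=your_username
FS_PASS=your_password
for fsid in KWXX-XXX LZXX-XXX; do
    getmyancestors -u "$FS_USER" -p "$FS_PASS" -i "$fsid" -a 4 -d 1 -o "$fsid.ged"
done

Each run writes its own GEDCOM, which fits keeping the results in separate databases until the person-ID matching question for Gramps merges is sorted out.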

Yeah, you are right - this is a topic that has nothing whatever to do with Gramps. I do not - and will not - have a github account, so I cannot report problems with this application there. Sounds like I am just out of luck.

I do have python3-babelfish version 0.5.5 installed.

As a general observation, a tool like this may not be nearly as useful as it first seems. I am reminded of the last time I tried to do voice recognition to type an email. It took me more time to find and correct the mistakes than simply typing the message would have taken. It was not a win. Family Search is so full of errors and mistakes that I think any attempt to use an export of their data is doomed to have the same problem.

I updated to babelfish 0.6 and at least the GUI version still runs, even though you get a Python dependency warning after upgrading it.

I think the tool can be of some help when you set a specific individual and limit the ancestors and descendants, but yes, I agree, I don’t think it will be useful for many scenarios other than if you have found some individuals with a lot of sources on FS.

Another thing is that when you use a tool like this, you need to accept the format given by the tool for the names of people, places and sources as set at FS, and I know that a lot of the names of both individuals and places there have been transcribed and translated totally wrong. So, as you say, the cleanup of the data can be a larger job than entering it manually if you try to batch import a lot of people…

Just for grins, I used GetMyAncestors to download a small group of 14 people in three generations. There was exactly one person in the group who intersected my data. The download worked. That’s a pretty cool program! A lot of effort went into it.

But the result was far less than useful. When I examined the GEDCOM file, there were problems with ratty data. StoltHD mentioned place names. If I had imported this file, it would have created a bunch of duplicate place names. Finding and cleaning that up would take several hours. There were some place names that were just plain wrong, such as “Glamorgan, Wales, United Kingdom”. The United Kingdom did not exist in the early 17th century.

Some of the date formats were weird, such as “Abt 1610” which would have to be found and corrected. I would have had to visit every person’s profile anyway so I could get their Family Search ID and a URL pointing to the profile. Some of them have Find a Grave links that I would have had to manually edit to get into a form I like.

The sources and notes came through reasonably well, though not in a form that I find useful.

The one common person would be a duplicate that I would have to merge and then clean up. Doing that for exactly one person is not a big deal. If there had been more overlap, then more duplicate persons would have to be found, merged and edited.

I wound up not importing the GEDCOM file. I entered those 14 people using my regular workflow. It took about an hour.

One thought I had was to import the GEDCOM to a brand new Gramps database, clean it up there and then export to my real database via an XML file. The problem with that is place names. I would need an exact duplicate of the places data in the new database. The place ID numbers would have to match up, else when importing it would create all new place names. I don’t think there is a way to get an exact copy of the places data.


A 2010 mailing-list message from Benny Malengier points out that GEDCOM didn’t support ‘Shared’ places. So when they wrote the importer, they just automatically merged identical Places. (Although in someone else’s forum, a person using an Ancestry export tool from TNG discovered redundancies with a ***Data is already there*** appended to the Place title, which would help TNG but defeat that intelligence in the Gramps parser.)

There are lots of other shared data elements that GEDCOM lists repeatedly. (Like Citations)

Does the GEDCOM import in Gramps automatically merge any & all identical records?

This is another problem with the imported data from FS.

Citations: the Citation is added under the “Author” field:

In addition, there is an extreme number of Notes, and a lot of Sources without a Title.


I did one test that generated 10398 people, and I got more than 40000 Notes imported (in addition to all the empty notes that Gramps doesn’t import), plus 11480 Citations and 30826 Events.

And Events are another problem; here is a random screenshot of some of the names used for the Events…

This is a random screenshot of some of the 6862 places:

It can be a problem to figure out where some of those places are…


I didn’t check all the different entries, but because of all these Citations and Sources without “data” in the correct fields, I think it will be extremely difficult to merge data and to figure out the actual number of duplicates.


I think it will actually be faster to add the data manually, but of course, if you want 800k to a million names in your database, this would be a good tool for that…


But just so it is clear, this is not the tool’s fault; it is the extremely faulty data on FamilySearch.
