The GEDCOM imported into Gramps with minor warnings, e.g. about deleting empty notes.
But just be aware that you will get a lot of fiction if you download that many generations…
I got the “Ynglingesaga” from Snorre back to year 275.
Here is a screenshot of some of the fun stuff… (and its sources…)
I gave this tool a try. Nothing but an error… This is on Fedora 35. It does the same thing for both root and a regular user, and it never asks for a username and password.
[bgee@main2 Downloads]$ getmyancestors
Traceback (most recent call last):
  File "/usr/local/bin/getmyancestors", line 5, in <module>
    from getmyancestors.getmyancestors import main
  File "/usr/local/lib/python3.10/site-packages/getmyancestors/__init__.py", line 3, in <module>
    from . import getmyancestors
  File "/usr/local/lib/python3.10/site-packages/getmyancestors/getmyancestors.py", line 13, in <module>
    import babelfish
  File "/usr/local/lib/python3.10/site-packages/babelfish/__init__.py", line 20, in <module>
    from .converters import (LanguageConverter, LanguageReverseConverter, LanguageEquivalenceConverter, CountryConverter
I have used GetMyAncestors to get a GEDCOM. For the tree itself it seems to work as expected. It did not have the source/citation hierarchy that I think GEDCOM 5.5 and Gramps use; as far as I could tell, there was a separate source for each citation.
For my musings when I first installed it see
Given the changeable nature of FamilySearch, I have written a makefile with a generic ged rule and a bunch of specific GEDCOM targets, with the variable changes that make sense for the people I started with. I had started down the path of adding it into the Gramps database from the command line, but as I understand Gramps merge, the person IDs must be the same. For the time being my results are in separate databases.
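For illustration, the same idea as my makefile in a small Python driver might look like the sketch below. The getmyancestors flags (-u, -p, -i, -a, -d, -o) are my reading of its README, so treat them as assumptions and check your installed version's --help; the FamilySearch IDs and generation counts are made up.

```python
import subprocess

# Hypothetical starting people: FamilySearch IDs and the generation
# counts that make sense for each of them (assumed values).
RUNS = {
    "grandfather.ged": {"fsid": "XXXX-XXX", "ascend": 6, "descend": 2},
    "immigrant_line.ged": {"fsid": "YYYY-YYY", "ascend": 4, "descend": 4},
}

def fetch(outfile, fsid, ascend, descend, username, password):
    # Flag names follow the getmyancestors README as I read it;
    # verify them with `getmyancestors --help` before relying on this.
    cmd = [
        "getmyancestors",
        "-u", username,
        "-p", password,
        "-i", fsid,
        "-a", str(ascend),
        "-d", str(descend),
        "-o", outfile,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for outfile, opts in RUNS.items():
        fetch(outfile, opts["fsid"], opts["ascend"], opts["descend"],
              "my_fs_username", "my_fs_password")
```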
Yeah, you are right - this is a topic that has nothing whatever to do with Gramps. I do not - and will not - have a github account, so I cannot report problems with this application there. Sounds like I am just out of luck.
I do have python3-babelfish version 0.5.5 installed.
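For what it's worth, a quick check like the sketch below (just a diagnostic idea, not a fix) shows which interpreter is running and which copy of babelfish it would load, which helps when a distro python3-babelfish and a pip install under /usr/local coexist:

```python
import importlib.util
import sys

# Show which interpreter is running and where it would load babelfish from;
# a distro python3-babelfish and a pip copy under /usr/local can shadow each other.
print("interpreter:", sys.executable)
spec = importlib.util.find_spec("babelfish")
print("babelfish found at:", spec.origin if spec else "not found")
```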
As a general observation, a tool like this may not be nearly as useful as it first seems. I am reminded of the last time I tried to do voice recognition to type an email. It took me more time to find and correct the mistakes than simply typing the message would have taken. It was not a win. Family Search is so full of errors and mistakes that I think any attempt to use an export of their data is doomed to have the same problem.
I think the tool can be of some help when you set a specific individual and limit the ancestors and descendants, but yes, I agree, I don't think it will be useful in many scenarios other than when you have found some individuals with a lot of sources on FS.
Another thing is that when you use a tool like this, you have to accept the format the tool gives you for the names of people, places and sources as they are set at FS, and I know that a lot of the names of both individuals and places there have been transcribed and translated totally wrong. So, as you say, the cleanup of the data can be a bigger job than entering it manually if you try to batch import a lot of people…
Just for grins, I used GetMyAncestors to download a small group of 14 people in three generations. There was exactly one person in the group who intersected my data. The download worked. That’s a pretty cool program! A lot of effort went into it.
But the result was far less than useful. When I examined the GEDCOM file, there were problems with ratty data. StoltHD mentioned place names. If I had imported this file, it would have created a bunch of duplicate place names. Finding and cleaning that up would take several hours. There were some place names that were just plain wrong, such as “Glamorgan, Wales, United Kingdom”. The United Kingdom did not exist in the early 17th century.
Some of the date formats were weird, such as “Abt 1610” which would have to be found and corrected. I would have had to visit every person’s profile anyway so I could get their Family Search ID and a URL pointing to the profile. Some of them have Find a Grave links that I would have had to manually edit to get into a form I like.
The sources and notes came through reasonably well, though not in a form that I find useful.
The one common person would be a duplicate that I would have to merge and then clean up. Doing that for exactly one person is not a big deal. If there had been more overlap, then more duplicate persons would have to be found, merged and edited.
I wound up not importing the GEDCOM file. I entered those 14 people using my regular workflow. It took about an hour.
One thought I had was to import the GEDCOM to a brand new Gramps database, clean it up there and then export to my real database via an XML file. The problem with that is place names. I would need an exact duplicate of the places data in the new database. The place ID numbers would have to match up, else when importing it would create all new place names. I don’t think there is a way to get an exact copy of the places data.
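For completeness, a rough sketch of that round trip using the Gramps command line is below (driven from Python only to keep one language in the thread). The -C/-i/-O/-e options are the documented Gramps CLI switches, but the tree and file names are made up, and this does nothing about the place-ID mismatch described above, which is why I abandoned the idea.

```python
import subprocess

SCRATCH_TREE = "FS scratch"          # hypothetical throwaway family tree
GEDCOM_IN = "getmyancestors.ged"     # hypothetical download from FamilySearch
XML_OUT = "cleaned.gramps"           # Gramps XML to import into the real tree

# Create a new family tree and import the GEDCOM into it.
subprocess.run(["gramps", "-C", SCRATCH_TREE, "-i", GEDCOM_IN], check=True)

# ...clean up places, events and sources in the scratch tree by hand...

# Export the scratch tree as Gramps XML for import into the real database.
subprocess.run(["gramps", "-O", SCRATCH_TREE, "-e", XML_OUT], check=True)
```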
A 2010 mailing list message from Benny Malengier points out that GEDCOM didn't support 'shared' places. So when they wrote the importer, they just automatically merged identical Places. (Although in another forum, a person using an Ancestry export tool from TNG discovered redundancies with a "Data is already there" note appended to the Place title, which would help TNG but defeat that intelligence in the Gramps parser.)
There are lots of other shared data elements that GEDCOM lists repeatedly. (Like Citations)
Does the GEDCOM import in Gramps automatically merge any & all identical records?
I did one test that generated 10398 people, and I got more than 40000 Notes imported in addition to all the empty notes that Gramps doesn't import, plus 11480 Citations and 30826 Events.
And Events are another problem; here is a random screenshot of some of the names used for the Events…
It can be a bit of a problem to figure out where some of those places are…
I didn't check all the different entries, but because of all these Citations and Sources without data in the correct fields, I think it will be extremely difficult to merge data and to figure out the actual number of duplicates.
I think it will actually be faster to add the data manually, but of course, if you want 800k to a million names in your database, this would be a good tool for that…
But just so it is clear: this is not the tool's fault, it is the extremely faulty data on FamilySearch.
OK, I have to reopen this old thread.
Because unfortunately I have absolutely no idea how to run getmyancestors.
Python - sure! But what do I need for that?
I installed the latest Python version and installed getmyancestors via the CMD in Windows with pip install.
What do I have to enter and where does the GUI start?
Please don’t let me die stupid…
Is there perhaps another program with which I can get getmyancestors to work?
If pip works well in Windows, meaning that it installs all needed packages, and adds the commands to your path, it should work as described here:
The truth is, however, that it does not work, because FS changed to a web-based logon, which does not seem to be supported by the current version of the program.
You can however import data from FS with programs like Ancestral Quest, Legacy, and RootsMagic, and with these, you can also add new data to FS, so they’re more powerful than getmyancestors.
These are all Windows programs, and Ancestral Quest and RootsMagic have free versions that are good enough for this purpose. Legacy 10 is all free, and they all have good things and bad things.
Thanks Enno, but I tested both of these programs. They work fine, but without sources. I read that with getmyancestors the data can be downloaded with the sources.
Or did I do something wrong with these programs?
You did nothing wrong, and I just did another check with Legacy 10, because it does add some sources. And those are worthless, because they just tell you that the person was copied from a specific profile on FamilySearch, which is redundant, because every downloaded person already gets a proper ID, which you can find in the attributes after import.