How can you use RootsMagic logic to exclude specific record types, in each GEDCOM hierarchy, to create a cut-down GEDCOM export file for loading into (for example) ancestry.com?
GEDCOM uses record hierarchies of levels 0, 1, 2, 3, etc., with different tag names (e.g. NAME, SOUR[CE], EVEN[T], etc.).
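For illustration, a fragment like the following (values made up) shows how the level numbers nest — each line's subordinates carry a higher level number until a line at the same or lower level ends the subtree:

```
0 @I1@ INDI
1 NAME John /Smith/
1 BIRT
2 DATE 12 MAR 1881
2 SOUR @S1@
3 PAGE Folio 23, Page 7
```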
I also have some Custom record categories that I have created.
Is there a programme/utility that allows you to deselect/select:
all records at Level = 2 with tag SOUR, together with everything subordinate to them - to remove all SOURCE citations;
all records with bespoke names (e.g. I have one for “1939 Register”) - to distinguish these from CENSUS.
I have used Microsoft Excel logic to filter out unnecessary records - but it is cumbersome.
Any guidance would be appreciated.
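As a rough illustration of the kind of filtering being asked about, here is a minimal sketch (not an existing utility) that drops excluded subtrees from a GEDCOM file. The `(2, "SOUR")` rule and the `_1939REGISTER` custom tag are hypothetical examples, and the parsing assumes well-formed `LEVEL [XREF] TAG [VALUE]` lines:

```python
# Sketch of a GEDCOM record filter. EXCLUDE holds (level, tag) pairs;
# matching lines and everything subordinate to them are dropped.
EXCLUDE = {(2, "SOUR")}           # e.g. strip level-2 source citations
EXCLUDE_TAGS = {"_1939REGISTER"}  # hypothetical custom tag, at any level

def filter_gedcom(lines):
    out = []
    skip_level = None  # level of the subtree currently being skipped
    for line in lines:
        parts = line.split(" ", 2)
        if not parts or not parts[0].isdigit():
            continue  # malformed line; real code might warn instead
        level = int(parts[0])
        # the tag is the second field, unless that field is an @XREF@ id
        if parts[1].startswith("@"):
            tag = parts[2].split(" ", 1)[0] if len(parts) > 2 else ""
        else:
            tag = parts[1]
        if skip_level is not None:
            if level > skip_level:
                continue       # still inside the excluded subtree
            skip_level = None  # back out of the subtree
        if (level, tag) in EXCLUDE or tag in EXCLUDE_TAGS:
            skip_level = level
            continue
        out.append(line)
    return out
```

Applied to the individual record above, this would keep NAME, BIRT and DATE but remove the `2 SOUR` line and its subordinate `3 PAGE` line. The same shape of loop could drive any of the select/deselect rules from a user interface.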
RootsMagic has settings for each fact type that include control over where it is output, and GEDCOM is one of the controlled outputs. You can go through the Fact Type list one fact type at a time to set it. That’s laborious, and it also affects what is transferred between RM databases by drag’n’drop, so the settings need to be reset for normal use and recalled each time another such export is wanted. Here are some sqlite scripts that can facilitate recall and reset:
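The save/restore idea can also be driven from Python’s sqlite3 module. This is only a sketch: it assumes the RootsMagic database has a `FactTypeTable` whose `Flags` column encodes the per-fact-type output switches — verify the table and column names against your own database (and work on a copy) before trying anything like this:

```python
# Sketch: snapshot and restore fact-type Flags around a bulk edit.
# FactTypeTable / FactTypeID / Flags are assumed schema names.
import sqlite3

def save_flags(db_path):
    """Copy the current Flags into a backup table before a bulk edit."""
    con = sqlite3.connect(db_path)
    with con:
        con.execute("DROP TABLE IF EXISTS FactFlagsBackup")
        con.execute("CREATE TABLE FactFlagsBackup AS "
                    "SELECT FactTypeID, Flags FROM FactTypeTable")
    con.close()

def restore_flags(db_path):
    """Put the saved Flags back after the cut-down export is done."""
    con = sqlite3.connect(db_path)
    with con:
        con.execute("UPDATE FactTypeTable SET Flags = "
                    "(SELECT Flags FROM FactFlagsBackup "
                    " WHERE FactFlagsBackup.FactTypeID = FactTypeTable.FactTypeID)")
    con.close()
```

Between `save_flags` and `restore_flags` you would set the export flags however the cut-down GEDCOM requires, run the export, then restore.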
Thank you Tom
It’s been a few years since I was programming in SQL but I can see the logic.
I may give these a try.
I was thinking more of a user-interfaced app/utility/add-on that would let you select/deselect the various values and parameters and then run the logic to extract the customised GEDCOM. If there isn’t one already, there’s surely a market for a generic one to facilitate this for:
extracting for a customised ancestry.com load;
extracting a configured/redacted/sanitised version for family members/genealogists, so that any personal notes/comments are removed;
extracting to create a sub-tree to delineate one portion of a tree from the whole for ease of management;
identifying which tree members contain specific GEDCOM customised (or not) record types/values;
identifying records with spelling/typographical anomalies which may match specific parameters;
creating a customised global record re-alignment to configure data content to conform to a specific data-string format (e.g. Census reference data Folio/Page/Schedule etc.);
identifying tree members whose earlier record content is now deficient relative to the richer data values/formats used in more recent records.
… to name but a few that spring to mind…