I spend a lot of time cleaning up places and using place details to distinguish locations within a city or county. This cleanup often involves splitting out place details and merging duplicate places.
I have one county in particular with a lot of place details (at least it used to), with numerous cemeteries, churches, and street addresses in some cases. I noticed a month or so ago that a couple of my more “popular” counties and cities had flat out disappeared. I had hundreds of events with these places on them, and those place fields were now all blank. I could tell because I regularly push my changes to Ancestry via TreeShare, and the places were still showing in Ancestry but not in RootsMagic on people whose events hadn’t been updated.
Panicking, I reloaded an earlier database from backup and redid my updates.
Now today I’m seeing the same thing all over again: place details without places and entire places disappearing. Is there some sort of limit on place details per place? I don’t see any reason there would be in the database.
Shouldn’t be. How many people do you have on one particular place detail/address? I have over 60 on one without any problems. Sorry, just checking: when you are looking at how many times a place was used, are you looking at the right panel where it says USED, and NOT at Details where it says zero?
If you are saying that the place is no longer in your database list at all, that should NOT happen. You might try running all the database tools, perhaps even drag and drop the database into a new one in case your database is corrupted.
And if you can, check a few of those places on the individuals’ records to see if they are still there.
Drag and drop isn’t an option; it loses too much information. What I can confirm is that places have definitely completely disappeared, but the place details seem to still be there.
When I look at the database for places with PlaceType = 2 whose MasterID (the parent place) is not in the database, I get nearly 400. I also see my “disappearing” places with high PlaceIDs, meaning they’ve been re-added.
SELECT * FROM PlaceTable WHERE MasterID <> 0 AND MasterID NOT IN (SELECT PlaceID FROM PlaceTable)
Also, looking at events using:
SELECT * FROM EventTable WHERE PlaceId <> 0 AND PlaceId NOT IN (SELECT PlaceId FROM PlaceTable)
I have nearly 12,000 events with places not in the database. Not good. My suspicion is they are using a 16-bit signed integer internally for PlaceID somewhere, and once the PlaceID passed 32767 (the signed 16-bit maximum) things started going south.
If I can figure out what date a UTCModDate of 45262.6000675232 corresponds to, I can determine where I need to restore to.
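If UTCModDate is the usual Delphi-style day count from 1899-12-30 (an assumption on my part, since julianday('1899-12-30') is 2415018.5), SQLite can turn it into something readable:

-- Most recently touched people, with UTCModDate converted to a UTC timestamp
SELECT PersonID,
       datetime(UTCModDate + 2415018.5) AS ModifiedUTC   -- the sum is treated as a Julian day number
FROM PersonTable
ORDER BY UTCModDate DESC
LIMIT 20;

If that epoch assumption is right, 45262.6000675232 works out to roughly 2023-12-02 14:24 UTC.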
I’m primarily pushing, but I am pulling document matches. My normal workflow is add local, match to Ancestry and/or FamilySearch and pull individual events, then push the results to Ancestry. Been doing that since TreeShare came out.
My use of SQLite is currently for reporting and research only, at least on my “live” database. In any case, the only changes I’ve been making through SQLite have been erasing the FamilySearch ID to get around a bug where people matched to a merged FamilySearch person are hung up until you forcibly unmatch them.
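For what it’s worth, that one write looks roughly like this. I’m assuming the FamilySearch match lives in LinkTable and that extSystem 1 is FamilySearch in my copy; anyone copying this should verify both against their own database (and a backup) first:

-- Clear the FamilySearch match for one person; 12345 is a placeholder RIN
DELETE FROM LinkTable
WHERE extSystem = 1    -- assumed to be the FamilySearch code; verify in your database
  AND rmID = 12345;    -- the person's RIN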
Well, plenty of anomalies. I just can’t figure out exactly what’s causing it. It’s obviously not in the database itself, and it’s not in TreeShare. From what I can figure out, one of the dialog boxes associated with either the split or merge of places or place details is using a 16-bit signed integer to manage PlaceID.
The pisser is how I get my database back to normal, and it’s looking like it’s going to be painful. Since I can’t assume RM will fix this quickly, from what I can see I’m going to need to:
a) Restore back to 2023-11-29
b) drag and drop the entire database into a new one to resequence all the PlaceIds
c) Drag and drop all the people added since 2023-11-29 from the latest database, which is going to be a real PITA unless I can figure out how to make a group out of everyone with a RIN greater than the highest person on 2023-11-29
d) Figure out how to copy the ConfigTable from the latest DB to the rebuilt one to get all my Publisher books back (rough sketch below)
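For d), the rough idea, assuming the ConfigTable can simply be swapped wholesale between two RM9 files of the same version (strictly a try-it-on-a-copy sketch):

-- Run against the rebuilt database; pulls the ConfigTable from the latest one
ATTACH DATABASE 'latest.rmtree' AS latest;   -- path is a placeholder
BEGIN;
DELETE FROM ConfigTable;
INSERT INTO ConfigTable SELECT * FROM latest.ConfigTable;
COMMIT;
DETACH DATABASE latest;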
Interesting note. Copying the ConfigTable from the latest database to the drag and drop version does restore the Book Structure, but the people are lost. I assume that’s due to the resequencing of the RIN numbers.
As a side note, the Books have an existing bug where many settings aren’t saved. This is a known bug, so I’m not concerned about the fonts, locations, etc., because those are lost just by switching to another chapter.
Re-sequencing the PlaceIDs in the PlaceTable and EventTable in a copy of the good Nov db would be my first step. I’m not sure what one could do with all the work done subsequently. All surviving PlaceIDs in it would need to be re-sequenced identically to those from Nov. to avoid the creation of duplicate places. And all those with PlaceIDs over 32760 (or 32768) flagged somehow visibly in the RM UI to be easily found for special attention. Then comes the really tricky part and a gamble. Compare the two PlaceTables and log and copy the records from the re-sequenced Nov that are missing (or differ in PlaceID) from the re-sequenced recent db. At this point, I think it’s back into the RM UI to look for anomalies and re-merge duplicates.
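As a sketch of the comparison step, with the November backup attached alongside the recent file (the file name is a placeholder, and these are read-only queries; deciding what to copy back is still a judgement call):

ATTACH DATABASE 'RM-2023-11-29.rmtree' AS nov;   -- file name is a placeholder

-- Places present in the November backup but missing from the current database
SELECT PlaceID, PlaceType, Name, MasterID
FROM nov.PlaceTable
WHERE PlaceID NOT IN (SELECT PlaceID FROM PlaceTable);

-- Same name surviving under a different PlaceID (duplicate/re-added candidates);
-- COLLATE NOCASE sidesteps RM's custom RMNOCASE collation, which a plain SQLite shell doesn't have
SELECT n.PlaceID AS NovID, c.PlaceID AS CurID, n.Name
FROM nov.PlaceTable n
JOIN PlaceTable c ON c.Name = n.Name COLLATE NOCASE AND c.PlaceType = n.PlaceType
WHERE c.PlaceID <> n.PlaceID;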
Are you sure you’re not just opening the wrong database? If you double-click a file outside of RM, it won’t necessarily open that one, especially if you have it set to open the last file closed. Make sure to only open the file from within RM. Modifying the database outside of RM with SQLite is at your own risk.
I don’t believe I’ll be able to save any of the work on places via automation, but that’s not as much of a crisis. The places in my latest DB are “clean” so I may have to do some merging but that’s not the end of the world.
Rebuilding the Books will be a bit of a PITA, but I know which names are supposed to be in each chapter, so it’s just a matter of going through each book and updating the chapters. There are maybe 100 chapters to do, so not a good time. It sure would be nice if RM would allow an export/import of publications.
The hard part is creating the group for the drag and drop, which I assume I’ll need to do via SQLite, but I’ll need to study the tables more to figure out how.
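For the selection itself I’m thinking something along these lines; the cutoff RIN and the UTCModDate epoch are my own assumptions, and actually turning the result into an RM group is the part I still need to study:

-- People added or modified since the 2023-11-29 backup
SELECT PersonID
FROM PersonTable
WHERE PersonID > 29182                                   -- highest RIN in the Nov backup
   OR UTCModDate > julianday('2023-11-29') - 2415018.5;  -- assumes the 1899-12-30 epoch for UTCModDate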
I created a bug report to RM so I’ll wait and see what they have to say.
If you follow my procedure, no drag’n’drop is involved so the RINs remain constant. The faulty database with your complete book is the one being updated so the issues will be the remaining orphaned Place Details and a number of unused Places and duplicate Places.
As for @rzamor1’s warning of risk, it seems you’ve already suffered from the risk that RM9 may have a bug for which RM Inc accepts no greater liability than if you diddled your database with sqlite.
Positive Renee. I didn’t fall out of the tree yesterday. I’ve seen this issue more than once in the last few months which is why I’m considering drastic action.
As I sit, I’m basically at a dead stop on my work until I come up with a solution, because I can’t afford to lose more data. As I see it, the only time I need to use SQLite for updating is to salvage the Books. Give me a way to do that via the RM UI and I’ll happily use it, but I cannot lose all that effort, and SQLite looks like the only way for now.
Just wanted to mention that I had also seen a number of places in my database of type 2 but with MasterID of 0 or some large number that didn’t point to a type 0 place.
It was my impression that it may have been a RM v8 bug issue. Haven’t seen them recently.
I just did a check and found a bunch of type 0 places with MasterIDs that aren’t 0.
I’m going to have to compare to backups.
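For reference, these are the read-only checks I’m running against the current file and the backups:

-- Place details (type 2) whose parent place is missing or unset
SELECT PlaceID, Name, MasterID
FROM PlaceTable
WHERE PlaceType = 2
  AND (MasterID = 0
       OR MasterID NOT IN (SELECT PlaceID FROM PlaceTable WHERE PlaceType = 0));

-- Top-level places (type 0) that have a MasterID when they shouldn't
SELECT PlaceID, Name, MasterID
FROM PlaceTable
WHERE PlaceType = 0
  AND MasterID <> 0;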
I attempted to replicate your problem on a new database but failed to do so; however, I ran into something else. I started by adding a place with PlaceID = 32768 using SQLite, but the Places pane in List view failed to show it, nor did it have any columns, yet it had, unexpectedly given the lack of content, both horizontal and vertical scroll bars; the Map view functioned normally. The Place List did not present normally until I revised the PlaceID to 32760 and (maybe this was coincidental) used the Place for an event (it had been used previously under the higher PlaceID to no avail). Other PlaceIDs > 32768 also presented normally in the Place List once 32760 was created.
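For anyone wanting to repeat the experiment, the test row was added with something like the statement below. Two caveats: a plain SQLite shell doesn’t know RM’s custom RMNOCASE collation, so you need a stand-in registered (or a tool that fakes it) before the insert will get past the place-name index, and I’m only supplying the columns I’m reasonably sure of:

-- Minimal test place at the suspect ID; remaining columns left at their defaults
INSERT INTO PlaceTable (PlaceID, PlaceType, Name, MasterID)
VALUES (32768, 0, 'Test Place 32768', 0);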
Once the Place List was sorted, I experimented with merging Places and Place Details and those proceeded as they should. I have not tried splitting.
Edit: I’ve not been able to trigger the problem with splitting either, so it’s not looking like a programmatic or systemic error for a small database. Maybe there is something with timing on a large database or a sluggish computer; we’ve seen the incidence of someone appearing as the spouse in place of every Unknown Spouse when the Add Person dialogue is disrupted before the RIN is assigned.
The only way I see it happen is starting with my “clean” db from 2023-11-29, adding individuals, then adding facts via hints, then cleaning up the places by splitting out cemeteries, churches, etc., and merging the resultant duplicates. This seems to go south once the ones I’m operating on go past 32767. Now there are a total of 12,000 places and place names, so I assume that has something to do with it. It’s unlikely to be a sluggish computer, as I have a MacBook Pro M2 with a terabyte SSD and 32GB RAM.
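A quick read-only way to see what’s sitting in that range, for anyone following along (treating 32767 as the boundary is still just my theory):

-- Place records above the signed 16-bit limit
SELECT PlaceID, PlaceType, Name, MasterID
FROM PlaceTable
WHERE PlaceID > 32767
ORDER BY PlaceID;

-- How many events reference each of them
SELECT PlaceID, COUNT(*) AS Events
FROM EventTable
WHERE PlaceID > 32767
GROUP BY PlaceID
ORDER BY Events DESC;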
My suspicions remain with the UI elements inside RM mainly because I’ve seen similar issues with UI elements in the past. Not familiar with the internals of Delphi but in the end game the interfaces to the UI elements are similar.
In any case I’ve done this:
A) Restored back to 2023-11-29
B) Dragged and dropped entire restored database to new database
C) Created group in latest db using RIN > 29182 and date changed > 2023-11-29 using UI
D) Exported group to GEDCOM (can’t use groups in Drag n Drop)
E) Imported GEDCOM into new restored database
F) Merged duplicates
This has all been done via the RM UI to avoid accusations that manipulating the DB outside of RM caused the problem. Everything looks good, and even Ancestry is still connected, but I have about three dozen individuals showing as being in Ancestry but not RM, which isn’t true. Still working on that.
Once I get everything fixed I’ll submit a backup of that DB to RM. I’ve already given them the corrupted DB, and the restored one that was “clean”.