Large collections and bulk de-duplication

One of our group libraries contains more than 60k records, a large portion of which are duplicates. Removing them one by one through the Duplicate Items view would be extremely costly in both time and effort, but as far as I can tell, there are no built-in or third-party solutions for bulk deduplication in Zotero.

1. We've had the idea of deduplicating the SQLite file directly. Deduplication should be possible with SQL, pandas, or other tools; however, the Zotero database file contains multiple tables, so I suppose deduplicating just one of them would corrupt the whole database. I doubt this idea is actually viable, but perhaps someone has tried it?

2. Are there any new solutions we haven't heard of?

3. On a side note: is there any 'maximum intended' collection size for the best performance of Zotero? E.g. the aforementioned 60k library causes Zotero to freeze constantly on Windows; Ubuntu works slightly better for some reason. My laptop configuration is fairly modest (http://i65.tinypic.com/2a4ogok.png), but it would still be good to know what resources a collection like this requires.
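(Editor's note: a read-only duplicate scan over a *copy* of the database is easy to prototype without risking corruption. The sketch below is a minimal illustration using a deliberately simplified one-table schema; the real zotero.sqlite spreads field values across several tables (`items`, `itemData`, `itemDataValues`, `fields`), and should only ever be inspected on a copy with Zotero closed. The script only groups items by normalized title to surface candidate clusters; it does not modify anything.)

```python
import sqlite3

def find_duplicate_clusters(conn):
    """Group items by normalized title; return clusters with more than one item.

    Assumes a simplified schema items(itemID, title) for illustration --
    the real zotero.sqlite spreads fields over several joined tables.
    """
    rows = conn.execute("SELECT itemID, title FROM items").fetchall()
    clusters = {}
    for item_id, title in rows:
        key = " ".join(title.lower().split())  # normalize case and whitespace
        clusters.setdefault(key, []).append(item_id)
    return {k: v for k, v in clusters.items() if len(v) > 1}

# Toy data standing in for a copy of the database (never the live file).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (itemID INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(1, "Deep Learning"), (2, "deep  learning"), (3, "Zotero")])
print(find_duplicate_clusters(conn))  # {'deep learning': [1, 2]}
```

Listing clusters this way is safe; actually deleting or merging rows by hand is where the multi-table schema makes corruption likely.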
  • There's not currently anything better than the Duplicate Items view.

    This is more of a conceptual problem than a technical one. It's not really clear what any sort of automatic deduplication process would do — duplicate items can have both different versions of fields and different versions of files (e.g., from PDF watermarks or separate snapshots of the same webpage), neither of which can necessarily be resolved in any automated manner.

    If we could come up with a UI for it, we could at least offer to automatically merge the subset of items that can be resolved automatically. We could also merge exact file matches when merging parent items, but even there, there's the attachment title, filename, and embedded note, which could all differ.

    Out of curiosity, how did you end up with so many duplicates?
  • There's no need to go as deep as you describe, at least for starters. Here's how I imagined it: Zotero could look for similar items, as it currently does in the 'Duplicate Items' view. If only one of them has an attachment, that one stays; if all of them have attachments, some additional criterion is applied, e.g. the newest item stays, or the one with the fullest metadata, etc.

    Being able to do something like this would be great - perhaps not even through the UI, but with some tweaks, like editing the SQLite file directly (which, I presume, would not work). I suppose our issue is unusual indeed, but others might also find a use for this kind of 'rough' deduplication.

    As for the number of duplicates - I guess the main reason is that we used that group as a database for our text corpora. And since we collected it collaboratively, and from sources not really suited to that, it turned out to be quite a mess.
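(Editor's note: the selection rule proposed above - prefer the item with an attachment, then the fullest metadata, then the newest - can be sketched as a plain scoring function. The item dicts below are a hypothetical shape for illustration; nothing here uses Zotero's actual API.)

```python
from datetime import date

def pick_master(items):
    """Choose which duplicate to keep, per the heuristic above:
    1) prefer items with an attachment, 2) then fuller metadata,
    3) then the newest item. Items are plain dicts (hypothetical shape)."""
    def score(item):
        filled = sum(1 for v in item["fields"].values() if v)
        # Tuples compare left to right, so attachment trumps metadata,
        # which trumps recency.
        return (item["has_attachment"], filled, item["date_added"])
    return max(items, key=score)

dupes = [
    {"id": 1, "has_attachment": False, "date_added": date(2020, 1, 5),
     "fields": {"title": "X", "DOI": "10.1/x", "abstract": "..."}},
    {"id": 2, "has_attachment": True, "date_added": date(2019, 3, 2),
     "fields": {"title": "X", "DOI": "", "abstract": ""}},
]
print(pick_master(dupes)["id"])  # 2: the attachment outranks fuller metadata
```

The tuple ordering makes the priority of the criteria explicit and easy to rearrange if a group prefers, say, metadata completeness over attachments.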
  • And Dan, just conceptually: for most of the work we do these days (we're a small Dutch think tank), we try to collect as big a corpus as we possibly can in Zotero, and we then text-mine it to identify the main topics, how they change over time, etc. - just to get a bird's-eye view of what 'academia' has (or has not) been working on with respect to our research topics. We gather the items from various sources (the usual academic aggregators, Publish or Perish, Unpaywall, etc.) and get them into Zotero. But as Yevhen indicated, that creates lots of duplicates.
    We have a Ukrainian developer looking at ways to dedupe in SQLite without 'breaking' it. But if you could already implement what you suggested - at least automatically merging the 'obvious' ones (in libraries, across collections) - that would already be a great step forward.
  • Hi @sdspieg,

    Did you get anywhere with this? We're looking at citation trees (ping @dlesieur), which means we have to merge periodically (since Zotero doesn't have a function to check for duplicates on import).

    We've also implemented a non-destructive merge, where, on merge, the item data of both items is written to a note. So if something goes wrong, you can still find the missing metadata.

    Björn
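(Editor's note: a non-destructive merge like the one Björn describes can be approximated outside Zotero: keep the master's fields, fill gaps from the duplicate, and serialize the duplicate's full metadata into a note so nothing is silently lost. Plain dicts with hypothetical shapes below; Zotero's real merge operates on its own item objects.)

```python
import json

def merge_nondestructive(master, dup):
    """Merge two item dicts (hypothetical shapes): the master's non-empty
    fields win, missing ones are filled from the duplicate, and the
    duplicate's full metadata is preserved verbatim in a backup note."""
    merged = dict(master)
    for key, value in dup.items():
        if not merged.get(key):  # fill only empty/missing fields
            merged[key] = value
    merged["notes"] = master.get("notes", []) + [
        "Merged duplicate; original data: " + json.dumps(dup, sort_keys=True)
    ]
    return merged

m = {"title": "Paper A", "DOI": "", "notes": []}
d = {"title": "Paper A", "DOI": "10.1/a"}
out = merge_nondestructive(m, d)
print(out["DOI"])  # '10.1/a', filled in from the duplicate
```

Keeping the full JSON of the losing item in a note is crude but reversible, which is the point: any field the merge heuristic got wrong can be recovered later.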
  • While I understand that it may be difficult to automate the decision of which duplicate to treat as the master, it remains extremely unsatisfying to have to do this manually.
    I imported a Mendeley library into Zotero and now have around 2,000 duplicates.
    It seems an option to 'treat the newer version as master' would be perfectly helpful, since I am not making any more educated an assessment than that anyway.

    More likely, I am about to delete my entire Zotero library and just re-import the Mendeley one into an empty library. Not exactly satisfying either.
  • I'm pretty sure there's a script on the forums that just auto-merges all duplicates in the Duplicate Items view (I'd have to search for it, too). It uses the local JavaScript API (https://www.zotero.org/support/dev/client_coding/javascript_api), so it should be safe, apart from the fact that you could merge false positives.
  • I really hope to see a solution here. Like @Thomas, we are dealing with multiple imported libraries for our work. Manually merging each duplicate is not reasonable for us.
  • I don't have the link, but there's an add-on that you should find mentioned in some of the newer threads on this
  • Has anyone figured out a way to do this yet (even a script)? I tried an add-on from two years ago, and it seems to be outdated now. I'm facing the same problems as the users above, with hundreds of duplicates.
  • Look for the Zoplicate add-on. It works great.
  • But is there really no option to automatically pick the most detailed item AND the largest attachment (even if that item has fewer or more incomplete fields)?
  • Well, I have been there too, with Zotero 6, doing this with 73k items, four hours a day, for several months. I had to do it for my Integrative Review (similar to a Systematic Review), but I survived it... In other words, it is painstakingly doable...