Managing/counting duplicates for reviews (best practice)
We would like to use Zotero to manage our systematic literature search. Having scanned a few forum posts on duplicates, we wondered whether experienced users (e.g. @bwiernik) might have any process advice or best-practice guides (or links) for how they manage systematic review duplicates, particularly how they keep count of removed duplicates for the search record. Do they do this in Zotero, or export to CSV/Excel?
Assuming you have separate (sub)collections for every search, you shouldn't have duplicates within those.
Then, once you merge all duplicates, the total number of items in your library (or top-level collection, though I think it makes sense to use a fresh group library for each SR) is the de-duplicated total, and, since merged items remain in their collections after de-duping, the sum of items across all collections is still the original number of search results.
You can fine-tune this further by having a collection per database with subcollections for each search. That also gives you both the original and de-duplicated totals per database.
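If you want to automate the bookkeeping, the per-collection counts can also be pulled through the Zotero web API, for example with the pyzotero client. This is only a minimal sketch under assumptions: it assumes a group library, and the library ID and API key are placeholders you would replace with your own.

```python
# Sketch: tally per-search/per-database counts with pyzotero
# (pip install pyzotero). LIBRARY_ID and API_KEY are placeholders.
from pyzotero import zotero

LIBRARY_ID = "1234567"    # your group library ID (placeholder)
API_KEY = "your-api-key"  # a Zotero API key with read access (placeholder)

zot = zotero.Zotero(LIBRARY_ID, "group", API_KEY)

# Original number of results per search: count the top-level items
# directly in each (sub)collection, one collection per search.
for coll in zot.everything(zot.collections()):
    items = zot.everything(zot.collection_items_top(coll["key"]))
    print(f'{coll["data"]["name"]}: {len(items)} items')

# De-duplicated total: the number of unique top-level items in the
# library after merging duplicates.
unique_items = zot.everything(zot.top())
print(f"Unique items after de-duplication: {len(unique_items)}")
```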
1) Import each set of search results into its own collection. The number of items in each collection gives the number of results returned by each search.
2) Merge the duplicates. The total number of items in the library root is the number of unique items.
3) Use colored tags to mark the processing status of items (e.g., rejected, unavailable, coded).
4) Label the reason for rejection with tags starting with *. The counts of items with each tag give the counts for each exclusion criterion.
From the counts listed above, you can fill in all of the numbers for your PRISMA flow diagram.
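If you would rather pull the exclusion counts out programmatically than read them off the tag selector, here is a small sketch along the same lines as the one above; the pyzotero credentials are again placeholders, and it simply tallies items per tag beginning with "*".

```python
# Sketch: count items per exclusion-criterion tag (tags starting with "*"),
# using placeholder pyzotero credentials.
from collections import Counter
from pyzotero import zotero

zot = zotero.Zotero("1234567", "group", "your-api-key")  # placeholders

exclusion_counts = Counter()
for item in zot.everything(zot.top()):
    for tag in item["data"].get("tags", []):
        if tag["tag"].startswith("*"):
            exclusion_counts[tag["tag"]] += 1

# One line per exclusion criterion, most frequent first.
for name, count in exclusion_counts.most_common():
    print(f"{name}: {count}")
```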
I’m happy to jump on a short call to talk about using Zotero for a systematic review.