Best way to import 30,000 references

Dear Zotero-experts,

I have a library of around 30,000 entries. These are plain entries with authors, a title, a journal, and other minor text fields (no attachments such as PDFs or other complicated stuff).
I have seen other posts about import speed and have already disabled syncing and automatic tag creation. I would like to know whether the file format is an important factor for import speed. I currently created a JSON file and the import takes quite long (although it's kind of OK). Would you suggest choosing another format such as BibTeX, XML, RDF, or RIS to speed up the import, or does the format not really matter?

Thanks for any advice!
Seb
  • I don't think the import format matters greatly, no. JSON seems like a good choice.
  • Given that you're a relatively new user, it may be worth pointing out that bulk export-then-import is often less-than-optimal. Data is not completely round-tripped and, in particular, items get new IDs.

    Without knowing what you're trying to do, the general guidance is to use individual or group syncing if you're getting data into a second deployment of Zotero. If you're unable to do this for one reason or another, you may copy the sqlite file to another computer.

    If you're trying to process the data somehow, it is often better to do this with tools that use Zotero's API (such as PyZotero).
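To make the PyZotero suggestion concrete, here is a minimal, hypothetical sketch of writing items to a Zotero library through the web API rather than via file import. The `LIBRARY_ID` and `API_KEY` values are placeholders you would fill in yourself, and the sample item is illustrative; the batching reflects the Zotero write API's limit of 50 items per request.

```python
# Hypothetical sketch: pushing items to a Zotero library with PyZotero
# (pip install pyzotero). LIBRARY_ID and API_KEY are placeholders.
LIBRARY_ID = ""   # your numeric user or group ID (placeholder)
API_KEY = ""      # a key from zotero.org/settings/keys (placeholder)

def chunks(items, size=50):
    """Yield batches of `size` items; the Zotero write API accepts
    at most 50 items per request."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# A minimal item in Zotero's API JSON shape (text fields only, no attachments,
# matching the library described in the question).
sample = [{
    "itemType": "journalArticle",
    "title": "Example title",
    "creators": [{"creatorType": "author",
                  "firstName": "A.", "lastName": "Author"}],
    "publicationTitle": "Example Journal",
}] * 120

batches = list(chunks(sample))
print(len(batches), [len(b) for b in batches])

if LIBRARY_ID and API_KEY:
    from pyzotero import zotero  # only needed for the actual upload
    zot = zotero.Zotero(LIBRARY_ID, "user", API_KEY)
    for batch in chunks(sample):
        zot.create_items(batch)  # writes up to 50 items per call
```

The same client object also lets you read the data back (`zot.top()`, `zot.everything(...)`) for post-processing, which is what makes the API route preferable to export-then-import when the goal is to transform the library rather than just move it.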
  • Thank you for your answers. I think copying the sqlite file seems like a good option, although I will look into the group-syncing option as well. Is there good documentation on this?
  • edited April 8, 2018
    Still not clear what you're trying to do. But sync and groups and backup are all well documented.
  • Yes, can you please take a step back and describe exactly what you are trying to do? What is this database of 30000 records? Is it exported from Zotero? Downloaded from PubMed or another online database?
  • Thanks. The group-sync method solved this problem for me.