Unexpected file upload status 409
[JavaScript Error: "[Exception... "'Unexpected file upload status 409 in Zotero.Sync.Storage.WebDAV._onUploadComplete()' when calling method: [nsIStreamListener::onStopRequest]" nsresult: "0x8057001e (NS_ERROR_XPC_JS_THREW_STRING)" location: "<unknown>" data: no]"]
Report ID 1167518090
Hmm, about to generate a Debug ID.
Jon.
OK, Debug ID D1916169234.
Ubuntu 10.04, Firefox 3.6.8.
Still got plenty of room on my WebDAV server.
"An unknown error occurred. Please check your file sync settings or contact your server administrator."
The Debug ID is D393813943.
My WebDAV provider is MyDrive, and I can upload files through their interface with no problem. I've contacted them, so I'll wait to see what they say.
Might it be something like the number of files in the directory?
I'm a developer working for Jon's WebDAV provider :) He contacted us about this issue, and the cause is indeed that we started limiting the number of objects per folder today to max. 1000 (to avoid certain performance issues).
Is there a hard limit on the number of files Zotero creates, or does this depend on the number of objects the user syncs with Zotero?
Would it be possible to split it up into smaller folders?
Cheers,
Markus from mydrive.ch
Splitting up into smaller folders is a possibility, but it's unlikely we'd be able to get to it in the near future, I'm afraid.
vogeltje, if enough MyDrive users complain here, maybe they'll put more priority on reworking Zotero to use multiple folders :)
It would be terrific if Zotero changed its storage layout to limit the number of files in a single folder, as I previously ran into this limit with another WebDAV provider.
I've looked around a bit but can't find an alternate WebDAV provider. Syncing was super-fast with MyDrive!
Read the code.google.com link below to see why files need to be split across multiple folders: it makes a lot of sense from the server's perspective (loading several thousand files into memory within PHP is very server-intensive and impractical for shared hosting services). Since Zotero already generates the filenames, why not automatically generate subfolders to limit the number of files per folder?
http://code.google.com/p/sabredav/issues/detail?id=76&q=zotero&colspec=ID%20Type%20Status%20Priority%20Component%20Summary
Nico_sub -- or any other 'filed-out' Zotero lover -- please tell us if you've found a trustworthy alternative, even if we need to pay.
http://forums.zotero.org/discussion/69510/
Like I said there, so far, so good.
Guess I will have to look for another server.
We at Softronics have opened a thread:
http://forums.zotero.org/discussion/14156/split-the-data-in-diferent-folders/#Item_1
The "best" is probably the Zotero storage, it is made to work with it!
I do not need to access my papers from any browser, so I do not need the Zotero Storage facility, but my library is still growing. I have therefore gone the "sync" route. I copied my Zotero folder to "My Documents" (with Firefox closed), then pointed Zotero to it (Preferences -> Advanced -> Data directory location). You can then sync that to the cloud using any number of providers: Dropbox is very popular, but there are plenty of others (box.net, SugarSync, JungleDisk...). I am experimenting with SpiderOak, which seems very interesting. Uploading right now at about 1 Mbps. I'll report here on performance later.
Warning, shameless self-promotion: if you click here I get extra space. https://spideroak.com/download/referral/76edd4430c757219830ea3d16cf23420
Zotero data sync is free, so there's little reason not to use this for your database.
You can more safely sync just the 'storage' subdirectory, which contains the static PDF/web snapshots, using a third-party sync tool. But the supported Zotero File Storage and WebDAV mechanisms work quite well.
I realize that you "pretty much said all [you] can say on this in [your] previous post." I just wanted to point out that this is an issue that other people are talking about, and thought the discussion on the SabreDAV development site might be relevant for you.
I appreciate that you guys can't drop everything and attack this issue immediately, but you are going to hit a point in the near future where the number of files in a single directory becomes a major performance-limiting issue. There are really simple workarounds, such as creating subdirectory names from a short hash (perhaps just 2 alphanumeric characters) of the already-generated filename and stuffing files into those subdirectories. I have only entered 222 items into my library and already have 431 files in my /zotero/ WebDAV folder. This will grow dramatically as my group members add their references. It's already causing a tremendous slowdown in the HTML WebCT interface at my university, since it needs to generate an internal representation of every file in the folder whenever a single file is queried. Likewise, this will eventually become problematic in the local filesystem.
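To make the idea concrete, here's a minimal, hypothetical sketch (Python for illustration only, not a patch against the actual Zotero code) of the kind of hashing I mean:

```python
# Hypothetical sketch: derive a two-character subfolder from a hash of
# the filename Zotero already generates, so files spread across at most
# 256 subfolders instead of piling up in one flat directory.
import hashlib

def shard_path(filename: str) -> str:
    """Map e.g. 'ABCD2345.zip' to '<xy>/ABCD2345.zip', where <xy> is
    the first two hex digits of the MD5 of the filename."""
    shard = hashlib.md5(filename.encode("utf-8")).hexdigest()[:2]
    return shard + "/" + filename

# With 256 buckets, even a 100,000-file library averages only ~400
# files per subfolder, instead of 100,000 in a single directory.
print(shard_path("ABCD2345.zip"))
```

Since the shard is derived from the filename itself, the client can compute the path deterministically on both upload and download, with no extra lookups or server round-trips.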
I look forward to when you guys have time to modify the Zotero behaviour. Perhaps I'll make the required changes for my own use and send you an updated copy. However, I'm a bit worried about messing up my own library while playing with the Zotero code.
Thanks again, this is an awesome piece of software.
And the change is a bit more complicated than you make out, because it would have to either 1) migrate existing users' WebDAV data (which is anything but simple) or 2) apply only to new setups and/or new uploads (which would require keeping track of which structure is in use and looking in the appropriate place).
But we'd be happy to look at any patches that you provided.
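To give a sense of what option 1 would entail at a minimum, here's a rough sketch (Python; the server URL and credentials are placeholders, and a real migration would also need failure handling plus a record of which layout a given account uses) of moving a flat folder into hashed subfolders over plain WebDAV:

```python
# Hypothetical migration sketch: create each shard subfolder with MKCOL,
# then ask the server to MOVE each file into its shard. The URL and
# credentials below are placeholders, not real Zotero endpoints.
import hashlib
import requests

BASE = "https://dav.example.com/zotero/"  # placeholder WebDAV folder
AUTH = ("user", "password")               # placeholder credentials

def shard(name: str) -> str:
    return hashlib.md5(name.encode("utf-8")).hexdigest()[:2]

def migrate(filenames):
    created = set()
    for name in filenames:
        sub = shard(name)
        if sub not in created:
            # A 405 response here just means the folder already exists.
            requests.request("MKCOL", BASE + sub + "/", auth=AUTH)
            created.add(sub)
        # Server-side move of the file into its shard subfolder.
        requests.request(
            "MOVE",
            BASE + name,
            auth=AUTH,
            headers={"Destination": BASE + sub + "/" + name},
        )
```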
Zotero does not use PROPFIND requests for the actual syncing, specifically because that would make for very slow sync performance. (Not because of any inherent slowness of PROPFIND, but just because pulling a file list on every sync would be a bad design.)
It uses PROPFIND only when verifying the server and when purging orphaned files. The former uses "Depth: 0", so it's just a query on the folder metadata. The latter uses "Depth: 1" to get the full file list. But it looks like the latter isn't even hooked up in the code right now, so I believe Zotero clients shouldn't be making any non-"Depth: 0" PROPFIND requests currently.
Does that get us anywhere? Rather than preemptively splitting up directories, could you just, say, block "Depth: 1" and "Depth: Infinity" PROPFIND requests if there are more than a given number of files in the folder?
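For reference, the two request shapes in question look roughly like this (a minimal sketch against a hypothetical server; many WebDAV servers treat a bodyless PROPFIND as a request for all properties):

```python
# Illustration of the two PROPFIND variants discussed above; the server
# URL and credentials are placeholders.
import requests

BASE = "https://dav.example.com/zotero/"
AUTH = ("user", "password")

# Depth: 0 -- properties of the folder itself only; cheap for the server.
r0 = requests.request("PROPFIND", BASE, auth=AUTH, headers={"Depth": "0"})

# Depth: 1 -- the folder plus an entry for every file in it; this is the
# expensive call a server could refuse above some file count.
r1 = requests.request("PROPFIND", BASE, auth=AUTH, headers={"Depth": "1"})

print(r0.status_code, r1.status_code)
```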
We're currently looking into several performance improvements and will hopefully be able to raise the limit in a few weeks, and maybe also implement this work-around for Zotero.