A "Needs review" category like in Mendeley (read/unread sorting)

This would be a great workflow improvement
  • What's wrong with using a tag for this?
  • aurimas,

    I agree with you - tags fulfill this purpose. I have a tag for "must read", but the Mendeley "needs review" category is applied automatically, not manually. It searches new references for missing or incorrect information and alerts you to them, so you can go through each one and make sure the reference is correct.

    IH-pol is right: it is a great workflow improvement, as you don't have to manually add the tag to every reference. Perhaps a workaround would be to offer the option to automatically add a tag to every new reference imported (for example, every time a new reference is added to the library, tag it with "needs review").
  • What about tagging items that have been reviewed with a "reviewed" tag and then having a saved search that shows you items without said tag?
  • Well, it depends on when this gets applied, right? Doesn't Mendeley only apply this when it's unsure about the quality of the data, e.g. after retrieving metadata, when fields are missing or the like?
    I actually like that idea. I wouldn't want everything imported tagged as new (Frank actually had an add-on that did that, I believe), that just seems silly.

    Given that over-tagging also has performance effects, we shouldn't overdo it for that reason, too.
  • adamsmith,

    Right. The "needs review" is applied when it appears that fields are missing (which is why I have a saved search for journals that do not have a DOI or URL) and/or Mendeley is unsure about the quality/accuracy of the data. I think it would be great for Zotero to implement such a thing. Obviously you wouldn't need it for every new reference, but Zotero could look at the most important fields (author, year, title, page, volume, issue, ISBN) used for references and make a judgment based on that.
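    The field-based judgment described above could be sketched roughly like this. This is only an illustration of the idea, not Zotero's actual schema or logic: the field names per item type and the fallback list are assumptions made up for the example.

    ```python
    # Illustrative sketch only -- field names per item type are assumptions,
    # not Zotero's real schema.
    IMPORTANT_FIELDS = {
        "journalArticle": ["author", "year", "title", "pages", "volume", "issue"],
        "book": ["author", "year", "title", "ISBN"],
    }

    def needs_review(item):
        """Return the important fields that are missing or empty for this item.

        A non-empty result means the item should be flagged for review.
        """
        required = IMPORTANT_FIELDS.get(item.get("itemType"),
                                        ["author", "year", "title"])
        return [f for f in required if not item.get(f)]

    item = {"itemType": "journalArticle", "author": "Doe, J.",
            "title": "Example", "year": ""}
    print(needs_review(item))  # lists the fields that triggered the flag
    ```

    A real implementation would also need per-type exceptions (see the DOI discussion further down), but the basic shape is a per-item-type checklist.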
  • A form of this will be coming, since we will need something to alert the user of updated/available metadata when we implement lookup of additional/better metadata from within Zotero. I'm just trying to get a grasp on what the needs really are. In general, I would encourage you to take a look at every reference that you import. While we can mark up really bad references, there will always be things that will need tweaking. At least I think this will be the case until there is a centralized curated/crowdsourced database of metadata. Even databases like PubMed, which generally have great metadata, do pop up with errors once in a while.
  • Yeah, and I think for the quality/accuracy issue we'd want to just have a smart search or wizard that highlighted items with problematic fields automatically rather than assigning permanent tags.
  • Sounds good. I look forward to its implementation.
    I do carefully review all my references, but it's not uncommon for me to stumble upon a journal and then import 20 or even 50+ new articles that I'm looking forward to reading. With large-scale imports, it's inevitable that some problems will pop up.

    Dan, it would be a permanent tag, just a tag like Zotero already has. It's applied to those that have questionable metadata. After review, if the user is satisfied that the metadata is correct, they can remove the tag and go about their business.
  • Right, but I'm saying that that's not the approach I see us taking, at least for flagging items with identifiable problems. If Zotero knows what's problematic, there's no need for a tag. (It'd only be an issue if we flagged things that might actually be correct rather than just obvious errors or omissions.)
  • I don't think the back-end implementation of this is really important at this point. The point is that when you want to review items, you should be able to easily see problematic ones. The problematic items are not going to become less problematic over time, so whatever algorithm we use to detect issues can be applied at any time. Thus, there's no point in permanently marking an item as problematic, and a solution like what Dan suggests (and what we have for duplicate items, for instance) would be sufficient.

    OTOH, if the algorithm is complex and takes a while to execute _and_ we want users to be able to see problematic items as they browse their library (which is probably reasonable), then marking the items somehow may be beneficial. The marking doesn't have to be a tag though and it really shouldn't matter to the user what the marking is as long as they can control it.
  • edited November 6, 2014
    (It'd only be an issue if we flagged things that might actually be correct rather than just obvious errors or omissions.)
    Right, but that's a serious issue. We already have people complaining about this for duplicates, and I think it's worse for items that need review. Starting with things like missing DOIs for journal articles: those would be good to check, but not all articles have DOIs. Or, for something rarer, most books would have a creator, but depending on what you study you might have quite a few without; we wouldn't want those to clutter the "review" box and be impossible to remove.

    In other words, while I'm all for handling this intelligently, it needs to be possible to somehow mark an item as reviewed.