Rating Bookmarks

Is there any way to rate bookmarks (like 4/10, or maybe with stars), and the ability to sort them based on that?
I could use tags like "1/10", "2/10", etc., but it wouldn't be as effective, and it would be more time-consuming too.
P.S. I just started using Zotero today :)
  • Not currently, but we've considered adding such a feature, and we'd be interested to know what sort of rating system (1-5 stars, like/dislike, etc.) people think would be most useful.
  • I can't imagine any sensible use for a rating system at all. I'm interested in what people have to say about items, but not remotely in how someone (even me) 'rates' items along some fake measurement scale (if we had a list of Wittgenstein's top 10 philosophy books, how much good would it do us?).

    Anyway, I'll no doubt be in a minority on this one ;) So I'll request a feature instead: will I be able to turn it off?
  • edited March 31, 2008
    Thanks for your fast response, and I'm glad you're considering adding the feature.
    I personally would prefer a 1-5 stars rating system ^^

    Also, thank you for making Zotero. I especially like the snapshot feature, though it would be great if I could save snapshots of different pages of a site and put them all under the snapshot of the main page, with expand/collapse (the same could apply to links). But I think I'm asking for too much >.<;

    Edit:
    About the reply above:
    In my case, I'm bookmarking many artists' sites and I want to rate them based on how I find their style. Some artists have just started drawing, but some have experience, and I want to be able to distinguish between them. I'm sure it has its uses in similar cases, though a disable option wouldn't hurt ;) (or, if you don't rate any site, it's the same as it being turned off).
  • Hi Dan,

    I think the proper way to add rating support to Zotero is to move the tag system to a full taxonomy system, just as Drupal and WordPress did. That would allow users and developers to add rating features, tags, categories, locations, schools of thought, etc.

    You can take a look how wordpress handled it: http://codex.wordpress.org/WordPress_Taxonomy
  • The good thing about tags is that it's not that hard to merge them, which makes it possible to gain significant social advantages from them.

    I'd imagine it'd be more difficult using hierarchical taxonomies.

    In other words, any new feature that goes into Zotero must be designed with social networking (for lack of a better term) in mind.

    For the record, I'm with CB. I just started using the Magnolia bookmark service, and I find the notion of assigning stars really awkward. That's not to say I won't use it, but it just feels wrong to me ATM.
  • I should probably provide a bit more context. One question is the potential use of a rating system as a single-user organizational tool. The other is usefulness in a multi-user environment, particularly in a recommendations system. Focusing on the latter for the moment, CB, you're right that a single user's ratings might provide little utility (though perhaps a bit of novelty). However, in aggregate, ratings data potentially becomes much more interesting.

    The fundamental problem with not having a rating system is that there's not really another good metric for identifying the references that users have in their Zotero libraries but don't actually like or find useful. External citation databases could be a start, but users will have materials in Zotero that aren't in such databases, and the number of times an item is "cited" in Zotero isn't a clear measurement. Citation frequency could also be misleading in general—The Bell Curve might be cited frequently but in vastly different contexts. It seems a recommendations system should have some way to assess that difference. Even having just like/dislike buttons would provide the system with an added dimension to the data (and it would to some degree avoid the problem of different users having different criteria for what goes where on a point scale).
  • Dan: I see what you're saying, but as you say, there are many ways to do this sort of weighting.

    The problem with stars is something like this: what if I absolutely hate an article or a book because I disagree with its ideas, but I consider it essential reading? How do I rate that, and in such a way that it's distinguished from, say, an article I find simply weak (uninspiring, thinly supported, whatever)?

    This is a tricky problem, but the solution might be as simple as subtly changing the meaning of such a weighting.

    Also, for myself, I could see having different audiences: students at different levels, and different scholarly communities. I wonder how this could work best in that context?

    Hmm ... I guess it might be orthogonal, and just a question of assigning items to different groups.

  • The fundamental problem with not having a rating system is that there's not really another good metric for identifying the references that users have in their Zotero libraries but don't actually like or find useful.
    I'm a tad dubious about dealing with the lack of a good metric by using instead a meaningless one. If I rate some things high because I like them, and others because I find them useful, I can't remotely be considered to have said the same thing about both (except to an economist, who'd expunge 'like' and 'find useful', and call them both utilities). I can't see how aggregating these kinds of ratings amounts to anything other than noise (which is just what aggregated ratings do become wherever they're used).

    I won't flog this, as it just seems to be a fact that rating systems are popular (even my uni library is using them in our OPAC, heaven help us), so I guess your users will want them.

    But as an aside, it's probably not *too* hard to come up with more useful ideas for aggregating something approximating people's judgements, though implementing them would be a scarier matter. Here are two off the top of my head:

    Idea 1: adjectives with a rating scale. Have a kind of 'tag' that is, roughly speaking, an adjective (would this be a unary relation, or a binary relation with an adjectival concept, Bruce?). Tagging with an adjective would require placing it on a scale. The scale for each adjective could be averaged across users (or, ultimately and better, across chosen sets of trusted users or groups). Thus you could see at a glance how people think of an item in terms of specific descriptions (e.g. 'well-written', 'good experimental design', 'good fact-checking', etc.), rather than contentless preferences.
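
    As a rough illustration of Idea 1 (a sketch only: the users, adjectives, and scores are all hypothetical, and this is not how Zotero stores anything):

```python
# Sketch: adjective tags placed on a 1-5 scale, averaged across users,
# optionally restricted to a trusted group. All data is made up.
from collections import defaultdict

# (user, adjective, score) judgments for a single item
judgments = [
    ("alice", "well-written", 5),
    ("bob", "well-written", 3),
    ("alice", "good experimental design", 2),
    ("carol", "good experimental design", 4),
]

def adjective_averages(judgments, trusted=None):
    """Average each adjective's scale, optionally over trusted users only."""
    scores = defaultdict(list)
    for user, adjective, score in judgments:
        if trusted is None or user in trusted:
            scores[adjective].append(score)
    return {adj: sum(s) / len(s) for adj, s in scores.items()}

everyone = adjective_averages(judgments)
# averaged over all users
peers = adjective_averages(judgments, trusted={"alice", "bob"})
# averaged over a chosen trusted set only
```

    The second call shows the "chosen sets of trusted users" variant: the same judgments yield different averages depending on whose opinions you count.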

    Idea 2: offer a tag-like 'cloud' for items that scales substantive terms by their frequency of appearance in notes which users have opted to share (or at least to make available for aggregation purposes). I've no idea how informative this would be in practice, but coming up with algorithms to extract contextually substantive words would be a fun project for someone thus inclined ;)

    Sorry, not meaning to be pointlessly negative here. This topic just happens to have set one of my itches a-tingling.
  • Apologies for the redundancy -- Bruce slipped in before me and made essentially the same points.
  • Bruce: All good questions.
    what if I absolutely hate an article or a book because I disagree with its ideas, but I consider it essential reading? How do I rate that, and in such a way that it's distinguished from, say, an article I find simply weak (uninspiring, thinly supported, whatever)?
    It's an important distinction, though it's possible it would come through simply by volume. If something is essential reading, it seems likely that it will be in a large proportion of the Zotero libraries of people for whom it is essential, even if it is widely disliked. Something that is both bad and unimportant might not be.

    Or, as an alternative, if this is the fundamental distinction: have both like/dislike and relevant/irrelevant. ("Relevant"/"irrelevant" may not be the right phrasing, but, basically, the question of "Is this something I need to read?".)

    The issue of different audiences might come out in the wash. It'd be a bit more subtle than explicitly saying, "This item is unimportant for undergrads but essential for grad students," but the items appropriate for each group should be sufficiently clustered already by other metrics (libraries they appear in, collections they appear in, research fields of users whose libraries they appear in, age/grade/degree of users whose libraries they appear in) for the recommendation system to figure out the places they belong.
  • Bruce:

    taxonomies do not need to be hierarchical.

    User case example

    For a single user:
    items could be assigned to categories: (ex: Economy, macroeconomics...)
    items could be tagged (ex: international trade, balance of payments, free trade)
    items could be located (ex: France, Germany)
    items could be rated by usefulness (ex: 4/5)
    items could be rated by likeness (ex: 2/5)

    For a social environment all the terms could be treated as tags.

    in the example above: Economy, macroeconomics, international trade, balance of payments, free trade, France, Germany, usefulness:4/5, likeness:2/5

    We might lose some of the meaning, but aren't tags in some way subjective after all? And from my point of view we would gain more flexibility when sorting/searching, as every user would be able to decide how to sort things: categories and tags, only tags, tags and ratings, places and time, etc.
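
    To make the flattening concrete, here is a sketch of the example above as namespaced flat tags (the item names and helper functions are hypothetical, not Zotero's data model):

```python
# Sketch: categories, locations, and ratings all flattened into one tag
# namespace, so any facet can be used for filtering or sorting.
items = {
    "paper-1": {"category:Economy", "tag:free trade", "place:France",
                "usefulness:4/5", "likeness:2/5"},
    "paper-2": {"category:Economy", "tag:balance of payments",
                "place:Germany", "usefulness:3/5"},
}

def with_tag(items, tag):
    """Return the ids of items carrying a given flat tag."""
    return [key for key, tags in items.items() if tag in tags]

def usefulness(tags):
    """Extract the numeric usefulness rating from a tag set (0 if absent)."""
    for t in tags:
        if t.startswith("usefulness:"):
            return int(t.split(":")[1].split("/")[0])
    return 0

# Filter by any facet, then sort by rating -- each user picks their own axes.
economy = with_tag(items, "category:Economy")
ranked = sorted(economy, key=lambda k: usefulness(items[k]), reverse=True)
```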
  • CB: There might be a distinction between blunt yet fundamental questions of the worth and relevance of an item and the more descriptive sort of adjectives you list. The former might be more useful in a recommendations system, whereas you'd certainly want the latter to be available for filtering, searching, or visualizing the recommended items.
  • raf
    edited April 1, 2008
    I also like the idea of rating items, if only to have a very quick look at which primary sources I should definitely put into my PhD.

    I realise this can be done with a tag (e.g. "to use"), but rating items would offer a visual overview in terms of importance too. If this were implemented as stars in the middle pane (like YouTube videos), the user would have the option either to display them or not (like you can select "creator," "year," "title," etc.).

    Edit: I think this will work very nicely if combined with the collections feature.
  • Dan: from my pov a more ramified system would be better for generating recommendations (if only because a digg-style setup sets such a low bar, being of no worth at all).

    The only kind of recommendations I'd be interested in would come from a *weighted* set of adjectives or similar, with the weighting decided by me, and taking input only from a trusted group or network. There 'ain't no wisdom of the crowd, and star-ratings are for tabloid movie reviews ;)
  • Dear all,

    I agree that giving a single rating to a reference is a tricky subject. However, from a pragmatic point of view, having just an overall rating system is much better than no rating system at all. So I would opt for a five-star system, which would be easy to implement.

    The second step would be to develop a more elaborate system. Here are some suggestions in this direction. Of course, I focus on rating for use in a group environment.

    1. Ask for a more fine-grained rating, covering dimensions such as:

    a. Originality of concepts
    b. Relevance of Applications
    c. Technical Soundness
    d. Importance of Results
    e. Clarity of Presentation

    It could work, but in practice users are likely to find this too long to fill in, and stop using the rating system altogether.

    2. Another way would be to build a trust-based network. For example, you would say how much you trust the judgment of different researchers in the group. If a stupid colleague rates a stupid book with 5 stars, this would count much less than if your smartest colleague rates another reference with 5 stars. The final score is then a weighted mean that takes all of this into account.
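
    The trust-weighted mean described here could look something like the following (a sketch with invented raters and weights, not a proposal for Zotero's actual implementation):

```python
# Sketch: each rater's star rating is weighted by how much *you* trust
# their judgment; the result is a per-item trust-weighted mean.
def trust_weighted_rating(ratings, trust):
    """ratings: {rater: stars}; trust: {rater: weight in [0, 1]}.
    Returns the weighted mean, or None if no trusted rater rated the item."""
    weighted = sum(stars * trust.get(rater, 0.0)
                   for rater, stars in ratings.items())
    total = sum(trust.get(rater, 0.0) for rater in ratings)
    return weighted / total if total else None

ratings = {"smart_colleague": 5, "other_colleague": 1}
trust = {"smart_colleague": 0.9, "other_colleague": 0.1}

score = trust_weighted_rating(ratings, trust)
# the smart colleague's 5 stars dominate: (5*0.9 + 1*0.1) / 1.0 = 4.6
```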

    3. You will find other similar ideas developed by my colleague Marko Rodriguez:

    Rodriguez, M.A., "A Multi-Relational Network to Support the Scholarly Communication Process," International Journal of Public Information Systems, volume 2007, issue 1, pages 13-29, ISSN 1653-4360, LA-UR-06-2416, Mid Sweden University, March 2007.
    http://arxiv.org/abs/cs/0601121

    Best regards,
    Clément Vidal.

    P.S. : Thanks Dan for redirecting me here!
  • It's high time for a simple five-star, one-click rating option. To me, it's not about how others interpret my ranking criteria; it's simply about having a way of quickly separating the wheat from the chaff in my collections. When I collect material for research, I just add relevant material first and sift through it later. As my collections grow, having a handy way to visually flag important stuff is invaluable.
  • @ elloyd

    You may have thought of this already, and I don't suggest it's anything more than a workaround, but have you considered creating tags such as:

    *
    **
    ***
    ****
    *****

    and using these for rating? It's not quite one click, but only one drag onto the tag in the tag selector, or a few extra key presses if you're typing in other tags for a new item.

    I haven't yet found the need for a five star ranking system, but I do have a tag: "IMPORTANT" which I use for core items in each collection.

    For visual flagging see this discussion on tag colors
  • edited August 10, 2009
    I was going to suggest this as well.

    I actually think the * rating tags are better than a workaround -- properly deployed, they mean you can very easily see all 5-star papers with a given subject tag or combination of tags using the tag selector pane, or create saved searches for the same.

    Though now that I'm thinking through some use cases, I suppose one wouldn't be able to sort by ratings with the tag system, since each rating would be its own tag.
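
    That said, outside the tag selector the star tags could in principle be mapped to numbers for sorting (a hypothetical sketch with made-up items, not a built-in Zotero feature):

```python
# Sketch: converting star tags ("*", "**", ...) into numbers so items
# can be sorted by rating, with untagged items last.
def star_rating(tags):
    """Return the star count from a tag list (0 if no star tag present)."""
    star_tags = [t for t in tags if set(t) == {"*"}]
    return max((len(t) for t in star_tags), default=0)

items = [
    ("weak paper", ["methods", "*"]),
    ("key paper", ["methods", "*****"]),
    ("untagged paper", ["methods"]),
]

by_rating = sorted(items, key=lambda item: star_rating(item[1]), reverse=True)
# five-star items first, unrated items last
```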
  • Keep it simple. Single-user, five stars rating system with the ability to turn this feature off. Users can decide for themselves what they use the stars for.

    I agree with Elloyd -- it would be nice to get this up and running before worrying about multi-user functions or the philosophy of ranking things....

    The more organizational tools (with the ability to turn these tools off), the better.
  • edited December 3, 2009
    I also would love to see a rating feature.

    It should be:
    - quick and easy to use
    - usable as a sorting criterion
    - directly visible in the item window

    It would be nice if it had an averaging feature for groups.

    Of course, I can add tags like "*, **, ..." or "bad, boring, ... good, excellent", but they don't show up in the overview, and when I change my mind they're not the fastest thing to change.

    A 5-star ranking is something everyone is used to, so it would be intuitive to use.
    Starting with a single-user ranking would be good, but I see the strength of such a simple ranking when it comes to groups: with an average rating and the number of votes, you can quickly see whether something is good/bad, common/special knowledge.

    And if you just call that field "Ranking," then everybody can use it for their own purposes: user A can use it for personal liking, while B uses it for relevance to their topic and C for something completely different.
  • edited December 2, 2009
    in the example above: Economy, macroeconomics, international trade, balance of payments, free trade, France, Germany, usefulness:4/5, likeness:2/5
    I know what you're getting at, but I feel it's over-generalizing a bit. See, if you allow "custom" systems of ratings, or a large set of possible ratings, most of those will be left unused since it's way too much overhead for the user, for such a simple thing (as Clement said). This also applies (IMHO) to the wordpress system.
    Or, as an alternative, if this is the fundamental distinction: have both like/dislike and relevant/irrelevant. ("Relevant"/"irrelevant" may not be the right phrasing, but, basically, the question of "Is this something I need to read?".)
    If I could rate rating systems (*ahem*), I'd rate the like/dislike scale as irrelevant. Others may disagree, but whether or not you "like" an item is not of much use; it's either relevant to your work or not!

    The problem here is that we're carrying over the stars' meaning from other sites, where ratings are used in a popularity contest. Our libraries represent our work (no matter how fun it may be ;) ), and when we're looking for documents, personal preference will correlate with personal relevance. If you don't like something, it's not relevant to you. It's useless to differentiate them to say "it may be relevant to someone else, although I don't like it" -- you're not rating for others, you're rating for yourself. Let the others rate their own work.

    I hope I'm getting my point across. We're used to star ratings based on personal preference, but here the most useful measure is relevance. Naming the field "Relevance" would probably suffice to convey this to the user. If that's not enough, maybe show some text after the stars with the meaning of the rating. I can't think of too many adjectives, so here's an example of a three-star system: 1 - irrelevant to me, 2 - relevant to me, 3 - essential to me. (The "to me" part is to reassure the user that he/she is not saying it's irrelevant in all fields and contexts.)
  • edited February 12, 2010
    I would favour a rating system that is minimal and visual.

    library item 1
    library item 2
    library item 3
    library item 4
    library item 5


    In this way, I could highlight only the best publications in a folder with many items, for easier retrieval later on. It's also flexible, could be used for something entirely different by another user, and doesn't use up precious space in the already crowded Zotero window.
  • Color coding is not a good option, in my opinion.
    If you have one extra color, then people will want more colors, and then they have to remember what red was for, and what the green stuff was...
    And rating with just a binary "high"/"low" option is not very efficient.
    So I vote again for a 5 star ranking. It's easy, versatile, well established elsewhere, and with an averaging feature it can be used in groups.
  • There's a decent chance that color-based marking will happen, and there's a good chance that, if it does, it will use tags.
  • I have been looking to replicate something like the Faculty of 1000 (f1000.com) rating schemes, but for my own groups. The idea being that I want to see (a) what other people are reading, and (b) have a quick way to decide whether I should read the same paper.

    The F1000 solution is, in addition to having something like an arbitrary scale (F1000 uses 10 points, which they calculate using some proprietary scheme; this can be aggregated over the total "social" group of users that you belong to), to also have some fields with a qualitative descriptor of the paper, say "Novel hypothesis", "New finding", "Confirmatory", "Tech advance". Then they have the comments of the "experts", which (if you pay) you can read. I would like to be able to see my own colleagues' comments on the papers too, and to add my own. The best "reviews" are when I can see 2-3 people's pooled "scores" and raw commentary on a paper (and their individual scores too). If I am particularly interested in the comments of a particular colleague, then I can see what else he has reviewed. What I really want is a Yelp (www.yelp.com) for references!

    It would seem that Zotero has all the pieces in place to do something like this.

    Claudio
  • +1
    for a 1-5 star rating system as a simple single-user organizational tool, putting off the social/collective aspects for future deliberation. I think these are two separate issues, and the implementation of complex collective rankings should not indefinitely defer this relatively simple request (the first post in this thread is from March 2008).

    Personally, a simple 1-5 ranking for every citation is a very basic and necessary requirement for organizing publications, and its absence leads to a lot of pointless toil. For instance, I did an extensive literature search yesterday for a paper that I am writing and now have at least 55 references in that particular folder. Some of these need immediate in-depth reading, while some were added as references for minor points (and the rest fall in the middle in their importance). I knew which was which when I added them, but as I sit down this morning to start reading, I find myself at a loss. I switched to Zotero from Papers (which has a similar rating system); while Zotero offers much more than Papers and I don't intend to go back, I do miss this very simple feature.

    I have tried using tags for this particular purpose and they don't compare well with the ease and simplicity of a simple 1-5 star rating system.
  • Hello,

    I am using Zotero to write my PhD thesis. I already have about 300 items in my library, and by the end of this I may have over 1000. Some I will want to use again and again, others just once. I would really appreciate a simple 1-5 stars rating system to flag those items that are particularly relevant or may be of future use to me. Please include this feature in the next update! By the way, I love Zotero. It works fantastically. I can't believe all my colleagues are still using EndNote!
  • I vote for keeping it simple with a five-star system. Each user can decide whether they want to use it for how much they like an article vs. how useful/essential they consider it. But I would love to have this option NOW NOW! : )
    wildly grateful for zotero,
    Sara
  • Would LOVE a five-star rating system (like iTunes)... it would help tremendously in tracking favorite citations. It's the one thing I consistently wish Zotero had.
  • I would also like very much for a rating system, 5-star is fine.

    For whoever asked why tags aren't enough, here's a quick reply: I use tags to categorize my references, and I search within these categories by having a saved search for each category. If I used tags to rate, I would have to create many more saved searches to view only the well-rated ones, for instance. With a real rating system, I could add the ratings to the view columns in Zotero, and so when browsing my saved searches I could just sort by rating and have quick access to the papers I selected as the best in that category.

    Btw, I would really love this feature as soon as possible... I saw the ticket for it, and saw that it was even planned for version 1.0, then 1.5, then 2.0, but never implemented... Why???