Following up on @carrotroll's question, I noticed that in some cases initials are suddenly added to distinguish between such publications (whereas for other publications, a second author is added, as in carrotroll's example). For example, my own publications are listed in-text as 'Vandeputte et al., 2021' or 'M.M. Vandeputte et al., 2021' or 'M. Vandeputte et al., 2020' for publications in the same year with different author groups. Can this be changed? All of these publications are listed correctly and consistently in Zotero (Vandeputte // M. M.).
The style follows the APA manual exactly here. It is correct APA style to add initials to distinguish authors with the same last name, and to add additional names to distinguish citations that would otherwise be shortened to the same “Jones et al.” form. No changes are needed to bring the style into compliance with the APA manual.
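For anyone curious where this behaviour lives in the style file itself: it is driven by CSL's disambiguation options on the citation element. A rough sketch of the relevant attributes, assuming an APA-like setup (the attribute names come from the CSL specification; the exact values below are illustrative, not copied verbatim from the current apa.csl):

<citation et-al-min="3" et-al-use-first="1"
          disambiguate-add-year-suffix="true"
          disambiguate-add-names="true"
          disambiguate-add-givenname="true"
          givenname-disambiguation-rule="primary-name">
  <!-- layout omitted -->
</citation>

disambiguate-add-givenname is what adds initials when different first authors share a surname, and disambiguate-add-names is what keeps adding co-authors until two citations that would otherwise collapse to the same “Jones et al., year” form can be told apart.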
Law statute and common-law case referencing is broken; it doesn't reference properly.
Narrative in-text citations are broken/non-existent.
Please work on these, as APA is one of the big ones to comply with.
@ephestion I don’t know what you mean. The APA style definitely produces legal citations as described in the APA manual. If you are having an issue, please open a new thread and give specific examples.
For narrative citations, use the Suppress Author setting in the Zotero Word plugin.
Suddenly, using APA 7, the in-text citation for one reference is showing many authors, like (Cer, Yang, Kong, Hua, Limtiaco, John, i ostali, 2018), while it should be (Cer i ostali, 2018). For many other references (and there are a large number of them), the behaviour is normal: more than two authors get "et al." (or, in Croatian, "i ostali").
Full reference:
Cer, D., Yang, Y., Kong, S., Hua, N., Limtiaco, N., John, R. S., Constant, N., Guajardo-Cespedes, M., Yuan, S., Tar, C., Sung, Y.-H., Strope, B., & Kurzweil, R. (2018). Universal Sentence Encoder. arXiv:1803.11175 [cs]. http://arxiv.org/abs/1803.11175
In bibtex:
@article{cer_universal_2018,
title = {Universal {Sentence} {Encoder}},
url = {http://arxiv.org/abs/1803.11175},
abstract = {We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks. The models are efficient and result in accurate performance on diverse transfer tasks. Two variants of the encoding models allow for trade-offs between accuracy and compute resources. For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance. Comparisons are made with baselines that use word level transfer learning via pretrained word embeddings as well as baselines do not use any transfer learning. We find that transfer learning using sentence embeddings tends to outperform word level transfer. With transfer learning via sentence embeddings, we observe surprisingly good performance with minimal amounts of supervised training data for a transfer task. We obtain encouraging results on Word Embedding Association Tests (WEAT) targeted at detecting model bias. Our pre-trained sentence encoding models are made freely available for download and on TF Hub.},
urldate = {2021-12-02},
journal = {arXiv:1803.11175 [cs]},
author = {Cer, Daniel and Yang, Yinfei and Kong, Sheng-yi and Hua, Nan and Limtiaco, Nicole and John, Rhomni St and Constant, Noah and Guajardo-Cespedes, Mario and Yuan, Steve and Tar, Chris and Sung, Yun-Hsuan and Strope, Brian and Kurzweil, Ray},
month = apr,
year = {2018},
note = {ZSCC: 0000598 arXiv: 1803.11175},
keywords = {★, ⛔ No DOI found, Computer Science - Computation and Language},
annote = {Comment: 7 pages; fixed module URL in Listing 1},
file = {arXiv.org Snapshot:/home/tedo/Zotero/storage/AJ6Z8INH/1803.html:text/html;Cer et al. - 2018 - Universal Sentence Encoder.pdf:/home/tedo/Zotero/storage/7HDZ9MEV/Cer et al. - 2018 - Universal Sentence Encoder.pdf:application/pdf},
}
https://s3.amazonaws.com/zotero.org/images/forums/u933088/wx7ph8ljqs20kx0rfdld.png
Is it a bug or how can I fix it?
Thanks.
@adamsmith Thank you. It's a bad rule (it should be limited to three authors at most, since there is already the a, b, c... suffix on the year), but it has nothing to do with Zotero.
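If anyone does want behaviour that deviates from strict APA here (both questions above amount to "can this be changed?"), the usual route is a local copy of the style with the name-based disambiguation dialled back. A minimal sketch of such an edit, with the same caveat as above that the values are illustrative and this is not the shipped apa.csl:

<citation et-al-min="3" et-al-use-first="1"
          disambiguate-add-year-suffix="true"
          disambiguate-add-names="false"
          disambiguate-add-givenname="false">
  <!-- layout omitted -->
</citation>

With disambiguate-add-names and disambiguate-add-givenname turned off, ambiguous citations should then fall back to the a, b, c... year suffixes instead of growing author lists, at the cost of no longer being strictly APA-compliant.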