
juliusbeezer : citation

Fixing Instead of Breaking, Part One - Open Citations - The Scholarly Kitchen
Recent calls to make citation data “open” could move citations into this dubious modern age, and there is a good amount of enthusiasm for the innovation. But are there potential downsides? Could open citations inadvertently foment herd mentality and swarm behavior around citations? Could it feed the dominance of top journals by reinforcing their position with fast feedback loops? Could it increasingly feed the surveillance economy that’s been built around platforms and free content? Could it entice authors and editors to find new ways to cheat their way up the ladder?

Revealing citations in more or less real time may change how articles are cited, and not in a legitimate or informative way. When metrics are so visible, accessible, and responsive, they can create feedback loops that promulgate swarm behavior. Popularity becomes a self-fulfilling prophecy: seeing that something is cited a lot might make you more likely to cite it.
citation  scholarly  open  impact_factor  sciencepublishing 
january 2018 by juliusbeezer
The selfish scientist’s guide to preprint posting – nikokriegeskorte
My lab came around to routine preprint posting for entirely selfish reasons. Our decision was triggered by an experience that drove home the power of preprints. A competing lab had posted a paper closely related to one of our projects as a preprint. We did not post preprints at the time, but we cited their preprint in the paper on our project. Our paper appeared before theirs in the same journal. Although we were first, by a few months, with a peer-reviewed journal paper, they were first with their preprint. Moreover, our competitors could not cite us, because we had not posted a preprint and their paper had already been finalised when ours appeared. Appropriately, they took precedence in the citation graph – with us citing them, but not vice versa.
archiving  citation  sciencepublishing 
may 2017 by juliusbeezer
Quote for Today from Paul Feyerabend | The Professor's Notes
While the political scientist in me as a rule stops listening when I hear someone is an “anarchist”, the use of the word in this case carries far different baggage. That said, here’s the quote from his introduction, page 2:
In cases where the scientists’ work affects the public it even should participate: first, because it is a concerned party (many scientific decisions affect public life); secondly, because such participation is the best scientific education the public can get–a full democratization of science (which includes the protection of minorities such as scientists) is not in conflict with science.
science  philosophy  search  citation  publishing 
may 2017 by juliusbeezer
Is it OK to cite preprints? Yes, yes it is. | Jabberwocky Ecology
Why hasn’t citing unreviewed work caused the wheels to fall off of science? Because citing appropriate work in the proper context is part of our job. There are good preprints and bad preprints, good reports and bad reports, good data and bad data, good software and bad software, and good papers and bad papers. As Belinda Phipson, Casey Green, Dave Harris and Sebastian Raschka point out, it is up to us as the people citing research to make professional judgments about what is good science and should be cited. Casey’s take captures my thoughts on this exactly:
citation  peerreview  archiving 
may 2017 by juliusbeezer
How to Cite a President Trump Tweet - EasyBib Blog
Trump, D. [realDonaldTrump]. (2017, January 3). I will be having a general news conference on JANUARY ELEVENTH in N.Y.C. Thank you [Tweet]. Retrieved from https://twitter.com/realDonaldTrump/status/816433590892429312
twitter  citation  editing 
january 2017 by juliusbeezer
How Academia.edu promotes poor metadata and plays to our vanity… and how it could improve. | Mark Dingemanse - Academia.edu
The process of adding papers is geared towards enriching Academia.edu content rather than towards promoting the sharing of correct and complete scientific information.
After Academia.edu gets your PDF (and that's a requirement for adding a paper), there are very few opportunities for providing metadata, and the primary upload interface cares more about fluff like 'research interests' than about getting the basic bibliographic metadata right. There is no way to import via DOI or PMID (which would prevent many errors), or even to record these identifiers.

a fatal lack of concern for interoperability which is quite surprising...

Academia.edu is built for single-authored papers, and its handling of multi-authored papers is surprisingly poor.
The default way of scraping author names leads to many errors, and they can only be fixed manually. Take the paper Academia.edu staff published on 'discoverability': the authors are all jumbled up. Only the original uploader owns the item and can add or fix bibliographic metadata, and for other authors it's hard to see who the owner is. There is no system for duplicate detection and resolution. It is too easy for multiple authors to upload the same paper with slight differences in bibliographic metadata.
citation  scholarly  socialnetworking 
september 2016 by juliusbeezer
Set up a ‘self-retraction’ system for honest errors : Nature News & Comment
such reluctance to retract errors would be avoided if we could easily distinguish between ‘good’ and ‘bad’ retractions. In our research on misconduct, my colleagues and I informally use terms such as ‘honest retraction’. However, these carry a judgement inappropriate for formal notices. Using a more neutral term such as ‘withdrawal’ could solve that, but it is probably too late to impose a new word on the scientific system.

A more realistic solution is to mimic the way in which bibliometrics researchers use the term self-citation. Superficially, citations all look the same, and are classified as such in databases. However, citations that authors direct at their own work are a self-evident subcategory, which is easily and objectively marked out in any analysis. We can do the same with retractions.

Simply, we should define a self-retraction as any retraction notice that is signed by all co-authors. This is a natural category, which academics, administrators, policymakers and journalists could use unambiguously.
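The definition is mechanical enough to apply in code. A minimal sketch (my illustration; the article proposes the category, not this function or its field names):

```python
def is_self_retraction(paper_authors, notice_signatories):
    """Per the definition above: a retraction notice signed by all
    co-authors of the paper counts as a self-retraction."""
    return set(paper_authors) <= set(notice_signatories)

# All three co-authors signed the notice: a self-retraction.
print(is_self_retraction(["Li", "Ng", "Okafor"], ["Li", "Ng", "Okafor"]))  # True
# Only one author (or the journal) signed: a plain retraction.
print(is_self_retraction(["Li", "Ng", "Okafor"], ["Li"]))  # False
```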
citation  sciencepublishing  scholarly 
april 2016 by juliusbeezer
Impact of Social Sciences – Academic citation practices need to be modernized so that all references are digital and lead to full texts.
the primary version of a journal article, the version that we should reference first and most prominently in our own work, and that we should always provide links to, should be in one of four forms:

An article in a wholly open-access journal.
citation  openaccess 
january 2016 by juliusbeezer
Selecting for impact: new data debunks old beliefs | Frontiers Blog
As you can see, Figure 1 shows there is absolutely no correlation between rejection rates and impact factor (r² = 0.0023; we assume the sample of 570 journals is sufficiently random to represent the full dataset, given that it spans across fields and publishers). In fact, many journals with high rejection rates have low impact factors and many journals with low rejection rates have impact factors that are higher than the bulk of journals with rejection rates of 70-80%. Clearly, selecting “winners” is hard and the belief that obtaining a high impact factor simply requires high rejection rates is false.
citation  scholarly 
january 2016 by juliusbeezer
Structured Affiliations Extraction from Scientific Literature
the problem of extracting a document's authors and affiliations remains difficult and challenging, mainly due to the vast diversity of possible layouts and styles used in articles. In different documents the same type of information can be displayed in different places using a variety of formatting styles and fonts. For instance, a random subset of 125,000 documents from PubMed Central contains publications from nearly 500 different publishers, many of which use original layouts and styles in their articles. What is more, the PDF format, which is the most popular for storing source documents, does not preserve the information related to the document's structure, such as words and paragraphs, lists and enumerations, or the reading order of the text. This information has to be reverse engineered based on the text content and the way the text is displayed in the source file.

Nevertheless, there exist a number of methods and tools for extracting metadata from scientific literature. They differ in availability, licenses, the scope of extracted information, algorithms and approaches used, input formats and performance.
authorship  citation  sciencepublishing  tools 
november 2015 by juliusbeezer
The Open Access Citation Advantage Service « SPARC Europe
The OpCit project has for many years kept up to date a list of studies on whether or not there is a citation advantage for Open Access articles. That project has now completed and the list is no longer being managed. SPARC Europe is pleased to maintain the list henceforth and has brought it up to date.

In 2010, a summary of all the studies to date was published. This, too, has been brought up to date, and the current summary table lists all studies, some comparative details of their methodologies, and their findings.

We know the OpCit project’s work was highly valued and SPARC Europe is pleased to continue to capture that value for users.

Total number of studies so far: 70
Studies that found a citation advantage: 46
Studies that found no citation advantage: 17
Studies that were inconclusive, found non-significant data, or measured things other than a citation advantage for articles: 7
citation  openaccess 
november 2015 by juliusbeezer
Thomson Reuters mulls sale of business that 'does not align' - THE BARON
The business, which has 3,200 employees, provides intellectual property and scientific information and associated tools and services to governments, universities and companies. In 2014 it had revenue of about $1 billion, contributing about eight per cent of Thomson Reuters' total revenue of $12.6 billion before currency adjustments.

“While a few of IP & Science's businesses operate at the intersection of global commerce and regulation, the vast majority of the unit does not align in the same way as our other business units," chief executive James Smith told employees in an e-mail. "The decision we are announcing today reflects the difficult choices we must all make every day as we prioritize our resources and energy towards our key growth opportunities.”
citation  business 
november 2015 by juliusbeezer
A New and Stunning Metric from NIH Reveals the Real Nature of Scientific Impact | ASCB
Today we received strong evidence that significant scientific impact is not tied to the publishing journal’s JIF score. First results from a new analytical method that the National Institutes of Health (NIH) is calling the Relative Citation Ratio (RCR) reveal that almost 90% of breakthrough papers first appeared in journals with relatively modest journal impact factors. According to the RCR, these papers exerted major influence within their fields yet their impact was overlooked, not because of their irrelevance, but because of the widespread use of the wrong metrics to rank science.

In the initial RCR analysis carried out by NIH, high impact factor journals (JIF ≥ 28) account for only 11% of papers that have high RCR (3 or above). Here is hard evidence for what DORA supporters have been saying since 2012. Using the JIF to credit influential work means overlooking 89% of similarly influential papers published in less prestigious venues.
citation  altmetrics  scholarly  sciencepublishing 
october 2015 by juliusbeezer
Impact of Social Sciences – Wikipedia is significantly amplifying the impact of Open Access publications.
the single biggest predictor of a journal’s appearance in Wikipedia is its impact factor – the higher the better. Yet, a really exciting finding to pop out of the data is that, for any given journal, those that are designated as “open access” are 47% more likely to appear in Wikipedia than comparable “closed access” journals. It looks like Wikipedia editors are putting a premium on open access. It is important to emphasize that this does not mean that Wikipedia editors are citing “open access” journals more often than closed access journals. What seems to really matter most to Wikipedia editors is impact factor. Nevertheless, when given a choice between journals of highly similar impact factors, Wikipedia editors are significantly more likely to select the “open access” option.
openaccess  citation  twitter 
october 2015 by juliusbeezer
Wrong Number: A closer look at Impact Factors | quantixed
- the JIF is based on highly skewed data
- it is difficult to reproduce the JIFs from Thomson Reuters
- the JIF is a very poor indicator of the number of citations a random paper in the journal received
- reporting a JIF to 3 d.p. is ridiculous; it would be better to round to the nearest 5 or 10.
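The first and third points follow directly from skew: the JIF is essentially a mean, and with heavy-tailed citation data the mean sits well above what a typical paper receives. A minimal simulation sketch (my own, with invented parameters):

```python
import random

random.seed(1)

# Simulate one journal-year of per-paper citation counts with a heavy
# right tail (lognormal), mimicking the skew of real citation data.
citations = [int(random.lognormvariate(0.5, 1.2)) for _ in range(2000)]

mean = sum(citations) / len(citations)           # what a JIF-style average sees
median = sorted(citations)[len(citations) // 2]  # what a typical paper gets
print(f"mean (JIF-like): {mean:.2f}")
print(f"median paper:    {median}")
# A few highly cited papers drag the mean well above the median,
# so the JIF says little about a randomly chosen paper.
```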
citation  journals  sciencepublishing 
september 2015 by juliusbeezer
Rédaction Médicale et Scientifique: A 2014 impact factor of 17.5 for the BMJ and an Altmetric score of 270k: how many times were the 3 most-cited articles cited?
What does an Altmetric score of 270k mean? Who understands it? The Altmetric score is a calculation whose formulas are not made explicit, based on social networks, various media, and systems such as Mendeley, CiteULike, etc. It takes the form of a logo whose colours also vary according to defined rules... but 270k for the BMJ in 2014... how is one to interpret that?
altmetrics  citation  dccomment  français 
august 2015 by juliusbeezer
Caged Masterpieces: Chris Hoofnagle Reviews Arthur Leff’s Swindling and Selling | Authors Alliance
This wonderful book is out of print, and practically unavailable to new generations of lawyers and thinkers who focus on consumer protection. As of this writing, the least expensive used copy of it is $74 on Amazon.com. The entire University of California library system has only two copies of this work. According to Google Scholar, despite Leff’s brilliance and masterful discussion, the book has only attracted 27 citations.

Countless works of enduring value and significance fall out of print and remain essentially off-limits, which not only denies their creators an intellectual legacy, but also stymies researchers, libraries, artists, and others whose work could be enriched by access.
copyright  bookselling  library  publishing  citation 
august 2015 by juliusbeezer
[1506.07608] Amplifying the Impact of Open Access: Wikipedia and the Diffusion of Science
Among the implications of this study is that the chief effect of open access policies may be to significantly amplify the diffusion of science, through an intermediary like Wikipedia, to a broad public audience.
openaccess  wikipedia  agnotology  arxiv  citation 
june 2015 by juliusbeezer
(1) Open Access Meets Discoverability: Citations to Articles Posted to Academia.edu | Yuri Niyazov, Josh Schwartzman, Carl Vogel, Maxwell Shron, David Judd, Adnan Akil, Ben Lund, and Richard Price - Academia.edu
a typical article posted on Academia.edu receives approximately 37% more citations than similar articles not available online in the first year after upload, rising to 58% after three years, and 83% after five years.
openaccess  citation  scholarly  sciencepublishing 
june 2015 by juliusbeezer
Academics Write Papers Arguing Over How Many People Read (And Cite) Their Papers | Smart News | Smithsonian
There are a lot of scientific papers out there. One estimate puts the count at 1.8 million articles published each year, in about 28,000 journals. Who actually reads those papers? According to one 2007 study, not many people: half of academic papers are read only by their authors and journal editors, the study's authors write.

But not all academics accept that they have an audience of three. There's a heated dispute around academic readership and citation—enough that there have been studies about reading studies going back for more than two decades.

In the 2007 study, the authors introduce their topic by noting that “as many as 50% of papers are never read by anyone other than their authors, referees and journal editors.” They also claim that 90 percent of papers published are never cited. Some academics are unsurprised by these numbers. “I distinctly remember focusing not so much on the hyper-specific nature of these research topics, but how it must feel as an academic to spend so much time on a topic so far on the periphery of human interest,” writes Aaron Gordon at Pacific Standard. “Academia’s incentive structure is such that it’s better to publish something than nothing,” he explains, even if that something is only read by you and your reviewers.

But not everybody agrees these numbers are fair. The claim that half of papers are never cited comes first from a paper from 1990. “Statistics compiled by the Philadelphia-based Institute for Scientific Information (ISI) indicate that 55% of the papers published between 1981 and 1985 in journals indexed by the institute received no citations at all in the 5 years after they were published,” David P. Hamilton wrote in Science.
citation 
march 2015 by juliusbeezer
The Winnower | DIY Scientific Publishing
Jauntily written but ultimately empty discussion of DOIs, because it hinges on a straw man of an Elsevier journal's policy:
"References should include only articles that are published or in press. For references to in press articles, please confirm with the cited journal that the article is in fact accepted and in press and include a DOI number and online publication date. Unpublished data, submitted manuscripts, abstracts, and personal communications should be cited within the text only."
citation 
march 2015 by juliusbeezer
Real-time Stream of DOIs being cited in Wikipedia | CrossTech
We’re interested in seeing how DOIs are used outside of the formal scholarly literature. What does that mean? We don’t fully know, that’s the point. We have retractions in scholarly literature (and our CrossMark metadata and service allow publishers to record that), but it’s a bit different on Wikipedia. Edit wars are fought over … well you can see for yourself.

Citations can slip in and out of articles. We saw the DOI 10.1001/archpediatrics.2011.832 deleted from “Bipolar disorder in children”. If we’d not been monitoring the live feed (we had considered analysing snapshots of the Wikipedia in bulk) we might never have seen that. This is part of what non-traditional citations means, and it wasn’t obvious until we’d seen it.

You can see this activity on the Chronograph’s stream. Or check your favourite DOI. Please be aware that we’re only collecting newly added citations as of today. We do intend to go back and back-fill, but that may take some time – as it *cough* requires polling again.
citation  wikipedia  sciencepublishing 
march 2015 by juliusbeezer
PLOS Medicine: The Impact Factor Game
Although we have not attempted to play this game, we did, because of the value that authors place on it, attempt to understand the rules. During discussions with Thomson Scientific over which article types in PLoS Medicine the company deems as “citable,” it became clear that the process of determining a journal's impact factor is unscientific and arbitrary.
citation  altmetrics  sciencepublishing  journals 
february 2015 by juliusbeezer
The San Francisco Declaration on Research Assessment (DORA)
Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include: A) citation distributions within journals are highly skewed [1–3]; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews [1, 4]; C) Journal Impact Factors can be manipulated (or “gamed”) by editorial policy [5]; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public [4, 6, 7].
citation  altmetrics  peerreview  research  sciencepublishing 
february 2015 by juliusbeezer
PeerJ–A PLOS ONE Contender in 2015? | The Scholarly Kitchen
The most-cited paper in PeerJ wasn’t about biology or medicine but about science publishing (“Data reuse and the open data citation advantage” by Heather Piwowar, one of the cofounders of ImpactStory), which received 12 citations, many of which were from editorials and news extolling the benefits of open access data publishing.

So how is PeerJ going to perform? If we include 402 citations in the numerator and 231 citable items in the denominator, we arrive at a base score of 1.740. This figure doesn’t include self-citations (citations from PeerJ in 2014 citing other PeerJ articles published in 2013), since PeerJ articles are not yet indexed in the Web of Science. While self-citation rates can be particularly high in specialist journals for which there are few other journals publishing articles on the same topic, multidisciplinary biomedical journals generally have low self-citation rates. For PLOS ONE, self-citation rates affecting their Impact Factor calculation range from 8% to 14%. If PeerJ is comparable, we are looking at a first Impact Factor between 1.879 and 1.984.
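The projection is simple arithmetic on the quoted figures. A minimal sketch reproducing it (my own; it treats the self-citation rate as extra citations added on top of the 402, which is what makes the quoted 1.879-1.984 range come out):

```python
citations = 402      # 2014 citations to PeerJ's 2013 articles
citable_items = 231  # citable items PeerJ published in 2013

print(f"base score: {citations / citable_items:.3f}")  # 1.740

# PLOS ONE-like self-citation rates of 8-14%, added on top:
for rate in (0.08, 0.14):
    projected = citations * (1 + rate) / citable_items
    print(f"{rate:.0%} self-citation -> {projected:.3f}")
# 8% -> 1.879, 14% -> 1.984
```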
openaccess  citation  sciencepublishing  scholarly  megajournal 
february 2015 by juliusbeezer
PLOS ONE: Where Should I Send It? Optimizing the Submission Decision Process
How do scientists decide where to submit manuscripts? Many factors influence this decision, including prestige, acceptance probability, turnaround time, target audience, fit, and impact factor. Here, we present a framework for evaluating where to submit a manuscript based on the theory of Markov decision processes. We derive two models, one in which an author is trying to optimally maximize citations and another in which that goal is balanced by either minimizing the number of resubmissions or the total time in review. We parameterize the models with data on acceptance probability, submission-to-decision times, and impact factors for 61 ecology journals. We find that submission sequences beginning with Ecology Letters, Ecological Monographs, or PLOS ONE could be optimal depending on the importance given to time to acceptance or number of resubmissions. This analysis provides some guidance on where to submit a manuscript given the individual-specific values assigned to these disparate objectives.
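The trade-off in the abstract is easy to get a feel for with a toy calculation – not the paper's actual Markov decision process, just a sketch with invented journal parameters:

```python
# Toy submission chain: (journal, acceptance probability, months in review).
# Numbers are invented; the paper estimates these from data on 61 ecology journals.
sequence = [
    ("Journal A", 0.10, 4.0),  # prestigious, slow, long shot
    ("Journal B", 0.35, 3.0),
    ("Journal C", 0.60, 2.0),  # likely acceptance, fast decision
]

def expected_time_and_submissions(seq):
    """Expected months spent in review and expected number of submissions,
    assuming independent decisions and stopping at the first acceptance
    (manuscripts rejected everywhere simply drop out of the sums)."""
    e_time = e_subs = 0.0
    p_reached = 1.0  # probability the manuscript is still unaccepted
    for _name, p_accept, months in seq:
        e_time += p_reached * months
        e_subs += p_reached
        p_reached *= 1.0 - p_accept
    return e_time, e_subs

t, n = expected_time_and_submissions(sequence)
print(f"expected review time: {t:.2f} months over {n:.2f} submissions")
# Reordering `sequence` shifts the balance between time lost and the
# chance of landing the higher-prestige venue.
```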
sciencepublishing  scholarly  citation  openaccess 
january 2015 by juliusbeezer
Manuscript submission modelling – my comments in full - Ross Mounce
Some academics have an odd psychological complex around this thing called ‘scooping’. The authors of this paper are clearly strong believers in scooping. I don’t believe in scooping myself – it’s a perverse misunderstanding of good scientific practice. I believe what happens is that someone publishes something interesting; useful data testing a novel hypothesis — then somewhere else another academic goes “oh no, I’ve been scooped!” without realising that even if they’re testing exactly the same hypothesis, their data & method is probably different in some or many respects — independently generated and thus extremely useful to science as a replication even if the conclusions from the data are essentially the same...
All interesting hypotheses should be tested multiple times by independent labs, so REPLICATION IS A GOOD THING.
I suggest the negative psychology around ‘scooping’ in academia has probably arisen in part from the perverse & destructive academic culture of chasing publication in high impact factor journals. Such journals typically will only accept a paper if it is the first to test a particular hypothesis, regardless of the robustness of approach used – hence the nickname ‘glamour publications’ / glam pubs. Worrying about getting scooped is not healthy for science. We should embrace, publish, and value independent replications.
With relevance to the PLOS ONE paper – it’s a fatal flaw in their model that they assumed that ‘scooped’ (replication) papers had negligible value. This is a false assumption.
sciencepublishing  scholarly  philosophy  citation  peerreview 
january 2015 by juliusbeezer
The effect of open access and downloads ('hits') on citation impact: a bibliography of studies
Citation analysis is specialised and difficult. To make the case for, or against, a claim such as 'open access increases impact' requires a lot of the reader, who may not be a specialist but who wants to try and understand the point at issue and decide if it has any relevance to him or her. The following simple example is included for this reason, not as proof but as evidence of the effect within a particular domain. Draw your own conclusions, and then read the more detailed evidence of the bibliography if you are still interested.
citation  openaccess 
january 2015 by juliusbeezer
The Open Access Advantage in Legal Education’s Age of Assessment - Jotwell: Lex
The authors’ research shows that articles published simultaneously as print and open access law review articles provide at least a 50% citation advantage over their print-only law review counterparts. More specifically, they find that the aggregate cumulative OA advantage for new and retrospective works combined is about 53%; the OA advantage of newer works published during the years 2007-2012 is about 60%. Their research also indicates that OA articles are more heavily cited in the years immediately following an article’s publication and that OA articles tend to “command greater attention over the lifespan of the work” (Donovan et al, at 8).
openaccess  citation  law 
january 2015 by juliusbeezer
Centrepieces: The Indexer - The International Journal of Indexing
Centrepiece 10 (September 2013):

C10:1 Author citations and the indexer — Sylvia Coates
C10:4 Tips for newcomers: Wellington 2013 compiled — Jane Douglas
C10:7 Reflections on the Wilson judging for 2012 — Margie Towery
indexing  citation 
january 2015 by juliusbeezer
Authorea | The Fork Factor: an academic impact factor based on reuse.
we would like to imagine what academia would be like if forking actually mattered in determining a scholar’s reputation and funding. How would you calculate it? Here, we give it a shot. We define the Fork Factor (FF) as:

FF = N(L^(1/N) − 1)

Where N is the number of forks on your work and L their median length. In order to take into account the reproducibility of research data, the length of forks has a higher weight in the FF formula. Indeed, forks with length equal to one likely represent a failure to reproduce the forked research datum.
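Read literally, the formula collapses to zero whenever the median fork has length one, which is exactly the behaviour described. A minimal sketch (my own, and it assumes the reconstruction FF = N(L^(1/N) − 1) above is the intended reading of the garbled original):

```python
def fork_factor(n_forks, median_fork_length):
    """FF = N * (L**(1/N) - 1), as defined above.
    With L == 1 (forks that likely failed to reproduce the work),
    the N-th root is 1 and FF collapses to zero."""
    if n_forks == 0:
        return 0.0
    return n_forks * (median_fork_length ** (1.0 / n_forks) - 1.0)

print(fork_factor(5, 1))  # 0.0   -- five forks, none taken further
print(fork_factor(5, 8))  # ~2.58 -- five forks, each built upon
```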
citation  opendata  openaccess  open 
january 2015 by juliusbeezer
Misrepresenting science is almost as bad as fraud: Randy Schekman - Livemint
My own work that led to the Nobel Prize – the first paper was in PNAS and had very few citations because it was new and no one else was working on this. But the citations grew over time. Measuring the impact factor for that very meaningful paper was useless. What’s happened now is a kind of collusion between these commercial journals and people who calculate this number. This system is broken. I encourage people to think about journals run by scientists and not people who want to sell magazines.
citation  peerreview  sciencepublishing 
january 2015 by juliusbeezer
Cambridge Journals Online - PS: Political Science & Politics - Abstract - Will Open Access Get Me Cited? An Analysis of the Efficacy of Open Access Publishing in Political Science
In this article, we seek to determine the efficacy of OA in political science. Our primary hypothesis is that OA articles will be cited at higher rates than articles that are toll access (TA), which means available only to paying customers. We test this hypothesis by analyzing the mean citation rates of OA and TA articles from eight top-ranked political science journals. We find that OA publication results in a clear citation advantage in political science publishing.
openaccess  citation 
january 2015 by juliusbeezer
Authorea | High Impact Research comes in Lower Impact Packages
the total share of citations going to non-elite articles rose from 27% to 47% over the same period.

Part of the reason for this sudden shift is digitization. In the conclusion to the paper produced by the team responsible for Google Scholar (released 10 years ago in November), they state:

Now that finding and reading relevant articles in non-elite journals is about as easy as finding and reading articles in elite journals, researchers are increasingly building on and citing work published everywhere.
citation  internet  scholarly  sciencepublishing 
december 2014 by juliusbeezer
Authorea | How to: Add and Manage Citations and References on Authorea
Authorea makes citations easy, whether you are citing for the first time or are a power citer. Beyond the simple interface and automatic formatting of citations and lists, sharing is as easy as linking to a webpage, and every listed citation comes with a DOI-based link to the article or its landing page.
citation  tools 
december 2014 by juliusbeezer
The Open Access Advantage for American Law Reviews by James M. Donovan, Carol A. Watson, Caroline Osborne :: SSRN
Articles available in open access formats enjoy an advantage in citation by subsequent law review works of 53%. For every two citations an article would otherwise receive, it can expect a third when made freely available on the Internet. This benefit is not uniformly spread through the law school tiers. Higher tier journals experience a lower OA advantage (11.4%) due to the attention such prestigious works routinely receive regardless of the format. When focusing on the availability of new scholarship, as compared to creating retrospective collections, the aggregated advantage rises to 60.2%. While the first tier advantage rises to 16.8%, the mid-tiers skyrocket to 89.7%. The fourth tier OA advantage comes in at 81.2%.
openaccess  citation 
october 2014 by juliusbeezer
Authorea | The Fork Factor: an academic impact factor based on reuse.
A versioning system, such as Authorea and GitHub, empowers forking of peer-reviewed research data, allowing a colleague of yours to further develop it in a new direction. Forking inherits the history of the work and preserves the value chain of science (i.e., who did what). In other words, forking in science means standing on the shoulders of giants (or soon-to-be giants) and is equivalent to citing someone else’s work but in a functional manner. Whether it is a “negative” result (we like to call it a non-confirmatory result) or not, publishing your peer-reviewed research in Authorea will promote forking of your data. (To learn how we plan to implement peer review in the system, please stay tuned for future posts on this blog.)
citation  git  sciencepublishing  scholarly 
october 2014 by juliusbeezer
International Committee of Medical Journal Editors (ICMJE) Recommendations for the Conduct, Reporting, Editing and Publication of Scholarly Work in Medical Journals: Sample References
The International Committee of Medical Journal Editors offers guidance to authors in its publication Recommendations for the Conduct, Reporting, Editing and Publication of Scholarly Work in Medical Journals (ICMJE Recommendations), which was formerly the Uniform Requirements for Manuscripts. The recommended style for references is based on the National Information Standards Organization NISO Z39.29-2005 (R2010) Bibliographic References as adapted by the National Library of Medicine for its databases. Details are in Citing Medicine. (Note Appendix F which covers how citations in MEDLINE/PubMed differ from the advice in Citing Medicine.) Sample references typically used by authors of journal articles are provided below.
citation  editing  tools  writing  sciencepublishing  medicine 
september 2014 by juliusbeezer
Joint Declaration of Data Citation Principles - FINAL | FORCE11
Data citation, like the citation of other evidence and sources, is good research practice and is part of the scholarly ecosystem supporting data reuse.

In support of this assertion, and to encourage good practice, we offer a set of guiding principles for data within scholarly literature, another dataset, or any other research object.
opendata  citation 
september 2014 by juliusbeezer
References, Please by Tim Parks | NYRblog | The New York Review of Books
There is, in short, an absolutely false, energy-consuming, nit-picking attachment to an outdated procedure that now has much more to do with the sad psychology of academe than with the need to guarantee that the research is serious. By all means, on those occasions where a book exists only in paper and where no details about it are available online, then let us use the traditional footnote. Otherwise, why not wipe the slate clean, start again, and find the simplest possible protocol for ensuring that a reader can check a quotation.
citation  scholarly  reference  archiving 
september 2014 by juliusbeezer
What Jeffrey Beall gets wrong about altmetrics | Impactstory blog
Although early theorists emphasized citation as a dispassionate connector of ideas, more recent research has repeatedly demonstrated that citation actually has more complex motivations, including often as a rhetorical tool or a way to satisfy social obligations (just ask a student who’s failed to cite their advisor). In fact, Simkin and Roychowdhury (2002) estimate that as few as 20% of citers even read the paper they’re citing. That’s before we even start talking about the dramatic disciplinary differences in citation behavior.

When it comes down to it, because we can’t identify citer motivations when looking at a citation count alone (and to date efforts to use sentiment analysis to understand citation motivations have failed to be widely adopted), the only bulletproof way to understand the intent behind citations is to read the citing paper itself.
citation  altmetrics  beall 
september 2014 by juliusbeezer
Academic citation practices need to be modernized — Advice for authoring a PhD or academic book — Medium
modern referencing is not about pointing to some source details for books that cost a small fortune and are buried away in some library where the reader is not present; still less about pointing to source details for an article in a pay-wall journal to which readers do not have access. That is legacy referencing, designed solely to serve the interests of commercial publishers, and 90% irrelevant now to the scholarly enterprise. If that is the best that we can do in connecting readers to our source texts, then it will have to do. But let’s face it, it’s not much use in today’s world.

With open access spreading now we can all do better, far better, if we follow one dominant principle. Referencing should connect readers as far as possible to open access sources, and scholars should in all cases and in every possible way treat the open access versions of texts as the primary source...

the new rules that the British government’s research funding body has already introduced for the next ‘research excellence framework’ (REF) exercise, expected in 2020. For any academic’s or researcher’s journal articles to be considered as part of a university case for REF funding support they will either need to be available in open access form in the journal (free to any reader), or the university must show an immediate pre-publication version of the paper on their e-depository.
citation  reference  openaccess  sciencepublishing 
september 2014 by juliusbeezer
EditorMom: Edifix: Subscription Cloud-Based Service to Automate Editing and Styling of References
Edifix, a subscription cloud-based service that edits reference-list entries to match specified styles and provides digital object identifiers (DOIs) and/or PubMed identifiers (PMIDs) when available. I am experimenting with it via a free trial. I have no financial interest in the software or its producer.

Because more and more biomedical journals ask authors to provide DOIs and/or PMIDs, using this tool may save me lots of time, because I can say from experience that my authors are not going to go back and hunt down DOIs if I ask them to. Plus, editing a reference list for style and fixing incorrect details?
editing  tools  citation  dccomment 
august 2014 by juliusbeezer
An Interview with Amy Brand on a Proposed New Contributor Taxonomy Initiative | The Scholarly Kitchen
this effort addresses the pressing need for a more fine-grained and transparent system of research and publication credit. If it is successful, there will be far fewer author disputes, and fewer disincentives for sharing data, for example. So it could positively influence both the collaborative culture of the lab, and academic incentive structures more generally... a core working group facilitated by CASRAI, focused on fleshing out the taxonomy itself. We had previously tested a taxonomy with 14 roles and it was fairly well received, but we’ve known all along that we would need to refine it in consultation with a broader group of stakeholders, and that’s what we’re doing now.
citation 
august 2014 by juliusbeezer
Let the right one in: hiring academic staff | Opinion | Times Higher Education
We need simply to give more time to the whole business. First, to reading candidates’ research. Notwithstanding the distorting effects of the research excellence framework (which render foreign candidates for posts blindfolded in a minefield, unless they are painstakingly prepared), panels still seem too rarely to read and form their own judgement on candidates’ work, relying instead on reputation or the weight of words. (Where newly appointed professors, for example, are promptly deemed unsubmittable to the REF, something has clearly gone wrong.)

We need, then, to give more time to presentations and interviews: to assess a candidate’s teaching in action, really to probe and explore their research, and to assure ourselves that their apparent strengths will not melt away after appointment.
citation  research  sciencepublishing  politics 
august 2014 by juliusbeezer
Impact of Social Sciences – Are 90% of academic papers really never cited? Reviewing the literature on academic citations.
Many academic articles are never cited, although I could not find any study with a result as high as 90%. Non-citation rates vary enormously by field. “Only” 12% of medicine articles are not cited, compared to about 82% (!) for the humanities. It’s 27% for natural sciences and 32% for social sciences (cite).
citation 
august 2014 by juliusbeezer
Wiþ Endemanndom
Under the man’s name, clarity has appeared at last, owed albeit not to some unfogging of mind, but to plain old stealing.
zizek  copyright  citation 
july 2014 by juliusbeezer
Do open access articles in economics have a citation advantage? - Munich Personal RePEc Archive
We investigate whether articles in economics that are freely available on the web have a citation advantage over articles with gated access. Our sample consists of articles from 2005 from 13 economics journals (including the top five journals). In addition to standard mean comparisons we also use a negative-binomial regression model with several covariates to control for potential selection effects and quality bias. Using citation data from three different databases (Web of Science, RePEc and Google Scholar) we show that articles that are freely available on the internet do indeed have a significantly higher citation count.
openaccess  citation 
july 2014 by juliusbeezer
Beyond the donut: the Altmetric story | Altmetric.com
What first got you interested in altmetrics?
I used to work in bioinformatics, in a lab at Edinburgh University. I wrote a blog about interesting papers or methods I’d come across, and there was a great set of computational biology blogs by others that I’d read every day.

I found those blogs far more useful than, say, journal club. I always wondered why you couldn’t see links out to blogs from journal articles, or conversely have an index of which papers were being mentioned by who. There’s a lot of good discussion happening around research online and it isn’t usually linked to where it’d be most useful to see it, next to the research in question.

So that was one thing that got me interested in these kinds of ideas. The other was a broader problem about getting credit (and funding) for your work. It’s crazy that we still have to do things like write articles about datasets or software not because people need to read the article but because without it some people will assume it cannot be formally cited, and their uses may not be recognized.
altmetrics  citation  blogs 
june 2014 by juliusbeezer
The Really Obvious (but All-Too-Often-Ignored) Guide to Getting Published | Vitae
Many manuscripts don’t get published in the first journal to which they are submitted. This is pretty much a fact of life. At least a third of the articles I have written were rejected by the journal I initially targeted for publication.

But while an editor may suspect that their journal was not your first choice, it’s best not to draw her attention to this. The dead giveaway is the reference list (though occasionally the abstract makes it clear, too).
writing  sciencepublishing  citation 
june 2014 by juliusbeezer
retrieve_pdf_metadata [Zotero Documentation]
Zotero can take PDFs of scholarly papers and query the Google Scholar database for matches.
citation  tools 
june 2014 by juliusbeezer
extract bibtex data from pdf/ps files?? -- CFD Online Discussion Forums
The way I do it is using the electronic god called Google. Go to scholar.google.com and make sure to turn on 'show links to import citations into BibTeX' in Scholar preferences, then type the paper name in the search bar. A link ('Import into BibTeX') will appear below each result; clicking it takes you to a page with the citation. Copy and paste it into your BibTeX file.
citation  tools  google  scholarly 
june 2014 by juliusbeezer
The "50% of all trials are never published" zombie pushed to the surface again this week
Hi,

Forgive my ignorance, but what are “marked up references”? (as here: “If I had been reviewing this as a draft from a less experienced colleague, I would have sent it back and asked for marked u...
citation  dccomment 
june 2014 by juliusbeezer
If you're going to cite a stat, please get your facts right...
The "50% of all trials are never published" zombie pushed to the surface again this week... Their first two citations are inaccurate/overextrapolated. When I find the first two references in an article are questionable, I tend to be wary of the rest of the paper. If I had been reviewing this as a draft from a less experienced colleague, I would have sent it back and asked for marked up references for the rest of the paper. It is the responsibility of the authors to cite supporting literature correctly.
citation  peerreview  sciencepublishing  dccomment 
june 2014 by juliusbeezer
Article-level metrics
Article-level metrics (ALMs) refer to a whole range of measures that can provide insights into the “impact” or “reach” of an individual article. Whereas the well-known Impact Factor measures citations at the journal level, ALMs aim to measure research impact in a transparent and comprehensive manner. They not only look at citations and usage but also include article coverage and discussions in the social web.
Thanks to our partnership with Altmetric, a London-based start-up which tracks and analyzes the online activity surrounding scholarly literature, we are able to provide detailed statistics on each article’s coverage and discussions in the media and on blogs; any bookmarking, ratings and discussions via bibliographic tools and sites such as Papers, Mendeley and ResearchGate; and social media sharing via platforms like Twitter, Facebook and Google+.
altmetrics  sciencepublishing  socialmedia  socialnetworking  citation 
june 2014 by juliusbeezer
Impact of Social Sciences – Global-level data sets may be more highly cited than most journal articles.
I attempted to measure the impact that a few openly accessible data sets have had on scientific research. In my recent paper in Plos One, I analyzed the impact that three freely available oceanographic data sets curated by the US National Oceanographic Data Center have had on oceanographic research by using citations as a measure of impact. Since scientific assessments like the RAE increasingly use citations to journal articles for this purpose, I wanted to do the same for data sets...
My results suggest that all three data sets are more highly cited than most journal articles. Each data set has probably been cited more often than 99% of the journal articles in oceanography that were published during the same years as the data sets. One data set in particular, the World Ocean Atlas and World Ocean Database, has been cited or referenced in over 8,500 journal articles since it was first released in 1982. To put that into perspective, this data set has a citation count over six times higher than any single journal article in oceanography from 1982 to the present.
opendata  citation 
may 2014 by juliusbeezer
Academic citation practices need to be modernized — Advice for authoring a PhD or academic book — Medium
I want to try and convince you that our existing citation and referencing practices are now woefully out of date and no longer fit for purpose in the modern world. The whole scholarly purpose of citing sources has changed around us, but our conventions have not recognized the change nor adapted yet. I first set out what’s wrong with what we do now, and then sketch a radical agenda for starting afresh...
Source quotes replacing page references do not have to be memorable, nor especially salient bits of text, nor very long — they can be very short so long as they are unique. The six words that form this particular link are enough to identify without ambiguity a single sentence in a book of 300+ pages.
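The mechanics behind that claim are easy to test: a quoted phrase works as a citation anchor exactly when it occurs once in the source text. A minimal sketch (my illustration, not the author's tooling):

```python
def locate_unique_phrase(text, phrase):
    """Return the offset of `phrase` if it occurs exactly once in `text`,
    else None: the test for whether a quote works as an unambiguous anchor."""
    first = text.find(phrase)
    if first == -1 or text.find(phrase, first + 1) != -1:
        return None  # absent, or not unique
    return first

book = ("It was the best of times, it was the worst of times, "
        "it was the age of wisdom.")
print(locate_unique_phrase(book, "was the"))            # None: occurs three times
print(locate_unique_phrase(book, "the age of wisdom"))  # unique -> an offset
```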
citation  evocatext 
may 2014 by juliusbeezer
Troll Thread Interview : Tan Lin : Harriet the Blog : The Poetry Foundation
Using Tumblr to host our PDFs and Lulu to print books with no upfront costs, means we didn’t have to waste money on staples, paper, or xeroxing. In effect, Print-on-Demand and PDFs are what poor publishing looks like.

H: The TT [Troll Thread] platform itself has been especially conducive to facilitating the circulation and making of work that otherwise wouldn’t get published if left to the other distribution models out there...
literature is a series of systematic acts of violence committed upon previous notions of “literary” language through violation of convention, which updates the larger cultural definition of “literature” and “literariness” as a result. And it’s funny because it seems like these days, everywhere I look I see nothing but violations of conventional textual codes as a byproduct of this process of digitizing the archive away from the print paradigm.
ebooks  publishing  poetry  literature  informationmastery  internet  citation 
may 2014 by juliusbeezer
Is Elsevier going to take control of us and our data? The Vice-Chancellor of Cambridge thinks so and I’m terrified « petermr's blog
Do you trust Mendeley? Do you trust Elsevier? Do you trust large organisations without independent control (GCHQ, NSA, Google, Facebook)? If you do, stop reading and don’t worry.

In Mendeley, Elsevier has a window onto nearly everything that a scientist is interested in. Every time you read a new paper, Mendeley knows what you are interested in. Mendeley knows your working habits – how much time are you spending on your research?

And this isn’t just passive information. Elsevier has Scopus – a database of citations. How does a paper get into this? – Scopus decides, not the scientific world. Scopus can decide what to highlight and what to hold back. Do you know how Journal Impact Factors are calculated? I don’t because it’s a trade secret. Does Scopus’ Advisory Board guarantee transparency of practice? Not that I can see. Since JIF’s now control much academic thinking and planning, those who control them are in a position to influence academic practice.

Does Mendeley have an advisory board? I couldn’t find one.
sciencepublishing  citation  altmetrics  opendata 
may 2014 by juliusbeezer
Home | JISC Open Citations
Open Citations is a database of biomedical literature citations, harvested from the reference lists of all open access articles in PubMed Central that reference ~20% of all PubMed Central papers (approx. 3.4 million papers), including all the highly cited papers in every biomedical field. All the data are freely available for download and reuse.

This web site allows you to browse these bibliographic records and citations, to select an individual article, and to visualize its citation network in a variety of displays.
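For readers who want the data programmatically rather than through the browsing interface: a minimal lookup sketch, assuming the present-day OpenCitations REST endpoint and its JSON field names (both are assumptions on my part – the service has evolved since this JISC project – so check the current API documentation):

```python
import json
import urllib.request

# Endpoint shape and the "citing"/"cited" field names are assumed from
# the current OpenCitations COCI API docs, not from the page above.
doi = "10.1186/1756-8722-6-59"  # example DOI
url = f"https://opencitations.net/index/coci/api/v1/citations/{doi}"

with urllib.request.urlopen(url) as resp:
    records = json.load(resp)

print(f"{len(records)} citing papers")
for rec in records[:5]:
    print(rec.get("citing"), "cites", rec.get("cited"))
```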
open  citation  openaccess  sciencepublishing 
april 2014 by juliusbeezer
The Topmost Cited DOIs on Wikipedia | Not Confusing
The Wikipedia Open Access Signalling Project, which I’ve recently joined, sees this as a fantastic opportunity to spread the word about the Open Access (OA) movement. We are still in the initial stages of understanding what OA materials are currently cited on-wiki. One of the ways OA has been cited on Wikipedia so far is through Template:Cite DOI. A Digital Object Identifier (DOI) can be thought of as an ISBN for articles – academic or otherwise – in the networked world.

The DOI becomes useful in citations because, like with any identifier, one can unmistakably and machine-readably know what is being cited. For the OA Signalling Project machine-readability is key. It will allow us to tell readers whether a cited resource is OA or free-to-read before they even click on it. By analysing the usage of DOIs on Wikipedia we can see what areas are starting to catch on to this slick method of citation...

Our next task is to start writing a bot that will be able to automatically determine the OA-ness of a citation using some new APIs that have been released by publishers. At that point we will be able to know what percentage of citations on Wikipedia can actually be read by the general public.
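For a flavour of what that analysis involves, here is a minimal sketch (mine, not the project's code) that pulls DOIs out of wikitext via the Template:Cite DOI pattern mentioned above, with invented example DOIs:

```python
import re

# Matches the {{Cite doi|10.xxxx/...}} template named above, case-insensitively
# and tolerant of stray whitespace; the capture is the usual prefix/suffix DOI.
CITE_DOI = re.compile(r"\{\{\s*cite\s+doi\s*\|\s*(10\.\d{4,9}/[^|}\s]+)",
                      re.IGNORECASE)

wikitext = """
An example claim.{{Cite doi|10.1371/journal.pone.0000001}}
See also {{cite doi|10.1234/example.5678}}.
"""

for doi in CITE_DOI.findall(wikitext):
    print(doi)
# 10.1371/journal.pone.0000001
# 10.1234/example.5678
```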
openaccess  wikipedia  citation 
april 2014 by juliusbeezer
Rédaction Médicale et Scientifique: PLOS ONE: still losing steam? Are authors going back to more prestigious journals?
At the end of June 2013 I noted the fall in PLOS ONE's impact factor, wondering whether it was the start of a descent into hell... and others share my view. A post on Scholarly Kitchen in early March 2014 draws attention to the drop in the number of articles published by PLOS ONE: 2,657 in February 2014 against 3,019 in January... In 2013, PLOS ONE published 31,500 articles, 8,000 more than in 2012, and called on 78,000 reviewers... What to make of this? The median number of days from submission to publication was 151... 1,450 PLOS ONE articles were picked up by the press, with 5,000 mentions... And what to make of the list of the articles most cited in the press...

Two months are not enough to judge, but has the fall in the impact factor turned authors towards more prestigious journals? The 2012 impact factor (3.730) dropped, and some predict the next one could come in between 3.1 and 3.2!!!! Very many comments on the value of the impact factor, on reviewers worn out by PLOS ONE, on the real competition now arriving (BMJ Open, for example), etc.
openaccess  citation  medicine 
april 2014 by juliusbeezer
Killing Pigs and Weed Maps: The Mostly Unread World of Academic Papers - Pacific Standard: The Science of Society
These topics get researched, presented, published, and, somewhat tragically, immediately dispatched to the far reaches of the JSTOR archives, a digital library consisting of over 2,000 journals.

In an effort to unearth some of these projects, I used a random word generator to search JSTOR and see what results appeared on the first page. What has been ignored?
screwmeneutics  citation  scholarly 
march 2014 by juliusbeezer
Scientific Citations | Intro Page
Our mission is to enable the scientific community through a paradigm-shift technology that will eliminate mistakes and bias from the practice of scientific referencing.
citation  sciencepublishing 
march 2014 by juliusbeezer
The War Nerd: Everything you know about Crimea is wrong(-er) | PandoDaily
It’s not easy diagnosing the psychotic episode brought on in the western media by Crimea, because anti-Russian stories are pushing two totally contradictory lines at the same time. Sometimes the party line is that Putin has gone crazy, and Russia is a joke, “a gas station masquerading as a country” that will pay a “big price” for grabbing the Crimean Peninsula.

Then there’s the neocon version of Russophobia, peddled by shameless old Iraq-Invasion boosters like Eli Lake. According to Lake’s latest in the Daily Beast, “Russia is invading Ukraine in the shadows.” The proof? Eli don’t need no stinkin’ proof.
russia  politics  funny  writing  citation 
march 2014 by juliusbeezer
Can social media increase the exposure of newly published research? | Mosquito Research and Management
Putting aside the debate around the timing of tweets and their resulting influence on metrics, at the end of the three-week period our paper had received almost twice as many “social shares” as any of the other papers, and subsequently substantially more page views and downloads. Surely the social media effort assisted in this result? I don’t want to draw too much from this relatively simple analysis, but I think the resulting increase in exposure of the publication has been worth the relatively small amount of time invested in spreading the word via Twitter.

Lastly, I think it is important to make a note about the importance of the “traditional” media. As I mentioned earlier, I was both surprised and disappointed at the lack of coverage the publication received. I thought a new study that contributes some answers to one of the most commonly asked questions I get – “are mosquitoes good for anything?” – would have generated more interest. I guess all researchers think their research will attract wider interest!
sciencepublishing  twitter  socialmedia  altmetrics  news  journalism  citation 
march 2014 by juliusbeezer
Data reuse and the open data citation advantage [PeerJ]
In a multivariate regression on 10,555 studies that created gene expression microarray data, we found that studies that made data available in a public repository received 9% (95% confidence interval: 5% to 13%) more citations than similar studies for which the data was not made available.
opendata  citation  sciencepublishing 
february 2014 by juliusbeezer
[1310.8220] Prediction of highly cited papers
In an article written five years ago [arXiv:0809.0522], we described a method for predicting which scientific papers will be highly cited in the future, even if they are currently not highly cited. Applying the method to real citation data we made predictions about papers we believed would end up being well cited. Here we revisit those predictions, five years on, to see how well we did. Among the over 2000 papers in our original data set, we examine the fifty that, by the measures of our previous study, were predicted to do best and we find that they have indeed received substantially more citations in the intervening years than other papers, even after controlling for the number of prior citations. On average these top fifty papers have received 23 times as many citations in the last five years as the average paper in the data set as a whole, and 15 times as many as the average paper in a randomly drawn control group that started out with the same number of citations. Applying our prediction technique to current data, we also make new predictions of papers that we believe will be well cited in the next few years.
citation 
december 2013 by juliusbeezer
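The control-group comparison in the abstract is easy to mimic. A toy sketch follows, with random data and a made-up "score" column standing in for the actual prediction method of arXiv:0809.0522; it only illustrates the design of drawing controls matched on prior citations.

```python
# Toy sketch: predicted papers vs. controls that started with the same
# citation count. Data and the "score" column are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
papers = pd.DataFrame({
    "cites_then": rng.poisson(5, 2000),   # citations at prediction time
    "cites_now": rng.poisson(8, 2000),    # citations five years later
    "score": rng.random(2000),            # stand-in prediction score
})
top50 = papers.nlargest(50, "score")      # the predicted-to-do-best set

# For each predicted paper, draw one random control with the same
# starting citation count, then compare later citations.
controls = pd.concat(
    papers[papers.cites_then == c].sample(1, random_state=i)
    for i, c in enumerate(top50.cites_then)
)
print(top50.cites_now.mean() / controls.cites_now.mean())
# ≈1 on random data; the study reports ~15x for its predicted set.
```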
The Work of Man Has Only Just Begun » Our Aimé Césaire Researchathon
A researchathon is a collective marathon that seeks either to answer a research question or to build a research resource. This is accomplished by bringing together a group of researchers, librarians, technologists, and students in one room for a full day of collaborative work toward a specific goal. The practice derives from the culture of hackathons familiar to technologists, in which programmers gather for long hours, often late into the night, to solve a software problem collaboratively. In the humanities we have already seen a similar phenomenon in the spread of wikithons, or marathons of wikipedia editing, and the exhilarating One Week|One Tool “barn raisings.” The word researchathon was coined, as far as we can tell, by David K. Park, Director of Special Projects at the Applied Statistics Center at Columbia University. Our Césaire researchathon is the first major attempt to bring the researchathon model to research in the humanities at Columbia — or elsewhere, for that matter.
Our researchathon will focus on building the largest existing bibliography of Aimé Césaire’s primary and secondary sources in one day. At the end of the day we hope to offer our work to present and future researchers of Césaire — open access on the open web.
internet  research  socialmedia  socialnetworking  citation  wiki 
december 2013 by juliusbeezer
Not all citations are equal: identifying key citations automatically
Lemire is looking for a razor to cut through huge numbers of scholarly citations to the 'good stuff', but I find the simplifications he has made over-simplifications, and thought I'd say so: his deep/shallow citation concept is almost certainly false.
dccomment  citation  search 
november 2013 by juliusbeezer
CrossMark® for Researchers
A service linked to CrossRef that checks articles for updates, retractions, errata, etc. (and, being a centralised service, a single point of failure).
citation  sciencepublishing  scholarly 
november 2013 by juliusbeezer
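For what such a check looks like in practice, here is a minimal sketch against the public Crossref REST API; the `updates` filter and `update-to` field are as I recall them from that API, so treat the details as an assumption rather than a Crossmark reference, and the DOI is illustrative only. It also makes the single-point-of-failure concern concrete: every lookup goes through the one service.

```python
# Minimal sketch: ask Crossref whether any update notices (errata,
# retractions, ...) have been registered against a given DOI.
import requests

def updates_for(doi: str) -> list:
    """Return works registered as editorial updates to `doi`."""
    r = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}", "rows": 20},
        timeout=10,
    )
    r.raise_for_status()
    return r.json()["message"]["items"]

for notice in updates_for("10.1234/example"):   # illustrative DOI only
    # Each notice's "update-to" entries say what kind of update it is.
    for upd in notice.get("update-to", []):
        print(upd.get("type"), "notice:", notice.get("DOI"))
```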
ArXiv at 20 : Nature : Nature Publishing Group
Again, because of cost and labour overheads, arXiv would not be able to implement conventional peer review. Even the minimal filtering of incoming preprints to maintain basic quality control involves significant daily administrative activity. Incoming abstracts are given a cursory glance by volunteer external moderators for appropriateness to their subject areas; and various automated filters, including a text classifier, flag problem submissions. Although the overall rate of such submissions is well below 1%, they tend to cluster in specific areas (such as general relativity, quantum mechanics and unified theories in physics; proofs of the Riemann hypothesis, Goldbach's conjecture and new proofs of Fermat's last theorem in mathematics; P versus NP problem in computer science).

Moderators, tasked with determining what is of potential interest to their communities, are sometimes forced to ascertain 'what is science?' At this point arXiv unintentionally becomes an accrediting agency for researchers, much as the Science Citation Index became an accrediting agency for journals, by formulating criteria for their inclusion. Although decisions are biased towards permissiveness, inevitably some authors object that it is never permissive enough...
On arXiv, we have seen some of the unintended effects of an entire global research community ingesting the same information from the same interface on a daily basis. The order in which new preprint submissions are displayed in the daily alert, if only for a single day, strongly affects the readership on that day and leaves a measurable trace in the citation record fully six years later [2,3]. Some researchers, wise to this, time their submissions to arrive just after the daily afternoon deadline to maximize their prominence in the next day's mailing. Filters that highlighted 'popular' materials over longer periods of time would exacerbate this effect. Hence any recommender system on arXiv would need, at minimum, to be personalized to individual readership preferences and interests to reduce herding behaviour. Experiments with such systems are ongoing, and may be put online within a year or two if they perform properly.
sciencepublishing  openaccess  peerreview  archiving  repositories  reputation  attention  citation 
november 2013 by juliusbeezer
Outsell Inc. - The world's only research and advisory firm focused solely on media, information, and technology.
Thomson Reuters’ Web of Knowledge and Google Scholar are announcing a major new partnership between their services... When Google Scholar users at the participating institutions hit the Scholar search results page, they see a new Web of Science link directly in the results, under the article preview, as part of Scholar’s familiar navigation bar. On Web of Science, subscribers now can move directly from a Web of Science record to a Scholar search on the same item.
google  citation  search 
november 2013 by juliusbeezer
The usefulness of citation counts depends heavily on the context of an individual’s publishing community. | Impact of Social Sciences
All this dependency is worrisome from both scientific and policy perspectives.

By all means, use citations – they are a relatively cheap source of information. But the information conveyed by citations is limited. So be cautious. Be careful. Use common sense. Seek the insights provided by context.
citation 
november 2013 by juliusbeezer
William Ian Miller reviews ‘Alchemies of the Mind’ by Jon Elster · LRB 10 August 2000
Unusual for a theorist of any stripe these days, Elster has read books, especially by those writers whose most telling virtue is their eye for human foolishness and knavery. Nothing as strong as Swift, but he is addicted to those exposers of knavery, La Rochefoucauld and Stendhal, and those exposers of foolishness, Jane Austen and Montaigne; all have major speaking parts in Elster’s play. In fact, his usual audience of economists, political scientists and psychologists are liable to take exception to his overt pleas for belletrism. In disciplines that frown on any citation that is older than a decade, Montaigne stands no better chance than Erving Goffman. Wheels are reinvented with astounding regularity – if we’re lucky; just as often we lose the skills and talents to reinvent them or reinvent them looking more like triangles and squares.
citation  agnotology 
october 2013 by juliusbeezer
Publishing: Open citations : Nature News & Comment
The OCC, as an open repository of scholarly citation data made available under a Creative Commons public domain dedication, is attempting to improve matters. It aims to provide accurate citation data that others may freely build upon, enhance and reuse for any purpose, without restriction under copyright or database law.
citation  altmetrics 
october 2013 by juliusbeezer
Why Do We Quote? The Culture and History of Quotation - Open Book Publishers
Quoting is all around us. But do we really know what it means? How do people actually quote today, and how did our present systems come about? This book brings together a down-to-earth account of contemporary quoting with an examination of the comparative and historical background that lies behind it and the characteristic way that quoting links past and present, the far and the near.

Drawing from anthropology, cultural history, folklore, cultural studies, sociolinguistics, literary studies and the ethnography of speaking, Ruth Finnegan’s fascinating study sets our present conventions into cross-cultural and historical perspective.
citation 
october 2013 by juliusbeezer