Submissions/ORCID and DOI as main tools to increase the quality and quantity of scholarship, and how to implement these improvements in Wikipedia
After careful consideration, the Programme Committee has decided not to accept the below submission at this time. Thank you to the author(s) for participating in the Wikimania 2014 programme submission process; we hope to still see you at Wikimania this August.
- Submission no. 6033
- Title of the submission
- ORCID and DOI as main tools to increase the quality and quantity of scholarship, and how to implement these improvements in Wikipedia
- Type of submission (discussion, hot seat, panel, presentation, tutorial, workshop)
- Author of the submission
- Pål Magnus Lykkja and Thomas Gramstad, University Library of Oslo, Norway
- E-mail address
- Country of origin
- Affiliation, if any (organisation, company etc.)
- University Library of Oslo
- Personal homepage or blog
- Abstract (at least 300 words to describe your proposal)
- Single-blind peer review and copyright are, in our opinion, the two most problematic issues hindering science today. Single-blind peer review has created problems such as regression to mean quality (Sydney Brenner), a strengthening of orthodoxy that tolerates no deviation from existing theories, and increasing instances of forgery and fraud. Binswanger has identified many different crowding-out effects, such as the crowding out of quality by quantity and the crowding out of unconventional approaches to scholarship. Copyright laws created for a paper-based scholarly infrastructure have become one of the biggest obstacles to content mining and to the creation of sophisticated discovery systems, because of the copyright holder's monopoly on issuing copies and annotations. Copyright has also become a big obstacle to opening up the exchange of ideas, to remixing content in the creative writing process, and to remixing research results directly into streams of learning such as Massive Open Online Resources (not Courses, as Keith Devlin calls them).
We suggest that the appropriate answer to these challenges is simply to 1) separate content from tools and publishing services, with institutional repositories at the core, 2) use XML and HTML formats, 3) use the CC BY licence, 4) register authors with ORCID, 5) register documents with DOIs in all relevant repositories, such as Zenodo (OpenAire) and Mozilla GetDOI (in cooperation with GitHub), together with 6) new types of quality filters such as open pre-publication and post-publication peer review.
For Wikipedians the main challenge may be this: how to exploit the new and better search techniques and the new content mining techniques that arise from a modernised scholarly infrastructure. New search and content mining will make it possible to create up-to-date Wikipedia articles of higher quality than today's discovery and evaluation tools allow, based as they are on single-blind peer review, low-quality search and no access to content mining. The UK and the EU are in the process of changing copyright laws to open up content mining. OpenAire has issued metadata standards for authors, documents and formats. Many startups are implementing alternative forms of peer review, and more and more research funders demand discoverability and, subsequently, “remixability” in education systems such as MOOCs (or MOORs) and in research processes.
The system that makes remixing possible is the same system that makes efficient evaluation, search and content mining possible. This system opens up not only the reading of the final scholarly output, but also the assessment, creation and publication phases, to anyone interested in the subject and, importantly, to machine-to-machine communication. With machine-readable DOIs and ORCIDs it should be possible to create filters, just as we prefer to listen to advice from certain groups of people in the real world: one identifies which ORCID iDs one prefers to receive contributions from, based on the DOI-registered documents belonging to each ORCID. Open content makes it scalable to mine the open scholarly “corpus” (a large and structured set of media content), and that makes it possible for non-expert Wikipedians to perform and document systematic reviews, just as Google and Wikipedia turned anyone into an “expert” through natural language searches.
The most time-consuming and costly part of systematic reviews (costly through heavy use of experts for basically trivial search tasks) consists of searching through numerous, difficult-to-search silos, often with differing metadata systems and sometimes with user restrictions such as digital rights management (DRM). An automatically executed systematic review is easy to re-run and to document, compared with systematic reviews carried out across different information silos, which are hard to document and difficult to perform. A better peer review system and better copyright laws, with fewer of the weaknesses of the traditional formal peer review system, are of course an advantage for Wikipedians as well as for the researchers themselves; the disappearance of problems is perhaps as important as the realisation of the opportunities. What matters for Wikipedians is to learn 1) new search techniques, 2) content mining techniques, and 3) how to make Wikipedia a dynamic source of “knowledge as a continual stream” rather than “static knowledge for eternity”, by bringing Wikipedia to the research frontier characterised by continual “beta” (illustrated by fig. 2 and fig. 6).
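The ORCID/DOI filter described in the abstract can be sketched in a few lines of Python. Everything here is illustrative and not the authors' implementation: the ORCID iDs, DOIs and the `DOIS_BY_ORCID` mapping are made up, and a real system would fetch the DOIs registered to each ORCID from a registry service rather than hard-code them.

```python
# Hypothetical mapping from ORCID iDs to the DOIs registered to them.
# In practice this data would come from an ORCID or DOI registry lookup.
DOIS_BY_ORCID = {
    "0000-0001-0000-0001": {"10.5281/zenodo.1001", "10.5281/zenodo.1002"},
    "0000-0002-0000-0002": {"10.1234/example.42"},
}

def trusted_orcids(trusted_dois, dois_by_orcid):
    """Return the ORCID iDs that have authored at least one trusted DOI."""
    return {
        orcid
        for orcid, dois in dois_by_orcid.items()
        if dois & trusted_dois  # set intersection: any overlap counts
    }

def filter_contributions(contributions, trusted):
    """Keep only contributions whose author ORCID is in the trusted set."""
    return [c for c in contributions if c["orcid"] in trusted]

# A reader trusts one DOI; the filter derives the trusted authors from it.
trusted = trusted_orcids({"10.5281/zenodo.1002"}, DOIS_BY_ORCID)

contributions = [
    {"orcid": "0000-0001-0000-0001", "text": "edit A"},
    {"orcid": "0000-0002-0000-0002", "text": "edit B"},
]
kept = filter_contributions(contributions, trusted)  # only "edit A" survives
```

Because the filter is ordinary code over machine-readable identifiers, the same selection is trivially re-runnable and documentable, which is the property the abstract claims for automated systematic reviews.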
- Length of session (if other than 30 minutes, specify how long)
- 30 minutes
- Will you attend Wikimania if your submission is not accepted?
- Slides or further information (optional)
- Special requests
If you are interested in attending this session, please sign with your username below. This will help reviewers to decide which sessions are of high interest. Sign with a hash and four tildes. (# ~~~~).