forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_pdf_url | forum_url | note_id | note_type | note_created | note_replyto | note_readers | note_signatures | venue | year | note_text
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Q1DaWDXKrY | From data to images: OpenRefine for Wikidata and Wikimedia Commons | [
"Elena Martellotta"
] | Many cultural institutions are increasingly embracing the open data movement by making their collections, research, and archival materials accessible to the public, in line with the Wikimedia movement's mission of providing free access to human knowledge.
Through platforms like Wikidata and Wikimedia Commons, institutions can share structured data on artworks, historical events and figures, making them interoperable. By 'freeing' data and images for use and re-use, they facilitate research, education, and collaboration.
The pilot project "Progetto Dati Lombardia" grew out of this context, focusing on the practices, methods and possible uses of OpenRefine's extensions. Starting from a public-domain dataset of cultural buildings in the Lombardia region, the data was first wrangled with OpenRefine, checked using QuickStatements, and then uploaded to Wikidata. Transforming data values into property values minimised the loss of information.
Subsequently, in parallel with the specific implementation of OpenRefine for Wikimedia Commons, the Egyptian Museum of Turin released not just data but also pictures of its collection, already available on its website. The first step was similar to the previous, data-focused project. For the images, it was necessary to link each artwork to its item page on Wikidata, so that the metadata were already structured before the upload to Wikimedia Commons.
These projects showed how cultural institutions can contribute to Wikidata and Wikimedia Commons, preserving and disseminating cultural knowledge and enhancing the visibility of cultural heritage globally. | [
"OpenRefine",
"Wikidata",
"Wikimedia Commons",
"Cultural Heritage",
"Museums",
"Open Data",
"Public Domain",
"Quickstatements"
] | https://openreview.net/pdf?id=Q1DaWDXKrY | https://openreview.net/forum?id=Q1DaWDXKrY | 8AhuiRX8DL | official_review | 1,736,249,824,797 | Q1DaWDXKrY | [
"everyone"
] | [
"~Alessandra_Boccone1"
wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: A valid case study for a lightning talk
review: The abstract briefly but effectively describes the premises, methodology and results of an interesting project involving the integrated use of various Wikimedia platforms and the experimentation with a complex tool such as OpenRefine, useful for improving and speeding up data entry into Wikidata. The project represents a valid case study, broadly innovative and certainly replicable, in line with the topics of the conference.
compliance: 5
scientific_quality: 4
originality: 4
impact: 5
confidence: 5 |
PQSOgMCpGX | Integrating Projects Working Around An Open Database Of Published Music Recordings: a call for collaboration | [
"Toni Sant"
] | Outside the immediate purview of the expansive WikiProject Music on Wikipedia, there are at least three other projects relating to capturing structured data about published music recordings through Wikidata, Wikibase, and other Wikimedia platforms. One is the AfroSounds project, led by Oreoluwa (User:ReoMartins) from Nigeria since 2022. Another is the proposal by Daniel Antal (presented at the 2024 CEE Meeting) to build a music data sharing space with Wikibase, starting with music published in Slovakia and inspired by the Luxembourg Shared Authority File project. And the third is the work of the Malta Music Memory Project, developed by the author with the M3P Foundation since 2009, using a MediaWiki site and Wikidata. This is a call for other academic researchers to collaborate on the development of an integrated data structure and workflow model – including possibilities for automation through bots – for published music recordings that is applicable to Wikidata. The aim is to enable systematic data gathering on a global level, building on existing datasets currently held by music publishing platforms and organisations that seek to make them more findable. Restrictive database rights, which sometimes preclude integration into Wikimedia's open knowledge ecosystem, may require staging via Wikibase, rather than Wikidata, in the first instance. | [
"digital curation",
"music",
"wikidata"
] | https://openreview.net/pdf?id=PQSOgMCpGX | https://openreview.net/forum?id=PQSOgMCpGX | Hn7M1fbp5z | official_review | 1,736,335,592,092 | PQSOgMCpGX | [
"everyone"
] | [
"~Carlo_Bianchini1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Improving and expanding structured data on Music in Wikimedia environment
review: The lightning talk aims to describe the state of the art of music data on Wikimedia platforms and to engage more volunteers in this relevant field.
compliance: 5
scientific_quality: 5
originality: 4
impact: 4
confidence: 4 |
PQSOgMCpGX | Integrating Projects Working Around An Open Database Of Published Music Recordings: a call for collaboration | [
"Toni Sant"
] | Outside the immediate purview of the expansive WikiProject Music on Wikipedia, there are at least three other projects relating to capturing structured data about published music recordings through Wikidata, Wikibase, and other Wikimedia platforms. One is the AfroSounds project, led by Oreoluwa (User:ReoMartins) from Nigeria since 2022. Another is the proposal by Daniel Antal (presented at the 2024 CEE Meeting) to build a music data sharing space with Wikibase, starting with music published in Slovakia and inspired by the Luxembourg Shared Authority File project. And the third is the work of the Malta Music Memory Project, developed by the author with the M3P Foundation since 2009, using a MediaWiki site and Wikidata. This is a call for other academic researchers to collaborate on the development of an integrated data structure and workflow model – including possibilities for automation through bots – for published music recordings that is applicable to Wikidata. The aim is to enable systematic data gathering on a global level, building on existing datasets currently held by music publishing platforms and organisations that seek to make them more findable. Restrictive database rights, which sometimes preclude integration into Wikimedia's open knowledge ecosystem, may require staging via Wikibase, rather than Wikidata, in the first instance. | [
"digital curation",
"music",
"wikidata"
] | https://openreview.net/pdf?id=PQSOgMCpGX | https://openreview.net/forum?id=PQSOgMCpGX | 9rN0JIUqbY | official_review | 1,735,937,263,499 | PQSOgMCpGX | [
"everyone"
] | [
"~Camillo_Carlo_Pellizzari_di_San_Girolamo1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Music recordings in Wikidata and Wikibase instances
review: The proposal describes how Wikidata and Wikibase instances have been (and are being) used to gather data about music recordings, offering a comprehensive overview of existing projects and making proposals to systematise data collection in this field.
compliance: 5
scientific_quality: 5
originality: 5
impact: 5
confidence: 4 |
LeDpCKLh3D | Automatic Verification of References of Wikidata Statements | [
"Elena Simperl",
"Odinaldo Rodrigues",
"Albert Meroño-Peñuela",
"Kholoud Saad alghamdi",
"Gabriel Maia Rocha Amaral",
"Jongmo kim",
"Miriam Redi",
"Bohui Zhang",
"Yihang Zhao"
] | Wikidata is one of the world's most important machine-readable data assets. It is used by web search engines, virtual assistants such as Siri and Alexa, fact checkers, and in over 800 projects in the Wikimedia ecosystem. Wikidata contains information on about 100 million topics, edited daily by 24 thousand active editors. Manually checking whether an individual reference supports the claim of a Wikidata statement is not very difficult, but it is a slow and somewhat tedious process. Given the overall number of statements to check, the collaborative nature of the knowledge graph, and the fact that referenced documents can change over time, preserving the quality of the references is an onerous process requiring continuous intervention. In this paper, we present ProVe - a reference verification tool for Wikidata statements developed from state-of-the-art research for quality assurance in collaboratively constructed knowledge graphs. ProVe harnesses the power of LLMs to automatically verify and assess the quality of the references in Wikidata. | [
"reference verification",
"quality assurance",
"tools",
"Wikidata"
] | https://openreview.net/pdf?id=LeDpCKLh3D | https://openreview.net/forum?id=LeDpCKLh3D | vjeWelQe2P | official_review | 1,735,933,795,175 | LeDpCKLh3D | [
"everyone"
] | [
"~Camillo_Carlo_Pellizzari_di_San_Girolamo1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: A very important effort to measure a crucial aspect of Wikidata's data quality
review: References are fundamental to Wikidata's data quality; a reference verification tool can have a very important impact, both in assessing the present quality of Wikidata and in highlighting which statements require better references, in order to focus the community's efforts there. ProVe has been developed by a group of researchers and has a solid scientific basis; it has also already been well documented on Wikidata (https://www.wikidata.org/wiki/Wikidata:ProVe).
compliance: 5
scientific_quality: 5
originality: 5
impact: 5
confidence: 5 |
KiF1D4fIkH | Opening up and linking type catalogues in Wikidata – increasing the visibility of natural history collections | [
"Sabine von Mering"
] | Natural history museums and collections around the world house several billion preserved specimens used by the scientific community to answer questions about the biodiversity and geodiversity on Earth. Collections are increasingly digitised, opened up and made accessible to the wider scientific community and beyond. However, initially only basic information is shared; further related metadata or information on historical contexts is often not connected.
Type specimens are the most important objects in these collections because they are associated with the names of new taxa, serving as a permanent reference. As name-bearing specimens, the type material is regularly examined by scientists as decisive objects for resolving taxonomic issues and clarifying species delimitation. Having easy access to type material is, therefore, an important prerequisite for facilitating research. Type catalogues list present and sometimes lost or missing type material of specific – historical and contemporary – collections, for certain taxonomic groups or research expeditions. Collection catalogues have been published for several centuries and comprise information such as the housing institutions of types and duplicate material, type localities, annotations, and sometimes illustrations.
The open, multilingual and multidisciplinary knowledge base Wikidata supports discoverability, transparency and accessibility of research data. It is community-curated, serves as a hub for external identifiers and provides structured, human- and machine-readable data. Wikidata already comprises data on a huge number of publications, including type catalogues, many of which are accessible via the Biodiversity Heritage Library (BHL) or other digital libraries. However, they are currently not easily searchable as a type of scholarly work.
Data from Wikidata can be reused by other platforms and tools. Well-curated high-quality datasets related to natural history collections require the use of community-agreed standards and persistent identifiers as well as international collaboration. This includes exchange with different organisations promoting standardisation and open access to biodiversity data such as the Global Biodiversity Information Facility (GBIF), the Consortium of European Taxonomic Facilities (CETAF) and Biodiversity Information Standards (TDWG). For example, a TDWG Task Group is developing a terminology on how to model research expeditions in Wikidata, and the BHL-Wiki Working Group is involved in data modelling. Such collaborations – also with the wider Wiki community – help to improve the modelling of type catalogues and other entities in Wikidata and to develop best practice recommendations.
In this study, new Wikidata items are created for articles of type catalogues published in different languages and academic journals, and existing items are enriched by adding different external identifiers (e.g. DOIs, BHL page IDs). In addition, further entities such as the type specimen holding institution(s) or collection agents connected to the material or collection are linked. The project started with type catalogues from the Museum für Naturkunde Berlin, and was then expanded to compile additional catalogues from around the world. The growing open dataset in Wikidata can be (re)used for research in different fields such as taxonomy and systematics, history of collections, digital humanities or provenance research. It highlights the potential of Wikidata for research and knowledge contextualisation. | [
"collection agents",
"Linked Open Data",
"museum catalogues",
"natural history collections",
"Open Science",
"scholarly publications",
"taxonomy",
"type catalogues",
"Wikidata"
] | https://openreview.net/pdf?id=KiF1D4fIkH | https://openreview.net/forum?id=KiF1D4fIkH | nhMOg8gAKn | official_review | 1,736,120,128,755 | KiF1D4fIkH | [
"everyone"
] | [
"~Antonella_Buccianti1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Wikidata for increasing the visibility of natural history collections as an interesting application for natural museums
review: The paper discusses the open, multilingual and multidisciplinary knowledge base Wikidata as a means to support discoverability, transparency and accessibility of research data. The impact on the management of data from natural history museums could be very significant in fields such as taxonomy and systematics, history of collections, digital humanities or provenance research, highlighting the potential of Wikidata for research and knowledge contextualisation.
compliance: 4
scientific_quality: 4
originality: 4
impact: 5
confidence: 3 |
K5wwBLacpM | A data visualization tool for heritage buildings in India and Bangladesh: the project of a wiki for ID-SCAPES | [
"Giuseppe Resta",
"Sidh Losa Mendiratta",
"Tiago Filipe Trindade Cruz"
] | In late 2023, the European Research Council approved the funding for the ID-SCAPES project to study and document the early modern religious architecture of the Christian minorities in India and Bangladesh. These religious sites are often multi-layered and contested heritage, and some buildings have suffered from increasing neglect, erasure and even effacement during recent decades. Furthermore, after the countries’ independence, many within the Church hierarchies sought to distance themselves from colonial legacies, and many of the older churches were radically transformed or completely rebuilt with contemporary designs. This politically influenced process gained pace during the 1960s and continues today, evincing conflicting notions of heritage and identity.
Taking into consideration the risks arising from these processes and the progressive erasure of cultural heritage, ID-SCAPES aims to produce a Social History of the Built Environment of India and Bangladesh’s medieval and early modern churches and sacral landscapes (built before ca. 1800), including both functioning and ruined buildings.
One of the principal outputs of the project will be herichurch.org, a wiki platform intended to provide a digital map, connected to the visual database, accessible to a wider audience.
Following the idea of “preservation by record”, the project's Wikibase will combine 3D visualizations and CAD drawings as fundamental tools for cultural heritage conservation and research.
Challenging the European-centric historiographical framework that has been commonly employed for these themes, and through extensive fieldwork and the analysis of visual and written documents that remain unexplored, the project will advance a new methodological approach that embraces the buildings’ complex histories. Addressing issues such as caste, cultural “accommodation,” “indigenous” agency, and local spatial and artistic traditions, ID-SCAPES will uncover the impact of such factors on church architecture.
Hence, the aim of ID-SCAPES’ wiki is to visualize historical information, gathering and processing data from research and fieldwork. In turn, such digital tools for endangered/contested sites can have an immediate impact on heritage management interventions.
For the Wikidata and Research 2025 conference, ideally in a “lightning talks” format, we will share our idea of how visual and written historical knowledge can be structured, possibly gathering suggestions from researchers who have achieved comparable outputs. As we are in the initial phase of the project, qualified feedback is of crucial importance to advance in the right direction.
The structure of knowledge should consist of a spatial database/digital map, visually organized for accessible consultation, and linked individual entries that constitute the main objects of research of ID-SCAPES. This system should work on a web mapping platform.
The authors have been involved in two comparable projects of heritage mapping, the hpip.org and eviterbo.fcsh.unl.pt platforms. In a similar vein, the Herichurch.org platform is expected to generate entries for a selection of about 50 representative sites during the project’s timeline. Further content will be uploaded at later dates. This wiki platform is also one of the milestones of the project.
In summary, the ID-SCAPES wiki is set to advance the understanding and preservation of early modern religious architecture in India and Bangladesh through an innovative digital platform that captures and engages with the complex histories of these significant cultural sites.
Themes:
- Data visualisations and tools.
- Projects and proposals.
Language of the presentation: English and Italian (both if necessary)
Confirmation of the physical presence in Florence of at least one of the authors: Yes
Authors bibliography: see pdf | [
"Data visualization",
"heritage buildings",
"India",
"churches",
"preservation by record"
] | https://openreview.net/pdf?id=K5wwBLacpM | https://openreview.net/forum?id=K5wwBLacpM | osHrCrx0af | official_review | 1,736,696,589,802 | K5wwBLacpM | [
"everyone"
] | [
"~Iolanda_Pensa1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Wikibase but potentially also Wikidata and other Wikimedia projects
review: I find the topic fascinating, and it is relevant not only for Wikibase but also for Wikidata directly.
The topic of heritage is always very relevant for the Wikimedia projects, and the fact that India and Bangladesh have freedom of panorama can facilitate not only the collaboration with Wikidata but also the upload of images and other content to Wikimedia Commons (such uploads are made possible by open licenses like CC BY 4.0, which the open science policy of European projects always requires).
Furthermore, more content about architecture in India and Bangladesh can contribute to filling Wikidata and Wikimedia knowledge gaps. There are also very active Wikimedia communities in both countries (which it may be interesting to involve).
A very interesting lightning talk.
compliance: 5
scientific_quality: 5
originality: 4
impact: 5
confidence: 4 |
K5wwBLacpM | A data visualization tool for heritage buildings in India and Bangladesh: the project of a wiki for ID-SCAPES | [
"Giuseppe Resta",
"Sidh Losa Mendiratta",
"Tiago Filipe Trindade Cruz"
] | In late 2023, the European Research Council approved the funding for the ID-SCAPES project to study and document the early modern religious architecture of the Christian minorities in India and Bangladesh. These religious sites are often multi-layered and contested heritage, and some buildings have suffered from increasing neglect, erasure and even effacement during recent decades. Furthermore, after the countries’ independence, many within the Church hierarchies sought to distance themselves from colonial legacies, and many of the older churches were radically transformed or completely rebuilt with contemporary designs. This politically influenced process gained pace during the 1960s and continues today, evincing conflicting notions of heritage and identity.
Taking into consideration the risks arising from these processes and the progressive erasure of cultural heritage, ID-SCAPES aims to produce a Social History of the Built Environment of India and Bangladesh’s medieval and early modern churches and sacral landscapes (built before ca. 1800), including both functioning and ruined buildings.
One of the principal outputs of the project will be herichurch.org, a wiki platform intended to provide a digital map, connected to the visual database, accessible to a wider audience.
Following the idea of “preservation by record”, the project's Wikibase will combine 3D visualizations and CAD drawings as fundamental tools for cultural heritage conservation and research.
Challenging the European-centric historiographical framework that has been commonly employed for these themes, and through extensive fieldwork and the analysis of visual and written documents that remain unexplored, the project will advance a new methodological approach that embraces the buildings’ complex histories. Addressing issues such as caste, cultural “accommodation,” “indigenous” agency, and local spatial and artistic traditions, ID-SCAPES will uncover the impact of such factors on church architecture.
Hence, the aim of ID-SCAPES’ wiki is to visualize historical information, gathering and processing data from research and fieldwork. In turn, such digital tools for endangered/contested sites can have an immediate impact on heritage management interventions.
For the Wikidata and Research 2025 conference, ideally in a “lightning talks” format, we will share our idea of how visual and written historical knowledge can be structured, possibly gathering suggestions from researchers who have achieved comparable outputs. As we are in the initial phase of the project, qualified feedback is of crucial importance to advance in the right direction.
The structure of knowledge should consist of a spatial database/digital map, visually organized for accessible consultation, and linked individual entries that constitute the main objects of research of ID-SCAPES. This system should work on a web mapping platform.
The authors have been involved in two comparable projects of heritage mapping, the hpip.org and eviterbo.fcsh.unl.pt platforms. In a similar vein, the Herichurch.org platform is expected to generate entries for a selection of about 50 representative sites during the project’s timeline. Further content will be uploaded at later dates. This wiki platform is also one of the milestones of the project.
In summary, the ID-SCAPES wiki is set to advance the understanding and preservation of early modern religious architecture in India and Bangladesh through an innovative digital platform that captures and engages with the complex histories of these significant cultural sites.
Themes:
- Data visualisations and tools.
- Projects and proposals.
Language of the presentation: English and Italian (both if necessary)
Confirmation of the physical presence in Florence of at least one of the authors: Yes
Authors bibliography: see pdf | [
"Data visualization",
"heritage buildings",
"India",
"churches",
"preservation by record"
] | https://openreview.net/pdf?id=K5wwBLacpM | https://openreview.net/forum?id=K5wwBLacpM | cUPcGU9MMG | official_review | 1,736,101,138,176 | K5wwBLacpM | [
"everyone"
] | [
"~Elena_Marangoni1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Architecture and history in structured data and geographical data
review: This project has some points in common with submission #45 "eViterbo", as declared by the authors, but it is at its first steps, so the authors propose the lightning talk format, which can be useful for a comparison and exchange of ideas and projects with other researchers. It is very interesting and complex, aiming to bring together architecture and cultural heritage preservation along with history, in particular colonial history.
compliance: 5
scientific_quality: 5
originality: 5
impact: 5
confidence: 4 |
INxfzXeHAb | A visualization system based on Wikidata for supporting and monitoring the Wiki Loves Monuments Italian contest | [
"Tommaso Elli",
"Andrea Benedetti",
"Angeles Briones",
"Michele Mauri",
"Dario Crespi"
] | This paper introduces the Wiki Loves Monuments Observatory, an interactive information and visualization system designed to leverage Wikidata (Vrandečić & Krötzsch, 2014) in support of the contest Wiki Loves Monument Italy (WLM Italy).
Since 2012, WLM Italy (Azizifard et al., 2023; Bertacchini & Pensa, 2023) has been organized by the local chapter of Wikimedia to enhance the documentation of Italian cultural heritage using content curated in Wikidata and Wikimedia Commons. The observatory facilitates the work of organizers, volunteers, and participants, and helps increase the quality of the materials produced. Specifically, it provides greater support in the following areas: (1) the organization of the national contest as well as regional and local competitions; (2) the engagement of volunteers, both to enhance the database and to produce new photographs; (3) the promotion of the contest through social media and other communication channels; (4) the monitoring of the coverage of Italian cultural heritage documentation on Wikidata and Wikimedia Commons.
The platform consists of a database (DB) and a user interface (UI). The UI (https://data.wikilovesmonuments.it/) empowers volunteers, heritage professionals, and local organizers to plan outreach campaigns, coordinate activities (e.g., wiki-expeditions), and enrich digital cultural repositories, leveraging a map visualization and a filterable list of cultural properties. It offers both aggregated and granular views, enabling users to understand coverage patterns, identify areas in need of documentation, reveal temporal trends in community-driven efforts, and produce visual reports. The DB (https://wlm-it-visual.wmcloud.org/api/schema/swagger-ui/) is automatically updated using specific Wikidata SPARQL queries and API requests to Wikimedia Commons. It is accessible via unrestricted APIs and is currently used by the presented observatory and other projects.
The relevance of this work lies not only in the technical demonstration of using Wikidata as a foundational infrastructure, but also in the participatory design approach (Dörk et al., 2020; Morelli et al., 2021; Sanders & Stappers, 2008) that underpins its implementation. The richness of Wikidata content and the heterogeneity of WLM stakeholders enable virtually limitless possibilities for data analysis. Therefore, identifying and prioritizing the features to be implemented is a complex and non-trivial task that requires activities designed to define, with the end users, what needs to be made available and how. The design methodology entailed desk research, participant observation and structured interviews, conducted to inform the realization of a participatory workshop. This step was crucial, as it allowed for the early identification of all necessary features, facilitating the design of a robust, modular, and upgradable infrastructure capable of adapting to future needs. The resulting system is not merely a dashboard visualization, but a flexible framework where community contributions, iterative refinement, and open data principles converge.
By synthesizing these technical and participatory insights, the contribution offers a replicable model for future efforts to leverage Wikidata in projects related to cultural heritage and volunteer engagement. On one hand, it provides examples of data integration and curation strategies; on the other, it exposes the need for participatory design methods to accommodate and anticipate multiple stakeholders’ needs and perspectives. | [
"Wikidata",
"Wiki Loves Monuments",
"Cultural Heritage",
"Participatory Design",
"Volunteer Engagement"
] | https://openreview.net/pdf?id=INxfzXeHAb | https://openreview.net/forum?id=INxfzXeHAb | RsPDJ7Nuxb | official_review | 1,736,014,570,362 | INxfzXeHAb | [
"everyone"
] | [
"~Luca_Martinelli1"
wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Approved
review: Wiki Loves Monuments has been the flagship project of the Wikimedia Italia association for the past 15 years, and Wikimedia Italia was among the first to transfer its database of codes and authorisations to Wikidata, precisely in order to exploit the project's potential in organising the contest. This tool is an application that makes it possible to assess 15 years of effort in collecting data, authorisations and images of Italy's monuments and places of cultural interest, and it is therefore fully consistent with what is to be demonstrated about Wikidata's potential.
compliance: 5
scientific_quality: 5
originality: 4
impact: 4
confidence: 5 |
IBIXAzbctF | Philology and Wikidata. Towards a reorganisation of the lexicon of critical editions | [
"Giuseppe Arena"
] | The lexicon of philology, in particular that of critical editions, is represented on Wikidata only sparsely and in fragments. The items for the fundamental concepts of this field are few, often cursory, hard to retrieve via SPARQL queries, and lacking the conceptual nuances and intellectual debate that philology has developed over time. This gap undermines Wikidata's effectiveness as a tool for academic research and the interdisciplinary sharing of knowledge.
The proposal aims to reorganise and structure the lexicon of critical editions using both Wikidata and Wikibase, the open-source platform underlying Wikidata. Wikibase will make it possible to create a controlled, customisable environment for modelling and testing the lexicon, guaranteeing greater flexibility in organising the data and facilitating a possible transfer to Wikidata.
The project takes as its starting point LexiconSe (Lexicon of Scholarly Editing), a collaborative, open, multilingual resource that collects definitions from articles, monographs and other academic sources on critical editions and on philology in the broad sense. LexiconSe will be integrated into a Wikibase instance, where the terms will be organised with reference to established ontologies such as the Critical Apparatus Ontology (CAO) and the Scholarly Editing Ontology.
The proposal intends to involve the academic community and, more broadly, the Wikidata and Wikimedia communities, fostering a collaborative approach that prompts reflection on the need to draw up guidelines for contributors on philological topics. | [
"philology",
"ontology",
"wikidata",
"wikibase",
"lexicon"
] | https://openreview.net/pdf?id=IBIXAzbctF | https://openreview.net/forum?id=IBIXAzbctF | zxscuEentJ | official_review | 1,736,496,473,450 | IBIXAzbctF | [
"everyone"
] | [
"~Monica_Berti1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: An important contribution to a discussion on involving philologists in Wikidata
review: The author presents a proposal to reorganize and structure the lexicon of critical editions using Wikidata and Wikibase. The aim is certainly very ambitious, and the project starts from the collaborative resource Lexicon of Scholarly Editing (LexiconSe). The proposal is interesting as a stimulus for involving the academic community in Wikidata and Wikimedia, fostering a collaborative approach in the field of philology, which still needs a push in this direction.
compliance: 4
scientific_quality: 4
originality: 5
impact: 5
confidence: 5 |
IBIXAzbctF | Filologia e Wikidata. Per una riorganizzazione del lessico delle Edizioni critiche | [
"Giuseppe Arena"
] | The lexicon of philology, particularly that of critical editions, is represented on Wikidata only sparsely and fragmentarily. The items for the fundamental concepts of this field are few, often cursory, hard to retrieve through SPARQL queries, and lacking the conceptual nuances and the intellectual debate that philology has developed over time. This gap undermines the effectiveness of Wikidata as a tool for academic research and for the interdisciplinary sharing of knowledge.
The proposal aims to reorganize and structure the lexicon of critical editions using both Wikidata and Wikibase, the open-source platform underlying Wikidata. Wikibase will make it possible to create a controlled, customizable environment for modelling and testing the lexicon, ensuring greater flexibility in organizing the data and facilitating an eventual transfer to Wikidata.
The project takes as its starting point LexiconSe (Lexicon of Scholarly Editing), a collaborative, open, multilingual resource that collects definitions drawn from articles, monographs, and other academic sources on critical editions and on philology in the broad sense. LexiconSe will be integrated into a Wikibase instance, where terms will be organized with reference to established ontologies such as the Critical Apparatus Ontology (CAO) and the Scholarly Editing Ontology.
The proposal intends to involve the academic community and, more broadly, the Wikidata and Wikimedia communities, fostering a collaborative approach that prompts reflection on the need to develop contributor guidelines on philological topics. | [
"philology",
"ontology",
"wikidata",
"wikibase",
"lexicon"
] | https://openreview.net/pdf?id=IBIXAzbctF | https://openreview.net/forum?id=IBIXAzbctF | HI6PRmixBp | official_review | 1,736,244,765,523 | IBIXAzbctF | [
"everyone"
] | [
"~Lucia_Sardo1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Review
review: The proposal is fairly well structured, with clearly defined objectives; the author, starting from a good knowledge of the existing tools and resources in the disciplinary field of interest, proposes an interesting use of Wikibase to structure data on the lexicon of critical editions by integrating existing sources. The proposal will certainly be of interest to scholars of the discipline and to those interested in working on lexicological aspects.
compliance: 4
scientific_quality: 4
originality: 5
impact: 5
confidence: 4 |
HPYI1sHybV | Amharic Audio Data Search Engine using Text-Based Spoken Term Detection with Models | [
"Zemenfes Hailemariam Gebremedhin"
] | The generation of audio files from various sources, including the internet and social media, has increased significantly in the rapidly expanding digital landscape. It is difficult to efficiently access specific spoken words from this vast collection of Amharic audio data. To address this, we propose a novel method that combines Text-Based Spoken Term Detection (STD) with models. Our methodology includes speech segmentation with pydub, the development of an ASR model, and the implementation of keyword-based STD.
The ASR model successfully transcribes audio files, allowing meaningful keywords to be extracted for more accurate and frequent search queries. An analysis of 37 audio files reveals that the sentence error rate (SER) is 91.7 percent (33 of 36 sentences have errors) and the word error rate (WER) is 98.3 percent (285 of 290 words have errors). It improved search accuracy and efficiency for specific spoken terms, significantly improving search capabilities for users of Amharic multimedia resources.
However, the study emphasizes the need for a larger dataset to improve transcription capabilities and reduce errors, with the potential to revolutionize Amharic audio search engines and empower users in accessing precise information from Amharic audio data, ultimately transforming how we interact with and use Amharic audio resources. | [
"ASR (Automatic Speech Recognition)",
"STD (Spoken Term Detection)",
"WER (Word Error Rate)",
"SER (Search Error Rate)"
] | https://openreview.net/pdf?id=HPYI1sHybV | https://openreview.net/forum?id=HPYI1sHybV | J02qHPcLVw | official_review | 1,735,556,084,112 | HPYI1sHybV | [
"everyone"
] | [
"~Camillo_Carlo_Pellizzari_di_San_Girolamo1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Seemingly no pertinence with Wikibase and/or Wikidata
review: The abstract does not show any clear connection with Wikidata and/or Wikibase, which are the themes of the conference; it might be hypothesized that the ASR model could be useful for the Wikibase instance Lingua Libre (https://lingualibre.org/) and the lexemes in Wikidata.
compliance: 2
scientific_quality: 3
originality: 3
impact: 2
confidence: 3
notes: - |
HPYI1sHybV | Amharic Audio Data Search Engine using Text-Based Spoken Term Detection with Models | [
"Zemenfes Hailemariam Gebremedhin"
] | The generation of audio files from various sources, including the internet and social media, has increased significantly in the rapidly expanding digital landscape. It is difficult to efficiently access specific spoken words from this vast collection of Amharic audio data. To address this, we propose a novel method that combines Text-Based Spoken Term Detection (STD) with models. Our methodology includes speech segmentation with pydub, the development of an ASR model, and the implementation of keyword-based STD.
The ASR model successfully transcribes audio files, allowing meaningful keywords to be extracted for more accurate and frequent search queries. An analysis of 37 audio files reveals that the sentence error rate (SER) is 91.7 percent (33 of 36 sentences have errors) and the word error rate (WER) is 98.3 percent (285 of 290 words have errors). It improved search accuracy and efficiency for specific spoken terms, significantly improving search capabilities for users of Amharic multimedia resources.
However, the study emphasizes the need for a larger dataset to improve transcription capabilities and reduce errors, with the potential to revolutionize Amharic audio search engines and empower users in accessing precise information from Amharic audio data, ultimately transforming how we interact with and use Amharic audio resources. | [
"ASR (Automatic Speech Recognition)",
"STD (Spoken Term Detection)",
"WER (Word Error Rate)",
"SER (Search Error Rate)"
] | https://openreview.net/pdf?id=HPYI1sHybV | https://openreview.net/forum?id=HPYI1sHybV | 9Cgq8FubRD | official_review | 1,736,698,012,926 | HPYI1sHybV | [
"everyone"
] | [
"~Iolanda_Pensa1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Lingua Libre
review: The proposal does not seem pertinent to the conference Wikidata and Research.
I can see some potential links with the Wiki project https://lingualibre.org, which adds recordings made by the communities to Wikimedia Commons and links them to Wikipedia and Wikidata (Lingua Libre is really a great project already well-connected with the Wikimedia projects). Maybe it could be interesting for the author to explore those synergies for the future.
compliance: 1
scientific_quality: 3
originality: 3
impact: 2
confidence: 3 |
GraNomoG42 | Committenze semantiche: Wikidata per la ricerca storico-artistica | [
"Alessio Ionna"
] | In recent years the humanities have explored the potential of digital technologies, and in particular the applications of the Semantic Web, to study complex cultural phenomena such as artistic patronage. This research presents a possible application of Wikidata in art history to represent, in a semantic and collaborative environment, the patronage of a noble family, the Buonaccorsi family of Macerata, one of the most influential dynasties of the Papal States between the 17th and 18th centuries.
Through a systematic survey of the sources, it was possible to build in Wikidata a dataset of over 400 items attributable to the context of the Macerata family. Each item related to the Buonaccorsi patronage was properly described and linked through the Wikidata properties deemed most suitable, thus providing a diachronic and interconnected view of the artistic phenomenon. Moreover, the ability to reference statements has strengthened the reliability of the data, making their reuse in a scientific context more authoritative. This study shows how Wikidata, through Linked Open Data, can offer new perspectives for art-historical research, gathering on a single platform, in the form of data, sources scattered across different institutions and streamlining information-retrieval activities. It also highlights how open platforms can innovate research in the humanities, fostering the construction and dissemination of new knowledge and research tools for historians and communities of interest. | [
"Wikidata",
"Committenza artistica",
"Ricerca storico-artistica",
"Linked Open Data."
] | https://openreview.net/pdf?id=GraNomoG42 | https://openreview.net/forum?id=GraNomoG42 | NM8xQGjkY8 | official_review | 1,736,247,620,525 | GraNomoG42 | [
"everyone"
] | [
"~Alessandra_Boccone1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Good practice in the use of Wikidata in the art-historical field
review: The activity illustrated in the abstract is a valid example of good practice in the use of Wikidata in art-historical research, through an informed use of sources and with a certain positive impact on research in this field.
It would have been useful to briefly describe in the abstract the workflow and any critical issues encountered, in order to get a more complete picture of the project and more elements for evaluation.
compliance: 4
scientific_quality: 4
originality: 4
impact: 4
confidence: 5 |
GraNomoG42 | Committenze semantiche: Wikidata per la ricerca storico-artistica | [
"Alessio Ionna"
] | In recent years the humanities have explored the potential of digital technologies, and in particular the applications of the Semantic Web, to study complex cultural phenomena such as artistic patronage. This research presents a possible application of Wikidata in art history to represent, in a semantic and collaborative environment, the patronage of a noble family, the Buonaccorsi family of Macerata, one of the most influential dynasties of the Papal States between the 17th and 18th centuries.
Through a systematic survey of the sources, it was possible to build in Wikidata a dataset of over 400 items attributable to the context of the Macerata family. Each item related to the Buonaccorsi patronage was properly described and linked through the Wikidata properties deemed most suitable, thus providing a diachronic and interconnected view of the artistic phenomenon. Moreover, the ability to reference statements has strengthened the reliability of the data, making their reuse in a scientific context more authoritative. This study shows how Wikidata, through Linked Open Data, can offer new perspectives for art-historical research, gathering on a single platform, in the form of data, sources scattered across different institutions and streamlining information-retrieval activities. It also highlights how open platforms can innovate research in the humanities, fostering the construction and dissemination of new knowledge and research tools for historians and communities of interest. | [
"Wikidata",
"Committenza artistica",
"Ricerca storico-artistica",
"Linked Open Data."
] | https://openreview.net/pdf?id=GraNomoG42 | https://openreview.net/forum?id=GraNomoG42 | 7wdygQpTfH | official_review | 1,736,243,669,147 | GraNomoG42 | [
"everyone"
] | [
"~Lucia_Sardo1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Review
review: The paper presents the results of a project that led to the upload of a significant dataset on the Buonaccorsi family of Macerata. Unfortunately, the abstract does not report the methodology used to collect the data, how they were processed, or how the dataset itself was uploaded. Projects of this kind certainly represent good practice for enriching Wikidata, but with the information available it is not possible to assess the presence of innovative aspects or the originality of the project.
compliance: 4
scientific_quality: 4
originality: 3
impact: 3
confidence: 4 |
GfrpYhoXu4 | Using Wikidata to describe Neo-Latin architecture-related authors, texts and words | [
"Neven Jovanović"
] | Neo-Latin literature denotes writing in Latin during the Early Modern period. On selected Neo-Latin writings related to architecture, I will demonstrate how I use Wikidata to connect authors, texts and words from a text collection (Croatiae auctores Latini, CroALa) with information in Wikidata – and what we do when the information does not exist in Wikidata yet (in the process of connecting the existing knowledge is made explicit and added to Wikidata, and especially to Wikidata Lexemes). Wikidata is used as a platform for an effort to make computationally manipulable what we know and understand about words and texts, with the larger ambition to make philological research as reproducible as possible. | [
"literature",
"Latin language",
"Neo-Latin literature",
"text collection",
"bibliography",
"architecture",
"terminology"
] | https://openreview.net/pdf?id=GfrpYhoXu4 | https://openreview.net/forum?id=GfrpYhoXu4 | UAoyv8wMlJ | official_review | 1,736,228,886,203 | GfrpYhoXu4 | [
"everyone"
] | [
"~Annick_Farina1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Use of Wikidata on a Neo-Latin corpus
review: The author intends to show how he connects authors, texts and words from a Neo-Latin corpus of writings related to architecture using Wikidata, which I find extremely interesting and innovative.
compliance: 5
scientific_quality: 5
originality: 4
impact: 5
confidence: 4 |
GfrpYhoXu4 | Using Wikidata to describe Neo-Latin architecture-related authors, texts and words | [
"Neven Jovanović"
] | Neo-Latin literature denotes writing in Latin during the Early Modern period. On selected Neo-Latin writings related to architecture, I will demonstrate how I use Wikidata to connect authors, texts and words from a text collection (Croatiae auctores Latini, CroALa) with information in Wikidata – and what we do when the information does not exist in Wikidata yet (in the process of connecting the existing knowledge is made explicit and added to Wikidata, and especially to Wikidata Lexemes). Wikidata is used as a platform for an effort to make computationally manipulable what we know and understand about words and texts, with the larger ambition to make philological research as reproducible as possible. | [
"literature",
"Latin language",
"Neo-Latin literature",
"text collection",
"bibliography",
"architecture",
"terminology"
] | https://openreview.net/pdf?id=GfrpYhoXu4 | https://openreview.net/forum?id=GfrpYhoXu4 | 11Y9qkVqOY | official_review | 1,735,943,438,118 | GfrpYhoXu4 | [
"everyone"
] | [
"~Camillo_Carlo_Pellizzari_di_San_Girolamo1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Use of Wikidata for digital philology
review: The proposal shows how Wikidata is used to enhance a collection of Neo-Latin texts; items about the authors of the texts and the texts themselves, and lexemes for the words in the texts are created and improved by this project. Given the increasing number of digital collections of literary works available online, establishing a close cooperation between them and Wikidata would surely be very helpful for improving Wikidata's coverage and quality in the next years.
compliance: 5
scientific_quality: 5
originality: 5
impact: 5
confidence: 5 |
EsdsPN5aj4 | Il progetto OpenAcolit, un repertorio delle biblioteche italiane realizzato con Wikibase | [
"Stefano Bargioni"
] | OpenAcolit is the transposition into LOD of Acolit, a four-volume repertoire of ecclesiastical authors and liturgical works. | [
"OpenAcolit",
"Acolit",
"Wikibase"
] | https://openreview.net/pdf?id=EsdsPN5aj4 | https://openreview.net/forum?id=EsdsPN5aj4 | O0LGs36Wts | official_review | 1,735,995,709,645 | EsdsPN5aj4 | [
"everyone"
] | [
"~Lucia_Sardo1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Review "on trust"
review: Although the paper has no abstract, and its quality, relevance, and impact can be inferred only from the title, it is certainly a presentation of great interest, particularly but not only for the library community. It concerns the transposition into LOD of the authority list of Catholic authors and liturgical works published at the end of the last century. In this sense, a work of this kind is of great impact and interest for all those who, in various capacities, may be interested in this type of data. In my view, this is a project whose realization will bring benefits and an improvement in the quality of the data on these entities.
compliance: 5
scientific_quality: 5
originality: 5
impact: 5
confidence: 5 |
EsdsPN5aj4 | Il progetto OpenAcolit, un repertorio delle biblioteche italiane realizzato con Wikibase | [
"Stefano Bargioni"
] | OpenAcolit is the transposition into LOD of Acolit, a four-volume repertoire of ecclesiastical authors and liturgical works. | [
"OpenAcolit",
"Acolit",
"Wikibase"
] | https://openreview.net/pdf?id=EsdsPN5aj4 | https://openreview.net/forum?id=EsdsPN5aj4 | 9eSQKzIowk | official_review | 1,736,013,938,908 | EsdsPN5aj4 | [
"everyone"
] | [
"~Luca_Martinelli1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Accepted on trust
review: I agree with the review by Lucia Sardo. The aim of the project carried out by Utente:Bargioni is certainly of high impact for the Wikidata community, since it covers an extremely interesting and specific area, with data of certain value given the source. It will surely be an interesting project to hear about, especially with regard to how the activities were carried out.
compliance: 5
scientific_quality: 5
originality: 4
impact: 5
confidence: 5 |
EeyzfPT9Gf | Breaking the pattern | [
"Yuri Gallo",
"Matteo De Toffoli"
] | The Philosophy Library of the University of Milan holds about 80 thousand volumes. Most of the collection is open-shelved and divided into two main parts: “History of Philosophy” gathers all publications up to the end of the 19th century and is organized in a chronological order; "Contemporary Philosophy” comprises texts by authors from 20th century onwards and is divided along both a thematic and a linguistic criterion. Over the years, the lack of updates in the thematic sectors has led to a growing imbalance in the distribution of the volumes in contemporary philosophy section: those belonging to innovative areas of research or fields that are not covered by the current subdivision have mostly been assigned to the linguistic sectors, which have consequently become extremely large. Therefore, the content affinities between contiguous volumes are lost and user orientation is compromised.
With the aim of revising this scheme, we decided to increase the number and variety of thematic sectors. By so doing, we could prioritize the allocation of new volumes within them, allow an easier relocation of those placed in the linguistic sectors, minimize the internal variance within each sector while maximizing the external variance between sectors. The main problem we faced was reducing the amount of work and arbitrariness involved in identifying the volumes to be moved into the new thematic sectors. Therefore, we identified a test sector and designed a workflow capable of automating at least part of the selection work.
To decide which texts should be moved to the test sector, we compared the database used in the library to assign shelfmarks (containing a list of authors and the related sectors) with lists drawn from two qualified disciplinary sources. A Wikidata dataset was used to maximize matches between lists and reduce noise. The SPARQL query was based on the Date of birth, Field of work, and Occupation properties. Wikidata was also used to identify, in our local database, only the entries having the property Person and obtain data in a format useful for subsequent analysis. After normalizing the data to make them comparable, we cross-referenced the four lists to obtain a matrix in which each author was either present or absent. All entries that did not appear in any of the three external lists (the two qualified sources and the Wikidata dataset) were eliminated, thus leaving us with a preliminary list of approximately 400 authors that were candidates for the new sector. Each of these names was then reviewed individually to define whether it belonged to the test sector or a different one. The list was organized chronologically according to date of birth of the authors and used to plan the transfer of the volumes to the new shelves.
The use of Wikidata thus enabled us to reduce data analysis from an initial dataset of over 20 thousand entries to just 400 with a high degree of precision and retrieval. | [
"Stack management",
"Data management",
"Academic libraries",
"Wikidata"
] | https://openreview.net/pdf?id=EeyzfPT9Gf | https://openreview.net/forum?id=EeyzfPT9Gf | sOYscpC3Xk | official_review | 1,735,842,475,574 | EeyzfPT9Gf | [
"everyone"
] | [
"~Elena_Marangoni1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Wikidata for the organization of the collection in an academic library
review: The proposal describes how Wikidata has been used for stack management in an academic philosophy library.
compliance: 4
scientific_quality: 4
originality: 3
impact: 3
confidence: 4 |
EeyzfPT9Gf | Breaking the pattern | [
"Yuri Gallo",
"Matteo De Toffoli"
] | The Philosophy Library of the University of Milan holds about 80 thousand volumes. Most of the collection is open-shelved and divided into two main parts: “History of Philosophy” gathers all publications up to the end of the 19th century and is organized in a chronological order; "Contemporary Philosophy” comprises texts by authors from 20th century onwards and is divided along both a thematic and a linguistic criterion. Over the years, the lack of updates in the thematic sectors has led to a growing imbalance in the distribution of the volumes in contemporary philosophy section: those belonging to innovative areas of research or fields that are not covered by the current subdivision have mostly been assigned to the linguistic sectors, which have consequently become extremely large. Therefore, the content affinities between contiguous volumes are lost and user orientation is compromised.
With the aim of revising this scheme, we decided to increase the number and variety of thematic sectors. By so doing, we could prioritize the allocation of new volumes within them, allow an easier relocation of those placed in the linguistic sectors, minimize the internal variance within each sector while maximizing the external variance between sectors. The main problem we faced was reducing the amount of work and arbitrariness involved in identifying the volumes to be moved into the new thematic sectors. Therefore, we identified a test sector and designed a workflow capable of automating at least part of the selection work.
To decide which texts should be moved to the test sector, we compared the database used in the library to assign shelfmarks (containing a list of authors and the related sectors) with lists drawn from two qualified disciplinary sources. A Wikidata dataset was used to maximize matches between lists and reduce noise. The SPARQL query was based on the Date of birth, Field of work, and Occupation properties. Wikidata was also used to identify, in our local database, only the entries having the property Person and obtain data in a format useful for subsequent analysis. After normalizing the data to make them comparable, we cross-referenced the four lists to obtain a matrix in which each author was either present or absent. All entries that did not appear in any of the three external lists (the two qualified sources and the Wikidata dataset) were eliminated, thus leaving us with a preliminary list of approximately 400 authors that were candidates for the new sector. Each of these names was then reviewed individually to define whether it belonged to the test sector or a different one. The list was organized chronologically according to date of birth of the authors and used to plan the transfer of the volumes to the new shelves.
The use of Wikidata thus enabled us to reduce data analysis from an initial dataset of over 20 thousand entries to just 400 with a high degree of precision and retrieval. | [
"Stack management",
"Data management",
"Academic libraries",
"Wikidata"
] | https://openreview.net/pdf?id=EeyzfPT9Gf | https://openreview.net/forum?id=EeyzfPT9Gf | RQLO7RMlQs | official_review | 1,736,250,646,287 | EeyzfPT9Gf | [
"everyone"
] | [
"~Rossana_Morriello1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Use of Wikidata in collection management
review: The use of Wikidata for collection management in an academic library, and potentially for collection weeding in particular, is quite interesting, although only indirectly linked to the theme of the conference. In any case, it would also be useful to know about the critical issues encountered during the activity.
compliance: 3
scientific_quality: 4
originality: 4
impact: 4
confidence: 3 |
DQu8ELtJbH | Using Wikidata in the European Literary Bibliography: A Reproducible Approach | [
"Gustavo Candela",
"Cezary Rosiński",
"Arkadiusz Margraf"
] | GLAM institutions (Galleries, Libraries, Archives, and Museums) have been exploring new ways to make available their digital collections. Wikidata has emerged as a leading approach with which to enrich their digital collections [1]. In parallel, new trends such as Labs and Collections as data promote the publication of digital collections suitable for computational use [2] as well as the use of reproducible code in the form of Jupyter Notebooks [3].
The European Literary Bibliography (ELB) is a project of the Institute of Czech Literature (Czech Academy of Sciences) and the Institute for Literary Research (Polish Academy of Sciences). It intends to open bibliographic data for literary studies at the European level holding resources from several institutions.
Following the Collections as data principles and focusing on the ELB, this work provides a reproducible framework including several steps for publishing and reusing digital collections based on literary bibliographies made available by GLAM institutions [4]. It also presents a collection of DH research scenarios to show how data can be explored and reused. This work is the result of an ATRIUM Transnational Access Scheme Grant. The results are available in the form of a repository of reproducible code.
A reproducible framework to transform bibliographic metadata to Collections as data -
This section presents the framework to publish and reuse digital collections in the form of Collections as data [4].
Data extraction refers to the selection of data relevant to a specific topic (e.g., author, organization or theme). Data modelling aims at ensuring machine-readable bibliographic metadata, using ontologies and vocabularies. The transformation and enrichment step refers to the transformation of the data into Linked Open Data (LOD) using RDF to describe metadata as triples as well as the use of Wikidata to enrich the metadata. The data quality step ensures the high quality of the RDF data. The publication requires the inclusion of additional documentation including aspects such as provenance and licensing. Finally, the published datasets can be reused in various ways (e.g., prototypes or research scenarios defined by DH scholars).
Defining research scenarios -
After applying the proposed framework to the ELB and exploring new uses of the data, a selection of research scenarios were defined to illustrate data reuse and integration using Wikidata as a main repository with which to enrich the metadata: i) comparative analysis of provincial vampire novels in Spain; ii) republican writers who emigrated during the Spanish Civil War; and iii) geographical distribution of publications about specific Spanish writers. Limitations were identified in terms of scope and completeness to meet researchers’ needs.
Conclusions -
This work advances the publishing of digital collections in computationally usable forms describing how Wikidata can be used to explore new ways of analysis of the data. Future research directions include extending and implementing the research scenarios, and applying and adapting the framework to other domains.
Bibliography -
[1] Candela, G., Cuper, M., Holownia, O. et al. A Systematic Review of Wikidata in GLAM Institutions: a Labs Approach. TPDL (2) 2024: 34-50
[2] Candela, G., Gabriëls, N., Chambers, S., et al. (2023), "A checklist to publish collections as data in GLAM institutions", Global Knowledge, Memory and Communication, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/GKMC-06-2023-0195
[3] Candela, G., Chambers, S., Sherratt, T. An approach to assess the quality of Jupyter projects published by GLAM institutions. J. Assoc. Inf. Sci. Technol. 74(13): 1550-1564 (2023)
[4] Candela, G., Rosiński, C., & Margraf, A. (2024). A reproducible framework to publish and reuse Collections as data: the case of the European Literary Bibliography. https://doi.org/10.5281/zenodo.14106707 | [
"Data Publishing Framework",
"Collections as Data",
"Linked Open Data (LOD)",
"European Literary Bibliography (ELB)",
"GLAM Institutions"
] | https://openreview.net/pdf?id=DQu8ELtJbH | https://openreview.net/forum?id=DQu8ELtJbH | qRVzmOvF4O | official_review | 1,736,350,735,481 | DQu8ELtJbH | [
"everyone"
] | [
"~Carlo_Bianchini1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: A reproducible framework for publishing bibliographic data from European Literary Bibliography
review: The proposal aims to present a reproducible framework for publishing and reusing collections as data, through data extraction, data modelling, transformation and enrichment, data quality control, publication, and reuse. It is a topic of high interest and the developed methodology is very valuable.
compliance: 5
scientific_quality: 5
originality: 5
impact: 5
confidence: 4
notes: I would strongly suggest to ask the author to change their presentation from a poster to a paper. |
DQu8ELtJbH | Using Wikidata in the European Literary Bibliography: A Reproducible Approach | [
"Gustavo Candela",
"Cezary Rosiński",
"Arkadiusz Margraf"
] | GLAM institutions (Galleries, Libraries, Archives, and Museums) have been exploring new ways to make available their digital collections. Wikidata has emerged as a leading approach with which to enrich their digital collections [1]. In parallel, new trends such as Labs and Collections as data promote the publication of digital collections suitable for computational use [2] as well as the use of reproducible code in the form of Jupyter Notebooks [3].
The European Literary Bibliography (ELB) is a project of the Institute of Czech Literature (Czech Academy of Sciences) and the Institute for Literary Research (Polish Academy of Sciences). It intends to open bibliographic data for literary studies at the European level, holding resources from several institutions.
Following the Collections as data principles and focusing on the ELB, this work provides a reproducible framework including several steps for publishing and reusing digital collections based on literary bibliographies made available by GLAM institutions [4]. It also presents a collection of DH research scenarios to show how data can be explored and reused. This work is the result of an ATRIUM Transnational Access Scheme Grant. The results are available in the form of a repository of reproducible code.
A reproducible framework to transform bibliographic metadata to Collections as data -
This section presents the framework to publish and reuse digital collections in the form of Collections as data [4].
Data extraction refers to the selection of data relevant to a specific topic (e.g., author, organization or theme). Data modelling aims at ensuring machine-readable bibliographic metadata, using ontologies and vocabularies. The transformation and enrichment step refers to the transformation of the data into Linked Open Data (LOD) using RDF to describe metadata as triples as well as the use of Wikidata to enrich the metadata. The data quality step ensures the high quality of the RDF data. The publication requires the inclusion of additional documentation including aspects such as provenance and licensing. Finally, the published datasets can be reused in various ways (e.g., prototypes or research scenarios defined by DH scholars).
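The transformation and enrichment step described above can be illustrated with a minimal sketch: a flat bibliographic record is expressed as RDF triples, with the author enriched by a link to a Wikidata entity. The base URI, the QID and the record values below are placeholders invented for the example, not actual ELB data.

```python
# Minimal sketch of the "transformation and enrichment" step: a flat
# bibliographic record becomes RDF triples (here as N-Triples strings),
# with the creator linked to a Wikidata entity. The base URI, the QID
# and the record values are placeholders, not actual ELB data.

BASE = "https://example.org/elb/"          # hypothetical resource base
WD = "http://www.wikidata.org/entity/"     # Wikidata entity prefix

def to_triples(rec):
    s = f"<{BASE}{rec['id']}>"
    return [
        (s, "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>", "<https://schema.org/Book>"),
        (s, "<http://purl.org/dc/terms/title>", f'"{rec["title"]}"'),
        # Enrichment: link the creator to a (placeholder) Wikidata entity.
        (s, "<http://purl.org/dc/terms/creator>", f"<{WD}{rec['author_qid']}>"),
    ]

record = {"id": "work/1", "title": "Example novel", "author_qid": "Q000000"}
for s, p, o in to_triples(record):
    print(s, p, o, ".")
```

In a real pipeline the triples would be emitted with an RDF library and validated in the data quality step before publication.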
Defining research scenarios -
After applying the proposed framework to the ELB and exploring new uses of the data, a selection of research scenarios was defined to illustrate data reuse and integration using Wikidata as a main repository with which to enrich the metadata: i) comparative analysis of provincial vampire novels in Spain; ii) republican writers who emigrated during the Spanish Civil War; and iii) geographical distribution of publications about specific Spanish writers. Limitations were identified in terms of scope and completeness to meet researchers’ needs.
Conclusions -
This work advances the publishing of digital collections in computationally usable forms, describing how Wikidata can be used to explore new ways of analysing the data. Future research directions include extending and implementing the research scenarios, and applying and adapting the framework to other domains.
Bibliography -
[1] Candela, G., Cuper, M., Holownia, O. et al. A Systematic Review of Wikidata in GLAM Institutions: a Labs Approach. TPDL (2) 2024: 34-50
[2] Candela, G., Gabriëls, N., Chambers, S., et al. (2023), "A checklist to publish collections as data in GLAM institutions", Global Knowledge, Memory and Communication, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/GKMC-06-2023-0195
[3] Candela, G., Chambers, S., Sherratt, T. An approach to assess the quality of Jupyter projects published by GLAM institutions. J. Assoc. Inf. Sci. Technol. 74(13): 1550-1564 (2023)
[4] Candela, G., Rosiński, C., & Margraf, A. (2024). A reproducible framework to publish and reuse Collections as data: the case of the European Literary Bibliography. https://doi.org/10.5281/zenodo.14106707 | [
"Data Publishing Framework",
"Collections as Data",
"Linked Open Data (LOD)",
"European Literary Bibliography (ELB)",
"GLAM Institutions"
] | https://openreview.net/pdf?id=DQu8ELtJbH | https://openreview.net/forum?id=DQu8ELtJbH | jr6kAGovu1 | official_review | 1,736,311,062,743 | DQu8ELtJbH | [
"everyone"
] | [
"~Annick_Farina1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: very interesting research on the use of Wikidata in the European Literary Bibliography
review: As part of the dissemination of a research project (ELB) of the Institute of Czech Literature (Czech Academy of Sciences) and the Institute for Literary Research (Polish Academy of Sciences), the authors intend to present their work, which provides a reproducible framework including several steps for publishing and reusing digital collections based on literary bibliographies.
compliance: 5
scientific_quality: 5
originality: 5
impact: 5
confidence: 3 |
C2Nz5zVH0M | eViterbo: Linking Humanities Research and Open Data | [
"Alice Santiago Faria"
] | eViterbo is a platform that combines a MediaWiki-based encyclopedia with an open, structured linked data database, built on a Wikibase suite. Designed as a collaborative research tool, it operates under a CC BY-SA 4.0 license, ensuring openness and accessibility. Its structured data is intentionally designed to be shared with Wikidata, amplifying its potential for interoperability and global knowledge integration.
Developed within a research project based at CHAM – Center for the Humanities, FCSH, Universidade NOVA de Lisboa, eViterbo explores the synergies between academia and the Wikimedia projects.
This presentation explores the conceptual and technical decisions behind the creation of eViterbo, including the development of its infrastructure, data model, and collaborative workflows. It also examines the challenges faced in building and maintaining a platform of this kind in a humanities-oriented environment, including strategies for its adaptation, evolution, and maintenance in a social sciences and humanities academic environment. | [
"MediaWiki",
"Wikidata",
"sharing data",
"open science"
] | https://openreview.net/pdf?id=C2Nz5zVH0M | https://openreview.net/forum?id=C2Nz5zVH0M | tRmkvQ8Lig | official_review | 1,736,120,570,200 | C2Nz5zVH0M | [
"everyone"
] | [
"~Antonella_Buccianti1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: MediaWiki-based encyclopedia and structured linked data database
review: The presentation explores the conceptual and technical decisions behind the creation of eViterbo, discussing the development of its infrastructure, data model, and collaborative workflows. The challenges of building this type of platform in a humanities-oriented environment, including strategies for its adaptation, evolution, and maintenance in a social sciences and humanities academic environment, are also discussed. It represents a valuable contribution to understanding the link between a MediaWiki-based encyclopedia and a structured linked data database.
compliance: 4
scientific_quality: 4
originality: 4
impact: 5
confidence: 3 |
C2Nz5zVH0M | eViterbo: Linking Humanities Research and Open Data | [
"Alice Santiago Faria"
] | eViterbo is a platform that combines a MediaWiki-based encyclopedia with an open, structured linked data database, built on a Wikibase suite. Designed as a collaborative research tool, it operates under a CC BY-SA 4.0 license, ensuring openness and accessibility. Its structured data is intentionally designed to be shared with Wikidata, amplifying its potential for interoperability and global knowledge integration.
Developed within a research project based at CHAM – Center for the Humanities, FCSH, Universidade NOVA de Lisboa, eViterbo explores the synergies between academia and the Wikimedia projects.
This presentation explores the conceptual and technical decisions behind the creation of eViterbo, including the development of its infrastructure, data model, and collaborative workflows. It also examines the challenges faced in building and maintaining a platform of this kind in a humanities-oriented environment, including strategies for its adaptation, evolution, and maintenance in a social sciences and humanities academic environment. | [
"MediaWiki",
"Wikidata",
"sharing data",
"open science"
] | https://openreview.net/pdf?id=C2Nz5zVH0M | https://openreview.net/forum?id=C2Nz5zVH0M | exlgysvBfH | official_review | 1,735,847,706,275 | C2Nz5zVH0M | [
"everyone"
] | [
"~Elena_Marangoni1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: A complex project based upon Wikibase and Mediawiki in humanities
review: The project is very complex and interesting since it involves an encyclopedia based on MediaWiki and a Wikibase instance. It has a solid scientific basis and involves a group of researchers.
Further clarification would be interesting about the disciplinary, geographical and temporal area covered by the project, and about the licence (CC BY-SA 4.0 is mentioned: does it apply to the whole project, or just to the MediaWiki part?)
compliance: 5
scientific_quality: 4
originality: 5
impact: 5
confidence: 4 |
Bdsqs0Yyvy | Reimagining Digital Gazetteers: A Wikidata-Powered Approach | [
"Maxime Guénette"
] | As an open, multilingual, and collaborative knowledge base, Wikidata is increasingly essential for academic research, particularly in Digital Humanities (DH). Its capacity to centralize data from multiple sources, structure it using interoperable standards, and enrich it through collaboration makes it invaluable for DH projects that both import and export data directly on the platform.
A prevalent type of project in DH is the development of digital gazetteers. A gazetteer is traditionally a directory of location names and coordinates. However, in its digital form, it links locations to enriched data such as historical descriptions, spatial coordinates, and temporal information. Digital gazetteers have increased in popularity since the early 2000s for their ability to publish geographic data using Semantic Web standards. Nevertheless, numerous reports from the scientific community indicate that the publication of Linked Open Data (LOD) through digital gazetteers remains hindered by several barriers, including high technical skill requirements and significant financial costs.
This paper demonstrates Wikidata’s potential for creating state-of-the-art digital gazetteers through two case studies from classical studies and archaeology. These examples illustrate how Wikidata supports both micro- and macro-scale gazetteer projects, enabling advanced data integration, spatial analysis, and collaboration.
The first case study focuses on the International (Digital) Dura-Europos Archive (IDEA) project, which uses Wikidata to build an urban gazetteer of Dura-Europos, an ancient city in Syria. The city’s cultural heritage is under threat due to the ongoing civil war. By leveraging Wikidata’s multilingual capabilities and Linked Open Data principles, IDEA aims to reassemble fragmented data from Dura-Europos located in collections worldwide. This effort addresses historical and archival biases from colonial-era excavations, promoting more equitable access to heritage. Wikidata’s collaborative nature enables Syrian researchers and the public, for the first time, to contribute to and benefit from the project.
The second case study examines our doctoral research on sacred spaces in Roman Britain. As part of the Wikiproject *Temples in Roman Britain*, we are cataloging temples and sanctuaries in the Roman province of Britannia (43–410 AD), with metadata such as construction and destruction dates, geographic coordinates, connections to other gazetteers, and interpretative frameworks. Furthermore, the project uses the SPARQLing Unicorn plugin for QGIS, enabling dynamic integration of GeoJSON layers directly from Wikidata’s LOD ecosystem, facilitating spatial analysis and visualization.
Both projects follow similar methodologies, using legacy data to transform disparate archival records into structured and interoperable datasets: extracting information into spreadsheets, modeling the data to fit Wikidata’s ontology, and preparing it for upload using OpenRefine. Schemas are then used to ensure consistency, and the data is exported through QuickStatements for quality control before batch uploads to Wikidata. This workflow ensures data accuracy and integration into the Linked Open Data ecosystem.
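The QuickStatements stage of such a workflow can be sketched as follows. The V1 batch syntax (CREATE, LAST, tab-separated columns), P31 (instance of), P625 (coordinate location) and the Len English-label command are real QuickStatements conventions; the label, QID and coordinates below are invented for illustration.

```python
# Sketch of generating a QuickStatements V1 batch for one gazetteer entry.
# V1 commands are TAB-separated: item, property, value; "LAST" refers to
# the item created by the preceding CREATE line. The label, QID and
# coordinates are placeholders, not real data from either project.

def quickstatements_for(place):
    lines = ["CREATE"]
    lines.append(f'LAST\tLen\t"{place["label"]}"')        # English label
    lines.append(f"LAST\tP31\t{place['instance_of']}")    # instance of
    lat, lon = place["coords"]
    lines.append(f"LAST\tP625\t@{lat}/{lon}")             # coordinate location
    return "\n".join(lines)

temple = {
    "label": "Example temple",
    "instance_of": "Q44539",   # assumed QID for "temple"; verify before a real batch
    "coords": (51.381, -2.359),
}
print(quickstatements_for(temple))
```

The resulting batch can then be pasted into the QuickStatements interface for review before execution.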
By highlighting these case studies, this paper argues that Wikidata is not only a reliable platform for digital gazetteers but also a transformative tool for DH. Its ability to democratize data creation, integrate Semantic Web technologies, and foster global collaboration represents a significant advancement in the creation and publication of linked geographic data. | [
"gazetteers",
"Linked Open Data",
"Wikidata",
"digital cultural heritage"
] | https://openreview.net/pdf?id=Bdsqs0Yyvy | https://openreview.net/forum?id=Bdsqs0Yyvy | kDG56NWnSG | official_review | 1,736,497,451,308 | Bdsqs0Yyvy | [
"everyone"
] | [
"~Monica_Berti1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: A very interesting paper on the use of Wikidata for academic research in historical geography and digital gazetteers
review: The author of this paper presents the use of Wikidata for the development of digital historical gazetteers. This paper demonstrates the potential of Wikidata for the creation of state-of-the-art digital gazetteers through two case studies from Classical Studies and Archaeology: data from 1) the International (Digital) Dura-Europos Archive (IDEA) project and from 2) a doctoral dissertation on sacred spaces in Roman Britain. This paper shows that the author is familiar with the use of Wikidata and DH formats and standards. He also presents concrete challenges and needs of the community for the future of digital historical geography.
compliance: 5
scientific_quality: 5
originality: 5
impact: 5
confidence: 5 |
Bdsqs0Yyvy | Reimagining Digital Gazetteers: A Wikidata-Powered Approach | [
"Maxime Guénette"
] | As an open, multilingual, and collaborative knowledge base, Wikidata is increasingly essential for academic research, particularly in Digital Humanities (DH). Its capacity to centralize data from multiple sources, structure it using interoperable standards, and enrich it through collaboration makes it invaluable for DH projects that both import and export data directly on the platform.
A prevalent type of project in DH is the development of digital gazetteers. A gazetteer is traditionally a directory of location names and coordinates. However, in its digital form, it links locations to enriched data such as historical descriptions, spatial coordinates, and temporal information. Digital gazetteers have increased in popularity since the early 2000s for their ability to publish geographic data using Semantic Web standards. Nevertheless, numerous reports from the scientific community indicate that the publication of Linked Open Data (LOD) through digital gazetteers remains hindered by several barriers, including high technical skill requirements and significant financial costs.
This paper demonstrates Wikidata’s potential for creating state-of-the-art digital gazetteers through two case studies from classical studies and archaeology. These examples illustrate how Wikidata supports both micro- and macro-scale gazetteer projects, enabling advanced data integration, spatial analysis, and collaboration.
The first case study focuses on the International (Digital) Dura-Europos Archive (IDEA) project, which uses Wikidata to build an urban gazetteer of Dura-Europos, an ancient city in Syria. The city’s cultural heritage is under threat due to the ongoing civil war. By leveraging Wikidata’s multilingual capabilities and Linked Open Data principles, IDEA aims to reassemble fragmented data from Dura-Europos located in collections worldwide. This effort addresses historical and archival biases from colonial-era excavations, promoting more equitable access to heritage. Wikidata’s collaborative nature enables Syrian researchers and the public, for the first time, to contribute to and benefit from the project.
The second case study examines our doctoral research on sacred spaces in Roman Britain. As part of the Wikiproject *Temples in Roman Britain*, we are cataloging temples and sanctuaries in the Roman province of Britannia (43–410 AD), with metadata such as construction and destruction dates, geographic coordinates, connections to other gazetteers, and interpretative frameworks. Furthermore, the project uses the SPARQLing Unicorn plugin for QGIS, enabling dynamic integration of GeoJSON layers directly from Wikidata’s LOD ecosystem, facilitating spatial analysis and visualization.
Both projects follow similar methodologies, using legacy data to transform disparate archival records into structured and interoperable datasets: extracting information into spreadsheets, modeling the data to fit Wikidata’s ontology, and preparing it for upload using OpenRefine. Schemas are then used to ensure consistency, and the data is exported through QuickStatements for quality control before batch uploads to Wikidata. This workflow ensures data accuracy and integration into the Linked Open Data ecosystem.
By highlighting these case studies, this paper argues that Wikidata is not only a reliable platform for digital gazetteers but also a transformative tool for DH. Its ability to democratize data creation, integrate Semantic Web technologies, and foster global collaboration represents a significant advancement in the creation and publication of linked geographic data. | [
"gazetteers",
"Linked Open Data",
"Wikidata",
"digital cultural heritage"
] | https://openreview.net/pdf?id=Bdsqs0Yyvy | https://openreview.net/forum?id=Bdsqs0Yyvy | 7MHasdtjbe | official_review | 1,736,399,215,605 | Bdsqs0Yyvy | [
"everyone"
] | [
"~Annick_Farina1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: interesting analisis of Wikidata potential for digital gazetteers and as a transformative tool for DH
review: The authors illustrate two case studies: a) the International (Digital) Dura-Europos Archive (IDEA) project, which uses Wikidata to build an urban gazetteer of Dura-Europos, an ancient city in Syria, and b) their doctoral research on sacred spaces in Roman Britain as part of the Wikiproject Temples in Roman Britain (cataloging temples and sanctuaries in the Roman province of Britannia). From these case studies the authors show how Wikidata is not only a reliable platform for digital gazetteers but also a transformative tool for DH.
compliance: 5
scientific_quality: 4
originality: 5
impact: 4
confidence: 3 |
AnH21anTtE | Wikidata e Thesaurus Nuovo soggettario: insieme per costruire un’ontologia di dominio nell’ambito della fotografia | [
"Silvia Bruni",
"Alida Daniele",
"Fabrizio Nunnari",
"Elisabetta Viti",
"Elena Cencetti",
"Valientina Lepore"
] | This paper discusses the collaboration between the Wikidata Group at the University of Florence, librarians from the Research and Semantic Indexing Tools Department at the National Central Library of Florence (BNCF), and Wikidata volunteers, aimed at creating a domain ontology for photography in Wikidata. This project involves aligning Wikidata with the Thesaurus Nuovo soggettario through semantic and technical integration. The collaboration leverages the open, structured data of the Thesaurus, adhering to international standards and the FAIR principles (Findability, Accessibility, Interoperability, and Reusability).
Key challenges include:
Structural and functional differences between the two tools.
The cultural scope, user base, and mission of the participating institutions.
The complex, specific, and sometimes ambiguous nature of the specialized terminology involved.
These challenges revealed semantic misalignments, prompting revisions in both databases to better align their meanings and terms. | [
"Ontology in Wikidata; Nuovo soggettario and Wikidata; Improved wikidata items",
"Wikidata ontology of photographic terms"
] | https://openreview.net/pdf?id=AnH21anTtE | https://openreview.net/forum?id=AnH21anTtE | KyffjroX32 | official_review | 1,736,261,687,548 | AnH21anTtE | [
"everyone"
] | [
"~Alessandra_Boccone1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Excellent proposal
review: The distinctive character of the project presented and the multiplicity of actors involved demonstrate the high quality of the proposal, which is also striking for its possible future developments. The alignment between Wikidata and the Nuovo Soggettario in the creation of an ontology for photography, with the necessary revision of both databases, shows a critical and constructive approach, an essential basis for the evolution of both tools.
compliance: 5
scientific_quality: 5
originality: 5
impact: 5
confidence: 5 |
6BSZLEJA7f | Linking European Commission data with Wikidata: Unlocking the potential of linked open data for all | [
"Bence Molnár",
"Sébastien Albouze",
"Cosimo Palma",
"Anikó Gerencsér"
] | The lack of uniformity in codes and names to identify the same entity is inefficient and hinders the implementation of interoperable IT systems. To address this issue, the European Commission has prioritised the development of data policies and guidelines for reference data to set high-level principles for ensuring data interoperability, user-friendly and data-driven administration, and digital-ready policymaking. The Publications Office of the European Union (OP), in its capacity as data steward, bears the responsibility of maintaining the Commission’s corporate reference data, ensuring that the data is FAIR and accessible in all European Union official languages. Although the data is free and open to everyone, further commitments are needed from the OP to ensure the interoperability of EU data with other open linked data resources.
In the autumn of 2024, OP completed the alignment between Wikidata and its standardised corporate list of countries and territories, which was endorsed as a corporate data asset by the European Commission in 2023. The aim of this exercise was to test the matching workflow of an AI-based alignment tool, developed for the Directorate-General for Communications Networks, Content and Technology, in a rather specific domain, and to explore the possibility of incorporating Wikidata’s external data.
During the exercise, 319 exact matches were successfully identified out of a total of 336 entities in the data asset. The alignment package is available in SKOS format, retrievable from Cellar, OP’s common data repository, and the matches are shown on the individual entities when browsing the website. The tools used by OP to create and validate alignments are presented. As the data asset follows the conventions of the EU’s Interinstitutional Style Guide for writing country names, and includes politically sensitive and disputed territories, some special cases required manual matching and further verification. Some territories disputed by the parties to a different extent showed the limitations of exact matches when the two datasets defined them differently (if at all).
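A published alignment of this kind ultimately reduces to skos:exactMatch statements between the OP authority-table concept and the matched Wikidata entity, which a minimal sketch can illustrate. The authority URI pattern shown is an assumption about the shape of OP's country table; Q38 is Wikidata's item for Italy.

```python
# Sketch of emitting a SKOS alignment between an EU authority-table
# concept and its Wikidata match. The authority URI pattern is an
# assumption about the OP country table; Q38 is Wikidata's item for Italy.

AUTH = "http://publications.europa.eu/resource/authority/country/"  # assumed pattern
WD = "http://www.wikidata.org/entity/"
EXACT = "http://www.w3.org/2004/02/skos/core#exactMatch"

def skos_alignment(matches):
    """matches: dict of authority code -> Wikidata QID (exact matches only)."""
    return [f"<{AUTH}{code}> <{EXACT}> <{WD}{qid}> ." for code, qid in matches.items()]

for line in skos_alignment({"ITA": "Q38"}):
    print(line)
```

The real alignment package is distributed in SKOS via Cellar; close or disputed matches would use weaker mapping properties (e.g. skos:closeMatch) or manual review rather than exactMatch.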
Potential EU data assets for future alignment are introduced, including the authority list of currencies and currency subunits, and EuroVoc, the EU’s multidisciplinary and multilingual thesaurus covering the activities of the EU. EuroVoc is explored with a specific focus on how its already existing Wikidata property can be used to enhance the content available. The process of aligning a multidisciplinary thesaurus presents many challenges, and this paper presents some possible solutions, such as processing data in smaller batches, focusing on related domains, or following the structure of the thesaurus.
The Publications Office recognises the value of active community engagement to maximise the potential of alignment between its data assets and Wikidata, while at the same time, by examining its process for publishing and maintaining up-to-date alignments, the Wikidata community will gain a deeper understanding of how it can provide the necessary support and expertise to facilitate more efficient collaboration with public bodies. The aim of the presentation would be to explore the potential of EU data for Wikidata and to see how these data assets could be better aligned and used for mutual enrichment. | [
"interoperability",
"linked open data",
"public data",
"Publications Office of the European Union",
"Wikidata"
] | https://openreview.net/pdf?id=6BSZLEJA7f | https://openreview.net/forum?id=6BSZLEJA7f | joNBoCkxbH | official_review | 1,736,270,461,147 | 6BSZLEJA7f | [
"everyone"
] | [
"~Luca_Martinelli1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: I approve
review: Based on the abstract provided, this seems like it could be an interesting talk. I am particularly interested in what did not work in the reconciliation between the two systems, more specifically which cases failed, and I am also interested in assessing potential follow-ups with the new datasets presented.
compliance: 4
scientific_quality: 5
originality: 4
impact: 5
confidence: 5 |
11aBQ27ovR | The role of Wikidata in DAMEIP Project (DAta and MEtadata for Implementing Peer Review) | [
"Rossana Morriello",
"Donatella Selva"
] | The research project DAMEIP (Data and Metadata to Implement Peer Review) is funded by the University of Florence for the years 2025–2026. It will be carried out by a research team consisting of Rossana Morriello, a tenure-track researcher (RTD-b) in the academic field HIST-04/C - Archival Science, Bibliography, and Library Science at the Department of History, Archaeology, Geography, Arts, and Performing Arts (SAGAS); Donatella Selva, a tenure-track researcher (RTD-b) in the field GSPS-06/A - Sociology of Cultural and Communicative Processes at the Department of Political and Social Sciences (DSPS), and two research fellows in their respective academic fields.
The aim of the research is to analyze the dynamics and practices of peer review in the journals of Florence University Press (FUP), selected as a significant sample of Italian academic publishing. With full respect for privacy and all related ethical aspects, the project seeks to identify and analyze patterns, procedures, quantitative measurements, and timelines of peer review, with particular attention to gender differences.
Peer review is one of the research evaluation systems that operates in two phases of the research cycle: prior to the publication of research results in a journal or other type of publication, and later in research evaluation systems implemented by national evaluation agencies, such as ANVUR. This approach is predominantly applied in the humanities and social sciences (HSS) and partially in STEM disciplines.
As part of its planned activities, the project also includes updating the essential metadata of FUP journals in Wikidata, with a focus, in this case as well, on gender representation, which is sometimes absent or ambiguous due to the common practice of using initials for authors' first names.
Wikidata is a crucial reference tool for projects and applications requiring metadata, from the simplest to the most complex, which today increasingly rely on artificial intelligence. The quality of data in Wikidata is therefore an essential element for effective knowledge organization in the digital world and in scholarly communication. Adding accurate metadata to publications also facilitates their global findability and the assignment of reviewers in internal journal processes.
The contribution of the two researchers, in the form of a lightning talk, will aim to present the ongoing project, which will officially start in January 2025 and will therefore not yet have results suitable for a full paper. Nevertheless, we believe it is important to begin disseminating it, particularly regarding the project component related to Wikidata, and the June conference “Wikidata and Research” is certainly the appropriate venue for this purpose. | [
"Digital libraries",
"Scholarly publishing",
"Metadata",
"Gender",
"System evaluation",
"Peer review"
] | https://openreview.net/pdf?id=11aBQ27ovR | https://openreview.net/forum?id=11aBQ27ovR | oMaiFoK7PV | official_review | 1,736,697,551,385 | 11aBQ27ovR | [
"everyone"
] | [
"~Iolanda_Pensa1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: Research assessment
review: Research assessment and peer review are topics of great interest and are at the centre of many discussions and research. Exploring the synergies with Wikidata can certainly be important. I am unsure how much Wikidata can facilitate peer review and host relevant data, but maybe this is a useful discussion to launch.
It would also be important to reflect on how contributions to open peer projects such as the Wikimedia projects (but also OpenStreetMap, open software, open hardware... with their open science infrastructures and practices and peer review systems) can be acknowledged in research assessment.
compliance: 3
scientific_quality: 4
originality: 5
impact: 4
confidence: 3
notes: I suggest that members of the scientific committee present their proposals as posters. |
11aBQ27ovR | The role of Wikidata in DAMEIP Project (DAta and MEtadata for Implementing Peer Review) | [
"Rossana Morriello",
"Donatella Selva"
] | The research project DAMEIP (Data and Metadata to Implement Peer Review) is funded by the University of Florence for the years 2025–2026. It will be carried out by a research team consisting of Rossana Morriello, a tenure-track researcher (RTD-b) in the academic field HIST-04/C - Archival Science, Bibliography, and Library Science at the Department of History, Archaeology, Geography, Arts, and Performing Arts (SAGAS); Donatella Selva, a tenure-track researcher (RTD-b) in the field GSPS-06/A - Sociology of Cultural and Communicative Processes at the Department of Political and Social Sciences (DSPS), and two research fellows in their respective academic fields.
The aim of the research is to analyze the dynamics and practices of peer review in the journals of Florence University Press (FUP), selected as a significant sample of Italian academic publishing. With full respect for privacy and all related ethical aspects, the project seeks to identify and analyze patterns, procedures, quantitative measurements, and timelines of peer review, with particular attention to gender differences.
Peer review is one of the research evaluation systems that operates in two phases of the research cycle: prior to the publication of research results in a journal or other type of publication, and later in research evaluation systems implemented by national evaluation agencies, such as ANVUR. This approach is predominantly applied in the humanities and social sciences (HSS) and partially in STEM disciplines.
As part of its planned activities, the project also includes updating the essential metadata of FUP journals in Wikidata, with a focus, in this case as well, on gender representation, which is sometimes absent or ambiguous due to the common practice of using initials for authors' first names.
Wikidata is a crucial reference tool for projects and applications requiring metadata, from the simplest to the most complex, which today increasingly rely on artificial intelligence. The quality of data in Wikidata is therefore an essential element for effective knowledge organization in the digital world and in scholarly communication. Adding accurate metadata to publications also facilitates their global findability and the assignment of reviewers in internal journal processes.
The contribution of the two researchers, in the form of a lightning talk, will aim to present the ongoing project, which will officially start in January 2025 and will therefore not yet have results suitable for a full paper. Nevertheless, we believe it is important to begin disseminating it, particularly regarding the project component related to Wikidata, and the June conference “Wikidata and Research” is certainly the appropriate venue for this purpose. | [
"Digital libraries",
"Scholarly publishing",
"Metadata",
"Gender",
"System evaluation",
"Peer review"
] | https://openreview.net/pdf?id=11aBQ27ovR | https://openreview.net/forum?id=11aBQ27ovR | F4ZPYcm8xM | official_review | 1,735,747,421,393 | 11aBQ27ovR | [
"everyone"
] | [
"~Franco_Bagnoli1"
] | wikimedia.it/Wikidata_and_Research/2025/Conference | 2025 | title: An interesting projects and a possible useful starting point for a collaborative initiative
review: The authors want to illustrate the scope and methodology of a starting project about metadata and peer review in the FUP journals. I think that the proposal is fully in line with the goals of the conference, that the project is innovative, and that it can have an impact on the Wikimedia community. Moreover, in my opinion this lightning talk can generate useful discussions among the authors and the attendees, possibly leading to collaboration beyond the staff directly involved in the project.
compliance: 5
scientific_quality: 4
originality: 4
impact: 4
confidence: 3 |