
Tuesday 16 July 2019

Digital Humanities

Artificial imagination, imagine: new developments in digital scholarly editing

On edited archives and archived editions

Abstract

Building on a longstanding terminological discussion in the field of textual scholarship, this essay explores the archival and editorial potential of the digital scholarly edition. Following Van Hulle and Eggert, the author argues that in the digital medium these traditionally distinct activities now find the space they need to complement and reinforce one another. By critically examining some of the early and more recent theorists and adopters of this relatively new medium, the essay aims to shed clearer light on some of its strengths and pitfalls. To conclude, the essay takes the discussion further by offering a broader reflection on the difficulties of providing a ‘definitive’ archival base transcription of especially handwritten materials, questioning whether this is something the edition should aspire to in the first place.

The ‘assertive edition’

Abstract

The paper describes the special interest among historians in scholarly editing and the resulting editorial practice in contrast to the methods applied by pure philological textual criticism. The interest in historical ‘facts’ suggests methods the goal of which is to create formal representations of the information conveyed by the text in structured databases. This can be achieved with RDF representations of statements extracted from the text, by automatic information extraction methods, or by hand. The paper suggests the use of embedded RDF representations in TEI markup, following the practice in several recent projects, and it concludes with a proposal for a definition of the ‘assertive edition’.
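The core idea of such an assertive edition, extracting statements from the edited text and storing them as machine-queryable subject-predicate-object triples, can be sketched in a few lines of plain Python. The subjects, predicates, and values below are invented illustrations; a real project would embed RDF in TEI markup and use a proper triple store rather than bare tuples:

```python
# A minimal, library-free sketch: assertions extracted from an edited text
# are stored as subject-predicate-object triples (here plain tuples).

triples = {
    # hypothetical statements extracted from a charter's text
    ("ex:hans_mueller", "rdf:type", "ex:Person"),
    ("ex:hans_mueller", "ex:wasPresentAt", "ex:vienna"),
    ("ex:hans_mueller", "ex:onDate", "1523-04-01"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Where was Hans Mueller present, according to the text?
print(query(s="ex:hans_mueller", p="ex:wasPresentAt"))
# → [('ex:hans_mueller', 'ex:wasPresentAt', 'ex:vienna')]
```

The point of the pattern is that the historian's questions are answered from the structured statements, not by re-reading the prose of the source.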

Tracking the evolution of translated documents: revisions, languages and contaminations

Abstract

Dealing with documents that have changed through time requires keeping track of additional metadata, for example the order of the revisions. This small issue explodes in complexity when these documents are translated. Even more complicated is keeping track of the parallel evolution of a document and its translations. The fact that this extra metadata has to be encoded in formal terms in order to be processed by computers has forced us to reflect on issues that are usually overlooked or, at least, not actively discussed and documented: How do I record which document is a translation of which? How do I record that this document is a translation of that specific revision of another document? And what if a certain translation has been created using one or more intermediate translations, with no access to the original document? In this paper we address all these issues, starting from first principles and incrementally building towards a comprehensive solution. This solution is then distilled into formal concepts (e.g., translation, abstraction levels, comparability, division into parts, addressability) and abstract data structures (e.g., derivation graphs, revision-alignment tables, source-document tables, source-part tables). The proposed data structures can be seen as a generalization of classical evolutionary trees (e.g., the stemma codicum), extended to take into account the concepts of translation and contamination (i.e., multiple sources). The presented abstract data structures can easily be implemented in any programming language and customized to fit the specific needs of a research project.
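As a rough illustration of the derivation graphs mentioned above, the following Python sketch records revision and translation edges between document versions. The class and method names are assumptions made for illustration, not the paper's formal definitions; note how a node with two incoming sources models contamination:

```python
# Sketch of a derivation graph: each node is a document revision, each edge
# records that one revision derives from another, either as a later revision
# in the same language or as a translation.

from collections import defaultdict

class DerivationGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # node -> list of (source, kind)

    def add_revision(self, doc, source):
        self.edges[doc].append((source, "revision"))

    def add_translation(self, doc, source):
        self.edges[doc].append((source, "translation"))

    def sources(self, doc):
        """All ancestors of a document; multiple sources model contamination."""
        seen, stack = set(), [doc]
        while stack:
            for src, _ in self.edges[stack.pop()]:
                if src not in seen:
                    seen.add(src)
                    stack.append(src)
        return seen

g = DerivationGraph()
g.add_revision("en_v2", "en_v1")
g.add_translation("fr_v1", "en_v1")   # translated from the first revision
g.add_translation("it_v1", "fr_v1")   # indirect translation via French
g.add_translation("it_v1", "en_v2")   # contamination: a second source
print(sorted(g.sources("it_v1")))     # → ['en_v1', 'en_v2', 'fr_v1']
```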

Opening the book: data models and distractions in digital scholarly editing

Abstract

This article argues that editors of scholarly digital editions should not be distracted by underlying technological concerns except when these concerns affect the editorial tasks at hand. It surveys issues in the creation of scholarly digital editions and the open licensing of resources and addresses concerns about underlying data models and vocabularies, such as the Guidelines of the Text Encoding Initiative. It calls for solutions which promote the collaborative creation, annotation, and publication of scholarly digital editions. The article draws a line between issues with which editors of scholarly digital editions should concern themselves and issues which may only prove to be distractions.

Exercises in modelling: textual variants

Abstract

The article presents a model for annotating textual variants. The annotations made can be queried in order to analyse and find patterns in textual variation. The model is flexible, allowing scholars to set the boundaries of the readings, to nest or concatenate variation sites, and to annotate each pair of readings; furthermore, it organizes the characteristics of the variants in features of the readings and features of the variation. After presenting the conceptual model and its applications in a number of case studies, this article introduces two implementations in logical models: namely, a relational database schema and an OWL 2 ontology. While the scope of this article is a specific issue in textual criticism, its broader focus is on how data is structured and visualized in digital scholarly editing.
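A minimal Python sketch of how readings and variation sites with their two levels of features might be represented; the field names here are illustrative assumptions rather than the article's actual relational schema or OWL 2 ontology:

```python
# Each variation site groups readings from different witnesses; features can
# be attached both to individual readings and to the variation as a whole.

from dataclasses import dataclass, field

@dataclass
class Reading:
    witness: str
    text: str
    features: dict = field(default_factory=dict)   # features of this reading

@dataclass
class VariationSite:
    readings: list
    features: dict = field(default_factory=dict)   # features of the variation

site = VariationSite(
    readings=[
        Reading("MS A", "colour", {"spelling": "British"}),
        Reading("MS B", "color", {"spelling": "American"}),
    ],
    features={"category": "orthographic", "significant": False},
)

# Query: find sites whose variation is merely orthographic
sites = [site]
ortho = [s for s in sites if s.features.get("category") == "orthographic"]
print(len(ortho))  # → 1
```

The separation of reading-level from variation-level features is what makes the annotations queryable for patterns across a whole tradition.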

What future for digital scholarly editions? From Haute Couture to Prêt-à-Porter

Abstract

Digital scholarly editions are expensive to make and to maintain. As such, they remain out of reach for less established scholars such as early-career researchers and PhD students, or indeed anyone without access to significant funding. One solution could be to create tools and platforms able to provide a publishing framework for digital scholarly editions that requires neither a high-tech skillset nor a big investment. I call this type of edition “Prêt-à-Porter”, to be distinguished from “Haute Couture” editions, which are tailored to the specific needs of specific scholars. I argue that both types of edition are necessary for a healthy scholarly environment.

Editing social media: the case of online book discussion

Abstract

Online book discussion is a popular activity on weblogs, specialized book discussion sites, booksellers’ sites and elsewhere. These discussions are important for research into literary reception and should be made and kept accessible for researchers. This article asks what an archive of online book discussion should and could look like, and how we could describe such an archive in terms of some of the central concepts of textual scholarship: work, document, text, transcription and variant. What could an approach along the lines of textual scholarship mean for such a collection? If such a collection holds many pieces of information that would not usually be considered text (such as demographic information about contributors), could we still call it an edition, and could we call the activity of preparing it editing?

The article introduces some of the relevant (Dutch-language) sites, and summarizes their properties from the perspective of creating a research collection (among others: they are dynamic and vulnerable, they contain structured data, and they are very large). It discusses the interpretation of some essential terms of textual studies in this context, and briefly lists a number of components that a digital edition of these sites might or should contain. It argues that such a collection is the result of scholarly work and should not be considered 'just' a web archive.

The Charles Harpur Critical Archive

Abstract

This is a history of and a technical report on the Charles Harpur Critical Archive (CHCA), which has been in preparation since 2009. Harpur was a poet who published predominantly in newspapers in colonial New South Wales from the 1830s to the 1860s. Approximately 2700 versions of his 700 poems in newspaper and manuscript form have been recovered. In order to manage the complexity of his often heavily revised manuscripts, traditional encoding in XML-TEI, with its known difficulties in handling overlapping structures and complex revisions, was rejected. Instead, the transcriptions were split into simplified versions and layers of revision. Markup describing textual formats was stored externally using properties that may freely overlap. Both markup and the versions and layers were merged into multi-version documents (MVDs) to facilitate later comparison, editing and searching. This reorganisation is generic in design and should be reusable in other editorial projects.
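The externally stored, freely overlapping markup described above can be illustrated with a small standoff-annotation sketch in Python. The text and property names are invented for illustration, and the CHCA's actual MVD format is considerably more sophisticated; the point is only that ranges over a plain transcription may overlap, which a single XML tree cannot express directly:

```python
# Standoff markup: formatting properties are stored as (start, end, property)
# ranges over the plain transcription, outside the text itself, so two
# properties may overlap freely.

text = "The crimson rose of dawn"
ranges = [
    (4, 16, "italic"),      # "crimson rose"
    (12, 24, "underline"),  # "rose of dawn" -- overlaps the italic range
]

def properties_at(pos):
    """All properties that apply at a given character position."""
    return [p for s, e, p in ranges if s <= pos < e]

print(properties_at(13))  # a position inside both overlapping ranges
# → ['italic', 'underline']
```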

From graveyard to graph

Abstract

Technological developments in the field of textual scholarship have led to a renewed focus on textual variation. Variants are liberated from their peripheral place in appendices or footnotes and given a more prominent position in the (digital) edition of a work. But what constitutes an informative and meaningful visualisation of textual variation? The present article takes the visualisation of the results of collation software as its point of departure, examining several visualisations of collation output that contain a wealth of information about textual variance. The newly developed collation software HyperCollate is used as a touchstone to study the issue of representing textual information in ways that advance literary research. The article concludes with a set of recommendations for evaluating different visualisations of collation output.
