Social Knowledge Creation/History

Various studies have analyzed the history of knowledge production, focusing primarily on three major fields within this line of inquiry: textual studies, historical scholarly practices, and media history. The first category focuses largely on the advent of print and its consequences. The second encompasses the history of scholarly communication, specifically academic journals and peer review. The third concentrates more directly on the social context of various media. The conception of knowledge production as plural represents the point of contact between these fields: knowledge reflects a composite of various people as well as networks of historical, political, and social contexts.

A Theoretical Overview of Key Issues
Scholarly understandings of how knowledge is produced and conveyed have been transformed in recent decades by profound “changes in knowledge regimes” (Burke 2012). The ideological and repressive statist “Apparatuses” theorized by Louis Althusser (Althusser 2010) have undergone deterioration and deterritorialization. Techne makes subjects, as the genealogies of Michel Foucault have demonstrated (Foucault 1977), but in our moment, when culture and technological innovation are so intimately allied, evidence that socially situated actors also remake techne surrounds us (Bijker and Law 1992). The book is no longer what it was but is still very much living, despite a generation of theorists for whom the pronouncements of its demise by Jacques Derrida and Paul de Man became canonical. The medium has its specific effects, as Marshall McLuhan demonstrated, but scholars now recognize that those effects are locally massaged by multifarious forces. With knowledge increasingly mediated by software in the new regimes, “The Media” cannot be conceived as a single deterministic force but rather as an ecosystem in which multiple mediums mutually shape one another in social contexts (Manovich 2001). Situated “users” are capable of redefining what media become despite the generic “publics” they are constructed for and set out to construct (Gitelman 2006). The remediation of culture through the interconnectedness of Web 2.0 software further illuminates that existing knowledges undergoing their digital transformation today were social already at origin—if subsequently de-socialized over time through the increasing power of institutional hierarchical structures. Conversation, epistolary correspondence, manuscript circulation, and other informal modes of scholarly exchange have been recovered at the source of academic knowledge disciplines (Siemens 2002).
Such knowledge was and is inevitably plural, with multiple institutions, political and economic conditions, and cultural specificities affecting its production in distinct ways through unique agencies (Burke 2000). There was no single “print culture” animating a world inaugurated by Gutenberg, therefore, but myriad localized print cultures (Johns 1998). In the contemporary era, there exists no unitary “digital culture” producing homogenous knowledge throughout the global realm of “The Internet,” and new opportunities have emerged for the humanities, increasingly regarded as at risk of irrelevance in this new state of affairs, to actively pursue knowledge as a social creation.

The technologies that affect the material and symbolic production of a cultural object within a given field (to redeploy the terminology of Bourdieu 1983) are themselves generated from the multifaceted complexities of social fields and the agents within them (Bijker and Law 1992). Essentialist conceptions of the Internet have thus rightly been exchanged for more nuanced accounts acknowledging complex historical, political, economic, and cultural contexts (Streeter 2010). From these contexts emerged a globalized computer network that is inherently open in protocol while simultaneously productive of centripetal sites for new modes of instantaneous communication, such as Facebook, Google, and Twitter, corporatizing what have become basic social practices (Liu 2011). The Cold War and the counter-cultural sixties were concurrent: a dichotomous time for the sphere of communication, during which the structuralist isolationism of English departments (Graff 1987) accompanied the ascendancy of the discipline within the humanities and within the academy as a whole, but a period also stimulative of new forms of knowledge production outside the university, such as those of Stewart Brand and his “Whole Earth” initiatives, which Turner locates at the source of the global personalization and socialization of computing (Turner 2006). Times have changed, although the increased use of intellectual property and piracy laws to limit grass-roots cultural development (Lessig 2004) and the simultaneous proliferation of non-market activity and decentralized production provoked by the Internet (Benkler 2003) might be attributed to algorithmic permutations of post-war emergences.
For the humanities to readapt to the current postindustrial, corporatized social landscape of neoliberalism—culturally manifested in accord with what Alan Liu describes as the “Laws of Cool” (Liu 2004)—reintegration with the public social sphere by digital means offers better chances for the continuation of its knowledge practices, its repositories (see Caidi and Ross 2005), and the principles they have evolved than reliance on corporate-based funding for the humanities (Ang 2005), the alignment of programs to strictly economic incentives (Balsamo 2011), or the commodification of its disciplines as training platforms (Vaidhyanathan 2002).

The idealism of early theorists of hypertext, who recognized in the performative agency of computing textualities the reification of their postmodern literary theory (Aarseth 1997) and identity politics (Haraway 1990), was exchanged for critique as the Internet was incorporated into economic structures. Challenges to search engines’ biased representations of knowledge (Introna and Nissenbaum 2000) have more recently given way to critical denunciations of insidious data-collecting practices by search engines and social media (Berry 2012). Content-oriented critiques of Internet practices, whether focused upon the information users are excluded from or the user information accessible to corporations, have a form-oriented counterpart in scholarly inquiry surrounding interface design. Software is ideological in that it affects subject formation by the particular way it represents knowledge (Chun 2004). It is for this reason, among others, that the “digital humanities”—an emerging set of scholarly meta-practices closely associated with the recent “changes in knowledge regimes”—must be self-reflexive, incorporating cultural criticism (Liu 2012) to affect political, public, and institutional practices in socially productive ways (Losh 2012).

Academic discipline formation is a complex process forging structural stability out of multiple and often competing knowledge contexts. Today, Terry Eagleton’s infamous Marxist reading of English literature—deemed a strictly ideological process to establish class hierarchies—appears a facile simplification (Eagleton 1987). In its historical study, actual discipline formation is highly disorderly (Brant 2011) and irreducible to economic principles. Research into the formation of various literatures has confirmed that institutional and political factors do play a role in discipline formation (Brooks 2002), but so, simultaneously, do various societal contexts brought together through communication structures (Garson 2008). The history of the academic journal, established as the paramount form of scholarly communication and regarded by scholars today as central in discipline-specific pedagogy (see, e.g., Ball 2010), is a case in point. As a form, the peer-reviewed journal arose out of early modern monarchist book censorship practices to evolve into an evaluative scholarly communication system (Biagioli 2002). The journal succeeded in providing structural stability to disciplines by meeting the needs of various stakeholders: the general public; booksellers and libraries; researchers who wished to make their work known and claim authorship; a scientific community wanting to build upon previous findings; publishers seeking to capitalize on discovery; and academic institutions desiring metrics for evaluating faculty (Fjällbrant 1997). In the current climate, as the stakes in journals and academic publishing held by publishers and the entrenched reputation-based hierarchies of scholarly communities have come to outweigh the particular interests of researchers, the system faces increasing criticism, with calls for reform by way of the efficiencies and openness afforded by software and the Internet (Cohen 2012; Erickson et al.
2004; Fitzpatrick 2011; Fitzpatrick 2012; Guédon 2008; Jagodzinski 2008; Lorimer 2013). Ongoing work in the digital humanities includes the development of new forms of scholarly communication to ensure the continued relevance of the humanities, but the forms its practitioners create need not replicate in their entirety the structures and formats of journals and other prominent print-based academic forms. In allying the humanities with the social sphere from which it originated, other early but subjugated forms of scholarly communication (see Bazerman 1991; Siemens 2002; van Ittersum 2011), which prioritized the needs of scholars without differentiating between academics and autodidacts, might offer more suitable models for the development of scholarly communication and publishing platforms that extend into the social sphere. Just as libraries are attempting to preserve principles arising during the Enlightenment that have proven to assist in social knowledge creation and conveyance (see Besser 2004) while integrating digital repository systems and other software—which themselves ought to be designed in accord with proven traditional principles (see Roy Rosenzweig Center for History and New Media 2007-2013; Van House 2013)—new forms of digital scholarly communication must retain in code the key values scholars currently attach to academic publishing, such as sharing and knowledge advancement (Guédon 2008).

Despite the ever-increasing self-reflexivity of the digital humanities and the commonly held understanding amongst its practitioners that socially situated, phenomenologically based interpretation is inherent to their work, anxiety surrounding the rapid rise of the new meta-field within the academy persists as a “productive unease” (Flanders 2009), with concerns primarily directed towards the detailism, automation, and numerical or scientific applications associated with “distant reading” (Moretti 2005) in textual studies and literary criticism (Flanders 2005). Quantitative analyses of Big Data to visualize cultural trends (e.g. Michel 2011) continue to hold promise, particularly for researchers wishing to discover explanatory patterns within the transient effervescence of popular culture (Wasik 2009), but obstacles to this mode of inquiry have been acknowledged among those who use it most productively: restrictive access to relevant Big Data (controlled by social media companies); the potential non-representativeness of user-created data; the computer science expertise these methods demand; and the false assumption that Big Data inquiry can replicate the findings of “deep data” involving intense, long-term study (Manovich 2012). “The digital” has been heralded as a new unifying principle for the post-disciplinary university designed to produce “digital intellect” (Berry 2011), unencumbered by discipline-specific rhetorical structures (Ackerman et al. 1991), which are now conceived in some quarters as “pretentious” and “hyper-intellectual” barriers to knowledge (Graff 2003). The view of composition as the fundamental discipline underlying divisive academic practices (see Carlton 1995) is increasingly challenged by those who believe information literacy is the new essential scholarly skill set (Lightman and Reingold 2005).
Amidst concerns that such conceptions contribute to the transformation of humanities graduate students into “highly qualified personnel” (Zarcharias 2011), alternative proposals for a renewed humanities include archival research practices tightly integrated into its disciplines (Buehl 2012), greater interdisciplinarity through a problem- or issue-based humanities model (Davidson and Goldberg 2004), and radical new methods of graduate training in the digital humanities blended with traditional disciplinary work and credit systems (Nowviskie 2012).

Johanna Drucker, who contends influentially that computer science techniques and theories are categorically at odds with the fluidity, interpretation, and interconnectedness of humanities research, further argues that the humanities ought to intervene in knowledge-oriented software design (Drucker 2012). In accord with Chun’s ideological assessment, Drucker conceives the interface as constituting the user through the specific cognition it produces as a purposeful mediator of knowledge (rather than as a transparent conveyor of information) (Drucker 2009). Bruno Latour suggests humanity has now entered an age of design (Latour 2008), echoing both the modernist Sir Herbert Read, who championed design for transforming existing social structures (Read 2012), and the later Marshall McLuhan, who exchanged the “global village” for a conception of the programmable “global theatre” after perceiving early transformations of society by software (McLuhan 1970). In reorienting toward research into the design of societal knowledge and communication structures, the humanities might shift productively away from its detached textual focus and towards inherently argumentative experimental prototyping (Ramsay and Rockwell 2012), modelling (McCarty 2005), and hands-on critical making (Ratto 2011), developing new perspectives by which to approach the so-called “technoculture” (Balsamo 2011) of contemporary society. In the writings of scholars who have taken up the call for humanities-oriented social knowledge design, there is a common interest in installing the user, reader, or subject at the foundational level of what they build. This aesthetic strategy takes its cue from the values underlying Web 2.0 development (Dix 2008) to effect what is increasingly understood as the inherently social nature of scholarship (Borgman 2007), imperilled by traditional academic structures and institutions.
Wikipedia is the foremost example of an effective many-to-many communication structure that demonstrates the changing nature of knowledge production. The multivocal authorship, argument, and collaboration Wikipedia produces are broadly considered to constitute a democratic and more representative model for knowledge-oriented communication (Pfister 2011). Liu insists that literary scholars must take social computing seriously both as an object of research, for it has become the basic mode of cultural and personal expression, and as a practice of literary study (Liu 2013). Along these lines, the digital publishing experiment Hacking the Academy sought out ways of crowdsourcing knowledge to reform existing academic institutions and practices for increased social relevance (Cohen and Scheinfeldt 2013). Bolter conceives social media as instantiating the modernist goal of the avant-garde: transforming existing social forms through artistic creation (Bolter 2007). The effects of social media on existing scholarly practices may be both positive and negative, as Mrva-Montoya discovered in using social media to transform editing practices (Mrva-Montoya 2012). Kirschenbaum suggests that, as a community, the digital humanities develops through the building of reputation, status, influence, and professional connections on Twitter and other social media (Kirschenbaum 2012).

Web 2.0 practices and standards have encouraged scholars to rethink from a user’s perspective the design of the scholarly edition, the fundamental textual form for literary research (Vetch 2010; Robinson 2012). Books have always been inherently social media (McGann 1991; Liu 2009), suggesting their suitability for a form of social editing (McGann 1991) in a digital edition able to reflect through careful design the dynamic relations inherent to textual production and reception (McGann 2006). Shillingsburg expands this conception of the digital edition into a “knowledge site” that includes analysis and data typically associated with secondary sources (Shillingsburg 2006). Community and collaboration are integral to scholarly knowledge creation (Fitzpatrick 2007); thus Siemens et al. seek to integrate within the social scholarly digital edition collaborative electronic tools for annotation, user-derived content, folksonomy tagging, community bibliography, and text analysis capabilities (Siemens et al. 2012). This model of the scholarly edition conceives the editor as a facilitator—an intelligent switch within a dynamic knowledge network (see Kittur and Kraut 2008)—rather than as a didactic authority, thereby permitting the polyphonic interpretation of multiple readers (Smith 2004).

Evaluating and Creating New Media Knowledge Formats for New Media Publics
This section begins with Mario Biagioli’s “From Book Censorship to Academic Peer Review,” using it, and the history of the peer review process, as a sounding board for a consideration of how researchers perform peer review on new forms of media, cultural production, and knowledge products that are produced collectively, socially, and collaboratively in the current academic system. Biagioli writes, “Together with tenure, peer review is probably the most distinctive feature of the modern academic system. Peer review sets academia apart from all other professions by construing value through peer judgment, not market dynamics” (Biagioli 2002: 11). Peer review came to function more and more as a producer of academic value, whereas it previously served as a means of verified circulation for texts, people, and ideas, fostering a knowledge community interconnected with these publication forms. Knowledge has always been socially negotiated. The historiography of peer review illustrates how processes of review, censorship, and advancement can be adapted to suit the needs of the academy and the knowledge communities connected to it, serving as a form of both academic and community certification. How, then, do researchers shift the current, and historically entrenched, evaluation culture of humanities academia to facilitate new media, socially produced knowledge, and public humanities scholarship? Structures such as peer review shape and control what is and is not considered important and innovative in a given field. This can lead to the dissemination of new systems of knowledge if the peer community’s definition of the field is liberal; it can also lead to disciplinary stasis if the peer community is resistant to epistemological change or has difficulty conceiving of ways to apply its review methods to new forms of media and knowledge and to new and ever-changing publics.
As Biagioli maintains, the move away from publications owned and operated by an academic institution to a model operated by a knowledge community with shared intellectual interests facilitates the adoption of new media forms as modes of academic knowledge mobilization that can be applied to institutional academic review (e.g. pre-tenure and tenure review).

Music departments face the ongoing challenge of evaluating different formats of academic knowledge production, including lecture-recitals, performance master classes, peer-reviewed publications, and compositions (whether or not they receive premiere performances). The history of digital musicology, or of music/sound in the digital humanities at large, is underdeveloped; frequently, music/sound is viewed as a presentational tool or supplement to enhance a primarily text-based digital exhibit rather than approached as a text in itself. The sub-article “Music” in the section pertaining to the histories of digital humanities focuses primarily on the digital and analog formats through which music is disseminated and stored, best practices for sound archiving, and listings of specialized databases and bibliographies. Because of the fragility of the earliest analog audio recording materials, such as the wax cylinder, much musical ephemera, though preserved, has not been digitized and remains unplayable, its sounds silenced. However, new technological developments have led to the production of laser-based digital audio transfer devices that play back, transfer, and archive these musical pasts without material damage to the analog artifacts. Musicologists use many of the listed databases and bibliographies only sporadically because their search engines and organization are unintuitive and cumbersome. However, one useful and flourishing sound archive is the National Sound Archive at the British Library, whose catalogue includes entries in genres ranging from pop, jazz, classical, and world music to oral history, drama and literature, dialect, language, and wildlife sounds.

Computer applications to music in/of the digital humanities have also made strides in the areas of notation software and computer-generated musics (e.g. electroacoustic music, soundscape composition, computational music, and digital instruments and performer interfaces), where the “machines may be used to create new sounds, create new instruments and interfaces, imitate existing instruments with fine control, generate new composition, or train computers to listen to music for interaction with human performers.” Notation software allows for the preparation of scores, scholarly editions, and excerpts more “accurately, efficiently, and economically than before.” However, these software systems are limited in that they are based on Western classical notational language and syntax, and are therefore not adaptable or flexible enough for use in the transcription and notation of non-Western musics and oral musical traditions. Historically, the music rather than the notational system has been adapted to “fit” the available syntax, resulting in cultural approximation.

This section concludes with a contribution from the musician, scholar, and activist Paul D. Miller (aka DJ Spooky), whose creative work and supplementary scholarship engage the public by remixing and synthesizing various forms of data, culture, and information in order to present them in new ways for the new publics to which Gitelman (2006) refers in Always Already New: Media, History, and the Data of Culture. Miller’s The Hidden Code and Arctic Rhythms/Ice Music respond to the ethic of shared knowledges heralded by the digital humanities, producing products of cognitive capitalism that are democratically shared in order to increase the sociopolitical efficacy of environmentalist artwork. As forms of open-source multimodal ecomedia that synthesize art, science, and society, Miller’s recent environmentalist remix projects challenge hegemonic systems of sonic production, consumption, and reproduction to illuminate environmental issues. His collaborative synthesis of art, science, and society is mobilized as a form of political resistance that shares knowledge with listening publics through processes of sonification and remix. Drawing from the archive of social, environmental, and sonic memory, Miller uses remix practices to decode historical, scientific, and geographic data and ethnographic materials through the interpretive lens of hip-hop cultural aesthetics. “Today, when we browse and search, we invoke a series of chance operations–we use interfaces, icons, and text as a flexible set of languages and tools,” argues Miller. “Our semantic web is a remix of all available information–display elements, metadata, services, images, and especially content–made immediately accessible.
The result is an immense repository–an archive of almost anything that has ever been recorded” (Miller 2003). By using the environment as a source of creativity and socially conscious communication, Miller identifies ways that urban sound cultures can participate in environmentalist conversations through a techno-poetics of media activism. He views the physical environment, such as ice, as an archive containing key data and information about the environment and society’s relationship to it; these records are preserved in soil or ice but require interpretation to convey this information to the public. Miller’s politicized output is deliberately open-source so that his artistic materials and their embedded environmental messages are publicly accessible, potentially reaching the ears of listeners in socioeconomic demographics typically excluded from climate change discourse.

In his article Miller writes, “DJ culture as a kind of archival impulse put to a kind of hunter-gatherer milieu -- textual poaching, becomes zero-paid, becomes no-logo, becomes brand X” (Miller 2003). DJ culture, particularly the idea of remix and its relationship to the “archive,” serves as an interface to reinterpret, sample, and play back information as sound to be comprehended in alternative ways by the public, where “sound is just information in a different form” (Miller 2003). All forms of history, data, information, and searchable materials are creative fodder for sampling and reinterpretation in new media forms where past and present exist in simultaneity. For Miller, the public toward whom knowledge, data, and information are directed has changed; however, that knowledge, data, and information are not conveyed and communicated in a format easily digested and understood by the general public. They are intended for a specialized readership who typically interact with these informatics primarily through visual modes of perception.

Contemporary writing in music careers and academia must also integrate emergent humanist-focused technology literacy (e.g. the digital humanities), but scholars must establish digital projects as viable forms of scholarly knowledge formation in the discipline, on par with conventional peer-reviewed scholarly “products.” For example, the digital humanities can help students engage with the relationships among sound, music, and place in new ways, using open-source mapping software to map and spatially organize sonic geographies of the past and present. Embracing new humanist-focused technologies in the classroom brings real-world objects–text, image, sound, and video–into digital space, allowing students to reach a broader audience through research creation and classroom collaborations. These platforms mobilize knowledge for broad dissemination and community engagement, thus contributing to the development of applied and public sound studies within the academy. Integrating new media and technologies into the classroom is essential. The development of podcasts, remixes, and audio essays as peer-reviewed online audio research projects fosters opportunities to create an engaged community for community outreach, but the frameworks for evaluation remain undeveloped. Thus, digital projects in the discipline of musicology remain tangential to, or merely supplement, the conventional forms of peer-reviewed writing practiced and proven in the history of the discipline. Remixes, podcasting, and audio documentary are all forms of communication comparable to writing; as with writing, scholars benefit from a process of revision and iteration. To produce publishable material, one must write out a script beforehand, record once, listen back, then revise the script and record again.

As David Huron suggests, musicology/ethnomusicology has the potential to transition from a “data-poor” field to a “data-rich” field (Huron 1999), and sources repeatedly conclude that more has to be done alongside the establishment of evaluation criteria for digital musicological work. One reason why digital musicology remains relatively undeveloped compared to practice-based digital sound studies and making in the digital humanities is that musicology and ethnomusicology from the 1900s onwards followed a trend of cultural analysis rather than empiricism. Musical empiricism has remained the domain of music theory and analysis; consequently, computational scholarship has continued to deal centrally with analysis of the notes on the page, alongside the adoption by early music scholars of best-practice remediation from book history, which has resulted in a wealth of digital editions of pre-1800 manuscripts.

History of Social Knowledge Production Explored through Book History & the University Press
This section explores changing perceptions of “the book” and textuality through the discipline of book history and a case study of university presses. Adriaan van der Weel defines the history of books and textual communications as a history “not only of technologies of writing and reproduction, but also of the wider spread of culture and knowledge.” In his essay “Book Studies and the Sociology of Text Technologies,” he analyses the reasons for including digital textuality in the study of book history and for expanding the discipline to study various forms of text, from handwritten to print to digital. He postulates that the earliest printed books resemble manuscripts in the same way that e-books resemble contemporary printed books, and thus that the current model of digital communications is part of a continuum of evolving modes of textual communication. His article poses the question of what the result would look like if “screen books were made less book-like and more ‘digital.’”

As printed and electronic texts now function in a social environment infused with digital media, online reading shares screen space with other media and entertainment. An understanding of the social impact of these various forms of textual communication serves to inform a broader study of human culture. The investigation of the sociology of texts considers human interaction and institutional roles at every level of text production and consumption, as well as the socially constructed meaning of texts. As knowledge in textual form serves as an external memory for society, the historical vantage point of book studies informs a study of the sociology of texts. The significance of texts and reading practices in the public sphere is explored in more depth by historian Robert Gross in his chapter “Print and the Public Sphere in Early America.”

Gross draws from Jürgen Habermas’s The Structural Transformation of the Public Sphere, first published in English in 1989, to discuss the construction of the “public sphere” in early America. Habermas’s understanding of print culture included both the physical publications and the spaces where books and periodicals were read and discussed. By expanding this definition to include digital books, comparisons may be drawn between attending a club in the 1790s to share a newspaper and visiting a coffee shop today to borrow its WiFi connection and access online content. But in the modern construction, the discussion extends beyond the physical walls of a public space to include a far-reaching social sphere.

In the early republic, the rise of newspaper publication ushered print media into a formerly face-to-face and verbal culture. Boston launched the first American newspaper in 1704, and devotion to the press, a vehicle of resistance to British rule, increased steadily through the Revolution. Gross characterizes the role of print culture in forming citizens of the early republic through the shared experience of literacy. With the reach of digital text now radically expanded beyond its printed counterparts, does it further facilitate the formation of more global citizens?

Furthering the discussion of print culture in early America, the chapter explores the commercial strategies of the press that drove newspaper distribution and book publication. The livelihood of printers depended on serving their constituencies and remaining neutral tradesmen. This changed in the later 18th century, when newspapers began to choose sides in political battles, chronicling local rallies and party gatherings for the benefit of two audiences: those participating in the events and those reading about them in the next edition of the newspaper. The “preoccupation with personality” in newspapers also permeated the market for novels. Readers bonded with authors and characters alike in a celebrity culture melding fact and fiction. Gesturing towards today’s digital culture, Gross concludes that “on the commercial exploitation of those possibilities, the media age has been built, with technologies of communication nobody could have imagined two hundred years ago.”

Cecile M. Jagodzinski provides a case study of a specialized sector of the publishing industry in her article “The University Press in North America: A Brief History.” Beginning with the founding of the first academic press at Cornell University in 1869, the article charts the increasing number of university presses through the 1960s, followed by a period of limited growth and a gradual decline over the last twenty-five years. Jagodzinski cites two key factors for the mounting stresses on present-day academic presses: increased competition from commercial presses producing scholarly publications and decreasing library budgets for new monographs. Foundations have offered some support to faltering university presses, and the Mellon Foundation in particular has taken an interest in revitalizing and reimagining scholarly presses with sustainable models for the future. Its most recent round of grant awards encouraged university presses to review all aspects of their operations, from acquisitions to production to distribution, and formulate a plan for a new vision applicable to all faltering academic presses. It was implicit that the presses look towards digital workflows to reimagine the creation, format, and delivery of texts.

Jagodzinski recognizes that some of the most significant works of the university presses are the major scholarly projects that other publishing entities would not willingly embrace. These include the Dictionary of American Regional English (Harvard), the Oxford English Dictionary (Oxford), the Dictionary of Canadian Biography (Toronto), and the papers of American founders. Taking the founding fathers projects as an example raises the question: what is lost and what is gained by the digitization of books when profit margins are considered? While the projects generate some minimal royalties from the sale of printed volumes, the authoritative transcripts and annotations are also made freely available online. Recently, unaffiliated projects have provided access to raw transcriptions of manuscript text. Under this model, there is a trade-off in accuracy and quality when delivering content on an expedited timetable, and the onus is on the skilled researcher to evaluate the quality of online resources on their own.

Exemplary Instances and Open-Source Tools

 * Within this overarching framework for studying printed and digital texts, numerous online resources support research into the historical and material perspectives offered by the discipline of book history. The Society for the History of Authorship, Reading and Publishing (SHARP) has a superb list of web resources on book history research (http://www.sharpweb.org/main/research).


 * The American Antiquarian Society hosts the 19th Century American Children’s Book Trade Directory (www.americanantiquarian.org/btdirectory.htm), which records the publishers and distributors of children’s books. It charts the evolution of a segment of the book industry during a period of technological advances and social change.


 * The First Charging Ledger, 1789-1792 (www.nysoclib.org/collection/ledger/circulation-records-1789-1792/people) provides a record of the reading habits of members of the New York Society Library during the early years of the United States. Prominent New Yorkers and government officials appear in its records, including George Washington, John Adams, and Alexander Hamilton.


 * Some of the essential open-source tools for textual scholarship include Juxta and Editors’ Notes. Juxta (www.juxtasoftware.org) allows for side-by-side analysis and comparison of a single textual work. Both TXT and XML file formats may be used as base files to identify textual variants, and options for analytical visualizations are available as outputs. Juxta is available in a web-based edition or as a standalone desktop application.


 * Editors’ Notes (http://editorsnotes.org) facilitates the research and annotation workflow of documentary editors, archivists and librarians. It provides a web-based platform for preserving research notes and sharing information with scholars working on similar topics. Annotations are fully searchable and may include bibliographical metadata and topic keywords.


 * As university presses explore new modes of content delivery, digital initiatives focus on full-length monographs, archival collections, and organizational research reports. The Rotunda Imprint at the University of Virginia launched documentary editions in the American Founding Era Collection (http://rotunda.upress.virginia.edu/founders/FGEA.html), including born-digital collections and electronic editions of published volumes. The bulk of this resource is subscription-based, with university libraries as the primary subscribers, although the top-level searching capabilities and select collections are freely available.


 * The Columbia University Press is actively experimenting with electronic delivery formats and has a range of scholarly materials and archives available across multiple disciplines. Their open-access Gutenberg-e website, a collaborative project with the American Historical Association (http://www.gutenberg-e.org), interweaves primary source materials with award-winning monographs by emerging scholars.

Development of Academic Knowledge Production and Dissemination
In the introduction to this page, the history of social knowledge production is separated into three fields: textual studies, historical scholarly practices, and media history. This section explores the second field, historical scholarly practices, through a look at the development of the traditional peer review system and related processes that control how academic knowledge is produced and disseminated. It draws on the works of three scholars who approach different aspects of the topic: Mario Biagioli, Sheila Cavanagh, and Erik W. Black. A brief history of the traditional peer review system is traced back to its possible predecessors in the late-17th century through Biagioli's arguments. Cavanagh's and Black's works are then used to place traditional conceptions of the peer review system alongside digital scholarship and to examine how the system is applied to contemporary digital projects.

Briefly, the academic peer review process involves experts in a field reviewing a fellow author’s scholarly work before it can be published in an academic journal. The process is used to ensure the quality and credibility of the work. Today, the review is often “double blind,” meaning that the evaluators do not know the identity of the author (and vice versa), a measure intended to further ensure quality and credibility. In "From Book Censorship to Academic Peer Review" (2002), Biagioli argues that peer review is rooted in early modern book censorship (31). This connection can be traced back to the late-17th century and the first state-chartered academies, the Royal Society of London (1662) and the Académie Royale des Sciences of Paris (1699), which were granted permission to publish their own works, administering their own reviews and licenses (14). Since the 16th century, political and religious authorities had established licensing and censorship systems in response to the perceived political and religious threats posed by the printing press (14). Moving into the 18th century, Biagioli argues, a separation developed between academic peer review and state censorship: state censorship continued with the goal of preventing the publication of views that could destabilize the state (19), while peer review existed because of the apparent risks of both publishing and not publishing, and the management of those risks was directed to the academy (20). In Biagioli's historical overview, then, peer review went from being a technique to filter out books deemed "unsuitable" to a system meant to ensure texts adhere to disciplinary standards (32).

Both Cavanagh and Black contemplate the traditional conceptualization of peer review in the context of digital scholarship. Cavanagh's "Living in a Digital World: Rethinking Peer Review, Collaboration, and Open Access" (2012) extends beyond Biagioli's essay to include peer-review-related processes tied to digital humanities projects. She argues that "…our traditional conceptualization of peer review…remain[s a] primary obstacle[] to the kind of digital humanities work that can help our disciplines flourish" (2). She also acknowledges that peer review has actually evolved (albeit quietly) as digital scholarship has expanded (2); however, Cavanagh is specifically referring to peer-review-related processes that exist outside of the traditional academic journal, where the processes were not necessarily meant to be applied. She argues that peer review privileges established tenured professors when applied to digital projects: scholars cannot present a finished or nearly completed project for evaluation, but must instead secure approval from various sources (which would need to deem the project necessary) before proceeding beyond the conceptual stage (4). Black's "Wikipedia and Academic Peer Review: Wikipedia as a Recognized Medium for Scholarly Publication?" (2008) looks at peer review alongside the development of Wikipedia. Like Cavanagh, Black acknowledges that traditional peer-review methods have not developed alongside knowledge production (83). He further suggests that Wikipedia could be a potential model for a more "rapid and reliable dissemination of scholarly knowledge" (73) for a variety of reasons, including its encouragement of dividing work so that individuals can focus on their own strengths (78), its effective and efficient editing system (78), and its capacity to let contributors with differing views work on the same document (77).
Black argues that few changes have been made to the process in the last fifty or so years (74); however, vast changes have occurred within the realm of digital scholarship, such as in the digital humanities, as Cavanagh suggests. By applying Black's and Cavanagh's arguments concerning contemporary issues surrounding peer review to Biagioli's historical account of the method, it becomes clear that the process used to ensure the quality and credibility of a work remains intertwined with notions of control over the production and dissemination of knowledge, whether intentional or not.

Exemplary Instances and Open-Source Tools
This sub-section includes a detailed list of instances in which the traditional model of peer review has been modified to work alongside the development of academic knowledge production and dissemination in a digital environment. In offering these examples, I am not claiming that any is "perfect," but rather providing instances for critical discussion. Credit is given to the examples and tools suggested by the sources listed in the bibliography.
 * Academia.edu:
 * Academia.edu is an academic social networking website that allows individuals to share their work online. The site offers an alternative to the traditional peer-review method, where papers can be reviewed alongside distribution. Users can also upload incomplete papers and receive feedback from other users.
 * NINES [Networked Infrastructure for Nineteenth-Century Electronic Scholarship ]:
 * NINES is a scholarly organization that utilizes the digital research environment of the 21st century for the study of 19th-century materials. It acts as a peer-review source for works within that subject area.
 * Cavanagh suggests NINES as an example of a group that provides an external review process (5).
 * Peer Evaluation:
 * Peer Evaluation is a site that allows individuals to upload their documents (including data, media, articles, etc.) where they can be openly discussed and reviewed by peers. Users can endorse work, and can build a reputation that is viewable to others.
 * Review experiment led by Shakespeare Quarterly:
 * In a special issue of the academic journal, titled Special Issue: Shakespeare and New Media, the publication chose to run an experiment that opened up its review process online. Anyone could comment, though scholars with relevant expertise were specifically invited. The editor still made the final decisions. This was the first such experiment conducted within an established academic journal.
 * Wikipedia:
 * Wikipedia is a web-based encyclopedia project that features collaboratively written articles with openly editable content. While not peer-reviewed, the site has policies in place to attempt to ensure that content is credible.
 * Black's article suggests Wikipedia as a model for possible future changes to the current peer review process.

Social Knowledge, Selfhood, and the History of the Internet
“The openness of the Internet is a product of the peculiar way in which it developed, not something inherent in the technology; the Internet's history, as a result, is inscribed in its practical character and use.” (Streeter 16)

This section explores social histories of the Internet and the World Wide Web. In doing so, it aims to show that the digital infrastructures and tools relied upon for social knowledge construction and dissemination are themselves products of social knowledge. They have meanings beyond their practical applications and cannot be divorced from the socio-historical contexts in which they are used and developed. It is important to note that this section does not seek to provide a comprehensive account of the technological and technical histories of the Internet. Instead, it provides a survey and synthesis of key pieces of scholarship that pertain to the socially-constructed meanings reflected in and constituted by the Internet.

Thomas Streeter and Karl Spracklen trace the social history of the Internet back to the American 1950s (3; 13). In its early days, the Internet was firmly rooted in the militarized technological complex of the Cold War (Spracklen 31; Streeter 24) and was widely conceived of as a centralized system for mathematical calculation and management (Streeter 3). In short, it was not perceived as a technology for common use. This perception shifted in the early 1960s with the rise of American romanticism (Streeter 3). According to Streeter, early 1960s thought was premised on a conception of selfhood that privileged individualism, subjectivity, and romantic ideations of inspiration (14). This shift in thinking led people to redefine computing and the Internet as forms of expressive media (Streeter 3, 14). In the late 1960s and early 1970s, during American involvement in the Vietnam War, ideas about computing began to mirror the ethos of contemporary counterculture (Streeter 14). The Internet soon became about play, experimentation, and mediating shifting relationships to truth and reality (Streeter 15; Spracklen 16).

The predominant historical conception of the Internet arose in the 1980s with the rise of right-wing Reaganomics (Streeter 14). In this decade, Americans reimagined the Internet as “an icon” and “an embodiment” of capitalist free market principles (Streeter 3, 9). It provided an infrastructure that allowed individual free agents to pursue their interests in an open and unregulated environment. The longevity of this pervasive view continues to be addressed in today’s scholarship, particularly because of the way it cements the Internet within Western discourses of power. Robin Mansell, for instance, argues that the “Internet-as-free-market” analogy is one of the plural “social imaginaries” (24) or ideological foundations of American media history.

If the 1980s changed the way people conceptualized or made sense of the Internet, the 1990s changed the way people used it. The World Wide Web was launched in 1990, coinciding roughly with the advent of bilateral free trade agreements across North America. From this time on, the Internet was thought of by and large as a symbol of freedom and connectivity; the romanticism of the 1960s merged with 1980s capitalist ideations (Streeter 15). The Internet and the World Wide Web also rather quickly became a form of cultural capital for the emerging urban middle classes (Spracklen 28). As Spracklen writes, “access to the net became a political and cultural demand among marginalized groups and developing countries” (30), particularly when the infrastructure and tools for knowledge production and dissemination began to move online in the mid-to-late 1990s.

The Internet has reflected and constituted a myriad of socially and historically contingent cultural attitudes towards topics such as capital, labour, power, and the self. The meanings we attribute to these technologies shift with our cultural climate; they are contingent upon the very histories of thought that enabled their production (Streeter 3). The three scholars cited in this piece all note that the history of the Internet is a complex and social one—a history that is founded on plural, fluid, and converging systems of thought (Mansell 24; Spracklen 13; Streeter 6). Knowledge, they contend, is produced within situated communities and is disseminated through "tacit habits" and bottom-up power structures (Streeter 6–7). In other words, the Internet is a social knowledge product that enables future knowledge to be negotiated and filtered through communities.

While the Internet and the World Wide Web are pervasive technologies in Western culture, their cultural meanings and social histories are largely obscured by a generalized focus on speculations of technological and economic progress (Mansell 25). In practice, they are often conceptualized as ideologically-neutral tools—or, as a means to an end. More often than not, they are not thought about at all. This section serves as a reminder that our everyday technologies are always embedded in and are inseparable from the contexts in which they are negotiated.

Exemplary Instances and Open-Source Tools
The Internet forms the infrastructure for many of the social knowledge creation tools that are used today. This sub-section focuses on early iterations of common Internet tools such as search engines, online library catalogues, web archives, and chat rooms. Each entry contains a brief description of the tool and its history, as well as a list of three modern analogs.
 * Boujnane, Leila and Paul Bloore. TinEye. Idée Inc., https://tineye.com/ .
 * TinEye was the first reverse image search engine to operate based on image recognition technology rather than user-generated metadata, keywords, and watermarks. The web-based platform was created in 2008 by Leila Boujnane and Paul Bloore of the Toronto-based company, Idée, Inc.
 * Analogs: Google Search by Image; Pinterest; Pixsy.
 * Emtage, Alan, Bill Heelan, and Peter Deutsch. Archie Query Form. University of Warsaw, http://archie.icm.edu.pl/archie-adv_eng.html.
 * Archie is widely considered to be the first Internet search engine. It was created between 1987 and 1990 by Alan Emtage, Bill Heelan, and Peter Deutsch at McGill University, and was originally used as a file retrieval service for local students and staff. An archived version of Archie is hosted and maintained by the University of Warsaw.
 * Analogs: Google; Yahoo; Bing.
 * Kahle, Brewster and Bruce Gilliat. The Wayback Machine. The Internet Archive, https://archive.org/web/.
 * The Wayback Machine is an ongoing digital archive of the World Wide Web. It was created in 2001 by Brewster Kahle and Bruce Gilliat for the non-profit Internet Archive in San Francisco. The Wayback Machine crawls the web, indexes, and “archives” cached public websites. It also allows users to manually save iterations of websites.
 * Analogs: Google Cache; Common Crawl; WebCite.
 * Kilgour, Fred et al. WorldCat. Online Computer Library Center, https://www.worldcat.org/ .
 * WorldCat is a networked catalog representing the collections of over 70,000 libraries worldwide. It is currently the world’s largest online bibliographic database. WorldCat was founded in 1998 by Fred Kilgour and the team of developers behind the Online Computer Library Centre (OCLC) in Dublin, Ohio. The technology behind WorldCat was in development from the mid-1960s up until its formal release.
 * Analogs: Summon (available via most institutional library accounts); HathiTrust (available via some institutional library accounts); Open Library.
 * Oikarinen, Jarkko. Internet Relay Chat, University of Oulu.
 * Internet Relay Chat (IRC) was the first widely popular Internet chat service. It allows for real-time file sharing and text-based communication between two or more users. IRC was created in 1988 by Finnish developer, Jarkko Oikarinen, and followed in the tradition of early Internet chat rooms such as the CB Simulator (1980) and Talkomatic (1978).
 * Analogs: Reddit; Facebook; Skype.