Digital Methods Initiative
The Digital Methods Initiative is a contribution to doing research into the "natively digital". Consider, for example, the hyperlink, the thread and the tag. Each may 'remediate' older media forms (the reference, the telephone chain, the book index), and genealogical histories remain useful (Bolter and Grusin, 1999; Elsaesser, 2005; Kittler, 1995). At the same time, new media environments - and their software makers - have implemented these concepts algorithmically, in ways that may resist familiar thinking as well as methods (Manovich, 2005; Fuller, 2007). In other words, the effort is not simply to import well-known methods, be they from the humanities, social science or computing. Rather, the focus is on how methods may change, however slightly or wholesale, owing to the technical specificities of new media.
The initiative is twofold. First, we wish to interrogate what scholars have called "virtual methods," ascertaining the extent to which these methods take into account the differences that new media make (Hine, 2005). Second, we desire to create a platform to display tools and methods for research that also takes advantage of "web epistemology": the web may have distinctive ways of recommending information (Rogers, 2004; Sunstein, 2006). Which digital methods innovate with, and also critically display, the recommender culture at the heart of new media information environments?
Amsterdam-based new media scholars have been developing methods, techniques and tools since 1999, starting with the Net Locator and, later, the Issue Crawler, which focuses on hyperlink analysis (Govcom.org, 1999, 2001). Since then a set of allied tools and independent modules has been made to extend the research into the blogosphere, the online newssphere, discussion lists and forums, folksonomies, as well as search engine behavior. These tools include scripts to scrape web, blog, news, image and social bookmarking search engines, as well as simple analytical machines that output data sets and graphical visualizations.
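By way of illustration only - none of the Govcom.org tools is reproduced here - the following sketch shows the kind of elementary routine on which hyperlink analysis builds: fetch a page and collect its outgoing links. It uses only the Python standard library, and the starting URL is a placeholder.

```python
# A rough sketch of the first step in hyperlink analysis: fetch one page
# and collect its outgoing links. The starting URL is a placeholder.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Accumulates the href targets of <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def outlinks(url):
    """Return the absolute URLs a page links out to (external hosts only)."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    host = urlparse(url).netloc
    absolute = (urljoin(url, link) for link in parser.links)
    return sorted({u for u in absolute if urlparse(u).netloc not in ("", host)})

if __name__ == "__main__":
    for link in outlinks("http://www.govcom.org/"):  # placeholder start page
        print(link)
```

A crawler such as the Issue Crawler iterates this step from a set of seed sites and then performs co-link analysis, retaining, roughly speaking, the sites that receive links from two or more of the seeds.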
The analyses may lead to device critiques - exercises in deconstructing the political and epistemological consequences of algorithms. They may lead to critical inquiries into debates about the value and reputation of information.
To date the visualizations have contributed to changing notions of Web space over the past decade - be it the virtual roundtable, the sphere, the network, the cloud or the 'revenge of geography' in the current locative period. Other visualizations are explicative, or recipe-like, providing step-by-step methods and findings, for example, about the quantity and intensity of content circulation. (See the Govcom.org graphics at http://www.govcom.org/drafts.html.)
To begin the Digital Methods Initiative we have gathered together many of the tools in a single space. Related tools and scripts that we use to study the web in particular are also listed. The collection provides a particular outlook on digital methods, described above in terms of the natively digital, and also includes how-to's.
For example, how to study Internet censorship (by using proxies)? How to study information inclusion and exclusion (by interrogating robots.txt exclusion policies)? How to study surfer pathways (using measures of 'related sites')? How to study site reputation (by hyperlink analysis)? How to study a site's search engine placement over time (by storing and querying engine results)?
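As a small worked instance of the inclusion-and-exclusion question, a site's robots.txt policy can be interrogated with the Python standard library. The site, user agents and paths below are placeholders, not findings.

```python
# A minimal sketch of interrogating a robots.txt exclusion policy:
# which crawlers may fetch which paths? Site, agents and paths are
# illustrative placeholders.
from urllib.robotparser import RobotFileParser

SITE = "http://www.example.org"              # placeholder site
AGENTS = ["Googlebot", "ia_archiver", "*"]   # engine, archive, generic crawler
PATHS = ["/", "/private/", "/search"]        # placeholder paths

parser = RobotFileParser(SITE + "/robots.txt")
parser.read()  # fetch and parse the site's exclusion policy

for agent in AGENTS:
    for path in PATHS:
        allowed = parser.can_fetch(agent, SITE + path)
        print(f"{agent:12s} {path:12s} {'allowed' if allowed else 'excluded'}")
```

Comparing which agents are excluded from which paths - search engine crawlers versus archiving crawlers, say - is one simple way to make an exclusion policy researchable.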
Additionally, the Digital Methods Initiative provides views on the value of visualization. How to output the results of the analyses (in ranked lists, in cluster graphs, in line graphs, in clouds, on maps)? Which visualizations communicate findings? Which visualizations embed critical ways of seeing?
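As one small instance of such an output, a site's placement in engine results over time could be drawn as a line graph along the following lines. matplotlib is an assumption here, not part of the toolset described above, and the dates and ranks are placeholder values.

```python
# A minimal sketch of one output format named above: a line graph of a
# site's search engine placement over time. matplotlib is assumed to be
# installed; the dates and ranks are placeholder values, not findings.
import matplotlib.pyplot as plt

dates = ["2007-01", "2007-04", "2007-07", "2007-10"]  # placeholder query dates
ranks = [14, 9, 11, 5]                                # placeholder result ranks

plt.plot(dates, ranks, marker="o")
plt.gca().invert_yaxis()  # rank 1 is best, so put it at the top
plt.xlabel("query date")
plt.ylabel("rank in engine results")
plt.title("Placement of a site in engine results over time (placeholder data)")
plt.tight_layout()
plt.savefig("placement.png")
```

Inverting the y-axis is the kind of small design choice at stake in the questions above: it makes rising placement read as an upward line, which communicates the finding at a glance.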
In all, the effort is to bring together in one realm the expertise developed over the past several years for the purposes of information and data-sharing, teaching and research, in what some call a 'collaboratory'. The initiative seeks, in particular, to invite Internet researchers into a strand of thought concerning the specificity of the new medium and the distinctive approaches to its study.
References
J.D. Bolter and R. Grusin (1999), Remediation: Understanding New Media, Cambridge, MA: MIT Press.
T. Elsaesser (2005), "Early Film History and Multi-media: An Archaeology of Possible Futures?" in W.H.K. Chun and T.W. Keenan (eds.), New Media, Old Media: A History and Theory Reader, New York: Routledge.
M. Fuller (ed.) (2007), Software Studies, Cambridge, MA: MIT Press.
Govcom.org (1999), Net Locator software.
Govcom.org (2001), Issue Crawler software.
C. Hine (ed.) (2005), Virtual Methods: Issues in Social Research on the Internet, Oxford: Berg.
F. Kittler (1995), "There is No Software," CTheory, a032.
L. Manovich (2005), "New Media: Capture, Store, Interface, Search," lecture delivered at the University of Amsterdam, 29 November.
R. Rogers (2004), Information Politics on the Web, Cambridge, MA: MIT Press.
C. Sunstein (2006), Infotopia: How Many Minds Produce Knowledge, New York: Oxford University Press.