Qualitative Content Analysis
1. Background and Introduction
In the history of research, scientists have used both quantitative and qualitative content analysis. Precursors of text interpretation and comparison can be found in hermeneutic contexts. The foundations of quantitative content analysis were laid in the 1920s and 1930s, and in the 1960s the methodology found its way into multiple scientific fields. Since Kracauer's critique of its somewhat superficial analyses, however, various researchers have contributed to the formation of a qualitative method for analyzing content (Mayring 2000).
Content analysis can be seen as a means of compressing large texts into categories that identify specified characteristics. Texts may include works of art, videos, music and so on; the work to be analyzed, however, has to be durable in nature in order to allow replication (Stemler 2001). Whereas quantitative text analysis serves well to reduce large data sets to a manageable form, the method has certain disadvantages: it may lead to a neglect of qualitative exploration. Quantitative data can tell us many things, but in some cases a qualitative analysis of texts is necessary for further comprehension (Kracauer 1952). The best content-analytic studies use both qualitative and quantitative operations on texts; content analysis methods thus combine what are usually thought to be antithetical modes of analysis (Weber 1990). Qualitative content analysis focuses on the latent content of a text. It is about extracting meaning that cannot be read off when focusing solely on the formal aspects (Hsieh & Shannon 2005).
2. Hermeneutics
Qualitative content analysis often deals with textual data. When studying or analyzing a text, it is difficult to remain entirely unprejudiced: chances are that every reader will interpret a text differently. The discipline that tries to cope with this problem of interpretation is called hermeneutics. Hermeneutics can be seen both as a fundamental philosophy and as a method of analysis. As a method of analysis, it is about finding a way to understand textual data; as a philosophical concept, it is a theory of text interpretation that has itself given rise to different interpretations. For qualitative content research, hermeneutics is primarily about the meaning of a text. The plain question is: what is the meaning of this text?
History
Hermeneutics has its origins in the interpretation of ancient and biblical texts. As a theory of interpretation, the hermeneutic tradition begins in ancient Greece, where the term was probably first used by the Greek poet Homer (circa 800 BC), who understood hermeneutics as the interpretation and translation of messages given by the Greek gods. Plato also used the term in a number of dialogues, applying hermeneutics to the interpretation of religious knowledge. Aristotle's work Peri hermeneias, on logic and semantics, likewise describes the art of interpretation. The Stoics' interpretation of myth, too, can be seen as a methodological awareness of the problems of textual understanding (Grondin, 1995).
Augustine of Hippo is an influential thinker on hermeneutics; modern hermeneuticists such as Dilthey, Heidegger, and Gadamer are indebted to him. According to Gadamer, it was Augustine who first advanced a claim to the universality of hermeneutics. Augustine draws a connection between language and interpretation, and claims that the interpretation of scripture contains a more profound, existential level of self-understanding (Grondin, 1995).
One of the most important scholars of hermeneutics was Friedrich Schleiermacher. Schleiermacher's aim was to establish a more general set of rules for the interpretation of texts. Before him, the possibility of understanding a text was taken as so obvious that it was not even questioned. Schleiermacher, by contrast, looked at the historical assumptions and the purpose of interpretation. For him, the meaning of a text lies in its unique individuality: each text is determined by its individual historical appearance and its author (Schleiermacher, 1998).
Wilhelm Dilthey went even further than Schleiermacher. His work Kritik der historischen Vernunft treated hermeneutics as a methodological basis for the human sciences. According to Dilthey, human behaviour and the social world do not behave like nature, and the humanities should therefore follow an entirely different method: a phenomenon should not be explained but understood in terms of its own meaning (Bleicher, 1980).
Martin Heidegger's work Sein und Zeit (1927) completely changed modern hermeneutics. Heidegger approached hermeneutics as ontology, the study of the nature of being: it concerns the most essential conditions of man's being in the world. Hermeneutics is no longer one of several theoretical possibilities; it is what philosophy was about in the first place. Understanding is not a method of reading but something we consciously do or fail to do: understanding as a mode of being (Kisiel, 1995). Heidegger's work was taken up by his student Hans-Georg Gadamer, whose work on hermeneutics can be described as a successful realization of Heidegger's project. According to Gadamer, methodological hermeneutics is too influenced by the ideal of knowledge of the natural sciences; his aim is to see what the sciences can do with all of our worldly experiences (Kinsella, 2006).
The hermeneutic circle
A central concept within hermeneutics is the hermeneutic circle. In hermeneutics, a single word has no fixed meaning; its meaning depends on its context. One could argue that we would therefore need to know and understand all factors of that context, but this is an impossible task. Interpretation instead moves from the general to the specific meaning and back again: the parts are understood from the whole, and the whole from its parts. Full objectivity of understanding can therefore never be achieved.
3. Use Scenarios and Problems
Why code
When conducting a qualitative content analysis, it is important to have a system for finding essential pieces of information and structuring them effectively. The more data you have and the more complex the research gets, the harder it becomes to keep track of it all. By coding your data you can compile groups of relevant data, which gives you a better overview of concepts and themes, potentially enriching the research as relations become more apparent and new ideas emerge. Coding is a way of identifying relevant data and labelling it with a word or short phrase: coding means naming segments of data with a label that simultaneously categorizes, summarizes and accounts for each piece of data (Charmaz, 2007). In short, a code has to represent and capture a datum's primary content and essence (Saldaña, 2009). Coding can be seen as the first analytical step in research, as you build a web of relevant concepts and group them together by relevance and relations. Be aware that coding needs to be done with care: if you cannot trust the measurements, how can you trust the analysis based on those measurements (Riffe, Lacy & Fico, 2005)? Reliability is the keyword here and of utmost importance to qualitative content analysis. The difficulty is that coding is interpretive and that the data must be read closely and understood comprehensively. Qualitative research, by nature, deals with the interpretation of phenomena. Human subjectivity and creativity cannot be erased entirely from the process; this in itself is not a problem, since creativity is an important aspect of research (Baptiste, 2001). There are, however, ways to structure coding and categorization that enhance the reliability and overall effectiveness of research.
Initial coding/Open coding
There are many ways to code data, and certain norms of how to code do exist. There are too many to summarize here, but what is most crucial is to keep in mind that coding is an ongoing process. The first stage of coding should follow a close line-by-line reading of the data in which you identify as many relevant pieces of information as possible. This is called initial or open coding. The idea is that coding is not a precise science but an interpretive act, and initial coding serves as a first impression of the data. Not finding relations or similarities during this phase is normal, as the process gets more refined as you go. It is best to keep an open mind and look for concepts and ideas that directly relate to the research goal. After some coding you should recognize patterns starting to form; initial coding's primary goal is to find these patterns as documented in the data (Given et al., 2008).
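As a concrete illustration of this bookkeeping, the following minimal Python sketch shows one way initial codes might be attached to transcript segments during a line-by-line reading. The transcript lines, code labels, and the CodedSegment structure are hypothetical examples, not part of any CAQDAS package.

from dataclasses import dataclass, field

@dataclass
class CodedSegment:
    source: str                 # interview or document identifier
    line: int                   # position in the transcript
    text: str                   # the data segment itself
    codes: list = field(default_factory=list)  # labels assigned so far

# Hypothetical segments from a line-by-line reading of an interview.
segments = [
    CodedSegment("interview_01", 12,
                 "I stopped applying after the third rejection.",
                 ["giving up", "job search"]),
    CodedSegment("interview_01", 27,
                 "My family kept telling me something would turn up.",
                 ["social support", "optimism"]),
]

# Printing each segment with its codes makes recurring labels, and
# thus emerging patterns, visible early in the process.
for seg in segments:
    print(f"{seg.source}:{seg.line} {seg.codes} -> {seg.text}")

Keeping the source and line number with every code makes it easy to return to the raw data when the codes are reviewed in later stages.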
Focused coding and Theoretical coding
As patterns begin to emerge you can advance to the second stage of coding, a method called focused coding. After the initial stage of coding it is time to refine, synthesize and explain the larger segments of data (Charmaz, 2007). As stated before, coding is a process, and focused coding requires a more active involvement with the data. The coding and data need to be reviewed, which should now be easier since codes have been grouped together. At this point the codes need to be further distilled and either recoded or dropped. Focused coding keeps researchers checking their own preconceptions by constantly reviewing and comparing the previous findings of codes and data. In this way codes can be recoded to better fit the data, and new categories and new concepts or ideas can emerge. After the focused coding stage it is time to place the codes in the bigger picture: refining the codes and improving the categories helps the analysis and builds an overview of relevant information and its connections. Codes may have been redefined and additional data may have been added. Once you have a comprehensive overview of codes and data, it is time to take a more theoretical approach, once again reviewing the results of coding in relation to the data, and now with theory. This last stage is called theoretical coding, where you incorporate codes into theory, effectively constructing a narrative through codes and data.
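Under the same caveat that all labels are invented for illustration, the recoding step of focused coding can be sketched as a mapping from initial codes to refined ones, plus a frequency count that surfaces the dominant patterns:

from collections import Counter

# Hypothetical segments carrying codes from an open-coding pass.
coded_segments = [
    ("I stopped applying after the third rejection.", ["giving up"]),
    ("My family kept telling me something would turn up.",
     ["social support", "optimism"]),
    ("I retrained as a private tutor instead.", ["retraining"]),
]

# A hypothetical recoding decision: several initial codes are
# collapsed into fewer, more analytic ones; unmapped codes are
# kept unchanged and flagged for later review.
recode = {
    "giving up": "resignation",
    "social support": "coping behaviour",
    "optimism": "coping behaviour",
    "retraining": "coping behaviour",
}

refined = [(text, [recode.get(c, c) for c in codes])
           for text, codes in coded_segments]

# Counting refined codes across all segments shows which patterns
# dominate and which codes carry too little data to keep.
frequencies = Counter(code for _, codes in refined for code in codes)
print(frequencies.most_common())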
Categorization
Categorization is a process that develops along with coding. As patterns become visible you need to group together codes that fit with one another. These categories need to be as explicit as possible; a mere enumeration of codes will not suffice. Building up a coherent scheme of categories facilitates keeping track of the broader concepts and of the research as a whole. Categories can develop inductively (approaching the data without a preset list of categories, identifying units that conceptually match the phenomenon in the data) or deductively (categories emerge from prior studies, relevant literature, the research question, or the researcher's own knowledge and experience) (Given et al., 2008). As with coding, constructing categories is a process that needs constant reviewing and refinement, leading to subcategories. The key element of categorization is to cluster groups of coded data, serving as an intermediate step between separating and connecting units of meaning. This process of evaluating internal integrity (the definitions of subcategories) and external integrity (how categories relate to other categories) must continue as codes and categories get more refined and their relations more apparent (Given et al., 2008). Once all codes are placed in a relevant (sub)category and all major categories have been compared and consolidated with each other, the analysis can begin to transcend the reality of the data and progress towards the thematic, conceptual and theoretical (Saldaña, 2009).
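Continuing the same hypothetical example, a category scheme can be pictured as a mapping from categories to the codes they cluster, with two simple checks standing in (very crudely) for the internal and external integrity reviews described above. The category names echo the Mayring case discussed in section 5 but are, again, only illustrative:

# Hypothetical category scheme: each category names the codes it
# clusters. In a real codebook each category would also carry an
# explicit definition, not just an enumeration of codes.
categories = {
    "psycho-social stresses": ["resignation", "self-doubt"],
    "coping behaviours": ["retraining", "seeking support"],
}

# All codes currently in use in the (hypothetical) data.
codes_in_use = {"resignation", "self-doubt", "retraining",
                "seeking support"}

assigned = [code for codes in categories.values() for code in codes]

# Internal integrity (simplified): every code in use has found
# a home in some category.
assert set(assigned) == codes_in_use, "some codes are uncategorized"

# A crude proxy for external integrity: no code is claimed by two
# categories, so category boundaries do not overlap.
assert len(assigned) == len(set(assigned)), "a code sits in two categories"

print("category scheme is consistent")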
4. Discourses
Discourses on qualitative content analysis range from discussions of validity and reliability to the implications of Big Data for qualitative content analysis, in terms of both sample size and over-reliance on pattern recognition. While computer-assisted qualitative data analysis software (CAQDAS) is instrumental in processing data in an organised and timely fashion, issues arise in qualitative data analysis, especially when the sample is large and derived from a data dump.
Issues tend to arise with the validity and reliability of qualitative content analysis, and Kelle and Laurie (1995) identify potential problems with the method. They assert that qualitative research is capable of producing infinite numbers of similarly true but not contradictory descriptions and explanations of the same phenomenon. When undertaking this sort of research, it is not enough merely to borrow scientific methods from other qualitative research practices; instead, following the coding and categorization of data, the research question or hypothesis must be reviewed in order to check the reliability of the findings and to avoid contaminating the results with the worldviews of the research team. Objectivity is important here, as assumed knowledge may lead to a narrowing of the research question and skewed results. In essence, the goal of qualitative content analysis is to understand a phenomenon, not to analyse an entire population through statistical data.
The increased availability of large data sets (Big Data) facilitated by social platforms has underlined the need for a return to qualitative content analysis and for a hybridization of traditional methods and software interpretation. The human subject still has a role to play in qualitative content analysis, being capable of extracting latent content and contextualising data within the media ecosystem; the human capacity for sensitivity is also important to this method. One potential problem with the analysis of Big Data is the tendency of social science researchers to look for patterns in large data dumps, distracting from the subtleties of the content that could be analysed qualitatively. In their research into content analysis and Big Data, Lewis, Zamith and Hermida (2013) identify difficulties presented by new social platforms for qualitative research. Where once text analysis could characterize people into actor roles (such as activist or communist), today's 'dividuals' (Deleuze, 1990) are not so easily pigeonholed: in their analysis of Twitter users and those users' bios, the subjects no longer belonged to just one actor role but to a plurality of roles. This example is highly relevant because it reflects the new challenges faced by qualitative researchers in text analysis: closed, professional media has become open, collaborative media with a plurality of identities at play. This has repercussions for sampling, categorization and coding.
5. Case Examples
Examples of qualitative content analysis are plentiful, and there is little space to discuss all of their applications here. Instead, let us refer to two cases: qualitative content analysis and unemployment, and qualitative content analysis and news and social trends. The first case refers to the 2000 study by Mayring, Koenig, Birk and Hurst into unemployment among teachers in Eastern Germany. This involved taking a sample of 50 unemployed teachers and, through open-ended interviews and open-ended biographical questionnaires, establishing two categories: psycho-social stresses and coping behaviours. Through inductive and deductive computer-assisted content analysis, the research team was able to determine that unemployment and German reunification were causing increased and specific stresses, as well as new adaptabilities (Mayring, 2000). The second case is Danielson and Lasorsa's 1997 investigation into news and social trends using online text analysis. They studied 100 years of front pages of the New York Times and the LA Times, and over the course of this period observed a decrease in focus on the individual in favour of the group, and a move away from religion and local government (Neuendorf, 2002).
Thus, through a mix of computer-assisted qualitative data analysis and more traditional qualitative methods such as interviewing and surveying, valuable conclusions can be drawn that would be unobtainable through purely quantitative research.
6. Software
Computer-assisted qualitative data analysis software (CAQDAS) is instrumental in qualitative research, as it simplifies the process of text analysis in terms of categorization, coding rules and commentary. Furthermore, it offers researchers the facility to locate data using a search function. Software also allows researchers to record all the necessary steps, allowing for replication. While software is important in structuring research and data, it is important to realise that it cannot effectively analyse data according to a specific hypothesis; that remains the job of the human researcher. Software, especially web-based software, is particularly useful for researchers working in large, unfixed teams, as it allows for a collaborative approach.
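As a generic illustration (not the interface of any actual CAQDAS package), the search facility described above amounts to filtering coded segments by keyword or by code label; the data and the search helper below are hypothetical:

# Hypothetical store of coded segments: (text, codes) pairs.
coded_segments = [
    ("I stopped applying after the third rejection.", ["resignation"]),
    ("My family kept telling me something would turn up.",
     ["coping behaviour"]),
]

def search(segments, keyword=None, code=None):
    """Return segments whose text contains the keyword and/or
    that carry the given code label."""
    hits = []
    for text, codes in segments:
        if keyword is not None and keyword.lower() not in text.lower():
            continue
        if code is not None and code not in codes:
            continue
        hits.append((text, codes))
    return hits

print(search(coded_segments, keyword="family"))
print(search(coded_segments, code="resignation"))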
CAQDAS
Software which aids qualitative research through the provision of coding tools, linking tools, mapping tools, search tools, query tools and annotation tools. A plurality of software packages exists for qualitative research; some of these are outlined below:
Paid Options
Atlas.ti 7 http://www.atlasti.com/index.html
- Runs on Windows; Mac OS and iPad versions have a 2014 release date.
- Student license: 75 (2 years); 39 (6 months).
NVivo http://www.qsrinternational.com/products_nvivo.aspx
- Runs on Windows (a Mac OS version expected in 2013).
- Allows for capture of content from Facebook, Twitter, LinkedIn and YouTube.
- Student price: 95.
Concordance® http://www.lexisnexis.com/en-us/litigation/products/concordance.page
- Runs on Windows.
- Free 30-day trial available.
Open Source / Free
Aquad http://www.aquad.de/en/
- Windows.
- Spanish and German versions also available.
CATMA http://www.catma.de/
- Windows, MacOS, Linux and Web-based.
- Allows for the collaborative transfer of results online.
Compendium http://compendium.open.ac.uk/institute/download/download.htm
- Windows, MacOS, Linux.
- Mapping software, latest version includes movie clip mapping.
TAMS Analyzer http://tamsys.sourceforge.net/
- MacOS
- Offers coding, video coding, and analysis features.
Even more software options can be found at:
http://en.wikipedia.org/wiki/Computer-assisted_qualitative_data_analysis_software
7. Literature
Baptiste, Ian. "Qualitative Data Analysis: Common Phases, Strategic Differences." Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 2.3 (2001). Web. 20 Sept. 2013.
http://www.qualitative-research.net/index.php/fqs/article/view/917/2002
Bleicher, Josef. Contemporary Hermeneutics: Hermeneutics as Method, Philosophy, and Critique. Routledge & Kegan Paul, 1980. Print.
Charmaz, Kathy. Constructing Grounded Theory: A Practical Guide through Qualitative Analysis. London: Sage Publications, 2007. pp. 41 - 71. Print.
Denzin, Norman K., and Yvonna S. Lincoln. The SAGE Handbook of Qualitative Research. SAGE, 2005. Print.
Flick, Uwe. An Introduction to Qualitative Research. SAGE, 2009. Print.
Given, Lisa M., et al. The Sage Encyclopedia of Qualitative Research Methods. Thousand Oaks: Sage Publications, 2008. Print.
Grondin, Jean. Sources of Hermeneutics. SUNY Press, 1995. Print.
Hsieh, Hsiu-Fang, and Sarah E. Shannon. "Three Approaches to Qualitative Content Analysis." Qualitative Health Research 15.9 (2005): 1277-1288. Print.
Jacoby, Liva, and Laura A. Siminoff. "Qualitative Content Analysis." Empirical Methods for Bioethics: A Primer. Amsterdam: Elsevier JAI, 2008. pp. 39-42, 58,60. Print.
Kelle, Udo, Gerald Prein, and Katherine Bird. "General Methodological Issues."Computer-aided Qualitative Data Analysis: Theory, Methods and Practice. London: Sage Publications, 1995. pp. 5-17. Print.
Kinsella, Elizabeth Anne. Hermeneutics and Critical Hermeneutics: Exploring Possibilities Within the Art of Interpretation. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 7.3 (2006): n. pag. www.qualitative-research.net. Web. 21 Sept. 2013.
Kisiel, Theodore. The Genesis of Heidegger's Being and Time. University of California Press, 1995. Print.
Kracauer, Siegfried. "The Challenge of Qualitative Content Analysis." The Public Opinion Quarterly 16.4 (1952): 631-642. Print.
Lewis, Seth C., Rodrigo Zamith, and Alfred Hermida. "Content Analysis in an Era of Big Data: A Hybrid Approach to Computational and Manual Methods." Journal of Broadcasting & Electronic Media 57.1 (2013): 34-52. doi:10.1080/08838151.2012.76170.
Mayring, Philipp. "Qualitative Content Analysis." Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 1.2 (2000). Web.
Neuendorf, Kimberly A. "Contexts and Computer Content Analysis Software." The Content Analysis Guidebook. Thousand Oaks, CA: Sage Publications, 2002. pp. 191-204, 225-239. Print.
Riffe, Daniel, Stephen Lacy, and Frederick G. Fico. Analyzing Media Messages: Using Quantitative Content Analysis in Research. New Jersey: Lawrence Erlbaum Associates, 2005. pp. 122-126. Print.
Schleiermacher, Friedrich. Schleiermacher: Hermeneutics and Criticism: And Other Writings. Cambridge University Press, 1998. Print.
Saldaña, Johnny. The Coding Manual for Qualitative Researchers. London: Sage Publications, 2009. pp. 1-31. Print.
Stemler, Steve. "An Overview of Content Analysis." Practical Assessment, Research & Evaluation 7.17 (2001). Web.
Weber, Robert Philip. Basic Content Analysis. 2nd ed. Quantitative Applications in the Social Sciences. Sage Publications, 1990. Print.
Website
"Computer-assisted Qualitative Data Analysis Software." Wikipedia. Wikimedia Foundation, 22 Aug. 2013. Web. 21 Sept. 2013.