Tracing relational affect on social platforms through image recognition

The case of the Syrian war

Team Members

Marloes Geboers, Daniele Zanetti, Andrea Benedetti, Nelly Marina, Stefanie Trapl, SeongIn Choi, Adam Ferron, Nisha Rani, Zhao Jing, Concetta La Mattina, Luca del Fabbro, Salvatore Romano, Yitong Tang

Contents

Summary of Key Findings

Studying images that circulate on social media poses specific methodological challenges, which in turn have directed scholarly attention towards the computational interpretation of visual data. When analyzing large numbers of images, both content analysis and cultural analytics have proven valuable; however, they do not take into account the circulation and audiencing (Rose, 2016) of images within a socio-technical environment. Employing networked affect as a conceptual and theoretical framework, this project sets out to advance a new methodology that takes into account image content, context, and technicity. We mapped images and their affective potentialities by blending computational analysis of images with established digital methods, mainly using platform ranking data (like, retweet, and comment counts).

Content (and networkedness)

We decided to disentangle the Arabic and English language spaces on both platforms (querying similar hashtags in each language separately) in order to lay bare possible differences in narratives and discursive frameworks regarding migrant issues. On Twitter we saw clear differences in image vernaculars between the two language spaces. On Instagram we could separate the languages through Arabic and English tag queries; however, the Arabic tags that were comparable to the Twitter tags were insignificant in terms of frequency. Interestingly, the English-language tags were predominantly talked about in Arabic, at least in the case of our queries. Fortunately, one of our team members mastered the language, so we could deal with this in our analyses.

Differences between the vernaculars of the English and Arabic Twitter language spaces were especially apparent in the more resonating (more retweeted) images circulating in each space. While the overall image objects (clustered through automated annotation with Google Vision) are quite similar in both language spaces, what resonates with the audiences of these images differs significantly. The most retweeted images in the English language space show portraits of young refugees and refugee children, combined with images of the artifacts (e.g., the keys to their house in Syria) they took from home as a reminder of the place they had to leave behind. These images were published by UNICEF and gained traction primarily due to World Refugee Day on June 20, a date that fell within our queried time slot (June 2019). While the other resonating images in the English space often depict a typical charity-style narrative, tapping into the notion of common humanity so as to appeal for solidarity (see the work of Lilie Chouliaraki on humanitarian appeal imagery in The Ironic Spectator, 2013), the images found in the Arabic space are more diverse in terms of their protagonists: one sees not merely children as 'vehicles' for calls for solidarity but a more varied depiction of families, including men, elderly people, and aid workers on the ground. These protagonists are also present in the most retweeted images here, pointing towards an audience that might relate more to the everyday hardship of living in a battered place or a refugee camp and less to the charity message conveyed through personalized portraits.

There is also an interesting platform divide between the vernaculars of Instagram and Twitter. Through its exuberant tagging culture, Instagram pulls far more unrelated content into the network (e.g., dresses for sale). This is partly an artifact of collecting data through tag queries: on Instagram, queried tags are almost always used alongside unrelated tags, which brings unrelated content into the mix.

Context

Contextually, we found interesting differences between the Twitter language spaces. Semantic networks show that the English space ties much more into calls for (international) action and into outlining the geopolitical dynamics of the Syrian conflict and its consequences in terms of migration. The Arabic language space is much more about the local situation within Syria and within the refugee camps. This semantic network also holds affectively laden words such as 'support' and 'displacement'. A side note: the Arabic data set was much smaller, so we worked with far fewer words than in the English language space.
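The semantic networks referred to here were made with WORDij (see the findings section). For readers who want to approximate this kind of analysis in code, a minimal word co-occurrence sketch in Python could look as follows; the column name 'text', the stopword list, and the co-occurrence threshold are all assumptions, and the sketch only roughly approximates what WORDij produces.

import itertools
import re
from collections import Counter

import networkx as nx
import pandas as pd

df = pd.read_csv("tcat_tweet_export.csv")  # assumed column: text
stopwords = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on", "rt"}

# Count how often pairs of words appear together in the same tweet
pairs = Counter()
for text in df["text"].dropna():
    words = {w for w in re.findall(r"[#\w']+", text.lower())
             if w not in stopwords and not w.startswith("http")}
    pairs.update(itertools.combinations(sorted(words), 2))

# Keep only pairs that co-occur at least 5 times and export for layout in Gephi
G = nx.Graph()
for (a, b), weight in pairs.items():
    if weight >= 5:
        G.add_edge(a, b, weight=weight)
nx.write_gexf(G, "semantic_network.gexf")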

Technicity

Another interesting finding emerged from an analysis that combined Google Vision API data (annotated object labels) with the hashtags that correspond to these objects; that is, we tried to see whether image language and hashtag language align: do soldiers in images also get tagged with #soldiers? In our case (we ran this analysis on Instagram data queried with Syrian migration-related hashtags) there was quite a strong alignment between the content of the images and the content of the hashtags. The protocol of this analysis holds promise for future research, especially for longitudinal analyses of hashtags and their visual languages as they evolve over time.

1. Introduction

As the number of digital images of conflicts expands, the study of the visual is of paramount importance. This study analyzes images that bond affective publics in networked spaces pertaining to the Syrian war and refugee crisis on Twitter and Instagram. Online affective publics are mobilized and connected through expressions of shared sentiment (Papacharissi, 2015). Given the well-documented use of Twitter for political mobilization and discourse in the extant literature, and the visual nature of Instagram, we focused on these two platforms. This study maps large numbers of images and their affective potential through mixed methods: automated computational analysis of images that gained traction in hashtag spaces relating to Syrian migrant issues, combined with a qualitative study of the visual patterns that result from the automated analyses. By including visual narratives, we contribute to the study of emotions, affect, feelings, and sentiments, and of how these are articulated in online communities.

A bit on the time span and case

We decided to focus on the narration of issues relating to the mounting tension between residents of Lebanon and Syrian refugees living in camps there. Lebanon struggles to provide basic amenities, partly due to drought, which has pushed the government to increase pressure on refugees to return to their home country, a move that is strongly advised against by, among others, UNICEF.

A bit on affect and emotion

Although the terms are often used interchangeably, we uphold the distinction between affect and emotion: emotions are understood as being subsumed within affect (Papacharissi, 2015), where affect is seen as the 'moving force' that precedes emotional expression. This moving force is articulated by Massumi (2010), who describes affect as a bodily sensation in an individual, a reaction to stimuli characterized by intensity and energy but lacking a conscious orientation or interpretation. Affect is a force preceding emotion and is subsumed in emotion (Papacharissi, 2015), and as such it cannot be observed directly; one cannot observe forces, only their effects: "Newton did not see gravity. He felt its effect: a pain in the head" (Massumi, 2010, p. 160). This very individual and unconscious process cannot be detected in the (often carefully) edited and staged emotional expressions in tweet content. However, through the emotional expressions in such content and their resonance in the sharing behavior of the audience, we can determine the relational interpretation of affect experienced in individual bodies that, in turn, are part of a networked public tied together by the collective use of particular hashtags (Bruns & Burgess, 2015; Papacharissi, 2015).

2. Initial Data Sets

We used a subset of Twitter data from 1 June until 1 July 2019, collected with DMI-TCAT, focusing on a time span that included mounting tensions in refugee camps in Syria's neighboring countries, especially Lebanon. The mother bin holds a range of both English and Arabic hashtags and keywords relating to the Syrian conflict and its subsequent refugee crisis.

We also scraped Instagram data using the DMI Instagram Scraper. This worked well because we did not have to scrape far back in time, which prevented gaps in the data caused by API restrictions.

The queries and research protocol are outlined below, in the methodology section.

3. Research Questions


How is migration-related suffering mediated through images circulating within particular Twitter and Instagram spaces?

(CONTENT LEVEL)

How is migration-related suffering mediated through text circulating within particular Twitter and Instagram spaces?

(CONTEXTUAL LEVEL)

Can we detect relational affect through the resonance of images of migration?
(NETWORKEDNESS LEVEL)

The overarching question is of a METHODOLOGICAL nature:

Can we advance visual methods by making use of digital methods?

4. Methodology

We set off by establishing relevant hashtags to query, based on the co-occurrence of hashtags on Twitter in the time span of 1 June to 1 July 2019. Through the co-hashtag graph module in DMI-TCAT we calculated modularity classes and then zoomed in on the relevant clusters (that is, clusters pertaining to the migrant crisis in refugee camps bordering Syria, disentangling Arabic and English language clusters). Within these clusters, the most frequent tags were selected to be queried. The exact queries that came out of this endeavor can be found in the protocol diagrams below.
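For transparency, a minimal sketch of this clustering step outside of TCAT and Gephi could look as follows (in Python). The export file name and its columns 'id' and 'hashtag' are assumptions about the TCAT hashtag export, and Louvain community detection is used as a stand-in for Gephi's modularity class.

import itertools
from collections import Counter, defaultdict

import community as community_louvain  # python-louvain
import networkx as nx
import pandas as pd

# Assumed TCAT export: one row per (tweet id, hashtag) pair
df = pd.read_csv("tcat_hashtag_export.csv")
df["hashtag"] = df["hashtag"].str.lower()

# Build a weighted co-hashtag graph: hashtags are tied when used in the same tweet
G = nx.Graph()
for _, tags in df.groupby("id")["hashtag"]:
    for a, b in itertools.combinations(sorted(set(tags)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Louvain community detection, comparable to Gephi's modularity class
partition = community_louvain.best_partition(G, weight="weight")

# List the most frequent hashtags per cluster as candidates for follow-up queries
freq = Counter(df["hashtag"])
clusters = defaultdict(list)
for tag, cluster_id in partition.items():
    clusters[cluster_id].append(tag)
for cluster_id, tags in sorted(clusters.items()):
    top = sorted(tags, key=lambda t: freq[t], reverse=True)[:10]
    print(cluster_id, top)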

In short, we made full exports of the English and Arabic Twitter data and ran the media URLs of the images through Memespector (Python version), a tool that taps into the Google Cloud Vision API in order to annotate image objects. The tool also outputs a GEXF network file of annotated labels and images, which can be tweaked to size images by platform ranking data, providing a sense of what resonates within hashtag spaces.
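A minimal sketch of what this annotation step does (approximating the Memespector workflow rather than reproducing its actual code) is given below. It assumes the google-cloud-vision client with credentials set up, and a media export with the columns 'media_url' and 'retweet_count' (the file name and column names are assumptions).

import networkx as nx
import pandas as pd
from google.cloud import vision  # requires GOOGLE_APPLICATION_CREDENTIALS to be set

df = pd.read_csv("twitter_media_export.csv")  # assumed columns: media_url, retweet_count
client = vision.ImageAnnotatorClient()

G = nx.Graph()
for _, row in df.iterrows():
    # Ask the Vision API to label the objects in the remotely hosted image
    image = vision.Image()
    image.source.image_uri = row["media_url"]
    response = client.label_detection(image=image)

    # The image node carries the retweet count, later used to size nodes in Gephi
    G.add_node(row["media_url"], type="image", retweets=int(row["retweet_count"]))
    for label in response.label_annotations:
        G.add_node(label.description, type="label")
        G.add_edge(row["media_url"], label.description, weight=float(label.score))

# Bipartite image/label network, to be styled and explored in Gephi
nx.write_gexf(G, "image_label_network.gexf")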

We also set out to compare the organization of images by Vision API object labels, this 'objective' way of organizing images, with the way in which hashtags organize images along a discursive grid that is more contextual (a sketch of one possible operationalization follows below).
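One possible way to operationalize this comparison is sketched here: for each post, the overlap between its Vision API labels and its hashtags is computed, with a crude normalization so that, for instance, the label 'soldier' can match the hashtag #soldiers. This is a simplified illustration rather than the exact project protocol; the input file and its columns 'labels' and 'hashtags' (semicolon-separated strings) are assumptions.

import pandas as pd

def normalize(term: str) -> str:
    # Crude normalization: lowercase, strip the '#', and drop a plural 's'
    term = term.lower().lstrip("#").strip()
    return term[:-1] if term.endswith("s") else term

df = pd.read_csv("instagram_posts_annotated.csv").fillna({"labels": "", "hashtags": ""})

def alignment(row) -> float:
    labels = {normalize(t) for t in row["labels"].split(";") if t.strip()}
    tags = {normalize(t) for t in row["hashtags"].split(";") if t.strip()}
    if not labels or not tags:
        return 0.0
    return len(labels & tags) / len(labels | tags)  # Jaccard overlap per post

df["alignment"] = df.apply(alignment, axis=1)
print(df["alignment"].describe())    # distribution of per-post alignment
print((df["alignment"] > 0).mean())  # share of posts with at least one label/tag match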

Figure 1: protocol for Twitter

Figure 2: Instagram Protocol

5. Findings

Figure 3: Crop of the image/annotated Vision API label network for English Twitter. Images sized by retweets.

Figure 4. Twitter, Arabic language space. Note that the tags in this network were not fully comparable to the English tag query.

Figure 5: Image/hashtag network of English Twitter

Figure 6: Semantic network of tweeted text, made with WORDij. English Twitter space.

Figure 7. Semantic network of Arabic Twitter.

Figure 8. Image/annotated labels network Instagram. Queried for migration but not in relation to Syria.

Figure 9. The semantic network of Instagram, made with WORDij, clearly pointing to distinctive discursive frameworks.

The “objective” machine and the “subjective” human meanings

Figure 10: Images organized by annotated object labels, depicting distinct clusters of similar objects. Note how one can categorize the images not by visual formal characteristics, as in cultural analytics, but by their content.

Figure 11: These images broker between two distinct object clusters.

Figure 12: An overview of how hashtags evolve (are shared between images) along the networked path in the visualization of the label-based organization of images (figure 10).

Figure 13: Comparison between hashtag annotations and object label (Vision API) annotations. In our case, the images seem to align with the hashtag narratives.

6. Discussion

According to Rose, meaning in images is rarely confined to the identification of faces and features, but arises on a much more implicit and tacit level (Bechmann and Bowker, 2019). How, then, might automated object labeling - that is, doing things with content at a quantified scale (D'Orazio, 2013) - do justice to the meaning of images, which often has to do with the incentives for posting content to one's profile (Bechmann, 2017) and thereby creating social value? Tracking image detection errors - false negatives and positives - can be seen as a way of re-engineering what are otherwise often typified as algorithmic 'black boxes'. From there, academics studying social media should move forward to include sensitivity to the social values that characterize the diffusion of visual content on social platforms. Users post images with certain incentives - to show off good taste, for example (Bechmann, 2017) - and with that they create social value for platforms (Pybus, 2015). While engaging with an issue such as the Syrian migrant crisis has little to do with showing off good taste, it has much to do with how publics narrate the story of suffering and position themselves in relation to that suffering.

For future research, we would like to point out the possibilities of analyzing content using emoji language. In the image below (figure 14) we see co-occurrences of the emoji used in the Instagram data on Syrian migrant issues. Combined with the images that co-occur with certain (combinations of) emoji, this could provide a way to network Twitter images along an emoji grid, in a similar fashion to an earlier DMI project conducted in 2017: https://wiki.digitalmethods.net/Dmi/EmotionalClicktivism.

Figure 14: co-occurrences of emoji in Instagram data pertaining to Syrian migration issues.
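As a first step in that direction, a minimal sketch of extracting emoji from Instagram captions and counting which emoji appear together in the same post could look as follows; it assumes the third-party emoji package (version 2.x) and an export with a caption column named 'caption' (both the file name and the column name are assumptions).

import itertools
from collections import Counter

import emoji  # third-party 'emoji' package
import networkx as nx
import pandas as pd

df = pd.read_csv("instagram_export.csv")  # assumed column: caption

# Count co-occurrences of distinct emoji within the same caption
cooc = Counter()
for caption in df["caption"].dropna():
    found = {item["emoji"] for item in emoji.emoji_list(caption)}
    cooc.update(itertools.combinations(sorted(found), 2))

# Export as a weighted network, e.g. for a visualization like figure 14
G = nx.Graph()
for (a, b), weight in cooc.items():
    G.add_edge(a, b, weight=weight)
nx.write_gexf(G, "emoji_cooccurrence.gexf")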

7. Conclusions

See also Summary of Key Findings.

Using resonant images within hashtag spaces related to Syrian issues on Twitter and Instagram, we argue that studying hashtags through their diverging visual content leads to a better understanding of how different hashtags mobilize different affective publics. Building on the possibilities of such an approach, future research should aim to combine the outlined computational and network analyses of resonating images with analyses of prominent actors and their active followers. Such an approach sheds light on what hashtags 'do' to both content and the creation of affective publics within socio-technical spaces.

8. References

Bechmann, A., & Bowker, G. C. (2019). Unsupervised by any other name: Hidden layers of knowledge production in artificial intelligence on social media. Big Data & Society, 6(1).

Bechmann, A. (2017). Keeping it real: From faces and features to social values in deep learning algorithms on social media images. In Proceedings of the 50th Hawaii International Conference on System Sciences.

Bruns, A., & Burgess, J. (2015). Twitter hashtags from ad hoc to calculated publics. In N. Rambukkana (Ed.), Hashtag publics: The power and politics of discursive networks (pp. 13-27). New York, NY: Peter Lang.

D’Orazio, F. (2013). The future of social media research: Or how to re-invent social listening in 10 steps. Pulsar Platform Blog. Retrieved from: https://www.pulsarplatform.com/resources/the-future-of-social-media-research/

Massumi, B. (2010). The political ontology of threat. In M. Gregg and G. Seigworth (Eds.), The affect theory reader (pp. 52–70). Durham, NC: Duke University Press.

Papacharissi, Z. (2015). Affective publics: Sentiment, technology, and politics. Oxford, UK: Oxford University Press.

Rieder, B., Den Tex, E., & Mintz, A. (2018). Memespector [Computer software]. Retrieved from https://github.com/bernorieder/memespector/

Rieder, B., & Sire, G. (2013). Conflicts of interest and incentives to bias: A microeconomic critique of Google’s tangled position on the Web. New Media & Society, 16(2), 195–211. https://doi.org/10.1177/1461444813481195
