The Weekly Create – September 18, 2014 – How to Read the Human Brain (Kinda, Sorta, Not Reeealllyyy…)


http://dlsanthology.commons.mla.org/text-analysis-data-mining-and-visualizations-in-literary-scholarship/

Humans have an uncanny ability to take stuff from their brains and transcribe it into a different modality. Text is the “old school” version of this: those who were most gifted made their words sing on the printed page, while other humans read this brain output with awe and wonderment (and, in the case of some singers, like James Joyce and later Virginia Woolf, frustration, if we don’t give up midway through the novel).

The “newer school” version, for us mere mortals interpreting what the great writers of the past wrote, is digital software that can “map” the words, rhythms, styles, cadences, and “grooves” of a writer. Such tools can create a “magnifying glass which can draw the text out of its shy place to look at the layout of a page, and enlarging and making comprehensible some chosen bit”. The technology can help readers see the forest for the trees, but formulating a nuanced thesis or hypothesis still requires human brainpower.

Clement discusses differential reading, in which we examine a piece of writing through different academic lenses: we see the art, the cultural production, the practical criticism (not getting too hung up on the particulars of the text), and the philosophy of a text, and meld them together to fit our own computer mind (in a sense). The software allows for a structure of archivization (as Derrida would say), but it is still dependent on humans in the more structured ways we use this information within text analysis, data mining, and visualization methodologies.

Technology such as StageGraph can map a Shakespearean play’s “look”, such as the length of its acts, while Voyant can show word frequencies in novels and provide insight into what a writer may have intended when placing a particular word or group of words in particular places within the text. But keep in mind that even the clean and seemingly infallible computer technologies, like all literature, comprise “situated knowledges”: we are looking at representations of representations of representations.
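Under the hood, the word-frequency view that a tool like Voyant offers boils down to counting. The sketch below is illustrative only, not Voyant’s actual implementation; the sample sentence and the tiny stopword list are invented for the example.

```python
# A minimal sketch of the kind of word-frequency count a tool like
# Voyant performs; sample text and stopword list are illustrative only.
import re
from collections import Counter

STOPWORDS = frozenset({"the", "a", "of", "and", "to", "in", "its"})

def word_frequencies(text):
    """Lowercase the text, split it into words, drop stopwords, and count."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

sample = "The sea was the sea, and the sea kept its secrets."
freqs = word_frequencies(sample)
print(freqs.most_common(3))  # the most frequent content words
```

Even this toy version makes the interpretive choices visible: deciding what counts as a “word” and which words are too common to matter is already a human judgment, not a neutral one.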

Tools such as Vocabulary Management Profiles can interpret patterns in texts and in how others have read texts, even regarding narrative style. But keep in mind that these technologies may be most beneficial in large-scale text collections, where they can reveal that an author’s apparent randomness in words or phrases may actually be deliberate, something “old school” scholars would have a hard time picking up.

The lone, solitary scholar does not easily situate herself or himself in creating meaning and new conversations within the digital humanities; collaboration by invisible hands is key. This collaboration is always in tune with the realization that distant reading (for example, data mining) must be entwined with the thoughts of the person (close reading) to think outside the box. Can language be code? What are the multiple, meaning-making properties of a literary text? Is there a common, agreed-upon meaning of the words in a literary text? Even if you use tools that seem “non-judgmental”, as if they had been created by robots, the messiness and doubt that humans confront in writing about complex concepts such as race, gender, class, and culture still have to be questioned, since these new technologies have their own messiness. Alas, human brain power is still needed to give this internet thing the dynamism to grow and move forward.

Even using Google’s nGram to track a word’s usage over a defined period of time must be viewed with a grain of salt. I am looking at the word primarily in the abstract: I was not present when word and meaning were constructed, and I am looking at them through a digital lens, which can obfuscate the messiness and fuzziness that actually lie at the heart of language production and meaning. To put it another way: What is pornography? I’ll know it when I see it.

I attempted to “see” what pornography was, in a textual sense, by using nGram to look at the 1950s. A book from 1958, Pornography and obscenity: handbook for censors, by David Herbert Lawrence and Henry Miller, apparently challenges the negative connotation of pornography and the effect of its “secrecy” on the individual. Venus in Boston: And Other Tales of Nineteenth-century City Life by George Thompson, from 1950, looked at pornography in a historical sense, in the 1800s. The Pornography of Violence in Literature: And a Check List of Titles with a High Content Level of Violence by Robert T. Jordan, from 1956, uses the negativity of the term to connote it with another negative term: violence. What can I surmise about the use of this word? Well, it can be bad or shameful. What else? I have to use my mind to think about it.
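The curve an nGram-style chart draws is just a relative frequency: occurrences of a word divided by the total words sampled in each period. The sketch below shows that calculation; the per-decade counts are invented for illustration and are not Google’s actual data.

```python
# A hedged sketch of the relative-frequency calculation behind an
# nGram-style chart. The counts below are INVENTED for illustration;
# they are not Google's actual data for any word.
corpus = {
    # decade: (occurrences of the target word, total words sampled)
    "1950s": (12, 100_000),
    "1960s": (45, 100_000),
    "1970s": (130, 100_000),
}

def relative_frequency(counts):
    """Convert raw counts to occurrences per million words, per decade."""
    return {
        decade: hits / total * 1_000_000
        for decade, (hits, total) in counts.items()
    }

for decade, per_million in relative_frequency(corpus).items():
    print(f"{decade}: {per_million:.0f} per million words")
```

The normalization matters: raw counts would mostly track how much was published in each decade, not how the word itself rose or fell. Even so, the chart says nothing about who used the word, in what register, or to what end; that interpretation still falls to the reader.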
