
Digital Decoding
and the Techno-Logic
of Representation

Till A. Heilmann
Institute for Media Studies
University of Basel

Paper presented at
2009 Annual Conference of the
Society for Science, Literature and the Arts
“Decodings”
Atlanta, Georgia, 8 November 2009


I shall argue that the digital decoding of information performed by computers undermines a seemingly unproblematic distinction, namely the distinction between ‘soft’, ambiguous, social codes on the one hand and ‘hard’, rigid, technical codes on the other hand. The distinction I am trying to criticize is, of course, an extremely simplified one. One could even say that it is only a construct of mine whose sole purpose is to be deconstructed. So before I discuss digital decoding and its consequences for the logic of representation by reference to some examples, I would like to give a very short account of some important concepts of social and technical codes.

Social and technical codes

In a wider sense, code comprises any means of expressing information or representation. The distinction between social and technical codes divides a highly heterogeneous field of cultural conventions, legal rules, ethical principles, political regulations, technical specifications, linguistic systems, corporate guidelines, and so forth into two homogeneous sectors and identifies each sector with a distinguishing mark. Code will then mean either a semantically rich mode of representation or the context-free and unambiguous mapping of terms.

Accordingly, decoding is conceived either as an interpretation of meaning or as a transformation of expressions. The main difference between social and technical decoding therefore seems to lie in the nature of their respective norms. While the interpretation of a social code can be more or less appropriate to a given context or situation, the transformation processes governed by a technical code can only ever be correct or false. Within social codes there is always a certain fuzziness that leaves room for misunderstanding or divergent readings, whereas technical codes specifically and structurally eliminate all ambiguity. On one side, then, we have so-called natural languages, fashion systems, or greeting rituals; on the other side we find things like Morse code, ASCII, and the programming language C.

From a communications-theoretical perspective we might rephrase the distinction between social and technical codes simply as the difference between communication and communications engineering. In communication the goal is typically to convey meaning by composing a suitable message. In communications engineering the goal is to transmit messages by selecting the right combination of signals. The corresponding modes of decoding—i.e. the interpretation of meaning and the transformation of signals—I shall call ‘semantic’ and ‘syntactic’ decoding. To illustrate this distinction I will briefly outline some exemplary concepts of social and technical codes.

One of the first uses of the term ‘code’ for so-called human communication can be found in the work of the Swiss linguist Ferdinand de Saussure. In his Course in General Linguistics he differentiates between language and speech (‘langue’ and ‘parole’) and notes that in speech one can discern “the combinations by which the speaking subject utilises the code of language in order to express his personal thoughts”.1 Saussure focuses on language as a system of signs. The ‘code of language’ is simply another word for the totality of this system, in which every single position is assigned a unique and identifiable value. In Saussure’s notion of language as a system there is no room for ambiguity.2 Code provides semantics through syntax.

The most prominent and probably most important thinker of code within the context of communications engineering is, of course, Claude E. Shannon. His work on information theory and its consequences not only for engineering but also for the conceptualization of human communication have been thoroughly discussed and debated. Here, I will only quote his famous statement that “messages have meaning” but that “[t]hese semantic aspects of communication are irrelevant to the engineering problem”.3 For Shannon, questions of code concern the pure syntax of signals. Decoding is a procedure that can be mathematically described without any reference to meaning. Matters of meaning are situated on a very different level. But in spite of Shannon’s straightforward remark about the character and scope of his work, his model of communication has—with slight modifications—been used over and over again to describe human communication, most notably perhaps by Russian linguist and literary critic Roman Jakobson.

Jakobson’s own model of communication is clearly influenced by Shannon’s. In the 1950s, as a professor at Harvard and later at MIT, Jakobson made cybernetics and information theory part of his linguistic research as he tried to ground it in “hard” scientific theories.4 Jakobson’s 1960 address “Linguistics and Poetics” describes communication—much like Shannon’s model—as a bipolar process of transmission with the message being encoded at the source and decoded at the destination according to the ‘code of language’ used.5 Code plays an important role in Jakobson’s model, but it is only one of six factors involved in communication. To fully succeed, the decoding of a message must take into account not only the systematic aspect of language but also its emotive, referential, conative, phatic, and poetic functions.

One of the most influential approaches to social code and communication is British sociologist Stuart Hall’s work on “Encoding/Decoding” from the early 1970s. Hall stresses the productive character of coding processes and sees meaning as the result of discursive practices. Encoding and decoding are not, as in Saussure’s and Shannon’s work, non-semantic mechanisms on a purely systematic or syntactic level of communication. For Hall, code is inseparable from questions of semantics. Meaning is not so much transmitted as it is produced, by the sender’s encoding as well as by the receiver’s decoding of the message. Since the codes used by encoder and decoder are not necessarily the same and depend, among other things, on social position, there is no guarantee of perfect symmetry between the meanings of encoded and decoded messages. Thus, Hall can distinguish between dominant, negotiated, and oppositional decodings or ‘readings’ of messages.

This brief survey was meant to highlight the main difference between social and technical codes. To summarize with the help of two simplistic examples: A telegraphic message in Morse code can be decoded only as one specific string of roman letters and numerals. The sequence ‘dot dash’ always stands for the letter A, never for E, I, O, U, or anything else. The linguistic message transmitted by a technical code like Morse code, on the other hand, can be decoded in multiple and even opposing ways. For example, a proposal to limit bankers’ pay could be read as a political instrument resolving a ‘moral issue’, but also as ‘a socialist government intervention’.
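
To make the syntactic half of this comparison concrete, here is a minimal sketch in Python (with a deliberately truncated code table of my own choosing) showing that Morse decoding admits exactly one reading per sequence:

    # A toy fragment of syntactic decoding: every Morse sequence maps to
    # exactly one character, leaving no room for divergent readings.
    # (The code table is truncated for illustration.)
    MORSE = {".-": "A", "-...": "B", "-.-.": "C", ".": "E", "...": "S", "---": "O"}

    message = "... --- ..."
    print("".join(MORSE[symbol] for symbol in message.split()))   # -> "SOS"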

Digital decoding

My argument is basically this: The digital computer is such a powerful device because it implements divergent ‘readings’ of code which are usually thought to be the hallmark of social codes and semantic decoding. Whereas in social codes the multiplicity of decoding refers to the semantic field, the multiplicity of digital decoding is possible precisely due to the fact that it abstracts from all meaning and operates as a series of syntactic transformations.

Technical codes are instruments not only for the transformation of one kind of representation into another but also for ever-expanding control over the process of transformation itself. From the beginning, technical codes have been accompanied by or have incorporated various mechanisms for efficient, accurate, and reliable syntactic decoding—and these mechanisms are continuously optimized. To mention but a few examples: In Morse code the most frequent letters of written English are assigned the shortest sequences of dots and dashes. The Murray code, also used for the automatic transmission of telegraphic messages, was designed to minimize wear on the machinery. In ASCII, the structure of the code set reflects the need for easy ordering and sequencing.6 Storage media such as compact discs and hard drives rely on elaborate algorithms (such as error-correcting codes) to detect errors and correct them on the fly. The Transmission Control Protocol, one of the core elements of modern-day digital communications infrastructure, includes checksums to help ensure the correctness of transmitted data. All these mechanisms work towards securing and enhancing transformation processes so that ever greater volumes of data can be stored and transmitted at ever increasing speed.
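
To give a concrete impression of one such mechanism, the following sketch implements a 16-bit ones’-complement checksum of the kind used by TCP/IP (cf. RFC 1071); the function name and the sample message are mine, for illustration only:

    # A minimal sketch of a TCP/IP-style 16-bit ones'-complement checksum.
    # (Function name and sample message are illustrative.)
    def internet_checksum(data: bytes) -> int:
        if len(data) % 2:                         # pad odd-length data with a zero byte
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]     # sum the data as 16-bit words
            total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
        return ~total & 0xFFFF                    # ones' complement of the sum

    message = b"syntactic decoding"
    print(hex(internet_checksum(message)))
    # The receiver recomputes the sum; a mismatch signals a corrupted transmission.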

Matthew Kirschenbaum has recently introduced the distinction between forensic and formal materiality into the analysis of media. Kirschenbaum maintains that while in the physical world, i.e. on the level of forensic materiality, “no two things […] are ever exactly alike”, digital computers create a world of perfect uniformity. To store, transmit, and process data, computers build a technical environment in which there is “identification without ambiguity, transmission without loss, repetition without originality”.7 Despite its unique forensic manifestation, every bit representing 1 has the same formal identity as every other bit representing 1. Just as the shapes of the dots on a punched paper tape may vary ever so slightly and still be identifiable as the type ‘dot’, the shapes of electromagnetic traces on a hard drive platter may be minimally distorted. Within the defined range of tolerance such variations in forensic materiality do not matter. In the syntactic chain of digital processing every bit functions as a formal element and always represents either 0 or 1, nothing in between. The crucial point for a better understanding of digital decoding, though, is not this perfect sameness of representation as such. Rather, it is the “endless permutations” of syntax made possible by the sameness of representation.8 Kirschenbaum himself says as much but, to my mind, does not fully develop the implications of this fact.
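
The point can be illustrated with a toy example (the voltage readings and the threshold are hypothetical): however much the forensic traces vary, the decoded bits are formally identical as long as the readings stay within the range of tolerance.

    # Noisy 'forensic' readings decoded into identical 'formal' bits.
    # (The voltage values and the threshold are hypothetical.)
    readings = [0.02, 0.11, 3.28, 3.19, 0.07, 3.31]   # volts, never exactly alike

    def decode_bit(voltage, threshold=1.5):
        return 1 if voltage >= threshold else 0

    print([decode_bit(v) for v in readings])   # [0, 0, 1, 1, 0, 1]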

The features of formal materiality are absolute definitude and uniformity. However, the purpose of these features is not only stability of representation but also, and more importantly, variability of representation. The techno-logic of digital representation turns everything into a variable, a syntactic element that can be changed at will. In digital media, as Lev Manovich notes, “[i]f something stays the same for a while, that is an exception rather than the norm”.9 The message of the mechanical age and of mass reproduction was “more of the same”; the message of the digital age and of digital representation is an alchemic “everything in every form”.

Seen from the side of en-coding, the variability of digital representation simply means that digital objects can be and are changed continually: Text documents are edited, images retouched and composited, sound recordings remixed, spreadsheets updated, lists sorted, confidential files encrypted and decrypted, databases extended, applications upgraded, and so on. Seen from the side of de-coding, variability means several things.

First, there is the basic fact that every act of accessing a digital object is an act of syntactic decoding that transforms digital representations into texts, images, and sounds for human eyes and ears. Second, digital decoding often operates not on fixed but on variable objects. “[W]hether we are browsing a web site, use Gmail, play a video game, or use a GPS-enabled mobile phone to locate particular places or friends nearby, we are engaging not with pre-defined static documents but with the dynamic outputs of a real-time computation.”10 Third, for many—if not most—digital objects there is no single object that can be identified as the ‘source’ of the representation. Digital objects are typically assembled from separate sets of data to form ever-changing syntactic combinations. A common example is the complex website made up of different blocks of text and a number of images, videos, or sounds coming from one or more databases, but the same holds true for other digital objects as well.
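
A minimal sketch may illustrate this mode of assembly (the ‘records’ and the rendering function are invented for the purpose): the ‘page’ exists only as the momentary output of a computation over separate sets of data.

    # A 'page' assembled on the fly from separate, hypothetical data sets.
    articles = [{"title": "Decodings", "body": "Notes on digital decoding."}]
    ads = ["Buy now!"]

    def render_page(articles, ads):
        blocks = [f"<h1>{a['title']}</h1><p>{a['body']}</p>" for a in articles]
        blocks += [f"<aside>{ad}</aside>" for ad in ads]
        return "\n".join(blocks)   # a different combination on every request

    print(render_page(articles, ads))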

Yet the most radical implication of the variability of digital representation is that there can be no ‘right’ decoding or ‘proper’ representation. In digital decoding, there is no state of any object conforming to its ‘actual being’. The only candidate for the ‘actual being’ of a digital object would be its manifestation in forensic materiality (i.e. the unique shape of a magnetic trace on a hard drive platter or the exact voltage in some circuit). In the realm of formal materiality a digital object exists only as a complex of transitory states in a potentially endless chain of syntactic transformations. Some of these states may be more adequate to a specific demand or task than others, but none of them is the final state that gives us the ‘authentic’ representation.11 Referring again to the example of websites: A website can be rendered by different browsers and devices in different ways for different needs. One can choose between a text-only or a graphical browser, change fonts and their sizes, apply different color schemes, exclude certain elements like ads, and so on.

Still, the matter is more profound. Even for a single and seemingly simple file, say a JPEG image file, there is no ‘right’ decoding or ‘proper’ representation. When working with the image, it probably first appears to us as a list item in a directory of files giving its name, date and time of last change, size, and type; or maybe we see it first as an icon or as a thumbnail preview in an image browser. Perhaps we are interested in the file’s metadata describing in letters and numerals the camera settings in effect when the picture was taken. It is also possible that we only want to see the picture’s color histogram showing the tonal distribution. We can, of course, zoom into the picture so that only a small detail of it fills the entire display. And if we were spies, we might be looking at some secret information steganographically hidden inside the file. All these representations are the result of applying different syntactic decodings to the same file, and all of them are ‘valid’—none of them is the single ‘true’ image.
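
The following sketch runs three such decodings over one and the same (hypothetical) file ‘photo.jpg’: as a directory entry, as raw bytes in hexadecimal notation, and as a crude distribution of byte values standing in for a histogram.

    import os
    import time

    path = "photo.jpg"                      # placeholder file name

    # 1. Decoded as a directory entry: size and time of last change
    info = os.stat(path)
    print(path, info.st_size, "bytes, modified", time.ctime(info.st_mtime))

    # 2. Decoded as raw bytes shown in hexadecimal
    with open(path, "rb") as f:
        data = f.read()
    print(data[:16].hex(" "))               # a JPEG file begins with ff d8 ff

    # 3. Decoded as a distribution of byte values (a stand-in for a histogram)
    frequency = [0] * 256
    for byte in data:
        frequency[byte] += 1
    print(max(range(256), key=frequency.__getitem__), "is the most frequent byte value")

None of these outputs is the picture ‘itself’; each is one more syntactic transformation of the same forensic marks.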

Let us look at an even simpler case: an ASCII text file. Again, the ‘content’ of the text file can be represented in various ways, in different fonts, colors, and sizes. Long lines may be wrapped at a specified length or at the edge of the window, or simply truncated. But who says we have to look at printable characters only? ASCII does not describe the shape of a character (the glyph) but its syntactic identity as a grapheme. So we are free to view the numerical values that make up the characters. The text file then appears to us not as a string of letters but as a string of hexadecimal or even binary numbers.
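
A few lines of code suffice to produce these different views of one and the same (hypothetical) file ‘note.txt’:

    # The same ASCII text decoded as characters, as hexadecimal values,
    # and as binary numbers. ('note.txt' is a placeholder file name.)
    with open("note.txt", "rb") as f:
        data = f.read()

    print(data.decode("ascii"))                   # a string of letters
    print(" ".join(f"{b:02x}" for b in data))     # a string of hexadecimal numbers
    print(" ".join(f"{b:08b}" for b in data))     # a string of binary numbers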

Now, one might be tempted to regard this string of binary numbers as the ‘actual’ representation. Are not these numbers the ‘essence’ of the file, the origin of its manifold formal materialities? The answer is no, for the simple reason that we can only see these numbers as digits on a screen or a printout. The digits themselves are also representations. They are the product of a highly complex syntactic decoding ending in polarized light emitted from an LCD or toner fused to paper. As actual materialities these digits do not exist. There are no 1s and 0s ‘inside’ a computer. There are only forensic marks—forensic marks that must be syntactically decoded into various formal manifestations, one of which is the representation as a string of 1s and 0s displayed on a screen or on paper.

In our everyday handling of computers and digital media we do not pay particular attention to the variability of representation or the multiplicity of decoding. Kirschenbaum notes that some representations of digital objects are ‘naturalized’ by the programs we use.12 When working with computers, we routinely identify certain decodings of digital objects offered to us by standard software applications as ‘proper’ representations—e.g. two-dimensional pictures for image files, acoustic pressure for sound files, strings of letters for text files—and regard the others as some fancy “computer stuff” or “digital decoration”. Typically, only one from an infinite number of possible syntactic decodings is taken for the ‘real’ thing.

But is this not surprisingly similar to what theorists like Barthes, Derrida and Hall say about social codes and semantic decoding? Is the variability of digital representation and the ‘naturalization’ of syntactic decoding not very much like the variety of connotations of a sign and the ideological forces establishing one of these connotations as the designation of the sign and therefore its ‘natural’ meaning? Must we not acknowledge that even the most ‘rigorous’ of all technical codes, the code of digital computing, shows the same fundamental characteristic as social codes, namely the impossibility of arriving at a final state in interpretation or transformation that holds the ‘truth’?

I am inclined not only to answer ‘yes’ but to go one step further. I would like to suggest that today’s digital computers can be understood as an ingenious (if unwitting) technological exploitation of that characteristic. What makes them so powerful is the fact that they fabricate not fixed but variable representations. In a strange way computers seem to have captured the potential for articulation or differentiation Derrida called écriture and the deferring movement he called différance.13 Digital decoding turns a group of forensic marks into formal materialities which can be transformed into other formal materialities. Computers ‘read’ or treat physical marks as files, files as texts, texts as statistical series, statistical series as database records, database records as tables, tables as charts, and so on. Generally speaking, this constant transformation of representation is what allows us to edit, sort, analyze, combine, remix, and extend digital objects in the first place. Computers are, so to speak, the technological escalation of différance to the point—as Derrida himself put it—of ‘endless analysis and revision’.14
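
A compressed sketch of such a chain (the text and every intermediate form are chosen freely for illustration): a string is decoded into a statistical series, the series into a table, the table into a crude chart.

    # An illustrative chain of syntactic transformations:
    # text -> statistical series -> table -> chart.
    from collections import Counter

    text = "code decodes code as code decodes text"

    counts = Counter(text.split())                          # text -> statistical series
    table = sorted(counts.items(), key=lambda kv: -kv[1])   # series -> table
    for word, n in table:                                   # table -> chart
        print(f"{word:8} {'#' * n}")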

Two examples: Amazon Kindle, DeCSS

To conclude my talk, let me give you two examples illustrating the serious consequences that the variability of digital decoding has in the spheres of law and economy. My examples are Amazon’s e-book reader Kindle and the decryption program DeCSS.

When Amazon launched its e-book reader Kindle, commentators were quick to point out important differences between the printed and the electronic word. There was a lot of talk about the ease and increased speed of distribution, new payment models, and the demise of the traditional publishing business. Rarely discussed, however, was the fundamental change for the representation of text brought about by its digital decoding. This change became apparent with the second version of the Kindle, which was fitted with a voice function. Now the Kindle could decode the ‘contents’ of an e-book not only as a formatted string of letters composing written words, sentences, and paragraphs, but also as a reading spoken by a synthesized voice. Worried that this would pose a threat to lucrative audio-book sales, the Authors Guild claimed that the Kindle’s voice function constituted a violation of writers’ copyrights. Paul Aiken, executive director of the Authors Guild, declared: “They don’t have the right to read a book out loud […] That’s an audio right, which is derivative under copyright law.” (Wired News) One decoding of an e-book—i.e. as visible letters generated by the E-ink display—is deemed the ‘actual’ or primary representation of the text; other possible decodings, including audible sounds generated by the voice function, are ‘derivative’ and therefore secondary.

So, here it is not media theory but the Kindle’s voice function and the Authors Guild’s intervention that raise the question: What does a digital object ‘actually’ represent? In this case: the written or the spoken word? And what exactly is the relationship between orality and literacy in digital culture? I would like to share with you only two observations:

First, the Authors Guild’s argument in fact (and ironically, I would say) reverses the phonocentric notion of language and writing long held by the Western metaphysical tradition, namely that the written word is secondary and a derivative of the spoken word, which is the ‘true’ representation of a person’s thoughts. For the Authors Guild, on the other hand, the (synthesized) voice speaking the text is a derivative of the written words, and only these are the ‘true’ representation of the writer’s work and intellectual property.

Second, one of the legal arguments against the claims of the Authors Guild is the fact that under the Copyright Act derivative works “must have some concrete or permanent form” and that an infringing copy of a work must also be “sufficiently permanent or stable to permit it to be […] reproduced […] for a period of more than transitory duration.” (quoted in Wired magazine; my emphasis) But, as we have seen, digital decoding produces representations in formal materiality—and no formal materiality ever is or can be concrete or stable. The only things concrete and stable are the physical marks of forensic materiality on hard drive platters and the like. Formal materialities such as sounds of a synthesized voice and even the letters on an e-ink display are always inherently transitory.

My second example is the decryption program DeCSS that was released on the Internet in 1999. DeCSS circumvents or ‘breaks’ the copy protection scheme of commercial DVDs and allows unlicensed devices and platforms (like Linux) to play the discs and to make copies of them. The technical details—how DeCSS works—are not relevant to our analysis. What is important, though, are the representations of the program.

Almost from the beginning, DeCSS was distributed as source code, i.e. the human- and machine-readable form of the program, expressed in the syntax of a so-called higher-level programming language, in this case C. Various websites published this source code, which could be downloaded and then easily compiled into object code, i.e. the executable format of the program. Obviously, the movie industry was not happy with this and tried to stop the distribution of DeCSS. The DVD Copy Control Association (DVD CCA)—the organization in charge of the copy protection scheme—therefore filed suit against several people who had published DeCSS on their websites. The ensuing campaign against and in support of DeCSS was a long and complicated legal battle that, as far as I know, has not resulted in any final decision. Today, I want to mention only two aspects of the controversy.

First, in November 2001 the California appeals court ruled that DeCSS was protected by the First Amendment when, and only when, the program was represented as source code. In this format the court considered DeCSS to be “pure speech” since it contained meaning: “DeCSS is a written expression of the author’s ideas and information about decryption of DVDs without CSS.”15 The representation of DeCSS in its executable format, however, was not considered “speech” and hence not protected by the First Amendment: “If the source code were ‘compiled’ to create object code, we would agree that the resulting composition of zeroes and ones would not convey ideas.”16 It seems to me that the court’s differentiation between a ‘written expression’ and a ‘composition of zeroes and ones’, between an ‘author’s ideas’ and something that does not ‘convey ideas’, can be understood as an attempt to uphold the distinction between social and technical codes, between semantic and syntactic decoding. As you may guess, in my view this distinction is highly problematic, not least because the transformation of source code into object code is itself a process of syntactic decoding.
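
By way of analogy (using Python bytecode rather than compiled C, and a one-line program of my own invention), the same ‘written expression’ can be mechanically decoded into a ‘composition’ of low-level instructions:

    # Compilation as syntactic decoding (Python analogy, not the DeCSS case).
    import dis

    source = "print('no more secrets')"                # the 'written expression'
    code_object = compile(source, "<string>", "exec")  # source -> code object
    dis.dis(code_object)                               # the same program as opcodes
    print(code_object.co_code.hex(" "))                # ... or as a string of bytes

Nothing in this transformation refers to meaning; it is syntactic decoding through and through.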

Second, to bypass legal restrictions the hacker community soon began distributing the DeCSS program in a multitude of different representations: as image files, poems, electronic greeting cards, printed on t-shirts, as ASCII art, as a dramatic reading or a musical version, as a MIDI tune, a Star Wars opening crawl, as steganography, and so on. Possibly the most interesting representation of DeCSS, though, is a prime number. When expressed as a string of binary digits, the source code of a program can be thought to represent not a sequence of separate characters but one single very large number. Thanks to Dirichlet’s theorem, if you are good enough at mathematics and able to adapt the source code appropriately, you can find a prime corresponding to the code’s number. This prime, when decoded with a common decompression algorithm, will yield the original source code. And that is what happened. In 2001 a mathematician and software engineer called Phil Carmody discovered (or created) a prime number nearly 2,000 digits long that can be decoded as the source code of DeCSS. What does this number represent? Is it ‘just’ a number that happens to be a very large prime, or is it ‘actually’ the representation of a—possibly illegal—program? And if so, is the number itself illegal? But this is not only about primes and the DeCSS source code. These questions in principle apply to every digital object and any number, for every digital object can be represented by a number and any number can be decoded as some digital object. To this day, the case of the mentioned prime number and DeCSS has not been tested in court.
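
The underlying idea can be sketched in a few lines (the source string is a placeholder; finding an integer that is also prime is the additional, mathematically demanding step that Carmody performed):

    # Any digital object can be read as one very large integer and back again.
    # (The 'source' below is a placeholder, not the actual DeCSS code.)
    import gzip

    source = b"/* placeholder for the C source code */"

    number = int.from_bytes(gzip.compress(source), "big")        # program -> integer
    print(number)                                                # 'just' a number?

    data = number.to_bytes((number.bit_length() + 7) // 8, "big")
    print(gzip.decompress(data))                                 # integer -> program again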

I hope the examples have demonstrated how today’s computers undermine a simplistic distinction between semantic and syntactic decoding and how they question the notion of ‘actual’ or ‘true’ representation. One way to constrain the variability of digital representation is the process of naturalization through standard software; another is the attempt to control the multiplicity of digital decoding by means of another code: the code of law. The techno-logic of digital representation is not only a technical matter and it is not just a theoretical challenge—it is fundamentally a political issue.

Notes

1 “[L]es combinaisons par lesquelles le sujet parlant utilise le code de la langue en vue d’exprimer sa pensée personnelle”. (Ferdinand de Saussure: Cours de linguistique générale, Paris–Lausanne: Payot, 1916, pp. 30–31)

2 Klaas Willems: Logical polysemy and variable verb valency, in: Language Sciences 28.6 (2006), pp. 580–603, here pp. 584–585.

3 Claude E. Shannon: A Mathematical Theory of Communication, in: Bell System Technical Journal 27 (1948), pp. 379–423, 623–656.

4 Jürgen Van de Walle: Roman Jakobson, Cybernetics and Information Theory. A Critical Assessment, in: Folia Linguistica Historica 29.1 (2008), pp. 87–123.

5 Roman Jakobson: Linguistics and Poetics, in: Style in Language, ed. by Thomas A. Sebeok, New York–London 1960, pp. 350–377.

6 American Standards Association: X3.4-1963 American Standard Code for Information Interchange, 1963, p. 8.

7 Matthew G. Kirschenbaum: Mechanisms. New Media and the Forensic Imagination, Cambridge, MA–London: MIT Press, 2008, p. 11.

8 Ibid., pp. 145–146.

9 Lev Manovich: Software Takes Command, 2008, p. 175.

10 Ibid., p. 17.

11 Kirschenbaum: Mechanisms, pp. 145–146.

12 Ibid., pp. 132–133.

13 Jacques Derrida: De la grammatologie, Paris: Editions de Minuit, 1967.

14 Idem: La machine à traitement de texte, in: Papier machine. Le ruban de machine à écrire et autres réponses, Paris: Galilée, 2001, pp. 151–166.

15 California Appellate Decision Overturning DeCSS Injunction in DVDCCA v. MacLaughlin, Bunner et al., H021153 Santa Clara County, Super. Ct. No. CV 786804; http://w2.eff.org/IP/Video/DVDCCA_case/20011101_bunner_appellate_decision.html.

16 Ibid.


Cite as
Heilmann, Till A. “Digital Decoding and the Techno-Logic of Representation.” Paper presented at 2009 Annual Conference of the Society for Science, Literature and the Arts “Decodings.” Atlanta, Georgia. 8 November 2009. <http://tillheilmann.info/slsa2009.php>.

Till A. Heilmann (Dr. phil.) is a researcher at the Department of Media Studies at Ruhr University Bochum. He studied German, media studies, and history. Research Associate at the University of Basel (2003–2014), the University of Siegen (2014–2015), and the University of Bonn (2015–2021); doctorate for a thesis on computers as writing machines (2008); visiting scholar at the University of Siegen (2011); Fellow-in-Residence at the Obermann Center for Advanced Studies at the University of Iowa (2012); acting professor of Digital Media and Methods at the University of Siegen (2020–2021); book project on Photoshop and digital visual culture (ongoing). Fields of research: Media history; media theory; media semiotics; history of media studies. Research focus: digital image processing; algorithms and computer programming; North American and German media theory. Publications include: “Blackbox Bildfilter. Unscharfe Maske von Photoshop zur Röntgentechnischen Versuchsanstalt Wien [Black Box Image Filter: Unsharp Mask from Photoshop to the X-Ray Research Institute Vienna].” Navigationen 2 (2020): 75–93; “Friedrich Kittler’s Alphabetic Realism.” Classics and Media Theory. Ed. P. Michelakis. Oxford University Press 2020: 29–51; “Zur Vorgängigkeit der Operationskette in der Medienwissenschaft und bei Leroi-Gourhan [On the Precedence of the Operational Chain in Media Studies and Leroi-Gourhan].” Internationales Jahrbuch für Medienphilosophie 2 (2016): 7–29; “Datenarbeit im ‘Capture’-Kapitalismus. Zur Ausweitung der Verwertungszone im Zeitalter informatischer Überwachung [Data-Labor in Capture-Capitalism. On the Expansion of the Valorization Zone in the Age of Informatic Surveillance].” Zeitschrift für Medienwissenschaft 2 (2015): 35–48; “Reciprocal Materiality and the Body of Code.” Digital Culture & Society 1/1 (2015): 39–52; “Handschrift im digitalen Umfeld [Handwriting in the Digital Environment].” Osnabrücker Beiträge zur Sprachtheorie 85 (2014): 169–192; “‘Tap, tap, flap, flap.’ Ludic Seriality, Digitality, and the Finger.” Eludamos 8/1 (2014): 33–46; Textverarbeitung: Eine Mediengeschichte des Computers als Schreibmaschine [Word Processing: A Media History of the Computer as a Writing Machine] (2012); “Digitalität als Taktilität: McLuhan, der Computer und die Taste [Digitality as Tactility: McLuhan, the Computer and the Key].” Zeitschrift für Medienwissenschaft 2 (2010): 125–134.
