Aesthetic Animism: Digital Poetry's Ontological Implications

David Jhave Johnston

Print publication date: 2016

Print ISBN-13: 9780262034517

Published to MIT Press Scholarship Online: January 2017

DOI: 10.7551/mitpress/9780262034517.001.0001





(p.131) 4 Softwares
Aesthetic Animism

David Jhave Johnston

The MIT Press

Abstract and Keywords

Software defines what digital poetry is. This chapter explores the temporal implications of animation time lines on the literary imagination. Read it if you are concerned with software studies and/or creative media. It argues that an authoring environment specific to the literary must emerge. The chapter includes a rapid overview of most of the authoring tools and code languages used by contemporary poets. Software examined includes After Effects, Mudbox, Mr Softie, Flash, VRML, and Second Life, along with the programming languages Processing, RiTaJS, Python, and C++.

Keywords: software studies, temporality, generative, digital poetics, electronic literature

Critical Code Studies is the application of hermeneutics to the interpretation of the extra-functional significance of computer source code. It is a study that follows the developments of Software Studies and Platform Studies into the layer of the code. In their oft-taught text, Structure and Interpretation of Computer Programs, Harold Abelson, Gerald Jay Sussman, and Julie Sussman declare, “Underlying our approach to this subject is our conviction that ‘computer science’ is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think.”

Mark C. Marino, 2010

For literary scholars, literature students, poets, writers, or the casual reader of this text, this section may seem incongruous. What role can software studies legitimately play in the study of writing literature? Can software studies offer insights into animist ontologies about literature? Did critics in the eleventh century write treatises critiquing the quill? Granted, the printing press modulated writing. But most poets don’t animate their words; most writers don’t use motion graphics. The potential implications of the following analysis apply to a minority of writers. Yet the implications of software (motion graphics, typographic sculpting, and code-generated poetry) on reading and writing may have implications for all concerned with language. As audiences habituate to reading language in games, film credits, ads, and experimental video, attention dilates, and new syntaxes emerge that privilege morph over cut. The tools that create these effects influence the concepts expressed.

What follows is a hands-on focus on the creation of several specific works, preceded by a consideration of temporality and the role of animation interface “timelines.” I examine the keyframed timeline (an authoring (p.132) interface used in animation, 3-D, video, and special effects) as a historical design artifact, acknowledge Johanna Drucker’s (2009) SpecLab insights into the effect of interfaces on temporality and creation, and meditate (briefly) on the benefits and risks of timeline systems that quantify repetition versus systems of (what I call) instrumental softwares that provoke improvisational process.

What Is Software Studies?

Software studies lies at the nexus of code and culture, in an epistemological estuary that, although mapped and known to exist, is still relatively untracked. For practice-led researchers (i.e., artist-academics), software studies offers a chance to reflect on the interdependency of creativity and design in practice. In this book, software studies connects the ontological proposal (of aesthetic animism) to the empirical practices of digital poets.

The critical discourse around software is shifting rapidly. The quotation from Marino that opens this chapter points to several ways these shifts are occurring: software studies has been joined by platform studies and now critical code studies. Each serves as a valuable tool in an increasingly technological world. Future domains may include network and avatar/augmentation studies; each of these will impact the humanities.

As Lev Manovich insightfully notes, several key figures at the origin of interface design left clues that they perceived software as quasi-entity. Ivan Sutherland, who in 1963 laid the seeds for motion graphics, titled his PhD dissertation “Sketchpad: A Man-Machine Graphical Communication System.” Manovich comments:

Rather than conceiving of Sketchpad as simply another media, Sutherland presents it as something else—a communication system between two entities: a human and an intelligent machine. Kay and Goldberg will later also foreground this communication dimension, referring to it as “a two-way conversation” and calling the new “metamedium” “active.” (We can also think of Sketchpad as a practical demonstration of the idea of “man-machine symbiosis” by J.C. Licklider applied to image making and design).

(Manovich in 2008 online draft of 2013, 67)

Interface design influences the relationship that a writer has to words. As symbiosis and conversation, writing tools alter how art and literature arise.

(p.133) Timelines

Instead of drawing with lines, Nomencluster allows you to create your own designs with insects, 19th century engineering engravings, food chemistry, and a continual stream of poetic texts and interactive writing.

Jason Nelson and Matthew Horton, 2015

The turn toward living language entails authoring environments appropriate to the task, and it is my feeling that the animation-timeline paradigm is suboptimal in certain respects when it comes to the modeling and manipulation of (TAV) digital texts. Neither time-based media nor life-forms are amenable to nuanced description within linear, quantifiable spreadsheets. And spreadsheets (as explained below) are historically the organizational paradigm that underlies the contemporary animation timeline. Literature is precisely the opposite: ambiguous, parallel, and quality rich; its experiential time curves.

As far as I know (apart from the work of Drucker), the impact of linear timeline authoring-environment design on experiential depiction remains unresearched in the digital humanities. That theoretical gap may be due to the difficulty of conceiving of interfaces that do not exist; it’s difficult enough that I do not attempt it (Drucker does, and with some success!). Instead, in the following sections I outline a history of timelines and then examine a few key softwares in detail.

Timelines Defined

Timelines are the dominant paradigm for scrubbable media authoring and playback; they are prevalent in most commercial and industrial softwares that work with media (including diverse softwares from multiple domains: film editing, motion graphics, 3-D rendering, DVD and music players, slide shows, etc.). For media consumers, timelines allow scrubbable time.

With timelines, time can be controlled, so as design artifacts they reflect the instinct to engineer, the will to power, and the impulse to control. In some sense, they are the multimedia equivalent of a keyboard. Keyboards allow rapid, fluctuating writing styles to emerge; timelines enable complex, animated letterforms.

(p.134) For media authors, timeline interpolation operates as algorithmic suture, sewing and joining time.1 A “tween” literally makes a path between distinct temporal (digital) frames. It fills in the gaps. It guesses the holes.

The advantages of this style of animation are manifold. Fine-grained control of parameters distributed across easing curves (which permits easy repetition) constitutes an empirically viable method for creative control. The author can iterate and tweak multiple parameters independently; time is carefully and cleanly laid out in a linear fashion; it is easy to understand chronological events. The disadvantages are subtler to identify but relate at a specific level to spontaneity and improvisation, and secondarily at a general level to a concept of time that is an antiseptic contingency.
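The interpolation described above can be sketched in a few lines. The following is an illustrative reconstruction, not any particular package's implementation; the function names and the smoothstep easing curve are my own choices.

```python
def lerp(a, b, t):
    """Linear interpolation between two keyframed values a and b, t in [0, 1]."""
    return a + (b - a) * t

def ease_in_out(t):
    """A smoothstep easing curve: slow start, fast middle, slow end."""
    return t * t * (3 - 2 * t)

def tween(frame, key_a, key_b, easing=lambda t: t):
    """Fill in the value at an intermediate frame between two keyframes,
    each given as (frame_number, value): the suture that 'guesses the holes'."""
    (f0, v0), (f1, v1) = key_a, key_b
    t = (frame - f0) / (f1 - f0)
    return lerp(v0, v1, easing(t))

# A letterform's x-position keyframed at 0 (frame 0) and 100 (frame 24):
tween(6, (0, 0.0), (24, 100.0))               # linear: 25.0
tween(6, (0, 0.0), (24, 100.0), ease_in_out)  # eased:  15.625
```

The easing curve is where "fine-grained control of parameters distributed across easing curves" lives: the same two keyframes yield different in-between values depending on the curve handed to the tween.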

For poets, and for anyone who plays with the in-between, it is necessary to retain a nontimeline mode within authoring environments, to allow for unstructured play and exploratory improvisation. Enabling the fullest range of expressive and formal processes within writing requires careful consideration of the tools.


In 2008, a fifty-two-hundred-year-old Iranian earthenware bowl (with five drawings of a goat on it) was spun around and reputed to be an early instance of animation (of a goat leaping to eat a leaf). In this case, the claim to animation is tenuous, but as a sequence of poses displayed on a surface, this conical surface faintly echoes the contemporary timeline’s integration of visual language with chronological control. The gestural control of a bowl is in its spin; this gesture is echoed in the scrub wheel of contemporary editing suites. It is also echoed in the numerous animation devices that appeared between ancient pottery and modern film animation: the zoetrope, praxinoscope, thaumatrope, and phenakistoscope. Flip books laid out before binding; histories constructed from mnemonic principles as in ancient Rome: the conceptual legacy of timelines is vast.

A Missing History

In the Augmented Human Intellect (AHI) Research Center at Stanford Research Institute a group of researchers is developing an experimental laboratory around an interactive, multi-console computer-display system, and is working to learn the principles by which interactive computer aids can augment their intellectual capability.

—Douglas Engelbart and Bill English, 1968

(p.135) Although the history of graphical user interfaces and interface developments—like Engelbart’s (1968) ‘Demo’ at the Stanford Research Institute, windows-icon-mouse-pointer, and the evolution of personal computer operating systems (Visi On, Lisa, Amiga, MS-DOS, etc.)—is well documented online, the history of how individual softwares evolved, integrated their various features, and grew into the complex beasts we know today is not easily found.

I did not find any step-by-step history of the timeline as an interface module. Perhaps that is because searching for ‘timeline’ does not produce refined results; perhaps software paleontology is sparse. Many computer professionals and programmers (who create for their pleasure online archives of hardware development) are unfamiliar with multimedia software, and the meaning of the term timeline remains associated for the most part with its analog form in historical presentations. So what follows is a tentative history, assembled from a few fragments.

Turing Machines

The best way to predict the future is to invent it.

Alan Kay, 1971

Turing machines are commonly used to teach the principles of discrete math underlying computer science. A Turing machine is a thought experiment that involves imagining a single-frame tape reader that can read one symbol instruction at a time. Sequentially, these symbols construct and simulate the logic essential to computing. They are also remarkably similar to timelines: one frame, one symbol, and a pointer to that frame, along with an infinite memory of everything before and after. At the same time, the Turing machine is an abstract representation of the assembly line with its sequential passage of parts past multiple time pointers. Another intriguing structural resonance with timelines reoccurs at the origin of graphic processor units (GPUs). Graphic cards underlie all motion graphics; they are the physical architecture necessary for the multimedia revolution. They are basically pixel-based frame buffer systems: a unit of time-stamped data held by a pointer in memory. The precursor-to-GPU pixel-based frame buffer arose in the early 1970s as the Sandin video synthesizer was invented (p.136) and the Computer Graphics and Image Processing journal began publication (Shoup 2001).

So perhaps the origin of digital timelines begins at the confluence of theory (Turing machines), digital hardware (GPU frame buffers), efficient capitalist productivity (assembly line), and cartoons (cel animation).
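The one-frame, one-symbol, one-pointer structure underlying this comparison can be made literal with a toy simulator. The rule table below (a bit-flipper) is invented purely for illustration.

```python
def run_turing_machine(tape, rules, state="start", halt="halt"):
    """Read one symbol at a time through a single pointer (the head) into a
    tape with memory of everything before and after: the structure the
    animation timeline's playhead echoes."""
    cells = dict(enumerate(tape))  # sparse tape; absent cells read as blank '_'
    pos = 0
    while state != halt:
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# An invented rule table: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "N", "halt"),
}
run_turing_machine("1011", flip)  # → '0100'
```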

Animation Spreadsheets

Disney Animation Studio’s Exposure Sheet, accessible from the Pencil Test, works rather like an animation spreadsheet.

—Steven Anzovin, 1992

Timelines are a specific case of charts. A diversity of ancient accounting systems and mechanisms use some sort of timeline: pulleys, gears, film sprockets, axles, abacus-style devices, grinding mills, Kabbalist divination wheels, and even mandalas. Software (like all culture) soaks up paradigms; remediation is conceptual reincorporation.

It is also probable that the commercialization of the software timeline was born when cel animation met computation. In 1990, in conjunction with the Amiga (which had dedicated hardware for multimedia before other personal computers), Disney developed a commercial software package. In promotional television demos of this package (available online), the primary authoring screen clearly has no timeline.2 Creative work occurs in a cel-animation-style space where the animator controls (with keystrokes) the amount of onion skinning. Animation happens automatically. The environment conforms to the classic Greek metaphor for time: a human walks backward into the future, with the most recent trail fading away behind. The future is unknown.

Without a timeline there is no future; there is only the present moment. The animator is not supplied with visual evidence that a future exists, and that time runs straight and then ends. Teleology, with all it implies (origin, progress, Armageddon, etc.), does not exist. Nontimeline design environments are the visual equivalents of oral cultures. The animator must remember the set as junctures that contribute to a totality.

In the 1990s, however, more complex animation projects must have demanded methods for remembering scenes and the ability to jump visually from one time to another. A software “evaluation” article from Compute (p.137) (Anzovin 1992) reveals how a timeline-like module (two years after launch) has been added to (or always existed in) the Disney Animation Studio. It is called an exposure sheet. Functionally, it is compartmentalized off from the main real-time, cel-style animating mode. Exposure sheet

works rather like an animation spreadsheet. Each cel in the animation is given a line in the Exposure Sheet, showing the cel number, assigned sounds, timing, and other information. You can rearrange cels of an animation in the Exposure Sheet by cutting, pasting, or deleting their lines, which is much easier than cutting and pasting cels in Pencil Test. (ibid.)

Note the metaphoric reference to spreadsheets in the promo material; timelines are sold as organizational efficiency tools. Spreadsheets are essential for the rapid dissection of quantifiable data. Primarily used in accounting and inventory, spreadsheets induce precise analytic calibrations of data; it is difficult to envision the purpose of displaying ambiguous, evolving emotional experiences in spreadsheets. Spreadsheets are spaces for keeping track of data; they are tabulation tools, interface panopticons, and grid databases. So does it mean anything that the timeline grew from a spreadsheet metaphor? As a ubiquitous feature of contemporary animation software, do timelines introduce quantification and product analysis into the creative process? Will quantified modes of animation (as in audio quantization and timelines) provoke neglect of live, improvisational, instrumental authoring environments?

In 1990 (accepting that year’s release of Disney Animation Studio as some sort of benchmark not of research software but rather of commercial diffusion), the timeline function of examining the creative process as a production line is still kept separate as a module; it is secondary, to be consulted as necessary, as an adjunct to creative flow. At this stage of animation software design, time-based structural analysis is used only occasionally during creation. Real-time creation and timeline organization have been grafted together in the same device, but they are not superimposed. Modeling and animation occur together, but independently of the exposure sheets. A nonquantified nontimeline view is the default; fluid gestural flow and crafting frame by frame remain the dominant paradigm. Then at some point in the 1990s, the situation reversed; the default layout became the timeline. The nontimeline view occasionally remains as an option, a vestigial configuration.

(p.138) In other words, in contemporary authoring softwares, focus shifts from an ancient emphasis on tactile process to a tactical procedure. Timelines (animation spreadsheets) dominate; free, fluid, real-time animation environments become secondary and marginal. Head trumps hand and heart; algorithms and accountancy fill gaps. And with this subtle transformation in design paradigms, animation shifts away from choreographic craft and sculptural caress toward a mechanistic mercantile model.

Strangely enough, it is perhaps this transition that needs to be reexamined if a living language is to emerge. Biological clocks do not run in straight lines. Nature’s clocks follow cycles, mushy gradients, and seasonal spirals; Salvador Dali’s clocks melt and bend, as do the associational swerves in poems, sites where tongues trip over intonations.

Timeline’s Fundamental Parts

It is the story of a man who digs a hole so deep he can hear the past, a woman who climbs a ladder so high she can see the future.

Steve Tomasula, 2010

Timelines are narrow strips of unidirectional temporal flow. Their pace quantifies without eddies, an antiseptic pipe that runs along narrow tracks. They are composed of several fundamental parts:

  1. A horizontal straight line (or lines) that runs from the beginning of the time span to the end.

  2. A point(er) (usually drawn as an arrowhead) that represents the present moment.

  3. A display window that shows that present moment.

The ancillary parts (not necessarily present in all timelines) include zooming mechanisms, frame markers, and cells. The animator moves step by step through that environment as they would through an inventory. The production environment is a warehouse of boxes, clips, frames, windows, and menus (stacks). The timeline always remains linear and straight. It cannot be bent or forked or broken into multiple strands. Bifurcations can be built in through nesting (compositions or movie clips), so that in actuality the timeline is like a single main stalk with multiple looping, repetitive subtimelines attached. Yet the animator/poet does not see (p.139) the interface timeline as a tree. There is no generic way (or software that I know of) that allows the user to see a timeline’s multiple branching times, nor is there any implementation of independent time signatures on different timelines in the same project. Once the clock starts ticking, it runs to the end.
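Read as a data structure, the fundamental parts above make the constraint visible: one strip of frames, one pointer, one display window, no branches. This sketch is illustrative only; all names are mine.

```python
from dataclasses import dataclass, field

@dataclass
class Timeline:
    """The three fundamental parts: a linear strip of frames (beginning to
    end), a playhead (the pointer to the present moment), and a display
    that shows only that moment. Nothing here can fork or bend."""
    frames: list = field(default_factory=list)  # the horizontal straight line
    playhead: int = 0                           # the point(er): the present moment

    def scrub(self, frame):
        """Jump the playhead anywhere along the strip: 'scrubbable time'."""
        self.playhead = max(0, min(frame, len(self.frames) - 1))

    def display(self):
        """The display window: show the present frame, and only it."""
        return self.frames[self.playhead]

tl = Timeline(frames=["cel A", "cel B", "cel C"])
tl.scrub(1)
tl.display()  # → 'cel B'
```

Nesting (a frame whose value is itself a Timeline) is the only escape hatch, which is exactly the "single stalk with looping subtimelines" shape described above.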

The metaphoric and ontological implications of these fundamental and seemingly innocuous design elements are unexplored terrain. Are temporal implications implicit in interfaces? Does this affect how we as users/viewers/people think of time? Or is it the reverse: Do these design elements arise from an innately human instinct of what time is?

Specifically, is it possible that the paradigm of malleable living language requires an authoring environment where multiple modules of intersecting flow exist simultaneously? It is one of the ironies of this critique that Tomasula’s ornate, effective, and complex multimedia novel TOC implements many interactive modes of working with text, yet it was made in After Effects on a timeline. The question is, What would TOC have been if Tomasula’s Borgesian fantasia of circular time had been made in an interface offering a more complex model of time?

Implicit Principles of Timelines

When poets compose with timelessness in mind, they will always be on the route to originality.

—Christopher Funkhouser, 2007

Stating what is implied by interface design is a tricky business, fraught with potential for mistakes. Nevertheless, given the fundamental parts of a timeline, the following beliefs seem implied by its structure:

  1. Time is linear.

  2. Time is unidirectional.

  3. Time can be broken into units.

  4. Units of time are frames; frames are discrete moments.

  5. Frames can be frozen.

  6. Time is never known outside the frame (until the process of render).

  7. Time has a beginning and an end.

(p.140) Claims about Timeline

Considered as a whole, the above list presents a bleak cosmology: a teleological dystopia that, if applied to experience, would convert existence into a meta-Kafkaesque plod from birth to death. On the other hand, it reflects pragmatic reality. Task-use efficiency is (at a general level) synonymous with compartmentalization. It would be foolish to claim that interfaces using this model are ruining their users’ capacity to conceive of flexible bifurcations, ambiguous reflectivity, and/or intersecting life stories. There is no shortage of soft, subtle, emotive, and intuitive movies and animations produced using these devices. I have no interest in stating a polemical case.

But I am claiming that in some instances (when timelines eradicate instrumental options that allow real-time manipulation with tangible feedback), the timeline introduces an implicit model that places the creative practitioner at a distance from immediate temporal feedback with their materials. A classical musician develops sets of muscular reflexes attuned to changes in the matter of their instrument; these reflexes occur subconsciously, instinctively at a muscular level, and neurologically in the dorsal brain. These subtle cues are not accessible within most timeline software, which requires that the machine stop while parameters are changed.3 Live coding may provide a paradigm around this blockade.

By separating run time from work time, timelines deflect the creative process into modular contained moments. The assembly line metaphor may function well in some circumstances, where flow can slowly evolve as it might for a wood or stone carver who steps back and considers the process, continues, and steps back, in a repetitive dance of proximity and distance. Yet traditional sculptural materials (wood, stone, and metal) are static matter. Malleable dimensional texts (as focused on in this book) are temporal entities. They change. Stepping back from change may provide the opportunity to assess independent frames, but timeline-imposed distance removes the creator from the momentum of process. Tactile reduction replaces relation with a living entity. Straight lines refute cycles.

As Stephanie Strickland and Cynthia Lawson Jaramillo note, both code and poetry involve loops. Poems invoke semantic loops in the readers, spaces of retracing. Code is also structurally founded on iterations: “People think of going forward in reading poetry, but the very turning of the line is in constant conflict with that goal, as are the triple realms contending for (p.141) meaning. Neither poetry nor code proceeds by forging ahead” (Strickland and Jaramillo 2007).

Strickland and Jaramillo are not alone in this diagnosis; for Douglas Hofstadter, strange loops permeate aesthetic experience. And I can add my own voice to this chorus: in my essay “Programming as Poetry” (Jhave 2001), I compared recursion to poetic impact:

Poetry and programming share more than strong affinities. Each is language-based, obsessed with conciseness, consistently evolving, modelled on consciousness, and inscrutable to the uninitiated (think of James Joyce reading C++). Each uses language in ways that involve leaps and circular paths; each requires an arduous concentration that ultimately relies upon reasoning which invokes intuition; and each is closely related by a shared goal of precise communication of complex realities.
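As a literal toy of the recursion these passages invoke (language that proceeds by turning back rather than forging ahead), a refrain can be built recursively; the example is mine, purely illustrative.

```python
def refrain(lines):
    """Recursion as a poem's returning line: each call must circle back
    through itself before the whole can complete, so the opening line
    of every level returns again at its close."""
    if not lines:
        return []
    first, rest = lines[0], lines[1:]
    return [first] + refrain(rest) + [first]

refrain(["the turning of the line", "neither forges ahead"])
# → ['the turning of the line', 'neither forges ahead',
#    'neither forges ahead', 'the turning of the line']
```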

Creative authoring requires interface design respectful of the sinuous paths of creative process and the recursive foundations of semantic epiphanies.

Homogeneous Granularity

Diagrammatic representations of temporal relations fall into three basic categories: linear, planar, and spatial. Linear diagrams, or timelines, are by far the simplest and most prevalent forms. … The timeline is a linear spectrum with homogenous granularity. On a linear diagram data can exhibit only three relative temporal conditions: earlier than, later than, or (sometimes awkwardly) simultaneous with (or overlapping).

Johanna Drucker, 2009

Drucker’s notion of the timeline’s homogeneous granularity in SpecLab (cited just above) is the only research I am aware of that has directly questioned the cultural implications of temporality in interface design. In SpecLab’s chapter “Temporal Modeling,” Drucker provides an overview of the research that she and her team conducted into the models underlying an exploratory design response to a software initially designed by John David Miller and John Maeda. Drucker explains that in spite of the cleverness of the software, “in its use of screen space and creation of conventions for ordering materials, it was based on what I considered non-humanistic, objective conventions. Such timelines are derived from the empirical sciences and bear all the conspicuous hallmarks of its basis in objectivity. They are unidirectional, continuous, and organized by a standard non-varying metric” (Drucker 2009, 37).

(p.142) Having reached similar conclusions independently, I am in agreement with Drucker when she continues to outline how linearity is not conducive to capturing experience. She uses the words “almost useless for describing the experience” (ibid., 37) in relation to complex felt events that might have many simultaneous components.

Much as I agree with the general direction of Drucker’s argument (and to some degree the case studies that follow are based on a similar premise), there is a general empirical objection to this claim. Films for the last decade have been created using timelines in software, yet the emotional complexity of films has not deteriorated. Many nuanced special FXs constructed using strictly linear timelines contribute, as a final product, to humanistic goals and to depictions of experience that are rich and nuanced. As a case in point, the final shot of Andrei Tarkovsky’s Solaris is an apex of modernist humanism. Evidently, there is a subtle way that humans separate process from end result. Process does not necessarily contaminate product. Intention is encapsulated.4 The surplus of nuanced projects emerging from timeline-based software is thus a strong objection to arguments for the “nonhumanistic” aspect of timelines. In addition, the prevalent use of nested timelines permits “simultaneous with” perspectives to occur. And loops within loops, hierarchies, inheritances, and modules are inherent to programming, so the linearity of timelines is only apparent; beneath the surface abstraction of the interface, recursion rules.

Yet Drucker’s argument is itself nuanced and exploratory; she does not claim absolute opposition but instead suggests that alternative modalities exist that might instigate modes of creativity more appropriate to human experience. Her view promotes warped, spatial, and “topographic images of temporal events—a time landscape—with the idea of being able to map experience” (Drucker 2009, 59). The ideas are not implemented, yet the actual process of thinking through them constitutes an exercise in creative interface design within a field that has not changed radically since the epoch of Sutherland, Engelbart’s demo, and Kay at Xerox PARC.

What has been revealed in the previous section is how paradigms of temporality (conveyed by the dominant presence of the timeline) might be constraining creativity, and particularly literary creativity, at some points. Obviously to claim that timelines eradicate the capacity for subtle work is untenable. What is tenable, however, is the inevitability of transformative change in interface design. In particular, Drucker precipitates an awareness (p.143) of software’s temporal bias toward linearity, and Matthew Fuller points to technology as cultural; both utilize references from structural linguistics, psychoanalysis, film theory, and cultural studies. Added to these references, insights from information visualization and the so-called studio or plastic arts (such as sculpture) suggest that tangible feedbacks and real-time instrumentality must be incorporated into future typographic interfaces. In the following section, these threads of temporality and tangibility are subsumed within empirical case studies of specific creative processes.

Case Studies

The impact of electronic technology on our lives is now the object of intense study, but what remains obscure is the role, if any, this technology has in shaping the ostensibly private language of poetry.

Marjorie Perloff, 1991

Each of the following software case studies is an attempt to examine the ontological considerations of aesthetic animism in an empirical context, and to see how the subtle confluences of temporality, design, and animus intermingle within a digital practice. It is also an attempt to write software studies from the perspective of a practitioner, moving between conceptual speculation and historical overviews down to the discrete minutiae of interface details. In the process, I hope to reveal the value of tangible software instruments that permit the real-time play of sculptural letterforms.

After Effects

“Everything was becoming conceptual,” Duchamp explained: “that is, it depended on things other than the retina.”

Craig Dworkin and Kenneth Goldsmith, 2011

After Effects software often elicits a reactionary repulsion from those in the occidental avant-garde. Duchamp fetishism can tend toward untenable absolutes. From a modernist avant-garde perspective, conceptualism’s capacity to recontextualize is considered laudable, sophisticated, self-reflexive cognition, while the ability to contrive is mere manual labor, (p.144) playing with the surface of the mind without awareness of its structure. Graphic activities are castigated as hedonism incapable of yielding meta-aware stances. And the eroticism of the eye is caricatured as a superficial Hollywood film full of fake explosions, extruded aliens, and rogue nebulae. In short, special FXs are associated with cartoonish hypnotism, commercial mind manipulation, and masturbatory immaturity.

Yet I am here to argue (as clearly as I can) why compositing softwares, which are behind many of the world’s most glitzy motion graphic campaigns, deserve recognition as precursors to a truly digital twenty-first-century word processor.

For Expression

A lot of poets are working audiovisually and yet they really get validated only once they start publishing books.

Caroline Bergvall, 2007

Referring to motion graphic works made by Len Lye in 1937, Scott Rettberg (2011) writes, “Letters moving in space, often synchronized to a musical soundtrack, is not precisely a novel phenomena, but something that writers and artists have been experimenting with to some degree since the dawn of moving image technology.” Yet after decades of work, these experiments still inhabit a strange exile from serious literary criticism; it’s almost as if moving image-text triggers a taboo (like masturbation, shitting, and death, even while ubiquitous, they are somehow discomfiting, peripheral).

Why are moving image-texts (glitz and glam) not mere effervescent by-products of puerile imaginations incapable of really grasping the crucial role of abstraction in an information economy (or the primacy of a self-reflexive materiality in art practice)? Because (to put it simply), occasionally motion graphics are also the expression of the deepest felt sentiments experienced by any of us; they grapple with the ignorance that is at the core of existing, the mystery of self, and the role of humanity in a universe whose scale exceeds our capacity to comprehend it. Surfaces (do sometimes) contain concepts. Naive aesthetics play a nourishing role in the evolution of representation (aesthetic recycling and cultural compost). Discourse must be built around even excluded or marginal (dynamic visual typography and poetic) practices.

(p.145) There are of course numerous examples of typographic effects applied with cosmetic abundance in ways that simply reinforce clichés. As effects move from obscurity into mass appeal, their capacity to genuinely contribute to poetics diminishes. Yet the presence of diluted glossy effects does not justify eradicating all motion graphics from the digital poetry toolbox.

John Berger, in his 1976 essay “The Primitive and the Professional,” insightfully suggests that conventions and cultural class systems distinguish between the professional and primitive artist. The professional, trained and articulate, approaches art with the idiom of academia. The primitive arrives at art later in life, crudely, as a means of expressing lived experience. The resistance and ridicule met with by primitive artists are due to the turbulent protective reflexes of the dominant professional caste, whose definitions of what constitutes correct aesthetic goals define a carefully guarded, commercially viable field of discourse and practice. Discourse self-reinforces. My argument for the relevance of compositing and expression to contemporary writing is (in some respects) an appeal for the inclusion of digital primitives, the basement autodidacts of gloss, exuberant homespun authors expressing their poetic instincts with contemporary motion graphic tools.

Literalism and Excess

The irony at the heart of the widespread adoption of the Bauhaus design maxims “eradicate the superfluous” and “less is more” is that they distill culture down to a generic style, acceptable to all. IKEA-like Zen minimalism manuals proliferate in the art academies, and occasionally creative writing departments disguise ideologies as textbooks. The simplicity of effect cherished by the elite avant-garde reflects an austerity that refutes personal flourish; expressivity is banished to the baroque along with Sarah Bernhardt and other excesses. And it is this tendency that makes me suspicious of my own negative reactions to some of the work that follows. I wonder if my own immersion in art world and design discourse networks does not necessitate that I conform to the move away from representational modes. Nevertheless, it does feel as if excess does not always entail more, so the following examples attempt to disentangle the authentic from the disingenuous.

The tendency to convert the entire world into letterforms, to make everything a wireframe of language, is not the goal of aesthetic animism. Movies that translate poetry into landscapes of leaping letters invariably exceed the literal threshold. Consider Tongue of the Hidden, a 2009 five-minute motion (p.146) graphic conversion of Ḥāfez’s poetry into a 3-D world constructed of Persian calligraphy. Its pure literalism exudes a gothic concern with detail yet overabundance dilutes its concentrative focus. Pure literalism conceives a world of animated skeletons that have no direct correlation to biological veracity, psychological interiority, agency, and/or the skins, ecosystems, and contexts that genuine animism invokes. Animism is as subtle as biology; it relies on layers of abstract reality negotiated by modes of interpreted perception. Aesthetic animism is not the direct conversion of scenes to letterforms, nor the simple accumulation of motion.5 In fact, its most successful motion graphic precursor implementations often occur where market forces dictate a tight, lean aesthetic and adherence to signature branding: music videos.

Music (and Other) Videos

In spite of the problems of excess, After Effects typographic innovations developed for music videos are seminal influences on motion graphic poetry. As in any field, there is much to be learned from precursors.

One example is the music video for Justice’s single “DVNO” (directed by Machine Molle and So-Me); it displays its song lyrics in forms based on animated logos from the 1980s: 20th Century Fox, HBO, NBC, PBS, CBS, Universal, Sega, and so on. Basically, this music-video-commercial appropriates voraciously not as a methodological adaptation to technological networking (as advocated by Goldsmith [2011] in Uncreative Writing) but instead for profit. The DVNO video samples a decade’s worth of motion graphics and compresses the experience into several minutes. It is technically possible because of direct feedback processes in modeling software, and scripts that bypass timelines in the compositing environments.

The effect of a video like DVNO engages because culture is suffused in typographic effects; this ad-for-a-band leverages intertextuality: its pleasure arises from identifying how it subversively recycles aesthetic tropes from television and record labels. It is the entertainment equivalent of the aesthetic pleasure derived from high art mashups like Christian Marclay’s (2010) The Clock—which builds a clock from film footage of clocks.6

DVNO is not a film mashup, although it is in effect a mashup. The objects that are being composited, the fuel and content of the assimilation aesthetic, are 3-D models, and often these are models of letters. The software involved in these animations increasingly involves the capacity to (p.147) manipulate in real time. In the twentieth century, animation was primarily accomplished using cel-by-cel frame animation; contemporary practice escapes the timeline frames by assigning algorithms to interpolate between positions. And increasingly the software itself anticipates or generates 3-D meshes or transitions; these automated processes in my view constitute the preliminary architecture of rudimentary metabolisms. So the compositing happens at the level of content (where old motifs reemerge), software (modeling, rendering, and compositing softwares used in sequence), and technical synergy (where models are merged with live footage, and the hand merges with algorithm).

In other examples of text-with-video classics, the pure sensuality of MK12’s (2005) virtuoso, After Effects-laden, soft-porn classic music video for Common’s “Go!” suggests an augmented data-saturated interface where text and video collude with virtual representations of motifs from classic posters. In contrast, the minimalist, monochrome microvignettes in Ji Lee’s Word as Image video/book propose insightful extensions of concrete poetry. The constraints of Lee’s process are deviously simple yet produce lush results: “Challenge: Create an image out of a word, using only the letters in the word itself. Rule: use only the graphic elements of the letters without adding outside parts” (Lee 2011). Ji Lee’s Word as Image suggests a context-dependent alphabet where letterforms adopt tiny gestures customized for each word. His restraint is matched by the baroque, excessive yet effective landscapes that enhance oral readings of Heebok Lee’s (2006) video setting of William Butler Yeats’s poem He Wishes for the Cloths of Heaven. In this video adaptation, the threads that voluptuously connect letterforms contrast with a segment of effulgent cosmological apparitions. It may not be to everyone’s taste, but it is a path, a space in the convulsive wilderness, a knot of potentiality unraveling for poetry as it merges with other media.

Georges Méliès (1900s) and John Whitney (1960)

Just as the roots of poetry entwine material and ontological concerns, the roots of motion graphics begin with magic and math, a magician and a mathematician, an individual concerned with tricks and one concerned with formal rigor. Both in their own way were concerned with awe.

Jeff Bellantoni and Matt Woolman’s (2000) Type in Motion identifies Georges Méliès’s advertising work as the earliest known example of film-based animated typography.7 Méliès emerged from a tradition of carny (p.148) barkers and hustlers, stage magicians, and illusionists, fantasy and horror, working the crowd, weaving a hypnotic spiel in order to plant a spell.8 Unfortunately, most of Méliès’s footage does not exist today; time literally marched over it: its celluloid was melted into use as boot heels during World War I (ironically he began in the family shoe business, and his entire career could be psychoanalytically attributed to a desire to escape from under the heel of realism).

The other root-origin of the term motion graphics, the mathematical one, begins with Whitney, who in 1960 started a company appropriately called Motion Graphics. Whitney was obsessed with principles of harmony that occurred between visuals and music: proportional systems with mathematical foundations. Noting how baroque counterpoint and Islamic arabesques were tractable subjects for computation, he created abstract rhythmic synesthesia. In 1958, he collaborated with Saul Bass on the titles to Alfred Hitchcock’s Vertigo—a collaboration that places him at a pivotal event in the popularization of dynamic typography. In the 1980s, he became concerned with real-time computer instrumentation—a prescient position given the crucial roles of Pure Data and MaxMSP in contemporary media art, and the contemporary field of live coding. His work, as Holly Willis notes, shares the idealistic propositions put forth in the 1970s by Gene Youngblood.9 He is a techno-utopian; his devotion to appearances evokes platonic ideals.

When motion graphic typography began with Méliès and his contemporaries, he was among the first (or the first) to use multiple exposures, essentially a precursor to compositing. Making multiple exposures is still one of the novice tutorials in After Effects today: camera on tripod, and a mask down the middle of the scene. Result: you stand next to yourself. This is the preliminary epistemological lesson: truth is subject to manipulation. The self divides; art provides us with a doppelgänger. Appearances are conceptual; they split self and experience, fact and fiction, essence and surface. Whitney’s revelation is more austere and transcendent, seeking to delineate how computers change the auric potential of mimesis. But his tricks of recursion and symmetry also constitute the foundational level of motion graphic animation programming instruction.
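The logic of that tripod-and-mask tutorial can be sketched in a few lines of illustrative Python. The frames, mask position, and pixel format below are my own assumptions for demonstration, not an After Effects recipe:

```python
# Illustrative sketch of the tripod-and-mask double exposure described
# above: the left half of take A is joined to the right half of take B,
# so a performer filmed twice appears to stand next to herself.
# Frames are rows of pixel values; a hard vertical mask is assumed.

def double_exposure(take_a, take_b):
    """Composite two takes with a mask down the middle of the frame."""
    assert len(take_a) == len(take_b)
    out = []
    for row_a, row_b in zip(take_a, take_b):
        mid = len(row_a) // 2
        out.append(row_a[:mid] + row_b[mid:])  # left of A, right of B
    return out

# Take A: performer ("X") stands on the left; take B: on the right.
take_a = [["X", ".", ".", "."],
          ["X", ".", ".", "."]]
take_b = [[".", ".", ".", "X"],
          [".", ".", ".", "X"]]

doubled = double_exposure(take_a, take_b)
# The composited frame shows the performer twice, once in each half.
```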

Hybridity’s Origin

The new hybrid visual language of moving images emerged during the period of 1993–1998. Today it is everywhere. … [I]t is appropriate to highlight one software (p.149) package as being in the center of the events. This software is After Effects. Introduced in 1993, After Effects was the first software designed to do animation, compositing, and special effects on the personal computer.

Lev Manovich, 2013

Manovich is the only media arts scholar (scholart?) who I know of to have considered the history (and developed a sustained discourse around the role) of After Effects. He identifies the release of After Effects in 1993 as a key date in the emergence of media hybridity. Even though many contemporary compositing packages do the same sort of work, for Manovich, After Effects is important because it is affordable: its affordability transformed compositing from an esoteric high-end technique into a grassroots commercial preoccupation.

To reiterate, compositing (similar to composting) contributes to assimilation, the capacity of language to chameleon into environments. Similarly, Manovich sees the movement of motion graphics toward hybridity as a Velvet Revolution that occurs in the era 1993–1998. During this time, according to Manovich, graphic design and typography were imported into motion graphics; this importation transformed and fused disparate disciplines and gave rise to new aesthetic hybrids.

Prior to After Effects, dynamic and kinetic typography obeyed arduous technical and financial constraints. It is exactly these sorts of technical and financial constraints that affordable compositing, with the birth of After Effects, dissolves.10

The Hybrid Canon

In the civic imagination, science is still considered dull, geeky, hard, abstract, and, conveniently, peripheral, now, perhaps, more than ever.

Natalie Angier, 2007

Replace the word science in the above quotation with the word poetry. Angier wrote her book to reverse public perceptions about science’s canon; I hope (perhaps imperceptibly) to contribute to the acceptance of digital poetry in the traditional poetic canon. Problematically, digital poetry is newborn; its canon is emerging and currently indeterminate. And how is it that After Effects fits into this argument?

(p.150) In conventional literary theory, a canon (the set of works considered worthy of study) is the focus of both dispute and reverence. The contemporary occidental literary canon is, generally, a by-product of the printing press: a huge forest of literature. To summarize a story often told by historians of technology, mass-produced books modified the dynamics of publishing from elitist scribe to populist broadsheets and independent artisanal presses.11

What I am proposing (in parallel with Manovich) is that a similar transformation of motion graphics (and specifically kinetic typography and thus digital poetry) occurred with the release of After Effects. As the scale, scope, and sophistication of After Effects surpassed critical mass, an autodidactic tutorial frenzy took place. Recursive feedback fed radical experimentation, which was rapidly assimilated into effect presets and new capacities in the release cycle. Creative production exploded in the communal estuary of After Effects users: aesthetic curiosity, growing computer use, Moore’s law, entry-level compositing, exchange forums, and online video tutorials. This symbiotic flourishing of technical means and artistic impulse is symptomatic of an incipient canon. The canon is a hybrid. It exists in the interstices between audiovisual art and literature.

Kinetic Type’s Printing Press: Suites

It is my feeling that kinetic type’s printing press is not the word processor but rather synergetic combinations (or suites) of software and code, such as Mr. Softie, Mudbox, Processing, Flash, JavaScript, and After Effects. These distinctly different softwares each offer a unique modality for dealing with kinetic type, yet each supplies quick, easy access to textual transformations. Each (to varying degrees) combines fluid motion with the capacity to composite text into combinations with 3-D models, video, images, and/or sound. This textual fluidity constitutes a breeding ground for the birth of a canon. Already signature motion graphic styles and formats of typographic manipulation can be identified. Expert users can spot software chains, effects, or combinations of sets of effects. The lineage or inheritance of various artistic styles or innovations (often fused into new variations) is readable by an informed viewer.

In the same way that a literary scholar can identify writers who have inherited (or appropriated) stylistic influence from Virginia Woolf (for example), it is possible to trace the roots of many motion graphic typography experiments to the production software (or suite of softwares), the (p.151) technique of the evangelist who first taught or popularized the technique, and the visual birthplace of the typographic style as logos or credits for film and television companies.12 Literary scholars might shudder at the suggestion that the contemporary literary canon was born from a complicit field of corporate propaganda and/or music videos, but it is plausible to resituate Homeric epics and threnody as ancient rock songs sung to warrior kings to glorify conquests. So it is not unknown for canons of enormous sensitivity, emotional range, and humanist sensibilities to arise from origins proximal to greed, glam, glitz, and aggression.

Immersive Gloss?

It is easy to dismiss compositing as mere technical innovation or cosmetic trivia. Yet its potential implications for writing as an activity that involves the entire being of the author become clearer if seen historically.

Jay David Bolter observed, “Wordsworth’s definition of poetry as a ‘spontaneous overflowing of powerful feeling’ does not easily include electronic poetry” (Bolter 1990, 153). Bolter wrote this statement prior to After Effects in reference to hypertext. Hypertext in the 1980s–1990s era of low bandwidth was minimalist: a few words and an underlined hyperlink. Computer graphics were weak, difficult, and not affordable to most authors or readers. To author digital work in that period required a concentration that precluded spontaneity. With each year, compositing tools and exponentially more powerful GPUs modulate that difficulty; with contemporary technology, spontaneity is an option, and the computer is no longer antithetical to “powerful feeling.”

For the young digital natives who engage (both today and in the future) with computation, navigating plug-ins may become as innate as putting a quill into an inkpot, and reading interfaces as easy as speech. That is to say, speech (which is a learned skill requiring years of immersive assimilation to evolve from babbling to coherence) develops in ways analogous to digital ease of use. Spontaneity takes time, absorption, and immersion; it involves muscle memory and innate dorsal reflexes; it requires immersion in an idiom and the cultural techniques specific to a technology. Critics of the use of glossy effects in digital poetry might warn that gloss and glamour (etymologically rooted in illusion) perform a paradoxical trick: in fixating the reader’s attention on surface effects, the reader forgets the material level. Nevertheless, while immersive engagement can engender gloss, it can (p.152) also generate depth and access processes of profound reflective interiority. Epiphanies by their nature are neither analytic nor materially self-conscious; they composite identity over the void.

Ads as Tech Ops: Attack of the Filler Poems

It may seem obscene to move from altruistic empathy and epiphanies to advertisements, and even more obscene to cite ads as poetry, but that is my next step. In a culture where rampant consumption threatens the material substrate of existence for the species, ads openly fuel addictive greed, amplifying the innate seek reflex. Yet ethics and planetary considerations aside, ads continue to exemplify the cutting edge of what kinetic, visual, malleable text is becoming. Video bumpers and channel idents advance the technical edge of typographic motion graphics. Merch placement logos for toddlers, tweeners, and seniors evolve the state of the art rapidly in a competitive system of software upgrades and corporate budgets. A large majority of these advertisements use After Effects templates as the foundations for their text manipulations. Tutorial archives for After Effects such as Video Copilot can then become reservoirs of style, spaces where astute dialecticians of motion semiotics can survey the metadiffusion.

If aesthetic animism (for language) emerges, then digital methods (metadata and animation) will need to be integral to letterforms; as such, ads are (unwitting) construction workers, building templates, exploring techniques, and establishing ways that data, visuals, audio, interactivity, and letterforms fuse to ensure semantic impact. Digital ads operate as a pluripotent nexus where opportunistic mutations in the properties of letterforms are tested against the ecosystem of market attention.

Ads, in addition to this technical function, share with poetry succinctness—the swift, rhythmic, and judicious use of text. This constrained use of text (twittered slogan/logo aphorisms of temporally constrained-screen-dwellers cyber-haiku) corresponds to poetic constraint: minimal means; maximal efficiency; a high information-to-noise ratio; small packets, dense messages, small minds, thirty seconds, fifteen seconds, five seconds, logo, cut.

Now how can kinetic ads and the motion in them be read? To answer that, I turn to rhetoric.

(p.153) Bi-Stable Decorum

The textual surface is now a malleable and self-conscious one. All kinds of production decisions have now become authorial ones. The textual surface has now become permanently bi-stable. We are first looking AT it and then THROUGH it.

Richard Lanham, 1993

In his book The Electronic Word, Lanham, a rhetorician, anticipates a new theory of literature needed for electronic texts; he proposes a theory based on a matrix of oppositional values, or what he calls a “bi-stable decorum” (ibid., 14). The primary opposition is between looking “AT” and “THROUGH” a text. Basically, the AT is a self-conscious reading of the materiality of the medium; the THROUGH is an immersive unself-conscious absorption of textual content. Many proponents of materiality (critics of immersive absorption) imply that in FX-rich environments, reading never occurs; it is short-circuited into narcissistic display.

Materiality critiques certainly have validity. Modes of aesthetic excess may temporarily obstruct semantic meaning or deflect cultural interventions. Yet later in his book, Lanham makes several “oracular speculations” that mitigate critiques of visual-hybrid literature:

Writing will be taught as a three-dimensional, not a two-dimensional art. … Word, image, and sound will be inextricably intertwined in a dynamic and continually shifting mixture. Clearly we will need a new theory of prose style to cope with all this. … I am talking about a theory superior to any that print allows us to conceive, but which would include print as well as dynamic alphabetic expression.

(Lanham 1993, 127–28)

So given the twenty years that exist between Lanham’s oracular proclamations and our own era, what would such a superior hybrid theory look like? In the following section, I attempt a tentative step along that path by suggesting that compositing as a term offers theoretical affordances appropriate to the task.


Composition has roots in both writing poetry and imagistic technology. In After Effects, units of work are called compositions. The name derives from the technique of compositing or keying out parts of an image so that the keyed parts disappear and layering effects can occur (e.g., a television weather forecaster). In the oracular arts, composition refers to the ancient act of composing (as in composing an ode, or composing a poem or (p.154) symphony); composition is often conjoined with rhetoric, and is synonymous with the act of sustained writing.
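The keying operation described above (the weather-forecaster trick) can be sketched minimally. This is illustrative pure Python, not After Effects’ actual keyer; the key color, threshold, and pixel format are assumed values for demonstration:

```python
# Minimal sketch of chroma keying: pixels close to the key color are
# treated as background and replaced by the underlying layer, so the
# remaining foreground appears composited over it.

KEY_COLOR = (0, 0, 255)   # classic blue screen (assumed)
THRESHOLD = 60            # per-channel tolerance (assumed)

def is_key(pixel, key=KEY_COLOR, threshold=THRESHOLD):
    """True if a pixel is close enough to the key color to be removed."""
    return all(abs(c - k) <= threshold for c, k in zip(pixel, key))

def composite(foreground, background):
    """Layer `foreground` over `background`, dropping keyed pixels."""
    return [
        [bg if is_key(fg) else fg for fg, bg in zip(fg_row, bg_row)]
        for fg_row, bg_row in zip(foreground, background)
    ]

# A 2x2 frame: red "letter" pixels on blue screen, layered onto grey.
fg = [[(255, 0, 0), (0, 0, 255)],
      [(0, 0, 255), (255, 0, 0)]]
bg = [[(128, 128, 128), (128, 128, 128)],
      [(128, 128, 128), (128, 128, 128)]]

result = composite(fg, bg)
# Keyed (blue) pixels now show the grey background; red pixels survive.
```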

Composition is thus a word etymologically and historically situated to operate at the interstice between writing and audiovisual art in a new theory of hybrid literature. That is why I believe that compositing tools like After Effects are probably forerunners of the sort of tools that the next generation of TAVIT poets will compose within. The level of complexity and depth of immersive experiences possible with such tools exceed those of a word processor by an order of magnitude, and they offer the affordance of terminology like composition that has ancient roots and a contemporary usage.

One could compare composited textuality to print textuality as 3-D to 2-D, perspectival to flat representations. Composition in its expanded sense here operates as a measure of the level of visual depth and procedural complexity offered. As in rhetoric’s labyrinthine terminology, compositing will probably undergo terminological fracturing as subspecies arise. Critics knowledgeable of the history of compositing will read visual language within a historical perspective: shadow play, cutouts, collage, and the evolution of integration. Their intertextual conversations will concern how text assimilates or evolves motifs in conjunction with its video, code, or generative backgrounds. Simultaneously bi-stable, they will also read THROUGH the text to analyze and absorb what the words are saying.

A Seed for a Theory

As much as choreography and easing equations need to be considered as literary devices (an argument I alluded to in my master’s thesis, but also a point made by many other commentators on kinetic text), raycasting, polygon counts, recursive scripting, and other qualities and effects possible within compositing software operate as semiotic tools. To speak authoritatively in this hybrid literary domain requires such terms implicated in the creative process.

Ferdinand de Saussure’s arbitrariness of the sign, the way its visual does not relate to its meaning, may undergo erosion. Digital composting incubates signs toward nonarbitrary forms; it recruits form as semantic protagonist (elevating it from subsidiary support role). As visual choices made by visual poets refute the canonical transparency of the text, the AT becomes read as a THROUGH. The bi-stable decorum proposed by Lanham dampens into apparent concurrency. As I stated earlier, I believe that digital modeling (p.155) constitutes an opportunity to sculpt letterforms into structures congruent with our archetypal, proprioceptive, embodied conceptions of them: conceptions reinforced by millennia of physically resonating speech sounds. Compositing augments that opportunity by allowing semantic meaning to resituate itself in real space. The formal qualities of the page, the line, spacing, line breaks, and all subsequent print experimentations enter into a 3-D, contextualized, spatial and auditory semiotic space. It is not easy to conceive how deep (or even cursory) readings of this material will occur without a new and hybrid theory that draws from cinema, gaming, programming, and literature.

A term (such as compositing) is not a theory; it is merely a seed for a theory—a stand-in or substitute until the actuality arises. Converting compositing from term into theory is beyond the scope of this book. The preliminary steps, however, would involve a comparative analysis of analytic tools from literary, cinematic, and new media studies. The questions would include: If compositing is a literary device, then what sort of device is it? And is it possible there already exists a cinematic term that might function? A quick list of literary devices would consist of: allegory, alliteration, allusion, analogy, assonance, climax, foreshadowing, hyperbole, metaphor, onomatopoeia, oxymoron, personification, pun, and simile. A quick list of cinematic techniques would include: cinematography (close-up, medium, long, and establishing), mise-en-scène, moving and position of cameras, lighting, special FXs, and montage. Essentially, there is nothing in either list specific to the superimposition of text over/within visuals (except for compositing itself). Compositing shares a conjunction of items with metaphor, analogy, and simile. These techniques bring disparate things or qualities together, and by placing them together, reveal or generate a semantic discharge. Yet there is no existing theoretical frame for how to critique composited text. The best that can be hoped for at this juncture is sensitive observers who evaluate instinctively using hybrid theories.

Theory from previously independent disciplines (cinema, gaming, literature, and music) must also be composited over each other. Thus compositing occurs at practical and theoretical levels.

Case Study: Mudbox

Although the following case study concerns the software Mudbox, Mudbox was not the first (nor is it the only) software to develop modeling tools that (p.156) are sculptural in quality (it just happens to be the software I used, but the argument can be generalized to other ones). Notable as a precedent, ZBrush, developed by Pixologic, was demonstrated in 1999 at Siggraph and then commercialized by 2002. Mudbox was first developed to produce the 2005 version of King Kong, then it was purchased by Autodesk in 2008, and now it ships in a suite with Maya (which has its own set of modeling capacities and was first released in 1998). As these tools develop, they adopt ways of manipulating models derived from both the arts and industry. From the arts, sculptural methods provide the foundation for sets of brushes (more on brushes later), and from industry, these softwares borrow processes of replication and duplication, and architectural techniques derived from solid-modeling tools like AutoCAD (released in 1982).

ZBrush and Mudbox (unlike AutoCAD) model soft and fluid materials.13 It is for this reason that they signal a bridge in 3-D authoring that moves from hard to malleable, dry to wet, linear to curved. They are also in many ways precursors of software that will render objects in real time as they are modeled. Thus they fit metaphorically into the explosion of biological sciences and BioArts that now manipulate wet DNA. As noted previously, there is a lineage between language arts and genetics that leads from holograms to bioculture (via Eduardo Kac).

Minimal Information Temperature versus FX Fever

Fixing the informational temperature at the minimum necessary to obtain the aesthetic achievement of each poem undertaken.

Haroldo de Campos, 1982

When in 2009, I published Human-Mind-Machine, a video constructed from screen captures of the manipulations of single words within Mudbox (a 3-D sculpting software), I was not concerned with what de Campos refers to (in the quotation above) as minimal means. Nor was I concerned (as Brian Kim Stefans is) with a refutation of the lyric.14 The video-poems are minimal. And they might seem at some level to be computational poetry—that is, readable as data evoking a refutation of the lyric.

There are other possible (opposite yet not incompatible) interpretations, though. First off, I am a novice user of Mudbox; the artifacts and effects generated are in many instances spontaneous accidents. Second, Mudbox (p.157) permits rash, reckless experimentation that provokes excess. Surplus is not inelegant when innocent. I was hoping to convey a classic concern with life as wound, scarification, egocentric inflation, and the rough transformations circumstance creates in consciousness. In short, 3-D permitted an open situation, concerned with classic content, through which the lyric reincarnates as excess.

In addition, Mudbox (when hacked for innocent use as a screen-capture animation tool) has no timeline. It is not (as is After Effects) an authoring environment where precisely planned and tediously crafted elegance happens. Instead, it is an area of swift experimental probes, excursions into spontaneous pressure—a playground for letterform deformation. Everything occurs in real time. It is a riot not a ballet.

Mudbox Machinima

Disclaimer: This entire section is vestigial. Since the 2011 version of Mudbox embedded a video-rendering engine into the interface so that users can exchange interface tips using online videos, the following process describes a low-fi hack/work-around that is no longer necessary. Yet the mode of approach is, I think, indicative of how poets might appropriate technology using deviant techniques for unanticipated purposes. And it highlights how methodologies and attitudes survive technological obsolescence. My education as a 3-D animator is limited to a yearlong, full-credit, undergrad university course in Maya, a programming class in OpenGL, and extensive autodidactic play ever since. In 2009, I was given a one-year student license to an Autodesk suite that included Mudbox. I knew no one in the Mudbox user community, and still don’t, and suspect that they would consider my practice to be that of a misinformed Luddite. In any case, I also suspect my innocence is an asset. Because I had no one to teach me how to use the tool properly, and I had some ingenuity concerning similar tools, I developed an idiosyncratic (and limited) pipeline for manipulating letterforms. In other words, my improper use arrived at a relatively unique method that says something about the tools as they exist now.

Three-dimensional modeling reminds me of medieval craftsmanship. It is time-consuming, energy intensive, and more often goes wrong than right. General-purpose tools like Softimage, Blender, or Maya do not encourage amateur users. The learning curve is steep, and the path begins with a cliff. Exploratory creativity in these authoring environments exacts a heavy temporal entrance fee. Mastery is even more expensive. It is for this reason (p.158) that these softwares are analogous to arts (that sometimes involved apprenticeships) such as oil painting, etching, or casting sculptures in metal, and instruments like the oboe or clarinet. Both physical skill and long-term dedicated practice are prerequisites for competence.

When I began muddling about in Mudbox, I knew that my own stylistic preference for spontaneity and sketch work would have to find a methodological foundation. Mudbox was designed for the quick, intuitive, clay-like sculpting of 3-D characters; it was not conceived of as an animation tool. So I derived a screen-grab method that effectively converted Mudbox into a crude animation tool. I knighted my idiosyncratic method with the title Mudbox Machinima. Machinima arose when game users began to produce short 3-D movies using the capture tools inside console games, and it basically involves repurposing a tool/game for a use not foreseen by its creators; it seemed an appropriate name for my ludic hijacking of Mudbox's capacities, which effectively short-circuits the normal arduous rendering route for letterforms (from letter creation in Maya to manipulation in Mudbox to lighting and rendering in Maya), avoids the creation of cameras and lights, does not involve complex raycasting, and within its constraints offers an opportunity for spontaneous, quasi-improvisational play.

The process that I called Mudbox Machinima was a multisoftware workaround. The process began by creating a simple letterform model in Maya; the model was then exported for use in Mudbox. In Mudbox, the background was set to a classic blue-screen color and the grid hidden. A screen-capture tool (Camtasia) recorded a video of the sculpting. My goal (then as now) was different from that of the software designer's intended users: not to instruct or tutorialize, but rather to adapt, manipulate, and composite improvisational deformations. The resulting exported video was imported into a video-editing software (in my case, Sony Vegas) and a chroma key applied to remove the background. Shadow was created by duplicating the Mudbox-film layer, removing its color and contrast, rotating it in 3-D, changing its opacity, and applying a small amount of blur.
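For readers curious about the underlying logic, the keying and shadow steps of that pipeline can be sketched in a few lines of Python with NumPy. This is an illustrative approximation of what a video editor does internally, not a reconstruction of Sony Vegas; the function names are invented:

```python
import numpy as np

def chroma_key(frame, key_rgb, tol=40):
    """Return an alpha mask: 0 wherever a pixel is within `tol`
    of the key color (the blue screen), 255 elsewhere."""
    dist = np.linalg.norm(frame.astype(int) - np.array(key_rgb), axis=-1)
    return np.where(dist < tol, 0, 255).astype(np.uint8)

def fake_shadow(frame, opacity=0.4):
    """Desaturate a copy of the frame and scale its intensity,
    approximating the duplicated, decolored, semi-opaque layer."""
    gray = frame.mean(axis=-1, keepdims=True)
    return (gray * opacity).astype(np.uint8).repeat(3, axis=-1)

# A 2x2 test frame: three blue-screen pixels, one white "letterform" pixel.
frame = np.array([[[0, 0, 255], [255, 255, 255]],
                  [[0, 0, 255], [0, 0, 255]]], dtype=np.uint8)
alpha = chroma_key(frame, key_rgb=(0, 0, 255))
```

The blue pixels key out to transparent (alpha 0) while the letterform pixel remains opaque; the shadow layer is simply a darkened grayscale copy ready to be rotated and blurred.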

All in all it was a relatively simple process, but one that, in the two years since I developed it, has already become obsolete, superseded by multiple improvements in the interoperability of Maya and Mudbox as well as by new video rendering direct from the Mudbox interface. It nonetheless demonstrates incipient signs of letterform life, the twitching skin of letters, (p.159) a fast pipeline from conception to product, and the tendency of users to contort software for specific needs unanticipated by the designers.15

Gestural Manipulations of Matter: Sculpting Software

Though we have spoken, indeed, metaphorically of the “life” of the program, it is not only metaphor. Mind enters world, not contained within skin, but as a circuit-loop feedback operation.16 The living, and all living functions, are indissoluble from information-driven environmental loops which alone serve as units of survival. Animal mind, protected from “real” impact by the physical world, negotiates its circuits by abstract, non-physically locatable, information.

Stephanie Strickland and Cynthia Lawson Jaramillo, 2007

Mudbox and ZBrush offer direct gestural deflections of 3-D surfaces in ways analogous to the manipulation of matter; in this way, they evade the key frame tweening mind-set inculcated by timeline production that temporally distances the artist from the normal immanence of cause and effect. To repeat, with traditional animation timelines the artist performs a transformation, applies a key frame, and renders to watch. It is as if the artist has to press a button in order to see change occur after touch. On the other hand, in Mudbox, direct tactile control leverages ancient instincts that engage and respond to immediate visual feedback. There is no delay, no interrupt, no obstruction.
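The timeline paradigm that sculpting tools evade can be stated concretely: the artist pins values to key frames, and the software fills in the in-betweens at render time. A minimal linear tween, sketched in Python (a hypothetical illustration; commercial animation tools use richer easing curves):

```python
def tween(keyframes, t):
    """Linearly interpolate a parameter between key frames.
    `keyframes` is a time-sorted list of (time, value) pairs."""
    (t0, v0), *rest = keyframes
    for t1, v1 in rest:
        if t <= t1:
            u = (t - t0) / (t1 - t0)   # normalized position within segment
            return v0 + u * (v1 - v0)
        t0, v0 = t1, v1
    return v0                          # hold the last value past the final key

# A letterform's x-position keyed at frames 0, 10, and 20:
path = [(0, 0.0), (10, 100.0), (20, 50.0)]
```

Every in-between frame is computed, not touched: `tween(path, 5)` yields the halfway position the artist never directly manipulated, which is precisely the delay between gesture and result that direct sculpting removes.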

ZBrush first shipped in 2002 with thirty brushes. The palette has expanded since then. Some brushes relate directly to painting; others relate to sculpting, strokes, textures, and materials. All are parameterized so that each brush actually represents a wide range of potential deflections. Mudbox uses a colloquial naming pattern for its brushes; the sculpt brushes are called sculpt, smooth, grab, pinch, flatten, foamy, spray, repeat, imprint, wax, scrape, fill, knife, smear, bulge, amplify, free, mask, and erase. At a nominal level, these tools replicate normal, easily understandable ways of working with physical matter; at a cultural level, they merge the toolboxes of sculptors and painters; at a physiological level, they function as prosthetics, enhancing the hand and extending the eye.

In terms of letterforms, software brushes echo typographic foundries that produced hot metal type, which was poured into matrices. Ironically, matrices again hold the form of type in Mudbox—matrices of binary code—except that it is not lead that is poured hot into the molds but rather data.

(p.160) What Does Mud Have to Do with Language?

To reiterate, malleable typography allows semantic deflections to occur on the skin of the letterform itself, in the texture of the text so to speak. Texture in 3-D idiom refers to the skin of a model. If the skin of a letterform is a surface that can be scratched, scarred, or twisted, then surface deflection becomes semiotic. The shapes of skins are also read. J. Abbott Miller (1996), Matthew G. Kirschenbaum (1997), and John Cayley (2005) all anticipated this potential.

Humans interpret and classify both costuming and contortions of bodies. Letterforms with bodies get read somewhere in between language and image. This oscillation merges literature with aesthetics. An expressive displacement that happens at the level of vision reverberates into thought. It is a change that occurs in parallel with the changes in depth postulated by Noah Wardrip-Fruin’s reading of expressive processes and Alan Sondheim’s emphasis on codeworks, where the programmatic foundations underlying mediated language become semiotic. Instead of a depth expansion, I am speaking of a breadth expansion, a semiotic infusion that takes place on the surface of letters.

Choreography carries expressive capacity. Anthropomorphic 3-D container letterforms echo our own skins. Visual deformations activate a history of aesthetic analysis. As many before me have noted, textural deformations of letterforms expand reading. And like contemporary biological sciences, which are permitting new genetic manipulations to emerge, 3-D modeling tools such as Mudbox and ZBrush permit a range of mutations that exceed the traditional range of typography (making it opaque and embodied), choreography (defying gravity and interpenetrating bodies), anthropomorphism (inflating, inverting, and merging), and visual history (oscillating from perspectival to flat, animating the frame).

Shape Semantic Synergy, Motion-Tracking, and Music Videos

As previously alluded to—in the sections "Ads as Tech Ops" and "Music (and Other) Videos"—the expanded synergetic reading of literal as visual has been most cleverly and deftly exploited not by digital poets (who have contributed to the conceptual and aesthetic evolution) but instead by film credits, music videos, and advertising. Ads have colonized the genre, rapaciously assimilating tropes and inventing motifs. Augmenting this accelerated creative process, there are many proficient software point trackers on the market: Shake, Fusion, Nuke, PFTrack, Boujou, MatchMover, and Mocha. They (p.161) resolve and match 3-D into video space. As stated earlier, digital language will shift ontologically when digital language adopts features of organic life, and is perceived as natural and natured. Point trackers perform the basic physics of orientation. They place language in the scene.

Question: Why would advertisers prefer language that blends in and belongs? Why go to the intense technical trouble of creating credible letterforms with shadows, depth, weight, momentum, and respect for collision boundaries? Why not use letterforms that are objectively present just as decoration? What advantages might situated object-like text deliver? It seems safe to assume some advertisers intuitively recognize several cognitive benefits. Situated text is perceived as being heavy; weight, as cognitive science has shown, is intuitively associated with seriousness, solidity, and durability (a heavy clipboard used in a survey means that the process is serious; to praise something, we say, That argument has weight; its logic is solid). Additionally, situated text perhaps bypasses the analytic scrutiny habitually applied to language when it is read; it places the perceiver in a position more proximal to desire; this is language that clowns or performs for us, distracting us from the precise metrical inferences of reading. Most theorists laugh when viewing clowns.

Technically achieving the effect of putting 3-D text into a scene now involves many automated, algorithmically tractable processes that the designer simply initiates; processes that were previously performed by hand-pinning keyframes to timelines. Softwares interpolate velocity utilizing physics, detect edges and collisions, and correctly adjust, align, scale, and light. Combining modeling (where letterforms respond immediately to deflections of the hand), algorithms that autoactivate motion based on proximity or generative processes, and the ability to blend these letters into environments gives letters the ontological status of objects; it is a crucial step on the path toward living language.

Per-severe or Per-ish

  • ads that are also language art
  • bifurcate between meanings,
  • careen between disciplines; and
  • bypassing discourse,
  • render & sell

—Jhave, blog post, 2011

(p.162) My tastes and interests are obviously more sensual (some might say naive) than the dominant vector of conceptual language arts criticism that emphasizes a lineage including Joseph Kosuth, Lawrence Weiner, John Baldessari, and others, whose visual styles, incidentally, have not modulated radically in reaction to digital technology. I prefer the ad company Psyop, whose brand idents expand the technical capacities of text in 3-D video environments; in these ads, wonder and craft have not been sacrificed at the altar of austerity and concept. It's surprising to me how few digital poets actually work with 3-D or motion graphics. If anything, there has been a backlash against it. Poets of a previous generation worked with 3-D: Kac, André Vallas, Ladislao Pablo Györi, and—one could include—Muriel Cooper. They often came from a hybrid or visual art background. Perhaps due to the stigma of 3-D advertising's colonization (i.e., contamination) of the genre, poets have rejected it. Perhaps it's due to the learning cliff. Perhaps it's Marshall McLuhan, the prophet admonishing them at the gates: the medium is the massage. Perhaps it's simply an abhorrence of effect for effect's sake. Anyway, poet-practitioners dedicated to 3-D art are rare. It's a rarity that might cease in the next generation. It is this potential that motivates.

Take a simple conventional yet clever ad found online. In it, two words lie side by side: Per-severe Per-ish. Each word is split at the hyphen as if the word has shattered. Their sides are lit and glinting as if made out of annealed chrome. Shadows fall around them, gleaming as if at dusk, illuminated in a placeless place: a pool of light on a hard, dark background. They replicate the rich burnished depth of aged oil paint. The only thing remarkable about these sculpted letters is that they do not exist.

The Per-severe Per-ish ad is apparently a product of the marketing agency J. Walter Thompson's executive creative director Chafic Haddad, but it is not trumpeted anywhere. There are so many of these 3-D letters, so many ads, so many campaigns and animators, that there is no scarcity. Some go missing, anonymous, adrift. They are not miracles; the miracle is that they are normal. Yet even if it is already so normal and common, Per-severe Per-ish is also to my mind a relevant demonstration of how 3-D modeling could so easily fit within the minimal means and aesthetics of a contemporary digital concrete (digital pudding) poetry. Maybe the image of Per-severe Per-ish is a still from an animation (in the next frames, the "ish," slowly toppling, shatters). Imagine Duchamp finding this ad and submitting it as (p.163) his artwork for a language show. The level at which the play of language in Per-severe Per-ish sends semantic meanings in recursive circles exceeds that of a simple branding exercise. Form follows content (a little too obediently but nonetheless symmetrically), the medium is integral to the piece, and its execution is stylistically (as in much lavishly budgeted branding) impeccable.

Reawakening the Inert

Virtual 3D structures made from letter forms will have, as it were, an appreciably enhanced spatial structure for literate readers. Moreover, because of the expectations (of legibility) that these forms bear, it should be possible to “play”—affectively, viscerally—with their form and arrangement in ways that are likely to have aesthetic significance, and some bearing—potentially, ultimately—on literary practice.

—John Cayley quoted in Rita Raley, 2006a

Origin myths often begin with a lump of clay or mud into which the spark or breath of life enters. The inert mud awakens. The Sufi poet Rumi is occasionally cited in evolutionary literature because he identified a chain of incarnations from mineral to vegetable, animal, human, and so on—the path of the life spark through matter. This vision of a gradient of sentience is shared by many Western panpsychists. Life starts with chemical constituents and arrives through structural emergence at self-consciousness. The core matter of the nonliving and the living is not different: these are carbon-based forms. From the perspective of both myth and biochemistry, mud is at the root of reason, passion, credit card charges, and world wars.

Currently tools like ZBrush and Mudbox offer a reasonable visual simulation of physical contact with digital representation that seems a lot like wet clay or mud. It is not of course wet or gritty or chemically coherent in ways that emulate the complex capacities of matter, but it can, within the confines of a screen, emulate the physics of these substances. And screens in spite of their evident ocular-centric limitations do effectively activate empathic processes. If screens did not function empathically, action films would be boring and porn would not be a major industry. Modeling software is already one step farther than most “films”: it is interactive. So additional physiognomic reflexes and endogenous networks of biochemistry (p.164) arise during the authoring-modeling process (amplified as the mouse is replaced by pressure-sensitive Wacom gestures). The software user is physically implicated in a process that is mythological; they are reconfiguring matter into emulations of life.

One step beyond modeling is generating. Growing generative forms automates the sculptural instinct. Scripting languages specific to many 3-D vendors encourage exploration of generative forms. How are they grown? They are written. They are often recursive. They manipulate geometries in topological ways. This trio of attributes (written, recursive, and topological) palpably echoes the linguistic theories of language itself, and resonates with thoughts previously cited from Strickland and Gregory Bateson (n16).

Code pervades the process; human agency and intervention are reduced to aesthetic nurturance roles. Creating works in such a way is analogous to gardening. Future fonts may be grown (as anticipated to some degree by Miller). Donald Knuth's quest for the essence of all fonts may not be answered, but the seeds he sowed by initiating the first sustained computational attention to font formats as programmed entities will flourish. One potential pathway such fonts might take is explored in my Easy Font project (Jhave 2011). All the component pieces of the Easy Font letters are algorithmically produced using a commercially available Mandelbulb ray-tracing 3-D plug-in produced by the ex-physicist Tom Beddard. A real-time version of the plug-in is currently under development; it will apparently run in the browser. So it is not speculative sci-fi to anticipate fonts that organically occupy space. It is not fantasy to anticipate the poets who will culture and grow from seed algorithms morphing letterforms and compositional structures. Poets will examine these creations with the same proud sense of authorship with which previous generations harvested their subconscious for rampant, sensual scribblings.
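A classic instance of a form that is at once written and recursive is the L-system, whose rewriting rules grow branch-like structures from a one-letter seed. The sketch below is purely illustrative (it is not the Mandelbulb algorithm behind Easy Font, and the rule set is invented), but it shows how a few lines of writing can "grow" a form:

```python
def lsystem(axiom, rules, depth):
    """Rewrite an axiom string `depth` times: each pass replaces every
    symbol by its expansion in `rules` (or keeps it unchanged)."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

# A hypothetical rule set: 'F' = draw a stroke, '+'/'-' = turn, '[' ']' = branch.
rules = {"F": "F[+F]F[-F]F"}
grown = lsystem("F", rules, 2)
```

Two rewriting passes turn a single stroke into a sixty-one-symbol branching structure; a turtle-graphics interpreter would then render the string as a plant-like glyph, which is exactly the gardening relation the paragraph above describes: the poet writes the seed and nurtures the growth.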


One of the underlying realities of contemporary software is that some tasks in 3-D environments are getting easier. The story of my own experience with Mudbox confirms this tendency. When I began working with Mudbox and Maya in late 2008, the interoperability pipeline between these two softwares, vended by the same company as part of a suite, was far from stable. Complex, intersecting sets of parameters had to be meticulously compatible in order for the transfers of typographic models to take place without errors. This occurred in both directions. The only way to play (p.165) with text in Mudbox was to first model it in Maya, enable the object export plug-in, carefully calibrate the bevels, and send an .obj file to a disk. Only after opening the .obj file in Mudbox would errors appear. These errors would be visual deformations (destroyed kerning, inverted corners, and smooth meshes that looked like cactus). Inside Mudbox, there was no error list or suggestions on what had gone wrong. Getting text to export correctly, in a way that was satisfactory to my aesthetic goals, took me about one and a half days of steady back-and-forth effort: a blind process of trial and error. The overall feeling was of being subjected to a border crossing where rigid, unwritten rules controlled my fate.
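The silent failures described above can be made concrete with a toy example: a few lines of Python that scan a Wavefront .obj file and flag faces referencing vertices that do not exist, one kind of error an exporter can produce without warning. This is an illustrative sketch, not a reconstruction of either Maya's or Mudbox's validation, and it simplifies the format (real .obj files also allow negative, relative indices):

```python
def check_obj(text):
    """Scan a minimal Wavefront .obj: count vertex records ('v') and flag
    any face record ('f') whose positive, 1-based vertex index falls
    outside the vertices declared so far."""
    verts, bad = 0, []
    for n, line in enumerate(text.splitlines(), 1):
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts += 1
        elif parts[0] == "f":
            idx = [int(p.split("/")[0]) for p in parts[1:]]
            if any(i < 1 or i > verts for i in idx):
                bad.append(n)          # record the offending line number
    return verts, bad

# Three vertices, one valid triangle, one face pointing at a missing vertex:
verts, bad = check_obj("v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\nf 1 2 4\n")
```

The checker reports three vertices and flags line 5; the point is simply that such errors are mechanically detectable, even though the 2008-era pipeline surfaced them only as visual deformations.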

The current workflow offers greater ease of use. The dilemma is that the professional tools want to offer infinite customization processes. Daunting menus and submenu intricacies proliferate. If a writer stumbles into these forests of options, it is unlikely that they will escape with expertise.

Yet while the emergence of tools like SketchUp suggests that 3-D will proliferate in ways analogous to the spread of literacy, it is difficult to induce a generation that has grown up immersed in CGI and 3-D to become familiar with the paradigms of modeling and rendering. There are also symptoms that this might never occur—that humans like flat, surface-screen displays for ingesting literary reasoning. A Global Visuage (Piringer and Vallaster 2012), an anthology of visual poems, contains only one image done in a 3-D modeling software (mine).

Sculptors, Prosthetic Fingers, and Feral Cats

We cannot be sure whether Leibniz was right to compare the perceptions of a rock to those of a very dizzy human, or whether we should speak of "experience" at all in the inanimate realm. … However I would propose that if we look closely at intentionality, the key to it lies not in some special human cogito marked by lucid representational awareness. Instead, what is most striking about intentionality is the object-giving encounter. In other words, human awareness stands amidst a swarm of concrete sensual realities.

Graham Harman, 2010

Traditional sculptors relate to their materials like feral cats: they prowl, absorbing them. A block of granite or wood provides flocks of subconscious cues: grain, temperature, rivers of color, deformations, flaws, weight, and so on. An old coat hanger may suggest a crucifix; a skull may need to (p.166) be encrusted with diamonds. Many of the cues are multimodal. Fingers, eyes, nose, ears, and the proprioceptive body each contribute. Michelangelo reputedly claimed that he was freeing figures within stone. Figurative expressivity is not alone in this absorptive approach. Other cues are social: What use has this object had? What context does it arise from? How has it never been seen before? Duchamp's sophisticated grasp of the contours of conformity and stigma gave him the capacity to challenge and transform contemporary art. Rosalind Krauss's conception of the expanded field heralded the antimonumental movement. In each case (traditional, modern, and postmodern), the sculptor's relation to materials contributes to creation. How does this work when the materials are screen based and software derived? Is it possible to relate creatively to the materiality of computation? No current category of conventional arts can accurately describe thick words gouged and spinning, plump words fluffing up into indecipherable froth, and letterforms carved like moist icing.

Inside Mudbox’s default layout, there is a tabbed rack of tools at the bottom. These are prosthetic fingers—rigid, clawed, and magnetic. Kneading digital substance occurs by flicking between these tools (a flicking that in Mudbox 2011 is accomplished with the numerical keypad). Altering brush parameters permits customizable deflections. Wacom tablets are the preferred input device. Pressure sensitivity delivers simulacra of sensation. The surface can be worked at various levels of resolution from rough (low poly res) up through levels of increasing density. These levels coexist superimposed virtually as abstract entities; the sculptor flicks between them (using page up or page down). Traditional advice floats around the public forums about how the sculpture must be roughed in at low res and then progressively worked layer by layer. It is the same advice as that given to apprentice sculptors in the Renaissance.

Just as one would with a real chunk of clay, the 3-D modeler turns the model, prods at it, zooms in (steps toward) and scuffs or scratches, zooms out (steps back), rotates (the pedestal), corrects a detail, and rotates again. It happens at the same speed (if not quicker) as it would physically. Clearly the paradigm of tactile precision has made a cursory conversion into computation. Ancient and contemporary crafts (and I use the word with respect) are iterative processes, repetitive toil. After the instigating idea, creation devolves into a steady process of approaching the implementation of that idea (while sporadic spikes of ancillary inspiration occur, most of the work (p.167) is attention to detail). Luckily, monotony of labor, if accompanied by a need for concentration, sometimes pleases the body; to hit the chisel with a hammer, move a chess piece, or click over and over on a Wacom tablet all belong on a similar continuum. Hours are measured in tiny modulations as the work creeps toward completion. I see little difference between computational and physical modeling: same instinct, new tools.

In my view, the brain empathically bridges the tactile impoverishment so often seen as symptomatic of contemporary screen culture. Sculpting in software is sculpting. Brains already do live happily in jars; the jars are called skulls.

Improvisation versus Timelines

I want to emphasize that the workflow work-around I developed had one ancillary effect: rendering (rather than being timeline based) became spontaneous real-time improvisation. Instead of reimporting the model into Maya, creating cameras and lights, applying a texture, and animating the mesh of the letterform by setting key frames on a timeline, the rendering was extracted directly from the screen in Mudbox in a single improvised take. Instead of calculating each position as a step and allowing the software to interpolate between them during the final output, gesture was immediately transcribed. This process suggests that there is a role for nontimeline-based animation work during the spontaneous manipulation of an object (regardless of whether it is a letterform or anything else).


Software that permits the real-time autorecording of parameter changes already exists in the audio realm. The Ableton Live interface is divided into session and arrangement views, which allow users to manipulate multiple parameters while playing. These manipulations automatically enter into a key-framed timeline. Parallel ways of working (improvisational and cel/frame based) interweave. Subsequent runs of the same timeline can occur with changes to any of the parameters made during the run or after it is over. Spontaneity and rigor are equally enabled. Fine-grained modulations can be done by hand over tiny regions.
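The autorecording described here amounts to time-stamping each parameter change during a live pass and replaying the resulting key list later. A minimal hold-last-value sketch in Python (the class name and structure are hypothetical; Ableton's internal representation is of course far richer, with interpolation curves between breakpoints):

```python
import bisect

class AutomationLane:
    """Record time-stamped parameter changes during a live pass,
    then play them back as a key-framed timeline (hold last value)."""

    def __init__(self):
        self.keys = []                 # (time, value) pairs, recorded in order

    def record(self, t, value):
        """Called live; assumes times arrive in increasing order."""
        self.keys.append((t, value))

    def value_at(self, t):
        """Playback: the most recent recorded value at or before time t."""
        i = bisect.bisect_right(self.keys, (t, float("inf"))) - 1
        return self.keys[i][1] if i >= 0 else None

lane = AutomationLane()
for t, v in [(0.0, 0.2), (1.5, 0.8), (3.0, 0.5)]:   # a live "take"
    lane.record(t, v)
```

The same lane can then be replayed verbatim, edited key by key, or overdubbed on a subsequent run — the interweaving of improvisation and inscription the paragraph describes.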

This integration of parallel capacities that encompass improvisation and iteration creates flexible software instrumentality. The software can be played like an instrument (free improvisation) even as it records (classical inscription). The instrument analogy at one level explains why audio software (p.168) has incorporated such capacities while 3-D has only tentatively explored them: musicians have for millennia been using a combination of improvisation (free play) and timelines (scored music). Sculptors have not in general worked with a single tool as musicians typically do. At another level, the added GPU- and CPU-intensive processes entailed by 3-D preclude such a free approach. Real-time rendering at high frame rates with complex polygon counts is not yet occurring on commercial-level personal computers.

The Role of 3-D in Future Writing

Language is both acoustic and optic.

Alfred Kallir, 1961

I have repeatedly stated that the shape of the body’s internal resonators when speaking might be the source of shape-sound associations that operate as archetypes. And these shapes (basically sculptural forms congruent with morphemes) have (until digital 3-D) lacked the technological means to become integrated in a volumetric way with letterforms. It is my contention that tools like Mudbox (and other 3-D sculpting tools such as ZBrush, Cinema 4D, etc.) will permit these associations to become manifest.

Unfortunately, there are few credible sources for this claim. Kallir's Sign and Design: The Psychogenetic Origins of the Alphabet, while astoundingly rich in etymological fauna, is an outlier. It claims that the alphabet emerged from painting, that all languages (even remote ones) emerged from a common source, and that modern alphabets contain the sediment of deep-rooted, atavistic sexual and psychological pictorial impulses. I am inclined to believe there is much that is true in Kallir's basic ideas; the details may occasionally spurt into fiction, but the core is tenable. The letter A, for instance, flipped vertical is a horned animal, a priapic hunter. B is an abode, a dwelling, a feminine womb. L carries liquid within it. These optic-semantic roots (what Kallir refers to as symballic: concurrences of semantic sediment carried by form) carry over into contemporary language as the allusions and ricochets of congealed meaning that make words more than literal. Letters are in this sense monuments weathered by use.

As alluded to in chapter 3 on aesthetic animism, the evolution of printed text can be seen as progressive abstraction enabled by technology. To be literate is to read abstract symbols. Indo-European printed letters are not (p.169) consciously ideogrammatic, nor are they doodles. Their meaning bears little relevance to their visual sense (even if we accept Kallir's claims, the resonance of visual archetypes is a residue). It seems likely that we are schooled to learn them, not born into them. There is not yet (as far as I know) a genetic marker that predisposes one to learn QWERTY keyboards. It is a skill, absorbed over time; it is an epigenetic feature. Letterpress involves an apprenticeship. The same holds true for 3-D animation studios. Modelers absorb traditions, expand, extrapolate, evolve, imitate, and innovate. It will be curious to see, however, whether, as 3-D authoring tools enter daily usage, these tools will enhance letterform shape-sound-semantic co-occurrences.

In this postulated future, letterforms evolve meanings that correspond to archetypes of how they appear. A liquid word might use a liquid font. Or conversely, a dry concrete-block font might spell out the word fluid and shatter into dust. In this way, poetry, specifically visual poetry, by engaging with the materiality of letterforms as entities, will advance the evolution of letterforms so that the form and animation of letters constitutes a vector for interpretative analysis. Volumetric animated typography in this scenario re- or devolves on a spiral to parallel the reputed origins of language: painting and sculpture, the molding of forms, wet clay, or raw touch. As such, tactile language becomes a precursor to an eternal return, bonding language once again to representations that (although screenic) are in this world, of it, as its.

Mr. Softie

A sequencer might play itself for some time after being given instructions, but a guitar demands interaction for each note sounded.

Noah Wardrip-Fruin, 2009

Mr. Softie is typographic software that allows touch-sensitive user manipulation of vector-based type. It allows flexible effects to be applied to text in real time. It presents an interesting contrast to commercial animation products, because in Mr. Softie there is no animation timeline. The implications of this interface change are subtle yet profound. It both aids and impedes the capacity of creativity in ways that have resonant implications for writing in the twenty-first century. It suggests word processors that operate as instruments sensitive to the gestures of their users.

(p.170) Mr. Softie ties into the presuppositions underlying aesthetic animism. Namely, visual digital poetry is innately sculptural; the formal issues it explores are structures: layout, placement, motion (or implied motion), and shape. Structures can be visual, linguistic, or emotive. Shapes bear the expressive weight of events that preceded them. In the same way that words gather emotive force (magnetizing semantic turbulence around them and evolving over time), shapes carry esoteric dimensions that have history and record time. Serenity, pain, sexuality, and anguish (while subjective and culturally specific) have associated shapes; they writhe or remain still. Subconscious forms are collective. Sculptures bear witness to the capacity of humans to read form; totems are literary devices designed to express myth. Archetypal forms conjoined with language synergistically couple literature and sculpture.

What Mr. Softie allows is the real-time capacity to modulate archetypal typographic shapes and capture those sculptural modifications as time-based media. In practice, it is a vehicle for hybrid creativity that spans and fuses disciplines. Processes of writing and sculptural concerns merge. It is this confluence of activities that (sometimes) permits conscious activity to be at the same time intuitive and direct.

Mr. Softie History

Mr. Softie builds on a foundation that originated when Jason Lewis and Alex Weyers (1999) published "ActiveText: An Architecture for Creating Dynamic and Interactive Texts." Developed at Interval Research in the heyday of bubble-boom euphoria, ActiveText included a center-triggered mouse-menu system with menus available directly from the mouse position. Sets of behaviors could be applied to sentences, words, or glyphs. In 1998, when the It's Alive! software was created, Flash was at version 3, had been introduced in 1996, had no sets of presets, and required extensive coding in order to produce similar effects. Timelines for animation had been incorporated into Flash's precursor, SmartSketch, in 1995. The primary mode of animation was simple key framing; the paradigm was (and continues to be) adopted from traditional cel animation.

It’s Alive! and Text Nozzle challenged a few design paradigms: both promoted context menus to a central role and did not use timelines based on cel animation. In most contemporary software, context menus are used for basic tasks. It’s Alive! placed tasks at the position of the observer; all tasks (p.171) were accessible at the cursor location. Similar functionality is offered by many 3-D softwares now.

Design changes can induce changes in user experience, thereby creating changes in creative practice. At a rough level of granularity, It’s Alive! emphasized the immediate and spontaneous. Text was accessed through a hierarchy of block-word-glyph by simple, repetitive clicking (this feature allows quick cluster chunking without drag-and-draw style selecting); text was sprayed; text could be assigned parametric behaviors with two clicks. Some of these features have been carried over into Mr. Softie.

Interacting with Mr. Softie requires practice. It rewards investment in the tool in ways that are analogous to traditional musical instruments and choreography, where gestural prowess and sensitivity combine to yield polished results. The type can be assigned effects that correspond to emulations of different substances (clay, cloth, and pulse). The user touches the type to produce changes in the form. These changes become aesthetic events that are occasionally charged with emotive and intellectual importance, because they are precipitated by sensitive gradients in touch and emulate the subtle play involved in ancient, embodied activities (sculpture, hunting, etc.).

Creative Practice in Mr. Softie

Opening Mr. Softie can be as delightful as lifting the lid of a piano. There is no necessity to really have a plan in mind. (By contrast, I can’t imagine beginning a coding project without first having some vague idea of what I wanted to do.) This primary open pleasure is one of the key features of instrument-like interfaces: the potential available to a naive, intuitive practitioner is considerable. The ancient rituals of doodling or doing practice scales, or just fiddling about with a material, are palpably present.17

Some poets write from inside themselves, and others write as conduits of a vast outside. In each case, what is needed is a way of transcribing the poem that does not get in the way, and allows the poem to be remembered in its immediateness, directly. Pen, paper, and notebook have traditionally served poets well. For visual poets the problem is more complicated. Visual poetry often leverages effects that emerge concurrently with writing technologies: concrete poets (like Ian Hamilton Finlay, bpNichol, Steve McCaffery, Judith Copithorne, dom sylvester houedard, bill bissett, etc.) developed styles that were only possible on typewriters; Drucker explored effects specific to custom typesetting; for a while in the early 1990s, I made (p.172) a lot of work with old Letraset packages (as currently does Derek Beaulieu, who seems to have augmented the process with Photoshop). In short, technologies invoke change. As visual poetry migrates onto digital platforms, the adaptive opportunistic trend continues: visual poems often exploit signature potentials specific to their authoring software; as such, it is the software itself that defines how visual poetry is created and appears.

The extent of the perceived aliveness of the text is a by-product of how much the authoring environment encourages manipulations independently of quantified time. Timelines, to my mind, replicate the scientific model of re-creating life: they enable compartmentalized and measurable parameters to be manipulated rigorously. The nontimeline, free-form sculpting environment is more related to musical improvisation; it relies on gestural fluidity, instinct, and immediacy. When the two modalities (linear granular and fluid improv) converge (as is increasingly occurring in contemporary software packages), then typography accesses synergetic strength.

StandUnder: A Specific Case Study of Mr. Softie Use

StandUnder is an animated-typographic poem I created in 2009 with the Mr. Softie software. Without the real-time manipulation capabilities of Mr. Softie (enabling an agile, tactile, and exploratory creative process), StandUnder might never have been created. In the same way that the typewriter and custom typesetting provide signature motifs, Mr. Softie offers a unique set of potentials that influence the digital poetry created with it. In the following, I interweave the story of how StandUnder was created with reflections on the symbiosis of software design and creative process.

In mid-2009, inside the Mr. Softie authoring environment, I began idly stacking words, without thinking very much, until I had created a tower out of one word repeated over and over: understand. Then since each word was standing under another, I (mischievously, out of boredom) changed all the words to StandUnder, introduced a few line breaks, and so it read:

[Stacked StandUnder text tower not reproduced.]
(p.173) Note that there were more words repeated than what I have reproduced here. I still had no idea really what I was doing or aiming toward. At this point, StandUnder was already a reasonably intriguing concrete or lettrist-style poem. Although viewed through the jaded eyes of multimedia-saturated consciousness, its appeal was conceptual rather than sensual.

In static form, the interplay of semantic and visual structure generated knots of fertile ambiguity: is standing-under the opposite/extension of under-standing something? Are there physical relationships implicit in comprehension? Is humility coincident with receptivity? Is knowledge hierarchical, and power inflected at the social, political, and personal levels? Are facts cascading down from iconic sources like viral memes released from a tower of conformity?

With these epistemological and literary questions in the back of my mind, I began to apply effects to the tower of words. Since the cascading, steep, dense stack of words resembled a cliff, and the questions it evoked made me think of knowledge as a cascade of pressure dynamics, I was led to apply what had become (for me) a standard set of drift effects, with different strengths and radii of brushes mapped to the three (left, middle, and right) mouse buttons. These effects are not immediately active; they become latent material properties of the text. They are physical potentialities that define how it will respond to touch. Once active, the text will distort as if flexible and sinuous. But at this point, nothing in the visual form of the text tower changes; the structure is merely now capable of changing dynamically.

This process took a few minutes. It is now ten to fifteen minutes after I opened the software and began poking around. I have built a static visual poem and applied sets of effects to the mouse, which will operate as a variable-pressure brush. I change the background color of the canvas to green so that I can composite the animation later. I am ready to press the play button. What is static will now move.

Parameters and Palpability

In the Mr. Softie environment, using the drift effect, mouse pressure parametrically deflects the form of letters as if the cursor were a finger pressing into wet mud. The various parameters available for user manipulation (when using drift) are: effect radius, mouse strength, mouse falloff, origin strength, and friction. The user also chooses whether the effect is always on or which mouse button will trigger it. Effect (p.174) radius defines how large the drift brush is. Mouse strength simulates pressure. Mouse falloff sets a gradient into the brush radius. Origin strength defines how intensely the text tries to return to normal (higher values glue the text to its original shape). Friction defines how much resistance there is to the pressure of the mouse. These parameters can be changed for each instance of the effect.

In the case of StandUnder, I assigned three different drift effects to the complete text block; each drift is independent and activated from a different mouse button. Each is of a different strength, radius, and falloff. I have also assigned an originate effect that, independently of the drift actions, ensures that the text will elastically try to return to its normal (origin) shape no matter how it is deformed. At this point the static text is like a primed organism, but neither the mouse effects nor the originate effect is active until after play is pressed.
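The interplay of parameters described above (effect radius, mouse strength, mouse falloff, origin strength, friction) can be sketched in code. This is a hypothetical reconstruction for illustration, not Mr. Softie’s actual source: every name and formula here is an assumption. Each glyph vertex remembers its origin, is pushed by mouse pressure within a falloff radius, and is pulled elastically home, with friction damping the accumulated motion.

```javascript
// Hypothetical sketch of a drift-style effect combined with originate.
// A glyph vertex stores its original position, current position, and velocity.
function makeVertex(x, y) {
  return { origin: { x, y }, pos: { x, y }, vel: { x: 0, y: 0 } };
}

function updateVertex(v, mouse, params) {
  const { radius, mouseStrength, falloff, originStrength, friction } = params;
  const dx = v.pos.x - mouse.x;
  const dy = v.pos.y - mouse.y;
  const dist = Math.hypot(dx, dy);

  // Mouse pressure: strongest at the cursor, fading to zero at the brush radius.
  if (mouse.down && dist < radius) {
    const fade = Math.pow(1 - dist / radius, falloff);
    v.vel.x += (dx / (dist || 1)) * mouseStrength * fade;
    v.vel.y += (dy / (dist || 1)) * mouseStrength * fade;
  }

  // "Originate": an elastic pull back toward the vertex's original position.
  v.vel.x += (v.origin.x - v.pos.x) * originStrength;
  v.vel.y += (v.origin.y - v.pos.y) * originStrength;

  // Friction resists the accumulated motion.
  v.vel.x *= 1 - friction;
  v.vel.y *= 1 - friction;

  v.pos.x += v.vel.x;
  v.pos.y += v.vel.y;
  return v;
}
```

With origin strength set to zero the deformation would persist (the “coat hanger” behavior described below); with it nonzero, the text relaxes back toward its base shape once the mouse is released.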

So here is the tension before beginning: I don’t really know how the animation will behave. I have, like anyone who uses an instrument and has some degree of experience with it (embodied skill), tuned the Mr. Softie instrument (by applying the set of effects with parameters that I have used before). I feel confident that I can expect some sort of deflections to occur, but I am in a mild state of anticipation, since exactly what takes place next is unknown. Algorithmic events of sufficient complexity engender ambiguity. The smallest changes in pressure or gesture or parameters can intersect in chaotic, nonlinear ways. As with a dance or musical performance, it is rarely exactly the same twice. Playing in this sense is genuinely playing; it is an open activity.

I press the play button. The effects are activated, but nothing happens until I bring the mouse over the text and then press one of the mouse buttons. Immediately, the tower of text shears sinuously away from my touch as if driven by a wind. I release the mouse. The text relaxes, retracting along fluid lines back into its original position. Wobbling slightly, the tower of text resembles a shimmering ribbon of substance, Jell-O ink. At a computational level, it behaves as a responsive fluid-cloth simulation. Consider it from a choreographic perspective. To get a particular shape, a choreographer might approach a dancer, lift the arm, turn the elbow, and place the shoulder. Like a puppeteer manipulating a marionette, the constituent pieces are put into place; while the choreographer works, the dancer freezes and holds the form. If in Mr. Softie I had not set the originate effect and (p.175) had set the origin strength of the drifts to zero, then the text would have responded like a pliable material that could be bent and remain in shape: coat hanger style. With the originate set, responsiveness occurs until the mouse is released, and then the system flows back toward its source. Like the motion of a dancer who has been instructed to try to return to an original pose, the StandUnder tower text in Mr. Softie (with the originate effect on) is relentlessly flowing back toward its original base shape.

Obviously, working with text in Mr. Softie is also sculptural. A traditional sculptor spins or walks around a piece, changing viewing angles, oscillating between a position of proximity and a position of distance: nicking, cutting, nudging, and melding. Similarly in most contemporary softwares (including Mr. Softie, Mudbox, and After Effects), variable views are available: close-ups (zooms) and distance shots. The organic physicality of proximity and intimacy allows for fine-grained and general control. The writer models textual form. As in sculpting, in Mr. Softie, pliable form yields to touch in ways evocative of malleable matter.

The moment I press play in Mr. Softie is when these metaphors (choreographer, sculptor, and musician) extend into motion and the time-based work begins. The dancer is on the move, the choreographer yells instructions, and the speed, posture, form, and structure of the dancer change responsively, adapting to the instructions. The potter’s wheel spins, and clay drenched in water dives under a gouging thumb. A musician bends a string, and sound bends with it. In these real-world scenarios, it’s the pressure applied sonically or physically that alters the performative matter of the dancer or musical instrument or clay. In Mr. Softie, it’s the assignment of diverse effects to different keystroke or mouse combinations (left, center, right, up, and/or down) that allow gesture to modulate the form of pixels.

When the effects are set and balanced, and the animation begins playing, the cursor roams over the surface of the type like a sheepdog racing from side to side behind a small herd, catching the pixels, directing the flow of the polygons. When it is working well, when the user-author is playing the text well, manipulating it with dexterity, not pushing it beyond control (unless intentionally), the process is intuitive and simple, the motion responsive, and control immediate.

Rehearsing or practicing is how I think of the repetitive process of trying out gestural play in Mr. Softie: play, stop, reset, and repeat. Working on the StandUnder piece, I rehearsed several times how much pressure the (p.176) text could tolerate before its fluidity shattered. This iterative process provokes muscle memory of the sequence of effects and often generates visual possibilities that cannot be anticipated—emergent moments (as happens frequently in theatrical rehearsals where repetition functions as improvisation). This time, it was possible to segment off and stretch out a neck of text, and then to bend and fold the remaining text over the crushed lower level. In my mind, this created a sense of a downward weight, inexorable pressure, a visual analogy of performance anxiety provoked by a knowledge hierarchy.

Synthesis of Interaction and Instinct

The preceding comparisons to traditional media (choreography, sculpting, and music) reflect my belief that an engagement with creative process in digital media emerges when gestural interaction converges with evolutionary instincts. First-person shooter games are the preeminent example of how ancient hunting reflexes reinvest themselves in technology: find, aim, and fire. Musical instruments constitute yet another model: pluck, caress, and strum. Mr. Softie activates the same instincts as molding clay or playing with water. In instrumentalized, nontimeline authoring environments—of which Mr. Softie is one—nothing can be exactly repeated or replayed as in a conventional timeline environment. The ephemeral nature of the practice, combined with the fluidity of the typographic styles, means that no two passes are the same. As Heracleitus reputedly said, “You cannot step into the same river twice.” This alters the relation between poet and typography. Control and flow enter into dialogue. Typography becomes categorically like sound or sculpture: responsive, pressure sensitive, sticky, slippery, loud, and delicate.

Mr. Softie induces the writer into the role of a sculptor-choreographer. It does this in a way that enables the flow of creativity, permitting direct reactivity to occur between hand, gesture, and distortions in the materiality of language. It is an open situation (much like play), where the enjoyment arises from unexpected serendipity, unanticipated reactions, and reactive motion. Tactile deflection is primary to understanding Mr. Softie. Direct pressure-based, real-time malleability gives the sense of working with flexible material; the material in this case is language. The physical senses of our normal exterior world are preserved, or at least emulated: pressure changes surfaces. In Mr. Softie, touch deflects and pulls text into ribbons. It is as if clay or plastic or licorice is placed under the hand. In spite of its (p.177) mediated status, the type’s direct reactivity makes it feel like a lived situation, and the materiality of the text becomes tangible.

By offering spontaneous, intuitive, direct visual feedback, software design can contribute to enchantment: a state in which the innate animistic roots of poetic process flourish. StandUnder finished as the submerged knot of the tower stood up, unraveling its resistance to the pressure I’d placed on it; all I had to do was stand back and let the software do the work. This elastic, embodied materiality of resilience, programmed into the typography itself, meant that the final version (output in movie form) is the record of a live performance: a play between gestures, physics, poet, language, and programming.

Flash (RIP)

The first browser came out in 1994, and soon after, websites began to be called home pages (named after the HTML root index page, known as the home). Every creative wanted to own and control their own site. Few wanted to be homeless. Live in a hotel? Sleep in a mall? People built homes. To decorate those new homes, they needed animation tools. In 1995, Netscape released a plug-in application program interface that allowed optimized graphics in a browser. In May 1996, FutureSplash Animator (an animation tool with a web plug-in) shipped. In December 1996, it was purchased by Macromedia and renamed Flash. By 2001, its development team had grown from 3 to 50 people, 500,000 multimedia creatives used the software, and 325 million web surfers viewed its output (Gay 2001).

Between 1999 and 2011, on year01.com and then on my website glia.ca, I posted hundreds of experiments in Flash. Up until 2010, almost every device supported it; it claimed 99 percent market penetration. Borrow the code.18 Insert the graphics. Publish. Simple. This ease of use and widespread distribution led to a massive proliferation of TAVITs. Many online art galleries/publication-venues (Vispo, Born Magazine, Poems that Go, and so on) highlighted emergent language practices enabled by the affordances of software (Director and Flash) that was practical for artists and yet capable of sophisticated effects and easily launched online.

Poetry Portals: Born Magazine, Poems that Go, Vispo

As previously outlined, Born Magazine (1996–2011) featured collaborations between poets and professional designers. Over the duration of the project, it connected (p.178) 903 creatives and published 417 “literary/art” works. Many of these works expand the paradigm of what literary/art interactivity can entail. At many junctures, the multimedia interpretations expand the vision of the poet, and the canonical sacrosanct purity of poetry is suffused with the dense hallucinatory power of audiovisuals. Succinct autopsies can no longer disentangle poetry from its art manifestation. If this is an infection, it is an opportunistic, synergetic effulgence.

In 2005, Born Magazine curated Help Wanted: Collaborations in Art, an exhibit at the Center on Contemporary Art in Seattle. The exhibit did not solely consist of screen-based recapitulations of text-art projects but instead expanded collaborative practice into physical installations. To cite one example, Think Tank was a random political speech generator (using Markov chains over a database of George Bush speeches) and a physical sculpture in which chickens on solenoids pecked the generated words into a typewriter.
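The generative technique behind Think Tank, a word-level Markov chain, can be sketched in a few lines. This is an illustrative sketch, not the installation’s actual code; the function names and the toy corpus are assumptions. A chain maps each word to the list of words that followed it in the corpus; generation walks that map, picking a random successor at each step.

```javascript
// Build a word-level Markov chain: each word maps to the words
// observed to follow it in the source text.
function buildChain(text) {
  const words = text.split(/\s+/).filter(Boolean);
  const chain = {};
  for (let i = 0; i < words.length - 1; i++) {
    (chain[words[i]] = chain[words[i]] || []).push(words[i + 1]);
  }
  return chain;
}

// Walk the chain from a start word, choosing random successors,
// until the requested length is reached or a dead end is hit.
function generate(chain, start, length, rand = Math.random) {
  const out = [start];
  let word = start;
  for (let i = 1; i < length && chain[word]; i++) {
    const next = chain[word][Math.floor(rand() * chain[word].length)];
    out.push(next);
    word = next;
  }
  return out.join(" ");
}
```

Fed a corpus of political speeches, such a chain produces locally plausible but globally drifting sentences, which is precisely the uncanny quality the sculpture exploited.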

Poems That Go (2000–2004) published work that “freely let the arts mingle in a space we still dare to draw a circle around and label ‘poetry’ … [and] … explores how language is shaped in new media spaces, how interactivity can change the meaning of a sign, how an image can conflict with a sound, and how code exerts machine-order on a text” (Sapnar and Ankerson 2000). Poems That Go featured works exclusively made in Flash, including works by formidable practitioners such as Deena Larsen, Nicolas Clauss, Natalie Bookchin, and the curators Megan Sapnar and Ingrid Ankerson.

Another contributor to Poems That Go was Jim Andrews, whose personal portal vispo.com (1996–) became a mecca for digital poets. Andrews developed work in Macromedia Director. Director offered more 3-D support than Flash since it was originally designed for authoring CD-ROMs. In addition, Andrews collaborated widely and was an active (seemingly indefatigable) participant in innumerable Listserv dialogues that constituted the womb of critical inquiry from which works such as this book derive their terminology and methods. Leonardo Flores’s (2010b) PhD dissertation “Typing the Dancing Signifier: Jim Andrews’ (Vis)Poetics” demarcates three approaches to Andrews’s oeuvre: visual, sound, and code. This succinct taxonomy reflects the reality of multimedia as two primary sensory modalities conjoined by logic (code) (ibid.). It is a tradition continued by contemporary portals such as Drunken Boat, Spring Press, and Claudius App, where e-lit media work is published in parallel with work in more traditional formats.

(p.179) Flash: Flourish then Fail

Cynical commentators might attribute Flash’s success to marketing. Evidence from artists suggests otherwise. Jason Nelson (2009) offers an example of an artist whose vision was empowered by Flash:

i made this. you play this. we are enemies. explores internet portals, supposedly collaborative web 2.0 sites, through a modified and disrupted platform game engine. Using a combination of hand drawn notations, poetic lines, videos and animations, the art/poetry game lets users play in the worlds hovering over what we browse, to exists outside/over their controlling constraints. And while the non-linear poems and messy artwork suggests madness to some, the intention is to reflect the actual condition of these 2-dimensional virtual worlds spinning from our screens with the occasional leak of insanity.

Nelson dug deep into the Dada heart of eight-bit doodle graphics and became a Net art celebrity, attracting millions to play his demented online art games. But what was it that permitted itinerant bards like Nelson to become multimedia interactivity designer-celebrities? Flash hit that sweet spot of write once, publish everywhere. More important, it incorporated timeline and scripting processes in the same interface; neither dominated, and it was possible to use either modality: 100 percent tweened cel animation, 100 percent pure code, or a weave of both. It allowed coding for abstract control reasoning, and timelines for tactile spatial experience. Objects that will appear on-screen are visible in the interface; they can be picked up and moved with the mouse or code, or both. The muse was well pleased to move among the dim-witted machines with its mouth full of grapes.

Pop-up typographic design experiments like Gicheal Lee’s (2003) typorganism claimed: “‘Type is an Organism,’ that Lives on the Net, Responds to user input, Evolves through Time, as Intelligence, powered by Computational Organism.” Provoked by the ease with which new behaviors could be added, a wave of incautious optimism anticipated quasi-organisms that the Net ecosystem later snuffed out. Meme-garden by Mary Flanagan and others (2006) postulates a space where searches offer “seed” terms, word particles animated and trembling with Brownian momentum; beneath the seeds is “soil” where seeds can be dragged; from the set of planted seeds, “trees” arise to offer new associational paths; yet on a 2014 visit, the database threw an error, and germination failed. Tempered by time, it now seems as if the digital-poetic fossil record will contain many evolutionary (p.180) dead ends: Net spaces where the encoding of the soil shifts and species of poems disappear.

ActionScript Explained

In Flash, drag ’n drop functionality allowed a tactile level of control (moving drawings onstage, building mazes by dragging, animating by keyframes, etc.). Then, by naming the instantiations of those objects, code allowed an abstract level of control. The abstract and tactile modes complemented each other. The code, called ActionScript, was an interpreted (not compiled) scripting language (easier to use, simpler to learn).19 Up until ActionScript 3, it was not even strictly typed (it let you mix different data types together), and it did not require object-oriented programming (OOP uses classes that impose naming conventions and encapsulate data, protecting it from outside access; encapsulation requires more coding forethought and conceptual understanding). ActionScript 2 encouraged prototypes, rough experiments, one-off processes, and tiny little textural interventions.
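Because ActionScript 2 shares ECMAScript roots with JavaScript, the loose, prototype-based style it encouraged can be illustrated in JavaScript. The TextClip object below is hypothetical, standing in for the named MovieClip instances a Flash author would patch: behavior is attached to a shared prototype after the fact, and data types mix freely without declarations.

```javascript
// A hypothetical stand-in for a named Flash stage object.
// No class declaration, no type annotations: AS2-style looseness.
function TextClip(content, x) {
  this.content = content;
  this.x = x; // could be a number or a string; AS2 would not complain
}

// Extend every TextClip after the fact, the way AS2 authors patched
// MovieClip.prototype with one-off behaviors.
TextClip.prototype.drift = function (dx) {
  this.x = Number(this.x) + dx; // coerce whatever was stored, then move
  return this.x;
};
```

The appeal for amateurs was exactly this: a two-line prototype patch could animate every instance on stage, with no class hierarchy or encapsulation to plan in advance.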

With success came commercial proliferation, code sharing desiccated slightly, and Adobe contributed to its own demise by answering the demands of its commercial clients: the coding engine and language were rebuilt to specifications more appropriate for computer science graduates. The more formal language of ActionScript 3 required a density of preparation and infrastructure that made tiny, one-off creative amateur projects less tenable. And then the corporate war began between Apple and Adobe.

HTML5 Fail

Some five years ago, Steve Jobs (2010) killed the comprehensive market penetration of Flash by announcing that Apple would no longer support it; he cited technical reasons. Wired magazine interpreted Jobs’s announcement differently: “Flash would open a new door for application developers to get their software onto the iPhone: Just code them in Flash and put them on a web page. In so doing, Flash would divert business from the App Store, as well as enable publishers to distribute music, videos and movies that could compete with the iTunes Store” (Chen 2008).

In this case, self-hosting (ownership! data independent of a corporation!) and the flexible interactive control of audiovideo (malleable mashups! an animation timeline and a scripting language in a single authoring environment!) became corporate roadkill.

(p.181) Most major web devs began moving to HTML5. In spite of the hype, HTML5’s utopian ease of use has yet to arrive: coding a multimedia website and maintaining it across multiple platforms for diverse devices is now immensely time-consuming—almost impossible.

Consider this announcement from UbuWeb (2014):

For the past two years, we’ve been trying to convert all our films so that they can be viewed on mobile media. Guess what? We failed. The films stuttered, stopped and started. In the interim, we figured that it’s better to have crappy web-only Flash files than have faulty films that don’t play well on any media. Many hands have tried to make this a success, and for that we are grateful.

Unity (Not Diversity): The Rise of the Platforms

So what comes next? In terms of unified platforms that feature full-bodied timelines with complex yet accessible scripting languages, few alternatives exist. The increasingly perilous web environment, bifurcated by platform wars and segmented by device proliferation, precludes any immediate easy answers. One candidate is Unity3D, but while Flash was originally conceived for fun little animations and eventually co-opted by advertising, Unity3D is built from the ground up to operate as a first-person shooter maze; it is optimized for that particular style. If Flash was secular, Unity3D is the military-industrial complex. While Unity3D does feature full 3-D potential, few browsers bundle it natively anymore; the era of write once, publish everywhere is over.20

Thus the rise of platforms: hosting services like Flickr, Vine, Vimeo, YouTube, Twitter, and others that resolve the exigencies of launching media. And with the platforms comes a dwindling of the esoteric, strange, perverse provocations once possible: context controlled by corporations now leashes even the creatives.

Obsolescence: VRML (1994–)

Functionally, it is both a text to be read and a space to be surveyed.

—Matthew G. Kirschenbaum, 2007

Flash is not the only internet plugin to have gone from ascendancy to obsolescence. VRML as a term was coined in 1994, and the format then arose on the web when VR was ported over to the Mosaic browser. It was popular; by 1999, “the population of Cybertown (hosted by Blaxxun, and based on VRML) (p.182) surpassed 100,000 residents.”21 In the second half of the 1990s, VRML was a powerful presence in the e-lit community.22 Poets such as Ladislao Pablo Györi issued paeans to its glory: “Virtual poetry results from a basic need to impel a new kind of creation related to facts whose emergence—for their morphological and/or structural characteristics—would be improbable in the natural context” (Györi 1995 in Kac ed. 2007, 94). Györi also made general proclamations: “All creative processes will move into the virtual space offered by the machine” (ibid., 94). Funkhouser refers to Györi’s sculptural virtual poetry as being of the utmost significance. In terms of history, this is true, but Györi’s website is gone and his work has all but disappeared. The Internet is a swift tributary that eradicates its past as efficiently as fire in Alexandria’s libraries. VRML was a powerful medium capable of investigations into textuality that are difficult to reproduce with current internet tools. Yet VRML has all but disappeared as an authoring technique and online distribution vehicle.

VRML also contributed to the emergence of new literary terminology. What Kirschenbaum (1997, in a paper to accompany his VRML work Lucid Mapping) calls fractal meaning is the same thing that Cayley (quoted in Raley 2006a) refers to as literal materiality: the ability to use the scale of letterforms to alter the reading. In Kirschenbaum’s example, inside a VRML environment, he places a complete paragraph in the bell of an a. To read, the reader dives in, microscopically entering a region of scale where legibility becomes feasible. As Kirschenbaum points out, this could continue ad infinitum: intimacy enters a scalar recursion (Kirschenbaum 1997).

Obsolescence: Second Life (2003–)

When VRML became a ghost town and its URLs died, the people moved into Second Life. Now Second Life is less known; it’s like the countryside, and the young folks all dwell in cities. Their parents may think they inhabit Facebook and WhatsApp, but younger people are in Snapchat and Kik, and the next generation will move on, mobilized by the tides of commercial incentive and a relentless desire to not repeat the previous generations’ imperatives. Nevertheless, many evocative poetic experiments have flourished in Second Life, among them an initiative by Sarah Waterson, Cristyn Davies, and Elena Knox (2008) to highlight Australian poets, called trope—poems visualized on walls in space, custom-deformed avatars, a sound track, and spoken word overdubs.

(p.183) Sondheim is one of the few poets to persistently work in Second Life. His approach is hallucinatory and excessive. It stretches the boundaries of what many might consider poetry. Trusting in the aesthetics of accumulation, Sondheim builds massive folly machines, churning wheels and polygon shard waterfalls. Independent parts rotate and careen; it is a bit like watching many superimposed looped explosions. In fact, there are no words, so to speak; these are added afterward in performative contexts where Sondheim recites and shrieks while dance collaborators gyrate in front of screens. The effect is similar to Survival Research Laboratories’ aesthetic: twisted heaps of dementia colliding until catastrophe occurs, and then occurs again. Oh, it’s fun. Sondheim’s point (if he has one, which he does; he has many—perhaps too many) is that we live in an era of entropy and excess. The careful, antiseptic, Bauhaus-like IKEA furniture of our homes conceals a careening that is taking place technologically. His staged interventions interrupt sane prognostications and cast viewers into a volatile perdition. Space distorts in ways that would have made the surrealists jealous. It is cubism exponential. How is it poetry? Think of it as a collage of mannerist conceits, a place on the highway of culture where the conventional trucks of meaning have overturned and blind commuters collide within an extruded semantic mass.

The collapse of Second Life points to what might be a crucial flaw in all claims that textuality might integrate itself into the ontological fabric: humans cognitively prefer distinct categorical regions. The screen instigates a reaction against itself.

N. Katherine Hayles notes, “The next move is from imaging three dimensions interactively on the screen to immersion in actual three-dimensional spaces … [given that] computers have moved off the desktop and into the environment” (Hayles 2008, 11). In support of this reaction against the virtual flat screen, she cites the audiowalk projects of Janet Cardiff, the collective augmented gaming of Blast Theory, and the cave automatic virtual environment (CAVE), specifically Cayley’s Torus in 2005 and Wardrip-Fruin’s (and others’) Screen in 2003.23 The tension between physically being in a space and being emulated as an avatar in virtual space is what Oculus Rift and the next generation of Google Glass augmented-VR wearables might bridge, provoking another explosion of language arts that involve readers moving physically through what they read, wherever they are. Caitlin Fisher’s AR installations anticipate this symbiosis of room-based infrastructure with handheld VR.

(p.184) Untold Cube Odes: Contemporary CAVE Works

Every kid with a Wii remote in their basement understands the impulse to move in physical space that corresponds to a virtual representation. Yet relatively few readers have experienced the CAVE works emerging out of Cayley’s electronic writing program at Brown University. Of note, Kathleen Ottinger’s (2013) Untold is a CAVE poem about desire; in this poem, the reader is pulled along twisting corridors, through words that penetrate flesh and rush through the reader whispering.24 Untold fixates on letterforms as a lover fixates on the beloved, with the body of the reader (compelled inexorably) flying at different speeds, pierced by language (emulating the sacred ecstasy of Saint Teresa wounded by subliminal eros). One effect the CAVE produces is of being touched at a visual level by letters that do not create any tactile sensation (see them move into skin, yet feel nothing). It involves proprioceptive language and subcutaneous diffractions.

CAVE poets are not constrained by the size of the page; like oral poets, they race over landscapes (of letters), and as they race, landscapes transform. Structurally, many of the poems featured at the Brown CAVE are multisection poems, filmic in ambition, cutting into divergent paradigms with changes in sound or color. Ottinger’s work even contains a traditional finale revelation as the camera draws back, giving a new global perspective, analogous to the last lines of a sonnet inverting the expectations of a reader. One of the works that does not contain this multifaceted filmic structure is Ian Hatcher’s (2010) Cubes: a recursive library of cubes floating in space, each created with lines of words from Jorge Luis Borges’s “The Library of Babel.” Through the cube-cages, the reader interactively can move up, down, left, right, forward, or backward. Phrases come and go, elegiac and precise, fading in and out. The world slides around the reader like an automated repository. Cubes defines an austerity of minimalism proximal to the dimensional lattices between molecules.

One of the primary ontological intuitions since antiquity is of a continuum of energy (or light or love) that underlies the world of phenomena. This spectrum of an eternal presence is surreptitiously evoked by Carman McNary (2008) in Ode, a work that begins unobtrusively as a fan letter of homage to John Coltrane and then segues into a huge field of letterforms that slowly grow in size so that their intersections form an architecture—a drifting, synchronized, ceaseless excess that precipitates existential vertigo.

(p.185) For now, stereoscopic 3-D letterforms released from the page require expensive immersive infrastructures, yet there are indications that headset VRs will proliferate toward mass production. As words operate within the reader’s field of vision, proximal to the body, grazing skin, they require a redefined notion of transcription. Immersion in this sense will arrive at and move beyond resolution at the limits of visual acuity (as in Brown University’s YURT opened in 2015), will become wearable (as Steve Mann anticipated), and eventually be born in the brain—broadcast onto neural textures.25


I run with code that’s a matter of tone.

Fred Moten, 2014

Code can either be seen as the creative toolbox used to configure appearance and interactivity, or as the guts, wiring, circuitry, engineering, infrastructure, and protometabolism of the quasi-alive twitching digital TAVIT poem-anism.

Pure coding (as opposed to hybrid code-timeline authoring environments) presents a creative challenge: it lacks any direct, immediate, tangible feedback. To write code is similar to writing literature: abstract words operate as pointers to imagined objects.26 The animation timeline (in spite of its limitations) has the benefit of displaying “objects” that can be dragged and dropped (as real things in this world). Code does not offer that physical analog. The creative strengths of code are granular control of every detail and loops that allow variations controlled by formulas (leading to dynamic open events and variations without end).

Code, like language, is not static. Over the period 1995–2014, major evolutions in code occurred. On his website Chronotext, the programmer and typographic designer Ariel Malka offers a concise chronology of the programming languages he adopted over this period for language manipulation; his chronology accurately reflects the experience of many programmer-poets. Pre-2002, it was dynamic HTML and Flash; in 2003, it was Processing (featured sketches: scrollable text on a cube, a helix typewriter, and translated text sliding around a cyclotron); 2004 saw the dawn of “a new era of pure java or whatever works”; in 2005, it was OpenGL; a custom software toolbox was made in 2007; in 2008, “experiments [were] fed with (p.186) markup data or controlled by script”; 2009 brought iPhone iOS programming with interactive tilting, touching, and shaking; in 2010, it was apps and Twitter readers; “Java has become irrelevant [in 2011]. … [P]lease welcome our new partners: cross-platform C++ and the Cinder framework”; 2012 offered JavaScript; and in 2014, it was “back to the mobile” (Malka 2014).

Programming language life spans are brief, but paradigms prevail. If HTML was the language of premillennial poets (along with the oceanic scripts of Perl and C, and the Eastgate Systems’ branching narrative software), Flash (ActionScript) and Director (using a language called Lingo)—which both offered a hybrid timeline-code authoring environment—dominated the first decade of the twenty-first century by wrapping multimedia into a custom plug-in that, once installed in the browser, ensured potential for widespread distribution.27

In 2001, the language Processing (initiated by Casey Reas and Ben Fry) became the scripting choice for education, prototypes, and experiments (it is open source and easy to learn, and features a powerful animation engine with OpenGL integration and many libraries). In parallel, JavaScript evolved into a powerful scripting language capable of making browser-based textuality dynamic (with jQuery and nimble programming methods amplifying its penetration into advertising and online textual practices), and at the same time, Python arose as a powerful component in generative analytic practices, linked to scientific statistical libraries yet with the capacity to flexibly instigate natural language processing.

In an ironic twist of remediation, the current scripting languages—Python, Processing, and JavaScript—actually digest and parse HTML, the former dominant dragon (Python does so using a library called Beautiful Soup). Once digested, text/HTML can be analyzed to discover poetic meter and form, using libraries like the Natural Language Toolkit, or through comparison with existing pretagged archives like the Carnegie Mellon University Pronouncing Dictionary, which contains over 125,000 words annotated for stressed or unstressed syllables. These languages can generate n-grams (word-frequency collocation lists) and context-free grammars (a model for the hierarchical and recursive structure of syntax, typically visualized as a tree growing downward through S (sentences) to NP (noun phrases), and so on). Part-of-speech tagging (along with syllables, phonemes, stemming, etc.) allows (p.187) granular reconstructions and analysis. Poetry engineering emerges as a formidable subdiscipline.
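The n-gram step of this pipeline is simple enough to sketch with the Python standard library alone; the poem text and window size below are illustrative, not drawn from any of the projects discussed:

```python
from collections import Counter

def ngrams(words, n=2):
    """Count every collocation of n consecutive words."""
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

text = "the rose is a rose is a rose"
bigrams = ngrams(text.split(), 2)
# The repeated collocations ("rose is", "is a", "a rose") surface immediately.
print(bigrams.most_common(3))
```

In practice a toolkit such as the Natural Language Toolkit supplies this machinery (plus the CMU dictionary lookup for stress patterns); the sketch only shows the shape of the computation.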


Processing experiments have replicated many, if not all, of the text effects exploited by Flash, rendered 3-D interactive in the style of After Effects, and converted letterforms into particles, strains, strands, fields, flocks, and so forth.

Assuming instincts are rules, sets of parametrized behaviors designed to navigate organisms toward optimized survival, code emulates organisms. In Keyfleas by Miles Peyton (2013; emphasis in original), a particle system projected onto a keyboard animates reactive little instinctual insect-circles: “The Keyfleas live on a two-dimensional flatland. They travel as a flock, over key mountains and through aluminum valleys. They avoid touching letterforms, since they suspect that the symbols are of some evil origin. On occasion, a hostile tentacle [a finger typing] invades the flatland and disturbs its inhabitants.”

Interoperability between text and image necessitates translation pipelines; Processing libraries function as those paths. Boris Müller (2010), graphic designer for Poetry on the Road, utilizes such paths: “All graphics are generated by a computer program that turns texts into images. So every image is the direct representation of a specific text.” Interpreting a poem that has entwined modalities, as image and text, complicates the genealogy of imagery and confounds hermeneutics.

Code can allow for multiples without repetition, variation without end. The canonical text dissolves into TAVIT DNA that seeds a process. The Written Images print-on-demand book project generated a unique book with each printing based on seventy custom artist softwares (many written in Processing): “A one of a kind snapshot in time” (Fuchs and Bichsel 2010). Most of these softwares generate images, but a few use text. Stable identity morphs on meeting dynamic data.

Few poets display the technical dexterity of programmer-designers, so extreme typographic play experiments—such as PostSpectacular studio’s Happy 2010! Card—often lack art/poetry content/concept/context. Technically sophisticated, the Happy 2010! Card uses particle strings on splines in a 3-D simulation box with constraints, elasticity, collisions, and so forth. Just as a medieval poet might have implemented obscure techniques like anaphora, euphony, metonymy, or apostrophe, contemporary programmers utilize obscure processes. Processing eases the entry level, diffuses some (p.188) obscurity, and permits the generic paradigms of dot-syntax programming to become palpable for nonprogrammers.


RiTa is a software toolbox developed by programmer-poet Daniel Howe. Biologically, cellular ion gates regulate transcription processes, permitting or refusing molecules entry to and exit from cells; RiTa (“from the old Norse, meaning to mark, scratch, or scribble” [Howe 2006]) similarly encapsulates and controls processes of text analysis and animated display. Like many software libraries, RiTa lives in an ecosystem of intricate interdependencies, fluctuating protocols, and turbulent standards. RiTa enables language processing (grammars, Markov chains, and part-of-speech tagging). RiTaJS works in the browser (RiTa extended HTML5 first through Java and then with a JavaScript library); the library functions independently or in conjunction with Processing, Node.js, Android, and WordNet.

A project built with RiTa libraries that exemplifies the complex potential of how and where a poem can manifest inadvertently in networks is Howe’s (2014) browser-art-poem plug-in AdLiPo. AdLiPo works like an ad blocker, and utilizes the RiTa library to replace the ads on web pages with static or kinetic text generated in real time from a Markov-like model “composed of the ‘description’ texts from a recent digital writing conference, seasoned liberally with quotations from the likes of Marx and the Marquis de Sade” (ibid.). The result is savagely unexpected blobs of poetry that are almost more disturbing than ads.
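The “Markov-like model” behind such generated text can be approximated in a few lines. This Python sketch is a generic word-level Markov chain, not Howe’s actual implementation; the corpus and the seed are illustrative:

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain from a start word, choosing followers at random."""
    random.seed(seed)  # seeded only to make the sketch reproducible
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the spectacle is the spectacle of the commodity".split()
chain = build_chain(corpus)
print(generate(chain, "the", 6))
```

AdLiPo seasons its model with conference descriptions, Marx, and Sade; here, any text dropped into `corpus` will do.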

Projects made with RiTa range from Molleindustria’s (2012) simple Definition of Game, which does one thing: generates definitions of the word game, to the complex information visualization pipeline of NASA reports implemented by the Office for Creative Research that parsed New York Times headlines using RiTa to create a sinuous, flowing, chronological language-usage map (Rubin, Thorp, and Hansen 2014).

A sophisticated poetic, textual example made with RiTa is Braxton Soderman’s Mémoire Involuntaire No. 1, a minimalist and evocative meditation on memory. In Mémoire Involuntaire No. 1, text replacement (implemented in RiTa according to sets of rules with a grammar) slowly and selectively changes words within a single block of text describing a childhood memory. The erased, deformed, replaced text occasionally enhances but often obscures the original meaning of the text-memory; after a period of time, the process reverses and the text attempts to retrieve the initial memory, (p.189) seeking to return to its original state (Soderman 2009). It’s a simple conceit that posits an unstable poetic form that is itself a statement on the instability and irretrievability of memory. The code in this sense operates like amyloid plaque memento mori, unraveling memory, destabilizing it, and then seeking it out again—attempting again and again to retrieve experience eroded by evaluation.
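The rule-driven degrade-and-retrieve conceit can be sketched as follows; the replacement rules here are hypothetical stand-ins, far simpler than Soderman’s actual grammar:

```python
import re

# Hypothetical substitution rules in the spirit of the piece;
# Soderman's grammar is more elaborate and probabilistic.
RULES = {"summer": "winter", "bright": "dim", "remember": "forget"}

def degrade(text, rules):
    """Selectively replace words, obscuring the original memory."""
    for old, new in rules.items():
        text = re.sub(rf"\b{old}\b", new, text)
    return text

def retrieve(text, rules):
    """Reverse the process, seeking the text's original state."""
    inverse = {v: k for k, v in rules.items()}
    return degrade(text, inverse)

memory = "I remember a bright summer morning"
eroded = degrade(memory, RULES)
restored = retrieve(eroded, RULES)
print(eroded)    # "I forget a dim winter morning"
print(restored)  # returns to the original memory
```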


Python is a high-level programming language, and easier to use than compiled languages like C or C++ (more readable, less strict, offering more spontaneous workflow, and requiring fewer lines of code). It offers extensive, powerful access to language processing with native/external libraries, and is capable of data analytics, supplying a wide range of algorithms and visualization.28 It is uniquely situated to become a tool of choice for generative, combinatorial programmer-poets.

Python can also cut, splice, and display video. Sam Lavigne’s Videogrep implements conceptual film-mashup experiments; edits are chosen by isolating grammatical structures in film subtitles, and the code then splices the video into new configurations. One result is: “every instance of a character saying the word ‘time’ in the movie In Time (a film whose dialog appears to consist mostly of clock-related puns)” (Lavigne 2014). Videogrep is creative coding at the service of an activity that bears marked similarities to conceptual and uncreative writing: appropriation that archaeologically plunders originals in order to highlight idiomatic speech forms.
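The subtitle-search step driving such edits can be sketched in Python. This is not Videogrep’s code (which also performs the video splicing); it is only the grep half, run on a two-cue subtitle fragment invented for the example:

```python
import re

def grep_subtitles(srt_text, word):
    """Return (start, end, text) for every subtitle cue containing word."""
    hits = []
    # SRT cues are blank-line-separated blocks:
    # index, "HH:MM:SS,mmm --> HH:MM:SS,mmm", then one or more text lines.
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        start, end = lines[1].split(" --> ")
        text = " ".join(lines[2:])
        if re.search(rf"\b{word}\b", text, re.IGNORECASE):
            hits.append((start, end, text))
    return hits

SRT = """1
00:00:01,000 --> 00:00:02,500
Time is money.

2
00:00:03,000 --> 00:00:04,000
Don't waste my minutes."""

print(grep_subtitles(SRT, "time"))
```

The timestamps returned would then drive the actual cuts, which Videogrep delegates to video-editing libraries.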

Python seems to encourage appropriation by the very nature of its ease of splicing. J. R. Carpenter’s adaptation of Nick Montfort’s Taroko Gorge and other Python scripts (many the subject of numerous encouraged collaborative appropriations) develops a riff off the word gorge to create a physical book (published in a 154-page softcover, perfect-bound edition by TRAUMAWIEN) on the theme of variations. As Carpenter (2011) states, there was one rule:

No new texts. All the texts in this book were previously published in some way. The texts the generators produce are intertwined with the generators’ source code, and these two types of texts are in turn interrupted by excerpts from the meta narrative that went into their creation. Most of the sentences in the fiction generators started off as Tweets, which were then pulled into Facebook. Some led to comments that led to responses that led to new texts. All these stages of intermediation are represented in the print book iteration.

(p.190) The result is a sinuous network of branching fingers and blossoming mouths. Gorging on already-generated forms as it generates more forms, Generation[s] questions the circular feeding on fetishized recursion operative in microcommunities.

The surplus also takes form in Winnie Soon’s (2014) spam-baiting project Hello Zombies, written in PHP and Python; it dynamically gathers lists of email addresses known to belong to spambots (these are published and updated daily by antivirus services), and then sends a poem (written specifically for the project by poet Susan Scarlata) every five seconds to a spambot and displays the “replies”: bounced messages, denials, rejections, and on rare occasion a human “hunh?” Her project points to the automatism of the network—the vast number of processes bouncing around inside it that are simply working parts doing what they do with impeccable dedication. It seems also as if the bodies of poets and humans are full of such events: blind signals and ciphers, bacterial colonies, enzyme channels, peristaltic ripples, nerve messages, cascades of hunger-heat-thought-desire strobing onward, relentlessly independent of any greater context beyond their own need to be heard/read, or to perform the intricate task to which they have been set by some unknown source code. Python permits impeccable autopsies, forensics (to repurpose a term Kirschenbaum uses adeptly) of rhythms inherent within the froth of multiple tongues.


Andy Clymer’s (2011) typographic experiment Font-Face is a simple gimmick made with a video camera and openFrameworks code: font size, width of bowl, and stem are mapped to face motion.29 Open your mouth, and the font gets fat. Close it, and the font becomes thin. Eyebrows up, and the font changes color. Frown, and the font curls. It reverses the role of reading as it empowers the reader, whose body becomes a visceral muscular designer of the font form. In this sense, this tiny experiment reveals one way the page can read us. There will be more.

Visual form does something, rather than that it is something.

Johanna Drucker, 2009

Consider text on a flat page. If printed on a press, the text is indented almost imperceptibly. The ink has bonded with the paper, the fibers of the paper have soaked up the stain of the letter, and paper and letter are (p.191) materially bonded, melded together. On screens, there is no indentation of ink into paper. Pixels portray depth through a luminous 2-D perspectival grid. Nonetheless, due to the persistence of iconographic traditions of print, most digital text appears as if printed. To a casual eye, the similarities between the trace mark-making of petroglyphs, papyrus, hieroglyphs, and screen-based digital typography are strong. Line-based, left-to-right reading; columns with headlines; formatting (uppercase, sentences, underlines, italics, and justification): these formal elements of writing persevere through technologies. Writing remains what it always was: a reservoir of prescriptive grammatical rules, typographic traditions, and literary effects. There are few attempts to make strange what is overly familiar.

Andreas Müller (2005) in For All Seasons coded a monochromatic, minimalist, interactive fly-through of fields of letterforms: poem as park or field; poem as monoculture—all of it written in C++.30 Imagine blisters arising in the form of letters on the printed page. The dormant immobile ink of each letter bubbles upward just slightly. The indentation of the printing press is inverted. The letters hover like pimples, swollen with ink, foaming over. They shine as if plastic; they gleam as if wet. The page is now implicitly tactile. It references Braille. It is now possible to conceive of someone touching the page and slowly (laboriously) reading it with their fingers.31 Imagine more. Imagine that the letter-blisters grow more pronounced than pimples; swollen with pulsating and slushy ink, each letter now germinates and extrudes like a sprout; each letter is sexual, a thick fountain, a forest of letters, a field of wavering black stalks rising off the page; each is plush with a pulsing, succulent interiority. Our viewpoint shifts. We rush over a thriving field of grown language as if we were a bird or a low-flying plane; we rush over a field of wind-struck, writhing letters raising their heads to the sun, following the reader.

Live Coding

Live coding is currently the practice of writing real-time code that generates audio; beyond its connection to beatboxing (some of its practitioners practice beatboxing as well), there is no real connection to poetry.32 But live coding as a paradigm represents one space that poetry may evolve into: improvisational, augmented, on-the-fly performance-creations, or a real-time speaking of real-time generated verse.

In a live-coding performance, coding and composing occur in real time, no instruments are used, and the audience watches the performer’s screen (p.192) as they type the code. Currently bewildering audiences and exciting the musical community, live coding displays improvisational coding skill as it happens. At some level, performing this music is actually a writing practice; there is no traditional instrument involved, just sets of instructions typed into the code that place the performer and audience in a proximal relation to how the programming influences the musical process. The audience watches the writing that underlies the sounds. It’s designed to disintegrate the delay between code and creation. Utilizing text-based, real-time audio-synthesis languages like Extempore, Impromptu, Overtone, and SuperCollider, these musician-geeks practice within a community that values improvisational dexterity and on-the-fly, rapid charismatic serendipity—all values that would seem congruous with poetic practice. Other live-coding practices involve graphical coding environments such as Pure Data or Max/MSP—graphical here meaning on-screen, object, graphical user interface boxes connected by strings that pipe data in and out of processes.

There are indications that the live-coding community, whose practitioners occasionally interject live vocal samples and beatboxing into their mixes (projecting a kind of Kurt Schwitters’s Merz diatribe, vocalize, skew mix), is becoming interested in haptic interfaces and interdisciplinary excursions that might result in tweaking output pipes of language, poetry, and literature into real-time parallel code-literature writing procedures. The gap between interface and output might narrow. Books might be written/generated, animated, and published in synchronous networks of collaborative unfoldings, much like social network streams encourage instantaneous twist ricochets to reinforce and/or deflect current trending hashtags.


(1.) In animation softwares, timelines allow “key frames” to be set; each key frame marks a time of known behavior. The computer “interpolates” (inter = between, poles = positions) or calculates an “interpolation” (an interpretation) of what happens in between those key frames.
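The interpolation the note describes reduces, in its simplest (linear) form, to a weighted average between the two surrounding key frames; a minimal sketch, with illustrative frame numbers and values:

```python
def interpolate(keyframes, t):
    """Linearly interpolate a value at time t between surrounding key frames.

    keyframes: a sorted list of (time, value) pairs, as on an animation timeline.
    """
    for (ta, va), (tb, vb) in zip(keyframes, keyframes[1:]):
        if ta <= t <= tb:
            span = tb - ta
            fraction = 0.0 if span == 0 else (t - ta) / span
            return va + fraction * (vb - va)
    raise ValueError("t lies outside the keyframe range")

# A letterform's x-position keyed at frames 0, 30, and 60:
keys = [(0, 0.0), (30, 100.0), (60, 50.0)]
print(interpolate(keys, 15))  # 50.0 -- halfway between the first two keys
print(interpolate(keys, 45))  # 75.0 -- halfway back down toward the third
```

Animation softwares typically offer eased (curved) interpolation as well; this sketch shows only the linear case.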

(2.) For the television footage from 1990, see https://www.youtube.com/watch?v=qSeYivHZpB8&feature=youtu.be (accessed September 17, 2015).

(3.) Roughly, the dorsal is the quick “where” and “how” stream of vision processing in the brain; the ventral involves slower analysis about “what.” See Goodale and Milner 2004. Recent neuroscience suggests that the two streams are not as independent as once believed.

(4.) Consider language as a technology; its constituent parts do not imply the wealth that has emerged from its configurations.

(5.) Examples of overabundant applications of typographic effects flourish (for instance, Sebastian Lange’s [2007] Flickermood 2.0). Contrast this with the ingenuity of the music video for Alex Gopher, directed by Antoine Bardou-Jacquet (1999), The Child, which, although kitschy in its depiction of the pure space as letterform, (p.224) manages through congruent audio to convert the process into a micronarrative where reading occurs across levels. The ingenuity of The Child basically replicates and extends the style and premise of Jeffrey Shaw’s (1989) pioneering interactive installation Legible City, where viewers ride a bicycle through cities constructed out of words. Another example of this style of work is Logorama by H5.

(6.) Built at an extreme cost by a team of assistants and copyright lawyers.

(7.) Their work is one of the first overviews of kinetic typography in book form that incorporates both advertising and personal projects from television, video, and the early Web.

(8.) Wikipedia notes that Méliès invented a lot of stage magic: “One of his best-known illusions was the Recalcitrant Decapitated Man, in which a professor’s head is cut off in the middle of a speech and continues talking until it is returned to his body” (https://en.wikipedia.org/wiki/Georges_M%C3%A9li%C3%A8s [accessed September 20, 2015]).

(9.) Whitney’s is a story often told: it is on Wikipedia and can be found in many texts on video art history; he is both meme and archetype. For details, see Moritz 1997. See also Willis 2005, which states that Whitney “founded a company called Motion Graphics incorporated in the 1960s and IBM hired him as its first artist-in-residence” (9).

(10.) One precursor artist-poet who defies those constraints and anticipates some aesthetics of motion-typo-graphics is Marc Adrian (see the section on him in chapter 2).

(11.) Daniel Defoe and William Blake were both vanity press publishers. They stand in the same relation to the canon as contemporary self-publishing web poets (such as Jim Andrews, Brian Kim Stefans, Talan Memmot, J. R. Carpenter, Stephanie Strickland, myself, and many others) stand in relation to the incipient electronic literature canon.

(12.) Jared Tarbell? Mr. Doob? Hi-Res? Karsten Schmidt? Erik Spiekermann? Joshua Davis? Paula Scher?

(13.) AutoCAD is a software-modeling tool designed primarily for engineers and architects.

(14.) Stefans (2003) discusses de Campos’s 1982 article in the context of the computer poem (CP). In both de Campos (concrete) and Stefans (computation), a refutation of the lyric occurs. For Stefans, the CP “does not aim to satisfy any of the Aristotelian poetic criteria—plot, mimesis, catharsis, etc.). … [R]eading a CP invariably sinks into certain modes of data analysis” (ibid., 116–117). De Campos (1982, 181) concludes that a rigorous simplicity is “analytically and aesthetically, the character of a true stylistic principle. As such it is verifiable as a device.”

(p.225) (15.) For detailed instructions on how to create a text model compatible with Mudbox, see my website, http://glia.ca/conu/soundSeeker/wordpress/3D-Pipeline_Sound_Seeker.htm (accessed September 21, 2015). But these instructions are unnecessary as of 2011, since the new versions of Maya and Mudbox contain improved interoperability between Mudbox and Maya. Plus, Mudbox now renders out directly to movies. In 2009, I wrote an email to customer service asking when this would be available. I also asked if it would be possible to totally hide the cursor—a feature that is not yet available. When Mudbox does introduce the hide-the-cursor capacity, it will trigger an explosion of malleable, morphing, experimental videos.

(17.) Typically, I begin a session in Mr. Softie by preparing it to export compositing footage, setting the background color to a key tone (green), and using a commercial screen-capture software to grab output.

(18.) In the first decade of the millennium, there were many celebrities of Net culture posting massively shared items online regularly: Joshua Davis’s Praystation, Yugop, James Paterson and Amit Pitaru, Erik Matzke, and Hillman Curtis, among many others. For an informative history of Flash’s evolution from free-form playground to corporate sprawl, see Leishman 2012.

(19.) In the case of Flash, interpreted code gets “interpreted” line by line by a plug-in in the browser; compiled code (as in C language) gets written down to machine-level language in a single block. Generally, interpreted languages tend to be simpler and easier to write than compiled languages, which contain strict catastrophic and often unhelpful or cryptic error messages.

(20.) As of summer 2015, there are signs that Unity is losing browser support (as is Silverlight) as the Netscape Plugin Application Programming Interface gets deprecated; see http://twiik.net/articles/google-chrome-just-killed-the-unity-web-player (accessed September 23, 2015).

(21.) In my original manuscript this citation was footnoted as “http://vrmlworks.crispen.org/history.html (accessed Feb. 2011)”—unfortunately the website where this quotation originated is even now offline, so not only is VR disappearing but to some degree, the history of VR is disappearing as well.

(22.) The hype surrounding VRML is comparable to the hype surrounding VR circa 2015 (Oculus Rift, Morpheus, Gear VR, etc.).

(23.) CAVE systems and other immersive setups (such as the 360 cylindrical theaters built and inspired by Shaw) often use multiple stereoscopic 3-D projections onto walls and floors to give the sense of a single screen. Stereoscopic 3-D systems use active shutter glasses to alternately feed images to each eye, tricking the brain into experiences of depth. Head tracking allows for responsive experiences. Words become palpable. The YURT Ultimate Reality Theater (YURT) is replacing the CAVE (p.226) at Brown University. The legacy Brown CAVE is 8 × 8 feet with 8 projectors; the YURT that opened in 2015 has 69 projectors and 100 million pixels. Interestingly, it utilizes 145 mirrors, 200 8-ounce fishing sinkers, and a mile and a quarter of video cable, at a development cost of $2 million.

(24.) These works were selected from the list of student works featured at the opening of the Brown YURT in spring 2015; see http://www.yurt.interrupt.xyz/immersive-reading-presentations/ (accessed September 24, 2015).

(26.) If the business of getting code to run was not so bureaucratic, it would be poetry.

(27.) Director offers many classics, including M. D. Coverley’s Egypt: The Book of Going Forth by Day (originally published by Eastgate in 2000, elegized on Director circa 2003?), Strickland and Jaramillo’s (2002) Vniverse, William Poundstone’s (2001) New Digital Emblems, and Ana Marie Uribe’s (2001) Tipoems and Anipoems. The irony is that the widespread distribution has already decayed; works in Director or Flash are now frequently unplayable.

(28.) As outlined in the preceding chapter on my project BDP (Jhave 2014).

(29.) OpenFrameworks is a wrapper for C++ (specifically designed for the artist-programmer community) that abstracts away some of the more obscure technical processes.

(30.) Poets who believe in poetry as a community might protest that Müller is not a poet; that he works in interface design, built a lot of websites, and did some e-fashion gigs. How come he’s in this book? He’s here because he made a succinct, successful foray into one technological region and then (like Arthur Rimbaud wandering off to Africa) left it behind. So it goes.

(31.) Unfortunately, if this imagined page occurs on a contemporary screen, then its depth is implicit; it cannot be touched. Tactility is offered and then denied. This absence of technotactility (even in the multitouch swipe-screen era) is a common critique of digital media; yet paradoxically, to its credit, the screen offers many illusions of tactility and 3-D space in a way that the printed page never did. The tactile nostalgia referenced by printophiles is (like much nostalgia) operating at the level of mythology: books by their weight and density convey a presence that is time. Books, by their texture, place what is read within a canon. As generations change, however, so too will the mythological status of tablets, cell phones, and e-readers; devices will saturate in the memory of being held and read. That which has been treasured and held in the mind gains a tacit tactility; intimate, remembered words evoke identity.

(32.) I am indebted to Jason Levine (performance artist, beatboxer, and live coder) for introducing me to the concept and online community surrounding this practice. In 2014, I performed on the same bill as Levine (in New York City at WordHack curated (p.227) by the interesting young techno-poet Todd Anderson https://www.facebook.com/toddwords). I performed a segment of BDP real-time, text-generation vocalize: stitching together a single poem from a torrent of generated text. While not live coding, it was live poeting. Jason performed a beatboxing duet with audiocode that he wrote as we watched; his performance was live coding.