The web and temporality

As I’m beginning to recover from the unpleasant virus that has waylaid me these past few days, and am starting to feel partly responsible for my diminished blog traffic (as displayed in the “Blog Stats” graph), I think it would be a good time to re-visit the thorny article by Hellsten, Leydesdorff, and Wouters, “Multiple presents: how search engines rewrite the past” (2006). Unfortunately, I cannot say I understood every aspect of their methodology and findings, and for this I admit that my own technical shortcomings are entirely to blame (though the authors’ bone-dry, Dutch-inflected English didn’t exactly help). But I do find that the authors provoke a fascinating discussion of how the internet changes (or disturbs) our sense of time, positing in particular that search engines are broadening “the concept of the present from a fleeting point in time to a [fragmented] spectrum of actualities.” So, “the present” (at least as it’s experienced on the web) is no longer the “photographic” instant in time but a dimension created by the interactions of various cyclical frequencies: page updates, search-engine crawler visits, the evolution of the web (and thus, presumably, the replacement of certain search platforms by others… anyone remember when the name “Google” still sounded slightly funny, and when Excite was considered a normal place to search?).

The internet’s sense of time has always been a very mysterious thing to me, and until now, I have not considered it in any systematic way; websites seem to get old at varying rates, evolve or not evolve, be forgotten, disappear, and (occasionally) re-surface. We are often told, by way of warning, that information uploaded to the web is permanent: that it can never be removed. But at what point does “permanent” information make the transition from “present” to “past”? Surely it wouldn’t be correct to say that information is “forever young” on the web, available for retrieval in its original state and context; web data has a way of morphing, of being de-contextualized and re-contextualized. One dusty web relic that could inspire angst-filled debates about the web’s temporality is the classic, not to say primitive “Hello my Future Girlfriend” (requires sound for full effect): is some fragment of its author preserved eternally as a lovesick 11-year-old from New Mexico, even if his original URL has long since disappeared and the site now only exists as a dozen mirrors, even if he is actually 25 and no longer living in the Southwest, even if his future girlfriend is now his ex? Too bad Roland Barthes isn’t around to have a go at this one.

Because I don’t have any answers. 

Anyhow, 90s-meme digressions aside, the article deals primarily with the frequency with which search-engine algorithms re-visit old content, and this is clearly a complicated matter. But to better understand another primary web process – the generation of news content – Google Trends seems to be a promising tool. There, you can easily visualize both the search volume and the news-reference volume of various keywords, thus getting a sense of exactly when an issue or idea entered the public consciousness, how it was manipulated by the media, and when it disappeared from view (the data only go back to 2004, but eventually that will seem like a long time ago). It appears as though Google does this by storing static pages rather than recording the changing headlines on dynamic websites, though I’m not sure. I actually tested the article’s chosen search term, “Frankenfoods,” to determine whether, indeed, the term is a forgotten buzzword that has disappeared from regular usage: Google Trends confirms that it is.

I think this idea of temporality is directly relevant to our group project, since we’re considering ways of making a web-based documentary in real time, one whose significance will doubtless evolve as it changes from newly generated content to archival material.

Work cited: 

Hellsten, I., L. Leydesdorff, and P. Wouters. “Multiple presents: how search engines rewrite the past.” New Media & Society 8.6 (2006): 901–924.

Pervasive urban sensing / art vs. data

In a previous entry, I discussed my experience uploading content to Flickr, reflecting in particular on the implications of making EXIF metadata publicly available alongside the photographic image. In the article “Urban Sensing: Out of the Woods” by Dana Cuff, Mark Hansen, and Jerry Kang (2008), the concerns I voiced (in a somewhat impressionistic manner) about the building of a web-based data commons are systematically discussed under the headings of “property” and “privacy”.

As regards property, the authors note that “Copyright law only protects creative expressions; it does not protect the underlying data” (p. 30). In the case of Flickr, this raises an interesting problem: is a digital photograph uploaded to the web a “creative expression” or an informational record? Or, more fundamentally, is it “art” or “data”? I had mentioned that, in the context of my printed photobook, the photographs decidedly fell under the “art” rubric, in that they were geographically non-specific images aimed at connoting a personal state of mind — lacking captions (and thus, context), they were not “data” in the traditional sense (or, to use the article’s term, they were scientifically worthless “junk data”). But when uploaded to Flickr, with metadata specifying time and photographic settings, and with geographic data supplied voluntarily by me through mapping, the photos, freely available and submerged in the sea of information, lose some of their “artistic” value as standalone works (for instance, the monetary value the work might have as a high-quality, limited-edition print); simultaneously, they acquire an informational value as “data” that can be aggregated with other citizen-collected information for any number of uses. (And, if my camera possessed a GPS sensing component, the geographic data could be as precise as the temporal data.) The article asserts that what one loses in individual property rights one might gain through “attribution,” which I interpret as the ability to advertise one’s talent as a citizen data-gatherer and to feel rewarded by contributing to a larger project. And I agree that being part of a distributed data-gathering project (such as distributed documentary) can be very rewarding.

However, I think there is a curious paradox in today’s world, in that the need for academic credentials to “get ahead in life” has never been greater (look at the proliferation of MFA and MBA programs for proof), yet articles such as this one evangelize on behalf of the idea that “research” (scientific/artistic/political) is leaving the rigid central control of the Academy and becoming dependent on mass participation by all citizens, whether expert or not. Credentialization as a means to power vs. citizen participation as a means to knowledge: I have no clue what the implications of this are, except that the tension doesn’t seem sustainable. Thoughts?

The loss of privacy seems to be a simpler matter — constant self-surveillance, in which people voluntarily record and share information about their location and activities, is becoming the depressing norm. While the article argues that people will not contribute data to the commons if they perceive that “computer security is weak”, quite frankly I do not know if most people have any sense of how good or how bad “computer security” actually is. I, for one, do not. I did not even know that all my metadata was being gathered along with the coloured pixels in my photographs, and for all I know, maybe there IS a GPS sensor in my Nikon D80. But the web has enough conspiracy theories out there, and I have to get to class in the Rogers Communication Centre by 1 pm (there’s some self-surveillance for you!). Let’s just say that, as the article notes, privacy preferences ARE adaptive: what seems radically invasive today may seem commonplace tomorrow, and the cultural creep towards 1984 will go unnoticed by most.

One question I’d like to throw out there: the article discusses the importance of data visualization (and who doesn’t love a good multicoloured map or cartogram?). But do we run the risk of a “culture of visualization”, where data integrity is secondary? 

Work Cited:

Cuff, Dana, Mark Hansen, and Jerry Kang. “Urban Sensing: Out of the Woods.” Communications of the ACM 51.3 (March 2008): 24–33.

Second Life: First Thoughts

(Edited: Scroll down to see pictures!)

I had meant to address Alex’s comments concerning the urban sensing article, but as last night was my first experience in Second Life, I think I should reflect on that first, since it is still fresh in my mind.

My initial impression was very much in keeping with Mark Tollefson’s view that, from a strictly graphical standpoint, the SL world can seem rather underwhelming, even outdated, at a time when first-person gaming has achieved a much higher standard of visual realism. Some might say this misses the point, but I think that if a sense of embodiment in the virtual space is important (and it is), the space should be equipped with a more nuanced and responsive sense of gravity, light, and texture — even several years ago, when I last played a first-person shooter, I had come to expect the sound of different surfaces crunching underfoot, the veritable sense of strain/fatigue when climbing a steep hillside, etc.

However, despite this first impression, after a few moments in the Second Life world, I found myself increasingly mesmerized. First, by the simple recognition that all the other figures taking their awkward first steps around Orientation Island were, like me, real people being born at that exact moment into their virtual lives: a Second Birth, complete with feelings of awe, trepidation, and a restless desire to grow. OK, maybe this is an exaggeration, but it was pretty neat.

Then came the task of designing my avatar’s appearance; this is decidedly unlike being born into the real world, in that you can choose how you look. But while my initial instincts were either to 1) fashion my avatar into an idealized version of myself, as in a Dürer self-portrait, or 2) create a freakish monster, emphasizing for comic effect everything I hate about my appearance, I chose a third possibility, which was to become a woman. I mean, why not? I also thought people would be nicer to me in-world if I were female, which turned out to be only partially true.

The appearance-designing engine in SL is quite extraordinary: I was able to make my avatar into a near-identical twin of my girlfriend from First Life — I did this partially to avoid feeling like a eugenicist toiling in some fascist dystopia to create an Übermensch (actually, in this case, an Überfrau, but let’s not split hairs).

Then it was off to explore the world. And what an immense, diverse world it is. I journeyed to a detailed reconstruction of the Alhambra in Granada, where it was politely requested that I don a veil; I visited the in-world headquarters of Barack Obama and Hillary Clinton (where I got to ride a shark!), not to mention the creepy, wood-panelled inner sanctum of the Republican Party; but mostly, I found a lot of places that looked like Club Med filled with Jimmy Buffett look-alikes. Oh well.

The one thing that I found discouraging was that everybody is trying to make money, and some (if not all) in-world ventures carry the whiff of scams. People are constantly trying to sell you junk, and you make the same laboured small-talk in SL boutiques that you make in real life when meeting with high-pressure time-share sales agents. (I mean, L$250 for Van Gogh’s Night Café might seem like a bargain, but it’s the size of a postage stamp and heavily pixelated. Oh, and it’s not real. It’s not as though there’s an in-world version of Antiques Roadshow that can somehow verify its virtual provenance). Others, as in the real world, are reduced to begging. So basically, if you are poor in real life, and cannot (or have the good sense not to) convert real dollars into Linden Dollars (a currency with an Orwellian ring to it), you get to be poor in Second Life, standing in the cold looking through windows at the sumptuously appointed tables of the rich, like a virtual Tiny Tim. Except that you’ll never starve. Maybe that’s how SL works as a new kind of documentary experiment, revealing for all the workings of the real world’s economic disparities by inscribing them into a virtual one.

Or maybe I am completely wrong: I am new to this online world, and excited to learn and explore how it works.

  

Notes on Allan King

I just thought I’d post a report on yesterday’s master class with legendary Canadian filmmaker Allan King, held at Innis Town Hall and organized by the Documentary Organization of Canada. We saw several film clips spanning about 25 years, from the early, gritty CBC works like Skidrow (1956) to the controversial “actuality dramas” like Warrendale (1967) and A Married Couple (1969) for which he is most famous. The screening of Warrendale, with its troubling images of emotionally disturbed children lashing out against an unorthodox touch-based therapy, and A Married Couple, with its frank depiction of domestic conflict, prompted a vigorous discussion of documentary ethics: issues of consent, voyeurism, and editorial control were debated, as was the perennially relevant question of whether the camera can observe unobtrusively, or whether it inherently elicits “performance” from subjects. As happens in many Q&As, I felt that a few questioners grilled him with undue intensity on ethical concerns without acknowledging the different social/cultural/artistic climate in which he worked, not to mention the simple fact that without the example of pioneers like King, few of us would be attempting to make documentary work.

As moderator (and generally ubiquitous Canfilm guy) Marc Glassman noted, King stands rather apart from the other Canadian documentary pioneers of his generation, in that, from an early date, he maintained his own production company and strove to make films independently of the institutional filmmaking apparatus, embodied by the NFB. As his work is more unflinchingly bold than most contemporary NFB work, I had always assumed that King was more closely aligned with American cinema-vérité and particularly Frederick Wiseman. Wiseman’s first film, Titicut Follies (1967), a close contemporary of Warrendale, likewise examines a mental institution with striking similarities of approach and style (closely observational camerawork, lack of narration or interviews). Like Warrendale, that film was suppressed by authorities (more severely, actually). Furthermore, King’s longtime cinematographer Richard Leiterman manned the camera for Wiseman’s second film, High School (1968). It surprised me, then, to hear that King only met Wiseman relatively recently.

It was also interesting to get a sense of King’s process. He was unlike Wiseman (or a consummate cinematographer-director like Michel Brault) in that he felt his own presence would necessarily detract from the authenticity of a documentary scene, and spent most of the production of his great “actuality dramas” off-set, loading magazines or looking at rushes. 

In a slightly sad way, I feel that whereas the “golden age” of Canadian film embodied by King’s work has been well documented and celebrated, the current epoch, in which non-fiction filmmakers are so numerous and stylistic/thematic lineages too tangled or diffuse to discern, will defy all such efforts at historical memorialization. The digital documentary scene is very different from the film-based one that preceded it, which essentially consisted of two large institutions for the incubation of talent and technology (the NFB and CBC) and a few lone wolves alongside them (King); assessing (and mythologizing) the current cinema’s significance in thirty years will be a very difficult and thankless task, I think. I don’t know, maybe this is just the kind of nostalgic, “après le déluge” thinking I am prone to.

Apparently, TV Ontario is screening a retrospective of his work; I would recommend everybody check it out. A final thanks to Sadia for letting us in on this great event.

Virtual space as context of presentation

Now that the blog has been up for a little while, I have had some time to contemplate Alex’s question as to the effect of space (in this case, virtual space) on the context of presentation for visual works. Recently, I uploaded some landscape photographs to Flickr (see sidebar), taken on Toronto Island and in some midtown ravines in the dead of winter; these were originally presented in a printed book, produced for Blake’s Doc Studies class. For that project, I had juxtaposed these images with a personal archive of old letters and postcards, attempting to suggest emotional correspondences between text and images. In the original context of the book, the photographs were not captioned, and thus, deprived of their geographical specificity; their primary purpose was to present desolate, empty spaces and suggest the condition of solitude (admittedly in a rather Romantic way, along the lines of Caspar David Friedrich).

Flickr radically changed the context of presentation, and thus altered the meaning of the work. Using Flickr’s map function, I was able to pinpoint, with near-exactitude, the geographical locations recorded in the photographs, and was able to see all the photographs taken by the Flickr community in the immediate vicinity of those locations. For one thing, the sense of individual solitude was decidedly diminished; though the photographs depicted lonely snowscapes, in Flickr Maps, it is possible to get a sense of the sheer volume of photographers documenting the same (or nearby) terrain. “Documenting” is a key word; once a landscape photograph is pinpointed in Flickr (or Google Maps, or Wikipedia, for that matter), it becomes less a Romantic or expressionist reflection of the image-maker’s consciousness, and much more a specific document of a place at a given point in time. And the availability of EXIF metadata means that users can tell exactly which point in time the photo depicts, not to mention how the photographer has enhanced the image in Photoshop. The photo becomes less singular, less mysterious, and more a piece of a bigger documentary puzzle. In effect, social media platforms such as Flickr make your photos part of an ongoing, collaborative documentary/encyclopedic mission, to record every place, person, thing in existence, from every possible temporal-spatial vantage point. (Obviously, the goal of said mission is unattainable, although with the proliferation of webcams, the paranoid and otherwise Orwellian-minded might reasonably disagree.)

The paradigm changes, as individual images, immersed in a sea of other images, lose some of their individual value, but gain a different significance as part of a collaborative project, and the ego is forced to adapt (or possibly turn inward to painting or making daguerreotypes). That being said, sometimes I peruse the billion or so debates raging fiercely between thirteen-year-old wikipedians from their parents’ basements, and I am reassured that collaborative, financially non-lucrative online ventures do not by any means entail the death of the ego.

An aside: personally, I opted to hide my EXIF data, for several reasons. For one thing, I think photographs, even those posted on social media sites, should leave viewers with some mystery (“was that Photoshopped?”, “what time of day was that?”, “was that taken recently?”). For another, as a novice photographer, I feel self-conscious, and don’t want the broader world to know that I took a picture in broad daylight with ISO 1000 (can’t I claim that my specific technical deficiencies are private?). I imagine that professional photographers would consider some of their metadata to reveal trade secrets; my metadata only reveals my flaws. (But then again, would professional photographers with a mind to asserting the monetary value of their work actually post work on Flickr?) Lastly, a minor point: I didn’t like being included in a graph depicting how many thousands of people used the same camera model – does Flickr exist only to provide market research information for Nikon, Canon, Olympus, Apple, Adobe? Anyway, my anxiety over metadata is, for now, mostly academic. I have received about ten picture views during my brief tenure as a Flickrite, and a sum total of zero comments.

By the way, I am still reflecting on the way physical (as opposed to virtual) space changes the context/meaning of visual work. We didn’t really get far during the exercise in which we were to project work on non-traditional surfaces (for the record, I brought a special holographic edition of National Geographic and some silk scarves, but time elapsed before I got to experiment with them). Will return to this question next time.   

User-triggered narratives and the [murmur] project

Developing a non-linear, user-triggered narrative presents many more complexities than were originally apparent to me. While the task, as introduced in class — to design and shoot a video sequence whose shots are coherent in any order — seemed challenging unto itself, the additional discipline of having said sequence work as a “user-triggered” piece presents a whole new set of design issues. To examine some of these, let’s set aside for now the fact that, from a purely technical standpoint, the Max/MSP patch we are currently working with only allows video clips to be displayed in either linear or random order; exactly how to make the video-playing mechanism respond to user input is clearly beyond what we have learned thus far.
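For what it’s worth, the patch’s two orderings are easy to model outside Max/MSP. Here is a minimal Python sketch of a clip sequencer supporting exactly those two modes, linear and random (the clip filenames are invented for illustration; this is my own toy model, not the patch itself):

```python
import random

def clip_order(clips, mode="linear", seed=None):
    """Return a playback order for a list of clip filenames.

    mode="linear" keeps the authored sequence; mode="random" shuffles it,
    mirroring the two orderings the Max/MSP patch supports.
    """
    if mode == "linear":
        return list(clips)
    if mode == "random":
        rng = random.Random(seed)  # seed only to make demos repeatable
        shuffled = list(clips)
        rng.shuffle(shuffled)
        return shuffled
    raise ValueError("mode must be 'linear' or 'random'")

clips = ["shot_a.mov", "shot_b.mov", "shot_c.mov"]  # hypothetical clip names
print(clip_order(clips))                            # the authored, linear order
print(clip_order(clips, mode="random", seed=1))     # one possible reshuffling
```

The interesting design problem, of course, is the third mode missing here: one where the choice of the next clip depends on what the user just did.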

First off, many video sequences could be said to make sense in any order, but this does not necessarily imply that they would be coherent in a non-linear, user-triggered situation. Take, for instance, a simple montage sequence in film; often, the various images edited together to reinforce an idea could be re-ordered without any loss of intelligibility (and indeed, this creative re-ordering of actuality footage in post-production is at the core of traditional documentary practice: see Janis Cole’s Documentary Manifesto, whose eighth point asserts, non-controversially, that “Documentaries are written in the cutting room.”). But do the constituent shots in a montage make good raw material for user-triggered works? Likely not, I would think.

So, what type of shots work best in user-driven narratives? Taking a look at Ms. Dewey, Microsoft’s clever but somewhat demeaning “search assistant,” it is clear that there is something mesmerizing about her use of direct address (something documentarians from Dorothea Lange to Errol Morris have understood), and that, following the filmic injunction against jump cuts, the constituent clips tend to achieve a fairly seamless continuity by beginning and ending with Ms. Dewey in a fairly neutral, standardized pose. User-driven narratives also seem to require a large amount of content to be effective; if repetition or sparseness of content allows users to perceive the limited extent of the database, their sense of personal discovery (and experiential uniqueness) is diminished, and they will probably lose interest.

(A similar user-triggered narrative on the web is Burger King’s low-res, webcam-based Subservient Chicken, in which visitors have total control over a man in a chicken suit, and can command him to do anything — within reason, of course. The subservient chicken, unlike the green-screen-backed Ms. Dewey, is situated within a space [a gloomily furnished apartment] adding a measure of context and limiting the user’s reasonable command choices. As such, the user’s expectations are lower, and thus, the chicken is less likely to disappoint.) 
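(I would guess the machinery behind something like Subservient Chicken is quite simple: scan the typed command for recognized keywords, play the matching prerecorded clip, and fall back to a stock “confused” clip when nothing matches. A toy Python sketch of that guessed-at mechanism — all command words and clip names are my own invention, not anything from the actual site:)

```python
# Each recognized keyword maps to a prerecorded clip; anything unrecognized
# falls back to a stock "confused" clip. (All names here are hypothetical.)
CLIP_LIBRARY = {
    "dance": "chicken_dance.flv",
    "jump": "chicken_jump.flv",
    "sit": "chicken_sit.flv",
}
FALLBACK_CLIP = "chicken_confused.flv"

def respond(command):
    """Pick a clip by scanning the user's command for known keywords."""
    for word in command.lower().split():
        if word in CLIP_LIBRARY:
            return CLIP_LIBRARY[word]
    return FALLBACK_CLIP

print(respond("please dance for me"))  # matches the keyword "dance"
print(respond("recite some Barthes"))  # no match, so the fallback clip plays
```

(Notice how the fallback clip is what keeps expectations manageable: the system never has to admit the database is small, it just acts confused.)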

Putting aside these web novelties, which inevitably become tiresome, it dawned on me that an excellent, documentary example of a user-triggered narrative — and one situated in physical space, no less! — is Toronto’s own [murmur] project. I had always known about the initiative, but last year in Doc Studies 1, Rob Lendrum made a strong case for its value as an innovative form of documentary. In [murmur], users wandering the city discover green, ear-shaped signs affixed to lamp- and sign-posts, and can dial a telephone number to hear a story about that location. These stories can either take the form of a historical narrative about the place’s “official” significance or of a more personal and idiosyncratic tale as recounted by a local inhabitant; some guide the listener on a little walking tour through the space. [murmur] is non-linear in that users do not experience a single, unchanging narrative, nor do they absorb it from a fixed vantage point; they build a narrative based on their own trajectory of movement through the urban environment (it somewhat reminds me of those “Choose Your Own Adventure” novels from childhood). And they can even contribute their own urban stories to the project’s database, a big plus.

Of course, [murmur] is not without its own issues — for one thing, the storytellers on its website do not seem to reflect the cultural diversity of their respective neighbourhoods; for another, the density of green ears in some places suggests a rather competitive, territorial view of urban memory (I’m looking at you, Kensington Ave.), in which contributors lay claim to the privilege of assigning significance to a particular house, signpost or streetcorner. That said, [murmur] is a wonderful, local, and relatively low-tech example of a user-triggered narrative deployed creatively in public space to give a sense of the multi-layered, non-linear histories existing in the pavement beneath our feet.

Murmur signage

 

Preliminary reflections on new media

First, a disclaimer, or, more accurately, an excuse: lacking a production background in any of the three documentary media (my university degree is in history), all media are technically “new” media to me. Prior to the beginning of the MFA program, I had never made serious use of either a still or motion-picture camera and was convinced that the intimidating world of even newer media was thoroughly beyond me. (In fairness, I did have one short documentary to my credit, an exceptionally naive five-minute nature film shot one afternoon on Toronto Island using the video function of a Sony point-and-shoot. It was called The Canada Goose: Friend of Man – link to follow). Furthermore, my brother, a software engineer at Microsoft in Seattle, was adept at belittling my modest computing skills.

That disclaimer aside, over the past two weeks I have already begun to open my eyes to the potential of new media, in both its physical and virtual manifestations, to complement (and challenge) traditional documentary conventions of process, form, authorship, etc. Truthfully, this sense of new possibilities emerged somewhat more gradually over the last several months, through exposure to works like Robert Arnold’s then-unfinished Rotunda Project, an installation work that combines time-lapse imagery of the University of Virginia rotunda taken by a remotely-controlled camera with electronic musique concrète composed out of environmental sounds. Arnold’s Morphology of Desire, a “continuous loop” video created from Harlequin Romance covers and displayed in the Ryerson New Media Gallery, was also inspiring. Another eye-opening exposure to a new-media-type process was seeing Tori Foster’s response to the photographic typologies of Bernd and Hilla Becher, produced using some sort of algorithmic process beyond my understanding, and presented on the web (anyone know if the link is still active?). And this is clearly just the tip of the iceberg (these examples demonstrate new ways of integrating computing into documentary production and presentation, though none of them would appear to address the interactivity that is key to much new media work). It’s still an intimidating iceberg, but definitely worth trying to scale.

Today, an interesting issue emerged in class: the question of whether a documentary must necessarily be situated in the past, or whether a documentary can exist in the present, evolving organically through ongoing interactions with a group of people. The idea of an evolving, present-moment documentary seems, to me, to challenge the traditional notion of documentary authorship, so bound up in a heroic and hierarchical auteur theory. In the traditional view, documentary artists gather material in production, and then use an editing process to transform select “decisive moments” (to use Cartier-Bresson’s term) from actuality into finished, (ideally) perfect products that cannot evolve once they have attained the hallowed goal of “picture lock.” (These products can, however, be supplemented by trailers, subtitles, and something called a “featurette”). New media seems to offer a democratizing, collaborative alternative, in which audiences/participants (just what is the right word for those who share in a new-media experience?) shape the work through interaction with it. New media, with its emphasis on continuing bi-directional or multi-directional communication, seems to fundamentally blur the lines between maker, subject and audience. True, the old media came to emphasize collaborative approaches (the National Film Board’s Challenge for Change program, started in 1967, is an important example), but, at the end of the day, technocrats possessing the means of production still held a good deal of the creative control, and the finished films were, due to the constraints of the medium, still just that: finished, linear, inflexible. The virtual species of new media, in particular, appear to present immense opportunities for work that is ongoing, nonlinear and truly collaborative. Clearly, new media entails a necessary diminishing of the ego (no small feat for someone whose documentary idol was for many years that Wagnerian cowboy/conquistador, Werner Herzog).

Come to think of it, I have had the opportunity to assist on a documentary new media project. It is called Testaments of Honour, and its aim was to produce an evolving, online archive of primary source materials about the Canadian contribution to World War II. The Testaments team travelled the country conducting interviews with Canadian veterans and scanning their photographs. Rather than organize the material into a single, definitive documentary, the video and photos were minimally edited, meticulously tagged with keywords (using digital asset management software like iView Media – a kind of database, isn’t it?) and made available online through the government’s Heroes Remember website. Not high-concept new-media art, but an attempt to make a large, evolving database widely accessible and somehow responsive to public feedback.
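That keyword tagging really is a database operation at heart: essentially an inverted index, mapping each keyword to the set of assets that carry it, so any tag can pull up all its material at once. A small Python sketch of the idea, with hypothetical asset names standing in for the real archive (this is my own illustration, not the Testaments workflow or iView Media itself):

```python
from collections import defaultdict

def build_index(assets):
    """Map each keyword (lowercased) to the set of assets tagged with it."""
    index = defaultdict(set)
    for name, keywords in assets.items():
        for kw in keywords:
            index[kw.lower()].add(name)
    return index

# A hypothetical asset list in the spirit of the archive's tagging.
assets = {
    "interview_042.mov": {"veteran", "interview", "Normandy"},
    "photo_0317.jpg": {"Normandy", "photograph", "1944"},
    "interview_055.mov": {"veteran", "interview", "Italy"},
}

index = build_index(assets)
print(sorted(index["normandy"]))  # every asset tagged "Normandy", regardless of type
```

The payoff of this structure is exactly what the project needed: new material can be added at any time, and the “documentary” is simply whatever subset a given query assembles.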

Okay, so I am rambling. I also have some thoughts on how Flickr’s map function changes the experience of sharing/viewing photographic work (and not necessarily for the better), but they are not yet well formed.