Lev Manovich

Monday, December 19, 2005, 8:00 p.m.

Lecture:
Lev Manovich, «SCALE EFFECTS»

We usually think about media history in terms of new, qualitatively different inventions: print, photography, film, computer media, and so on. Today, however, we are seeing a new phenomenon at work – computers are radically scaling up already existing media and cultural forms and changing their identity. What new effects of scaling can we expect? How can we apply the concept of scalability in media theory?

A video stream of this event is available at the following address: snm03.snm-hgkz.ch/~isis/projekte/levmanovichdez2005/

Lev Manovich is the author of The Language of New Media (The MIT Press, 2001), which has been hailed as "the most suggestive and broad-ranging media history since Marshall McLuhan." He is Professor of Visual Arts at the University of California, San Diego, and Director of the Lab for Cultural Analysis at the California Institute for Telecommunications and Information Technology <www.calit2.net>.


<www.manovich.net>


Lev Manovich

SCALE EFFECTS

iGrid 2005, September 26-30, 2005

I am used to flying around the world to get the most amazing and inspiring glimpses of the future – or at least, the particular part of the future that professionally interests me – the part where computers, visual culture, and art intersect. But with the establishment of the California Institute for Telecommunications and Information Technology (Calit2) on my own campus – University of California, San Diego (UCSD) – more and more often I don’t need to fly anywhere. Of course, my university – recently named by Newsweek the “hottest science campus in the US” and ranked by The Institute for Scientific Information as the third most highly rated university in the world “in terms of its citation impact in science and social science” – has always had plenty of cutting-edge lectures and demos in any given week. But more often than not, they have been in fields of science that don’t directly impact my professional interests. However, since the Calit2 research agenda includes significant efforts in next-generation computing, networking, and display technologies, visualization, computer graphics, and computer vision – as well as new media arts – the establishment of Calit2 has affected me directly.

I still travel a lot, circling the world at least once a year so I can see with my own eyes the relentless march of globalization and the diverse new local cultural forms it provokes; soak in the dense cultural ecology and creative energy of traditional European cities; and talk with new generations of digital artists in places such as China and India. But to understand the future of imaging, visualization, and visual communication technologies, I no longer have to leave my home campus, for the key components of this future are being imagined and built right here in San Diego.

Calit2 is the largest research institute for IT in the US, housing, at full occupancy later this fall, about 900 researchers, graduate students, postdocs, and staff in its new building. Its researchers have won plenty of awards in most scientific fields, but what – at least in my view – is crucial to the institute’s already visible success and impact is the very broad and long-range vision of its leader, Larry Smarr. This vision is rare in the scientific community. Larry really understands the importance of new forms of visual communication for advancing science. He has a track record of leading, or being closely involved in, a number of groundbreaking projects at the intersection of imaging, computing, and networking: he has worked with people at the Electronic Visualization Laboratory at the University of Illinois at Chicago who designed the CAVE (now the most commonly used virtual-reality display system); he funded the students who, in the early 1990s, created Mosaic, the first widely used graphical browser; and, before arriving to assume leadership of Calit2, he headed the National Center for Supercomputing Applications (NCSA), where a significant use of the supercomputers has been to compute detailed visualizations of very large data sets. Therefore he understands better than most how presenting something visually, interacting with the visualization, and sharing it with others in real time over distance can impact science – as well as culture. And this is what iGrid 2005 is all about, at least for me – computer imaging, telepresence, interactive visualization, and science collaboration over super-fast optical networks using distributed computing resources. Super-fast is the key word here.

What happens when you scale things up? Wall-sized images that have a hundred times more detail than what we are using today; real-time streaming of visuals from the ocean floor or from across the globe that look much sharper than today’s projection in a movie theatre; the ability for a research team spread around the world to see, discuss, and jointly manipulate such images.

Do scientists start thinking and working a little differently when they have these new abilities? And what happens when these abilities become available to various industries and to the wider public? And – the question that of course directly interests me – how will these new imaging, visualization, and communication capabilities affect future culture? What are the new cinematic, graphic, and multimedia languages that will take advantage of the future imaging infrastructure? In other words, when you have a wall-sized display with 35,000 x 12,000 pixel resolution, what do you put on it – besides super-resolved images of a brain, a geological process, and other scientific phenomena?

When we think of technology’s impact on culture, we are used to considering the effects of new technological inventions (including visual technologies). We are not used to thinking about the effects of scaling up already widely used technologies. For instance, generations of art historians have discussed the introduction of the new technique of one-point linear perspective during the Renaissance in western Europe. Similarly, endless volumes have been written about the invention of photography in the 19th century and how it affected art, culture, warfare, etc. To take a more recent example, it’s obvious that the whole series of new medical imaging techniques developed over the last two decades in addition to the century-old X-ray technique – CT, MRI, PET, and others – has had a fundamental impact on medical practice. Similarly, the introduction of graphical browsers around 1993 is what allowed the World Wide Web – which at this point had already existed for a few years – to take off quickly.

But what about the impact of scaling up existing media technologies – for instance, faster networks or higher-resolution computer images? This is harder to think about – although if we are to go to the very source of contemporary thinking about visual media – Marshall McLuhan’s 1964 book Understanding Media – we will discover that the idea of scale is central to McLuhan’s thinking. McLuhan writes: “For the ‘message’ of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs. The railway did not introduce movement or transportation or wheel or road into human society, but it accelerated and enlarged the scale of previous human functions, creating totally new kinds of cities and new kinds of work and leisure. This happened whether the railway functioned in a tropical or a northern environment, and is quite independent of the freight or content of the railway medium.”

As we can see, for McLuhan, new media technologies accelerate, expand, or scale already existing technologies, which leads to qualitative changes in society and culture. Yet these ideas were not taken up by subsequent writers, possibly because the table of contents of Understanding Media reads like a catalog of new communication inventions, with chapter names like “print,” “telegraph,” “telephone,” “car,” “television,” etc. – without mentioning the idea of scale itself.

On September 26, I made my way to the brand-new Calit2 building where iGrid 2005 was about to start. Officially, the building would not be dedicated until a month later, so I was not surprised to see construction workers both outside and inside putting the finishing touches on it. I entered the main auditorium to attend the opening ceremony. It is focused on an event called “International Real-time Streaming of 4K Digital Cinema.” The master of ceremonies, Laurin Herr, tells us the facts about what we are about to witness: “Live and pre-recorded 4K content, with four times the resolution of HDTV, is compressed using JPEG 2000 at 200-400 Mb/s and streamed in real time via 1 Gb/s IP networks, from Keio University in Tokyo to iGrid 2005 in San Diego.” In layman’s terms: digital video – computer animations, dynamic visualizations generated in real time, digitally scanned film, as well as a real-time teleconferencing session – all at a resolution of 4000 x 2000 pixels – is being streamed from Tokyo to San Diego, where it is projected using a 4000-line projector. The screen goes green for a few seconds while the connection is being established.
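
It is worth working out what these numbers imply. The minimal sketch below estimates the raw data rate of such a stream and how hard JPEG 2000 has to compress it to fit the stated 200-400 Mb/s; the 24-bit color depth and the 24 or 30 frames-per-second rates are my assumptions, not figures given at the demo, so the ratios are only indicative.

```python
# Back-of-the-envelope arithmetic for the 4K stream described above.
# Assumptions (mine, not stated at the demo): 24-bit RGB color, 24 or 30 fps.

width, height = 4000, 2000      # pixels, as reported at iGrid 2005
bytes_per_pixel = 3             # assumed 24-bit color

for fps in (24, 30):
    raw_bps = width * height * bytes_per_pixel * 8 * fps   # raw bits per second
    # Stated compressed rates after JPEG 2000: roughly 200-400 Mb/s
    ratio_low = raw_bps / 400e6
    ratio_high = raw_bps / 200e6
    print(f"{fps} fps: raw ~{raw_bps / 1e9:.1f} Gb/s, "
          f"so JPEG 2000 must compress roughly {ratio_low:.0f}:1 to {ratio_high:.0f}:1")
```

Even after compression of this order, a 200-400 Mb/s stream was far beyond what consumer connections of 2005 could carry – which is why the dedicated 1 Gb/s links matter.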

I wonder what I will see – I am thinking of the normal video-streaming experience on the Net today: uneven frame rate, compression artifacts, lost image-sound synchronization – in short, images that look like they are being blown about by a wind that periodically changes its force.

But what I see has nothing to do visually with what I normally experience as streaming video. In fact, these moving images are unlike anything I’ve ever seen. Forget about the usual streaming artifacts – everything is perfect. The images contain much more detail than you can see with natural sight or capture with a film camera. Everything is in focus; the level of detail and sharpness can be compared to high-quality, large-format still photography. But this is not a print from a 4x5 color negative shot using a long exposure. What I am seeing, along with the stunned audience, is being captured in real time by a digital video camera in Tokyo, compressed, sent across the ocean, decompressed, and then projected on a large screen in San Diego. The Tokyo hosts that we see on our screen here in San Diego joke with the hosts in our auditorium, while my hungry eyes try to take in all the incredible detail contained in the images on the big, movie-theatre-size screen – the titles of books on the shelf; the shadows on the faces; the light effects on the walls and the floor.

I feel that this new level of resolution indeed changes things: the people on the other side of the globe are very much present in our space, and this creates a new level of attentiveness and focus for me. I feel that, in fact, they are even more present than the audience in the auditorium where I am sitting, since I see them large and in amazing detail. Normally we expect to see this level of detail only when looking at objects that are quite close to us, while objects further away appear less sharp and less detailed. Therefore, 4K teleconferencing plays a trick on our brains, sending signals that tell the brain that the objects on the screen are physically closer than the people and objects actually present nearby.

Over the next couple of days, I see many other imaging, visualization, collaboration, and telepresence applications that use the Grid infrastructure. All of them look like miracles – but they’re here today. In one, a scientist in San Diego uses a laptop to control a program that runs on a computer in another part of the world; the computed visualization is sent back to a display here in the conference room, and, since there is no visible delay, the scientist can interact with the visualization as though it were running on the same laptop from which he is controlling the program. The advantage is that you don’t need the powerful computational resources required to create highly detailed visualizations locally – you can use programs, server space, and other resources located remotely as though they were all running locally. In other words, data can be transferred practically instantly using the dedicated optical network over which one can reserve particular lightpaths. Therefore, it becomes efficient to distribute the functions of a single computer across the network. The interface can be located at one node, computation can take place at another node (or at multiple nodes), storage at yet another, and so on. One can use the resources on a network as though they were a single virtual computer. As Larry Smarr put it during the conference opening, with Grid computing, the world is reduced to a single point – it is as though all computing resources connected to the optical grid were located in the same room.
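
To make that division of labor concrete, here is a toy sketch in Python of the pattern just described – an “interface node” that only sends parameters and receives results, and a “compute node” that does the rendering. It uses the standard xmlrpc module purely for illustration; the actual iGrid demos ran on dedicated optical networks and Grid middleware, not on anything like this.

```python
# A toy illustration of splitting interface and computation across nodes.
# This is NOT the Grid/OptIPuter software used at iGrid 2005 - just a sketch
# of the general pattern using Python's standard xmlrpc library.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def render(width, height, zoom):
    # Stand-in for an expensive visualization that would normally run on a
    # remote supercomputer; here it just returns a textual description.
    return f"visualization {width}x{height} at zoom {zoom}"

# "Compute node": expose the renderer over the network.
server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(render, "render")
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Interface node": a laptop that sends parameters and displays the result,
# as though the program were running locally.
laptop = ServerProxy("http://localhost:8000")
print(laptop.render(17600, 6000, 2.5))
```

The point of the sketch is only the division of labor: the “laptop” never holds the program or the data; it just sends parameters and receives the finished result.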

The two demos that made the biggest impression on me were presented on the EVL LambdaVision display. The display consists of 55 tiled LCD screens (11 horizontally x 5 vertically), resulting in a total resolution of 17,600 x 6,000 pixels (105,600,000 pixels in all, or approximately 100 megapixels). During iGrid, the Netherlands Computing and Networking Center SARA set a world record for “bandwidth usage by one single application showing scientific content” by streaming visualizations of various large scientific objects from Amsterdam to the LambdaVision display in San Diego at a sustained rate of 18 Gigabits per second (Gbps). Amazing as it was to realize that the ultra-high-resolution images were coming in real time from Amsterdam at that speed – and to think how such capability might be used by a distributed science team, or any distributed work group for that matter – I was most impressed simply by being able to interact with these super-detailed images on the wall-sized display. One image was a panoramic view of Delft. The resolution of this image: 78,797 x 31,565 pixels. Yes, that is correct: seventy-eight thousand by thirty-one thousand pixels, plus some – which adds up to 2.48 gigapixels. The size of the data that makes up the image: 7.12 GB. As Bram Stolk of SARA explained to me, the multiple photos that make up this monster image were captured by a camera mounted on a robotic arm. Afterwards, the computer that controls the camera automatically stitches the multiple photos together into one image.
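
The arithmetic behind these figures is easy to check. In the short sketch below, the per-tile resolution of 1600 x 1200 is my inference from the quoted totals, and the 3-bytes-per-pixel figure for the panorama is an assumption (uncompressed RGB), so the sizes are only rough confirmations of the numbers above.

```python
# Rough checks of the LambdaVision and Delft-panorama figures quoted above.

# LambdaVision: 11 x 5 tiled LCDs. A per-tile resolution of 1600 x 1200 is
# inferred from the stated 17,600 x 6,000 total - it is not given in the text.
tiles_x, tiles_y = 11, 5
tile_w, tile_h = 1600, 1200
wall_w, wall_h = tiles_x * tile_w, tiles_y * tile_h
wall_pixels = wall_w * wall_h
print(f"wall: {wall_w} x {wall_h} = {wall_pixels / 1e6:.1f} megapixels")   # ~105.6

# Delft panorama: 78,797 x 31,565 pixels - roughly 2.49 billion pixels,
# i.e. the ~2.48 gigapixels cited above.
pano_pixels = 78_797 * 31_565
print(f"panorama: {pano_pixels / 1e9:.2f} gigapixels")

# Assuming uncompressed 24-bit RGB (3 bytes per pixel), the raw data comes out
# on the order of the ~7 GB quoted for the image.
print(f"approx. raw size: {pano_pixels * 3 / 1e9:.1f} GB")

# How many full LambdaVision walls would it take to show the panorama at
# native resolution?
print(f"wall-fulls needed: {pano_pixels / wall_pixels:.0f}")   # ~24
```

In other words, even the 100-megapixel wall can show only a small fraction of the panorama at native resolution at any one time.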

Another image presented by SARA on the EVL LambdaVision display was a visualization of a brain structure, also constructed from multiple image sources. As we navigated around the image, Bram explained to us what, in his view, is an important advantage of using wall-size displays: You can zoom into details while still maintaining the sense of the whole. In other words, since you continue to see the whole image while examining the details, you have the sense of context in which each detail fits. In contrast, when you zoom into the same image on a single LCD commonly used today, whether it is 17- or 23-inch, this sense of context disappears.

SARA’s demo showed me one effect of scaling up existing imaging technologies – in this case, scaling up the size of an image and the size of a display. The same hi-res image presented on a wall-sized display functions in a new way. Although factual information in it does not change, we can now experience it and understand it differently. Pragmatically, it becomes a different image containing new knowledge.

Of course, large displays were not invented in 2005. For many centuries, people relied on wall-sized and table-sized maps – or sets – when planning a battle, designing a city, or performing any other task that required focusing on minute details while at the same time keeping track of the whole picture. But, with a Grid infrastructure, you can request images of that size to be instantly sent across the world to you from whatever location has the right computer resources to compute them. And, of course, since these are digital images, they can be processed, analyzed, enhanced, colorized, etc., enabling them to yield new information and knowledge.

At the concluding session of iGrid 2005, we were treated to more telepresence sessions, scientific visualizations, computer animations, and a short film – all created at 4K and all streamed in real time from Tokyo. I was thinking of another famous demo that took place 110 years ago in a café in Paris. At that “demo,” the Lumières screened their film shorts, including one showing an arriving train that purportedly was so real it sent the audience running out of the café.

The media reports during the first years of cinema in the 1890s highlighted the miracle of making images move – pictures of leaves, water, and people in a city street suddenly coming to life. The early name for cinema – “moving pictures” – similarly emphasized movement as the principal quality of the new medium. At iGrid 2005, we were also fascinated with movement – but, in our case, it was the movement of information over optical fiber across the ocean. Yet I also had a sense that we were revisiting, in a more direct way, the presentation the Lumières gave 110 years earlier. For the first time, we saw highly detailed, sharp, panoramic images – until now encountered only as still photographs – suddenly come to life. We experienced Moving Pictures v2.0.

Watching the short film by a Japanese director who is beginning to explore the aesthetic possibilities of 4K digital video in relation to lighting, composition, and narrative, I wondered whether the pristine, super-clear, and poetic images of 4K digital video can be related to any visual tradition of the past. Surprisingly, if normal video flattens the world, rendering it prosaic and even banal, 4K digital video creates the opposite effect: even the most prosaic objects and boring, flat surfaces acquire a precious quality as the light captured and reflected by their micro-textures is rendered visible. The effect is as though we are seeing the world for the first time, after it has been washed clean by the rain. The comparison that comes to mind is with Dutch 17th-century paintings: portraits, still lifes, and interiors. As analyzed by art historian Svetlana Alpers in her influential book The Art of Describing, in contrast to Italian Renaissance painters, who recreated in their paintings the soft Italian light that hides details and softens shapes, their Dutch counterparts delighted in presenting every detail and in carefully rendering different surfaces, textures, and light effects. In the right hands, 4K digital video appears to be capable of creating a similar representation of the world. It achieves its poetic effect not by hiding details in shadows or fog but rather by presenting them all – and letting our eyes delight in comparing different patterns and textures.

I am afraid that the more than 400 scientists who participated in the iGrid 2005 conference, and the designers of the Grid, may be unhappy with me. They may wonder why I dwell so much on the visual quality of the images I saw rather than on what are probably more important uses of Grid computing from the point of view of day-to-day scientific work: collaborative data analysis of very big objects; interactive control of remote supercomputer simulations; visualization of big distributed data sets; and so on.

However, as the Grid infrastructure becomes available to the art and entertainment industries, the new visual qualities of super-large images (such as the 78,797 x 31,565 pixel image of Delft shown at iGrid), coupled with large wall-sized displays and the ability to receive such images instantly from remote locations, will impact how we see the world and the kinds of stories we tell about it. In short, scaling up – in this case, scaling up resolution, size, and connectivity – will have all kinds of effects on future culture, most of which we still can’t envision today.