Harvesting the Net::MemoryFlesh

Image Rights
Image: Courtesy Walker Art Center
Rights: Copyright retained by the artist

Title: Harvesting the Net::MemoryFlesh
Artist: Diane Ludin
Date: April 2001
Location: Online

Object Details

Type: Media Arts (Internet Art)
Technology Used: G2 RealPlayer
Credit Line: Commissioned by Gallery 9/Walker Art Center through a grant from the Jerome Foundation.

Overview: Harvesting the Net::MemoryFlesh, April 2001

Harvesting the Net::MemoryFlesh will be a para-formative tracing of the first year of the claimed completion of the Human Genome and the rise of genetic economies. The tracing process initiated by this harvesting will document, archive, and refold the routes of this new economy. The project will begin gathering net-based articles from April 6, 2000, when Celera Genomics announced its ‘completion’ of the Human Genome (a marketing date, not a scientific one). Research and documentation gathering will continue throughout the year and end on April 6, 2001.

This para-formative harvest will be a process of performing the system behavior of the networks and software of Genetic Economies, and will manifest itself in five steps. Harvesting the Net’s end product will be a series of visualizations as mirrors of our ‘Memory Flesh,’ where eventually all cells will be for sale.

Artist as Reflective Performance System

The act of behaving in a purposefully reduced and analytically methodical manner is the result of immersion and successful upload of subjectivity when facing the behemoth master that is computer technology (the spine of the Internet). The act of performance allows a position from which to reconsider a chosen set of self-conscious, conceptual gestures in relation to this growing behemoth. Through this process I can translate a format of investigation and perform the system as it currently exists, upsetting the master-slave relationship that is generated through technological distribution as we know it.

Step 1: Artist performs the system behavior, tracing companies, research initiatives, mainstream media hype, and attempts to define the economic purpose of ourselves as Genome.

Step 2: Artist generates search strings and system behaviors which begin the harvest. This stage of system harvesting will be deposited into a database, the tissue of memoryflesh.

Step 3: Artist imposes a conceptual parsing engine that reflects the intersections between network architecture and genomic economies. Memoryflesh grafting continues.

Step 4: Artist reroutes the developing archive of material collected and begins to formulate a database generator of genetic simulation, animating memoryflesh.

Step 5: Artist collages and regenerates material mixed from the artist’s performative searches, the artist’s generated auto-bot searches, and parsing engine outlines to create a series of visualizations that will parallel the process of building ourselves as Genome.

Walker Art Center Gallery 9, Harvesting the Net::MemoryFlesh, April 2001.

First published by Gallery 9/Walker Art Center, 2001.

Interview with Diane Ludin, 2001 (Rachel Greene interviewed Diane Ludin about her project Harvesting the Net::MemoryFlesh via email, March 2001)

Rachel Greene: Harvesting the Net::MemoryFlesh is part of a series of works on genetics and the new technological realities of bio-humans. Can you talk about how your earlier pieces informed what you wanted to do with this latest one? Clearly, it makes sense for you to have taken on the genome proper, but what else?

Diane Ludin: About three years ago I started investigating what the human genome was attempting to make. I found it almost impossible to sift through the emerging public discussion around it; it was and still continues to be a subject that stages a certain type of information warfare. But it kept making the papers and getting a lot of media attention with inflated projections of its potential.

After six to nine months of pretty focused research I was able to recognize some recurring themes. I had enough information to build proposals for online projects that would get funding from Franklin Furnace and Turbulence.org. These projects, at the intersection of performance, the body, computer technology, and the Internet, gave me a more concrete understanding of the surrounding info-science. My projects became containers for reflecting recurring themes I was beginning to recognize, some of the themes being: the economic inflation surrounding biotech companies; the invention of online software tools to help track information such as patenting on sequencing research for companies and research initiatives; the inflated projections by pharmaceutical companies and medical practitioners of biotech’s potential.

Like any futuristic phenomenon it takes projections extremely well. It was very hard to get to some of the practical mechanisms and real-time processes behind the hype being manufactured.

So Genetic response system 1.0 was about imaginary visual projections from movies that would draw together a broad approach to biotech in general, and not specifically the human genome. It had a series of quotations from various sources (none of them scientific), invented terms, and links from friends' projects, all mixed with biotech companies and scientific research initiatives. I had spent a few years working in collaboration with artists such as Francesca da Rimini, Ricardo Dominguez, and the Fakeshop gang whose work projected critical, imaginary scenarios approaching technology and science in an art context. Genetic response system 1.0 became a disembodied structure framing my work with these practitioners in my (impulsive) reasoning at the time.

In 1998 I began studying with Natalie Jeremijenko. I found many commonalities between Natalie’s critical view of science, technology, and culture and that of Francesca, Ricardo, and the Fakeshoppers. However, Natalie had a different practical relationship to the discussions of the designing of that technology and its journey into culture and economy. Her ideas and work gave me a contrast for thinking about different cultural projects that technology and emerging sciences were bringing forward. I was able to modify my working practice and build my own investigations. This and financial support from Franklin Furnace and Turbulence.org allowed me to build some projects where I was responsible for the conceptual structure.

Genetic response system 3.0, commissioned by Turbulence.org, was more of a solo meditation than Genetic response system 1.0. I decided to radically reduce the materials I was pulling together. I was chasing after computer companies advertising biotech and related sciences, and began archiving images of economic behavior through online news services like CNN. I mixed these still images with educational video on cellular behavior. It was a place for me to start conceptually mixing the imagery I was drawn to in a more focused manner.

When I finished working on Genetic response system 3.0, I was still feeling the need to go deeper. I had been considering trying to build a search engine, thinking that would be the ultimate way of tracking the shifting and large amounts of information on the human genome without spending much energy weeding through unnecessary information. I looked into what it would take to build a search engine, how they were programmed, and what their limitations were. I concluded that building a search engine kept me too far away from the information content I wanted to capture, and there would need to be some heavy duty filtering of that data to get the returns I was looking for. This, and the thought that I would be making temporary links based on information that other groups maintained, made me realize what I really wanted to build was a repository to record searches that I and other people I was working with could make.

So I proposed a database project whose contents I would gather and re-purpose for viewers over the course of a year. I then began working with Andrea Mayr to design a database that we could use to archive online materials I wanted to work with. We used MySQL with a PHP3 interface. MySQL is open-source database software, and PHP3 is a scripting language that can be embedded in HTML. So Harvesting the Net::MemoryFlesh is a more complete framing structure in that it contains the original source material discovered through my time-based searches online. As far as some of the differences in the type of collage this project makes, it is a relatively more permanent one. Its contents are more focused conceptually. The relationships between all the visual elements are clearer and more generalized. Part of what I accomplished with this project, which I was unable to reach with the others, was to capture what the laboratories that make the human genome look like. What are the tools of the scientists who are making history? What do the laboratory workers look like, and what is the type of imagery these new factories are manufacturing to tell their stories?
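
[Editor's note: As a rough illustration of the kind of setup Ludin describes, the sketch below shows a PHP3-era script talking to a MySQL archive table. The table name, column names, connection details, and sample values are assumptions added for illustration; the project's actual schema is not documented here.]

<?php
// Hypothetical sketch of a harvest archive in the spirit of MemoryFlesh:
// MySQL accessed through the PHP3-era mysql_* functions mentioned above.
// Table, columns, credentials, and sample data are illustrative assumptions.

$link = mysql_connect("localhost", "harvester", "secret");  // connect to the MySQL server
mysql_select_db("memoryflesh", $link);                      // select the archive database

// One row per harvested item: the search string that found it, where it
// came from, the captured text, and the date it was gathered.
mysql_query("CREATE TABLE IF NOT EXISTS harvest (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    search_string VARCHAR(255),
    source_url VARCHAR(255),
    captured TEXT,
    harvested_on DATE
)", $link);

// Deposit one harvested article into the archive.
mysql_query("INSERT INTO harvest (search_string, source_url, captured, harvested_on)
             VALUES ('genetic landlords', 'http://example.com/article',
                     'excerpted text ...', '2000-04-06')", $link);

// Pull everything found under a given search string back out for re-collage.
$result = mysql_query("SELECT source_url, captured FROM harvest
                       WHERE search_string = 'genetic landlords'", $link);
while ($row = mysql_fetch_array($result)) {
    echo $row["source_url"] . "<br>\n";
}
mysql_close($link);
?>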

RG: How has Natalie influenced you, and what have you learned from her? Not only am I a fan of her work, but I think seeing these exchanges/pedagogical relationships at work can be interesting–especially since as women we are often discouraged from this kind of exchange, and/or get caught up in, or held up by, the goal of technomastery.

DL: Amen, been talking a lot about this phenomenon with Shu Lea Cheang, Yvonne Volkart, Diane Nerwin, and Ricardo over the last couple of days. They are part of the show I presented some work in here [in Lucerne, Switzerland]. We have been calling it technoformalism, but I like “technomastery” better.

RG: Cool! So what did you take from Natalie’s work and teaching?

DL: Many things. The most recurring phrase that comes back to me as I am working on this project and technowork in general (be it devices or the Internet) is a phrase that I got from an essay of hers you published on Rhizome.org called “Database Politics.” She wrote: “…technologies are tangible social relations. That said, technologies can therefore be used to make social relations tangible.”

I often ask myself whether or not I am making tangible the social relations I am interested in–apparent or not. It has become one of the standards I use to evaluate my output. I was curious as to what that meant when I read it. I was only able to imagine it partially. It seemed that a technological relationship had its own category, and very little social interaction within it, since it has only begun to move into public awareness in the last couple of years (it therefore has low contrast, and only extremely minimal social experience can be accessed). It became an idea I understood more as I activated it and layered it into my thinking.

RG: You said “… Natalie had a different practical relationship to the discussions of the designing of that technology and its journey into culture and economy.” Let’s talk about that.

DL: Ricardo and Fakeshop did not work through the institution the way that Natalie does. Francesca began with a more organizing interface in Australia (and a background in corporate technological purposing), so there are specific differences that we in New York, outside institutions, had yet to access. Ricardo and Fakeshop were trying to mobilize their cultural activity through art, writing and activism and are more bound by these filters than Natalie. Natalie worked at Xerox PARC and was doing her doctorate at Stanford in Silicon Valley, which I consider a social and developmental root of the computer industry. Stanford was where a lot of the industry stars were educated. It seems that it offered her interior access to the industry development that we as East Coast artists and activists were struggling to grasp. She was able to practice her work and social activity with access to the machinery that was, and still is, defining technomastery.

RG: I really like that for a number of your projects you use links, images, text, or often some basic frames technology. In your statement you use terms like “search strings” and “conceptual parsing engine”–you’re using somewhat inflated tech terms to talk about your own subjective hunting, gathering, and filtering. Can you talk about that as a strategy?

DL: I think emerging or progressive technological distribution language contains inflated projections. It is a creative process that is accessed by various types of PR media machinery building it. The distribution language we are fed needs to be regenerated. It is often very sci-fi and applies inflated technological language to simple software and Internet manipulations. This is a way in which I can locate the tangible social relation in whatever technology I am working with and behave it. It is in the concept and creative manipulation of that language that I can move the fastest. Visualization technology and visualization culture move at a different speed in relation to text and writing within computer technology. The part of my practice that is regenerating technological terms is often the most fun for me. Word-processing interfaces and text manipulation are closer to innate computer language. The database that we designed for MemoryFlesh is a simple relational database.

RG: Tell me a little bit about what it’s been like as an artist circulating through some of the institutional hallways of interactive art. New media art has been so trendy and privileged lately; it worries me! I worry that the elements I cherish most about it–hacktivism, tactical media, and its capacity for institutional critique and social engagement–will be lost in favor of presentation or dumb technomastery.

DL: Part of the work I have been developing is possible because of the privilege that institutions are now affording to net-specific work. A major reason for my building on the net has to do with what I am financially supported to do. I have other work, both artwork and labor for living, but I am not paid enough to develop it, not to the level I am to work on the net. In some ways it makes my work as an artist easier, that I don’t have to work as hard to promote myself, propose projects or convince institutions of its significance. The institutions are doing this for me. It is also helping me activate a practice that is more culturally motivated, as opposed to artwork that has a set relationship to culture, and a history of cultural expectations that categorize it.

There is currently a scramble to find work that utilizes the net in the way that I have been using it in the last few years. I don’t know how long this will last, but I have been fortunate recently to propose ideas that institutions are willing to promote and to fund. And last but not least, it is easy to translate my artistic practice into experience as a designer and technical consultant for companies wanting to use the net.

The institutionalization or trendiness of any emerging artistic or cultural movement of attention goes hand in hand with the weaving of standards that are driven by previous historical traditions of mastery. As far as socially engaged/politicized work being replaced by technomastery work, I think technomastery work is already given more attention. There is the entertainment industry driving novel visual effects, not to mention the speed with which technology companies are infecting the economy and popular culture with hardware and software. Such technology is framed as a “must-have”: cell phones, cell phones with email, palmtops, wireless palmtops, beepers, digital cameras, portable MP3 players, etc. These cultural mechanisms shape our expectations of computer technology’s purpose. As a result, so much attention and time are given to keeping up with the latest trends in devices and software that there is little left to consider the impact of them. So we are left with a “technology for technology’s sake” attitude in our culture. This is an agenda that drives a lot of institutional funding of art. Artists are great for manifesting what doesn’t yet exist in culture at large. For me, when considering my recent projects, I think of what I want to do with people’s attention. I assume that the user of my sites will pay attention to all the choices I’ve made in assembling the elements of the project. This allows me to play with associations within the given set of text and images, and begin to interact with the expectations we are given when considering work on the net.

The potential we are losing in the transfer of art that is technologically based/interactive to being evaluated for its technomastery is the possibility to reach audiences that may not have been looking for socially engaged or politicized work, or even the opportunity to encounter it. It seems to me that the committed, politically motivated and socially active types will always find each other, as will their work. And yet the Internet offers a new layer of communication continuum that can help motivate or mobilize groups of people quickly.

Then there is the sensational nature of issues connected to the Internet, which has been promoted as being more than it is, offering more than it delivers. Perhaps this is the result of wildly successful distribution and advertising campaigns by star computer industry companies like Microsoft and Cisco. Not to mention the inflated economic impact venture capital injects into the system via companies and jobs. I have faith that there will always be artists who redirect our attention to social issues, and discussions around social issues, to see the limitations of authoritative representation we are fed. And there will always be a parallel group of artists who are uninterested or uninspired by what is behind what infotainment tells us is happening in the world. For them technology for technology’s sake will allow an easy transition to new discussions of aesthetics made possible by new media.

RG: Your work takes on quite a weird industry sector. Have there been any conflicts or issues you want to mention? Have any biotech companies/webmasters/publications objected to how you have been using their material?

DL: I think they are way too busy trying to develop, expand, and distribute their industry and its potential economically to be aware of the way in which someone other than themselves would be using their imagery. Last year at this time I wasn’t able to find the imagery I now have. Most of the imagery in the database was loaded in the last six months. This suggests to me that the speed with which they are currently operating doesn’t allow for careful examination of a sophisticated advertising/company representation campaign. Plus they, as biotech companies, aren’t expected to put forth an advertising campaign that compares with older more traditional companies.

RG: One of the central phenomena your project points to is the homogenization of rhetoric and language around the Genome Project and biotech more generally. And I think you effectively undermine some of the bureaucratic marketing-speak of the current discourse with your projects. But did you ever worry that the barrage and remix of images and text (what you explained as your own process to "drive conceptually and mix imagery you were drawn to") would create more confusion for the user?

DL: I don’t think it could be more confusing than the way in which the human genome and biotech in general are represented. This media mess allowed me to take a simple approach, combining the language around economic distribution and promotion with images of the tools and the environment the tools exist and operate in. Interjected phrases like "genetic landlords" and "point and click genes" are little bits of spin that nonscientific types can interpret and more easily understand when considering the battle over the human genome.

RG: I wasn’t sure if you were just showing how the genome discourse reproduces its masters' images–or if it was your experimental aesthetic in effect. What do you think?

DL: It starts in my experimental aesthetic. But when placed on the content of the human genome, its press, generative environment, and tools–these elements lead to the larger issue of how “the genome discourse is using technology to reproduce its masters' images.”

RG: What do you think is powerful about the tools of new media? Compared to the tools and mechanisms of Euro-corporatism?

DL: It is a space that is still open to interpretation, in a way that older media, which has already been defined, is not. There is more room to work, more work to do to translate the drives that various groups find in it. It was originally designed as a communication and research source for computer geeks and research scientists to share their findings. This communications nature, and the audience it was originally designed by and for, still remains at its core. The distribution and buzz from computer companies to wire the world and create stable ecommerce markets has yet to be fully realized. The business models used to try and make it profitable are not working. We are seeing the limits of the artificially generated economic value that venture capital creates in the recent NASDAQ crashes and in ecommerce companies dropping out of business. In order for the net to be successful as a commerce circuit, it would have to be as prevalent in our individual homes as television currently is. It is not, and I can’t imagine how long it would take for this to become a reality. The mainstream media attention it is given creates an opportunity for attention redirection on a global scale, potentially.

RG: You spoke about deflating some of the projections and claims of technology and the rhetoric of “distribution” and “network,” but let’s end in a place where you encourage folks to use tools…. ;)

DL: It is important to me always to translate what I am given into my own terms. In this way I examine the limits of what is distributed via mainstream media representation. In this process I find various strategies that wrestle with the same questions and varying strategies for how to deflate the rhetoric of distribution. It is a beginning, a reintroduction to allow a more realistic view of what is happening behind the hype. I can’t imagine coming up with a sound strategy to build work on without this more realistic view of practical mechanisms within a given industry, be it new media or biotech. Since the culture at large is rushing to also go through this process of translation, new media has a cultural currency that other forms of media do not. As a result, reflections on translating net-specific topics like the human genome are a beginning that I look forward to seeing expand. And I am optimistic that the route that this expansion takes will be unexpected, and not defined by companies distributing for monetary profit.

Interview with Diane Ludin, March 2001.

Rachel Greene, 2001. First published by Gallery 9/Walker Art Center for Harvesting the Net::MemoryFlesh.

Essay: Diversity.com/Population.gov, April 2001

A Tsunami of Data

When, in the early 1990s, the U.S. government-funded Human Genome Diversity Project (HGDP) drafted plans for a genetic database of some 4,000 to 8,000 distinct ethnic populations, it was met with a great deal of controversy and criticism. The stakes were raised even more when it was discovered that the HGDP had proposals for the patenting of the cell lines from several members of indigenous populations, all without those members' or communities' informed consent. Due to interventions by such groups as the Rural Advancement Foundation International (RAFI), the HGDP was forced to drop three of its patents. In 1996 it provided testimony to the U.S. National Research Council and has since drafted a document of “Model Ethical Protocols” for research, which emphasizes informed consent and cultural-ethical negotiation. Since that time, however, the HGDP has been conspicuously silent (it is now based at Stanford University, at the Morrison Institute for Population and Resource Studies), and, despite the flurry of news items and press releases relating to the various genome mapping endeavors around the world–both government and corporate sponsored–there has been relatively little news or few updates on the progress of the HGDP’s original plans.

Much of this curious disappearing act has to do, certainly, with the bioethical conundrums in which the HGDP has been involved, as well as with the combination of vocal critics such as RAFI, and the HGDP’s having been marked by the media and dubbed by its critics as “the vampire project.” However, while the HGDP as an organization may have slipped from science headlines, the issues and problems associated with it have not. Another, parallel development within biotech and genetics has emerged, which has more or less taken up the “diversity problem” that the HGDP had dealt with in the 1990s: bioinformatics. Bioinformatics involves the use of computer and networking technologies in the organization of updated, networked, and interactive genomic databases being used by research institutions, the biotech industry, medical genetics, and the pharmaceutical industry. Bioinformatics signals an important development in the increasing computerization of “wet” biotech research, creating an abstract level where bioinformatics can form relationships between bioscientific approaches to diversity and the fluctuations of the biotech economy. A driving economic force is finance capital, bolstered from within by a wide range of “future promises” from biotech research (software-based gene discovery, data mining, genetic drugs, and so on). The emphasis we are witnessing now in “digital capitalism,” to use Dan Schiller’s term, is an intersection of economic systems with information technology. As Michael Dawson and John Bellamy Foster show, this trend leads to an emphasis on a “total marketing strategy” that is highly diversified: consumer profiling, individualized marketing, “narrowcasting,” “push-media” and so on. Such trends are transforming biotech research as well. More often than not, the future of a research field within biotech can flourish or perish depending on the tides of stock values. In turn, those stock values are directly tied to the proclaimed successes or failures of clinical trials or research results. Most of the stock value of the biotech industry is an example of what Catherine Waldby calls “biovalue”: either being able to produce valuable research results that can be transformed into products (such as genetic-based drugs or therapies), or the ability to take research and mobilize it within a product development pipeline (mostly within the domain of the pharmaceutical industry).

These trends are worth pointing out, because they draw our attention to the ways in which race, economics, and genomics are mediated by information technologies. Genomics–the technologically-assisted study of the total DNA, or genome, of organisms–currently commands a significant part of the biotech industry’s attention. In economic as well as scientific terms, genomics has, for some years, promised to become the foundation upon which the possibility of a future medical genetics and pharmacogenomics would be based. As a way of providing a backdrop for Diane Ludin’s project Harvesting the Net::MemoryFlesh, what I would like to do here is to outline some of the linkages between biotech as an increasingly corporate-managed field, and the emphasis within genomics programs on diversification. Such research programs, which highlight types of “genetic difference,” demonstrate the extent to which culture and biology are often con-fused, as well as the extent to which both ethnicity and race are compelled to accommodate the structures of informatics.

Decoding Genomics

The recent rise in genomics projects, especially those geared towards unique gene pools and genetic markers, has impelled the hybridization of a new practice of statistics and medicine, combining studies in population genetics with new techniques in genomic mapping. This application of sociobiological studies of populations–what I’ll be calling “population genomics”–brings together a lengthy tradition in the study of populations, hereditary patterns, and inherited characteristics, with the contemporary development of large-scale genomic sequencing and analysis for clinical medicine. For instance, biotech companies such as deCODE, Myriad Genetics, and Oxford Biosciences Inc. are focusing on the genomes of populations with histories of low migration and a low frequency of genetic mixing (Icelandic, Mormon, and Newfoundland communities, respectively). Other companies, such as DNA Sciences Inc., are focusing on building a volunteer-based genetic health database to aid in the fight against disease. DNA Sciences' Gene Trust uses the GenBank model to archive medical, genetic, and health-related data (GenBank holds the public consortium’s human genome data). Still other companies and research labs are focusing on the minute genetic sequence differences between individuals–polymorphisms, SNPs, and haplotypes–that may be the keys to individual genetic susceptibility to disease, and, by extension, the key to the development of genetic drugs.

What all of these databases promise to provide is an extensive, computer-driven analysis of the genetic basis of disease, as well as assisting in the developments of treatments, cures, and preventive practices. In these projects, the database–both local and online–is a key technology. A database such as deCODE’s Icelandic Healthcare Database (IHD) is a good example, in that it brings together three types of genetic-medical data: (i) phenotypic (observable) health data and health-care information, (ii) genotypic data (genomic sequence), and (iii) genealogical and hereditary data (gene pool and statistical information). The IHD is both highly specified in its object (a genetically-isolated population) and widespread in its coverage (containing national health records, genealogical records, and genetic data). In addition, the IHD, as part of deCODE, is a business endeavor as well as a health-care one, and deCODE uses this product–here the product is information or database access–to forge productive relationships with governments (Icelandic national genealogical records) as well as with other businesses (especially pharmaceutical corporations). The scope and ambition of projects such as Celera’s or deCODE’s are, of course, made possible by advances in computing technologies, most of which are exorbitantly expensive, inaccessible to non-specialists, and which have a high learning curve. This, combined with the illegibility of genetic sequence data by non-specialists, makes the potentials of genomics largely out of the reach of the general health-care public, and it makes any informed critique or public debate challenging as well. For instance, in articles written by Kari Stefansson, CEO of deCODE, the potentials of informatics technologies transform science research from a linear, hypothesis-driven approach, to a semi-automated data mining agent that completes computations far beyond what was possible prior to the use of parallel processing computers and data mining algorithms. Both deCODE and other companies such as Human Genome Sciences have stated that they are in the business of information and discovery. The intersection of business approaches to genomics and the use of informatics-based tools means that science research is based in combinatorial techniques; the best pattern-recognition combinations will be of the highest value, the greatest assets.

In considering this complex of life-science business models, new computer tools, and a genomics-based approach to populations and disease, we can actually differentiate several types of strategies for databasing the body within contemporary genomics.

Universal Reference: Generalized human genome projects, such as that undertaken and first completed by Celera, emphasize their universality as models for the study of disease, treatment, and a greater understanding of life at the molecular level. They also highlight the backdrop against which all genetic difference and/or deviations from a norm will be assessed. Indeed, part of the reason genomic projects by Celera, Incyte, or the public consortium have received so much attention is that these projects are in the process of establishing the very norms of genetic medicine. Their practices and techniques themselves are the processes of establishing what will or will not exist within the domain of consideration for genetic medicine, and will or will not be identified as anomalous or central to genetic knowledge and genetic drug design. A company like Celera, though it assembles its sequences from a number of anonymous individuals, constructs one single, universal human genome database. That database becomes the model for all sorts of research emerging from genomics–proteomics, functional genomics, DNA diagnostics, and so forth.

Difference and the Subject: At the opposite pole of Celera’s universal model of the human genome is a field of research that deals with the minute, highly specific base pair changes that differ from one individual to another. Accounting for about .1 percent of the total genome (or roughly one million base pair variations per individual), these “single nucleotide polymorphisms” (SNPs) are thought to contribute to a range of phenotypic characteristics, from the physical markers that make one person different from another (hair color, etc.), to the susceptibility to single base pair mutation conditions (such as sickle cell or forms of diabetes). However, many SNPs are phenotypically non-expressive; that is, they are base pair changes that do not affect the organism in any way and are simply differences in code sequence. Specific projects, such as Genaissance Pharmaceutical’s HAP series of haplotype database tools, as well as the Whitehead Institute’s SNP database, focus exclusively on these minute base pair changes. These databases of individual point changes form linkages between variation within a gene pool and a flexible drug development industry that operates at the genetic level.

Stratifications: Between universal genome projects and individualized genetic medicine research, other genomics projects are focusing on collectivities within a universal gene pool. The projects from deCODE, Genaissance, Myriad, and others focus on genetically isolated populations. Often combining the usual genotypic data with demographic, statistical, genealogical, and health-care data, these projects are both new forms of health-care management as well as studies of the effects of disease within genetically homogenous groups. Projects such as deCODE’s IHD promise to be able to perform large-scale computational analysis on entire genomes, but in this they also threaten to fully abstract genetic data from real, physical communities. The population genomics projects take a genetics-based or genotypic view of race, and make connections to the functioning of norms within medicine and health care. In doing this they establish an intermediary space between the universality of the human genome project (which claims a uniformity under the umbrella of a distinct species) and the high-specificity of SNP or haplotype databases (which claim individual difference within a general category). This intermediary space is precisely the space of racial (mis)identification and boundary-marking; it is the space where bioscience forms collectivities, composed of individuals and united under a common species categorization. With their primary aim being medical, these population genomics projects are involved in the re-articulation of race and ethnicity itself as biologically determined, and they do so through the lens of bioscience research and corporate biotechnology.

In this schematic of different types of genomics projects, we can see an approach toward biological information that is far from a simplistic monocultural model, in which difference is marginalized, silenced, or pushed out of the domain of serious consideration. In fact, everything in genomics moves toward an inclusion of differences, but differences that can be accounted for by both DNA and information. From a scientific perspective, of course, difference or diversity is the cornerstone of traditional evolutionary theory, be it random or directed environmental influence. Likewise, from a business perspective, diversity is not only the key to creating more custom-tailored products (as in genetic drug design), but diversity also enables a more thorough knowledge-production of the population. This is a kind of niche biomarketing, in which information is extracted from a heterogeneous population, then selectively organized, and re-routed into research and product development.

Projects like deCODE’s IHD are prototypes for scientific-business practices that are indissociable from a consideration of race and ethnicity, themselves considered from a molecular and informatic perspective. They do two things: they work toward establishing new, more flexible sets of norms, both within biomedicine and from the point of view of business strategies, and in doing so they form new methods of population management and regulation.

The New Dotcom

In some ways, it is misleading when we talk about the biotech “industry”; in the most literal sense biotech rarely produces anything. Its specialty lies in modifications and recontextualizations of organic life. We can begin by outlining three main kinds of companies within the biotech industry:

First, there are the “pick-and-shovel” companies, mostly in the technology and laboratory supply sector, which provide the tools for research. Like the original pick-and-shovel businesses during the California gold rush, such companies implicitly believe that, while the actual genome may not yield any profits, the need for research technologies will. These companies are generally the lowest risk-takers, though radical new tools such as DNA chips are transforming the way in which research is carried out. An example is Affymetrix, which is one of the leading suppliers of microarrays, or DNA chips, for large-scale, efficient sequencing.

Second, there are the software and service companies, which operate mostly on the level of computer technology, software, and network applications. These companies often provide a counterpart to the pick-and-shovel companies by supplying the software tools necessary to complete the work done by the hardware. Such companies can offer software packages (such as Incyte’s “LifeSeq” sequencing and analysis software), they can offer data analysis services (these are mostly software companies), or access to a database on a subscription-only basis (as the private genome companies such as Celera are doing).

Finally, there are what we might call the product-makers, those companies–usually large pharmaceutical corporations–that take the information generated by the second group (say, the information generated by Celera on the human genome), and transform it into an array of products, services, and practical techniques. The most prevalent among these is Big Pharma, and the emphasis in such companies is on drug development and gene therapy-based drug treatments (or “pharmacogenomics”). Currently, the most prevalent test for the value of biotech research is the clinical trial, in which a genetically designed drug is put through an extensive series of tests before gaining approval by the U.S. Food & Drug Administration.

A consideration of these types of companies not only illustrates the degree to which biotech has become infotech, but it also suggests that the future success of the biotech industry is dependent on the ability to generate value out of the data collected from biological material. All of this is predicated on the assumption that biological bodies–tissues, cells, molecules, chromosomes, genes–can be unproblematically translated into data. Such a move indicates the degree to which biotech relies upon the notion of a stable “content” in the genome, irrespective of its material instantiation (be it in cells or in computer databases). It is in this sense that, for the biotech economy, the genome becomes a value-generator, through its ability to adequately perform in the organism (and thus the eagerness of biotech companies to gain patents on novel molecules). The biotech economy demands that everything within biotech have an informatic equivalent; it does not, like biotech research, demand that everything be translated into information, but it does demand a direct link between genetic bodies and relevant data.

The economics of biotech touches the population, not through direct genomic database management, but more indirectly through the commodification of such databases. As biotech becomes increasingly privatized, the database corporations such as Celera or Incyte will become the main bio-commerce brokers. At issue is not the buying or selling of databases, but the generative potential of genetic data; in such a case racial population genome databases, individualized SNP or genetic screening databases, and various animal genome databases important for human medicine will all become sources of a biopolitical management of selected collectivities.

Each and Every

As a way of approaching such issues, it might be helpful to consider Michel Foucault’s later work dealing with “biopolitics.” Although the term begins to appear with some regularity around the time of Discipline & Punish, Foucault later clarified its relationship to his other concepts of “bio-power” and “disciplines.” For Foucault, biopolitics is “the endeavor, begun in the eighteenth century, to rationalize the problems presented to governmental practice by the phenomena characteristic of a group of living human beings constituted as a population: health, sanitation, birthrate, longevity, race…” (Ethics, 73).

Roughly speaking, Foucault calls “biopolitics” that mode of politically accounting for “the population,” considered as a biological, species entity. This is differentiated, but not opposed to, “bio-power,” in which a range of techniques, gestures, habits, and movements, most often situated within social institutions such as the prison, the hospital, the military, or the school, collectively act upon the individualized body of the subject.

The fundamental difference here is not between the individual and society, but rather between individuating and collectivizing strategies, similar to what Foucault earlier called “dividing practices.” Biopolitics is, first, an organizational technology articulating something called the biological and species population–a collectivity of bio-subjects. Through a range of techniques and practices, it produces and collects knowledge of the population in the form of a manageable quantum of information. And biopolitics reproduces its continual and changing regulation of the population through a set of techniques and practices that insert this informatic knowledge back through the social-biological body of the population, culminating in a quantifiable, organizational entity that may be “touched” at a variety of points through a range of technologies. To these characteristics we might add a further extension, which is that contemporary population genome projects form complex hybrids of economics, policy negotiations, and high technology. In short, biopolitics is not exclusive to the mobilization of state forces that Foucault emphasizes (what he terms “governmentality”), but it brings to the forefront the multiple constraints set forth by an informatics-based view of the body.

Bearing in mind Foucault’s emphasis on the population as a biologically defined entity, we can outline several important factors in considering population genomics:

Informatics: Informatics is a key factor in considering contemporary power relationships in biotech, because it works as a medium for transforming bodies and biologies into data. But that data is understood in many different ways, not simply as the liquidation of the body. Genomic, population, ethnic, and SNP databases are just some examples of the variability of biological data. At its root the case of informatics brings up philosophical questions (what is the body if it is essentially information?), but preceding this at every point are political questions (how are the manifold differences in embodied communities encoded into data?).

Biodiversification: Biodiversity is a term that is most often reserved for debates concerning the preservation, conservation, or sustainability of natural resources, which depend a great deal on natural diversity. Most often biodiversity is opposed to transnational corporations, which take advantage of natural diversity to produce monocultures as product, in what Vandana Shiva calls “biopiracy.” In the context of genomics, biodiversity becomes a signifier for genetic difference, and the ways in which genetic difference gets translated into cultural difference. Biopiracy is not simply about the destruction of natural resources, it is about a complex re-framing of “nature” and the use of that diversity toward commercial ends. As Shiva states, the discourse of biodiversity is actually less about sustainability than it is about the conservation of biodiversity as a “raw material” for the production of monocultures. The same can be said of molecular biotech, especially in the case of genome projects, genome databases, gene banks, human tissue banks, DNA sample resources, and other instances of biopiracy.

Genethnicities: The point of controversy with many genome projects is the issue of genetic discrimination. With molecular genetics, a unique type of identification and differentiation has come about, in which individuals and populations can be uniquely analyzed and regulated through their DNA. This twofold process of molecular genetics (genetic essentialism) and informatics (population databases) paves the way for a new type of identification, and in some cases, discrimination. It is based not on race, gender, or sexuality, but rather on information (genetic information). Bioinformatics–an apparently neutral technical tool–thus becomes manifestly political, negotiating how race and ethnicity will be configured through the filter of information technology.

Managed Health: However, even when public projects attempt to assemble biological databases, there is still discomfort over the very process of sampling, extraction, and utilization of one’s own body for medical research. In the case of genomics, this is a very abstract process, but also a very simple one, moving from a blood sample to a DNA archive in a computer. On the one hand there have been disputes concerning the ownership of one’s own DNA, in which, for instance, a company like deCODE develops novel patents based on research done on individual human DNA samples. For this reason many companies require complex disclaimers, and they also make an important further distinction–between one’s own lived body and what is considered health-related data generated from a person’s body. This reduction of the debate to a distinction between blood and data may solve the patenting and ownership issue, but it still does not address the ontological difference between one’s own, proper body, and the genetic data extracted from one’s body.

Biocolonialism

Taking these issues into consideration, do genomics projects such as deCODE’s IHD, or Celera’s human genome database, form instances of a colonialist imperative, a kind of “biocolonialism”? If, traditionally, colonialism involves the forced appropriation of land and economy by one economically and technologically empowered collective over another disenfranchised collective, then biocolonialism presents us with a situation in which the bodies of the colonized are in fact the land and economy. As theorists such as Edward Said have pointed out, colonialism is not only economic and militaristic, but also a complex cultural encounter. How might this asymmetrical intersection of economies, power dynamics, and cultures translate into biotech?

Biocolonialism takes the molecular body and biological processes as the territory or the property to be strategically negotiated and acquired. Often, when governmental regulations stipulate, this is done through some type of informed consent, so that the individuals and/or community whose biological materials are being acquired are informed about the reasons and future uses of their bodies. At other times this is handled without such formalities, resulting in either biopiracy (a simple taking without any reimbursement to the community) or patenting (based on cell lines that are minimally modified). In order to conduct successful research, and potentially turn over great profits, the first thing needed is a large resource, which in this context means a biological sample suited for the particular type of research being conducted. For example, a cell line from an individual from New Guinea, from a collectivity known to have developed a resistance to certain forms of diabetes (in the example of the HGDP). Therefore, the territory overtaken by the biotech industry is the molecular body itself, contextualized through race and high-technology. In addition, biocolonialism doesn’t so much reconfigure the colonized economy, as it appropriates the molecular body as a type of condensed economy in itself. The same molecular body that is the territory for biocolonialism is also the primary value-generator, by virtue of its composition and operativity as a molecular organism.

Here In This Colony

As we’ve pointed out, the bioinformatics of database access is inextricably connected, in biotechnology, to software and subscription models for research, and this is where bioinformatics intersects with bio-capitalism, or the integration of genetic bodies into an advanced capitalist framework. Discussing the free-floating dynamics of late capital, Fredric Jameson notes that the self-referential feedback loops of finance capital propel it into a zone of “autonomization,” a virus-like epidemic that forms a speculation on speculations. For Jameson this has resulted in “the cybernetic ‘revolution,’ the intensification of communications technology to the point at which capital transfers today abolishes space and time and can be virtually instantaneously effectuated from one national zone to another.” It is this instantaneousness and total connectivity that has driven many labs (such as Celera) to fully incorporate advanced computing and networking technologies, and it is this integration of biotech with infotech that has brought companies such as IBM, Compaq, and Sun Microsystems into the life sciences. If, in the biotech industry, finance capital and laboratory research are interconnected, how does this transform the “wet” biological materials in the lab, the molecular bodies of life-science research?

One response is to suggest that it is in the unique, hybrid objects of the biotech industry–genomic databases, DNA chips, automated protein analysis computers–that genetic bodies and a “digital capitalism” intersect. In other words, the correlations between bodies and capital, which enables a biotech industry to exist at all, are currently mediated by computer and information technologies. The use of such technologies is predicated on the assumption that a range of equivalencies can be established between, say, a patented genetic sequence and the marketing of that sequence through genetic-based drugs.

What has become of the original issue put forth by the critics of the HGDP? Part of the problem is that the issues dealt with in the criticism of the HGDP have been handled in the way that criticism of genomic mapping and human embryonic cloning have been handled: they have been filed under the worrisome category of “bioethics.” As postcolonial critiques have pointed out, the HGDP came to a relative standstill because it could not reconcile Western scientific assumptions and intentions with non-Western perspectives toward “agriculture,” “population,” “medicine,” “culture,” and so forth. The gap in between the HGDP’s biocolonialism and those predominantly non-Western cultures that were to be the source of biomaterial for the HGDP database illustrates the degree to which “global” once again means “Western” (and, increasingly, “economic”). But it is equally important to note that biocolonialism need not be the familiar First World-Third World struggle that has characterized debates on post-colonialism recently. As we’ve seen, genomics projects articulate genetic difference according to a variety of standards (universal, individualized, groups) that in no way depend upon the marketing of biomaterials from indigenous collectives. If anything, biocolonialism will depend as much upon Euro-American health-care models as it will on the isolated or unique genetic reserves of indigenous populations. The development of DNA chips, genetic screening, genetic profiling, and medical genetics are just some examples.

One of the meanings of the decrease in the presence of the HGDP and the rise in bioinformatics developments and applications is that the issues of race and ethnicity have been sublimated into a paradigm in which they simply do not appear as issues. That paradigm is, of course, one based on the predominance of information in understanding the genetic makeup of an individual, population, or disease. When, as geneticists repeatedly state, genetic information is taken as one of the keys to a greater understanding of the individual and the species (along with protein structure and biochemical pathways), the issue is not race but rather how to translate race into information. In such propositions, race and ethnicity become split between their direct translation into genetic information (a specific gene linked to a specific disease predisposed to a given population) and its marginalization into the category of “environmental influence” (updated modifications of the sociobiological imperative, in which race and culture are accounted for by biology).

The biopolitics of genomic science is that of an informatics of the population in which cultural issues (ethnicity, cultural diversity) are translated into informational issues (either via a universal, generalized map of the human genome, or via individualized maps of genetic diversity). As Evelyn Fox Keller, Donna Haraway, and others have pointed out, information is not an innocent concept with regard to issues of gender and race. The questions that need to be asked of bioinformatics, online genomic databases, and genome mapping projects are not just “where is culture?” but rather, “how, by what tactics, and by what logics are technoscientific practices re-interpreting and incorporating cultural difference?”

Public Memory, Privatized Body

It is these intersections between populations, economics, and informatics that are performed in Diane Ludin’s project Harvesting the Net::MemoryFlesh. Combining a critical-artistic approach with a knowledge of Web technologies, Harvesting the Net fleshes out the connections between bodies and technologies in the biotech industry. Using her concepts of “wet code” and “dry code,” Ludin explores what she refers to as the “circumstance of the body” as it is interpolated between techno-hype and bioscientific anticipation.

From the technophilia evident in human genome projects (for instance, the New York Times has run a number of articles on how gene sequencing computers were largely responsible for the progress of the human genome map), to the many promises that biotech research narrates (any advertisement from a biotech startup illustrates this), the individuated body of the biomedical subject becomes a site of contestation. The debates over genetic privacy, patenting, and the promised medical benefits of biotech all extend from this individuated body, at once distributed through an array of databases, and condensed into a genetic profile.

In the midst of these tensions between wet and dry codes, hype and anticipation, Harvesting the Net inserts the figure of the artist as a kind of data-filtering module. Mainstream media reportage, science journalism, press releases, as well as images, are all routed through the critical lens of the artist, and re-purposed as an inquiry into the ways in which the body, for the biotech industry, is inextricably connected to questions of race and economics. Harvesting the Net shows us that the political approaches in biotech inform every aspect of its public face, from the images used in news articles and press releases, to the images depicting high-tech labs, to the professionalism of biotech websites that contain investment portfolios.

Using the “legitimate” materials from the biotech industry, Harvesting the Net recontextualizes the multi-medial language of the body, deploying it to articulate the wet and dry codes which are constantly transmitted and reformatted. Between media representations, computer databases, search engines, e-trading, and the molecular biology lab, Ludin asks us to consider how bodies might fulfill a “recombinant” functionality within the biotech industry.

Eugene Thacker

References

Brower, Vicki. “Mining the Genetic Riches of Human Populations.” Nature Biotechnology 16 (April 1998): 337-339.

Burchell, Graham, Colin Gordon, and Peter Miller, eds. The Foucault Effect: Studies in Governmentality. Chicago: Univ. of Chicago Press, 1993.

Celera Genomics: http://www.celera.com.

Chakravarti, Aravinda. “Population Genetics–Making Sense Out of Sequence.” Nature Genetics 21 (January 1999 supplement): 56-60.

Dawson, Michael and John Bellamy Foster. “Virtual Capitalism.” In Capitalism and the Information Age. Ed. Robert McChesney et al. New York: Monthly Review Press, 1998. 51-69.

deCODE Genomics: http://www.decode.com.

Enserink, Martin. “Iceland OKs Private Health Databank.” Science 283 (1 January 1999): 13.

Foucault, Michel. The History of Sexuality, Vol.I. New York: Vintage, 1978.

—. Ethics: Subjectivity and Truth. The Essential Works of Michel Foucault 1954-1984, Vol. I. Ed. Paul Rabinow. New York: New Press, 1994.

—. Power. Ed. James Faubion. New York: New Press, 2000.

Haraway, Donna. Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™: Feminism and Technoscience. New York: Routledge, 1997.

Howard, Ken. “The Bioinformatics Gold Rush.” Scientific American (July 2000): 58-63.

Jameson, Fredric. “Culture and Finance Capital.” In The Cultural Turn: Selected Writings on the Postmodern, 1983-1998. New York: Verso, 1998. 136-62.

Kahn, Patricia. “Genetic Diversity Project Tries Again.” Science 266.4 (1994): 720-22.

Keller, Evelyn Fox. Reflections on Gender and Science. New Haven: Yale Univ. Press, 1995.

Persidis, Aris. “Bioinformatics.” Nature Biotechnology 17 (August 1999): 828-830.

Philipkoski, Kristen. “Everybody Into the Research Pool.” Wired News (11 October 2000): http://www.wired.com.

RAFI: http://www.rafi.org.

Said, Edward. Orientalism. New York: Vintage, 1979.

Schiller, Dan. Digital Capitalism: Networking the Global Market System. Cambridge: MIT, 2000.

Shiva, Vandana. “Biodiversity, Biotechnology and Profits.” Biodiversity: Social & Ecological Perspectives. Ed. Vandana Shiva et al. New Jersey: Zed Books, 1991.

—. Biopiracy: The Plunder of Nature and Knowledge. Toronto: Between the Lines, 1997.

Stefansson, Kari and Jeffrey Gulcher. “The Icelandic Healthcare Database and Informed Consent.” New England Journal of Medicine 342:24 (15 June 2000).

Waldby, Catherine. The Visible Human Project: Informatic Bodies and Posthuman Medicine. New York: Routledge, 2000.

Eugene Thacker, Diversity.com/Population.gov, April 2001.

Eugene Thacker, 2001. First published by Gallery 9/Walker Art Center for Harvesting the Net::MemoryFlesh.