About this data:
TREEMAPS
A history of treemaps
- A treemap is a way of analysing large amounts of data in a small space
- In this example the top 500 supercomputers have been depicted in proportion to their speed as of 31 May
The data used to generate the interactive treemap visualisation come from a draft of the June 2010 TOP500 Supercomputing list, which ranks the world's fastest supercomputers twice a year. There may be minor differences between this draft and the final published list.
The graphic allows you to visualise the list by the speed of each machine; the operating systems used; what it is used for; the country where it is based; the maker of the silicon chips used to build the machine; and the manufacturer of the supercomputer.
The maps were produced using the Prefuse Flare software, developed at the University of California, Berkeley.
Monday, May 31, 2010
I keep hearing people throw around the word “curation” at various conferences, most recently at SXSW. The thing is, most of the time when I dig into what they are saying, they have no clue about what curation really is or how it could be applied to the real-time world.
So, over the past few months I’ve been talking to tons of entrepreneurs about the tools that curators actually need, and I’ve identified seven things. First, who does curation? Bloggers, of course, but blogging is curation for Web 1.0. Look at this post here: I can link to Tweets and point out good ones, right? That’s curation. Or I can order my links in a particular order. That’s curation. Or I can add my thoughts to those links, just like Techcrunch or VentureBeat do. That’s curation. Or I can do a video like Leo Laporte does and talk about those links. That’s curation. Or I can forward those links to you via email. That’s curation. The editor who sits in a big building at the New York Times or your local newspaper and chooses what content you’ll see in your newspaper is a curator. So is the page designer who decides which story is at the top of the page.
But NONE of the real-time tools/systems (Google Buzz, Facebook, Twitter, YouTube, Flickr) give curators the tools that they need to do their work efficiently. That’s why I’m writing this post: to try to get the industry to see that there’s an unmet need that, if met, would mean all sorts of things, from better scrapbooks for family photos and events to better news systems like what CNN or Huffington Post are trying to build on the Web. More on that after I get through the seven things.
As you read these, keep in mind that they were ordered (curated) this way for a reason. If you give me #7 without giving me #1 first, your tool will suck and you won’t be used by curators. If you give me #1 without #7, you’ll be way ahead of some tool that gives me #7 only.
This is a guide for how we can build “info molecules” that have a lot more value than the atomic world we live in now. First, what are info atoms? A tweet is an atom. A photo on Flickr is an atom. A conversation item on Google Buzz is an atom. A Facebook status message is an atom. A YouTube video is an atom.
Thousands of these atoms flow across our screens in tools like Seesmic, Google Reader, Tweetdeck, Tweetie, Simply Tweet, Twitroid, etc.
A curator is an information chemist. He or she mixes atoms together in a way to build an info-molecule. Then adds value to that molecule.
So, what are the seven needs of real time curators?
1. Real-time curators need to bundle. We need to be able to bundle certain tweets together. What do I mean by that? Well, let’s say a news event, like an earthquake, happens right now while I’m writing this post. Which are the best 10 tweets that describe that event? Can we bundle those together easily? Bloggers can bundle, but making Tweets look like Tweets is actually pretty difficult for normal people and even for geeks like me. Gotta take a screen shot of the tweet, upload that, then build an image tag in WordPress, then link that image up to the original tweet’s permalink. Whew. What a lot of work for something that should be simple. This could look like tagging, but calling it tagging is pretty limiting because tags won’t get you to full curation. One question: why can we bundle Flickr photos together by applying a tag to them, but we can’t bundle Tweets together by tagging Tweets? For instance, here are two photos I shot at Techcrunch’s offices showing their new TV team. How did I bundle those together? Simply by tagging them with the “Techcrunch TV” tag. Now, what if I could bundle in Tweets about Techcrunch TV? How about a YouTube video? How about other people’s Flickr photos? How about photos on other services like Smugmug or Picasa? How about Google Buzz items? Now you’re starting to understand why we need bundling cross-platform, so we can start pulling valuable atoms out of the real-time streams.
2. Real-time curators need to reorder things. Look at just those two photos. One is more important than the other. Now, imagine a bundle with dozens, hundreds, or even thousands of items. Why can’t curators put the most important ones at the top of the bundle, just like the New York Times front-page editor puts the most important news at the top of the page? Or, even better, why can’t we organize them into sub-bundles? During an earthquake, like the one in Haiti, some things happened on the first day, other things happened on the second day, etc. Why are they all in one flat stream? Or, look at Apple’s iPad launch. Some things are about the specs. Some things are about the people involved. Some things are about apps. Some things are about accessories. Why can’t we organize them all into sub-bundles? All curated in order of importance?
3. Real-time curators need to distribute bundles. Let’s say I put together a report for my bosses at Rackspace about what is happening at YCombinator (they just had a launch this week of a new crop of companies). Let’s say I built a bundle of not just the Techcrunch article I just linked to, but the Tweets from the event as well as the reports from other tech journalists, like those who work at GigaOm, who also had a report on that event. Now we need to distribute that bundle. Of course we’ll Tweet it. But that means a headline of less than 140 characters that must include a link to the permalink of the bundle. But what about Facebook? That can include a thumbnail. Google Buzz? That lets you upload items with longer headlines and multiple pictures. What about emailing this bundle around the way Chris Brogan emails his blog posts? Why can’t a curation tool be smart about distributing bundles and let you see and manipulate previews of how that bundle will distribute itself to the various places you need your bundles to go to reach the right audience?
4. Real-time curators need to editorialize. So, now we have a bundle of Tweets, YouTube videos, Flickr photos, Google Buzz items, Facebook status messages, et al. We’ve seen a new pattern in the world and now we want to explain our view of that pattern. For instance, I was at the YCombinator event this week. What if I wanted to add my two cents into the patterns other people saw? I might want to blog like here. Or add a video of my own. Or a Cinchcast (audio recordings done on my iPhone). Or add a bunch of photos I shot, like this one of Paul Graham mentoring his startups at that event with what they did wrong and right. But why did I just need to click “img” and copy and paste a URL to do that? A curation tool would let me drag and drop on my new iPad that I’ll have next weekend.
5. Real-time curators need to update their bundles. When the Haiti earthquake happened, the news story changed over time. We had more information and many, many more Tweets to bundle in, not to mention that the mainstream press started flowing stories into RSS and Twitter. If you can’t update a bundle, it will greatly limit our ability to communicate. Blogs are pretty bad at this. If I come back in two hours and update this post, you probably won’t see the update. In fact, not only can I update this post, but everyone who leaves a comment underneath is really updating it too. Yet early readers won’t see the later comments. They are missing part of the story. Of course, once you update you need to redistribute, i.e., let your Twitter and Facebook and Google Reader friends know that the story has changed and there is important new information on the bundle that you need to see.
6. Real-time curators need to add participation widgets. On some bundles you might want to ask your audience to take a poll. Some might want to add comments. Not everyone will. Seth Godin doesn’t have comments on his blog. Other bloggers might want to leave comments open for a few hours or a few days. Even here I’ve made it so you can only comment for 30 days on my blog posts. Why? Because of spammers and other bad actors. I can see a TON of widgets that would be available to get participation on bundles. These would be a great way for these systems to monetize, too. Would you pay $1 to add a poll to your bundle? I would.
7. Real-time curators need to track their audience. Look at this blog post. It has a TweetMeme button on it. That shows you how often this item has been retweeted. I would add such a button to every bundle I do. I’d also add Google Analytics and a few other things that would track where you’re coming from, what kind of engagement my items are getting, and even how relevant you are based on your own participation in the system. Don’t think that’s already happening? Look at the curation system Spigit built for large enterprises. I met with them yesterday, and their system does just that and is getting used by many of the world’s biggest companies, like Walmart and Starbucks.
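The first two needs on that list boil down to a pretty small data model. Here's a minimal sketch in Python of what I mean (the classes and the ranking function are hypothetical, made up purely for illustration; none of these services actually expose an API like this): atoms from different services get pulled into a bundle by a shared tag, and then the curator, not an algorithm, decides the order.

```python
# Hypothetical sketch of cross-platform bundling (need #1) and curator
# reordering (need #2). Not any real service's API.
from dataclasses import dataclass, field

@dataclass
class Atom:
    service: str                         # e.g. "twitter", "flickr", "youtube"
    item_id: str                         # the service's own id or permalink
    text: str                            # tweet text, caption, or title
    tags: set = field(default_factory=set)

@dataclass
class Bundle:
    tag: str
    atoms: list = field(default_factory=list)

    def reorder(self, ranking):
        # The curator supplies the ranking; the tool just applies it.
        self.atoms.sort(key=ranking)

def bundle_by_tag(atoms, tag):
    # Need #1: pull every atom carrying `tag`, whatever service it lives on.
    return Bundle(tag, [a for a in atoms if tag in a.tags])

atoms = [
    Atom("twitter", "t1", "Watching the TechCrunch TV launch", {"techcrunch-tv"}),
    Atom("flickr", "p1", "The new TV team at work", {"techcrunch-tv"}),
    Atom("youtube", "v1", "Unrelated cat video", {"cats"}),
]
b = bundle_by_tag(atoms, "techcrunch-tv")
b.reorder(lambda a: 0 if a.service == "flickr" else 1)  # photos first, say
print([a.service for a in b.atoms])  # ['flickr', 'twitter']
```

The hard part, of course, isn't the data model; it's that each platform would have to let a tool read and write these tags across its walls.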
Does such a curation system exist today? Yes, blogs, but blogs are HORRID for tracking this real time world. Just this post took me 30 minutes to bang out and that was after I had it in my head and I wrote it very quickly. Imagine I was talking about a real time event. The news is already 30 minutes old. We need a new system for real-time curation of what’s happening on my Twitter stream.
It’s interesting that no one has gotten close to even giving us the most basic curation tools. Why is that?
Why are companies ignoring our needs? In talking with CEOs at companies in the real-time space I’ve identified a few reasons:
1. Building cross-platform tools is difficult. Each real-time feed has different APIs and isn’t set up to interoperate with other real-time systems. Twitter has no API to share its feeds with Flickr. Flickr’s tags don’t have any idea what YouTube’s tags are. WordPress is blind to all of it. Etc., etc.
2. Fear of platform vendors. No one builds these kinds of features because they are scared that Facebook or Google will build these kinds of APIs and kill their businesses. Not unfounded, either. Tweetdeck built lists into its product and then Twitter came along and added lists in a way that was far more useful than the ones Tweetdeck built. So, companies like Tweetdeck and Seesmic choose to work on things that Twitter will be unlikely to do.
3. Assumption that these features are only going to be used by “weirdos or professionals or both.” I hear this all the time: “Oh, Scoble, you need these features, but what about normal people?”
The first two I can’t do much about. I agree that these are features that would be best built in at a platform level and have told many of the players to do that. But the third is provably false if entrepreneurs would do some customer research (shocking, but many San Francisco area social networking companies do very little real customer research, which explains why they so often screw up around privacy and fail to find new features that dramatically improve our lives).
Let’s consider the mother who has a 1-year-old son. She invites 30 of her friends to a birthday party for her son. They take videos, do Foursquare checkins, and one or two might blog about the party for their mommy blogs. Many take photos, and those photos end up scattered: some on Facebook. Some on Flickr. Some on SmugMug. Some on Picasa. Lots of them Tweet about the event, or post Facebook status messages, or put some Google Buzz items up, not to mention FriendFeed, Whrrl, Pip.io, or other systems where you can capture your life’s most interesting events.
Now, how does that mother build an online scrapbook of all the items that were poured into the system? Sure you can use a tool like Scrapblog but how do you get Tweets into that? It’s not a curation tool for the real-time web.
Let’s also take on what would happen once we move into such a molecular world:
1. Search would INSTANTLY improve. (I need a whole blog post on why this is so).
2. Trends would INSTANTLY improve. (You’d have real meta data about important events, look at just the ordering data that would be available to study).
3. Brands would be able to advertise on bundles. (Coca-Cola would love to advertise on bundles of movie feedback, for instance, especially on bundles curated by the best movie curators; they will never advertise on raw tweets, because the risk is too high that their brand would end up next to something nasty.)
4. A new monetization strategy would INSTANTLY become available for platform vendors like Twitter and Google Buzz.
5. Location services like Gowalla and Foursquare would be able to add real value onto bundles (showing location trends would be a key part of bundles, where they have no real play in augmenting “atoms” like Tweets or Flickr photos).
6. A new form of relevancy, credibility, and authority data would be available for systems to automatically present the best news. Look at how Techmeme appeared after blogging did. Imagine all sorts of new displays of best bundles that would now be possible. Even Techmeme would be able to recommend the best curators on topics, which would greatly improve the real-time news available there.
Anyone feel the need for this kind of new curation tool? Join in, please: curate this post and push it around your networks. Let’s see if we can find some companies who are working on providing this new kind of real-time curation system. I’d love to work with startups who are working on just this. +1-425-205-1921 or email@example.com, or leave a comment here and let’s work together in public.
Progress is a suspect word when applied to art.
But not always. In 1995, the painter Elizabeth Murray organized a group exhibition at the Museum of Modern Art. It was one of the museum’s series of “Artist’s Choice” shows, with contents drawn from the permanent collection. Ms. Murray was the first woman to participate in the series. She chose 100 or so pieces by some 70 artists and sardined them into tight quarters off the lobby. The artists she picked had one thing in common: they were all women. The show, “Modern Women,” was a MoMA first.
Now, 15 years later and nearly three years after Ms. Murray’s death, the museum’s gender demographics have changed significantly. This spring there are two permanent collection shows devoted almost entirely to female artists: “Pictures by Women: A History of Modern Photography” and the smaller “Mind and Matter: Alternative Abstractions, 1940s to Now.” They coincide with the publication of a big, deep, feisty book of essays, several years in the making, called “Modern Women: Women Artists at the Museum of Modern Art.”
In addition, the much-noticed Marina Abramovic retrospective is still on view (through Monday), as is a terrific installation piece, “Mirage,” by Joan Jonas. A trim Lee Bontecou minisurvey runs through the summer, as does a Maya Deren film program.
Add a handful of single works by women strategically installed through the premises — a Louise Bourgeois sculpture introduces the grand sweep of European Modernism on the fourth floor; one of Lee Lozano’s hammerhead paintings commands the fourth-floor public space — and the curatorial rethinking sparked by Ms. Murray’s show becomes clear.
MoMA’s photography collection has always been strong in female artists, sufficiently strong for “Pictures by Women: A History of Modern Photography” to almost live up to the sweeping promise of its title.
The show — 200 works by 120 artists — starts with a botanical print by the British photographer Anna Atkins from around 1850, when photography barely had a history, it was still so new. Because the curators — Roxana Marcoci, Sarah Meister and Eva Respini, all from the department of photography — have ordered the exhibition by date, we get a solid dose of late Victoriana in the opening room, with pictures by Julia Margaret Cameron and Gertrude Käsebier.
In Bb 2.0: a collaborative music/spoken word project. Play multiple videos to create an audio piece.
Play these together, some or all; start them at any time, in any order.
In Bb 2.0 is a collaborative music and spoken word project conceived by Darren Solomon from Science for Girls, and developed with contributions from users.
The videos can be played simultaneously -- the soundtracks will work together, and the mix can be adjusted with the individual volume sliders.
To see the full collection of work Prologue created for Iron Man 2, head over to Prologue’s site.
For those of you who’ve seen Iron Man 2 (has anyone not seen it?), you know that Robert Downey Jr. shares the screen with Mickey Rourke and Don Cheadle for some of the film’s most intense sequences. But the real co-stars—the real palladium in Iron Man’s chest—are the ubiquitous motion design elements laced throughout the entire narrative.
From heads-up displays to preternaturally responsive real-time 3D interfaces, Tony Stark is augmented as much by stunning graphics as by his trademark power suit. Motion graphics even play a crucial role at the film’s turning point, delivering a life-saving “eureka” moment to Stark just in the nick of time.
Interface, Meet Plot
While a huge number of people and crews worked on the visual effects shots that made Iron Man 2 the box-office smash it quickly became ($251M domestic and counting), we’re going to zero in on the epic work put forth by Prologue, who’ve shared a generous chunk of the process work behind their staggering slate of deliverables.
Not since Minority Report have interfaces played such a major role in a Hollywood blockbuster. For Iron Man 2, Prologue lifted screen design elements off of flat surfaces and into the three-dimensional world surrounding Tony Stark. As he struts through his secret lab, a virtual world of swirling data and wireframe plans pops forth from the genius playboy’s fingertips, creating a seamless dance between man and machine that elegantly echoes the symbiosis between Stark and his exoskeleton.
Element Discovery Sequence + Process
For the last couple of decades, movies have used secret data disks and high-pressure data transfers as important plot points. (Hackers, The Net and even Cloudy With a Chance of Meatballs all leap to mind, against a backdrop of hundreds more.) But Iron Man 2 is the first film I can think of that actually puts the visualization of data at center stage, emphasizing not the content but the form it takes. I’m not saying it was a brilliant bit of screenwriting, but it certainly was an interesting move from the perspective of motion design in mass media.
Yeah yeah, it’s “just” fiction, but it’s edifying nonetheless. It points to the very real fact that many of the challenges facing contemporary society can benefit from, or perhaps even be solved through, graphics. Or, more to the point, motion graphics.
Prologue’s challenge was to choreograph Downey Jr.’s finger-snapping, wrist-flicking bravura with an incredibly intricate graphical system that makes Tony Stark’s moment of realization feel like your moment of realization.
Lynette Wallworth is an Australian artist whose practice spans video installation, photography and short film. In her current body of work, she specialises in the creation of immersive installation environments that offer tactile gateways. Frequently, the works are developed in series to provide a sense of a cumulative process that changes over time. The environments are not passive spaces but rely on activation by the participant/viewer. The interplay between the moving image, sound, space and visitor as component elements in the ecosystem of a work is the artist’s primary focus.
Wallworth's work is about the relationships between ourselves and nature, about how we are made up of our physical and biological environments, even as we re-make the world through our activities. The activation of the work by the viewer becomes a metaphor for our connectedness within biological, social and ecological systems. She uses technology to reveal the hidden intricacies of human immersion in the wide, complex world.
Lynette Wallworth is represented by Forma Arts and Media Limited
Photography: Colin Davison, courtesy of the National Glass Centre, Sunderland, UK (R)
Sydney Festival ›
Beautiful Sunset ›
Duality of Light ›
Evolution of Fearlessness ›
Evolution of Fearlessness ›
Invisible by Night (solo exhibition) ›
Evolution of Fearlessness (solo exhibition) ›
Hold: Vessel 1 and 2 ›
Evolution of Fearlessness ›
Invisible by Night ›
Hold: Vessel 2, 2007 ›
Lynette Wallworth | National Glass Centre, UK ›
Hold: Vessel 1, 2001 ›
Damavand Mountain ›
Evolution of Fearlessness (various works) ›
Further information for presenters of Forma projects
Lynette Wallworth - Sydney Festival 2010
Sydney-based artist Lynette Wallworth presents her trilogy of interactive installations at the Sydney Festival 2010.
The trilogy was recently completed by Duality of Light, commissioned by the Adelaide Film Festival Investment Fund and presented at the Anne & Gordon Samstag Museum this year in the Festival's Art and Moving Image strand. It completes a series of works that together investigate powerful human emotions, such as grief, loss and the re-emergence of hope. Wallworth makes skillful use of video and responsive technologies to create deeply emotional and human experiences.
The trilogy was begun in 2004 with Invisible by Night, commissioned by Experimenta, Melbourne. The work was a response to the site of the City's former morgue and features a grieving woman who, when beckoned by a touch on the screen, wipes away the condensation to meet eyes with her visitor.
Evolution of Fearlessness premiered at New Crowned Hope Festival, Vienna in 2006. This work features the portraits of 11 women who together represent the human instinct for survival and hope. Following persecution in their native countries, these women have rebuilt their lives in Australia.
Duality of Light is an immersive environment to be experienced by one person at a time. It allows each visitor to go on a unique and singular journey through a soundscape that challenges our perception of space. In navigating the installation environment, visitors will encounter others whose surprising presence reveals the Duality at the heart of the work.
7-24 January 2010 | Sydney Festival | Sydney, AU
Lynette Wallworth – Duality of Light (2009)
Commissioned by Adelaide Film Festival Investment Fund (2009)
Produced by Forma
This project has been assisted by the Australian Government through the Australia Council, its arts funding and advisory body.
Lynette Wallworth – Evolution of Fearlessness (2006)
Commissioned by New Crowned Hope Festival, produced by Forma
Lynette Wallworth – Invisible by Night (2004)
Commissioned by Experimenta.org
This project has been assisted by the Australian Government through the Australia Council, its arts funding and advisory body.
Top left: Rocco Fasano
Top right: Colin Davison (courtesy of National Glass Centre and the artist)
Bottom left: Grant Hancock (courtesy of Samstag Museum of Art, Adelaide)
Sunday, May 30, 2010
Cloud Computing and How it Applies to Second Life
What is a “cloud”, and just what is it computing?
Wikipedia has this to say on the matter:
“Cloud computing is Internet-based computing, whereby shared resources, software and information are provided to computers and other devices on-demand, like a public utility”
For example, Amazon.com offers a prominent web hosting service known as EC2. EC2 allows paying clients to rent hosting on Amazon’s “cloud.” The software EC2 is hosting for its clients isn’t being run on any one computer in particular. Instead, whatever is being hosted by EC2 (a website, or what have you) is run on a virtual machine, backed by bucketloads of server computers maintained by Amazon.
I know, now you’re asking “What the heck is a virtual machine?” A virtual machine is basically a software program meant to mimic a full-blown computer. So, interaction with EC2 would be a simulation of working with a single computer. You would even be able to think of this virtual machine interface as the one computer at Amazon that is personally hosting all of your files. But this is a mirage, and behind the scenes everything is made up of a virtual sea of computation.
So there you have it: the cloud, a metaphorical way of providing different services over the Internet. When people say a web service is “on the cloud,” what they really mean is that there is a group of servers on the Internet running whatever software that service provides. There is no one cloud; there are many. The distinction isn’t really about where these computers/servers are located or who owns them, but about how they’re configured and being utilized.
To some people, the Second Life main grid sounds like it could be a cloud, but, personally, I wouldn’t consider it one. The main reason is that each SIM is dedicated to a specific server core, with a specific allotment of RAM (etc…), and that’s it. The grid’s servers can’t say to themselves, “Oh, well, there is less traffic on these SIMs over in this region, so let’s throw some of their resources at these other, crowded SIMs over here.” A SIM’s resources are static, and the main grid isn’t just a blob of computing power ready to mold itself to different computing situations. Thus answers the age-old question: If a tree falls in Second Life, and nobody is around to hear it, does it make a sound? The answer: a big ol’ YES. It would be fantastic if this weren’t the case, but I’m no expert on the ins and outs of the main grid’s design, and I’m sure there are a number of reasons why things are the way they are.
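To make that static-versus-elastic contrast concrete, here's a toy sketch in Python. To be clear, this is purely illustrative and is not how the Second Life grid or any real cloud is implemented: with fixed per-SIM allotments, a crowded SIM overloads even while its neighbors sit idle, whereas the same total capacity treated as a shared pool absorbs the spike.

```python
# Toy illustration only: fixed per-SIM resources vs. a cloud-style shared pool.

class StaticSim:
    """Each SIM gets a fixed allotment; idle capacity can't move elsewhere."""
    def __init__(self, cores):
        self.cores = cores

    def can_handle(self, load):
        return load <= self.cores

class ElasticPool:
    """A cloud-style pool: total capacity is shared among all tenants."""
    def __init__(self, total_cores):
        self.total = total_cores

    def can_handle(self, loads):
        return sum(loads) <= self.total

# Four SIMs with 4 cores each. One is crowded (load 6); three are idle (1 each).
sims = [StaticSim(4) for _ in range(4)]
loads = [6, 1, 1, 1]
print(all(s.can_handle(l) for s, l in zip(sims, loads)))  # False: the crowded SIM overloads

# The same 16 cores treated as one pool absorb the spike easily.
pool = ElasticPool(16)
print(pool.can_handle(loads))  # True
```

That flexibility (plus not caring which physical box your workload lands on) is basically what "in the cloud" buys you.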
Second Life Enterprise, however, is a different situation. Recently, Joe Linden was quoted saying: “…standing up a Second Life grid in the cloud is something that we already do; we have customers running instances of Second Life Enterprise in the cloud, and no dedicated hardware is required to stand up a private grid.”
Normally, Second Life Enterprise is meant to be a sort of mini-grid, out of the box, for companies to utilize within their own local communications network. A corporation would be provided the necessary hardware and software needed to integrate a Second Life Enterprise solution within its existing IT network. It would then work alongside the standard office network offerings, such as email or intranet FTP, all behind the company’s Internet firewall.
What I think Joe Linden was referring to was the doings of a few Second Life Enterprise clients. Some companies don’t have a basement full of equipment dedicated to IT, or the resources to justify upgrading their office’s internal IT infrastructure to handle the hardware elements of Second Life Enterprise. I mean, in this day and age, a smaller business’ IT setup could feasibly consist of a broadband connection and a wireless router. So, in some cases, it may be more cost effective to have the server side elements of Second Life Enterprise hosted on rented server space (the “cloud”). I didn’t know Linden Lab was allowing that, but hey, sounds pretty cool to me....
The first edition of the (S8) Peripheral Film Festival, to be held next June 3-6 in A Coruña, Spain, aims to 'combine the new cinematographic trends... with the temporary and historical trail that the cinematographic format has been providing since its appearance'. Over its four days, the newly-born festival packs in an incredible programme featuring partial retrospectives of Iván Zulueta's and Eugenio Granell's s8 film works, a session of Spanish contemporary s8 experimental films focusing on the work of David Domingo, video installations by Maria Cañas and Alberte Pagán, and a 16mm projector performance by Bruce McClure.
You can access the full programme here.
The Aquatic Thought Foundation
The Aquatic Thought Foundation is a privately funded research organization dedicated to the exploration of human-dolphin interaction. Since 1989, AquaThought has studied the neurological impact of close contact with dolphins on human subjects and the related therapeutic phenomena. Our research in neurological imaging has led to the development of MindSet, a low-cost neuro-mapping electroencephalograph program which makes advanced EEG research affordable. Our work has also led to the development of Cyberfin, a virtual reality simulator which will bring virtual dolphin contact to a massive audience.
Aquatic Thought, in conjunction with Dolphin Discovery, hosts the annual International Symposium on Dolphin-Assisted Therapy, a conference focused on the exchange of research, methodologies, and ideas, and the application of neurotechnology and virtual reality in dolphin-assisted therapy.
Aquatic Thought is a founding member of an organizational alliance which has dedicated itself to the exploration of dolphin-assisted therapy and the application of resultant neurotechnology. Spearheaded by Octavio Coleman Esq., this effort in forward-thinking, ecologically conscious development will yield EcoPlex, the first dolphin interaction center in the world to focus exclusively on dolphin-human interaction. In conjunction with the Jejune Institute, the Aquatic Thought Foundation has created Aquatic Thought Labs, L.L.C. Aquatic Thought Labs will assimilate the products and technologies of the Aquatic Thought Foundation and will commercialize the neurotechnological developments of the dolphin interaction center.
The EcoPlex Center, to be located in San Jose, CA, will facilitate dolphin-assisted therapy programs, dolphin interaction research, adjunct therapeutic modalities, and research and development in related neurotechnology and virtual reality enterprises. Aquatic Thought's chairman, Blair Lucien, will chair the Dolphin Center's research committee.
Aquatic Thought Labs will produce and market the Aquatic Thought Foundation's commercial projects: MindSet, a low-cost neuromapping EEG system, and CyberFin, an immersive dolphin encounter simulator. Aquatic Thought Labs will act as the research and development arm of EcoPlex and will resell any resultant products, as well as all above-mentioned products, under the Aquatic Thought Labs brand.
Cyberfin Immersive Experience
Cyberfin was previewed at SIGGRAPH 94 and was subsequently exhibited at the Third Street Promenade, in Santa Monica, CA, and at the 1996 Summer Olympics, in Atlanta. D. Frerichs, moderator of the Internet SIGGRAPH Review said, "I would like to congratulate Cyberfin's creators for developing an immersive environment which actually triggered an emotional response... you get up with the biggest smile on your face. It was a great achievement in taking technology beyond crude hardware and turning it into a medium of expression."
Saturday, May 29, 2010
Remember earlier this month when the LG Ally VS740 smartphone was announced and there was a tie-in with the Iron Man 2 movie? Now that the phone is no longer in "Pre-Order" status and is being shipped out, we know more of the details. Customers who purchase the LG Ally, enV Touch, or Chocolate Touch can visit the LGIM2 web site with their PC, click on the NAV icon and then Gift with Purchase. You're then prompted to enter your contact information, as well as the UPC Code and MEID ESN number from the Ally, enV Touch, or Chocolate Touch. After you submit it, you'll have to wait for a special Limited Edition (of 7500) LG Iron Man 2 Comic Book to arrive in the mail.
Once you have the Comic Book, you can use the LG Ally to visit the LGIM2 web site in the phone's web browser and download a special application for the phone. (The app is only for the Android-based Ally and will not work on the enV Touch or Chocolate Touch.) Once it's installed, you can launch the app on the Ally and point it at the front cover of the Comic Book, which will show a 3D Iron Man suit hovering around the screen on the phone. You can move it around or zoom in and out by moving the phone closer to or further away from the Comic Book.
If you happen to have a web cam on your PC, you can download two computer programs from the LGIM2 web site. The first is Iron Man Yourself, where you hold the front cover of the Comic Book up to the web cam, which transforms you into Iron Man on the monitor. The second is the Sky Thruster Game, which uses the back cover to move Iron Man around in the game.
The Augmented Reality app for the LG Ally and the two computer programs are pretty cool, and so is the Limited Edition Iron Man 2 Comic Book. The only thing we feel is missing is a true "Iron Man 2" version of the LG Ally, like last year's Transformers LG Versa, with a special theme of wallpapers and sounds.
I recently found myself wondering what a video game might look like in the form of a stop motion animation. While a normal person's response to such a question would of course be "who gives a shit?" I possess few of the qualities typically associated with normalcy and was irrevocably compelled to find out. This is the result.
Also, I'd like to think I'm the first person to be inspired by Michel Gondry and R-Type on the same project.
Intriguing pre-launch strategy:
"Entry: Azureus Rising
Pronunciation: Azh•ur•ee•us Rise•ing
1. Proof of concept test for a feature film trilogy. 2. The story of an unlikely and troubled teen forced from his home planet across the galaxy on an epic journey. Eventually to return reborn as the freedom fighter known as Azureus.
Friday, May 28, 2010
“3-D is not only the future,” said Emily Carr President Ron Burnett. “It’s going to be the base of a transformation in how we view images, in how we relate to entertainment, and in how we live our lives. Because those little telephones that you carry around with you will also be 3-D.” Burnett noted that the next phase of the technology will involve 3-D screens that do not require special glasses.
The British Journal of Photography asked Chris Killip, Ute Eskildsen, Gerry Badger, Jeffrey Ladd and Yoko Sawada to select the best photobooks of the last 25 years.
The critics chose Masahisa Fukase’s “Ravens” as the best.
The best photobooks in 25 years
An obscure masterpiece is chosen in our critics’ poll of the best photobooks of the past 25 years.
Few readers will have heard of – let alone seen – Masahisa Fukase’s 1986 book, Karasu (Ravens), first printed by Sokyu-sha, a Japanese publisher based in Tokyo. The original, and two further editions, are out of print, yet many regard it as a modern masterpiece, and copies of the first edition change hands for more than £2,000.
And now it has been named the best photobook of the past quarter century in BJP’s critics’ poll, run in conjunction with the And/Or Book Awards, which is celebrating its 25th anniversary.
Photographed on a trip to his birthplace in northern Japan following his divorce, 10 years before it was published, the book is a mournful reflection on Fukase’s past relationship, but it has also been interpreted as an allegorical critique of modern industrialised society.
Largely shot through the windows of a train, focusing obsessively on the flocks of ravens he observed along the way, the images are described as “an aesthetic tour-de-force” in The Photobook: A History, Volume 1, edited by Martin Parr and Gerry Badger. “Even though Fukase made his pictures in bad light and bad weather, never bothering with technical niceties, the results are both luminous and beautiful,” say the authors. “He enlarges tiny portions of his negatives, pushing at the limits of legibility. One climactic image of silhouetted birds in formation, wings outstretched against a grainy sky, metamorphoses into a wire news service image of overhead warplanes, a significant, and traumatic image for postwar Japan.”
Adding to its cult status is the tragic fate of the photographer, who fell down a flight of stairs five years after the book was published and has remained in a coma ever since.
Karasu was republished by Bedford Arts in the US that same year, 1991, under the title Solitude of Ravens, and more recently in 2008 by Rat Hole Gallery in Tokyo. Mark Haworth-Booth showed images from the series in Photography Now, the 1989 show he curated for the Victoria & Albert Museum.
It was one of four books that received two nominations in our critics’ poll, for which we asked Chris Killip, Ute Eskildsen, Gerry Badger, Jeffrey Ladd and Yoko Sawada to select their top five choices (presented below). Nan Goldin’s Ballad of Sexual Dependency, also published in 1986, was a close runner-up and, as Killip points out, it was a remarkable year for photobooks, which included the publication of Martin Parr’s The Last Resort and Paul Graham’s Beyond Caring.
British Journal of Photography
Masahisa Fukase’s Ravens: the best photobook of the past 25 years?
Thursday, May 27, 2010
Participatory Chinatown is a 3-D immersive game designed to be part of the master planning process for Boston's Chinatown. You assume the role of one of 15 virtual residents and you work to complete their assigned quest - finding a job, housing, or place to socialize. But look out! Sometimes language skills, income level, or other circumstances can make your task more challenging. Whatever your experience, you'll then be tasked with considering the future of the neighborhood by walking through and commenting on proposed development sites. Every one of your comments and decisions will be shared with real life decision-makers.
James Ewing/Courtesy Park Avenue Armory Christian Boltanski, “No Man’s Land,” 2010.
Clothes may perform as fashion statements but that’s hardly all they’re good for. On a live body they also shelter and expose, enhance and diminish, challenge and flatter, instantly telegraphing taste, class, style and sense of self. Discarded garments tell another story – especially when 30 tons of them are tossed willy-nilly into a twenty-five-foot-high, bonfire-like pyramid surrounded by a mass grave of 60,000 pieces of apparel in every size, shape and color. Then they speak for the dead.
That is just what they are doing in “No Man’s Land,” a formidable reclamation of lost souls created by the French artist Christian Boltanski for the Park Avenue Armory’s cavernous Wade Thompson Drill Hall. To enter it, visitors must skirt a 66-foot-long wall of rusted biscuit tins labeled with numbers, suggesting an old crypt of anonymous remains. Speakers mounted on vertical steel beams that define 45 rectangular plots of clothes spread across the hall’s 55,000-square-foot expanse emit a roaring soundtrack that Boltanski culled from 45,000 recordings of human heartbeats he has assembled over the last several years.
Sixty of the heartbeats play at a time in a syncopated rhythm that varies like footfalls, tripping over one another or colliding as one moves across the floor. Every now and then there is a loud boom. It’s eerie. Overhead, meanwhile, the bright red talons of a construction claw suspended from a crane slowly descend to the mountain of clothing at the hall’s center, opening wide to clutch a clump of the forlorn garments and then ascending back up like a carnivorous bird with its prey. But as soon as the claw reaches the ceiling, it drops the clothes back to the pile, only to begin its descent once again.
“It’s like a symphony,” Boltanski said when I visited, adding that the heartbeats I was hearing were all from Swedes. As we watched the claw approach the pyramid, it clapped its jaws as if licking its lips before moving in for the kill. “I love when the clothes drop,” Boltanski said. “Because then they are free.”
In his installations of found photographs, books and documents derived from prodigious research, Boltanski, 65, has made a career out of reviving personal histories lost to time and collective amnesia. Though he has archived the names of artists who exhibited in a hundred years’ worth of Venice Biennales (most forgotten now), his memorializing art more often relates to the Holocaust. The reference to death camps is unmistakable in the random scatter of coats, sweaters, dresses and suits lying on top of each other in “No Man’s Land.” But they just as easily call to mind the catastrophic effects of natural disasters like the recent earthquake in Haiti, or the attack on the World Trade Center in 2001.
As it happens, the only disaster that brought these particular clothes to New York was economic. They came from a bulk supplier in New Jersey stuck with tons of stuff no one wanted. (At the exhibition’s end, they’ll go back there.) When Boltanski erected a different version of the work at the Grand Palais last January, he ordered the clothes from a flea market. It looked more like a jumble sale there, he said. Being French, maybe the clothes were more stylish. In New York, they only suggest the spirits of the very recently departed.
Yet the exhibition is more poignant than it is depressing. First of all, the wall of biscuit tins is quite stunning, the way ruins can be. And the heartbeats add a living pulse that only becomes more personal when spectators enter a room where a clinician records their own heartbeats through a stethoscope while they listen through headphones. After that it becomes impossible to distance oneself from the installation and the steady pace of its random selections.
One might think Boltanski is obsessed with death. But dying remains a fact of life as well as art. For the surprisingly jolly Boltanski, it is ever-present — not because he is ill but because he has bet against it with a Tasmanian collector and gambler, David Walsh, who is paying him a handsome stipend to film every moment he spends in his studio until he dies. If he expires in less than eight years, Walsh will pay the artist’s beneficiaries a discounted price. If Boltanski lives longer, the collector will pay through the nose for the footage, which is currently being transmitted daily to a cave the collector owns in Tasmania.
“Art is not meant to give pleasure but to ask questions and make people think,” Boltanski said when we talked. All the same, for the 2011 Venice Biennale, where he will represent France, he intends to make what he called a happy piece, because, he said, “Life is also beautiful.”
Death can come at any moment, that’s for sure. Fortunately, the Park Avenue Armory has a lot of exits.
“No Man’s Land” is at the Park Avenue Armory through June 13.
NEW YORK (AdAge.com) -- The New Yorker wants to let readers pay once for digital access across the iPad, the Kindle and other platforms, hoping to improve on the current industry practice of charging even subscribers for each edition on each device.
"This is going to evolve," New Yorker editor David Remnick said during a panel on taking print brands online, convened by Conde Nast Digital partly in an effort to tell Conde's digital story more aggressively. "We're going to have a situation where if you pay us X dollars, you can have us in any form you like."
Magazine publishers have been excited to sell iPad editions, seeing them as a promising way to finally wring circulation revenue from digital media -- revenue the web has not delivered for most titles. But subscribers would appreciate a way to access brands' content wherever it appears without feeling nickel-and-dimed. And the current digital pricing model in the magazine business punishes existing subscribers....