The Art of Research

Research scientists often claim that they aren't creative. They say this even though the work they do produces something that didn't exist when they started. The truth is, they are highly creative. Their work inspires my own.

What they really mean is that they aren't "visual". Of course, that isn't true either. What they really mean is that they can't draw. Drawing trains a person to imagine and work toward a final aesthetic without much guidance: "I could see it in my head." Drawing lets me work through all manner of design problems in a way that lets others see and comment on the decisions I'm making. It becomes a very collaborative activity. Scientists theorize and artists visualize.

If I'm honest, I've been doing pretty much the same thing since I was just a kid. Art is the skill I have, and art is the skill I use to earn a living. Aside from a few forays into the worlds of construction and dishwashing, I've been employed as a designer of one sort or another since the '90s. From then until now, my goal hasn't changed: explain whatever my client wants explained as efficiently as I can.

In 2002 I began working with scientists and engineers at the company operating IKONOS, an Earth-observing satellite. Initially, I was hired to help market their imaging products. As we continued to work together, it became clear that they were having trouble explaining certain concepts. It turned out I could illustrate those concepts.

Initially, I created simple infographics showing the satellite in various situations. To depict the satellite well, I built a fairly realistic 3D model of it. The use of 3D models in the illustrations had a side benefit: the same models could be repurposed for animations. As primitive as they look by modern standards, those animations were used in a few national news segments, including an interview with then-President George W. Bush.

The animations led to an interactive CD-ROM exploring IKONOS, its products and the science behind them. In it, satellite imagery was combined with other data to produce detailed depictions of landscapes, use cases and collection methods. The CD-ROM was well received. So well, in fact, that I was invited to participate in the Geobook project. That was my first real introduction to user interface design. I had experience working on games and CD-ROMs, but this was more complex. For its time, Geobook was a novel way to look at pictures of a location within the context of a map.

Those experiences led to more work visualizing everything from the global spread of diseases to the way a nuclear reactor works. In 2003, I answered an ad for a graphic designer with multimedia experience. FXPAL, a Silicon Valley research lab, needed someone to lead a group of designers producing artwork for an experimental multimedia platform. Even a small lab produces a fantastic number of ideas. I quickly found myself jumping from one project to another. By 2004 I was a full-time member of the staff.

At that time, FXPAL was almost exclusively conducting research into software solutions for various forms of media creation and consumption. The focus was mainly on multimedia documents in the workplace. From the beginning, my own work fell neatly into two categories:

1) Make a video or illustration that explains a concept.

2) Make a research prototype more usable.

Task #1 was made much easier by my colleague at the lab, John Doherty. John had been a professional cameraman and electrician in Hollywood. I studied film in school, but all of my work experience was with video, and fairly unorthodox at that. With his help, I've learned to incorporate all of the 3D and illustrative skills I can muster into "vision videos"—videos that describe not just a technology, but an imagined application of that technology.

The illustrative work includes everything from detailed concept renderings to icons that encapsulate the intent of the research. I've drawn so many people interacting with so many screens... 

For Task #2, I use classic graphic design principles to produce static or interactive software prototypes that distill research into usable User Interfaces. More often than not, my contributions help scientists refine their ideas. To me, it's all about communication. Does the UI tell the user what they need to know? Can the user tell the UI what it needs to know?

FXPAL, like all research labs, has evolved along with the technologies it investigates. Today we build as many devices as applications. This has led to an unexpected evolution of Task #1. In the past I might produce some concept art based on a prototype in our lab. As the illustrations become more refined, I generally build 3D models of the imagined devices.

Before the "maker revolution", my 3D models existed only to produce still and animated artwork. Now, I'm being asked to actually build some of the things that I illustrate. 

In 2014 and 2015 a group of researchers and I worked on a robotic telepresence device called Jarvis. The same 3D files that I created to make illustrations and animations of the device were repurposed to laser-cut and 3D print the pieces used in its construction. We went through nine different iterations of the design, but we only had to build three physical prototypes. The 3D renderings and animations improved the actual physical design, and vice versa. Any time we needed to build a physical prototype, we already had refined 3D files.

In 2016 I produced a series of concept renderings depicting telepresence and document sharing devices. One of these illustrations featured a floor lamp design.

This design was actually something that FXPAL could use as a platform for evaluating the technology. I was asked to build one. This involved designing and 3D printing various pieces that attach hardware to the lamp's frame. I also worked with a talented metal fabricator and a company that makes custom lampshades. This resulted in a simple, highly customizable test-bed for a collection of related technologies.

Drawing lets me work through designs very quickly. It enables me to share my work as I go. I make dozens of sketches, often while I'm meeting with the researchers involved. This way, most of the engineering problems have been resolved before I ever start working on a 3D model. I suppose this has become my Task #3.


Research scientists come up with all sorts of crazy ideas. Artists can visualize and build those ideas.

Artists come up with all sorts of crazy ideas. Research scientists can visualize and build those ideas.

Both statements are true.




Earth Day at NASA

I've been invited to work on an interactive Earth Day exhibit for the Silicon Valley SimCenter and the NASA Sustainability Base. I'd like to design a space, or spaces, where people can interact with the amazing images that NASA captures. NASA has released beautiful pictures of our planet on past Earth Days.

I spent a few hours sketching a space to showcase these images. Most of the illustrations are set within the NASA Ames Visitor Center, a very cool building, but probably not the final location of the exhibit. The actual location will probably be the lobby of the Sustainability Base.

I like the notion of people using augmented reality (AR) apps to see additional content that is tailored to suit them. One visitor might be interested in global weather patterns while another is interested in scenic photography. AR would provide both visitors with a cool personalized experience.

Augmented Reality

FXPAL, the research lab I work for, is developing a set of technologies called Tabletop Telepresence. Put simply, it's a system that enables video conference participants to share paper documents and other physical objects more naturally. It's made up of cameras, projectors and a system for controlling everything. Here's a practical example: I can present an English document to the system, and it is scanned, translated and then projected as a Japanese-language document in another location, with the original page layout preserved. This allows my colleagues in Japan to read, interact with and even print their own copy. Our lab also researches other advanced telepresence technologies. Another group of researchers and engineers at FXPAL is exploring robotics and methods for very accurately determining locations within a room or set of rooms.
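That scan-translate-project flow can be sketched in a few lines. This is a toy stand-in with invented function names and a toy dictionary lookup in place of real OCR and machine translation; the actual system also drives cameras and projectors.

```python
# Toy sketch of the Tabletop Telepresence document flow: take a scanned
# page, translate each text block, and re-render it with the original
# layout preserved. Names and data structures here are invented.

def translate(text, dictionary):
    # Stand-in for machine translation: word-by-word lookup
    return " ".join(dictionary.get(word, word) for word in text.split())

def share_document(scanned_page, dictionary):
    # Each block is (layout_box, text); keep the box and swap the text,
    # so the projected page matches the original layout.
    return [(box, translate(text, dictionary)) for box, text in scanned_page]

page = [((0, 0, 200, 40), "hello world")]            # one OCR'd text block
en_to_ja = {"hello": "konnichiwa", "world": "sekai"}  # toy dictionary
print(share_document(page, en_to_ja))
```

The layout boxes pass through untouched, which is the whole point: the Japanese page prints with the same structure as the English original.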

I've incorporated many of these ideas into the next few illustrations. I'd like to provide visitors with a way to share messages with people at other locations. These illustrations depict an exhibit where placing a message under a document camera sends it to other exhibits to be translated and projected. When creating messages, users can also ham it up for a video camera. These video clips would be associated with their messages. Later, if another visitor touches a projected message, they'll see the video on a large screen. The screen could also cycle through clips.

Tabletop telepresence components of the exhibit

I incorporated robots into some of these illustrations. These robots may function as mobile projectors, adding information overlays to the content of the exhibit. They may act as mobile telepresence devices, providing a way for people in distant locations to visit the exhibit. Robots like these would rely on technologies being developed by FXPAL and other labs to navigate the space autonomously and/or be easy for remote participants to control.

Another view of the exhibit that includes telepresence robots  

A few hours before our first meeting I sent these sketches to everyone involved. I got some great feedback from people representing the following points of view: the Silicon Valley SimCenter's goal is to give humanity tools to manage the planet's resources more wisely; NASA's Sustainability Base is one of the most energy-efficient buildings ever constructed; Don Kimber's WorldViews is an image and video visualization tool that's in the very earliest stages of development. This input led to a whole new idea.

Sketched out very quickly, this decision-making game is made up of a tablet/smartphone app and a shared display (the globe in the drawing). People using the app adjust sliders that reflect their environmental impact. The goal of the app is to lower an individual's impact, improving the overall health of our ecosystem. A combination of AR and projections will enable visitors to see the overall health of the environment as well as their specific impact. Beyond that, the app might have an "at home" mode that helps people track their impact over time.
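One way the shared globe might combine everyone's sliders, sketched with made-up numbers; the scoring formula is a placeholder, not part of the actual exhibit design.

```python
# Hypothetical scoring for the shared globe display. Sliders run from
# 0 (no impact) to 1 (maximum impact); the globe shows the inverse as
# an overall "health" score. Weighting and scale are placeholders.

def ecosystem_health(visitor_impacts):
    if not visitor_impacts:
        return 1.0                      # no visitors yet: show a healthy globe
    average = sum(visitor_impacts) / len(visitor_impacts)
    return round(1.0 - average, 2)

print(ecosystem_health([0.2, 0.5, 0.8]))  # three visitors' slider values
```

As visitors lower their individual impact sliders, the shared score rises and the globe visibly "heals", which is the feedback loop the game depends on.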

A very quick sketch of an interactive globe

These ideas will continue to evolve. If you'd like to share an idea of your own, or participate in this 2017 Earth Day activity, please leave a comment below.

Gender Icons

Recently, I was asked to create a set of icons that represented age and gender. I started with the classic AIGA restroom symbols.

The age brackets were pretty standard: 0-12, 13-19, 20-37, 38-63 & 64+. Assuming the AIGA icons represent the 20-37 age group, the task was to rework them until they represented each of the other age categories. Every icon needed to work within the set, but also on its own. For example, a smaller version of the standard icons wouldn't do a good job of representing a child. You'd probably get the idea if you saw it next to an adult icon, but separately it just looks small. Anyway, I came up with a fairly functional set.

This project got me thinking about those original AIGA restroom symbols. They work really well in a world where gender is binary and iconography can leverage classic stereotypes. Increasingly, this isn't true. The standard solution for depicting gender neutrality is to cut the classic symbols in half and stick them together. This modified symbol is often used for gender neutral restrooms.

Depicting gender neutrality is important, but maybe depicting the variation within each gender is important too, especially where restrooms divide people into binary gender groups. A person with male genitals may identify as female. How can the sign on the door make her feel welcome in a female restroom? The LGBT community has a symbol that represents transgender people. My first instinct was to try to blend the two. Combining this symbol with the AIGA stick figures could express inclusiveness and neutrality. When placed side by side, the modified figures still depict gender neutrality, but without the Glen or Glenda connotations of the split-and-joined symbol. Separately, the figures lead people to choose the restroom that best matches their gender identity.

These symbols are becoming awfully complex for bathroom signage. Not only that, the more complex and "inclusive" the symbols become, the more obvious it is that groups have been left out. To me, both versions feel antiquated, like when your liberal uncle starts talking about legalizing grass. If we're really going to adopt gender neutrality, maybe there's a better solution. A simpler solution. Maybe a pictogram isn't the right place to depict another human being's inner life.

Maybe the sign on the door should just represent what's on the other side. 

Want to see one of the "Best 10 sites ever created on the Web"? **

**According to a long-forgotten issue of the long-forgotten Microsoft Magazine.

This is my 1997 Design for

Here it is: vintage '90s web design. I was the lead designer for the Rock and Roll Hall of Fame and Museum's website from late '96 through the summer of '97. The same basic design was used until early 2000. Back then, I was working under the multimedia director of a Cleveland-based design firm called Vantage One Communications Group. Not long after the site went live, it was modified a bit. I remember being very upset by the changes. I felt like the revised version compromised my design, my rock & roll fantasy. Which is funny, since I ended up working for a pretty bad company a few years later. But anyway...

Rockhall.com as it appeared from 1997-2000.

I've included both versions of the homepage in this post to illustrate a point. I cared about this site. The people at the Rock and Roll Hall of Fame cared about this site. We all worked hard to produce something that would be informative and cool to look at. Today, both versions look so antiquated that it's hard to believe I thought one was better than the other. I have to remind myself that this was a successful project that was well received by critics and viewers. It was probably even "bleeding edge". To be honest, I had mostly forgotten about this website. Luckily, an archived copy of Vantage One's old site is still available. On it, I found the following list of accolades.

It was named one of the top 10 sites ever created on the Web by Microsoft Magazine.

The site was named "Cool Site of the Day" by InfiNet (billed as a Grammy Award for the Net). That day, the site received more than 850,000 hits (a great number back then) in a 12-hour period.

USA Today highlighted the site during the week of the launch.

CNN featured it as part of a segment on the Rock Hall's opening.

AOL named it "Cool Site of the Week".

Netscape had the site on its "What's Hot" list for nearly six months.

For a "cool site of the day" (week, six months and ever) this thing has aged pretty badly. It's in good company; most of its contemporaries look just as sad. Spend a few minutes on the Internet Archive site and you'll see what I mean. Why is that? Obviously there were huge technical limitations. Getting text and images to look the same on different people's computers was much more difficult back then. I remember spending hours indexing the colors of the GIF images. The file sizes had to be incredibly small to allow for any animation over a 28.8 modem. I had to fight for a target resolution of 800x600; 640x480 was more common. Now websites are almost resolution independent.
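To put those file-size constraints in perspective, here's some back-of-the-envelope math with my own illustrative numbers (the original project's actual file sizes are long gone):

```python
# Rough download-time math for a 28.8 kbit/s modem, ignoring protocol
# overhead and compression. Example sizes are illustrative only.
MODEM_BPS = 28_800                      # bits per second

def seconds_to_download(kilobytes):
    bits = kilobytes * 1024 * 8         # KB -> bits
    return bits / MODEM_BPS

# Even a 30 KB GIF -- tiny by today's standards -- took over 8 seconds.
print(round(seconds_to_download(30), 1))
```

Multiply that by every image on a page and it's clear why hand-indexing GIF palettes down to the smallest possible file was worth the hours.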

I think something else is going on too. In the 90's, there wasn't a sense of what a website should be. The look and feel of most websites was a nearly unfiltered extension of a company's branding and the whims of a designer. Today, website owners and viewers have definite expectations. Everyone knows what a website is supposed to look like. For the most part, this is a good thing. Navigating websites has become much easier now that the majority of the important links can be found at the top of the page.

At some point in the early 2010s I stopped designing custom websites for small to medium-sized companies. Somehow, the whole business started to feel like a waste of their money. We'd work together for weeks. Very skilled programmers would help us realize our complex goals. In the end, most of the sites looked about like something that could be achieved with a modern template. Templates...

I remember the early templates. They were awful. Cheapskates would fill them with clipart and blinking headlines. Over time, those crummy templates were refined. The bad ones faded away while the better designs were iterated upon by hundreds (thousands?) of designers and coders. Good ideas from multiple sources have been combined into the suite of templates that we have access to today. Take a look at the latest batch of Webby award winners. Now take a look at the selection of templates available to you from any blog engine or web hosting service. Pretty close, right? Making good use of these templates still requires the skills of a designer. Photography, illustration and typography still matter. In that way, website design looks a lot like magazine design -- skilled artists working within a framework. 

As all of the underlying technologies have matured, a cool thing has happened to web design. It's grown beyond aesthetic design. Creating a truly unique browsing experience involves incorporating technologies that support animation, interaction, mobility, accessibility, content delivery speed and efficiency. These technologies, when presented artfully, produce websites that are unique in ways that I couldn't have imagined in 1996. Luckily, someone did.

Anyway, I'll leave you with this. I designed the website for the Grand Prix of Cleveland, Ohio. The site is old enough that the race got the domain (later offered for auction at the reasonable starting price of $500). Ever the innovator, I designed this site to resize with the browser. The main graphic was anchored to the left, the cars were anchored to the right and the rest of the content was centered. I mean, this site looked rad on screens as large as 1024x768.

My Adventures in a Chocolate Factory

In 2008, I was part of a team that built virtual representations of real factories. This project was led by Dr. Maribeth Back and Dr. Don Kimber with contributions from many other research scientists. Tcho, a chocolate factory that was under construction at the time, was the subject of our study. Our team traded the insights we gained for the right to lurk about and eat Tcho's chocolate. If you're interested in this research, please read: The Virtual Chocolate Factory: Mixed Reality Industrial Collaboration and Control

Panorama of the factory as it existed in 2008-09

It was my job to visualize our work. I sketched and 3D modeled all sorts of machines and their surroundings. This led to the creation of an interesting set of artifacts. I've never had the opportunity to explore a space quite so thoroughly. I crawled up, over and into various machines. I measured and photographed everything I saw.

Sketch of a Carle Conch

All of the major elements of the factory were recreated in 3D. Chocolate making isn’t new. Some of the best machinery predates CAD files by decades. Luckily, I was in the factory while much of the vintage equipment was being restored. This gave me access to the inner workings of some beautiful machinery.

Inner workings of a Macintyre Conche

Detailed rendering of chocolate-making conches

All of this work enabled us to build a virtual chocolate factory. I simplified the 3D models and gave them to a developer who incorporated them into a 3D game engine. Production data from the working factory was integrated into this new virtual space.

The factory as depicted in a game engine

One of my favorite machines in the factory was the highly articulated Carle Conch. It's a fascinating device that grinds, polishes and heats dry cocoa into liquid chocolate.

With all of its exposed moving parts, the conch is an interesting machine to watch. The 3D model I built was fully animated.

We discussed creating smaller versions of the chocolate-making machinery: something a user could hold, perhaps as a way of interacting with the VR space and, vicariously, the real space. I modified the conch model so that it could be 3D printed. The result was an accurate small-scale replica of the original.

3D printed model

Virtual, real, virtual... it all gets a bit blurry after a while. As the project developed, the notion of what a virtual factory might be drifted. One of the more successful offshoots of the project was this simple smartphone app.

Mobile application

In the Not Too Distant Future...

Reprinted from my blog posting at:

For about 20 years the cast of Mystery Science Theater 3000 has been entertaining science fiction fans with funny commentaries on bad movies. The concept is strangely simple: mad scientists (at various times: Trace Beaulieu, J. Elvis Weinstein, Frank Conniff and Mary Jo Pehl) have launched a man (Joel Hodgson and later Michael J. Nelson) into space and are forcing him to watch the worst movies ever made. To keep his sanity, the unfortunate spaceman and his robot friends (at various times: Beaulieu, Weinstein, Kevin Murphy, Bill Corbett and Jim Mallon) make fun of these movies. The original show was canceled about 10 years ago, but most of the people involved are still riffing on cheesy movies – "the worst they can find".

One group of original cast members has formed a comedy troupe called "Cinematic Titanic" (Joel, Trace, J. Elvis, Frank and Mary Jo). Basically, they do a live version of the original show (minus the robot puppets). Recently I caught a performance in San Francisco. It wasn't surprising that the group was as funny as ever. What was surprising was that all of the performers were holding iPads. They didn't make any sort of announcement about it. They just sat down and started to read from them. They had always used paper scripts, even during live performances, so I was surprised to see this revival of a '90s-era show using such 21st-century devices.

I wanted to learn more about this so I contacted Glenn Schwartz, their PR person. He explained that the iPad solved several longstanding problems involving the creative process, performance and even travel.

During their creative process the cast will watch a bad movie and write down any jokes that come to mind. These are then sent to one cast member, Weinstein, who compiles them into a script. The script is then emailed to each cast member’s iPad. They view the script in a PDF viewer and may make changes, which are shared via email directly from the iPads. The PDF reader allows each cast member to highlight their part and to make notes. The immediate effect of this is a tremendous reduction in wasted paper. It also allows for a very rapid iterative process even though all the participants are in different locations.

Apparently Apple has done an excellent job designing the UX of their PDF reader. The interaction is so natural that the cast is able to use it as if they were reading a paper script (paging to appropriate sections, etc.). A side benefit of the glowing screen is that each performer is self-illuminated, requiring much less stage lighting, if any at all. I was surprised to learn that the iPads are not synchronized to each other or to the film. The performers simply "turn the pages" of their scripts as necessary.

The troupe's five performers each need an updated script. These scripts are fairly lengthy (and heavy), weighing in at about 50 lbs a set. Now that the cast uses iPads, they're no longer obligated to carry all that extra weight around. They simply bring along their iPads, something they would probably have done anyway for their own personal use.

Clearly the Cinematic Titanic troupe would benefit from a more integrated solution. Imagine if our own XLibris were an iPad app that was extended to include more collaborative features. A cloud version might enable these performers to iterate through changes in an even more natural way — retaining their local changes while automagically pushing or pulling in important global changes.
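That retain-local, pull-global behavior could be as simple as separating the shared script body from per-user annotation layers. A minimal sketch, with invented structures that have no connection to XLibris's actual design:

```python
# Hypothetical sketch: a shared script with per-user annotation layers.
# Global edits replace the shared body; each performer's private notes
# survive the update untouched.

class SharedScript:
    def __init__(self, body):
        self.body = body              # shared, globally synced text
        self.notes = {}               # user -> list of private annotations

    def annotate(self, user, note):
        self.notes.setdefault(user, []).append(note)

    def pull_global(self, new_body):
        # Accept the latest shared revision; local notes are kept as-is
        self.body = new_body

script = SharedScript("INT. THEATER - NIGHT")
script.annotate("joel", "pause for laugh here")
script.pull_global("INT. THEATER - NIGHT (rev 2)")
print(script.body)                    # updated shared text
print(script.notes["joel"])           # private note retained
```

A real system would need conflict handling when two people edit the shared body at once, but keeping annotations out of the shared layer already removes most of the merge pain.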

I wouldn’t be at all surprised if a technology very much like this became the standard way that scripts are distributed. Xerography made quickly revising and distributing scripts possible; some form of XLibris on an internet-enabled tablet might make it even easier and faster.

This is just one example of the way these technologies enable people to work collaboratively. Cinematic Titanic’s ad hoc script writing process isn’t very different from the way researchers might prepare a paper or the way a sales team might prepare a presentation. A robust, document-centric application that supports annotation and collaboration running on a lightweight tablet might well be that killer app we’ve all been looking for.