Here are a couple of pictures of things the Autodesk Research team is talking about at Autodesk University. There's one more chance to come by the booth and find out more today.
Autodesk is sponsoring the Biofabricate conference in New York on December 4, 2014. This is the world’s first summit dedicated to biofabrication for future industrial and consumer products. Biofabrication comprises highly disruptive technologies enabling design and manufacturing to intersect with the building blocks of life. Computers can now read and write with DNA. This is a world where bacteria, yeast, fungi, algae and mammalian cells grow and shape sustainable new materials.
The event aims to answer a number of questions, including:
- What is biodesign and biofabrication?
- What are the enabling technologies?
Carlos Olguin from the Autodesk Research Bio/Nano/Programmable Matter group will speak in the Engineering Nature session followed by Danil Nagy of The Living speaking in the Cultured Technology session.
Great news! Autodesk Research was awarded a Best Talk Award at the 2014 User Interface Software and Technology Symposium (UIST) for Kitty, a tool that makes it possible to draw interactive experiences.
The team was very happy to present their research throughout the week, as you can see in the photos below:
Baseball Video Lens Demonstration
Here's Rubaiat accepting the award
Congratulations to the Kitty Team!
Afterwards, the team celebrated with an awesome display of their fire-dancing skills (as a researcher, it is important to have many varied interests and abilities).
Some members of the team enjoy simpler pursuits
Beyond the awards and activities, there was food. Team members who didn't get to attend UIST were sad to miss the fancy breakfasts this year.
At least one UIST attendee found a new use for Oculus - making food better :-)
When you are making, do you 3D print sculptures or 3D print equipment that will need wires and sensors and lights? If it's the latter, you'll want to read about this new work from the Autodesk Research team called PipeDream that helps you easily put tubes into your models and extend the potential of your creations.
In the above example, you can see a 3D printed radio with tubes in it for the speaker, volume, power and tuning controls. Below is another example with tubes added into a desktop pen holder. Smaller tubes at the bottom have spaces for sensors to determine which pens are present. This kind of idea could be expanded to a workshop for tracking tools.
PipeDream is new research that has been prototyped within Autodesk Meshmixer.
When creating pipes in your models, you are presented with a number of possibilities:
- Would you like to specify surface points where the tubes should start and end (like in our examples above)?
- Would you rather specify a specific path through the object to make neon lights or a marble maze?
- What is the radius of the tube, and does it vary over the length?
- Does your tube connect two points or does it radiate like branches on a tree?
- Would you like your tubes to be capped so that you have a cavity in your object?
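The design questions above could be captured in a small data structure. Here is a minimal sketch using hypothetical field names (PipeDream's actual data model isn't described in the post, so this is illustrative only):

```python
# Hypothetical sketch of a tube specification; the real PipeDream
# data model inside Meshmixer is not public.
from dataclasses import dataclass, field

@dataclass
class TubeSpec:
    # Either surface endpoints, or an explicit path through the object.
    endpoints: list = field(default_factory=list)  # [(x, y, z), ...]
    path: list = field(default_factory=list)       # explicit waypoints
    radius_start: float = 2.0                      # radius in mm
    radius_end: float = 2.0                        # differs if radius varies
    branches: list = field(default_factory=list)   # child TubeSpecs (tree-like routing)
    capped: bool = False                           # True -> sealed cavity

    def radius_at(self, t: float) -> float:
        """Linearly interpolated radius at parameter t in [0, 1]."""
        return self.radius_start + t * (self.radius_end - self.radius_start)

# A tube that tapers from 3 mm to 1 mm between two surface points:
spec = TubeSpec(endpoints=[(0, 0, 0), (10, 0, 0)],
                radius_start=3.0, radius_end=1.0)
spec.radius_at(0.5)  # 2.0
```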
Capped pipes to make a cavity in an object?
Here's an example of a 3D printed bunny. This bunny is printed in a soft, pliable material. The cavity works as an air bladder so that the bunny can breathe with the help of an air pump. The air coming from the bunny could be used as a feedback mechanism in a toy or a teaching tool.
The cavity in a soft printed object could be fitted with other feedback devices like a noisemaker, buttons for lights, haptic buzzers, accelerometers and sensors to determine if an object has been touched.
Another great example is creating pathways for lights and wires. In the example below created for the 2014 UIST (User Interface Software and Technology) Symposium, the letters are connected to make a continuous path. With this continuous path, a neon sign can be created.
Adding tubes to models that will be 3D printed opens up a lot of possibilities. One other interesting thing we tried was to fill the tubes with conductive paint instead of pushing wires through the tubes. This allowed for easily powering LEDs in our models.
To see more of this technology, you may watch the video below or refer to the PipeDream research paper on AutodeskResearch.com.
If you are at UIST, please stop by to talk with the team. If you would like to share your thoughts on this technology or have questions about it, feel free to let us know here on the Autodesk Research blog.
Ironically, as cell phones are getting bigger, we see increasing popularity in ultra small screen devices such as smart watches. With these smaller screens we need to find ways to work more efficiently with them or risk these new devices being regarded as novelty items. The same old interfaces don't work.
One of the most common things to do on a mobile device is to enter text. We've learned to enter text with our thumbs, so we can learn new methods too. The problem with a smart watch is that it's a one-handed device, and the size of the screen really only works for a single finger without obscuring too much of the screen. Not content to revert to hunt-and-peck typing 101, the Autodesk Research User Interface group set out to find a solution.
Swipeboard takes inspiration from Morse code and gestural input for an easy-to-master text entry paradigm that sees users entering more than 30 words per minute (wpm).
Swipeboard uses a QWERTY keyboard broken up into segments of 3 or 4 characters. The user simply taps in the region of the character block and then swipes to identify the character. Some users have achieved a level of comfort with the system that allows them to enter text without looking at the screen.
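The two-step idea can be sketched in a few lines. This is a hypothetical zone layout and gesture set for illustration, not Swipeboard's actual mapping:

```python
# A minimal sketch of two-step character entry: first gesture picks a
# zone of 3-4 characters, second gesture picks the character within it.
# Zone layout and gesture names here are assumptions, not the real design.
ZONES = {
    0: "qwe", 1: "rty", 2: "uio",
    3: "asd", 4: "fgh", 5: "jkl",
    6: "zxc", 7: "vbn", 8: "mp",
}

# Within a zone, a tap or a swipe direction selects one character.
GESTURES = ["tap", "left", "right", "down"]

def decode(zone: int, gesture: str) -> str:
    """Decode one character from a (zone, gesture) pair."""
    chars = ZONES[zone]
    index = GESTURES.index(gesture)
    if index >= len(chars):
        raise ValueError("no character for this gesture in this zone")
    return chars[index]

decode(4, "left")  # zone "fgh", second position -> "g"
```

Because each character is always the same two gestures, muscle memory can take over, which is how some users manage eyes-free entry.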
Hard to believe? Watch the video of Swipeboard in action below. Note that the video is not sped up - you're seeing it work in real time.
What's next for Swipeboard?
Well, we'll be talking about it at UIST 2014, the User Interface Software and Technology Symposium, for starters.
For future work, this could be interesting to explore on other wearable devices like glasses and rings. It could also be interesting to see Swipeboard expanded from characters to complete words. What do you think?
If you liked this post, you might also like to read about Duet, a research project that looks at making a smart watch and smart phone work well together. Duet shows that 1 + 1 can equal more than 2.
Hopefully you're familiar with Project Draco, our answer to the question:
Can animation be made as easy as drawing?
We've discussed Draco here on the blog, and there's a video overview of what we were showing at this year's SIGGRAPH conference in Vancouver to catch you up.
Kitty builds on Draco, takes the animation question further, and asks:
Can we make Draco interactive?
In the image above you'll see two interactions happening:
- the user can move the dragon's head into the frame
- the user can move the baby dragon into the pot
With the egg going into the pot, you'll notice that the monster's eyes follow the egg and that the egg causes a particle splash as it enters the pot.
This opens up a lot of possibilities for interactive storytelling.
- How would children like this for an ebook on a tablet?
- Does it make web content more dynamic?
- Could it be useful for game authoring?
- Is it useful for training and instructions?
Kitty builds on Draco, but how does it work?
We've introduced a simple node network to define the relationships between objects. Let's look at the picture below of a different egg going into a different pot - yes, we like cooking here at Autodesk Research.
We've set up the scene as you would in Draco with steam and splashing particles coming from the pot. In the following image you can see that we have a simple node graph that gets overlaid on the picture. This helps reduce UI while keeping the events and relationships in context.
You can see the path the egg takes to get into the pot as well as two blue circles representing the particle events. The user is making a connection from the egg to the circle on the right to tell the splash to only happen when the egg is close.
When the connection is made between the nodes, the egg path and the splash, the user can then choose how to link the events. In this case the movement of the egg is connected to the emission of the particles. The inlaid square defines the timing of the event.
The curve can be redrawn to control what happens. The horizontal axis represents the object that triggers the event (the egg). The vertical axis represents the object that is being driven (the particle splash). When the line is flat, there are no particles being emitted.
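One way to think about that curve is as a function from trigger progress to driven value. Here is a minimal sketch assuming the curve is stored as piecewise-linear control points (the real tool lets users redraw it freehand, and its internal representation isn't described here):

```python
# Hypothetical sketch of evaluating an event-mapping curve.
# points: (x, y) control points sorted by x, both axes in [0, 1].
# x is the trigger's progress (the egg along its path);
# the return value drives the target (particle emission rate).
def eval_curve(points, x):
    if x <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return points[-1][1]

# Flat, then rising: no particles until the egg is 70% along its
# path, then emission ramps up to full strength.
curve = [(0.0, 0.0), (0.7, 0.0), (1.0, 1.0)]
eval_curve(curve, 0.5)   # 0.0 -> no particles yet
eval_curve(curve, 0.85)  # about 0.5 -> half-strength splash
```

A flat segment is exactly the "when the line is flat, there are no particles being emitted" behaviour described above.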
In this image below we explore using Kitty to explain how an electric doorbell works.
You can learn more about Kitty and see how easy it is to author these kinds of behaviours in the video below. More information is available on the Draco project page.
We'll be presenting this latest research at this year's UIST, the ACM User Interface Software and Technology Symposium, in Hawaii in October. If you are there, stop by to see the demo or attend the talk.
Whether you are at UIST or not, please let us know what you think about these tools and the possibilities that they open up for you.
Here's a great tool for those of you exploring user experience research - the Paper Forager!
There is lots of good material nowadays for exploring user experience - perhaps so much that it makes it hard to get started or find what you need. And that's where the Paper Forager steps in to help! For those with access to the ACM Digital Library, the Paper Forager lets you explore more than 5000 research papers from ACM CHI and UIST.
The Paper Forager is easy to use and provides things like:
- filtering by date
- most popular authors
- thumbnail views
- paper overviews
All without having to download the paper and open it in a viewer. This can really speed up your research.
To make things even faster, when browsing papers, the Paper Forager preloads adjacent papers so you can quickly move forwards and backwards through your search results.
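The prefetching idea is simple: when one result is shown, warm a cache with its neighbours. Here is a minimal sketch with a hypothetical fetch function and cache (the Paper Forager's internals aren't described in the post):

```python
# Hypothetical sketch of adjacent-result prefetching.
cache = {}

def fetch_preview(paper_id):
    """Stand-in for loading a paper preview (network/disk in reality)."""
    return f"preview of {paper_id}"

def view(results, index):
    """Show one result and warm the cache for its neighbours."""
    for i in (index, index - 1, index + 1):
        if 0 <= i < len(results) and results[i] not in cache:
            cache[results[i]] = fetch_preview(results[i])
    return cache[results[index]]

view(["p1", "p2", "p3"], 1)  # returns "preview of p2"; p1 and p3 preloaded
```

When the user steps to the next or previous result, its preview is already in the cache, so navigation feels instant.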
Please enjoy this video overview of the Paper Forager, complete with a toe-tapping, finger-snapping beat. If you're at UIST 2014, you can talk to members from the Autodesk Research team and compliment them on their musical tastes :)
What is the latest in state of the art baseball analytics?
Who said Autodesk was only about designing stuff?
How do you visualize and analyze mass amounts of video data?
Read on for answers to these pressing questions and more.
Video Lens is one of the freshest pieces of work from the User Interface group at Autodesk Research. What's really interesting is that it uses baseball as the foundation for this exploration into big data and analytics.
As we know from stories like Moneyball, sports analytics is a serious business and lots of data is captured. The team worked with about 8000 video clips and the metadata captured by the PITCHf/x toolset to quickly visualize what happens in the game of baseball and analyze specific plays. Video Lens had four design goals:
- Maximize Information Density: Reveal as much as possible about the underlying data
- Treat Events as the Primary Entity: Events are the important parts in the data; each data source (video clip) may have multiple events
- Support Rapid Playback of Video Results: If you have to wait for things to happen the system is of lower value
- Encourage Exploration of the Data: make trying things out low cost
Imagine you want to see all the knuckleball pitches thrown by a southpaw that were high and to the right. You would circle the target in the pitch zone UI, mark the pitch and pitcher type. Three quick steps in this interactive tool and all the appropriate video clips play back.
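That kind of query can be sketched as a filter over clip metadata. The field names below are hypothetical PITCHf/x-style attributes for illustration; the actual schema and the Video Lens implementation differ:

```python
# Hypothetical sketch of event filtering over clip metadata.
clips = [
    {"id": 1, "pitch": "knuckleball", "hand": "L", "x": 0.8, "y": 0.9},
    {"id": 2, "pitch": "fastball",    "hand": "L", "x": 0.8, "y": 0.9},
    {"id": 3, "pitch": "knuckleball", "hand": "R", "x": 0.1, "y": 0.2},
]

def query(clips, pitch=None, hand=None, zone=None):
    """Return clips matching every given criterion.

    zone: (xmin, ymin, xmax, ymax) region of the pitch zone,
    like the area the user circles in the UI.
    """
    results = []
    for c in clips:
        if pitch and c["pitch"] != pitch:
            continue
        if hand and c["hand"] != hand:
            continue
        if zone:
            xmin, ymin, xmax, ymax = zone
            if not (xmin <= c["x"] <= xmax and ymin <= c["y"] <= ymax):
                continue
        results.append(c)
    return results

# Knuckleballs from a lefty, high and to the right:
query(clips, pitch="knuckleball", hand="L", zone=(0.5, 0.5, 1.0, 1.0))
```

Each circled region or toggled filter in the UI narrows the result set, and the matching clips play back immediately.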
Want to narrow that down to the team you're facing next week?
What if you were a sports commentator and had to fill in one of those awkward moments where a streaker was being escorted off the field? You could go to Video Lens and talk about potential strategies with real-time feedback to illustrate. Or maybe you just want to see how many balls are hit to the area of the field your streaker was spotted in and talk about the chances of him being hit by a live ball.
Intrigued? Here's a video showing the Video Lens system in action:
Taking this back to Autodesk and the possibilities for use in other tools, here are some possibilities:
- TV broadcasters looking for B roll footage in their database of a rainy day at a famous landmark
- Film editor sorting through possibilities in their storytelling
- Advanced learning possibilities for tools like Autodesk Screencast
- Connecting security camera data to your BIM
- Connecting video recordings of your real-world product stress testing (e.g. crash test dummies, force-testing chairs, etc.) to your digital product design
We think Video Lens is a great toolset for so many things. Let us know what you think here on the blog or on our Video Lens Research page where you can also find more info and contact information.
By the way, if you happen to be in Hawaii around October 8th, please stop by the UIST Symposium where we'll be talking about Video Lens in person.
We've done a bunch of work with 3D printing. Some of the most notable work has been the work with Meshmixer, geometry preparation and creating branching support structures so that your objects come out faster, with less waste and no drooping. In the image below, our example support structure uses 75% less plastic than the manufacturer-provided supports, which also reduces print time by one hour.
There's another nice benefit to these branching support structures - they break off your print really easily for fast finishing and clean-up. If you haven't worked with them, you should get a copy of the latest Meshmixer and try them out. While you're at it, you can also play with some interesting tools to subtract components of your model for a more interesting statue, like we used in our example above.
Beyond researching 3D printing things, it's important to play and explore what is possible with 3D printing.
Part of the fun in 3D printing is creating something unique. Autodesk Research was fortunate to be involved in providing 3D printed awards for the Real-Time Live! event at SIGGRAPH 2014. Real-Time Live! is:
An interactive extravaganza that celebrates the real-time achievements of evil geniuses, mad scientists, and creative computer gods! Real-Time Live! shows off the latest trends and techniques for pushing the boundaries of interactive visuals.
The Real-Time Live! logo looks like this:
The 2D artwork was taken into Autodesk Maya and turned into a printable 3D model.
Here are a couple photos of the awards as they came out of our Objet printer and then drying off after being cleaned with the water jet.
The finished awards look quite nice and are a good complement to the cool things that the Real-Time Live! participants showed this year, from video game technology to film production to flying and helping vision-impaired people see better. Like all of SIGGRAPH, it was very inspiring. Congratulations to everyone involved!
The Autodesk Research team will be attending and presenting at SIGGRAPH 2014 in Vancouver.
We have the following talks scheduled:
Pteromys: Interactive Design and Optimization of Free-Formed Free-Flight Model Airplanes: Interactive techniques for designing original hand-launched free-flight glider airplanes that can actually fly. Based on a compact aerodynamics model, a design tool allows users to interactively optimize wing configuration to maximize flight-worthiness.
Tuesday, 12 August 10:45 AM - 12:15 PM | Vancouver Convention Centre, East Building, Ballroom B-C
Branching Support Structures for 3D Printing: To 3D print a complex shape, a support structure is needed. Printing this support structure wastes time and material. We minimize this waste by generating a novel branching support structure which takes into account both the geometry of the model, and the properties of the printing process.
Tuesday, 12 August 10:45 AM - 12:15 PM | Vancouver Convention Centre, West Building, Rooms 116-117
Sensitivity-Optimized Rigging for Example-Based Real-Time Clothing Synthesis: A predict-then-blend scheme for data-driven real-time clothing animation that can produce realistic wrinkles. The prediction model is developed based on sensitivity analysis, and its implementation is as simple as a rigging technique.
Wednesday, 13 August 10:45 AM - 12:15 PM | Vancouver Convention Centre, East Building, Ballroom B-C
Position-Based Elastic Rods: Efficiently simulate complex bending and twisting of elastic rods using position-based dynamics. Our formulation is highly efficient, capable of simulating hundreds of strands in real-time.
Wednesday, 13 August 9:00 AM - 10:30 AM | Vancouver Convention Centre, West Building, Rooms 109-110
Get your hands on Draco, our project exploring animation for illustration!
We'll be showing Draco as a SIGGRAPH Studio Project. Come by the Studio Sunday through Thursday to see it live and even try it for yourself. Draco allows users to sketch objects and their motions using familiar digital illustration techniques. Motion created with Draco enriches the picture and allows for new possibilities in storytelling.
Maya, 3ds Max and General Media & Entertainment Industry Discussions
Our colleagues in the Media & Entertainment group will be showing off 3ds Max and Maya. Along with that, they'll be discussing the industry trends they see in Design, Film and Games. For educators, there will be a breakfast to meet and discuss these things and more. You can get the full details and register on the Autodesk Area SIGGRAPH 2014 event page.
Whatever your interests, please say hello!