Autodesk Research will be presenting five papers at the 28th ACM Symposium on User Interface Software and Technology (UIST) in Charlotte, NC, from November 8-11. UIST is the premier forum for innovations in human-computer interfaces, bringing together researchers and practitioners from diverse areas including graphical & web user interfaces, tangible & ubiquitous computing, virtual & augmented reality, multimedia, new input & output devices, fabrication, wearable computing and CSCW.
This year has seen an explosion of research in digital fabrication and fabricated electronics. You can browse the full program and see Autodesk's contributions below.
NanoStylus: Enhancing Input on Ultra-Small Displays
Candid Interaction: Revealing Hidden Mobile and Wearable Computing
MoveableMaker: Facilitating the Design, Generation, and Assembly of Moveable Papercraft
Smart Makerspace: An Immersive Instructional Space for Physical Tasks
Autodesk has contributed more to UIST 2015 than just papers. We're a platinum sponsor, Tovi Grossman is serving as the Program Committee Co-Chair, and Justin Matejka is serving as the Video Previews Co-Chair.
Tuesday, 11 August 10:45 AM - 12:15 PM, Los Angeles Convention Center, Room 152
Justin Solomon, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, Leonidas Guibas
This paper introduces a new class of algorithms for optimization problems involving optimal transportation over geometric domains. The main contribution is to show that optimal transportation can be made tractable over large graphics domains, such as images and triangle meshes, improving performance by orders of magnitude compared to previous work.
Introducing OmniAD, a novel data-driven pipeline to model and acquire the aerodynamics of three-dimensional rigid objects simply by capturing their falling motion using a single camera. OmniAD enables realistic real-time simulation of rigid bodies and interactively designed three-dimensional kites that actually fly.
This course reviews current 3D printing hardware and software pipelines, and analyzes their potential and shortcomings. Then it focuses on computational specification for fabrication methods, which allow designing or computing an object's shape and material composition from a functional description.
Citeology is an interactive tool for visualizing relationships across research papers, created by Justin Matejka, Tovi Grossman and George Fitzmaurice of the UI Group. Selecting any one of the 11,000-plus publications from CHI and UIST will show you its genealogy: its parents (papers that it cites) and its children (papers that cite it).
Beyond being helpful to the user interface community, these graphs are beautiful. We have a wall-size version of one graph in the Toronto Autodesk office.
The layout of the information is simple and effective. Across the horizontal axis is a listing of all the papers by year. As time progresses, more papers have been published, much like our growing human population. Parent papers (papers in the past) are connected by blue lines, while child papers (papers in the future) are connected by red lines.
The lines drawn between papers are semi-transparent and build up to show multiple connections.
Similar to a word cloud, all the titles are displayed, with connected papers shown in darker colors to stand out.
The complete tool shows some additional information and controls for refining your search results including:
shortest path between papers
number of children and parents to show
details about the active paper
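The shortest-path feature above boils down to a breadth-first search over the citation graph. Here's a minimal sketch in Python; the paper IDs and citation links are invented for illustration, not Citeology's actual data:

```python
from collections import deque

def shortest_citation_path(cites, start, goal):
    """Breadth-first search over citation links, treated as undirected
    (a path may pass through either parents or children)."""
    # Build an undirected adjacency map from the directed "cites" links.
    neighbors = {}
    for paper, refs in cites.items():
        for ref in refs:
            neighbors.setdefault(paper, set()).add(ref)
            neighbors.setdefault(ref, set()).add(paper)

    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no connection between the two papers

# Hypothetical links: each key cites the papers in its list.
cites = {"C": ["A", "B"], "D": ["C"], "E": ["D"]}
print(shortest_citation_path(cites, "A", "E"))  # ['A', 'C', 'D', 'E']
```

Because BFS explores papers in order of distance, the first path that reaches the goal is guaranteed to be a shortest one.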
Citeology uses research papers and it's interesting to think about what other kinds of relationships a tool like this could help to visualize:
Building on genealogy, things like family trees, band memberships, and sports teams are likely candidates
Historical figures and events along with their triggers
Connections and dependencies between things in the Internet of Things
What would you use it for? Try Citeology and let us know what you think!
Multi-touch tabletop computers are useful tools, and the User Interface group at Autodesk Research has explored ways to make them even better with a system called Medusa. Imagine a world where the tabletop can recognize multiple users, differentiate between right and left hands, and support non-touch, virtual-reality-style gestures like in the sci-fi movie Minority Report.
It all starts by hacking a Microsoft Surface with 138 proximity sensors and Phidget Interface Kits. These sensors extend the touch capabilities of the surface to determine user proximity and the location of their hands. The sensors are not only inexpensive, they also remove complications like setting up cameras or requiring users to wear gloves or tracking markers. In a future incarnation, these sensors could be built into the table for a better aesthetic, so users wouldn't need to worry about them at all.
Medusa's sensors are arranged in three rings. An outward-facing ring of 34 sensors is mounted beneath the lip. Two upward-facing rings atop the table are made up of 46 sensors on the outer ring and 58 sensors on the inner ring.
All of this adds up to allow Medusa to support the following user interactions:
User Position Tracking
Independent Left and Right Hand Tracking
Hand Gestures (Pre-Touch Functionality)
Touch + Depth Gestures
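The outward-facing ring makes user position tracking straightforward in principle: each of the 34 sensors covers a slice of the table's perimeter, and a nearby user shows up as a run of elevated readings. A rough sketch of that idea in Python (the threshold and readings are invented for illustration; the real Medusa pipeline is described in the publication):

```python
def locate_users(readings, threshold=0.5):
    """Estimate user positions around the table from one ring of
    proximity readings. Each contiguous run of sensors reading above
    `threshold` is treated as one user; the user's angle is the center
    of that run. (Wraparound between the last and first sensor is
    ignored for brevity.)
    """
    n = len(readings)
    runs, current = [], []
    for i, value in enumerate(readings):
        if value > threshold:
            current.append(i)
        elif current:
            runs.append(current)
            current = []
    if current:
        runs.append(current)
    # Convert each run of sensor indices to an angle around the table.
    return [360.0 * (run[0] + run[-1]) / 2 / n for run in runs]

# 8 sensors for brevity (Medusa's outward ring has 34).
readings = [0.1, 0.9, 0.8, 0.1, 0.1, 0.7, 0.1, 0.1]
print(locate_users(readings))  # [67.5, 225.0]
```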
This was tested with a prototype UI creation application called Proxi-Sketch, which allows users to collaboratively develop new graphical user interfaces. You can see it all in action in the following video. If you want to know more about building the system or how parts of it worked, please refer to the Medusa publication.
The ACM Symposium on User Interface Software and Technology (UIST) is just around the corner. This year Autodesk Research is a platinum sponsor and has four cool papers to present, covering a diverse range of topics: 3D printing interactive objects, a new text entry method for smart watches, big data analytics with baseball, and a new paradigm for drawing interactive content.
A Series of Tubes: Adding Interactivity to 3D Prints Using Internal Pipes
Valkyrie Savage, Ryan Schmidt, Tovi Grossman, George Fitzmaurice, Bjoern Hartmann
10:30am Monday, October 6
PipeDream can help makers create interactive 3D prints.
Kitty: Sketching Dynamic and Interactive Illustrations
Rubaiat Habib Kazi, Fanny Chevalier, Tovi Grossman, George Fitzmaurice
12:30pm Tuesday, October 7
Kitty builds on the kinetic textures work seen previously in Draco and allows the author to make those animations interactive.
Swipeboard: A Text Entry Technique for Ultra-Small Interfaces That Supports Novice to Expert Transitions
Xiang 'Anthony' Chen, Tovi Grossman, George Fitzmaurice
11:00am Wednesday, October 8
Swipeboard takes inspiration from Morse code to present a new and fast method of gestural input for smart watches.
Video Lens: Rapid Playback and Exploration of Large Video Collections and Associated Metadata
Justin Matejka, Tovi Grossman, George Fitzmaurice
11:00am Wednesday, October 8
Video Lens is a great tool for exploring massive amounts of data and is built around baseball videos to show a practical application.
Demos of all of these will be available on Monday night so stop by and say hello to the team!
When you are making, do you 3D print sculptures, or do you 3D print equipment that will need wires and sensors and lights? If it's the latter, you'll want to read about PipeDream, new work from the Autodesk Research team that helps you easily put tubes into your models and extend the potential of your creations.
A 3D Printed Radio with Tubes for Controls
In the above example, you can see a 3D printed radio with tubes in it for the speaker, volume, power and tuning controls. Below is another example with tubes added to a desktop pen holder. Smaller tubes at the bottom have spaces for sensors to determine which pens are present. This kind of idea could be expanded to a workshop for tracking tools.
A 3D Printed Pen Sensing Container
PipeDream is some new research that has been prototyped within Autodesk Meshmixer.
Looking at tubes in Meshmixer
When creating pipes in your models, you are presented with a number of possibilities:
Would you like to specify surface points where the tubes should start and end (like in our examples above)?
Would you rather specify a specific path through the object to make neon lights or a marble maze?
What is the radius of the tube, and does it vary over the length?
Does your tube connect two points or does it radiate like branches on a tree?
Would you like your tubes to be capped so that you have a cavity in your object?
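The questions above suggest that each tube can be captured in a small specification. Here is a hypothetical sketch of what such a spec might look like; the field names are purely illustrative and are not PipeDream's actual API:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point = Tuple[float, float, float]

@dataclass
class TubeSpec:
    """Illustrative description of one tube to carve out of a model."""
    start: Point                     # surface point where the tube begins
    end: Optional[Point] = None      # surface exit point (None if capped)
    waypoints: List[Point] = field(default_factory=list)   # explicit path, e.g. a neon sign
    radius_start: float = 2.0        # radius at the start...
    radius_end: float = 2.0          # ...and at the end, so it can vary over the length
    branches: List["TubeSpec"] = field(default_factory=list)  # radiating, tree-like tubes
    capped: bool = False             # True -> closed cavity instead of a through-hole

# A capped tube forming a cavity, like the breathing bunny example below.
cavity = TubeSpec(start=(0.0, 0.0, 0.0), capped=True,
                  radius_start=5.0, radius_end=5.0)
print(cavity.capped, cavity.end)  # True None
```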
Capped pipes make a cavity in an object
Here's an example of a 3D printed bunny. This bunny is printed in a soft, pliable material. The cavity works as an air bladder so that the bunny can breathe with the help of an air pump. The air coming from the bunny could be used as a feedback mechanism in a toy or a teaching tool.
This bunny is connected to an air pump to simulate breathing
The cavity in a soft printed object could be fitted with other feedback devices like a noisemaker, buttons for lights, haptic buzzers, accelerometers and sensors to determine if an object has been touched.
Another great example is creating pathways for lights and wires. In the example below created for the 2014 UIST (User Interface Software and Technology) Symposium, the letters are connected to make a continuous path. With this continuous path, a neon sign can be created.
Adding tubes to models that will be 3D printed opens up a lot of possibilities. One other interesting thing we tried was to fill the tubes with conductive paint instead of pushing wires through them. This allowed us to easily power LEDs in our models.
If you are at UIST, please stop by to talk with the team. If you would like to share your thoughts on this technology or have questions about it, feel free to let us know here on the Autodesk Research blog.
Ironically, as cell phones get bigger, we see increasing popularity in ultra-small-screen devices such as smart watches. With these smaller screens, we need to find ways to work with them more efficiently or risk these new devices being regarded as novelty items. The same old interfaces don't work.
What time is it? It's time for Swipeboard!
One of the most common things to do on a mobile device is to enter text. We've learned to enter text with our thumbs, so we can surely learn new methods too. The problem with a smart watch is that it's a one-handed device, and the size of the screen really only works for a single finger without obscuring too much of the display. Not content to revert to hunt-and-peck typing 101, the Autodesk Research User Interface group set out to find a solution.
Swipeboard takes inspiration from Morse code and gestural input for an easy-to-master text entry paradigm that sees users entering more than 30 words per minute (wpm).
The fastest recorded Morse code entry is 140wpm.
Swipeboard uses a QWERTY keyboard broken up into segments of 3 or 4 characters. The user simply taps in the region of the character block and then swipes to identify the character. Some users have achieved a level of comfort with the system that allows them to enter text without looking at the screen.
First a QWERTY style keyboard is shown for selecting the character region
After a tap, the keyboard zooms in to prompt for a gesture to define the specific character
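The two-step scheme above (tap a region, then swipe within it) can be sketched as a simple lookup. The region layout and swipe directions below are invented for illustration; the actual Swipeboard layout is detailed in the paper:

```python
# Hypothetical 3x3 region layout of a QWERTY keyboard; each region
# holds up to four characters. The real Swipeboard layout differs.
REGIONS = [
    "qwe", "rty", "uiop",
    "asd", "fgh", "jkl",
    "zxc", "vbn", "m",
]

# Second stroke: a swipe direction picks one character out of the
# selected region, by position within that region's string.
SWIPES = {"left": 0, "up": 1, "right": 2, "down": 3}

def enter_char(region_index, swipe):
    """Resolve one (tap region, swipe direction) pair to a character."""
    region = REGIONS[region_index]
    pos = SWIPES[swipe]
    # Clamp for regions shorter than four characters.
    return region[min(pos, len(region) - 1)]

# Typing "hi": tap the "fgh" region and swipe right, then
# tap the "uiop" region and swipe up.
word = enter_char(4, "right") + enter_char(2, "up")
print(word)  # hi
```

Since both strokes are coarse gestures rather than precise taps, the same pair of strokes works even when the user can no longer see the keys, which is how experts end up typing eyes-free.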
Hard to believe? Watch the video of Swipeboard in action below. Note that the video is not sped up - you're seeing it work in real time.
What's next for Swipeboard?
Well, we'll be talking about it at UIST 2014, the User Interface Software and Technology Symposium, for starters.
Swipeboard could be applied to other wearable devices such as glasses
For future work, this could be interesting to explore on other wearable devices like glasses and rings. It could also be interesting to see Swipeboard expanded from characters to complete words. What do you think?
If you liked this post, you might also like to read about Duet, a research project that looks at making a smart watch and smart phone work well together. Duet shows that 1 + 1 can equal more than 2.
Hopefully you're familiar with Project Draco, our answer to the question:
Can animation be made as easy as drawing?
We've discussed Draco here on the blog and have a video overview of what we were showing at this year's SIGGRAPH conference in Vancouver to catch you up.
Kitty builds on Draco and looks into the animation question and asks:
Can we make Draco interactive?
In the image above you'll see two interactions happening:
the user can move the dragon's head into the frame
the user can move the baby dragon into the pot
With the egg going into the pot, you'll notice that the monster's eyes follow the egg and that the egg causes a particle splash as it enters the pot.
This opens up a lot of possibilities for interactive storytelling.
How would children like this for an ebook on a tablet?
Does it make web content more dynamic?
Could it be useful for game authoring?
Is it useful for training and instructions?
Kitty builds on Draco but how does it work?
We've introduced a simple node network to define the relationships between objects. Let's look at the picture below of a different egg going into a different pot - yes we like cooking here at Autodesk Research.
We've set up the scene as you would in Draco with steam and splashing particles coming from the pot. In the following image you can see that we have a simple node graph that gets overlaid on the picture. This helps reduce UI while keeping the events and relationships in context.
You can see the path the egg takes to get into the pot as well as two blue circles representing the particle events. The user is making a connection from the egg to the circle on the right to tell the splash to only happen when the egg is close.
When the connection is made between the nodes, the egg path and the splash, the user can then choose how to link the events. In this case the movement of the egg is connected to the emission of the particles. The inlaid square defines the timing of the event.
The curve can be redrawn to control what happens. The horizontal axis represents the object that triggers the event (the egg). The vertical axis represents the object that is being driven (the particle splash). When the line is flat, there are no particles being emitted.
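The redrawable curve is essentially a function from the trigger's progress to the driven property. A minimal sketch of evaluating such a curve with linear interpolation (the control points are invented for this example; Kitty's actual curve model may differ):

```python
def evaluate_curve(points, x):
    """Piecewise-linear mapping from trigger progress (x) to driven value.

    `points` is a list of (x, y) pairs sorted by x, e.g. the egg's
    progress along its path mapped to particle emission rate.
    """
    if x <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return points[-1][1]

# Flat at zero until the egg is 80% of the way to the pot, then the
# emission ramps up sharply: a flat segment means no particles.
splash_curve = [(0.0, 0.0), (0.8, 0.0), (1.0, 1.0)]
print(evaluate_curve(splash_curve, 0.5))  # 0.0 (egg still far away)
print(evaluate_curve(splash_curve, 0.9))  # about 0.5, halfway up the ramp
```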
In the image below, we explore using Kitty to explain how an electric doorbell works.
You can learn more about Kitty and see how easy it is to author these kinds of behaviours in the video below. More information is available on the Draco project page.
We'll be presenting this latest research at this year's UIST, the ACM User Interface Software and Technology Symposium, in Hawaii in October. If you are there, stop by to see the demo or attend the talk.
Whether you are at UIST or not, please let us know what you think about these tools and the possibilities that they open up for you.
Here's a great tool for those of you exploring the User Experience - the Paper Forager!
There is lots of good material nowadays for exploring user experience. Perhaps so much that it makes it hard to get started or find what you need. And that's where the Paper Forager steps in to help! For those with access to the ACM Digital Library, the Paper Forager lets you explore more than 5000 research papers from ACM CHI and UIST.
The Paper Forager is easy to use and provides things like:
filtering by date
most popular authors
All without having to download the paper and open it in a viewer. This can really speed up your research.
To make things even faster, when browsing papers, the Paper Forager preloads adjacent papers so you can quickly move forwards and backwards through your search results.
Please enjoy this video overview of the Paper Forager, complete with a toe-tapping, finger-snapping beat. If you're at UIST 2014, you can talk to members of the Autodesk Research team and compliment them on their musical tastes :)