Learning

The Open Translation Project

Two years ago, Autodesk embraced Open Learning by releasing our learning resources under Creative Commons licensing. The Open Translation project takes this to the next level by allowing anyone to help translate our video learning materials into additional languages.

[Image: Autodesk Open Translation project]

This project was developed by Judy Bayne from Autodesk's Media and Entertainment group. Judy participated in Autodesk's Idea Exploration Innovation Workshop, a program run by the Research team to help employees bring their great ideas to life. 

As a passionate educator, Judy says our worldwide vision for learning requires that we reach out in many languages. To help with this, we've partnered with Amara, a collaborative translation platform, to add subtitles to videos.

[Image: Autodesk Open Translation]
Users can add subtitles to videos and collaborate to improve each other's translations.

We would love it if you took a look at the results or, even better, tried the Open Translation project yourself.


How is a Researcher like an Octopus?

Autodesk Research is part of the Office of the CTO, or OCTO for short. So, we like things with octo in the name. Things like octopuses - octopi, if you like - and their parallels to members of the research world. Octopuses are:

  • quite intelligent (shown through maze and problem solving experiments)
  • good at learning with strong short and long term memory
  • behaviourally flexible
  • interested in play
  • good at photography

Good at photography?

Octographer

When we saw this video from Sony, we were impressed, as we also like exposing ourselves to new experiences and documenting our learning. It's also just really cool.

The behind-the-scenes video is also fascinating. Before learning to take a picture, the octopus decided to taste the camera, perhaps thinking it was a new type of clam.


Hacking a Multi-touch Tabletop for Design Collaboration

Multi-touch tabletop computers are useful tools, and the User Interface group at Autodesk Research has explored ways to make them even better with a system called Medusa. Imagine a world where the tabletop can recognize multiple users, differentiate between right and left hands, and support non-touch, virtual reality-style gestures like those in the sci-fi movie Minority Report.

[Image: Medusa, the Autodesk Research multi-touch user interface]

It all starts by hacking a Microsoft Surface with 138 proximity sensors and Phidget Interface Kits. These sensors extend the touch capabilities of the computer surface to determine user proximity and the location of their hands. The sensors are not only inexpensive but also remove complications like setting up cameras or requiring users to wear gloves or tracking markers. In a future incarnation, the sensors could be built into the table for a better aesthetic, so users wouldn't need to worry about them at all.

[Image: Medusa's sensor rings on the Microsoft Surface tabletop]
Medusa's sensors are arranged in three rings. An outward-facing ring of 34 sensors is mounted beneath the lip. Two upward-facing rings atop the table are made up of 46 sensors on the outer ring and 58 sensors on the inner ring.
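To make the sensor layout concrete, here is a minimal Python sketch of how 138 readings might be grouped into the three rings and used to roughly estimate where a user is standing. The read_sensor callable, the ring names and the angle estimate are illustrative assumptions, not the actual Medusa code or the Phidgets API.

    import math
    import random

    # Ring layout from the description above: 34 + 46 + 58 = 138 sensors.
    RING_SIZES = {"outer_lip": 34, "top_outer": 46, "top_inner": 58}

    def poll_rings(read_sensor):
        """Group raw readings by ring. read_sensor(i) stands in for whatever
        call the Phidget Interface Kits expose for reading sensor i."""
        readings, start = {}, 0
        for ring, size in RING_SIZES.items():
            readings[ring] = [read_sensor(start + i) for i in range(size)]
            start += size
        return readings

    def estimate_user_angle(outer_readings):
        """Very rough position estimate: the angle (radians) around the table's
        lip where the outward-facing ring sees the strongest proximity response."""
        strongest = max(range(len(outer_readings)), key=outer_readings.__getitem__)
        return 2 * math.pi * strongest / len(outer_readings)

    if __name__ == "__main__":
        fake_read = lambda i: random.random()   # stand-in for real hardware
        rings = poll_rings(fake_read)
        print(estimate_user_angle(rings["outer_lip"]))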

All of this adds up to allow Medusa to support the following user interactions:

  • User Position Tracking
  • User Differentiation
  • Independent Left and Right Hand Tracking 
  • Hand Gestures (Pre-Touch Functionality) 
  • Touch + Depth Gestures

This was tested with a prototype UI creation application called Proxi-Sketch, which allows users to collaboratively develop new graphical user interfaces. You can see it all in action in the following video. If you want to know more about building the system or how its parts work, please refer to the Medusa publication.


PDF Documents are Better with Animation

We've had books with pictures in them for hundreds of years. With modern computing power we can move from static pictures in our PDF documents to dynamic animations, and tell a more compelling and understandable story, as in this Project Draco example (to see the animation in action, you may need to download the file and open it in Adobe Reader X or newer).

As we can see in the video above, there are things to consider when authoring a document with animated figures:

  • readers should not be burdened with complex UI controls
  • readers should not be distracted by the animation while reading the text

Of course there are other things to consider when creating animated figures:

  • Duration: just like with a static figure, keep the animated figure short and concise
  • File Size: keeping the animations short will reduce file size
  • Number of Animated Figures: use them sparingly but where important to communicate
  • Audio: sound can be included but can be very distracting, so use it only if necessary

In a work of entertainment, like a comic book, publishers can be freer with animations. When publishing an academic paper or instructional document, animated figures are best reserved for cases like these:

  • Demonstrating How an Interaction Technique Works
  • Illustrating Cause and Effect
  • Contrasting Visual Differences 
  • Visualizing How an Algorithm Works

You can read more about this research and follow our instructions if you want to try it out. Happy publishing! 

 


Teaching the Computer How to Draw

Simon Breslav from the Environments and Ergonomics team likes to draw, and he did some work with his colleagues at the University of Toronto on teaching the computer how to draw. The computer can study the line styles in a drawing and apply them to a new 3D model, creating a hand-drawn look similar to the artist's original creation. This can be useful both for replicating and restoring historic works and for new applications such as animated cartoons.

[Image: Teaching the computer to draw]

The group had artists hand-shade 3D models of different objects and taught the computer to analyse what the artists had done. From the drawings they extracted the following information (a simple sketch of one such record follows the list):

  • Hatching level: whether a region contains no hatching, single hatching, or cross-hatching
  • Orientation: the stroke direction in image space
  • Cross-hatching orientation: the cross-hatch direction when present
  • Thickness: the stroke width
  • Intensity: how light or dark the stroke is
  • Spacing: the distance between parallel strokes
  • Length: the length of the stroke
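For concreteness, here is a small Python sketch of how one such per-region measurement might be recorded. The field names and types are illustrative assumptions, not the representation used in the actual research.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class HatchingLevel(Enum):
        NONE = 0      # no hatching in the region
        SINGLE = 1    # single hatching
        CROSS = 2     # cross-hatching

    @dataclass
    class HatchingSample:
        level: HatchingLevel                       # hatching level of the region
        orientation: float                         # stroke direction in image space (radians)
        thickness: float                           # stroke width
        intensity: float                           # how light or dark the stroke is
        spacing: float                             # distance between parallel strokes
        length: float                              # stroke length
        cross_orientation: Optional[float] = None  # cross-hatch direction, only when level is CROSS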

These factors can be visualized:

[Image: Analyzing drawings]

 

And then synthesized, with both stages making for interesting drawings in themselves:

[Image: Drawing synthesis]

The results of the learning and its application are pretty impressive, as you can see below:

[Image: Results of teaching the computer how to draw]

This work is the first of its kind in learning the complexities and intricacies of the human artistic process. Future studies may include stroke textures, stroke tapering and randomness in strokes (such as wavy or jittered lines).


It is Indeed Possible to Type 30 Words Per Minute on a Smart Watch

Ironically, as cell phones are getting bigger, we are seeing increasing popularity of ultra-small-screen devices such as smart watches. With these smaller screens, we need to find ways to work with them more efficiently or risk these new devices being regarded as novelty items. The same old interfaces don't work.

[Image: Swipeboard on a smart watch]
What time is it? It's time for Swipeboard!

One of the most common things to do on a mobile device is to enter text. We've learned to enter text with our thumbs, so we can learn new methods too. The problem with a smart watch is that it's a one-handed device, and the screen is really only big enough for a single finger without obscuring too much of it. Not content to revert to hunt-and-peck typing, the Autodesk Research User Interface group set out to find a solution.

Enter Swipeboard

[Image: Swipeboard]

Swipeboard takes inspiration from Morse code and gestural input for an easy-to-master text entry paradigm that sees users entering more than 30 words per minute (wpm).

[Image: Morse code]
The fastest recorded Morse code entry is 140 wpm.

Swipeboard uses a QWERTY keyboard broken up into segments of 3 or 4 characters. The user simply taps in the region of the character block and then swipes to identify the character. Some users have achieved a level of comfort with the system that allows them to enter text without looking at the screen.

[Image: Swipeboard, step one]
First, a QWERTY-style keyboard is shown for selecting the character region.
[Image: Swipeboard, step two]
After a tap, the keyboard zooms in to prompt for a gesture that defines the specific character.
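To illustrate the two-step encoding described above, here is a small Python sketch that decodes a (tap region, gesture) pair into a character. The region layout and gesture names are simplified guesses for illustration, not Swipeboard's actual mapping.

    # Step 1: the QWERTY keyboard broken into blocks of 3-4 characters.
    REGIONS = ["QWE", "RTY", "UIOP", "ASD", "FGH", "JKL", "ZXCV", "BNM"]

    # Step 2: which gesture selects which position within the chosen block.
    GESTURES = {"tap": 0, "swipe_left": 1, "swipe_right": 2, "swipe_down": 3}

    def decode(region_index: int, gesture: str) -> str:
        """Map (region tapped in step 1, gesture made in step 2) to a character."""
        block = REGIONS[region_index]
        position = GESTURES[gesture]
        if position >= len(block):
            raise ValueError(f"{gesture!r} has no character in block {block!r}")
        return block[position]

    # e.g. tapping the top-left block and then swiping right gives 'E'
    assert decode(0, "swipe_right") == "E"
    assert decode(6, "swipe_down") == "V"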


Hard to believe? Watch the video of Swipeboard in action below. Note that the video is not sped up - you're seeing it work in real time.


 

What's next for Swipeboard?

Well, we'll be talking about it at UIST 2014, the User Interface Software and Technology Symposium, for starters.

[Image: Swipeboard on smart glasses]
Swipeboard could be applied to other wearable devices such as glasses.

For future work, this could be interesting to explore on other wearable devices like glasses and rings. It could also be interesting to see Swipeboard expanded from characters to complete words. What do you think?

If you liked this post, you might also like to read about Duet, a research project that looks at making a smart watch and smart phone work well together. Duet shows that 1 + 1 can equal more than 2.


Autodesk Screencast: From Idea to UI Research to Project Chronicle to You

Hopefully you've heard that Autodesk Screencast is a new tool that lets you capture your workflows to easily create powerful and engaging learning materials. What you may not know is the history of how this tool came to be.

Way back in 2010, Tovi Grossman, Justin Matejka and George Fitzmaurice from the Autodesk Research User Interface Group published a paper entitled Chronicle: Capture, Exploration, and Playback of Document Workflow Histories.

Chronicle started with the observation that the majority of tools today support undo functionality. The undo queue holds a list of the commands the user has executed, and could therefore be used to play back what the user did for others to learn from.
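As a rough sketch of that idea, the Python snippet below records commands as they land on an undo-style queue and replays them in order as a timeline. The command names and data structures are hypothetical, not Chronicle's implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Command:
        name: str           # e.g. "brush_stroke", "resize_layer" (hypothetical names)
        params: dict        # the parameters the command was executed with
        timestamp: float    # seconds since the session started

    @dataclass
    class WorkflowHistory:
        commands: list = field(default_factory=list)

        def record(self, command: Command):
            """Called by the application whenever a command lands on the undo queue."""
            self.commands.append(command)

        def playback(self):
            """Yield the commands in order, as a timeline a viewer could scrub through."""
            for command in sorted(self.commands, key=lambda c: c.timestamp):
                yield command

    history = WorkflowHistory()
    history.record(Command("brush_stroke", {"size": 12}, timestamp=3.2))
    history.record(Command("resize_layer", {"scale": 0.5}, timestamp=7.8))
    for step in history.playback():
        print(f"{step.timestamp:6.1f}s  {step.name}  {step.params}")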

From that idea, there was exploration around how to improve the video playback experience. Since video is a visual medium, it was important to give the viewer more insight into the various parts of the video, as you can see below in the Chronicle prototype built into Paint.NET, with images showing what happens at various stages and a rich timeline referencing different events. Having such a prototype allowed the group to test the concepts with users, measure the success of the tools and refine the workflows.

[Image: The Chronicle user interface]

In reviewing the Chronicle functionality with the test users, the feedback was very positive and suggested several uses:

  • Team Support: review how a colleague carried out tasks to understand the current state of a document (e.g. for trouble-shooting)
  • Implicit Learning Aid: when working with publicly shared documents, the user could review the associated tools and workflows (e.g. comparing software versions)
  • New Tutorial Format: this is a much easier way to create tutorials
  • Self-Retrospect: help a user to remember how they did something or what their tool settings were 

With this in mind, the Autodesk Research Transfer group was engaged to help bring Chronicle to a wider audience. Project Chronicle was released to Autodesk Labs, our place to share innovative new technologies in a way that lets us collaborate with our users, so more people could try it in the context of AutoCAD, Inventor and Revit.

[Image: The Project Chronicle banner from Autodesk Labs]

During this time, the toolset and interface went through some refinements (you can see a little of that in the image above). The user feedback continued to be positive, and the Autodesk Knowledge Network stepped forward to make Project Chronicle into an official tool, rebranding it as Autodesk Screencast. Here is a nice overview:

The journey from the initial spark of an idea to finished tool can take patience and many hands. With Autodesk Screencast, we hope you'll agree that it's worth it. Download Screencast now for Windows or Mac and give it a try!