The Dreamcatcher team is helping MX3D with their design for a bridge that will be 3D printed in place by robots.
Autodesk CEO Carl Bass says that one of the really cool things about this project is that it will happen in public - not behind closed doors in a lab. Doing this project in public makes it more complicated and risky, which increases the chances of learning new things.
You can see the novel printing process that MX3D has developed below. They have a multi-axis industrial robot hooked up to a welding machine via custom software.
Before MX3D developed their metal printing process, they perfected a resin-based method. This super fast curing resin neutralizes the effect of gravity during the printing process - the structure keeps its shape without drooping or sagging.
This may take a couple years to complete but should be fun to watch.
The finished bridge may end up looking like this model made with Dreamcatcher. The organic, tree-like structure fits nicely into the natural environment of a park.
ABC7 News in San Francisco put a nice story together on how the Dreamcatcher team is teaming up with Lawrence Livermore National Laboratory (LLNL) on generative design and material science. The team at LLNL is working on printing materials 1/10th the width of a human hair. Together the teams are considering what this could do for bicycle helmets.
The growing use of additive manufacturing lifts many constraints on form imposed by CNC machining and injection molding, and has led to a renewed interest in applying triangle meshes, voxels, and implicit surfaces in real-world CAD systems. However, such systems should interoperate with legacy B-Rep CAD solid modeling tools. I will discuss our ongoing attempt to combine these two domains, relying on a combination of dynamic triangle meshes and variational mesh processing.
If you work in a wet lab and need an assistant you should try out the Wet Lab Accelerator! The Wet Lab Accelerator is a tool for researchers working in synthetic biology and virology. The Bio/Nano group at Autodesk Research is developing this tool in conjunction with their experiments and is sharing it with others in the community for testing and feedback.
When working with an automated wet lab like Transcriptic, it allows you to:
Design your robotic wet lab protocols using a visual UI — no coding or scripting required.
Start from scratch or use one of our templates to get started.
When you are ready to run your protocol, Wet Lab Accelerator generates the vendor-specific code and verifies it.
Any issues are clearly highlighted so you can quickly find and correct them.
Seamlessly integrated with Transcriptic, our first automation partner, with more to come.
Set up each step of your protocol using graphical visualizations of your wet lab containers.
Often-used settings can be parameterized to ease running variations on the same protocol.
Interact with your results data through dynamic visualizations.
The Wet Lab Accelerator has an easy-to-use UI that you can run from your web browser.
If you like this tool, please share it with your friends and colleagues! You can also check out the Molecule Viewer for visualizing your data.
Imagine My City, a not-for-profit organization driven to enable and increase productive and meaningful community-based collaboration on issues related to our built environment, has been working with a number of partners, including Autodesk and George Brown College, to create a virtual reality model of Toronto. The City VR project showcases the use of mobile and immersive technologies to empower citizens to reimagine and share their aspirations about the kind of city they would like to inhabit.
Tuesday, 11 August 10:45 AM - 12:15 PM, Los Angeles Convention Center, Room 152
Justin Solomon, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, Leonidas Guibas
This paper introduces a new class of algorithms for optimization problems involving optimal transportation over geometric domains. The main contribution is to show that optimal transportation can be made tractable over large graphics domains, such as images and triangle meshes, improving performance by orders of magnitude compared to previous work.
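The paper's speed comes from exploiting convolutional structure on geometric domains. As a rough illustration of the entropic-regularization idea it builds on - not the paper's own algorithm - here is a minimal Sinkhorn sketch computing a regularized transport plan between two distributions on a 1D grid (the function names and parameter values are illustrative):

```python
import numpy as np

def sinkhorn_transport(mu, nu, cost, reg=0.05, iters=1000):
    """Entropy-regularized optimal transport via Sinkhorn iterations."""
    K = np.exp(-cost / reg)             # Gibbs kernel from the ground cost
    v = np.ones_like(nu)
    for _ in range(iters):
        u = mu / (K @ v)                # rescale rows toward marginal mu
        v = nu / (K.T @ u)              # rescale columns toward marginal nu
    plan = u[:, None] * K * v[None, :]  # transport plan coupling mu and nu
    return plan, float(np.sum(plan * cost))

# two narrow distributions on a 1D grid, centered at 0.2 and 0.7
x = np.linspace(0.0, 1.0, 50)
mu = np.exp(-((x - 0.2) ** 2) / 0.005); mu /= mu.sum()
nu = np.exp(-((x - 0.7) ** 2) / 0.005); nu /= nu.sum()
cost = (x[:, None] - x[None, :]) ** 2   # squared-distance ground cost
plan, dist = sinkhorn_transport(mu, nu, cost)
# dist approximates the squared transport distance, roughly (0.7 - 0.2)^2
```

On images and triangle meshes, the key observation of the paper is that applying the kernel `K` can be replaced by a fast convolution-like operation, which is what makes large domains tractable.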
Introducing OmniAD, a novel data-driven pipeline to model and acquire the aerodynamics of three-dimensional rigid objects simply by capturing their falling motion using a single camera. OmniAD enables realistic real-time simulation of rigid bodies and interactively designed three-dimensional kites that actually fly.
This course reviews current 3D printing hardware and software pipelines, and analyzes their potential and shortcomings. Then it focuses on computational specification for fabrication methods, which allow designing or computing an object's shape and material composition from a functional description.
Are the things you believe about user interfaces untrue? Maybe something seems logical but you've not really tested it out? Or you've just followed the crowd? Following up on our post about magic and user perception, we are now going to expose the five myths of user interface design.
Dr. Gordon Kurtenbach, Head of Autodesk Research, has been studying user interfaces in 3D computer graphics for more than two decades and gave a great talk on the myths of user interface design - things he once believed were true but didn't deliver as expected. The five myths of 3D user interface design according to Dr. Kurtenbach are:
3D Input Devices are Best for Working in 3D
3D Displays are Best for Working in 3D
Haptic Devices are the Future
Icons = Good Design
A Good User Interface is Natural
Myth #1: 3D Input Devices are Best for Working in 3D
The logic goes that we live in the physical world and are always working in 3D. Everyday activities like washing the dishes, folding laundry and driving a car all happen in 3D. The challenge is that when dealing with a computer, there's a piece of glass between the data and the user, resulting in a number of problems:
Fatigue: people are used to resting their arms on a desk when dealing with a traditional mouse and keyboard.
Positioning is not the only task: 3D input devices are generally used for positioning objects but there is more to working with data than positioning it. For example, there are a lot of data entry type tasks for defining the properties of an object.
Depth perception is required: with a piece of glass in the way, we have to find ways to replicate depth. See myth #2 for more info.
Myth #2: 3D Displays are Best for Working in 3D
There's been a lot of work in stereography and many have experienced it in the constrained environment of a movie theatre. Most stereo solutions fake 3D depth to fool your visual system, which is what can lead to motion sickness. This is not to say stereo is poorly done or unworthy of more work - it's just that there are still hurdles to overcome.
Human depth perception is complex. People with one eye can still perceive depth because we rely on a number of cues, including the height of the viewer, the height of the viewed objects, and the distance from viewer to objects.
The challenges for 3D displays include:
Intrusiveness: Viewers must wear glasses or head gear. People who don't normally wear eyeglasses often dislike putting glasses on, and those who do wear glasses can have trouble adding the extra gear on top.
Display quality: Both the quality of the content and the resolution can lower the viewer's experience. There are also range-of-view limitations that we don't have in the real world, where we can look wherever we want.
Tangible benefit: A typical problem for marketers in all industries is explaining how much better the experience gets. Consumers are faced with choices like: would you rather have a bigger monitor or a smaller stereo monitor?
Myth #3: Haptic Devices are the Future
Haptic devices reproduce the sense of touch. Try a quick do-it-yourself haptic device by putting a pen on your computer monitor or something else close by. Trace the contours. How does it feel? Now try the same thing with your finger, the palm of your hand and the back of your hand. How does that feel? It's a richer experience with your finger and hand, isn't it?
Haptic devices are currently only giving basic feedback where our sense of touch is rich. We can get feedback on texture, hardness, temperature, weight, volume, contours, and the shape of the object. Like the myths above, there is a lot of information that we need to replicate in the digital world to make it a meaningful user experience.
One place you can use haptic feedback today is with rapid prototyping. If you were designing a headset, you could 3D print it at full scale and try it on.
Myth #4: Icons = Good Design
3D users are visual people, right? And visual people prefer icons. Maybe. But you can get carried away with icons. It's very important not to confuse visual appeal with ease of use.
It's also important not to be lazy and copy the faults of others. Just because it's industry practice to use lots of icons does not mean that lots of icons are good design. Icons are a foreign language and we use pop-up tooltips as the translator. To complicate things further, we still rely on antiquated imagery to represent some operations: we still use a floppy disk to represent saving, yet plenty of people in the world have never seen a floppy disk, let alone used one.
User interface design innovators should seek to improve what exists by taking advantage of the latest technologies. How could one improve upon this situation by using the power of cloud computing? What if tool tips became more visual and played a learning video instead of a line of text? At Autodesk Research, we call it ToolClips!
ToolClips in AutoCAD provide access to extended documentation and video tutorials
Myth #5: A Good User Interface is Natural
Natural is a tricky word and can be misunderstood. Are we talking about grass and flowers in a meadow? Perhaps a health food store?
We can look at natural as a statement of skills - what do people already have? What experience from the physical world applies to operating a computer? What skills can be transferred from using a web browser to using a word processor? The pillars of direct manipulation provide additional insight:
Objects and results should be visible
Pointing and moving are strong metaphors
Incremental: allow users to work through the process
Reversible: allow users to back out of an error
Rapid: engage users with an interactive interface
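The "Incremental" and "Reversible" pillars map naturally onto a command history with undo. The sketch below illustrates the pattern in Python (the class and names are illustrative, not from any Autodesk product):

```python
class CommandHistory:
    """Incremental, reversible editing: each small step can be undone."""

    def __init__(self, state):
        self.state = state
        self.undo_stack = []

    def apply(self, do, undo):
        # Incremental: the user works through the process one step at a time.
        self.undo_stack.append(undo)
        self.state = do(self.state)

    def undo(self):
        # Reversible: the user can back out of an error.
        if self.undo_stack:
            self.state = self.undo_stack.pop()(self.state)

# moving an object incrementally, then backing out of a mistaken move
h = CommandHistory({"x": 0})
h.apply(lambda s: {**s, "x": s["x"] + 10}, lambda s: {**s, "x": s["x"] - 10})
h.apply(lambda s: {**s, "x": s["x"] + 5},  lambda s: {**s, "x": s["x"] - 5})
h.undo()  # reverses the second move; the object is back at x = 10
```

Pairing every operation with its inverse is what lets the interface stay rapid and engaging: users can explore freely because no step is irrevocable.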
Whatever we call it, we are really trying to accelerate the rate at which novices begin to perform like experts.
Helping novice users transition to experts
Is something natural the best way to turn novices into experts? A hammer and nail are relatively natural - we've been using tools to hit things for years - but nowadays many experts use a nail gun. It may not be natural but it can sure increase the rate at which someone works.
Marking menus, pictured below, are a great example of the novice (a) to expert (b) transition in software user interfaces. Looking at the expert workflow on the right, the pattern to buy fruit and vegetables looks more like Egyptian Hieroglyphics than a natural expression of grocery shopping in the real world but it is highly efficient.
Summary: Effective user interface technologies are not emulations of the real world
Being able to separate perception from reality is one of the most important things one needs to be able to do when looking at and working with technology. This doesn't mean that a piece of technology only gets one chance. Over time, problems may be resolved and the technology may live up to its promise. Dr. Kurtenbach expects that he will be proven wrong at some point on the above myths.
Dr. Kurtenbach encourages user interface designers and researchers to invent an exciting future while remembering:
Human skills and the ability to learn are powerful and deep
Place the human at the centre of design
Be a keen observer of the details and the trade-offs
Ask and discover what has changed
If you liked this post, please share it with your friends and colleagues using the buttons below.
Many people are addicted to their mobile devices and the constant flow of information. In social settings, such as work meetings, people know it's wrong and try to hide their device checking in many ways, including:
going to the bathroom
faking migraine headaches
hiding the device under a table or their clothing
The User Interface group at Autodesk Research conducted a survey of more than 200 people and 94% reported getting caught using a mobile device. Helping people to sneak a peek more easily seemed like a good challenge, and the team looked towards magicians to see if they could learn things that could be applied to software and device design.
The team came up with some pretty cool gadgets including:
secret recorders that could play back the last few seconds of a meeting through a small earpiece to cover up that you weren't paying attention
a sensor for knowing when people are behind you
information embedded in audio tones that could be perceived as meeting reminders, email notifications or a ringing phone
The Phoney Phone
The Phoney Phone is an app that makes one's phone look like it's sleeping while letting the user see the results of their tapping on an alternate screen that could be hidden in the bottom of a coffee cup. To an observer, the user may just look like they are fidgeting or contemplating the last sip of a drink.
The Magput hides sensors in a pencil and a notebook. What may appear to be random tapping or doodling could actually be some serious work.
You can see these gadgets in action, and see how hard it is to tell when someone is using them, in the following short video clip.
What does a Designer of Deceptive Devices Need to Know?
When designing for subtle interactions, designers should consider many of the same things magicians do:
User Customization: allow the user to customize their device. If they use a device that does not fit their environment or personality it could give them away
Modularity: allow the user to work with the system in pieces. Could a component of the system change location so that the user is not seen doing repetitive tasks?
Simulation and Dissimulation: Take advantage of existing devices that people obviously use. We know how most people type on a phone, so if you can hide the interaction, or make the device appear inactive to observers, they will be less likely to suspect activity.
Separating Cause and Effect: Magicians introduce delays to misdirect the audience. This is counter-intuitive to traditional UI design so it requires special consideration.
User Training: Magicians practice and so should your users - so make it easy for them.
To take this magic further, Tovi, covering for Fraser, who was getting married at the time (congrats, Fraser!), added a magician to the presentation of this research at CHI 2015. The show is below.
Additional Uses for Subtle Interactions
Beyond helping people to sneak a peek at their devices, these techniques could be used to:
enhance presentations by giving presenters extra techniques to share their information in engaging ways
help with wearable device design and interactions where users cannot use a device in a traditional manner
Happy New Year, everyone! There's lots of 3D printing news coming out of CES this week, including cheaper printers, smaller printers and food printers. One of our favourites is the Voxel8 printer that prints both plastic and conductive ink for electronics (remember our research on creating tubes and cavities in your models for interactive objects?). The Voxel8 printer team is working with Autodesk via Project Wire to place components, route 3D wires and output multi-material print data for fabrication.
With this, we wanted to make sure that you know about something else we've done. Good things come in threes:
Autodesk announces the open 3D printing platform known as Spark
Autodesk announces Meshmixer 2.7 with an API and scripting
That's right! Meshmixer now has an API so you can customize workflows, automate repetitive tasks and create new tools and abilities for 3D printing on your own. Developers can access the examples via GitHub and can choose between using C++ and Python.
Like Meshmixer itself, the API is very easy to get started with. As this is the first exposure of the API, the team is interested in feedback on what can be improved. You can share that on the Meshmixer forum.
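For a sense of what scripting a tool like this looks like, here is a rough sketch of the command-queue pattern that scripting APIs for desktop applications commonly use. To be clear, the class and method names below are hypothetical illustrations, not the actual Meshmixer API - see the GitHub examples for the real thing:

```python
import json

class CommandQueue:
    """Hypothetical sketch: a script appends named commands, then the
    batch is serialized and handed to the running application."""

    def __init__(self):
        self.commands = []

    def append(self, tool, **params):
        # queue one named operation with its parameters
        self.commands.append({"tool": tool, "params": params})

    def serialize(self):
        # a real API would send this to the app, e.g. over a local socket
        return json.dumps(self.commands)

# automating a repetitive task: select everything, then reduce the mesh
q = CommandQueue()
q.append("select_all")
q.append("reduce", percentage=50)
payload = q.serialize()
```

Batching commands this way is what makes it easy to replay the same workflow across many models, which is exactly the kind of repetitive task the API is meant to automate.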
With these three advances in 3D printing, what will you do to make 3D printing better?