ABC7 News in San Francisco put a nice story together on how the Dreamcatcher team is teaming up with Lawrence Livermore National Laboratory (LLNL) on generative design and material science. The team at LLNL is working on printing materials 1/10th the width of a human hair. Together the teams are considering what this could do for bicycle helmets.
Be3Dimensional, or B3D, will bring together global and local thought leaders to both inspire and discover how 3D technologies can disrupt industrial design, architecture, advanced manufacturing, arts and culture, and communities, all while building deep connections both locally and internationally. Autodesk is a marquee sponsor at this year's event on October 23 and 24.
"Design drives value in everything we create and 3D technologies are leading a renaissance that is reshaping how we interact with our world."
Autodesk will also be supplying some speakers.
Tatjana Dzambazova will present RIP FIX BURN / RIP MIX LEARN: How digitizing reality will change the way we create, learn, teach and experience the world.
Tom Wujec will present the closing keynote on the Future of 3D.
Tuesday, 11 August 10:45 AM - 12:15 PM, Los Angeles Convention Center, Room 152
Justin Solomon, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, Leonidas Guibas
This paper introduces a new class of algorithms for optimization problems involving optimal transportation over geometric domains. The main contribution is to show that optimal transportation can be made tractable over large graphics domains, such as images and triangle meshes, improving performance by orders of magnitude compared to previous work.
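The core machinery that makes optimal transportation tractable in this line of work is entropic regularization solved with simple alternating scaling (Sinkhorn iterations). The paper's actual speedup comes from replacing the dense kernel with fast heat-kernel convolution over meshes and images; the sketch below uses the plain dense scheme on a tiny 3-bin histogram just to show the core iteration. All numbers are invented for illustration.

```python
import math

def sinkhorn(mu, nu, cost, reg=1.0, iters=200):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    mu, nu: source/target histograms (lists summing to 1)
    cost:   cost[i][j] = cost of moving mass from bin i to bin j
    Returns the transport plan as a nested list.
    """
    n, m = len(mu), len(nu)
    # Gibbs kernel K = exp(-cost / reg); reg trades accuracy for speed
    K = [[math.exp(-cost[i][j] / reg) for j in range(m)] for i in range(n)]
    v = [1.0] * m
    for _ in range(iters):
        # alternate scalings so the plan's marginals match mu and nu
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Toy example: shift mass one bin to the right along a 3-bin line
mu = [0.5, 0.5, 0.0]
nu = [0.0, 0.5, 0.5]
cost = [[abs(i - j) ** 2 for j in range(3)] for i in range(3)]
plan = sinkhorn(mu, nu, cost)
row_sums = [sum(row) for row in plan]                      # ~ mu
col_sums = [sum(plan[i][j] for i in range(3)) for j in range(3)]  # ~ nu
```

Each iteration is just two matrix-vector products, which is what lets the approach scale to large graphics domains once the kernel multiply is made fast.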
Introducing OmniAD, a novel data-driven pipeline to model and acquire the aerodynamics of three-dimensional rigid objects simply by capturing their falling motion using a single camera. OmniAD enables realistic real-time simulation of rigid bodies and interactively designed three-dimensional kites that actually fly.
This course reviews current 3D printing hardware and software pipelines, and analyzes their potential and shortcomings. Then it focuses on computational specification for fabrication methods, which allow designing or computing an object's shape and material composition from a functional description.
Are the things you believe about user interfaces untrue? Maybe something seems logical but you've not really tested it out? Or you've just followed the crowd? Following up on our post about magic and user perception, we are now going to expose the five myths of user interface design.
Dr. Gordon Kurtenbach, Head of Autodesk Research, has been studying user interfaces in 3D computer graphics for more than two decades and gave a great talk on the myths of user interface design - things he once believed were true but didn't deliver as expected. The five myths of 3D user interface design according to Dr. Kurtenbach are:
3D Input Devices are Best for Working in 3D
3D Displays are Best for Working in 3D
Haptic Devices are the Future
Icons = Good Design
A Good User Interface is Natural
Myth #1: 3D Input Devices are Best for Working in 3D
The logic goes that we live in the physical world and are always working in 3D. Everyday activities like washing the dishes, folding laundry and driving a car all happen in 3D. The challenge is that when dealing with a computer, there's a piece of glass between the data and the user, resulting in a number of problems:
Fatigue: people are used to resting their arms on a desk when dealing with a traditional mouse and keyboard.
Positioning is not the only task: 3D input devices are generally used for positioning objects but there is more to working with data than positioning it. For example, there are a lot of data entry type tasks for defining the properties of an object.
Depth perception is required: with a piece of glass in the way, we have to find ways to replicate depth. See myth #2 for more info.
Myth #2: 3D Displays are Best for Working in 3D
There's been a lot of work in stereography, and many have experienced it in the constrained environment of a movie theatre. Most stereo solutions fake 3D depth to fool your visual system, which is what can lead to motion sickness. This is not to say stereo is poorly done or unworthy of more work - it's just that there are still hurdles to overcome.
Human depth perception is complex. People with one eye can still perceive depth because we rely on a number of cues, including the height of the viewer, the height of the viewed objects, and the distance from viewer to objects.
The challenges for 3D displays include:
Intrusiveness: Viewers must wear glasses or head gear. People who don't normally wear eyeglasses often dislike putting glasses on, and those who do can have trouble layering on the additional gear.
The quality of display: Both the quality of the content and the resolution can lower the viewer's experience. There are also range-of-view limitations that we don't have in the real world, where we can look wherever we want.
Tangible benefit: A typical problem for marketers in all industries is explaining how much better the experience gets. Consumers are faced with choices like: would you rather have a bigger monitor or a smaller stereo monitor?
Myth #3: Haptic Devices are the Future
Haptic devices reproduce the sense of touch. Try a quick do-it-yourself haptic device by putting a pen on your computer monitor or something else close by. Trace the contours. How does it feel? Now try the same thing with your finger, the palm of your hand and the back of your hand. How does that feel? It's a richer experience with your finger and hand, isn't it?
Haptic devices currently give only basic feedback where our sense of touch is rich: we get feedback on texture, hardness, temperature, weight, volume, contours, and the shape of the object. As with the myths above, there is a lot of information we need to replicate in the digital world to make it a meaningful user experience.
One place you can use haptic feedback today is with rapid prototyping. If you were designing a headset, you could 3D print it at full scale and try it on.
Myth #4: Icons = Good Design
3D users are visual people, right? And visual people prefer icons. Maybe. But you can get carried away with icons. It's very important not to confuse visual appeal with ease of use.
It's also important not to be lazy and copy the faults of others. Just because it's industry practice to use lots of icons does not mean that lots of icons are good design. Icons are a foreign language and we use pop up tool tips as the translator. To complicate things further, we still rely on antiquated technology to represent some operations. We still use a floppy disk to represent saving. We've got plenty of people in the world who have never seen a floppy disk, let alone used one.
User interface design innovators should seek to improve what exists by taking advantage of the latest technologies. How could one improve upon this situation by using the power of cloud computing? What if tool tips became more visual and played a learning video instead of a line of text? At Autodesk Research, we call it ToolClips!
ToolClips in AutoCAD provide access to extended documentation and video tutorials
Myth #5: A Good User Interface is Natural
Natural is a tricky word and can be misunderstood. Are we talking about grass and flowers in a meadow? Perhaps a health food store?
We can look at natural as a statement of skills - what do people already have? What experience from the physical world applies to operating a computer? What skills can be transferred from using a web browser to using a word processor? The pillars of direct manipulation provide additional insight:
Objects and results should be visible
Pointing and moving are strong metaphors
Incremental: allow users to work through the process
Reversible: allow users to back out of an error
Rapid: engage users with an interactive interface
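Two of the pillars above - incremental steps and reversible errors - map directly onto the classic command-with-undo pattern. A minimal sketch in Python (the `Editor` class and its methods are invented for illustration, not from any Autodesk product):

```python
class Editor:
    """Tiny text editor sketch: each action is incremental and reversible."""

    def __init__(self):
        self.text = ""
        self._undo = []  # stack of closures, one per completed action

    def insert(self, s):
        pos = len(self.text)
        self.text += s
        # record exactly how to reverse this one step
        self._undo.append(lambda: setattr(self, "text", self.text[:pos]))

    def undo(self):
        if self._undo:
            self._undo.pop()()

ed = Editor()
ed.insert("Hello")
ed.insert(", world")
ed.undo()  # backs out only the most recent step, not the whole session
```

The point is that every action leaves behind its own inverse, so the user can always work forward in small steps and back out of any one of them.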
Whatever we call it, we are really trying to accelerate the rate at which novices begin to perform like experts.
Helping novice users transition to experts
Is something natural the best way to turn novices into experts? A hammer and nail are relatively natural - we've been using tools to hit things for years - but nowadays many experts use a nail gun. It may not be natural but it can sure increase the rate at which someone works.
Marking menus, pictured below, are a great example of the novice (a) to expert (b) transition in software user interfaces. Looking at the expert workflow on the right, the pattern to buy fruit and vegetables looks more like Egyptian hieroglyphics than a natural expression of grocery shopping in the real world, but it is highly efficient.
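An expert marking-menu gesture boils down to snapping each stroke to a compass direction and walking that sequence through a nested menu. A toy decoder, assuming a four-direction, two-level menu (the labels echo the grocery example above and are invented for illustration):

```python
import math

# Hypothetical two-level menu: the first stroke picks a category,
# the second picks an item within it.
MENU = {
    "N": {"N": "buy fruit", "E": "buy vegetables"},
    "E": {"N": "checkout", "S": "cancel"},
}

def direction(dx, dy):
    """Snap a stroke vector to one of four compass directions."""
    angle = math.degrees(math.atan2(dy, dx)) % 360
    for name, centre in (("E", 0), ("N", 90), ("W", 180), ("S", 270)):
        if abs((angle - centre + 180) % 360 - 180) <= 45:
            return name
    return "E"  # unreachable; the four sectors cover the circle

def decode(strokes, menu=MENU):
    """Follow a sequence of stroke vectors through the nested menu."""
    node = menu
    for dx, dy in strokes:
        node = node[direction(dx, dy)]
    return node

# Expert gesture: flick north, then east
command = decode([(0, 1), (1, 0)])
```

The novice sees the same menu drawn radially and pauses to read it; the expert just draws the stroke pattern, which is exactly the transition the figure illustrates.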
Summary: Effective user interface technologies are not emulations of the real world
Separating perception from reality is one of the most important skills when looking at and working with technology. This doesn't mean that a piece of technology only gets one chance. Over time, problems may be resolved and the technology may live up to its promise. Dr. Kurtenbach expects that he will be proven wrong at some point on the above myths.
Dr. Kurtenbach encourages user interface designers and researchers to invent an exciting future while remembering:
Remember that human skills and the ability to learn are powerful and deep
Place the human at the centre of design
Be a keen observer of the details and the trade-offs
Ask and discover what has changed
Question: When does a motorcycle swingarm look like a pelvic bone?
Answer: When it's designed with Project Dreamcatcher!
A swingarm is the main component of the rear suspension of a motorcycle; it attaches the rear wheel to the frame. The swingarms you see below are designed with Project Dreamcatcher and get their organic shape as the system iteratively tests the strength of the piece and removes unnecessary material.
To set up for this simulation, a designer needs to specify their objectives. In this case, the objectives include the forces, the bounding space for the swingarm (as seen in the initial state above - effectively stating that the finished solution must live within this space), the connection points (where the swingarm connects to the wheel and motorcycle) and objects that must be considered in the space (the wheels and chain).
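Conceptually, the generate-and-refine loop resembles topology optimization: start from the full bounding space, estimate how hard each region is working, and shave off the least-loaded material on each pass. A deliberately toy sketch - the "stress" below is a fake one-dimensional proxy, not Dreamcatcher's actual solver:

```python
def refine(cells, stress, keep_fraction=0.5, passes=2):
    """Iteratively discard the least-stressed cells of a design space.

    cells:  ids covering the bounding space the solution must live in
    stress: function cell -> estimated load (stand-in for a real solver)
    """
    cells = set(cells)
    for _ in range(passes):
        ranked = sorted(cells, key=stress)  # least-loaded material first
        keep = max(1, int(len(ranked) * keep_fraction))
        cells -= set(ranked[:len(ranked) - keep])
    return cells

# Toy 1D "swingarm": ten cells between two connection points at 0 and 9,
# with an invented stress field that peaks at the connection points.
fake_stress = lambda c: max(10 - c, c + 1)
design = refine(range(10), fake_stress)
# with this toy stress field, only the two connection-point cells survive
```

The real system runs a structural simulation per pass and respects obstacles and connection points, but the shape of the loop - simulate, rank, remove, repeat - is what produces the organic geometry.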
Connection points for the swingarm
Obstacles for the swingarm - a chain is placed on both sides to create a symmetrical result
Dreamcatcher can produce many options for a designer to choose from. Here are some alternative swingarms.
From these options a designer could then decide to do further work, such as:
change the shape if they want something less organic and more traditional looking
develop wings for a footrest or saddlebags
add decorations like an embossed logo
Dreamcatcher is a collaboration between the Design Research and Computational Science groups at Autodesk Research. The Computational Science group is looking at the simulation and generation of these shapes using high performance computing options like GPUs and the cloud. The Design Research group is exploring the user experience for designers and how to push beyond the limits of what is possible today. This makes for a lot of exciting possibilities with Project Dreamcatcher - what would you like to design?
4D Printing adds the dimension of time to 3D Printing. Instead of printing stable and static objects, with multi-material printing we are starting to manufacture soft and active objects that can react to their environment. In our post on Synthetic Biology for Architects we talk about the potential of growing a house from a seed. In this post we'll talk about some of the steps being taken by the Autodesk Research Programmable Matter team to get there.
Other than growing a house, why else might you want a 3D printed object to change over time?
Soft robotics and bio-inspired robotics are one popular reason. These soft machines inspired by nature are particularly interesting to medical science at smaller scales, where they can be applied within the body. Another reason might be that the object being manufactured is larger than the printer's build volume but can be printed folded up.
With this research we are using the Nucleus Physics solver to help simulate the behaviour of the objects - they can bend and stretch.
The objects are composed of bars and disks. The disks in the center act as stoppers. By adjusting the distances between the stoppers it is possible to set the final folding angle.
The magic of this process is the combination of two materials at printing time. We use a rigid plastic base and a material that expands upon exposure to water. The expanding material is a UV-curable polymer that absorbs water and swells into a hydrogel with up to 200% of the original volume.
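A quick sanity check on what "up to 200% of the original volume" implies for the printed geometry (the 10 mm segment length is an invented example):

```python
# The hydrogel reaches up to 200% of its original volume, i.e. a 2x
# volume ratio. Each linear dimension therefore scales by the cube
# root of that ratio - a modest ~1.26x strain, which the rigid bars
# and stoppers convert into a controlled folding angle rather than
# an uncontrolled blow-up.
volume_ratio = 2.0
linear_swell = volume_ratio ** (1 / 3)

# e.g. a 10 mm expanding segment grows to roughly 12.6 mm
segment_mm = 10.0
swollen_mm = segment_mm * linear_swell
```

That modest linear strain is why small adjustments to the stopper distances are enough to set the final folding angle precisely.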
With this system we've been able to create a variety of shapes getting as complicated as this undulating grid pictured below.
In the video below you can see the objects change over time as they are immersed in water.
This project is currently focusing on trans-tibial (below the knee) prosthetics. Above the knee is known as trans-femoral and you may have heard of the complementary prosthetic knee project D-Rev is working on with the Autodesk Foundation. Both of these projects are helping the developing world by reducing costs from thousands or even tens of thousands of dollars down to tens of dollars.
The team recently went to Uganda to visit the prosthetists at CoRSU hospital to familiarize them with the latest tool developments, get their feedback and test the tools beyond the home lab.
The prosthetics lab at the hospital is a workshop with a lot of familiar hand tools. Here we see Dr. Ratto from the University of Toronto.
Prosthetics have two primary parts – the socket for the limb to fit into and the prosthetic limb. Here we see some lower legs with feet. These parts can be reused as the patient grows.
The current process is time-consuming and produces more waste than necessary: it requires creating a plaster mold of the residual limb and then creating a plaster positive of the limb to vacuum form a plastic socket around. Here we see some plaster positives ready to be discarded.
Once the plastic socket is created, the hand tools are used to improve the fit and comfort for the patient. Using a 3D scanner (the team is using Sense for this project) provides a better fit without the waste and lets the team go straight to 3D printing a socket.
The team has taken advantage of the API in Meshmixer to create a wizard to streamline the process of cleaning the scan and preparing it for printing. This can now take as little as thirty minutes.
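The wizard's job is essentially pipeline orchestration: run a fixed sequence of clean-up operations on the scan so the prosthetist doesn't have to drive each tool by hand. A schematic sketch in plain Python - the stage names mirror typical mesh clean-up steps but are placeholders, not actual Meshmixer API calls:

```python
def make_pipeline(steps):
    """Chain mesh clean-up steps into one callable, logging each stage."""
    def run(mesh, log=None):
        for name, step in steps:
            mesh = step(mesh)
            if log is not None:
                log.append(name)
        return mesh
    return run

# Placeholder stages standing in for the operations a wizard would
# drive through the scripting API; the dict is a mock "mesh".
stages = [
    ("remove_floaters",    lambda m: {**m, "floaters": 0}),
    ("fill_holes",         lambda m: {**m, "holes": 0}),
    ("smooth",             lambda m: {**m, "smoothed": True}),
    ("hollow_and_thicken", lambda m: {**m, "wall_mm": 4}),
]

prepare_socket = make_pipeline(stages)
log = []
scan = {"floaters": 12, "holes": 3, "smoothed": False}
printable = prepare_socket(scan, log)
```

Encapsulating the sequence this way is what turns a day of manual tool-driving into a thirty-minute, repeatable workflow.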
This brings the process down from a week to a day. Here Moses Kaweesa from CoRSU inspects a 3D printed socket and the bolt assembly that attaches the prosthetic limb.
Dr. Schmidt works on the digital tools at a more traditional workstation.
And then takes a break from coding to untangle some filament for the 3D printer.
And then returns to coding in a more relaxing location.
Here is Ruth trying on the first 3D printed socket. She is not only a patient but also a volunteer at the hospital helping to develop this process while pursuing a degree in architecture.
Ruth's socket fits and everyone is happy!
If you look closely at the socket Dr. Schmidt is holding, you can see a horizontal line: the socket was printed in two pieces to reduce the delivery time. The two pieces were joined with a mirror welder at the hospital.
This is Rosaline trying out her new leg.
It was a successful trip and the results show that the process is working. In thinking about the predictions of needing 40,000 prosthetists across the developing world, reducing the time for a new limb from a week to a day is very significant. This helps the doctors work with more patients but it also helps the patients save money on travel costs and lodgings during treatment. The time for 3D printing is the longest part of the process so as 3D printers get faster, the process will get even faster.
"I really wasn't expected to be called a Biohacker but I don't mind"
What a great way to start a presentation! Andrew Hessel is part of the Bio/Nano/Programmable Matter group at Autodesk Research and that is how he started his fascinating presentation at the WIRED2014 Conference. In his presentation, Andrew talks about how powerful cells are and how they form networks similar to LANs (our organs and tissues) and WANs (our bodies).
A human cell is the most powerful and complex machine in the known universe. It runs on sugar and lasts a long time.
This is what the program looks like for our bio computers
From this foundation he goes on to share how the maker movement is coming to biology. Andrew's ultimate goal with his work is to bring down the price of drug discovery and make more medicine available to all.
Autodesk's Makerspace at Pier 9 in San Francisco
Autodesk has a Life Sciences laboratory as part of the Makerspace at Pier 9
One can now 3D print cells and DNA
With the landscape set, Andrew begins to talk about fighting cancer with 3D printed viruses. You can create a really weak virus that our body can fight off, yet one that can still hack cancer cells, hijacking them to produce more viruses that kill the other cancer cells. These are called oncolytic viruses.
A synthetic virus designed on a computer and printed in the lab
Now that he has created a virus he will be working on a more specific cancer-fighting virus. You can watch the full video below and learn more about this important work.
At the beginning of the video, Andrew shows an interactive tool created by the Health Sciences group at the University of Utah to illustrate the scale at which he works. It is available to the public to learn from and explore.
Happy New Year, everyone! There's lots of 3D printing news coming out of CES this week, including cheaper printers, smaller printers and food printers. One of our favourites is the Voxel8 printer, which prints both plastic and conductive ink for electronics (remember our research on creating tubes and cavities in your models for interactive objects?). The Voxel8 printer team is working with Autodesk via Project Wire to place components, route 3D wires and output multi-material print data for fabrication.
With this, we wanted to make sure that you know about something else we've done. Good things come in threes:
Autodesk announces the open 3D printing platform known as Spark
Autodesk announces Meshmixer 2.7 with an API and scripting
That's right! Meshmixer now has an API so you can customize workflows, automate repetitive tasks and create new tools and abilities for 3D printing on your own. Developers can access the examples via GitHub and can choose between using C++ and Python.
Like using Meshmixer itself, it is very easy to get started with the API. As this is the first release of the API, the team is interested in feedback on what can be improved. You can share that on the Meshmixer forum.
With these three advances in 3D printing, what will you do to make 3D printing better?