Geomagic debuts parametric exchange for NX on labs website

Published 02 November 2009

Posted by Greg Corke

Article tagged with: nx, geomagic, reverse engineering, point cloud

At DEVELOP3D we love the increasing use of Labs websites to let users try out new tech in development - and this latest development from Geomagic is sure to interest users of NX. Geomagic has launched Parametric Exchange for NX, which provides an intelligent connection between Geomagic Studio and NX. The tool is designed to take point cloud data and reconstruct it in NX as a parametric CAD model, complete with model tree. We’d love to hear from any NX users out there that try it out.


Hands on with Inventor Fusion Tech Preview 2.0 & Change Management

Published 28 October 2009

Posted by Al Dean

Article tagged with: autodesk, inventor fusion, direct modelling, change management

Finally got around to my subscription update with Inventor 2010 (you need both SP1 and the Subscription pack), so I’ve spent most of the day playing with Fusion Tech Preview 2.0, which was announced earlier this week and made live at the same time. What I’ve found has intrigued me, but before we get onto that, let’s look at what we’re dealing with.

Fusion is about two things. First, Fusion TP2 brings some changes and refinements to the user interface, which Autodesk is testing out with this project to see what people like, what they don’t and what sticks. The changes for this update include a rework of the geometry selection tools (the feature recognition for selection is still a bit flaky), but for me it’s the work done on the triad that makes the whole direct editing thing work as you would expect - there’s a small glyph you hit to align the triad to other geometry.

While these are interesting, the big news is the introduction of the Change Management add-in for Inventor. Essentially, this enables the round-tripping of data between Fusion and Inventor. You have an Inventor part and you want to edit it in Fusion: load it, edit it directly, without recourse to geometry history, and save it. Load it back into Inventor and rationalise the changes to update the history and feature tree. It tracks what you’ve edited, what you’ve added and what you’ve deleted, and updates the history and feature data accordingly.

This is the background to the Fusion name: Autodesk is attempting to fuse direct modelling with history-based modelling tools. The eventual goal, according to the powers that be, is not two separate applications; the plan is to prove out the technology and integrate it all into core Inventor, rather than provide it as a standalone application. Hence it’s a Technology Preview, not a Product Preview.

So, let’s have a look and see what we can do.

1. Let’s start with a simple part, one of Inventor’s sample parts. As you’ll see, it’s not particularly complex. The feature tree shows a few extrudes, a few holes and that’s about it. Easy meat for Fusion to edit.

2. Loaded up into Inventor Fusion, we started by removing this hole feature. Easy job: you simply select the faces you want to take out, and the system removes them, deletes the internal boundaries and closes out the surfaces.

3. Once that counterbored hole is gone, I shifted the remaining hole along the edge and back into the part a little.

4. Next I grabbed this feature, eight faces in all, and shifted it a few mill up the face. The new triad orientation tool works very nicely indeed.

5. The final move was to take these two other holes and use the Press/Pull command to make them both 2mm larger by offsetting the internal faces. Not an accurate way to redesign a hole, but for this purpose it’ll do.

There you have it: four edits. Two features have moved in position, one has been deleted and two have been resized. It took a couple of minutes at most to effect these changes. So let’s see what happens when you read it back into Inventor Professional.

When you read the part into Inventor, the Change Management add-on kicks into gear and inspects the part you’re loading. What’s interesting is that because I was using .ipt files and Fusion saves out .dwg files, I wasn’t actually loading the original file back into Inventor. Evidently Fusion, when making edits, stores the original feature information somewhere, with the new edits alongside it. So what happens?

6. The Change Management add-on presents you with a list of edits made to the model and shows you graphically on screen where those changes occur: blue/yellow showing before/after for geometry changes, and red showing deletes. In the image above you can clearly see the four edits we made.

You have the ability to control exactly what happens with each detected change. You can have them executed, so the feature gets rebuilt, you can ignore it, or if you encounter problems, then you can choose to have the faces extracted and the system try to patch them back into the model with the Sculpt command.
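The per-change control described above can be pictured as a small data structure, with one disposition per detected edit. This is purely an illustrative sketch; the names and structure are hypothetical, not Autodesk’s actual API.

```python
# Hypothetical sketch of the per-change dispositions the Change Management
# add-on offers: apply (rebuild the feature), ignore, or fall back to
# extracting the faces and patching them in with Sculpt.
from dataclasses import dataclass

@dataclass
class DetectedChange:
    feature: str
    kind: str               # "moved", "resized" or "deleted"
    action: str = "apply"   # "apply", "ignore" or "sculpt"

def reconcile(changes):
    """Group each detected change by the action chosen for it."""
    summary = {"apply": [], "ignore": [], "sculpt": []}
    for c in changes:
        summary[c.action].append(c.feature)
    return summary

# The four edits from the walkthrough, with illustrative dispositions.
changes = [
    DetectedChange("counterbored_hole", "deleted"),
    DetectedChange("shifted_hole", "moved"),
    DetectedChange("eight_face_boss", "moved", action="sculpt"),
    DetectedChange("enlarged_holes", "resized", action="ignore"),
]
result = reconcile(changes)
```

Each change carries its own action, which mirrors the fact that you can hit Apply All or deal with problem features individually.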

Thinking this was a relatively simple set of changes, I hit Apply All.

It broke. And while it rebuilt one of the features, very little else worked.

I was surprised, so I did some digging.

What it boils down to is that, because of the nature of whatever Autodesk is doing with Fusion and the edits it makes, there are at present very explicit limitations on what can and can’t be done when you’re moving data between Fusion and Inventor and hoping to have the history and feature tree reconciled and maintained.

Your features have to be features in their own right. There can be very little interdependency between features: no patterns, no mirrors, no nothing. If you try to edit a child of a pattern, you get an error. If you try to edit the parent, you have problems with updates. I looked into this with the simple part we have here.

If you look at the history tree, you’ll find it’s all driven by patterns, and there are three core features that are repeated throughout the part. I’ve colour coded them above and below: the grey is the base extrude, the green features are the parents, the red features are the children. Below shows what happened when I edited just the parents. Because patterns are based on linear references, the pattern tries to update too, and that breaks the history tree.

Now, I don’t want you to think I got the hump, threw the toys out of the pram and proclaimed “this Fusion stuff doesn’t work”. I didn’t. I tried out some more parts, built some myself, tried them out and found that there are instances where the Change Management tools actually work, and work well.

But these are very simple operations in very rare conditions, where the feature has almost no effect or influence on other parts of your geometry. For anything else, if you want to get this to work with this first release (and that’s key to note: this is the first public showing of a new technology, NOT a product), you have to understand exactly how your changes will affect your history tree if you want to reconcile even the simplest of changes.

This struck me as odd. Part of the pitch for direct modelling is that it’s easier to use when you either aren’t aware of how a part’s been built, have forgotten, or know that a simple change you need to make is going to royally mess with your history-based part. The irony is that, unless Autodesk pulls something out of the bag in terms of improving this Change Management technology, if you do use a technology like Fusion you’re going to have to be just as familiar with your part’s build history, its method of construction and the effects a seemingly small edit will have. At that point you might as well just do it in core Inventor.

What should also be noted is that this doesn’t change Fusion’s potential as a direct modelling technology. The user interface, although much the same, feels slicker, with a nice workflow and use model now developing, and its power is there. The ability to open a part, make edits and get on with the job at hand, without worrying about data source, is a good and valid one for many - and Fusion executed all the operations I wanted to do perfectly. It seems more consistent, more robust and more repeatable than the first Tech Preview. Or maybe that’s just me.

But in the end, I can’t help but wonder where the Change Management technology is going and whether it’s actually worth the effort, or indeed even possible, to get it to work in a reasonably functional manner. Kudos to Autodesk for trying to pull off what’s an incredibly complex thing. The good news is that this is all available for free, so users can see what it can do, try it out and feed back into the process. Autodesk Labs (as with all Labs sites) is all about trying things out. Some things work and some don’t. Some make it to market, some don’t. Portions of some products get shipped while the rest gets scrapped. The good thing is that these days, we all get to try it out and see for ourselves.

Looking at the industry as a whole, there are many different approaches that vendors are taking to the direct modelling world, and this is just one avenue of experimentation. For those with an interest in how product modelling is moving forward, these are truly interesting times.


Inventor Fusion Tech Preview 2.0 Goes live: history & non-history do the fandango.

Published 26 October 2009

Posted by Al Dean

Article tagged with: autodesk inventor, inventor fusion, history free modelling

I got briefed about this last Friday (as did the rest of the press/blogger community, I’m sure), but I wanted to wait till it was up and live on the Autodesk Labs website and I’d had a chance to play with it. When Autodesk publicly and officially announced Inventor Fusion Technology Preview, much was made of the name and the aims of the technology.

This is Autodesk’s answer to the likes of Synchronous Technology, Instant3D and all the other non-history-based modelling technologies out there. What the company has focused on, with both the messaging and the name, is the ability to rationalise traditional feature+history-based model edits with the history-less modelling practices found in Inventor Fusion.

Essentially, you could edit a traditional Inventor model in Fusion, without using features of any kind, then pass it back to Inventor to rationalise the changes and update the history tree to integrate those changes back into the history-based model. This isn’t the history-tree appending method used by the likes of NX and SolidWorks, but something more integrated at the very core of your modelling history. The problem was that this capability wasn’t available in the first Technology Preview, which many missed. That changes today with the introduction of the Change Manager add-on for Inventor, alongside Tech Preview 2.0.

Today, Tech Preview 2 has gone live and you can download it and play with it. Again, you can try Inventor Fusion if you’re in one of the qualifying countries, but you’ll need to be a subscription customer using the Inventor 2010 Subscription release if you want to try the Change Manager. The two pieces of code (Fusion 2.0 and Change Manager) are delivered as separate zips/installs.

It’s also worth remembering that separate applications are not the end goal of this process. This is a Technology Preview, and separating the Fusion tech from core Inventor allows the team to play with and distribute the code and see how users like the interaction between the two different types of modelling methodology. The end goal is that Fusion technology will be built directly into Inventor, not sold as a separate application.

We’ll be trying it out later on today, once we’ve updated Inventor to the latest release (without which the Change Manager won’t work). There are also some new additions and changes to the core Inventor Fusion tools, so stay tuned for more. In the meantime, here’s some video fun for you.


New SketchBookMobile: 1.1 a-go-go

Published 23 October 2009

Posted by Al Dean

Article tagged with: autodesk, iphone, sketchbook mobile, muzak, drawing

As ever, it looks like Josh beat us to the jump with this one, but it’s worth covering a little. Autodesk has just pushed out the 1.1 release of SketchBookMobile, and it addresses some of the issues with the initial release: namely, layer preservation (you can now push a .PSD file out to Photoshop), brush previews when you’re resizing them and, for me, the big one, importing landscape images (which is something I’d asked about when it launched). It’s available now on the App Store. Josh also has a very handy comparison chart looking at other sketching apps for the iPhone.

Finally, here’s a slick little vid* that shows a workflow with moving data from concept to 3D with SketchBookMobile and Inventor.

* nice video, but honestly. Where the hell are they getting this music from?


Nvidia to take CAD rendering to the Cloud with RealityServer 3.0

Published 21 October 2009

Posted by Greg Corke

Article tagged with: nvidia, the cloud, tesla, gpgpu, realityserver

Nvidia and mental images are reaching for the Cloud, offering ray-traced rendering over the web using stacks of GPUs (Graphics Processing Units) instead of CPUs. Set for official launch at the end of November, Nvidia’s RealityServer 3.0 platform will enable architects, automotive engineers and product designers to send 3D scenes up into the cloud, with the rendered results streamed back over the web. The major sell is significantly reduced rendering times, but the tech will also be able to stream interactive 3D to any web-connected device, including mobile devices - though of course bandwidth will be an issue.

The platform is highly scalable: more users can be serviced simply by adding more GPUs. Nvidia is already talking to a number of cloud computing providers and expects to announce partnerships with several of them later this year, one of them being Amazon EC2 (Elastic Compute Cloud). The cost of cloud-based deployment is expected to be less than one euro per hour.

While the Cloud computing aspect of the technology is sure to dominate the headlines, of equal interest is the fact that RealityServer 3.0 can be deployed within the confines of a firewall, not only as a GPU-based ‘render farm’ to serve up rendered scenes in double quick time, but also as a means to distribute interactive 3D graphics throughout the enterprise.

The background to this technology is Nvidia’s CUDA programming architecture that enables Nvidia GPUs to carry out computationally intensive tasks usually reserved for CPUs. CUDA was used to devise a new GPU-based rendering mode called iray, which is based on mental images’ mental ray 3.8 rendering engine. This is different to most rendering technologies which rely on CPUs to do the calculations.

On the hardware side, RealityServer consists of multiple Nvidia Tesla GPGPU (General Purpose GPU) cards, which are used to render out the scenes, plus a few CPUs, which are really just used for housekeeping, says Nvidia.

The technology is already primed to be exploited by a number of 3D CAD companies. Over ten major CAD applications already use mental ray, including those from Autodesk (3ds Max, Inventor, Revit), SolidWorks, Dassault Systemes (CATIA) and, most recently, PTC (Pro/Engineer Wildfire).

The critical technology here is mental ray 3.8, which is due for release later this year and will enable GPU-accelerated mental ray rendering for the first time. Once these vendors implement mental ray 3.8 in their core products, they would have all the tools to hook up to RealityServer, says mental images, but for some CAD software, particularly the more mature products that carry a lot of ‘architectural baggage’, the implementation would not be trivial. That said, mental images told DEVELOP3D that development is already underway at many CAD companies and it expects to see applications supporting RealityServer next year.

While mental images was unable to name names, it did confirm that all of the aforementioned CAD developers are already working on systems that would allow them to virtualise their applications, or at least to have a server-based collaborative solution directly connected to their applications. As a result, the company is confident that this technology is well placed to take a lot of work off the CAD developers’ plate, as it is essentially offering them a whole suite of tools to get started faster instead of doing everything themselves. mental images also disclosed that Autodesk showcased the technology at a conference in Munich, Germany, only yesterday.

In terms of the actual rendering technology, RealityServer is a progressive renderer, so users are able to get a good idea of the final render in seconds or minutes, even if the final rendering takes hours. For comparative render times between CPU and GPU-based solutions, it was hard to draw mental images on exact figures. However, the company did provide an example of an architectural scene that took 45 minutes to render on a four-Tesla cluster and 8-10 hours on a more traditional four-core CPU-based system. That said, it was wary of comparing apples and oranges, as the scenes were not identical: the GPU renderer is slightly different from the CPU renderer in terms of shading technology. The company did say that it would be providing benchmark results from customers next month and that the early results are encouraging.
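Taking the example quoted above at face value, the implied speed-up is easy to work out. Bear in mind the scenes were not identical, so this is indicative only.

```python
# Back-of-envelope speed-up from the article's example: 45 minutes on a
# four-Tesla cluster versus 8-10 hours on a four-core CPU-based system.
gpu_minutes = 45
cpu_minutes_low, cpu_minutes_high = 8 * 60, 10 * 60

speedup_low = cpu_minutes_low / gpu_minutes    # roughly 10.7x
speedup_high = cpu_minutes_high / gpu_minutes  # roughly 13.3x
```

So the quoted figures suggest a speed-up in the region of 10x to 13x, with the usual caveats about differing scenes and shading technology.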

While for most CAD users the emphasis is likely to be on using RealityServer as a rendering server, mental images was keen to point out that it also provides a platform on which companies can build applications that utilise the technology in different ways. In the automotive sector, for example, it is already working with a number of manufacturers on projects to develop and enhance their in-house design/review pipelines. A dedicated car paint shader is also in development and will be released early next year.

For those that wish to set up their own facility there are three different packages. In true American style there is no Small: instead there’s just M, L and XL. Medium is a 2U rack-mounted system with 8 Tesla GPUs, suitable for smaller architectural offices and product design teams with 10s of concurrent users. Of course, this depends on the intensity of use, and some customers may need to dedicate four GPUs to a single task. The ‘Large’ package features 32 Tesla GPUs for 100s of concurrent users, while ‘XL’ features 100 Tesla GPUs for serving 1,000s of users over the web.

Nvidia is still working on overall system costs, but with a single Tesla card costing in excess of 1,000 euros, one may speculate that a medium system would cost around 15,000 - 20,000 euros just for the hardware. On the software side, customers should expect a one-time licensing cost of 2,000 euros plus 20% maintenance per Tesla card.
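Putting those figures together for a ‘Medium’ 8-GPU system gives a rough first-year total. The hardware price per card here is an assumption (the article only says a Tesla card costs in excess of 1,000 euros), so treat the result as a ballpark.

```python
# Rough cost model for a 'Medium' (8-GPU) RealityServer setup, using the
# figures from the text. HW_PER_CARD is an assumed value chosen so the
# hardware total lands in the speculated 15,000-20,000 euro range.
TESLA_CARDS = 8
HW_PER_CARD = 2000            # assumed; includes chassis/CPU overhead
SW_LICENCE_PER_CARD = 2000    # one-time licence, per the article
MAINTENANCE_RATE = 0.20       # 20% annual maintenance, per the article

hardware = TESLA_CARDS * HW_PER_CARD                      # 16,000 euros
software_upfront = TESLA_CARDS * SW_LICENCE_PER_CARD      # 16,000 euros
annual_maintenance = software_upfront * MAINTENANCE_RATE  # 3,200 euros/year
first_year_total = hardware + software_upfront + annual_maintenance
```

On those assumptions, a Medium system would come in somewhere around 35,000 euros for the first year, with maintenance recurring annually thereafter.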

From complex architectural visualisations and 3D city modelling to product design and automotive styling, the CAD-centric target markets for RealityServer are huge. And with mental ray already the rendering engine of choice for most major CAD developers, one may speculate that it’s only a matter of time before RealityServer becomes a widely supported platform for CAD.

What makes this technology particularly interesting is the fact that it is designed to use GPUs in the Cloud and not CPUs, but this is also a current barrier to deployment. None of the large Cloud service providers currently offer GPUs in their facilities, but Nvidia expects this to change early next year. This coupled with the expected release of RealityServer-compatible CAD products should make 2010 a very interesting year for rendering in the Cloud.


In search of Elegance #4: Surfacing. Without the headache

Published 19 October 2009

Posted by Al Dean

Article tagged with: design, delcam, powershape, in search of elegance, surfacing, intelligent surfacing, hybrid modelling

Basic construction of a Viking sailing ship. Boat design is one of the most elegant forms in the world of engineering: simple, efficient and timeless (and when I say timeless, I mean since 1500 BC). Image courtesy of the good folks at

Here’s something I was reminded of recently on a trip to see the team at Delcam (at their HQ, a scant 15 miles from my home, rather than 8,000 miles away in Korea this time). If you’re not familiar with its solutions, Delcam has a huge range of technology which often solves real, live problems faced in the heady world of design and manufacture, rather than, as some vendors choose, creating solutions looking for problems. While there’s much elegance in many areas of Delcam’s offering, one thing leapt out at me – and that’s how its flagship modelling system handles surface creation.

Surfacing is a complex business. From first principles, when you’re trying to create sculpted, complex forms, you’re looking at an inherently more complex workflow than when working with prismatic features. The geometry is more complex, so the creation of it is going to be more complex, right?

Traditionally, yes. Absolutely.

Surfacing requires that you first build a network of curves, and the precise form of those curves is controlled not only by the form you want to create, but by how you want to create it. There are many types of surface. Planar surfaces are flat and the simplest. Then you have four-sided surfaces, n-sided surfaces, bi-rail surfaces, extrudes, lofted surfaces, swept surfaces, blends, flanges and fillets. Filleting in itself is a very complex art, depending on your form requirements. If you’re working with corners, you’re looking at trying to merge three or more surfaces converging on a single point, and at that point you might want different fillets and different set-back values.

All in all, it’s a complex and often daunting prospect – particularly for those that have learned their trade-craft using mainstream solid modelling applications. Knowing what forms you’re aiming for is essential when creating curves (often referred to as wires), before you even get to actually creating a surface.

Delcam’s PowerShape has been on the market for about ten years, and the company has been through revision after revision to give its users a set of tools that let them work with complex geometry, fix it and prepare it for manufacture. That’s given it a perspective shared by only a handful of vendors. Delcam has a set of tools used by a community that’s both a) demanding (as they need flawless data – which begets flawless tool forms) and b) very used to dealing with crappy third-party data. These are the people that take crappy data and turn it into a manufacturable item – something that requires highly efficient tools.

Perhaps the perfect example of this is how PowerShape handles surface creation. As we’ve discussed, you’re often facing multiple decisions about what curves to create, then what exact type of surface you want to create, before you even start to think about creating any geometry. What Delcam has developed is Smart Surfacer, and it takes many of these decisions out of your hands – or at least gives you a helping hand.

Basically, you create the curve network you want, then invoke the Smart Surfacer command. This presents a simple dialog box. With this active, you start to select geometry, either curves or existing surface edges. The system inspects your selections, looks at the types of surface it can create, then presents you with its best guess at the type of surface you could create from that selection. As you add more geometry to the selection, it re-evaluates the choice, switches the surface type and displays a preview.
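The selection-driven guessing loop can be sketched in pseudocode-like form. To be clear, this is an illustrative toy, not Delcam’s implementation: the function name, the entity labels and the two rules it encodes are all hypothetical, chosen only to mirror the worked examples that follow.

```python
# Illustrative sketch of a selection-driven "best guess": each time the
# selection changes, candidate surface types are re-evaluated and the best
# fit for the current selection is returned (PowerShape previews it live).
def best_surface_type(selection):
    """Pick a surface type from a list of selected wireframe entities."""
    CLOSED = ("circle", "rectangle")
    closed = [s for s in selection if s in CLOSED]
    open_curves = [s for s in selection if s not in CLOSED]
    if len(closed) == 1 and not open_curves:
        return "planar fill"          # a single closed profile: fill it flat
    if closed and open_curves:
        return "drive curve"          # open curve drives the closed profile(s)
    return "unknown"                  # fall back to manual tools

# Mirrors the first worked example: circle alone gives a planar fill,
# adding the connecting arc switches the guess to a Drive Curve.
assert best_surface_type(["circle"]) == "planar fill"
assert best_surface_type(["circle", "arc"]) == "drive curve"
```

The real system obviously reasons over actual geometry rather than labels, but the interaction pattern is the same: grow the selection, and the best guess updates with it.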

1. Take this simple geometry set – two circles and a connecting arc.

2. Select the smaller circle and you get a planar fill surface.

3. Add the connecting arc to the selection and it’ll switch to a Drive Curve, pushing the arc around the circle.

4. Adding in the large circle maintains a Drive Curve, but runs it between the two circles, using the arc as the Drive Curve.

Here’s another example:

1. Rectangle, helix, circle. Selecting the rectangle gives you a planar surface.

2. Adding the helix into the selection gets you a Drive Curve that’s very similar to a swept feature.

3. Adding the circle in switches the Drive Curve to push between the two forms, creating a smooth transition.

Of course, these are pretty simplistic demonstrations for the purposes of getting the concepts across, but the usefulness and simplicity of the tool should be clear. Quite often you’re not dealing with singular surfaces such as these, but with the complexity of trying to finish up a set of surfaces: squeezing in those final few that tie together the whole form, at the points where form quality is won or lost – and that’s exactly where this tool comes into its own. Rather than you having to rework other surfaces in the set to patch in that final surface, the system can find the optimum solution and present it to you for inspection and fine-tuning. There are also more manual tools available from the command, such as the Composite Curve creator, which can assist greatly when you have multiple disjointed surfaces meeting in one area.

PowerShape’s Smart Surfacer is a perfect example of what I’m looking for in this search: a deceptively simple tool that collects together best practice, knowledge and experience of dealing with some of the worst geometry known to man, and presents it in a form that adds that intelligence in an unobtrusive manner, while giving you the freedom to dive in and edit things manually if needs be.

PowerShape-E is available for free, to play with at your leisure – I’d recommend doing so to anyone with a passing interest in complex shape description.


Z Corp launch integrated monochrome 3D printer

Published 15 October 2009

Posted by Al Dean

Article tagged with: prototype, rapid prototyping, z corporation, zprinter 350

Z Corp has released the latest addition to its range of 3D printer products with the launch of the ZPrinter 350. As many readers will know, Z Corp is one of the leaders in the 3D printing world, where speed and low cost are absolutely key to supporting the product development process. While the company always grabs headlines with its colour printers, there’s still a big market for monochrome machines. Running costs are lower, the machines cost less (due to the reduction in complexity) and for many, the ability to quickly create a series of prototypes, discuss them around a table and progress the design is all they want.

What this brings to the product line-up is an advancement of the existing 310 monochrome product, adding the integrated post-processing capabilities of machines like the 450 and 650 (which we took a look at a while back) to give you a system that builds quickly and provides the tools to break the model out of the build chamber, recycle the unused material and post-process the part. It also takes advantage of Z Corp’s most recent build powder (ZP150), which gives you a much whiter model (ideal for concept development and for architectural users) and a much more robust green model (green refers to the state before you infiltrate the model to ‘fix’ it).

Build volume is a very usable 203 x 254 x 203 mm, and it builds at just under Z Corp’s benchmark 1” per hour (the company quotes 0.8” per hour), with a print resolution of 300 x 450 dpi (no mention of layer thickness). One thing I did find interesting was the discussion of the affordable nature of the machine.
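As a quick sanity check on that build rate, the quoted figures imply a full-height build takes around ten hours. This is simple arithmetic on the numbers above, not a vendor-quoted build time.

```python
# At the quoted 0.8 inches of build height per hour, a full-height build
# in the 203 mm tall chamber works out at roughly ten hours.
MM_PER_INCH = 25.4
chamber_height_mm = 203          # Z dimension of the build volume
build_rate_in_per_hr = 0.8       # Z Corp's quoted vertical build speed

hours_full_height = (chamber_height_mm / MM_PER_INCH) / build_rate_in_per_hr
# roughly 10 hours for a chamber-height build
```

Most concept models are far shorter than the full chamber, of course, which is where the ‘print it before the meeting’ pitch comes from.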

The ZPrinter 350 costs around $25,900.

While that’s a cheap machine by historical standards, there are much lower-cost commercial machines from traditional vendors on the market (the Solido machine and Dimension’s uPrint spring to mind). The 350 pulls things back with its cost of consumables and a greater build volume, but there are changes afoot in the RP market. One of the other low-cost hopefuls, Desktop Factory, got into financial trouble recently and its assets were picked up by 3D Systems – the results of which still seem uncertain.

Alongside this, there’s the homebrew market, which is gaining huge interest among many users purely because of the ability to create parts with very low-cost hardware, often self-built. Take the MakerBot, or the RepRap project (which is now on its second generation).

I’m not for a minute suggesting that professional designers and engineers are going to forsake investment in professional-level technology that solves a serious requirement, but there’s a homebrew enthusiasm for this type of technology, which is now 30 years old in many areas.

Another thing to consider is that many of the original patents are now starting to expire and that always means that the technology can be freed from the stranglehold (a morally correct one I might add) that the originators have on it.

There are interesting times to come for 3D printing. Very interesting indeed.

