12 March 2013
Ansys is a key simulation tool, developed by one of the biggest players in the field. Here Al Dean takes a look at what's in store in the latest release, and at the updates, particularly to Workbench, that will prove beneficial to many users
Ansys is a name synonymous with the simulation industry. While the company was founded on the eponymously titled Ansys system for structural and mechanical analysis, the last ten years have seen both its offering and the capabilities within it expand massively.
Ansys has also been on an acquisition spree that has seen not only Computational Fluid Dynamics (CFD) technologies from Fluent and CFX brought into the fold, but many others too, including Scade, for embedded software and systems engineering, and Maxwell, for electromagnetic field simulation.
While acquisitions are always fascinating, as they flesh out a company’s portfolio, what is really interesting for the user is how those acquired technologies are integrated into the tools already there.
Having those separate technology stacks under one roof is good (single vendor and all that), but being able to make use of them intelligently and efficiently is all the more appealing, particularly in the simulation space.
When you take into consideration the current movement towards combining multiple physics models, the need to have these technologies talk to each other is not just a 'nice to have', but something far more essential.
This is where Ansys’ Workbench steps into the fray.
Workbench represents a centralised application through which users can access all the solutions they have at their disposal. It's from this schematic, workflow-led application that the user works through the process: from geometry import and model set-up, through material and boundary condition definition, and into post processing.
Where other systems have separate user interfaces, Workbench works differently. Simulation tasks are built up using building blocks for each process or technology. Connect the outputs from one to the inputs of another and you're off. That is, of course, a gross generalisation and oversimplification, but it's a valid one.
Workbench handles the complex interactions and passing of data from one stage of the process to the other so that the user doesn’t have to worry about it.
From CAD geometry import and abstraction to meshing, from meshing to load and restraint set-up to materials and of course, into solving, optimisation and post processing.
Where it comes into its own is when passing data between different physics models; from FEA to CFD and back again. With traditional, separate systems, it's at this point in the process that experts are required to ensure that the data from one distinct domain is reformatted and passed into the next in the correct manner.
While Workbench won’t remove that need for the expert, it does mean others can dive into the process and of course, that the expert can work more efficiently.
Geometry wrangling and set-up
For those working with CAD geometry, the problems associated with taking data from the design and engineering process and repurposing it for simulation will be well known.
For those that aren’t, the idea revolves around the simple fact that today’s CAD geometry can represent every feature, every fillet, every chamfer and every hole that’s needed in the final manufactured product.
The problem is that all of this complexity adds a burden to the simulation process that's often unnecessary. Abstraction is the act of removing those features and sets of geometry that have little influence on the accuracy of the results, making the simulation more efficient.
There is also the question of sheer size of data. As simulation becomes more widespread and as computation capability increases, the natural desire is to simulate more complex models. Rather than breaking things down into smaller chunks, many users are looking to simulate whole sub-systems or, indeed, entire products.
This often means more geometry and more hassle to get it into a fit state to work with. So, what’s been added in Ansys 14.5 to help with some of these issues?
First is the ability to load lighter-weight representations of large geometry sets within Ansys’ DesignModeler application.
This is a custom-built modelling system intended to bring basic CAD modelling and editing tools to Ansys users, and it's been around for some time. The new lightweight options can dramatically reduce the time taken to load massively complex CAD parts, with claims of up to ten times faster loading.
Of course, once the user has their massive assembly in Workbench, they’ll need to be able to find the sub-systems and parts that are being simulated and ensure they’re connected in the most appropriate manner.
To assist, there are a couple of additions in this release. The first is a set of filtering tools that will find parts by keyword; the second is a randomiser that automatically assigns random colours to parts, loads, boundary conditions or specific named selections.
Next up, contacts and linkages between the various mechanical parts need to be defined. There are a couple of updates that will assist with this. First and foremost, the new connection matrix provides a matrix-like schematic that shows exactly how parts are connected.
Each part occupies a cell showing its name, what it's connected to (referencing the axes of the matrix), how they're connected (such as a rotational, fixed or torsional joint) and the physics at play (friction and so on).
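As a rough illustration, the data behind such a matrix view might look something like the sketch below; the part names, joint types and friction values are invented for the example.

```python
# Hypothetical data behind a connection matrix: parts on both axes, each
# cell recording how a pair of parts is joined and the physics involved.

connections = {
    ("housing", "shaft"): {"joint": "rotational", "friction": 0.15},
    ("housing", "bracket"): {"joint": "fixed", "friction": None},
    ("shaft", "gear"): {"joint": "torsional", "friction": 0.05},
}

def cell(part_a, part_b):
    """Look up the matrix cell for a pair of parts, in either order."""
    return connections.get((part_a, part_b)) or connections.get((part_b, part_a))

print(cell("shaft", "housing"))     # {'joint': 'rotational', 'friction': 0.15}
```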
Alongside this, there are also new tools to assist with setting up loading conditions on an assembly, particularly when the same conditions appear in multiple positions. It's now possible to pattern the same simulation item, such as a force, a pressure or a constraint, onto different parts of the model and into different positions, retaining all of the details from the original definition.
While the updates covered so far focus on the simulation of larger and more complex products, there are also some new tools to assist with breaking down large tasks and focussing on points of specific interest.
The idea of sub-modelling revolves around conducting a quick, coarse simulation of the large model, finding areas of interest, such as failures or stress concentrations, then using a sub-set of that model to conduct a more localised, finer simulation study.
It’ll take the geometry, loads and constraints from the coarse study, apply them to a finer mesh and allow the user to find out more about what’s occurring in that specific area. The whole process is managed, from reuse of data to ensuring that the data carries over and gets reused where most appropriate.
Using external data
While we’re on the subject of setting up loading conditions, one area that’s seen attention in this release is the use of external data.
Whether generated as part of a previous solve (think pressures from a CFD run reused in a structural analysis) or from a physical test, this is a common occurrence. The problem is that the data is often in a format that's not easily transposed into the current environment.
From pure formatting problems in tabular data through to unit mismatches, it can be quite a task to reuse such data. The new tools bring some order to this chaos in a few ways.
First, there are tools that enable the user to map one set of data to the current application. Second, those tools can automate the conversion of units or, indeed, the application of scaling factors.
The last is that there are better controls over how the data is mapped onto the surface of geometry. This uses a popular geostatistical method called Kriging, named after the South African mining engineer Danie Krige.
When a mismatch is found, such as where the point for the value doesn’t align correctly with the element node, the system uses values surrounding it to make a best guess at that specific point.
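For the curious, a minimal ordinary-kriging estimate looks something like the sketch below. The exponential variogram and its parameters are assumptions chosen for illustration, not a description of Ansys' implementation.

```python
import numpy as np

# Ordinary kriging: estimate a value at a target point from scattered
# samples, weighting them via a semivariogram model. The exponential
# model and its parameters here are illustrative assumptions.

def variogram(h, sill=1.0, rng=1.0):
    return sill * (1.0 - np.exp(-h / rng))

def krige(sample_pts, sample_vals, target):
    n = len(sample_pts)
    dists = np.linalg.norm(sample_pts[:, None] - sample_pts[None, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(dists)
    A[n, n] = 0.0                   # Lagrange-multiplier row and column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(sample_pts - target, axis=1))
    weights = np.linalg.solve(A, b)[:n]
    return weights @ sample_vals    # weights sum to one by construction

# e.g. pressures from a CFD run, estimated at a structural node position
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pressures = np.array([101.0, 104.0, 98.0, 102.0])
print(krige(pts, pressures, np.array([0.4, 0.6])))
```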
It’s worth noting that this used to traditionally be a compute heavy process but it is now parallelised resulting in quicker calculations, particularly with larger, more complex meshes.
Of course, the user needs to validate that things are set up correctly, so these tools are backed up with additional visualisation tools that allow them to easily see how well the data has been mapped onto the target structure.
Fracture mechanics and fatigue
Now, let’s get into some of the more advanced stuff that Ansys 14.5 has in store.
The first is the introduction of fracture mechanics into the structural analysis tools. The idea here is that the user introduces a fault into the structure of a part to see how it will fail and why it does.
While it might sound counter-intuitive, simulation is still commonly used for retroactive studies of why parts fail. This allows the user to introduce that point of failure and gauge what happens.
Of course, if the user is working proactively with simulation, they will then need to identify where that point, or indeed those points, might be. That leads me nicely on to how Ansys has teamed up with one of the leaders in the world of fatigue simulation, nCode.
One of the benefits of the centralised nature of Workbench is that third party developers can ‘plug into’ this approach and allow their specialist code to interact with native tools, whether that’s meshing, solving or post processing.
One recent project has seen nCode develop an integration for its own fatigue simulation tools that enables the user not only to pass data to them (geometry, mesh, materials and so on), but also to take the output from those specialised solves and reuse it in other studies.
Simulating composites
For those involved in the more specialised areas of manufacture, composites are burgeoning outside their traditional stamping grounds of high-end automotive and aerospace.
Today, many are looking at all types of composites as a way to not only save energy through reduction in weight, but also as a method to differentiate their products. With that growth in usage comes the need to be able to simulate composite forms’ performance under their typical loading conditions.
Whether an organisation is adopting composites manufacturing techniques for the first time or is highly experienced in their use, simulation can provide massive insight into their behaviour. Of course, that comes with a price.
Composites are complex material structures — the interaction of the form, the ply material, their orientation and stacking, the binder and manufacturing variables. Their failure modes are equally complex — delamination/debonding, stress interactions, crushing, wrinkling, localised failure and progressive damage. This means a highly specialised form of simulation tool is required.
Ansys has, for some time, had a set of composites modelling and simulation tools, but they’ve received attention for the 14.5 release to assist with more accurate simulation of thick composite parts.
Examples where thick composite simulation is critical typically include bonded and bolted joints, fittings and attachments, where through-thickness stresses or localised effects cannot be ignored. Problems have traditionally arisen where a shell approach is used, as this works best with parts of uniform wall thickness.
In contrast, the new tools use a 3D model-based approach. A 3D surface model is used as the basis from which to extrude the ply definitions, generating 3D elements and automatically managing intersections, drop-offs and the like.
This means that a much more realistic and less idealised model is used for these more complex parts. It will also handle items such as inserts more effectively.
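In essence, the extrusion step takes each surface node and offsets it along the local normal, once per ply, to build the through-thickness layers; a simplified sketch is below. The lay-up values are invented, and real tools also handle drop-offs and intersections, which this omits.

```python
import numpy as np

# Build through-thickness node layers by offsetting surface nodes along
# their normals, one interface per ply. Lay-up values are hypothetical.

plies = [
    {"thickness": 0.25, "orientation": 0.0},     # orientation in degrees
    {"thickness": 0.25, "orientation": 45.0},
    {"thickness": 0.25, "orientation": -45.0},
]

def extrude_plies(surface_nodes, normals, plies):
    layers = [surface_nodes]
    offset = 0.0
    for ply in plies:
        offset += ply["thickness"]
        layers.append(surface_nodes + offset * normals)
    return np.stack(layers)         # shape: (n_plies + 1, n_nodes, 3)

nodes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
norms = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(extrude_plies(nodes, norms, plies).shape)  # (4, 2, 3)
```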
The new approach is backed up with post processing tools that allow the user to dive into the results and interrogate the model at both a global and a localised level, finding potential weaknesses.
What’s also interesting is that, because of the granular approach taken to build up the mesh, optimisation can be performed on not only the global form of the part, but also down to ply orientation and stacking level of detail.
Let’s move onto one of the key areas for any simulation system — post processing and results handling.
It’s all well and good being able to carry out all manner of complex and sophisticated simulation studies, but unless the user can inspect, interrogate and visualise the results in a meaningful manner, it’s worthless.
Workbench already provides access to a wide range of post processing tools for all of the systems it connects to, but there are a few key changes that are worth exploring a little further.
The first is the work done to reduce the sheer size of the results files the system generates. Those into heavy simulation, design of experiments and optimisation processes will be looking at gigabytes (if not terabytes) of data to handle and sift through.
That’s always going to place a strain on IT systems purely in terms of storage. The good news is that there are now tools that allow the result to be scaled back to single precision which allows the user to compute principal stresses on the fly as needed. While it varies for data type, this can provide a 50 per cent smaller results dataset, which is much easier to handle.
There’s also been work done on making animations much more efficient during creation. Again, for the hardcore analyst, animated assets always seem a little ‘Mickey Mouse’, but for those needing to clearly communicate results to non-technical folks, they’re invaluable.
The greatest time savings come when post processing models containing a large number of bodies and elements, where animations can be created up to 40 per cent faster.
The final major update for the post processing aspects of the system is a set of better tools to minimise the memory and computation needed to visualise cyclic symmetry. Instead of showing a complete model, it can be broken up into sectors with just a few shown at a time.
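The trick is that a cyclically symmetric model only needs its base sector stored; any sector the user asks to see can be generated by rotation. A rough sketch of the idea, with invented names:

```python
import numpy as np

# Expand only the requested sectors of a cyclically symmetric model by
# rotating the base sector about the z-axis. Memory scales with the number
# of sectors shown, not the total sector count.

def expand_sectors(base_nodes, n_sectors, show=(0, 1)):
    out = []
    for k in show:
        theta = 2.0 * np.pi * k / n_sectors
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        out.append(base_nodes @ R.T)
    return out

sector = np.array([[1.0, 0.0, 0.0], [1.0, 0.2, 0.5]])
views = expand_sectors(sector, n_sectors=24, show=(0, 1, 2))
```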
Capturing APDL knowledge
Long-time Ansys users will be familiar with how things used to work: using a command line to punch in commands in a very granular manner.
While it seems a little backward and archaic in comparison to today's tools, the use of APDL (Ansys Parametric Design Language) did provide a couple of benefits to the user.
The first was that there was a much greater understanding of each command and what it did. I’m sure I’ve got a notebook somewhere in the bowels of the office full of command descriptions and notes from 15 years ago.
It also gave you a clear ability to create customised batch commands, custom processes and workflows that could be cut, pasted and run. See, when you move to a more graphical user interface, it’s not all good.
While users new to the Ansys portfolio are more than likely to learn the system using Workbench as a central method of interaction, there’s a lot of knowledge and experience out there encapsulated in APDL from more experienced users and experts in an organisation.
Realising this, Ansys has begun a process of building a set of tools that allow users to take that knowledge and integrate it into Workbench.
This can take several forms. Of course, users might have custom scripts that they often run (perhaps set-up routines for common tasks), but they can also create new loads and boundary conditions that aren’t included as standard, such as custom results visualisation or plotting, or perhaps even in-house solver codes.
All of these can be captured in the UI, assigned to specific icons and menus and reused much more easily.
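As a purely hypothetical sketch of what that capture might look like, the snippet below wraps a stored APDL macro in a Python object bound to a UI label. Neither the class nor the registration call reflects Ansys' actual extension framework; the macro itself is a trivial example.

```python
# Hypothetical wrapper binding a legacy APDL snippet to a UI entry point.
# None of these names come from Ansys' real extension framework.

APPLY_X_FORCE = """
/PREP7
F,ALL,FX,100    ! apply a 100-unit X-direction force to the selected nodes
"""

class Extension:
    def __init__(self, label, macro):
        self.label = label          # text shown on the icon or menu entry
        self.macro = macro          # the captured APDL commands

    def run(self, send_to_solver):
        send_to_solver(self.macro)

button = Extension("Apply X force", APPLY_X_FORCE)
button.run(print)                   # stand-in for handing the macro to the solver
```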
Furthermore, the company is also opening up a digital store front where users can upload such items and share them with the wider community. Ansys will be seeding this with its own extensions, including tools for acoustics, morphing and post processing.
Ansys is a name to be reckoned with in the simulation field.
While we’ve barely scratched the surface of the updates to the structural simulation tools in the latest release, taking a look at the spread of technologies available, it’s breathtaking.
We’ll be looking at the updates to the CFD tools next month, but in the meantime, it should be clear that the company is ahead of the game in comparison to many other vendors out there. The concept of Workbench is a solid one.
Providing a single user interaction method and allowing the user to link in all the tools they need into a single, common, workflow and use model.
But underlying that interaction method is a platform that allows users to move data between physics models and different simulation methods, without the knife-and-forking often associated with making technologies that are perceived to be fundamentally incompatible work together.
And as we move into an age where users are looking to simulate more of the behaviour of their products, optimise that behaviour and build more robust products, that's going to be increasingly key for many.