More Mistakes to Avoid in SOLIDWORKS Flow Simulation
https://www.engineersrule.com/more-mistakes-to-avoid-in-solidworks-flow-simulation/
Fri, 29 Dec 2023

The good news is that SOLIDWORKS Flow Simulation makes it easy for engineers to use computational fluid dynamics (CFD), once an application reserved for specialists—often engineers with advanced degrees who became dedicated analysts. This could also be bad news. If such a powerful tool is used without an understanding of fluid flow fundamentals, it can provide inaccurate results without the design engineer ever realizing it. Much of what dedicated analysts have learned was learned the hard way: by making mistakes. In the previous article, we covered a few of these mistakes. Here are a few more.

Trying to Replicate Physical Experiments

Referring back to the concept of an analysis plan discussed in the previous article, it’s important to decide if you’re trying to analyze the in-situ performance of your product out in the wild or replicate a physical experiment, as these often have very different requirements.

A frequent source of error is users trying to match the results of a test or physical experiment while using boundary conditions that would be more appropriate for the performance of the product under typical usage.

Consider the aerodynamics of a car. Replicating a wind tunnel test may involve a stationary model in a chamber of known size. Replicating the real use case of the car driving down the road may require incorporating effects such as wheel rotation and the relative motion of the ground under the car, all within an effectively infinite open environment.

Another example is for automotive intake accessories or cylinder head geometry as in the figure below. There is a significant difference between trying to simulate the in-situ performance which could require time-dependent flow conditions representing the various strokes of the engine cycle, versus replicating a steady-state “flow bench” test where a fixed amount of vacuum is pulled to determine the resulting flow rate.

Figure 1. Replicating a flow bench test for an intake manifold.

Physical experiments usually only output results at a few key locations, while CFD provides output everywhere. Instrumentation usually involves additional test fixtures and rigging that may influence the device’s performance. If you’re attempting to replicate a physical experiment, representations of these fixtures should likely be included in the analysis. Also ensure that you probe the virtual measurements or define goals in the exact location of any physical test sensors.

Steady State vs Transient Analysis

When developing an analysis plan for thermal problems, it’s important to consider the “thermal mass” of the device and the time period of its intended operation. When a device has a low thermal mass and a long period of intended operation, a steady-state analysis makes a lot of sense. But what if the device has a large thermal mass and only operates in short bursts? A steady-state analysis may provide an overly conservative and unrealistic result.

Steady-state analyses don’t reveal how long it takes for peak temperatures to be achieved. The engineer may think temperatures have stabilized quickly but there is no way of knowing whether those temperatures took 30 seconds, 30 minutes or 30 hours to reach a steady state.

When the thermal mass of the part is significant compared to the heat powers and time scale, it can be worth running a transient or time-dependent analysis. While the solve times of a transient analysis are much greater, they can be reduced significantly by taking advantage of solver options such as nested iterations, which perform sub-iterations for each solver timestep and allow specification of a much larger manual timestep size.
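To get a feel for which regime a device falls into before committing to a transient study, a lumped-capacitance hand estimate of the thermal time constant can help. This is a rough sketch with illustrative values (a hypothetical aluminum heat sink), not output from Flow Simulation:

```python
# Rough lumped-capacitance estimate of a device's thermal time constant,
# used to judge whether a steady-state or transient study is appropriate.
# All values below are illustrative assumptions, not from the article.

def thermal_time_constant(mass_kg, specific_heat_j_per_kg_k,
                          h_w_per_m2_k, area_m2):
    """Return the lumped time constant tau = m*c / (h*A) in seconds."""
    return (mass_kg * specific_heat_j_per_kg_k) / (h_w_per_m2_k * area_m2)

# Example: 2 kg aluminum heat sink (c ~ 900 J/kg-K), natural convection
# (h ~ 10 W/m^2-K) over 0.1 m^2 of exposed surface.
tau = thermal_time_constant(2.0, 900.0, 10.0, 0.1)
print(f"time constant: {tau / 60:.0f} minutes")
```

If the device only operates for a fraction of this time constant, a transient study is likely worth the extra solve time; several time constants of continuous operation points back toward steady state.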

Figure 2. Transient thermal natural convection analysis.

In the transient thermal analysis above, the device takes approximately two hours to reach a temperature within a few percent of the steady-state value.

If you run a steady-state analysis and observe fluctuating goals or residual values, it’s possible that your problem may be “unsteady” in nature. This can occur due to vortex shedding or other dynamic effects that can spontaneously appear in the fluid flow at certain Reynolds numbers.

The more unsteady a problem is, the less accurate a steady-state solution will be. Even if you are after an averaged value as an output, it’s best in this case to switch to a transient solver.

To extract a steady-state value from an unsteady transient problem, you can export results from your monitored goals/sensors into Excel or other software and average the results over some relatively steady period.
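That averaging step can just as easily be scripted instead of done in Excel. A minimal sketch, with made-up (time, value) samples standing in for an exported goal history:

```python
# Sketch of averaging an exported transient goal over a quasi-steady
# window. The samples below are invented; real data would come from the
# CSV exported from the goal/sensor plots.

def window_average(samples, t_start):
    """Average the goal value over all samples at time >= t_start."""
    vals = [v for t, v in samples if t >= t_start]
    return sum(vals) / len(vals)

# A goal oscillating around 5.0 after an initial transient
samples = [(t, 5.0 + (0.2 if t % 2 else -0.2)) for t in range(20)]
avg = window_average(samples, 10)  # average only the settled portion
print(avg)
```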

Figure 3. Configuring averaged results for a transient study.

Alternatively, you may be able to specify “averaged” results for a specific time interval in Calculation Control Options. This will allow viewing time-averaged contour plots and other outputs that should be visually similar to the results you would expect from a steady-state study, but with the confidence that the physics are properly supported.

Choosing Internal vs External Analysis

We commonly think of “internal” analyses for problems like manifolds and pipe flow and “external” analyses for flow over a vehicle. But the reality for many products is that the choice is not so obvious and although limiting the calculation to the internal region will almost always solve faster, it may neglect important factors about the outside environment.

Figure 4. Venting of a firebox as internal (left) and external.

If working with a room-scale or larger product, it may be desirable to see flow patterns through inlets or outlets and how they can interact with other geometry such as the floor or other obstructions.

For enclosures, if the model is too difficult to prepare as “water-tight,” we can use an external analysis as a workaround which allows the simulation of leakage through any small openings.

Figure 5. External analysis of an electronics enclosure with leakage from uncapped openings.

For problems where an internal analysis makes sense but there are still important effects to represent around the inlet and outlet, a balance can be achieved by creating a more representative inlet geometry shape. An example would be a hemispherical cap, which helps approximate the inlet air flow direction for devices that feature a rounded opening or velocity stack.

Figure 6. Hemispherical inlet on an internal analysis.

Extending geometry away from inlets and outlets can also help minimize any artificial effects imposed by boundary conditions. Guidelines for CFD typically recommend the length of these extensions as some multiple of the pipe diameter (three times the diameter, six times the diameter, etc.).

Figure 7. Extended inlets and outlets on a pump.

If you intend to neglect the frictional losses from these extended inlets, then be sure to specify an ideal wall condition on the inner faces.

Unrealistically High Heat Powers

For electronics cooling analysis specifically, a common issue seen is improper definition of the heat powers of electronic components.

Avoid confusing the absolute power rating with dissipated heat power. This applies to power supplies, inverters, DC converters, etc. The waste heat for these can be estimated by multiplying the power rating by one minus the efficiency: for a 300 W rated power supply that is 90% efficient, we could estimate about 30 W of waste heat to be applied as an equivalent heat source within the CFD analysis. Applying the full 300 W as a heat source would result in some very high temperatures. Light-emitting devices like LEDs also emit a portion of their energy as visible light, so it’s important to account for their efficiency as well.
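The waste-heat estimate above is a one-line calculation, but it is worth writing down because getting it backwards (applying the full rated power) is such a common mistake:

```python
# Sketch of the waste-heat estimate for a converter or power supply:
# dissipated heat is rated power times (1 - efficiency), NOT the rating.

def waste_heat_w(rated_power_w, efficiency):
    """Heat dissipated at full load, in watts."""
    return rated_power_w * (1.0 - efficiency)

# 300 W rated supply at 90% efficiency -> ~30 W applied as a heat source
print(f"{waste_heat_w(300.0, 0.90):.0f} W")
```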

A trickier issue is understanding the duty cycle of various components. For many electronics, it may be unlikely that every component on the board will be operating at its maximum rated thermal power continuously 100% of the time. Detailed simulation of duty cycle can be carried out by cycling heat power on and off in a transient analysis but it is more common in practice to overlook minor transient fluctuations in temperatures and instead, simply scale down the heat powers by an appropriate factor.

Neglecting Thermal Radiation

Before neglecting thermal radiation altogether, it’s important to determine whether radiation has only a limited influence on your device’s thermal performance. For forced convection (fan or liquid cooled) electronics devices, the heat transfer tends to be dominated by convection and it is common to neglect thermal radiation.

While conductive and convective heat transfer rates are both proportional to linear difference in temperature, the radiative heat transfer rate is dependent on the difference of each body’s absolute temperature raised to the fourth power. This means that at higher temperature differences and higher absolute temperatures the effects of radiative heat transfer become much more significant.

For passively-cooled electronic devices that rely on natural convection, radiation can be worth investigating. Incorporating radiative heat transfer into your analysis would also be a requirement to analyze effects of different surface finishes and coatings such as black anodize which are known to have high emissivity values.

A quick way to investigate the effects of thermal radiation would be to duplicate your study and enable radiation, observing both the changes in temperature and heat transfer rate due to these newly included effects.

Figure 8. A flux plot showing heat transferred by radiation and convection.

The Flux plot in SOLIDWORKS Flow Simulation is a method to quickly filter by high power components and see where the heat is going. It’s also a useful tool to catch setup issues like lack of contact between components that should be conducting heat to each other.

Alternatively, investigation of radiation can be performed with a quick hand calculation. Incorporating temperatures obtained from your original analysis and measurements of external surface area should give a good idea of whether or not radiative heat transfer will be a significant factor for your problem.
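A hand calculation of that kind can compare the Stefan-Boltzmann radiative rate against a simple convective estimate. The emissivity, area, temperatures and convection coefficient below are illustrative assumptions, not values from the article:

```python
# Quick comparison of radiative vs convective heat rates from an
# external surface, using illustrative values for a small enclosure.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2-K^4

def q_radiative(emissivity, area_m2, t_surf_k, t_amb_k):
    """Net radiative heat rate to large surroundings, in watts."""
    return emissivity * SIGMA * area_m2 * (t_surf_k**4 - t_amb_k**4)

def q_convective(h_w_per_m2_k, area_m2, t_surf_k, t_amb_k):
    """Convective heat rate, in watts."""
    return h_w_per_m2_k * area_m2 * (t_surf_k - t_amb_k)

# Black-anodized surface (e ~ 0.85), 0.05 m^2, 70 C surface in 25 C ambient,
# with h ~ 8 W/m^2-K typical of natural convection.
qr = q_radiative(0.85, 0.05, 343.15, 298.15)
qc = q_convective(8.0, 0.05, 343.15, 298.15)
print(f"radiation {qr:.1f} W vs convection {qc:.1f} W")
```

For a passively cooled device at these temperatures the two rates come out within the same order of magnitude, which is exactly the situation where neglecting radiation would skew the results.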

Unsupported Physics

For problems beyond simple thermal and fluid flow, it’s important to ensure your CFD package supports the relevant physics.

For example, consider problems that involve the coupled motion of bodies with fluid flow or other effects such as “free surface” liquid/gas boundaries.

SOLIDWORKS Flow Simulation offers “free surface” functionality, which uses the volume of fluid approach to solve simple problems involving sloshing or similar behaviors. It also supports rotating components through the definition of rotating regions, but it doesn’t support any other type of body motion, such as reciprocating or oscillating components. It also doesn’t support surface tension or capillary action.

Figure 9. Coupled body motion in SIMULIA XFlow.

For arbitrary motion of bodies with multiple degrees of freedom, coupled two-way FSI and more powerful and varied free surface methods, SIMULIA XFlow and SIMULIA Fluid Dynamics Engineer each have unique advantages.

Figure 10. Mixing tank with headspace in SIMULIA XFlow.

If your problem hinges on chemical reactions, combustion or phase change, you will want to find a solver that can tackle those effects.

Conclusion

It’s easier than ever to use CAD-embedded CFD to predict the performance of your product, with tools like SOLIDWORKS Flow Simulation making the setup process very straightforward.

While it still may require significant research and investment to build a very high accuracy simulation model (the saying, “getting the last 10% takes 90% of the effort” comes to mind), avoiding the mistakes in this article should help you achieve a level of accuracy sufficient to make design decisions and avoid some of the most common pitfalls.

Start by planning out the analysis with defined assumptions, inputs and outputs, then carefully choose where you place your virtual sensors or goals. Ensure your parameters of interest are solved through to their convergence and check that your mesh is adequate. Step back and look at the problem you are trying to solve, and determine if there are any changes required to match a physical experiment or to make sure the proper physics are incorporated.

Lastly, if you aren’t sure if you should make a certain assumption, use a certain approach or just how to proceed in general – reach out to your colleagues or your software provider for assistance! Sharing what you’ve tried so far and any documentation you have for your analysis plan should give them the info they need to quickly advise.

Ryan Navarro
The Most Common Mistakes Made in SOLIDWORKS Flow Simulation
https://www.engineersrule.com/the-most-common-mistakes-made-in-solidworks-flow-simulation/
Fri, 29 Dec 2023

Computational fluid dynamics (CFD) is a tool no longer reserved for dedicated analysts. Advances in computer hardware and the automated setup provided by modern CFD software make it accessible for design engineers to evaluate and optimize their products during the design phase.

However, the extra accessibility afforded by modern CFD software allows users to neglect common analysis fundamentals. This can lead to poor accuracy or misleading results. It won’t be possible to address every possible source of error in this article, but I’ll do my best to cover the major setup mistakes I’ve often seen while working with engineers using CFD.

The article will focus primarily on SOLIDWORKS Flow Simulation but the principles should apply to most CFD packages.

Mistake: Not Creating an Analysis Plan

Before jumping into any kind of simulation, it’s a great idea to come up with an analysis plan. This should include defining the main parameters of interest (the “outputs,” or dependent variables) and the main inputs to be used in the simulation.

While you’re at it, it’s a good idea to document the relevant assumptions you’ll be making and the level of simplifications you plan on making to the models.

A little bit of planning upfront can keep you on track and save a lot of time in the long run so you don’t lose sight of your end goals and assumptions. Perhaps more important is that having your plan documented allows you to easily reach out for help to colleagues or your software technical support team and clearly explain what you’re trying to achieve.

Figure 1. Simplified analysis plan for a Tesla valve.

If you’re having trouble coming up with a plan, I usually recommend thinking of the simulation setup like a virtual experiment. If you were conducting a real-life experiment, how would you set it up? Where would you place measurement sensors and what parameters would you want to measure? Would you run the test only for a short duration or an extended period? How large would the test chamber/environment need to be?

Answering these questions provides a great starting point for an analysis plan. This will directly translate into decisions on where to place virtual sensors to monitor the solution, choosing a steady-state vs transient analysis, sizing the computational domain, etc.

Mistake: Not Monitoring Goals and Solution Convergence

Whatever the parameters of interest for the simulation are, appropriate goals should be defined to track them throughout the course of the solution. Depending on the type of analysis this could include monitoring values such as pressure loss from inlet to outlet, flow rates, lift and drag forces on surfaces, temperatures of key components in a thermal analysis, or the concentrations of various species of fluids in a mixing problem.

Since computational fluid dynamics is an iterative process, failure to monitor these values can cause one of two problems: the peak values may not be captured due to the solution not developing long enough (causing major inaccuracy) or the solve time may be drastically increased by solving for much longer than required.

To avoid this, a good rule of thumb is to think of these result monitoring locations as virtual sensors, so wherever you would place a sensor in a physical experiment, place a goal or output request to track the parameter of interest in that location.

You can then plot values at these locations over the course of the solution (practically every CFD package allows viewing results at monitored locations in real time) and manually assess convergence once the values have stabilized within a certain threshold. A better idea, though, is to take advantage of stopping criteria.
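The manual convergence check described above amounts to watching a trailing window of goal values flatten out. A minimal sketch of that logic, with an invented goal history and an illustrative tolerance:

```python
# Treat a monitored goal as converged once its values over a trailing
# window of iterations stay inside a small band. Window size and band
# are illustrative, not Flow Simulation's internal criteria.

def goal_converged(history, window=50, rel_band=0.005):
    """True if the last `window` samples span < rel_band of their mean."""
    if len(history) < window:
        return False
    tail = history[-window:]
    mean = sum(tail) / len(tail)
    return (max(tail) - min(tail)) < rel_band * abs(mean)

# A goal decaying exponentially toward 100 (e.g. a pressure drop in Pa)
history = [100.0 - 50.0 * 0.9 ** i for i in range(200)]
print(goal_converged(history))
```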

Figure 2. Goals defined in SOLIDWORKS Flow Simulation (left) and solver monitor goals plot.

When goals are defined in SOLIDWORKS Flow Simulation, automatic stopping criteria are placed to stop the solver once the goals are converged to within a tolerance. The user can manually specify the tolerance if they want more control. As soon as all the parameters of interest are stabilized to within this threshold, the solver will automatically terminate and save the results, often cutting minutes or hours off the solve time.

It is still important to double-check the solution convergence after the fact.

Adjusting Stopping Criteria

If you find that the monitored locations are still increasing in value when the solution stopped, it’s likely that some secondary stopping criterion, such as a limit on the number of iterations or “travels,” is taking effect. For problems where your monitored locations are taking a particularly long time to converge, it may be necessary to raise the limit for the number of iterations/travels or eliminate these secondary stopping criteria entirely.

Figure 3. Finishing criteria for SOLIDWORKS Flow Simulation with manual goal tolerance.

Aside from helping ensure accuracy of the solution, defining the goal or other result monitors in advance saves time when it comes to post-processing or interpreting the results later, as these values can easily be graphed and exported to reports. 

If you find you need to extend the solution after completing an analysis, most CFD programs allow you to continue or resume the calculation from where you left off. You don’t have to solve the whole problem all over.

Monitoring Residuals

For typical steady-state and transient problems run with the default settings, SOLIDWORKS Flow Simulation doesn’t require monitoring of residuals.

In other CFD packages like SIMULIA Fluid Dynamics Engineer, the residuals will automatically have associated output requests so that they can be plotted over the course of the solution. Residuals are typically provided for mass flow, energy and turbulence parameters. The residual value can be thought of as the imbalance that remains between iterations of the solver and, due to the iterative nature of the solution process, they are expected to decrease over the course of the analysis before plateauing at some infinitesimal value.

If you observe residuals increasing over the course of the solution or their absolute value is much larger than expected, it is likely there is a problem with the CFD setup and the program is producing an instability or “divergent” solution. The results in this case should not be trusted until whatever is causing the residuals to diverge is corrected.  

Note that monitoring residuals on their own isn’t a replacement for directly monitoring the convergence of your parameters of interest. In SIMULIA Fluid Dynamics Engineer, this can be done by placing additional output requests for whatever your key parameters are, similar to the goals in SOLIDWORKS Flow Simulation.

If you happen to be running a SOLIDWORKS Flow Simulation time-dependent study with “nested iterations,” you will have access to residuals for normalized mass, momentum and energy as additional goal plots. If you are trying to speed up the solution process by forcing a large manual timestep with nested iterations, checking these residuals will let you know if the solution is stable or if you’ve pushed the timestep size too far.

Mistake: Resuming or Continuing an Analysis

When running the CFD solver any time after the initial solve, there should be an option to resume the previous simulation.

Figure 4. Dialog showing options for either new or continued calculation in SOLIDWORKS Flow Simulation.

As previously discussed, this option to resume can be helpful if the goals were not converged and you wanted to solve for more iterations. However, take note that if you made any changes to the simulation setup, you will most likely want to do a “New calculation.”

This comes up often: you are making changes to the simulation setup and wondering why they are not being reflected in your results. It’s very possible you’ve accidentally picked the “Continue calculation” option rather than starting a new one.

Performing model adjustments or geometry changes will flag the mesh as out-of-date and start the process from scratch but you must pay close attention to these options when you’re only modifying simulation parameters.

Mistake: Lack of Mesh Refinement

Automatic meshing done by many CFD packages makes it easy to do simulation but it is important to examine the mesh to ensure it fulfills a few general guidelines. If time allows, a mesh convergence study should be performed to examine the effects of increasing mesh refinement and determine a point of diminishing returns. These studies are commonly called “mesh dependence” or “grid dependence” studies and are carried out with the goal of proving the solution is mesh or grid independent.

SOLIDWORKS Flow Simulation uses a fairly unique meshing technology that combines a Cartesian grid or octree mesh with the immersed boundary method, a technique that allows capturing features that are smaller than the mesh cell size.

These cells (referred to as solid/fluid or partial cells in SOLIDWORKS Flow Simulation) provide a great deal of flexibility in controlling the amount of detail in a simulation.

In the image below you can see a draft mesh for a server rackmount unit featuring hundreds of components.

Figure 5. SOLIDWORKS Flow Simulation mesh for electronics cooling displaying partial solid/fluid cells.

Note that the mesh cells in the zoomed-in view are larger than and not aligned with the memory chips. Thanks to the support for partial or solid/fluid cells, SOLIDWORKS Flow Simulation can resolve this detail which greatly reduces the mesh cell count (and time) required to solve such a problem.

Avoiding Severe Mesh Issues

While the automatic mesh creation is impressive, it’s not a magic bullet for all situations.

By far the most common issue I see regarding meshing for SOLIDWORKS Flow Simulation is a mesh with cell sizes many orders of magnitude larger than the solid features they are trying to resolve. There is only so much the partial cells can do, and too great a mismatch between cell size and feature size will produce very inaccurate results or cryptic solver error messages.

Simply check the mesh that the software generates by inserting a Mesh Plot command. Confirming that the cell sizes are roughly of the same order of magnitude as the feature sizes you expect to resolve will prevent the worst of these errors.

To increase accuracy further, it’s recommended that you provide a few cells across any narrow channels. If you’re expecting to see detailed thermal gradients through a solid, then you’d be well advised to include a couple cells through the solid thickness as well. For the best quality results, you will want to place 10-15 cells across a narrow channel.

Proving Mesh Independence

None of the guidelines discussed are replacements for performing a mesh independence or mesh convergence study. Performing a series of studies with increasing mesh refinement to observe the sensitivity of your key results to the mesh is crucial if you are striving to eliminate all errors.

One area where additional effort for mesh refinement is important is aero and hydrodynamic effects on curved surfaces – for example, lift or drag calculations for a vehicle or the performance of a rotating impeller.

Figure 6. Mesh convergence study for a propeller thrust calculation.

The figure above shows a simple mesh convergence study conducted on a rotating propeller, with the thrust as the key output. Zooming in to the propeller cross section and hiding the propeller body shows that at coarse refinement levels the profile of the propeller cross section is not well defined – this is known as discretization or local truncation error. If you consider that the pressure is resolved into forces across each cell, it is clear why this can yield large differences in the predicted thrust.

However, with minor refinements the solution converges to a thrust value while still maintaining a reasonable number of cells. As one might expect, this is the level (Refinement 5 in the figure above) at which the outline of the cell profile in the zoomed-in view matches the geometry of the original propeller blade. Refinement 6, which more than triples the cell count, does not change the thrust, so the results may be considered converged.

Note that ensuring the solution is mesh independent does not necessarily mean it is accurate – only that you have addressed one possible source of error.
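The acceptance test behind a study like this is simple: compare the key result between successive refinements and stop when the change falls below a tolerance. A sketch with invented thrust values (not the article’s data) and an illustrative 1% tolerance:

```python
# Declare a result mesh-independent once the change in the key goal
# between successive refinements falls below a relative tolerance.

def converged(values, rel_tol=0.01):
    """True if the last two refinement results differ by < rel_tol."""
    return abs(values[-1] - values[-2]) / abs(values[-2]) < rel_tol

# Hypothetical thrust (N) at refinement levels 1 through 6
thrust_by_refinement = [41.0, 48.5, 52.0, 53.6, 54.1, 54.2]
print(converged(thrust_by_refinement))
```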

Solution-Adaptive Meshing

If you don’t want to fool around with a mesh convergence study, you can also check whether your CFD software offers a solution-adaptive mesh approach.

Figure 7. Solution-adaptive mesh solve progression for wind loading.

SOLIDWORKS Flow Simulation’s solution-adaptive mesh allows setting targets for the maximum number of cells and a time period to wait between refinements. The software will then periodically refine areas of the model with high pressure gradients. If you are tracking your goals, this will give you a record of the goals’ converged values at each level of mesh refinement – essentially built-in proof that your solution is grid-independent.

Body-Fitted Mesh

The structured Cartesian mesh used by SOLIDWORKS Flow Simulation excels at applications like electronics cooling but can require significant amounts of refinement on smooth curved geometries. For classical aerodynamic and hydrodynamic problems, many CFD experts swear by “unstructured” or body-fitted mesh like that available in SIMULIA Fluid Dynamics Engineer and other standalone CFD packages.

Figure 8. Coarse and semi-refined body-fitted mesh in SIMULIA Fluid Dynamics Engineer.

This mesh structure relies on special boundary-layer elements (commonly referred to as an “inflation layer”) near the walls to precisely resolve the near-wall flow.

This meshing method tends to require additional geometry preparation and simplification as it is much less forgiving about any faults or imperfections in the CAD model quality compared to the immersed boundary method used by SOLIDWORKS Flow Simulation, which is capable of healing over small faults and geometry errors.

The payoff (especially when combined with advanced turbulence models like k-ω SST) can be higher accuracy for predictions of lift and drag, potentially at overall lower element counts.

Meshing guidelines for conformal mesh are mostly related to the thickness of the boundary-layer elements, which can be determined in SIMULIA Fluid Dynamics Engineer by placing an output request to track YPLUS (y+), the normalized wall-normal distance.

Each turbulence model will have recommended y+ values to aim for, depending on whether you plan to rely on wall functions, which approximate the local effects of the near-wall boundary layer (possible with even a crude mesh), or to resolve the viscous sublayer directly.
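A common pre-meshing hand calculation estimates the first-cell height needed to hit a target y+ using a flat-plate skin-friction correlation. This is a standalone estimate with illustrative free-stream values, not a SOLIDWORKS or SIMULIA API:

```python
# Estimate the wall-normal first-cell height for a target y+, using a
# turbulent flat-plate skin-friction correlation (Cf ~ 0.026 / Re^(1/7)).

def first_cell_height(u_inf, density, viscosity, length, y_plus_target):
    """Return the first-cell height (m) for the target y+ value."""
    re_x = density * u_inf * length / viscosity      # Reynolds number
    cf = 0.026 / re_x ** (1.0 / 7.0)                 # skin friction coeff.
    tau_w = 0.5 * cf * density * u_inf ** 2          # wall shear stress
    u_tau = (tau_w / density) ** 0.5                 # friction velocity
    return y_plus_target * viscosity / (density * u_tau)

# Air at 30 m/s over a 1 m body: y+ ~ 1 to resolve the viscous sublayer,
# y+ ~ 30-100 when relying on wall functions.
h = first_cell_height(30.0, 1.225, 1.8e-5, 1.0, 1.0)
print(f"first-cell height for y+ = 1: {h * 1e6:.0f} microns")
```

Scaling the result by the target y+ (e.g. 30x for wall functions) shows why sublayer-resolving meshes carry so many more boundary-layer cells.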

That’s all…for now. There are a few more mistakes to be avoided. Stay tuned.

Ryan Navarro
Using SOLIDWORKS Simulation for Composites
https://www.engineersrule.com/using-solidworks-simulation-for-composites/
Mon, 28 Aug 2023

SOLIDWORKS Simulation is a CAD-embedded finite element analysis (FEA) software that enables designers and engineers to perform structural analysis during the design phase. For predicting the performance of ductile metals such as steel and aluminum below their yield stress, the default “linear elastic isotropic” material model is generally suitable. Even brittle materials such as cast iron or unreinforced concrete are commonly modeled as linear elastic isotropic, but may require an alternate failure criterion rather than the commonly used von Mises yield criterion.

In past articles we’ve explored use cases for nonlinear analysis, including hyperelastic materials such as rubbers or plasticity models for loadings beyond the yield point of a material.

Something all the materials discussed so far share is that they are generally thought of as homogeneous in nature, made up of a single constituent material.

Composite materials, on the other hand, are made up of multiple constituent materials and can be engineered to possess qualities such as stiffness, strength or toughness that exceed the performance of the constituent materials alone.

In this article, we’ll explore a high-level overview of analysis techniques for composite materials within SOLIDWORKS and SIMULIA products.

Composites Applications

The domain of composites is quite broad. This article will attempt to focus on the “macro” scale composites commonly chosen by engineers and product designers for their rated properties. The constituent components of a composite are often classified as the matrix or binding medium, and the reinforcement which is added for additional strength or other physical characteristics.

Wood products such as common plywood are an example of this. Laminated veneer lumber (LVL) beams and other glue-laminated or “glulam” construction are used to achieve open-concept floorplans that would otherwise only be possible with steel members.

Sandwich composites make use of an inner core layer such as foam or aluminum honeycomb, with encapsulating thin outer skins to carry shear forces.

Figure 1. Glulam beams (left) and aluminum honeycomb sandwich composite (right).

Reinforced concrete with embedded steel or carbon-fiber reinforced polymer is a mainstay of commercial and residential construction.

Injection molded plastic parts are commonly reinforced with chopped fiberglass or carbon fiber, with such strength that they can often replace aluminum castings or stamped steel components.

Fiberglass composites are commonly used in all sorts of commercial and recreational products, from boats to storage tanks. FR-4 is a fiberglass composite that also happens to be the base material for most printed circuit boards in use today.

Figure 2. Fiberglass FR-4 PCB (left) and fiber-filled injection molded part (right).

Of course, the image most of us probably arrive at in our minds when we hear the word “composites” is carbon fiber. More broadly, this category is fiber-reinforced polymer, including glass-reinforced polymer (GRP) and carbon-fiber reinforced polymer (CFRP), with the classical case being a rigid structure made up of multiple layers of woven continuous-strand textile bound together by an epoxy resin.

Depending on the application, engineers may choose to incorporate or modify pre-purchased composites or go into depth designing their own custom lay-ups.

This article will attempt to examine both approaches and the levels of simulation software required for performing structural analysis at various levels of fidelity.

Orthotropic Materials with SOLIDWORKS Simulation Standard

A key behavior of many composite materials is that, due to their heterogeneous makeup, they often exhibit different behaviors depending on the direction of loading. This behavior can be crudely approximated using the linear elastic orthotropic material definition available in the most basic levels of SOLIDWORKS Simulation – including the simulation included with SOLIDWORKS Premium and the SOLIDWORKS Simulation Standard license.

Figure 3. Linear elastic orthotropic material properties for a glass-reinforced polymer with reference geometry selection.

This material model contrasts with the default linear elastic isotropic model used for homogeneous materials and allows definition of elastic modulus and shear modulus in the X, Y and Z directions. These directions must be defined by a reference geometry selection – either a plane, planar face, coordinate system or axis. The reference geometry selection maps the material properties to the appropriate axes, so if there are multiple bodies with different orientations, each must have its own relevant reference geometry selection.

This imposes some limitations – the geometry must be planar, cylindrical or spherical in nature, depending on the reference geometry selection. If the end part cannot be defined as a single body while satisfying these rules, it must be broken up into multiple, more primitive bodies. Such a procedure is described in the help article Defining Orthotropic Properties.

This approach generally represents the stiffness of a composite material adequately for a structural analysis, allowing accurate predictions of total displacement and any reaction or free body forces. However, because stress information is not available per ply, caution must be taken when drawing conclusions about failure of the composite material itself.

Another benefit of this approach is that it is available for all study types in SOLIDWORKS Simulation, including thermal analysis.

Transverse Isotropy

Most layered composites exhibit transverse isotropy, meaning their behavior can be simplified to in-plane and through-plane components. For this reason, material data is often provided with only two components of elastic modulus (such as E1 and E2) and one component of shear modulus (G12).

SOLIDWORKS Simulation has no transverse isotropic definition, so the missing properties for a full orthotropic material may be back-calculated (e.g., E1 = E3 assuming plane 1-3 is the plane of isotropy). Poisson’s ratio may also be back-calculated from the elastic modulus and shear modulus.
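The back-calculation can be sketched in a few lines. This is an illustrative Python snippet with made-up datasheet values, not SOLIDWORKS functionality; it applies the isotropic relation G = E / (2(1 + ν)) within the assumed plane of isotropy.

```python
# Illustrative sketch (not SOLIDWORKS functionality): fill out a full
# orthotropic property set from transversely isotropic data.
def fill_orthotropic(E_p, E_t, G_t, nu_p):
    """Plane 1-3 is the plane of isotropy (per the article's example).
    E_p/E_t: in-plane and through-plane moduli; G_t: transverse shear
    modulus; nu_p: in-plane Poisson's ratio.  The in-plane shear
    modulus follows from the isotropic relation G = E / (2 * (1 + nu))."""
    return {
        "E1": E_p, "E3": E_p,               # equal in the plane of isotropy
        "E2": E_t,                          # through-plane direction
        "nu13": nu_p,
        "G13": E_p / (2.0 * (1.0 + nu_p)),  # back-calculated shear modulus
        "G12": G_t, "G23": G_t,             # transverse shears equal by symmetry
    }

# Made-up values for a glass-fabric laminate (Pa):
props = fill_orthotropic(E_p=22e9, E_t=10e9, G_t=4e9, nu_p=0.30)
print(props["G13"])  # about 8.46e9 Pa
```

The same relation can be rearranged to recover Poisson's ratio (ν = E / (2G) − 1) if the shear modulus is the value provided instead.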

Materials may also behave differently based on the physics in question. For instance, PCBs are sometimes approximated as isotropic materials for structural analysis but generally must be represented as orthotropic materials for thermal analysis where the conductivity of the copper plays a more significant role.
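To illustrate why the in-plane and through-plane conductivities of a PCB differ so much, a layered stack-up can be treated as thermal resistances in parallel (in-plane) and in series (through-plane). The layer thicknesses and conductivities below are generic placeholders, not values for any specific board:

```python
# Effective orthotropic conductivity of a layered stack-up, treating
# the layers as parallel heat paths in-plane and series paths
# through-plane.  Layer values are illustrative placeholders.
layers = [
    # (thickness in m, conductivity in W/m-K)
    (35e-6, 385.0),   # copper layer
    (0.2e-3, 0.3),    # FR-4 dielectric
    (35e-6, 385.0),   # copper layer
    (0.2e-3, 0.3),    # FR-4 dielectric
]

total_t = sum(t for t, _ in layers)
k_in_plane = sum(t * k for t, k in layers) / total_t   # parallel paths
k_through = total_t / sum(t / k for t, k in layers)    # series path

print(f"in-plane: {k_in_plane:.1f} W/m-K, through-plane: {k_through:.2f} W/m-K")
```

Even this crude model shows an anisotropy ratio on the order of 100:1, which is why the orthotropic definition matters for thermal studies.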

Analysis of a Quadcopter Frame

For an example of using this approach, we’ll apply a linear orthotropic material to a frame for a quadcopter. This is an ideal case for these assumptions since the frame will be routed out of a pre-purchased CFRP sheet and the geometry is planar in nature.

Figure 4. Quadcopter CAD assembly (top) and same assembly prepared for analysis (bottom).

The analysis is a static study with a 5 g downward acceleration applied. Restraints are applied at the motor mounting locations using the Remote Load tool acting as a remote displacement and representing a ball joint style pivot.

It should be noted that this fixture scheme was arrived at after comparing against the naïve approach of “fixed geometry,” as visualized in the exaggerated displacement plot below.

Figure 5. Fixed geometry restraints (top) and remote load pivot fixture (bottom).

It can be seen from above that the response with the pivot restraints appears much more natural and reflects the fact that the propellers will apply thrust force normal to their current orientation, rather than along the global vertical direction. The fixed geometry fixtures appear to have a severe and artificial stiffening effect which reduces the overall displacement. The lower image also shows that the quadcopter must be somewhat tail-heavy, with the rear end dipping down lower than the front. I'll have to check with the designer to make sure the mass of the battery is correct.

A true-scale animation plot is presented in the following figure.

Figure 6. True scale displacement animation.

As no per-ply stress information is available, we really shouldn’t use stress components to predict factor-of-safety for the frame.

One potential alternative is to use strain as a predictor for failure.

Figure 7. Strain plot for CFRP sheet.

The figure above shows a strain plot for the frame, with the plot scaled for a strain design limit. This is about the best we can do without having access to more detailed composites information, but may suffice for many users looking to work with pre-purchased composite sheet or tube.
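As a back-of-envelope check, the factor of safety against a strain design limit is just the ratio of allowable to observed strain. Both numbers below are hypothetical, not values from the study:

```python
# Screening check with assumed numbers: compare peak strain from the
# result plot against a strain design limit.  Both values are
# hypothetical placeholders.
strain_limit = 0.004   # assumed allowable strain for this CFRP sheet
strain_max = 0.0012    # peak strain read from the result plot
fos = strain_limit / strain_max
print(f"strain-based FoS = {fos:.2f}")
```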

Composites Analysis with SOLIDWORKS Simulation Premium

The next step up in realism is to utilize the composites analysis available in SOLIDWORKS Simulation Premium. This requires using a shell mesh definition for the composite in question, and so requires a constant wall thickness region. This type of composite definition is possible in the linear static, frequency and buckling study types.

Material properties are defined in the same fashion as the linear orthotropic material, except there is no reference geometry selection and they can vary per ply. The ply orientation is mapped by default to the U-V coordinates of the surface, allowing truly orthotropic behavior for compound surfaces that is not restricted to specific geometry primitives.

Figure 8. Composite shell definition.

Composite shell definition for a fixed-wing drone fuselage is pictured above. To simplify the shell definition, a surface was extracted from the solid body using the Offset Surface command. If you're unfamiliar with shell definition in SOLIDWORKS Simulation, I'd recommend the tutorial on SOLIDWORKS Simulation Shell Definition.

The ability to align orthotropic material properties with arbitrary compound surfaces is really one of the biggest features of this composite shell definition. It allows analysis of things that aren’t just plates and tubes.

Note though that the orientation of the coordinate mapping should be verified and can be altered or corrected per face if there is an alignment discontinuity or mismatch. This can be a daunting process for geometry with many distinct faces/fillets as pictured above. For geometry with a low number of faces or with smooth continuous compound surfaces very little correction should be required.

Individual plies can be specified with their own thickness, angle and material properties during the shell definition allowing room for substantial experimentation from the designer.

As far as the simulation setup is concerned, the drone is subjected to a combination of torsion and gravitational acceleration that is intended to represent a very rough landing. Contact sets were defined to locally bond the composite shell to the other components in the load path, including the front and rear injection-molded housings and the tail boom.

Figure 9. Simulation setup with contact definitions.

The response is pictured below using von Mises stress for the first ply, but note the additional options to plot stress for other plies, the maximum across all plies and interlaminar shear.

Figure 10. Stress visualization.

Failure Theories

As mentioned, the composites functionality in SOLIDWORKS Simulation Premium adds the capability to track and interpret per-ply stresses as well as interlaminar shear stress.

This enables additional failure criteria in the factor of safety plot, including the Maximum Stress, Tsai-Hill and Tsai-Wu failure criteria.

Figure 11. Predicted factor of safety using Tsai-Wu criterion.

A helpful SOLIDWORKS article discussing how to select a composites failure criterion can be found here.

In short, Tsai-Hill and Tsai-Wu take into account the interactions between stress components and thus are capable of predicting failure of plies due to interlaminar shear stresses (failure modes such as delamination), but aren't intended for predicting failure in the matrix. The maximum stress criterion looks only at maximal values and can be useful for predicting failure in the composite matrix, but isn't intended for predicting failure due to delamination.
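For reference, the Tsai-Wu criterion in its common textbook plane-stress form can be evaluated directly. The sketch below uses that textbook form with made-up ply stresses and strengths; SOLIDWORKS Simulation's internal implementation and its interaction-term handling may differ.

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index (textbook form).
    Xt/Xc, Yt/Yc: tensile/compressive strength magnitudes along the
    fiber (1) and transverse (2) directions; S: shear strength.
    Failure is predicted when the index reaches 1."""
    F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc
    F11, F22, F66 = 1 / (Xt * Xc), 1 / (Yt * Yc), 1 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)   # a common default interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2 * F12 * s1 * s2)

# Illustrative ply stresses and strengths (MPa), not from the article:
idx = tsai_wu_index(s1=600, s2=20, t12=30, Xt=1500, Xc=1200, Yt=50, Yc=200, S=70)
print(f"Tsai-Wu index: {idx:.2f}")   # below 1 means no predicted failure
```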

It should be noted that the method used by SOLIDWORKS Simulation Premium is first ply failure (FPF); it will never show delamination visually or allow simulating behavior past the initial failure. The “rule of mixtures” assumption utilized by SOLIDWORKS Simulation also approximates the bulk behavior of the composite from its makeup information, potentially neglecting localized failure modes.

Additionally, the way the ply information is mapped to the shell mesh means that we can’t easily analyze the relatively common case of composites with variable wall thickness or tapering sections.

Composites Analysis in SIMULIA

The desktop software package SIMULIA Abaqus/CAE and the cloud-connected equivalent 3DEXPERIENCE SIMULIA structural simulation roles both provide in-depth composites modeling and analysis, adding several major capabilities that span across many domains such as fatigue and durability, fracture mechanics and vibrations.

Composites in SIMULIA structural tools may be defined at various levels of detail and represented either by “composite shells” or “composite continuum shells,” as is typical for many layups, or with a high-detail solid mesh. This means it is possible to analyze the performance of something like a complex vehicle body or other large-scale structure, or to zoom in and analyze specific failure modes at connections or joints in very high detail.

Composite Ply Definitions

Composite shell sections function similarly to the composites functionality in SOLIDWORKS Simulation Premium, mapping ply stack information onto surfaces, except that they support variable wall thickness and so can easily represent tapered or variable-thickness sections.

Figure 12. Composite lay-up example from Abaqus CAE user guide.

Composite continuum shell sections combine many of the benefits of shell and solid mesh by incorporating a 3D nodal representation of the thickness. The user guide for Composite Continuum Shell describes in further detail:

“Like a shell section, a composite continuum section has one dimension (the thickness) that is significantly smaller than the other two dimensions. However, composite continuum shell sections are modeled in three dimensions; therefore, the model defines the thickness and stresses in the thickness direction are not negligible.”

Ply information can be defined directly in the analysis tool or, if product design is being performed in CATIA or the associated 3DEXPERIENCE roles, the ply information can be imported for analysis providing a tight coupling between design and analysis for faster iteration.

Delamination and Crack Propagation

The SIMULIA structural tools also support simulating behavior such as crack initiation and propagation. Cohesive elements can be assigned to fail at certain magnitudes of strain or shear stress, allowing visualization of the beginning of crack formation or delamination. The virtual crack closure technique (VCCT) is an alternative approach that can be applied to further simulate crack propagation from some existing flaw or defect, as visible in the figure below from the Abaqus user guide example: Post buckling and growth of delamination in composite panels.

Figure 13. Composite panel in post-buckled state with SIMULIA Abaqus.

The two techniques utilized together can provide a robust overview of resistance of composites to delamination or failure of any bonded connections.

Additional damage models combined with the available Explicit solver also allow representing brittle fracture for modeling detailed behavior of impacts, crashes and other high speed dynamic events.

Figure 14. Ballistic impact of fiber composite.

Embedded Reinforcement Elements

In some cases, it makes sense to directly represent the reinforcing material rather than represent it by its bulk properties. A common example is reinforced concrete, where for concrete damage modeling the concrete aggregate serves as the matrix and is meshed with solid elements. The rebar is represented by 1D elements defined as an “embedded region” within the concrete.

As a thorough example, I would highly recommend checking out this tutorial on analysis of a reinforced concrete beam in Abaqus/CAE by Dr. Clayton Petit: 2D Concrete Beam (Concrete Damage Plasticity).

Figure 15. Reinforced concrete beam damage modeling in Abaqus/CAE.

Summary & Conclusion

Composites are increasingly common as manufacturing costs decrease and designers seek to optimize structures for weight and efficiency. This article examined several approaches to analysis at various levels of detail, beginning with analysis of simple orthotropic materials in any package of SOLIDWORKS Simulation, to analyzing composites with various ply stacks in SOLIDWORKS Simulation Premium and ultimately the possibilities for analyzing failure modes such as delamination in detail with either desktop SIMULIA Abaqus/CAE or the cloud-connected 3DEXPERIENCE SIMULIA structural simulation roles.

Ryan Navarro
SOLIDWORKS Flow Simulation Parametric Studies https://www.engineersrule.com/solidworks-flow-simulation-parametric-studies/ Mon, 17 Jul 2023 15:36:55 +0000 https://www.engineersrule.com/?p=8070

SOLIDWORKS Flow Simulation is a CAD-embedded computational fluid dynamics (CFD) software that enables designers and engineers to perform thermal and fluid flow analysis at design-time.

Once a Flow Simulation project is set up, the analysis is associated with the SOLIDWORKS part or assembly file, so managing design changes is simple: either change a model dimension and re-run the analysis, or “clone” the Flow Simulation project to various SOLIDWORKS configurations to preserve different variations of geometry.

Figure 1. Cloning an existing SOLIDWORKS Flow Simulation project.

This workflow is useful for small numbers of iterations. In other cases, where a designer wants to iterate over a wide range of operating conditions and generate some kind of performance curve (such as a pump curve for an impeller, lift/drag curves for an aircraft or a thermal resistance curve for a heatsink), a parametric study can be used.

Creating a parametric study from an existing study allows access to three modes: “what if analysis,” “goal optimization” and “design of experiments.” Each of these has its own advantages which will be explored over the course of this article.

Figure 2. Creating a parametric study.

Case 1: Lift/Drag Ratio Curve Using “What If” Analysis

Let's discuss the case of lift and drag calculations for an airfoil. The baseline study is presented below at zero degrees angle of attack. Solution-adaptive meshing was enabled to dynamically place additional mesh refinements where needed.

Figure 3. Baseline analysis of airfoil with solution-adaptive mesh.

Once the initial project is set up, a parametric study is created using the “what if analysis” mode. Parametric studies allow varying simulation parameters (such as airspeed around the airfoil) or model geometry parameters such as SOLIDWORKS dimensions and mates.

In this case, the airfoil is placed in a SOLIDWORKS assembly with a mating scheme that allows control of pitch via an angle mate. The pitch angle is specified as a “dimension parameter” in the parametric study, allowing the angle of attack to be varied iteratively. A range and step size are specified, and a number of resulting scenarios are automatically created.

Figure 4. Angle mate input as a dimension parameter.
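The scenario list that a range and step size produce can be reproduced in a couple of lines. The angle range below is illustrative, not the exact one used in the study:

```python
# Reproduce the scenario list a "what if" study builds from a range
# plus a step size.  The angle-of-attack range here is illustrative.
start, stop, step = -4.0, 12.0, 2.0   # angle of attack, degrees
count = int(round((stop - start) / step)) + 1
angles = [start + i * step for i in range(count)]
print(angles)   # [-4.0, -2.0, 0.0, ..., 10.0, 12.0]
```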

The “what if analysis” mode is great for “blind” analyses like this where you simply want to solve across a range of conditions, regardless of the outcome. Key results such as goals and cut plots are referenced as outputs and then the batch of scenarios can be solved.

Once the scenarios are solved, results become available and can be quickly compared across iterations, as visible in the image below:

Figure 5. Parametric study cut plots.

Numerical values can be automatically plotted against the scenarios using the built-in graphing functions. These curves can also be exported to Excel.

Figure 6. Parametric study goal charts.

To take a look at any particular scenario or “design point” in more detail, a right click allows creating a standalone Flow Simulation project from that iteration of the parametric study, opening up the full set of results post-processing capabilities.

Figure 7. Creating a project from a particular design point.

Note that many input variables can be specified for the “what if” analysis, but the number of scenarios will increase very rapidly and can easily get out of hand. This is something the “design of experiments” modes can help with, which we’ll look at later.

Case 2: Goal Optimization for CPU Water Cooling Block

The second mode parametric studies provide is “goal optimization.” This allows input of a specific target, such as a target temperature or pressure drop, and will automatically iterate the specified variable to try to achieve that target.

These studies are straightforward to set up and the results are easy to interpret. The biggest limitation is that only a single input variable can be included as part of the optimization.

Consider the case of the CPU water-cooling block depicted below. In the baseline study with 30 liters/hour of coolant flow, the predicted temperature of the chip mating surface is about 12°C above the coolant temperature.

Figure 8. Baseline study of CPU cooler.

Suppose we’d like to calculate the minimum coolant flow rate required to achieve a temperature rise of only 8°C? In this case, we’ll vary a simulation parameter (the coolant flow rate) between the range of 30 and 600 liters/hour and set a target of our maximum solid temperature to 8°C above our coolant input temperature.

Figure 9. Goal optimization input range.

Figure 10. Goal target criteria.

The goal optimization iterates until either the target tolerance or iteration limit is reached. In this case, after 10 iterations the solver indicated that a flow rate of about 75 liters/hour should produce a desired temperature rise of just under 8 degrees (visible as Design Point 10 in the figure below).

Figure 11. Tabular results for goal optimization.

Aside from the single-variable limitation of the goal optimization, there’s no way to predict in advance how many iterations it will require to converge on its target—or if it will at all—and it can be difficult to glean design trends when compared to a “what if” or “design of experiments” study.
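Conceptually, this mode behaves like bounded root-finding on a single variable. SOLIDWORKS does not publish its goal-optimization algorithm, so the sketch below is purely illustrative: bisection on a mock thermal model whose constants were fitted to the two data points above (about 12°C rise at 30 L/h, about 8°C near 75 L/h).

```python
# Conceptual sketch only: Flow Simulation does not publish its goal-
# optimization algorithm, but the behavior resembles bounded root-
# finding.  The mock model is fitted to the article's two data points.
def temp_rise(flow_lph):
    """Hypothetical stand-in for a full CFD solve: temperature rise
    (deg C) falling off with coolant flow rate (liters/hour)."""
    return 54.0 / flow_lph ** 0.44

def solve_for_target(target, lo, hi, tol=0.05, max_iter=20):
    """Bisect the flow rate until the rise is within tol of target."""
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        rise = temp_rise(mid)
        if abs(rise - target) <= tol:
            break
        if rise > target:
            lo = mid      # too hot: need more flow
        else:
            hi = mid      # cooler than required: less flow suffices
    return mid, rise

flow, rise = solve_for_target(target=8.0, lo=30.0, hi=600.0)
print(f"~{flow:.0f} L/h gives a {rise:.2f} C rise")
```

Like the real tool, this sketch needs the response to vary monotonically over the search range; a non-monotonic goal can stall or converge to the wrong branch.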

Case 3: Design of Experiments for a CPU Water Cooling Block

While the “goal optimization” is only capable of varying a single parameter, the “design of experiments” study type comes in with multi-variable analysis. In this case, both the channel width and number of channels cut into the cooling block are varied across a pre-determined range, as depicted below.

Figure 12. Range of geometry variations for DoE study.

Whenever varying geometry (regardless of the type of parametric study), care must be taken to ensure the model geometry rebuilds properly across the range of variables specified. It's also important that the topology of the model remains relatively consistent, especially on any faces, bodies or edges where boundary conditions or other setup conditions are specified.

It’s recommended to test the model by manually adjusting it to the “min” and “max” conditions before running the parametric study to verify that it’s able to rebuild correctly so you don’t come back to a slew of failed analyses. Such testing was performed to determine the ranges and the extremes of the two ranges are depicted below.

Figure 13. Extremes of input variable range used in DoE study.

Unlike the other study types, scenarios for a design of experiments study are not automatically created. The user specifies how many experiments to create, and the input variables are varied across their ranges. Then, after each scenario is solved, the optimum design point is estimated after the fact by defining an objective function.

Figure 14. DoE scenario setup.

An additional output provided by the design of experiments study is a response surface viewer, to view trends between multiple variables and their outputs in 3D. The response surface depicted below indicates that changing either variable has a nonlinear effect on pressure drop, but a mostly linear one on the predicted maximum temperature for the given flow rate.

Figure 15. Response surface viewer.
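Conceptually, a response surface is a regression model fitted to the solved design points. As a rough illustration, here is a least-squares quadratic fitted to made-up DoE data; this is a generic sketch, not the actual fitting method Flow Simulation uses.

```python
import numpy as np

# What a response surface is, in miniature: a least-squares quadratic
# fitted to solved DoE points.  All numbers below are made up.
# Columns: channel width (mm), channel count, pressure drop (Pa).
pts = np.array([
    [0.5,  8, 950.0],
    [0.5, 16, 520.0],
    [1.0,  8, 410.0],
    [1.0, 16, 230.0],
    [1.5,  8, 260.0],
    [1.5, 16, 150.0],
    [1.0, 12, 300.0],
])
w, n, dp = pts[:, 0], pts[:, 1], pts[:, 2]

# Full quadratic basis in two variables: 1, w, n, w*n, w^2, n^2.
A = np.column_stack([np.ones_like(w), w, n, w * n, w**2, n**2])
coef, *_ = np.linalg.lstsq(A, dp, rcond=None)

def surface(w, n):
    """Evaluate the fitted response surface at (width, count)."""
    return float(coef @ [1.0, w, n, w * n, w**2, n**2])

print(f"estimated dP at w=1.2 mm, n=14: {surface(1.2, 14):.0f} Pa")
```

An “optimum design point” is then simply the minimizer of an objective function evaluated on this fitted surface, which is why it remains an estimate until re-solved.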

An example of extracting an optimum design point is presented below, where the temperature and pressure are set to minimize, with an additional constraint that the pressure remain below a certain value.

Figure 16. Extraction of optimal design point.

One of the great benefits of this approach is that if objectives shift, it's not necessary to rerun the entire DoE study to achieve a new target; simply create a new optimum design point with its own objective function.

It’s worth noting that the “optimum” design points are only estimates based on the trends detected, and should be run as their own analyses to confirm that the performance matches up to their estimated performance.

Conclusion

The nature of CAD-embedded analysis tools lends itself well to fast iteration and exploration of the design space available to engineers. While it’s possible to accomplish this by manually copying existing projects, at a certain point the capabilities of a table-based iterative tool such as design studies for SOLIDWORKS Simulation or the parametric studies covered here for SOLIDWORKS Flow Simulation really start to show their benefits—whether for varying the model geometry or simulation parameters.

Ryan Navarro
SOLIDWORKS Flow Tips and Tricks https://www.engineersrule.com/solidworks-flow-tips-and-tricks/ Mon, 30 May 2022 20:12:00 +0000 https://www.engineersrule.com/?p=7056

Running SOLIDWORKS Flow, even if you are a SOLIDWORKS Simulation user, can be a bit daunting. Tutorials, training and watching Internet videos can help, but the Flow interface is different and there are many more parameters to consider when analyzing fluid flow and heat transfer.

But don’t worry. Even if you find your knowledge of SOLIDWORKS Simulation is of little use with computational fluid dynamics (aka CFD, aka SOLIDWORKS Flow), Flow is not a mysterious and magical software. It may be a relatively small and specialized application but there is information available on its use—like this article where Flow users have assembled their tips and tricks over years of using Flow.

Before diving into SOLIDWORKS Flow's capabilities, it is important to understand its limitations. It is multi-physics simulation software based on the Navier-Stokes equations. There is a technical reference in PDF format in the help files that goes through all of the underlying mathematics, if you are curious.

A Few Caveats

We should not confuse multi-physics with all-physics. SOLIDWORKS Flow does not solve electromagnetic problems (i.e., it does not deal with Maxwell's equations); it is limited to fluid flow and heat transfer. It also does not handle phase change; there are no calculations for heat of fusion or heat of vaporization (interestingly, it can manage cavitation). You can, however, infer a phase change from the results.

For example, if the results of a simulation show water temperatures above the boiling point while at standard atmospheric pressure, you can infer that the liquid has turned to steam. It is up to the user running the simulation to understand such phenomena have occurred.

SOLIDWORKS Flow can run simulations at different pressures, including negative gauge pressure (vacuum). Perhaps not an absolute, inter-galactic outer-space or surface-of-the-moon vacuum, but a “medium” vacuum (0.1 Pa or larger) is manageable. This roughly equates to a mean free path of 10 cm for the gas molecules. There are exceptions and geometry does play a role, so be warned—and do read the help file and seek further advice in particular cases.

Figure 1. The General Settings window.

Figure 2. The Calculation Control options window.

Figure 3. The Engineering Database (SOLIDWORKS Flow specific).

Figure 4. The Flow feature tree.

Set it Up

SOLIDWORKS Flow can be broken down into three different phases: setup, run and post-processing.

Setup is generally regarded as the most important phase and you will do well to remember this: garbage in, garbage out. Setup can also be broken down into three areas: Settings, Input Data and Goals.

The Settings portion of the Setup phase includes the General Settings window (a flavor of the setup wizard, shown in Figure 1), the Calculation Control Options window (Figure 2), the mesh settings and the Engineering Database (Figure 3).

The Input Data area includes things such as boundary conditions, fans and heat sources. It is basically the feature tree of the SOLIDWORKS Flow tab (Figure 4).

Goals are always mentioned in training, but the “why” is normally omitted. Goals are specific parameter-driven calculations that the application stores for further use. They can be created and used as part of the input data, and they are also used extensively during post-processing. An example of using a goal as input data is creating a temperature surface goal on the tip of a thermocouple. A surface heat source (input data) can then be set up as a function of that temperature surface goal during a transient analysis (it is very important that it be transient, as a steady state study will never finish solving), thereby simulating a thermostat in the study.

Working backwards, input data is where things such as fluid sub-domain, fans, boundary conditions and heat sources are created. A volumetric heat source can have a constant temperature; that is to say, a given volume (solid body) can always remain the same temperature and serve as an input parameter into the system being studied. A surface heat source will not offer this option.

Don’t Mix It Up

Multiple fluid sub-domains are fine as long as the fluids do not come in contact with each other (like water traveling through a pipe while the pipe sits in air); each sub-domain must be completely isolated in the model. There is a workaround that allows multiple miscible fluids in the same domain: use a mass fraction or volume fraction—for example, X% propane and Y% methane in the same pipe. Just be sure to include all the fluids in the setup (General Settings window).

Since SOLIDWORKS 2018, it is also possible to have immiscible fluids (such as oil and water, or water and air) interacting. This is done with the “Free Surface” feature (also in General Settings).

Join the Fan Club

You can create a custom fan with SOLIDWORKS Flow's fan feature and, using Microsoft Excel, enter points on the fan curve. This is very useful when fan manufacturers provide fan properties and data sheets.
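Once digitized, a fan curve is also handy outside the solver, for example to estimate an operating point by intersecting it with a system resistance curve. The sketch below does this with generic numpy interpolation; every number in it is a hypothetical placeholder, not data from any real fan.

```python
import numpy as np

# Estimate a fan's operating point: where the digitized fan curve
# meets the system resistance curve.  All values are hypothetical
# placeholders, not from any real fan's datasheet.
q_pts = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # flow, CFM
p_pts = np.array([60.0, 52.0, 38.0, 20.0, 0.0])   # static pressure, Pa

k = 0.05                                  # assumed system resistance, Pa/CFM^2
q = np.linspace(0.0, 40.0, 4001)
fan = np.interp(q, q_pts, p_pts)          # piecewise-linear fan curve
system = k * q**2                         # system curve: dP = k * Q^2
q_op = q[np.argmin(np.abs(fan - system))] # closest approach = intersection
print(f"operating point: ~{q_op:.1f} CFM at ~{k * q_op**2:.1f} Pa")
```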

Resist Contact

Be sure to insert a contact resistance if two solids of dissimilar materials are in contact with each other and heat transfer is important. This is normally reserved for higher-order, more accurate solutions after a design survives its preliminary simulation.

Figure 5. Right Click on Solid Materials.

Material Properties

Keep the material of the model current and material properties as complete as possible. If heat transfer (conduction) is involved, the density, specific heat and thermal conductivity of the material are required. If these properties are all up to date, then importing them into the simulation is as easy as right-clicking on “Solid Materials” and “Import Material from Model” (Figure 5). It will upload all unknown materials to the Engineering Database at the same time.

The SOLIDWORKS Flow Engineering Database is separate from the SOLIDWORKS material database. If “Solid Materials” does not appear, it is because “conduction” has not been enabled in the General Settings.

Understand Gravity

Gravity can be a function of time when running a transient analysis (Figure 6). This allows the solution of objects turning upside down (or pouring out).

Figure 6. Gravity as a function of time.

Making a Big Mesh of It

The Run phase is straightforward. There are a handful of options available to make sure the simulation is converging to a solution. You can monitor the goals that have been defined.

Check your mesh. You can get an accurate element count by running the mesher without solving. The solver will show the mesh count (once completed), making it easier to estimate a rough solve time for future studies. For example, a 300,000-cell model took 52 minutes.
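That known run can be used for rough time budgeting. The sketch below assumes roughly linear scaling with cell count, which in practice is optimistic, so treat the result as a lower bound:

```python
# Back-of-envelope solve-time budgeting from one known run, assuming
# roughly linear scaling with cell count (real scaling is usually
# worse than linear, so treat this as an optimistic lower bound).
known_cells, known_minutes = 300_000, 52
target_cells = 1_200_000          # hypothetical planned mesh size
estimate_minutes = known_minutes * target_cells / known_cells
print(f"~{estimate_minutes:.0f} minutes")  # ~208 minutes
```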

Look at the model with the mesh visible to see if the mesh should be refined anywhere.

Batch Run and Go

The “Batch Run” option, in conjunction with the Clone Project feature (which in layman's terms means “copy simulation”), allows up to two simulations to run concurrently, or multiple simulations to run sequentially, without user interaction. You can set the number of cores to process with, so you can continue to do other work during the simulation.

If multiple simulations need to be run after changing just one parameter, use the “parametric study” option to set up multiple values for the same parameter/variable. The number of values translates to the number of simulations solved: if it takes three hours to run one simulation and you have five different values, it will take 15 hours. It is often best to set up a batch run or parametric study at the end of the day.

The post-processing phase is the most gratifying—because that is where all the pretty pictures are. Every screenshot you grab from the post-processing phase should tell a story about that particular result.

Red is Not Always to the Swift

It can be very confusing to see the same color in one screenshot represent different values in another. It is better to keep the colors consistent, with the gradient in every screenshot showing the same range of values. For example, screenshot A has a minimum temperature of 153°C in blue and a maximum temperature of 347°C in red. Screenshots B, C and D should have the same minimums and maximums.
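The same discipline, expressed in code: normalize every field with one shared legend range before mapping to colors, so a given temperature is the same color in every image. This is a generic sketch of the principle, not SOLIDWORKS functionality.

```python
import numpy as np

# Fix the legend range once, then map every field through it so one
# color always means one value (generic sketch of the principle).
T_MIN, T_MAX = 153.0, 347.0               # shared legend range, deg C

def to_color_index(field, levels=256):
    """Map temperatures onto fixed colormap indices 0..levels-1."""
    t = np.clip((np.asarray(field, dtype=float) - T_MIN) / (T_MAX - T_MIN),
                0.0, 1.0)
    return np.round(t * (levels - 1)).astype(int)

# 200 deg C lands on the same color index in two different images:
img_a = to_color_index([[200.0, 347.0]])
img_b = to_color_index([[153.0, 200.0]])
print(img_a[0, 0] == img_b[0, 1])  # True
```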

When in Doubt…

Trust your gut when you are doing the post-processing. When in doubt, throw it out. It is better to ask for an extension than to present bad data.

Cut to the Plot

The Cut Plot feature has an extra field below the surface selection field for planes that may be hard to find the first time. Be careful when inspecting a cut across solid and fluid phases: some properties are specific to one material phase, such as fluid temperatures, yet will still be indicated by the color gradient of the Cut Plot.

Use Isosurfaces in the post-processing to analyze an entire 3D volume of any given parameter. For example, use Isosurfaces to show the volume of air surrounding a PCB that is above 80°C.

Use the flow trajectories feature in post-processing to create animations of the fluid flows. But keep in mind that the color of streamlines can represent temperature or pressure, not just velocity. It may be counterintuitive to see the little arrows moving fast and colored red, only to realize too late that you are seeing temperature, not velocity.

Get to the Point

There is a feature called “Point Parameters” in the Results section of the feature tree. If you create a sketch with a point, use Point Parameters to analyze any available parameter (temperature, velocity, pressure, etc.) at that point. This will allow you to repeat multiple design changes using that point as a reference. The same can be done across a line in a sketch using the XY Plot feature.

The Goal Plots feature is fabulous for exporting goal plots across iterations (if steady state) or time (if transient) to Microsoft Excel for further data mining and analysis. This feature alone makes creating goals more important than introductory courses let on. Note that goals need to be established prior to running the simulation if you intend to use this feature.
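Once exported, the goal history is just tabular data and is easy to mine with a script. Flow exports to Excel; the sketch below uses a CSV stand-in with a hypothetical column name and made-up values, applying a simple convergence check to the tail of the history.

```python
import csv
import io

# Sketch of mining an exported goal history.  Flow exports to Excel;
# a CSV with a hypothetical column name and made-up values stands in.
exported = io.StringIO(
    "Iteration,GG Av Temperature (Solid) [C]\n"
    "1,95.0\n2,88.0\n3,84.5\n4,84.1\n5,84.0\n6,84.0\n"
)
rows = list(csv.DictReader(exported))
history = [float(r["GG Av Temperature (Solid) [C]"]) for r in rows]

# Simple convergence check: last three iterations within a 0.5 C band.
tail = history[-3:]
converged = (max(tail) - min(tail)) <= 0.5
print(f"final value: {history[-1]} C, converged: {converged}")
```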

Under Pressure

Finally, the pressure field of a SOLIDWORKS Flow Simulation project can be exported to SOLIDWORKS Simulation for structural analysis. However, this option must be enabled inside SOLIDWORKS Flow Simulation: open the Flow Simulation dropdown, choose Tools and then select "Export Results to Simulation." This feature is in different locations depending on the SOLIDWORKS version being used.

In conclusion, SOLIDWORKS Flow Simulation is a virtual sandbox of possibilities for CFD simulation. With the introductory examples and lessons provided, the resources available for applying the program practically, and a few common-sense rules ("garbage in, garbage out"; "when in doubt, throw it out"), your Flow simulations will be a success.

Learn more about SOLIDWORKS with the eBook SOLIDWORKS 2022 Enhancements to Streamline and Accelerate Your Entire Product Development Process.

Sean Borchert
How to Win at Engineering with Generative Design https://www.engineersrule.com/how-to-win-at-engineering-with-generative-design/ Thu, 17 Feb 2022 19:48:00 +0000 https://www.engineersrule.com/?p=6907

Engineering is a journey that ends with "good enough."

Good enough is subjective, amorphous—and a perfectly appropriate place to call it quits. A good enough design, product or outcome satisfies the stated objectives and is a feasible solution.

Good enough is not, however, the best solution. Good enough often leaves the best solution undiscovered.  

We have reasons to avoid pursuing the best solution. Cost is the main reason, the ever-present constraint on the number of solutions attempted. Other constraints include lead time, performance targets, manufacturability or regulatory rules.

The best solution is a costly goal because the goal line, like perfection, can never be reached. Every solution comes at a cost and attempting another solution will draw from available resources. In this way, each solution competes against the previous one. When do you stop iterating and move to the next step in the process, knowing you have the winning solution?

Winning implies competition, as does best.

One might not think of engineering as competitive. But businesses that employ engineers are always in competition. Grants and contracts are a competition and so is market share. The globalization of industry and the steady democratization of skills has created fierce competition, with more participants, in every industry. While not every internal project is directly competitive, most projects will support a mission that is ultimately competitive.

Engineers have grown accustomed to iterating, but normally do 1 to 10 iterations per problem. They understand the value of iterating, learning with each iteration, or “refining the design.” It normally leads to good enough.

Of course, every engineer will have to balance the number of iterations against the cost to produce those iterations on a per-project basis. There is significant pressure to find ways to increase the iteration/dollar ratio. Tools that increase this ratio are providing competitive advantages that are hard to ignore.

Over the last few years, three tools have caused a shakeup in how engineers think about the pursuit of the best solution.

  1. Topology optimization
  2. Generative design
  3. Additive manufacturing

Each of these tools presents considerable advantages by themselves, but when strung together they have the potential for creating (and confirming) a winning design while reducing overall cost.

Topology Optimization

Topology optimization is an objective-based design method that utilizes finite element analysis (FEA) to determine feasible shapes for a given set of loading conditions.

Topology optimization software frees the engineer from having to guess the overall shape or form of the part. A poor guess can lead to a dead end. This is why so many designs resemble one another. Blazing a new trail can be risky. Engineers are playing it safe.

In a conventional design workflow, the engineer will draw their part in CAD and validate their design using FEA or other numerical methods. If the FEA results show that there is room for improvement—perhaps the factor of safety is higher than required or the lifespan is longer than necessary—the engineer may iterate the design. They may choose to sacrifice performance for gain in another category such as manufacturability or cost of materials. Repeated multiple times, this process eventually results in a good enough part.

Typical FEA visualization. Colors are used to communicate stresses, displacement or other metrics. The engineer can use these visualizations as clues to an optimized shape.

But if the goal is the best shape, the engineer might as well be searching for a black cat in an unlit gymnasium. Other than visual clues, there is little help provided to find the best solution.

Topology optimization flips the process. It begins with the loading conditions. An engineer sets the objective and constraints of the problem and then turns on the optimization. The model is broken down into finite elements and solved. An element carrying no stress is unnecessary and can be removed; where stress is too high, more material is needed. In this way, an optimal shape is systematically derived.
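The remove-unstressed-material loop described above can be caricatured in a few lines. This is a deliberately toy sketch of the general idea, with a fake solver standing in for FEA; it is not the algorithm any particular software uses:

```python
# Toy sketch of the remove-unstressed-material idea (not the actual
# SOLIDWORKS algorithm): repeatedly drop elements whose stress falls far
# below the peak, re-"solving" the remaining structure each pass.
import random

random.seed(0)

def fake_solve(elements):
    """Stand-in for an FEA solve: assign each element a stress value.
    Here, stress grows as the load is shared by fewer elements."""
    load = 1000.0
    return {e: load / len(elements) * random.uniform(0.1, 2.0)
            for e in elements}

elements = set(range(100))
threshold_fraction = 0.3          # drop elements below 30% of peak stress

for _ in range(5):                # a few removal passes
    stresses = fake_solve(elements)
    peak = max(stresses.values())
    keep = {e for e, s in stresses.items() if s >= threshold_fraction * peak}
    if keep == elements:
        break
    elements = keep

print(f"{len(elements)} elements remain")
```

Real implementations work on element densities and sensitivities rather than a hard stress cutoff, but the feedback loop of solve, evaluate, redistribute material is the same.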

Topology optimized control arm. Three loading conditions are evaluated and confirmed to have room for optimization. Algorithms reveal resultant load paths, which are interpreted as the minimum volume of material necessary to meet the engineering criteria.

The resulting shapes can be organic and life-like, reminiscent of natural designs all around us. From tree limbs to dragonfly wings, nature has proven to be an exceptional objective-driven designer. But however optimized these designs are, they need to be manufacturable. That has been a barrier to wide adoption of topology optimization.

Topology optimized wheel. The load paths that emerged from the loading conditions resemble patterns found in nature.

Additive Manufacturing

Among the many benefits of additive manufacturing (also known as 3D printing) is that it enables topology optimized shapes to be manufactured cost-effectively. “Complexity is free,” frequently quoted by the additive manufacturing (AM) industry, may not be true, but complexity is definitely encouraged.

In traditional manufacturing, increased complexity directly correlates to increased cost. A more complex part costs more to produce. Project managers are especially averse to individually complex parts because they understand their downstream risk. When a shape is too complex, we'd rather break it down into multiple, simpler parts to be assembled.

However, within the last decade additive manufacturing has emerged as an acceptable method of manufacturing complex parts and has fueled a paradigm shift toward complexity. In a time where complexity is encouraged, topology optimization can deliver.

Generative Design

So, is it starting to sound as if topology optimization and additive manufacturing combine for an easy win? We offer a few caveats:

  • Topology optimization works in the concept phase of the design process. The output of this stage is a low-precision model that satisfies the basic requirements of the part.
  • This shape must be processed in a CAD tool such as SOLIDWORKS for detailed design.
  • The detailed design is validated with more robust FEA and CFD tools than typically found in topology optimization software.
  • The validated design is cleared for manufacturing and the drawings or CAD files are transferred to those responsible for making the part. This is where additive manufacturing becomes an option.
  • After manufacturing, the part is inspected for quality assurance.
  • If the part is destined for assembly, it makes its way there. Otherwise, it is sent for packaging and shipping.

A winning engineering design requires optimization within every stage of this workflow. This quickly becomes difficult because of the sheer volume of possible combinations. At each stage, decisions are made that affect later stages but also previous stages. This is not linear like an assembly line. It’s more like a spider web.

But what may be challenging for an engineer is perfect for iteration through automation.

There are challenges to automation within every stage, but until topology optimization, there was no automation possible at the beginning—the concept stage of a design. Topology optimization tools, by minimizing the effort required of the human operator, enable an unprecedented level of automation at this stage.

Most topology optimization software isn’t currently capable of complete automation, but it is a good start. Rather than a discrete objective, a topology study could be given an array of directives. So too, the constraints (such as manufacturing method and material) could be varied and a user-selected number of iterations could be evaluated. The extreme computational intensity can be mitigated by pushing the number crunching to the cloud, another recent and necessary advancement in technology.

A single component, multi-variable generative design study will result in several viable options to select from. (Image courtesy: Buonamici, F., Carfagni, M., Furferi, R., Volpe, Y. and Governi, L. (2020). "Generative Design: An Explorative Study." Computer-Aided Design and Applications, 18, pp. 144-155. doi:10.14733/cadaps.2021.144-155.)

A generative design tool will utilize topology optimization in addition to other layers of optimization, resulting in an array of possible solutions. The array is presented to the engineer for evaluation. The engineer might make a selection based on information not included in the generative design study, such as current supply chain disruptions. Manufacturing methods, material selection, cost of inspection, etc. can and should be investigated at the conceptual stage with this method.

This is the promise of generative design: a holistic automation that ends with the presentation of potential solutions, and from them, we can select the best of the best. It’s our choice.

Learn more with the whitepaper Designers Greatly Benefit from Simulation-Driven Product Development.

Tyler Reid
Mold Filling Simulation in SOLIDWORKS Plastics https://www.engineersrule.com/mold-filling-simulation-in-solidworks-plastics/ Wed, 22 Sep 2021 22:45:03 +0000 https://www.engineersrule.com/?p=6515

Injection molding is an efficient manufacturing process for producing high-volume, low-cost parts. Advances in polymer technology have enabled the replacement of structural metal components with injection molded fiber-reinforced materials. Advances in machine automation have driven per-part cost down even further, but the mold tooling required for part production remains a significant capital expense.

Mold filling simulation allows for verifying part design before mold tool steel is ever cut—ensuring that major revisions will not be necessary to either the part or the molds. Incorporating a CAD-embedded molding simulation software enables designers to make decisions to improve moldability early on and throughout the design process.

Figure 1. Fill analysis in SOLIDWORKS Plastics.

SOLIDWORKS Plastics is an add-in product for SOLIDWORKS that enables simulation of plastic injection molding. It is available in three levels: Plastics Standard, for predicting single-part mold filling and common defects; Plastics Professional, which adds analysis of the pack or "pressure hold" cycle, more complex materials and multi-cavity molds; and Plastics Premium, which enables molders and tool designers to simulate cooling channel design and predict warpage.

This article will examine some of the common molding defects that can be predicted and avoided using SOLIDWORKS Plastics—as well as the setup process required to perform an analysis.

Creating a Plastics Fill Study

Creating a Plastics study is a straightforward process, beginning with loading the SOLIDWORKS Plastics add-in.

Geometry Preparation

Geometry preparation involves making sure you are working in the SOLIDWORKS Part environment. For single parts there is no work required, but multi-cavity layouts may require conversion from assemblies into multi-body parts.

Note that for Plastics analysis the “positive” volume of the net molded part is required, rather than the negative shape of the mold cavity. If you are working from mold tooling and do not have the shape of the target part, it can easily be extracted using the Intersect feature in SOLIDWORKS.

The last task before beginning analysis setup is to define areas for the gate or injection locations where polymer will be injected into the cavity. This is commonly defined as a simple sketch point coincident with an edge or face of the model.

Study Setup

At this point a new study can be created. For most studies, the only choice the user needs to make is Solid or Shell mesh. We will explain meshing in detail later, but the short version is that a Shell mesh allows rapid analysis of thin-walled parts with relatively constant wall thickness, while a Solid mesh has an increased solve time but offers richer results with fewer assumptions.

Figure 2 below illustrates the Plastics Study feature tree after mesh generation. The injection location was specified under boundary conditions and a sketch point was specified. The polymer was chosen inside the Injection Unit options.

Figure 2. Plastics study tree.

Starting with the 2020 version of SOLIDWORKS Plastics the user-interface was re-structured to match other SOLIDWORKS Simulation products more closely.

Like SOLIDWORKS Simulation and Flow Simulation, the study setup is stored on the part file and can take advantage of multiple configurations for analyzing geometry variations. Results files are saved in the same folder as the CAD files by default. 

The Injection Units settings let you specify the polymer as well as fill settings such as injection pressure, mold temperature and fill time, if known. In the absence of user-specified parameters here, Plastics will use data from the material specifications for mold temperature as well as automatically calculated values for fill time.

For a more detailed guide on how to set up a study in SOLIDWORKS Plastics check out this video: SOLIDWORKS Plastics: Simulation Setup Guide.

Interpreting Results

A variety of part-level defects can be identified easily with Plastics Standard, which has automatic checks and visualization for short shots (incomplete fills), weld (knit) lines, sink marks and air traps in the model. Therefore, most severe molding problems can be identified and mitigated early with a quick first-pass analysis, often set up and run in just a few minutes.

Machine selection can also be considered by predicting the clamp tonnage and pressure required to mold the part.

Cooling time is estimated at this stage and part design changes can be made to reduce cooling time or minimize knit lines or sink marks.

Testing Geometry Variations

Once the initial analysis has been performed, analyzing different geometry variations or polymers is as simple as clicking the “Duplicate Study” button to create a new SOLIDWORKS configuration and copied Plastics study.

Alternatively, multiple geometry variations prepared in advance can be set up and batch solved through the Batch Manager.

Runner Balancing

Plastics Professional adds the ability to represent multi-cavity molds. Especially troublesome for molders are "family molds" (where multiple different parts are grouped together to save on tooling cost), which can suffer from unbalanced fill times. Observe Figure 3 below, where, due to the parts' different volumes, the smaller part fills much more quickly. In the pack cycle, this may result in excessive flashing and other issues.

Figure 3. Family mold prior to runner balancing.

The process of “runner balancing” or resizing the runners is normally a manual undertaking to attempt to even out the flow between disparate parts. SOLIDWORKS Plastics Professional automates this process using a runner balancing wizard which iterates and attempts to automatically optimize runner and gate sizes until equal fill rates are achieved.
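The iterate-until-balanced idea can be sketched with a toy model (not the actual wizard's method): assume flow through a runner scales with diameter to the fourth power, Poiseuille-style, so fill time goes as cavity volume over d⁴, and nudge each runner diameter until all cavities share the slowest fill time. Every number below is invented for illustration:

```python
# Toy sketch of runner balancing (not the actual wizard's algorithm):
# fill time ~ cavity_volume / d^4, so resize each runner toward the
# slowest cavity's fill time.

cavities = {"small": 10.0, "large": 40.0}   # relative cavity volumes
diameters = {"small": 4.0, "large": 4.0}    # runner diameters, mm

def fill_times(diams):
    return {k: cavities[k] / diams[k] ** 4 for k in cavities}

for _ in range(20):
    t = fill_times(diameters)
    target = max(t.values())                # slow everything to the slowest
    for k in diameters:
        diameters[k] *= (t[k] / target) ** 0.25

t = fill_times(diameters)
spread = max(t.values()) - min(t.values())
print({k: round(d, 2) for k, d in diameters.items()}, round(spread, 9))
```

In this toy model the small cavity's runner shrinks (to about 2.83 mm) until both cavities fill together; the real wizard additionally respects gate sizes and pressure limits.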

Runner systems in general can be quickly prototyped using single line sketches and assigning profiles and sizes, or they may be modeled in more detail by creating a solid body to represent the runner system and flagging it as part of the runner domain.

Figure 4. Runner system: sketch lines (left) vs solid body (right).

Modeling a solid body to represent the gate and runner is the preferred method to accurately represent localized gate effects and test different gate designs such as submarine gate, cashew gate, etc.

Solid Mesh

The examples thus far have featured a shell mesh—which is appropriate for initial predictions of fill performance on thin-walled parts.

For parts that may feature thin and thick regions, or out of desire for more accuracy and rich results, it may be desirable to utilize a solid mesh.

Figure 5. Solid mesh and boundary layer element closeup.

A shell mesh covers only the interior and exterior faces and performs extra calculations to interpolate what the flow front is doing through the thickness of the cavity.

The solid mesh generates boundary layer elements on the inner and outer skins of the model and then fills in the inside with tetrahedral elements, which allows it to calculate the flow front explicitly.

This means that in addition to ensuring accuracy, solid mesh enables additional results outputs.

Isosurface plots allow visualizing the 3D flow front of polymer, as visible in Figure 6 below.

Figure 6. Isosurface display for fill time plot with solid mesh.

The presence of “through thickness” elements also means that accurate cut plots can be created and data can be probed at any location internal to the part, which is practically a requirement for parts that feature thick wall sections.

Solid mesh functionality is available even in Plastics Standard but only for single body parts.

Solid Mesh Performance

Although a solid mesh requires many more elements, the solid mesh solvers are very well multi-threaded. While a shell mesh may solve faster on lower-end hardware, the gap in solve time shrinks on hardware with a high number of CPU cores and capable GPUs.

Overmolding

Figure 5 and Figure 6 also show an example of insert overmolding, a feature which requires Plastics Professional. The ability to specify multiple different domains makes it easy to separate the runner body, the insert and the part cavity itself.

By including the insert in the analysis, the appropriate material can be applied and more accurately represent the thermal effects in the mold.

Plastics Professional also supports multi-shot injection molding for plastic-on-plastic parts, as well as materials such as TPU and TPE for soft-touch overmolding.

Mold-level Analysis: Cool & Warp

At the high end of SOLIDWORKS Plastics simulation in Plastics Premium, mold-level analysis can be performed by representing the runners, cooling channels and a mold body around the cavity.

Figure 7 below shows the cooling channels, defined in this case using simple sketch lines. Flow rates and a fluid are input to the cooling channels, which will perform a fluid dynamics calculation to determine their heat removal performance.

Figure 7. Cooling channel and runner system defined in Plastics Premium.

Cooling channels can also be modeled using solid bodies to represent more complex cooling, such as conformal cooling channels.
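As a sanity check on any cooling channel design, a first-order energy balance estimates how much heat a given coolant flow can carry away. The numbers below are illustrative, not from any particular mold:

```python
# First-order energy balance for a water cooling channel (illustrative
# numbers): heat removed Q = m_dot * c_p * (T_out - T_in).

rho  = 1000.0       # kg/m^3, water density
cp   = 4186.0       # J/(kg*K), specific heat of water
flow = 2.0 / 60000  # m^3/s  (2 liters per minute)

m_dot = rho * flow           # mass flow rate, kg/s
t_in, t_out = 25.0, 32.0     # coolant inlet/outlet temperatures, deg C

q_removed = m_dot * cp * (t_out - t_in)
print(f"{q_removed:.0f} W removed")   # ~977 W
```

If the molding cycle dumps more heat than this into the channel, either the flow rate must rise or the outlet temperature will climb; the full CFD calculation then resolves where in the mold that heat actually comes from.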

Figure 8. Virtual mold mesh cross section.

Figure 8 shows a cross section of the virtual mold—a volume of metal to represent the mold body. The mold could be represented as separate solid bodies and inserts for more complex scenarios.

Including the detail of the mold and cooling channels allows a molder or tool designer to simulate the cooling portion of the mold cycle using Plastics Premium.

Mold Temperature Assumptions & Cooling Analysis

Note that all the analyses possible in Plastics Standard and Professional are predicated on a "uniform mold temperature" assumption—meaning that the mold temperature specified is assumed to be uniform throughout. This is usually a reasonable assumption for parts with uniform wall thickness, but the more non-standard the part design, the higher the odds that the assumption is invalid.

In any case, running the cool analysis provides much more detail. The cool analysis can be performed up front (even before Fill or Pack) which makes for a convenient workflow if the tooling designer is simply trying to optimize cooling channels.

Figure 9. Cool results - mold temperature.

Once the cooling results are available, as visible in Figure 9 above, they will automatically be incorporated into subsequent calculations such as Flow, Pack and Warp, increasing their accuracy.

Running Cool/Flow/Pack/Warp provides the richest results the software is able to offer. Warpage is predicted, and the contributions of thermal versus viscous effects can be evaluated.

Figure 10. Warpage result (exaggerated deformation).

An exaggerated warp deformation is visible in Figure 10 above. If it is not possible to correct the warp by improving the part or tooling design, the inverted shape of the warpage may be exported using the Reverse Warp option when creating a deformed body. This allows exporting a shape with “windage” with the intent of cutting the reverse-warped shape into the mold to help improve accuracy of the final part.
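The Reverse Warp idea reduces to simple arithmetic, illustrated here in one dimension with made-up numbers: cut the tool at the nominal position minus the predicted deviation, so that the warp brings the part back to nominal.

```python
# The Reverse Warp idea in one line of arithmetic: if a feature molds at
# `warped` instead of `nominal`, cut the tool at the nominal position
# minus the predicted deviation (illustrative 1-D example).

nominal = 100.00          # mm, target dimension
warped  = 100.85          # mm, predicted as-molded dimension

deviation = warped - nominal
tool_cut = nominal - deviation            # "windage" built into the mold
print(f"cut tool at {tool_cut:.2f} mm")   # 99.15 mm

# If warp behaves linearly, the part then molds at tool_cut + deviation:
as_molded = tool_cut + deviation
print(f"expected part: {as_molded:.2f} mm")   # back to 100.00
```

In three dimensions the software applies this inversion over the whole deformed mesh, and the linearity assumption is only an approximation, which is why a mold trial is still prudent.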

Conclusion

The SOLIDWORKS Plastics add-in enables mold filling simulation from within SOLIDWORKS. For part designers, Plastics Standard represents a great value and will typically make moldability problems obvious with a quick "first pass" analysis.

Overmolding, family molds and an expanded range of molding processes are available in Plastics Professional. Plastics Premium adds the ability to represent cooling system design and the mold itself, as well as performing warpage analysis.

Aside from helping avoid costly tooling challenges, having a plastics simulation tool on hand is especially helpful whenever the part design veers from known, tried-and-true existing designs. Plastics simulation allows for design innovation, rather than playing it safe, giving engineers a level of confidence approaching that of tried-and-true designs.

For more about the benefits of simulation, check out the whitepaper Enhancing Data Management Workflows Through CAD-Integrated Simulation.

Ryan Navarro
Thermal Analysis with SOLIDWORKS Flow Simulation https://www.engineersrule.com/thermal-analysis-with-solidworks-flow-simulation/ Wed, 30 Jun 2021 16:21:22 +0000 https://www.engineersrule.com/?p=6325

SOLIDWORKS Flow Simulation is a powerful, general-purpose CFD package integrated directly into the SOLIDWORKS CAD environment. Because it is a general-purpose fluid dynamics analysis package, Flow Simulation can analyze a wide variety of problems, including aerodynamic and hydrodynamic problems such as pump and propeller design, head loss in piping systems and drag coefficient calculations for vehicles.

One of the most common applications of Flow Simulation today, though, is thermal analysis for predicting the cooling performance of electronics and other heat-generating components. The ability to simulate heat conduction combined with convective heat transfer generated by airflow over heatsinks and chip packages offers a high degree of confidence in the temperatures predicted, especially compared to traditional hand calculations or FEA-based thermal analysis, where assumptions about airflow must be input in the form of convection coefficients.

This article will examine the use cases of SOLIDWORKS Flow Simulation as it relates to thermal analysis, with a specific focus on predicting the performance of electronics cooling systems.

Background & Terminology

Flow Simulation is a computational fluid dynamics (CFD) analysis package using the finite volume method. The computational domain is broken up into a Cartesian mesh, a grid-like mesh made up of box-shaped cells, which will be discussed later in this article.

Key parameters of interest are tracked during the solution by the creation of user-defined goals. In the case of steady-state analysis, the convergence of goals is tracked and utilized as a stopping criterion for the solver. In other words, the solver continues to iterate until the values of the goals flatten off, indicating that the system has reached steady-state equilibrium. The appropriate definition of goals is therefore crucial to ensuring accuracy and reasonable computation time.
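The flatten-off criterion can be sketched as a sliding-window check. This is an illustration of the concept, not Flow Simulation's internal convergence logic:

```python
# Sketch of the flatten-off idea: treat a goal as converged once its
# values inside a sliding window vary by less than a relative tolerance.

def goal_converged(history, window=5, rel_tol=0.01):
    if len(history) < window:
        return False
    recent = history[-window:]
    spread = max(recent) - min(recent)
    scale = max(abs(v) for v in recent) or 1.0
    return spread / scale < rel_tol

# A goal value approaching steady state:
values = [20.0, 55.0, 70.0, 76.0, 78.5, 79.4, 79.7, 79.8, 79.8, 79.8]
print(goal_converged(values))   # True once the tail flattens
```

This also shows why goal choice matters: a goal on a quantity that keeps oscillating (say, velocity in a vortex-shedding wake) may never flatten, keeping the solver running.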

Steady-state and transient calculations can be performed. By default, new projects are treated as steady-state. Transient analysis is enabled via the time-dependent checkbox in the Project Wizard, which iterates over physical time steps and stores results over the time history of the solution, at the cost of extended solution time. Transient analysis makes it possible to input curves for conditions such as heat sources to represent duty cycle, or analyze problems which may be constantly fluctuating and have no steady-state solution at all.

Analysis can be internal or external. Internal analyses represent closed wall systems such as electronics enclosures, or a piping system or manifold. External analyses represent a larger computational domain, such as a room full of air around the product to be analyzed.

Mechanisms of Heat Transfer

By default when creating a new project, Flow Simulation simulates heat transfer in fluids and performs a steady-state analysis.

The Project Wizard allows selection of various physical effects at the time of project creation. Enabling the option for heat conduction in solids will open up a variety of new options in the project, namely: the ability to define materials with thermal conductivity properties, heat sources with their own temperatures or heat generation rates, and goals that track the temperature and thermal properties of various solids.

Figure 1. Flow Simulation Project Wizard.

The ability to simulate heat transfer in fluids is maintained, so heat generating solids will automatically convect heat away to the surrounding fluid.

Enabling the radiation option in the Project Wizard allows definition of emissivity and performs simulation of radiative heat transfer.

Heat removal in the high-powered electronics manufactured today is typically accomplished by air or liquid cooling. As the thermal performance of such systems is often dominated by conduction and convection, radiative heat transfer is often assumed to be minor and neglected in the simulation to reduce solution time.

Radiation becomes crucial between components with very high temperatures, or those operating in near-vacuum conditions. Applications where radiative heat transfer is critical include the design of light bulbs and lamps, heating elements and furnace equipment, and spacecraft and satellites.

For the special case of problems that are dominated by conductive heat transfer and/or radiation, Flow Simulation has an option for “Heat conduction in solids only” which completely disables the fluid flow calculations—effectively removing the “flow” in Flow Simulation. This option is appropriate for drastically speeding up calculation of systems that operate in a vacuum.

Example 1: Natural Convection Analysis

The amplifier pictured in Figure 2 below is the subject of a natural convection thermal analysis, using an external analysis project type, as well as the options for heat conduction in solids and gravity (appropriately oriented). Solid materials with appropriate thermal conductivity are defined and heat sources are applied to any heat generating components.

Because the option for Gravity is enabled in the Project Wizard, the heating of the surrounding fluid will cause convective currents to form: the heated fluid rises due to its lower density while denser, cooler fluid descends to take its place—a process known as natural convection.

Figure 2. Example of Natural Convection Analysis of Amplifier.

Aside from setting up goals to track the temperatures of the solids of interest in the study, not much else is required from the user to quickly establish baseline results. Accuracy of the simulation can be improved by refining the mesh and establishing thermal contact resistances, as well as some specific considerations unique to external analyses.
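Before meshing anything, a quick Rayleigh number estimate indicates how vigorous the natural convection will be. The air properties below are rough room-temperature values and the dimensions are illustrative:

```python
# Back-of-envelope natural convection check: the Rayleigh number
# Ra = g * beta * dT * L^3 / (nu * alpha). Air properties below are
# rough room-temperature values.

g     = 9.81        # m/s^2
beta  = 1 / 300.0   # 1/K, ideal-gas expansion coefficient near 300 K
nu    = 1.6e-5      # m^2/s, kinematic viscosity of air (approx.)
alpha = 2.2e-5      # m^2/s, thermal diffusivity of air (approx.)

dT = 40.0           # K, surface-to-ambient temperature difference
L  = 0.1            # m, characteristic height of the amplifier

Ra = g * beta * dT * L ** 3 / (nu * alpha)
print(f"Ra = {Ra:.2e}")   # ~1e6-1e7 here: vigorous laminar convection
```

A very small Ra would mean conduction dominates and the buoyancy-driven flow contributes little, which is useful to know before committing to a fine mesh around the fins.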

Conjugate Heat Transfer

The convection coefficient, or “h” value, varies over a wide range based on fluid flow and geometry. It is calculated as an output from the Flow Simulation and can be extracted as a results parameter. This makes a CFD-based thermal analysis capable of solving coupled or “conjugate” solid/fluid heat transfer, which is a much more reliable tool for predicting cooling performance than hand calculations or analysis performed in thermal FEA, which require inputting a best guess at an h-value.

As a thought exercise, consider the case of a thermal FEA study for a heatsink. Without a way to accurately predict changes in h-value due to geometry, the results of the study will indicate that adding more fins to a heatsink is always superior due to the increased surface area. In reality, there is an optimal fin density; once exceeded, additional fins begin to impede the surrounding fluid flow and reduce the effective convection coefficient.
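The fin-density trade-off can be made concrete with a toy model in which total area grows with fin count but the effective h-value drops as the fins crowd together. Every number here is invented for illustration:

```python
# Toy illustration of the fin-count trade-off (numbers are made up):
# area grows linearly with fin count, but crowding reduces the effective
# convection coefficient, so dissipated power peaks and then falls.

def dissipated_power(n_fins, dT=40.0):
    area_per_fin = 0.004                    # m^2 per fin, illustrative
    h = 25.0 / (1.0 + 0.04 * n_fins ** 2)   # crowding penalty (made up)
    return h * area_per_fin * n_fins * dT   # Newton's law of cooling, W

best = max(range(1, 31), key=dissipated_power)
print(best, round(dissipated_power(best), 2))   # 5 10.0
```

In this made-up model the optimum is five fins; a conjugate CFD analysis finds the real optimum by actually resolving the airflow between the fins rather than assuming a penalty function.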

Thermal FEA will also fall short if the orientation of a heatsink is changed with respect to gravity direction, which can drastically affect thermal behavior for passive cooled systems.

SOLIDWORKS Flow Simulation is able to accurately predict these behaviors and allows for optimizing heat sink design, as well as predicting performance in alternate orientations.

Special Considerations for Natural Convection

Two considerations of note can affect results accuracy for natural convection problems and other external analyses. First is representing the geometry in its appropriate orientation and positioning. If a device is to be mounted flat on a table, the Gravity direction in the project must be oriented appropriately. Additionally, if the device is being mounted flat to some piece of equipment, the flow restriction from this should be modeled in.

The geometry used for analysis should match as closely as possible the geometry used in physical testing or production implementation, which may necessitate modeling a blocking surface in CAD.

The impact of such geometry is visible in the flow trajectories of Figure 3 below.

Figure 3. Flow Trajectories With and Without Blocking Surface.

The second consideration for external analyses is the sizing of the computational domain. Figure 4 below compares three different computational domain sizes.

Figure 4. Sizing of Computational Domain.

There is no exact rule for predicting adequate computational domain size. Rules of thumb can be found in the literature and typically will vary based on the velocities of fluid flow present in the analysis. Too small of a computational domain may negatively impact results, while an oversized computational domain will needlessly extend solution time.

Much like mesh cell refinement, a best practice would be to conduct a virtual experiment by cloning (duplicating) the Flow Simulation projects and iterating on computational domain size until an adequate balance of accuracy and solve time is determined. Such an experiment can determine guidelines that can be used for similar geometries and analyses moving forward.

Rapid iterations can be performed via the parametric study functionality, which is discussed later in the article.

A step-by-step tutorial covering set up of a simple natural convection problem and discussing these special considerations can be found in this video.

Example 2: Internal Analysis with Forced Convection

Electronics enclosures are often best represented by an internal analysis. Setup of an internal analysis requires capping off any openings of the enclosure. Flow Simulation offers a Lid Creation tool that can help speed up the process of creating lids to cap off the geometry. In order to conduct an internal analysis, the software must extract a “watertight” fluid body from the enclosed space.

Results from an internal analysis of a rackmount server are visible in Figure 5 below.

Figure 5. Internal Flow of Rackmount Server.

If it ends up being too difficult to create a watertight body, a fallback method is to approach the project as an external analysis. In this scenario, it could be thought of as a pseudo-internal analysis within a slightly larger computational domain, as visible in Figure 6 below.

Figure 6. External Approach for an Enclosure.

Representing an enclosure within an external analysis in this way has a couple of additional benefits: it can predict air leakage through any openings, as well as the convective cooling of the entire enclosure to the environment, with the trade-off of additional solve time.

Forced convection from cooling fans is represented by definition of fan conditions. A handful of fan definitions are included with Flow Simulation by default, but additional representations of cooling fans (and even water pumps) can be user-defined by inputting a static-pressure curve. They can be placed as inlet or outlet fans at the edge of the computational domain, or internal fans in the middle of an enclosure.

Figure 7 below shows a static pressure curve as input in the Flow Simulation Engineering Database, using data from the manufacturer’s specification sheet for a fan.

Figure 7. Fan Curve Definition.
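The idea behind a fan curve is that the fan delivers whatever flow rate balances its static pressure against the system's resistance. Flow Simulation resolves this internally once the fan condition is defined; the sketch below only illustrates the concept, with an assumed digitized fan curve and an assumed quadratic system-resistance coefficient.

```python
# Estimating a fan's operating point: the intersection of the fan's
# static-pressure curve with a system resistance curve dP = k * Q^2.
# All numbers below are hypothetical.

# Digitized fan curve: (flow rate in CFM, static pressure in inH2O)
fan_curve = [(0, 0.30), (10, 0.27), (20, 0.21), (30, 0.12), (40, 0.0)]

k = 0.0002  # system resistance coefficient, inH2O per CFM^2 (assumed)

def fan_pressure(q):
    """Linear interpolation of the fan curve at flow rate q."""
    for (q0, p0), (q1, p1) in zip(fan_curve, fan_curve[1:]):
        if q0 <= q <= q1:
            return p0 + (p1 - p0) * (q - q0) / (q1 - q0)
    return 0.0

def operating_point(steps=4000):
    """Scan flow rates for the point where fan pressure matches resistance."""
    q_max = fan_curve[-1][0]
    best = min((abs(fan_pressure(q) - k * q * q), q)
               for q in (i * q_max / steps for i in range(steps + 1)))
    return best[1]

q_op = operating_point()
print(f"Operating point: ~{q_op:.1f} CFM at {k * q_op**2:.3f} inH2O")
```

A more restrictive enclosure (larger k) slides the operating point up the curve toward lower flow, which is exactly the effect the simulation captures when internal geometry changes.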

The optional add-on module for SOLIDWORKS Flow Simulation known as the "Electronics Cooling" module greatly expands the built-in library of manufacturer fan definitions and adds a variety of other useful features for electronics analysis, such as enhanced materials for PCBs and IC packages, additional heating models such as two-resistor components, Joule heating and native support for heat pipes.

Aside from preparing the geometry for internal analysis and defining fans, the bulk of the setup work for this type of problem lies in defining the appropriate solid materials and heat sources. There are productivity tools that can help, such as importing setup conditions from child components and propagating definitions across all assembly instances.

Example 3: Liquid Cooling with Fluid Subdomain

It is increasingly common for high-powered equipment, such as computers running machine learning workloads, to utilize liquid cooling. Liquid cooling may be used as a complete cooling solution, or in tandem with air cooling. An example analysis setup for combined liquid and air cooling is visible in Figure 8 below.

Figure 8. Open-Loop Liquid Cooling with Fluid Subdomain.

This liquid cooling within an air-cooled internal analysis is accomplished via a fluid subdomain. Fluid subdomains allow separating distinct fluid regions and applying unique conditions, such as inlet temperature and flow rate, to each region. This process allows for simulation and prediction of liquid-air heat exchangers and radiators.

Parametric Studies

SOLIDWORKS Flow Simulation projects update automatically with changes to the CAD geometry. Another capability is to manually “clone” or duplicate projects to test and store the results of such variations.

Anytime there are many iterations required, such as when attempting to optimize a geometry, users can take advantage of the in-built parametric study functionality, which allows creating a virtual design of experiments (DoE). Such a DoE setup for optimization of the CPU cooling waterblock is visible in Figure 9 below.

Figure 9. Waterblock Optimization with Parametric Study.

Parametric studies allow varying multiple parameters (in this case, the thickness and number of fins), tracking results parameters and optionally calculating an optimal design point.
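A design of experiments like the waterblock study can be pictured as a grid of design points with a tracked result for each. In the sketch below, the thermal-resistance model, channel width and candidate fin dimensions are all crude stand-ins invented for illustration; in the real workflow, each point is a Flow Simulation solve.

```python
# A minimal DoE grid: vary fin count and fin thickness, evaluate a result
# for each combination, pick the best point. The "solver" here is a toy
# model (assumed numbers), not Flow Simulation.
from itertools import product

channel_width = 0.030  # m, total width available for fins (assumed)

def thermal_resistance(n_fins, t_fin):
    """Toy model: more fin area helps until the gaps get too narrow."""
    gap = (channel_width - n_fins * t_fin) / max(n_fins - 1, 1)
    if gap <= 0.0005:                    # gaps under 0.5 mm choke the flow
        return float("inf")
    area = n_fins * 2 * 0.03 * 0.02      # both sides of each 30 x 20 mm fin
    return 1.0 / (900 * area) + 0.02 / gap * 0.001

points = list(product([10, 15, 20, 25], [0.0005, 0.001, 0.0015]))
results = {(n, t): thermal_resistance(n, t) for n, t in points}
best = min(results, key=results.get)
print(f"Best of {len(points)} points: {best[0]} fins at {best[1]*1000:.1f} mm")
```

The optimizer in a parametric study does essentially this, but can also interpolate between solved points to suggest an optimal design rather than only ranking the grid.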

Meshing Technology in Flow Simulation

The Cartesian or grid-shaped mesh that SOLIDWORKS Flow Simulation utilizes is distinctive among analysis tools. This meshing technology brings with it some distinct advantages, such as ease of generating the mesh and precise control over how much detail is included in the analysis.

Compared to a tetrahedral-based mesh which must start by resolving the solid geometry, the approach used in Flow Simulation begins by subdividing the computational domain. This means that it’s possible to force a coarser mesh density, which will effectively ignore tiny details in the model without actually performing extensive geometry simplification. This ability to “look past” tiny solid features means that Flow Simulation has a much easier time generating meshes for complex geometries.

This also comes with some additional responsibility for the user to ensure that areas of interest are being adequately resolved by the mesh. While the default mesh settings are often a good starting point to establish a baseline, manual adjustment of the global refinement settings, as well as a few carefully placed Local Mesh refinements, can go a long way toward ensuring accuracy of the solution.

Figure 10 below shows an example of mesh refinement around a CPU heatsink using a local mesh control, and a second lesser tier of local mesh refinement defined around the RAM modules.

Figure 10. Mesh Refinement around Heatsink.

Color-coded mesh refinement plots help the user to identify levels of refinement and the areas to which they apply throughout the meshing process.

Conclusion

This article presented the case that thermal analysis involving convection is best analyzed using a tool capable of natively solving conjugate heat transfer, such as the CFD-based approach presented by SOLIDWORKS Flow Simulation. Predicting thermal performance in this way prevents the need to estimate convection coefficients, as would be required to perform hand-calculations or an FEA-based thermal approach.

Several examples were showcased detailing setup of natural convection and forced convection problems for air-cooled electronics, as well as how combined liquid-to-air cooling can be analyzed.

Use of a CAD-integrated CFD tool also allows for rapidly analyzing geometry variations either through manual duplication of projects, or by conducting a virtual design of experiments utilizing the parametric study functionality in Flow Simulation. Lastly, a robust set of meshing defaults should make it easy to establish baseline analyses with limited geometry preparation required by the user.

Advances in software multithreading, combined with the rise of affordable many-core CPUs, have also drastically lowered the solution time requirements for CFD analysis on modern hardware—making it a viable tool to incorporate early on and throughout the design process.

To learn more, check out the whitepaper Design Through Analysis: Today's Designers Greatly Benefit from Simulation-Driven Product Development.

Ryan Navarro
Large Assembly Analysis with SOLIDWORKS Simulation 2021 https://www.engineersrule.com/large-assembly-analysis-with-solidworks-simulation-2021/ Wed, 28 Apr 2021 20:48:00 +0000 https://www.engineersrule.com/?p=6188 Finite element analysis with SOLIDWORKS Simulation allows analyzing load cases and predicting the resulting stresses and displacements of a model directly within SOLIDWORKS CAD. The general process of setting up an analysis is the same for parts and assemblies, but as the number of components in an assembly grows, more care must be taken to ensure feasibility of an analysis.

This article will examine two case studies and describe techniques relevant for setting up any large assembly analysis, as well as some of the key relevant enhancements in SOLIDWORKS Simulation 2021 and other recent versions that make analyzing large assemblies much easier.

Example 1: Mold Base Solid Mesh Analysis

Consider the case of the injection mold in Figure 1 below. It’s important to establish a plan for the level of detail expected in an analysis before beginning study setup. In this analysis, the goal is to predict deflections and stresses around the mold cavities when subjected to the injection pressure and clamping forces.  

Figure 1. Case Study - Mold Base Analysis.

Contact interactions (formerly called “no penetration”) will be used sparingly at the interface between the mold halves. Contact significantly increases solve time, so the remaining plates will be assumed to be bonded. For loadings and restraints: one side of the mold is fixed, a clamping force is applied to the other side and a pressure is applied to the cavity faces exposed to the polymer.

Note: for this type of setup, it's important that the clamping force is sufficiently large that it exceeds the clamp tonnage requirement, which can be estimated as the molding pressure multiplied by the projected (2D) area of the cavities. Insufficient clamping force would cause the halves to separate and the solution would become unstable.
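This sanity check takes only a few lines to automate before committing to a long solve. The pressure, cavity area and applied force below are illustrative numbers, not values from the article.

```python
# Clamp-tonnage sanity check: required force = molding pressure times the
# projected cavity area. All inputs are assumed for illustration.

molding_pressure = 50e6      # Pa (~7,250 psi injection pressure, assumed)
projected_area = 0.004       # m^2, projected area of the cavities (assumed)
applied_clamp_force = 250e3  # N, force applied in the simulation (assumed)

required = molding_pressure * projected_area  # N
print(f"Required clamp force: {required/1000:.0f} kN")

# The applied force should comfortably exceed the requirement; otherwise the
# mold halves separate and the contact solution becomes unstable.
assert applied_clamp_force > required, "clamp force too low for this pressure"
```

With these numbers the requirement is 200 kN, so a 250 kN clamp load leaves a modest margin before the contact interfaces would open.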

The pins and bolts in the assembly will be suppressed and replaced with the appropriate Virtual Connectors, a special type of simplified representation available in SOLIDWORKS Simulation to represent features such as pins, bolts, springs and welds.  

Geometry Preparation Tips

In an assembly with this many components, it’s worth creating a separate configuration just for analysis. SOLIDWORKS provides a variety of selection tools such as Select by Size to select small components or Select Toolbox to select all Toolbox components.

The most powerful method for identifying and suppressing small components is to use Assembly Visualization, pictured in Figure 2 below. This command is accessed on the Evaluate tab of the Command Manager and the column style display can be customized to sort by the volume of various components.

Figure 2. Assembly Visualization for small component suppression.

Shift-selecting components from the top selects the bulk of the small components that can be suppressed. If there are small components that are critical to the analysis, they can be Ctrl-selected to remove them from the selection before suppressing.

Calculating Mesh Sizes

The automatic defaults for mesh generation are often sufficient for creating a baseline mesh in many applications. But for a complex assembly, it may be necessary to manually determine mesh sizes. A reliable workflow for determining the required mesh size is presented in Figure 3 below.

Figure 3. Simple Process for Meshing Success.

Open or isolate any parts that have small detailed features. Unnecessary detail can be suppressed or eliminated with extruded cuts or direct editing features such as Delete Face. Remaining detailed features must be resolved by the mesh. This means that, at minimum, the mesh size must be equal to or smaller than small edges and fillets.

Use the Measure Tool to size up the smallest features in the simplified model and use that measurement as an estimate for mesh element size. It’s best to apply a Mesh Control only in the areas where the refinement is necessary, then create the mesh.
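Once the smallest feature has been measured, a rough element-count estimate can confirm the mesh will stay feasible before anything is generated. Every dimension below is hypothetical, and the ideal-tetrahedron volume is only a crude approximation of what a real mesher produces.

```python
# Rough feasibility estimate: use the smallest measured edge/fillet as the
# local element size and approximate the element count it implies.
# All dimensions are assumed for illustration.
import math

smallest_fillet = 0.8e-3   # m, from the Measure tool (assumed)
refined_volume = 1.0e-5    # m^3, region covered by the mesh control (assumed)
global_size = 8e-3         # m, global element size elsewhere (assumed)
global_volume = 2.0e-3     # m^3, remaining model volume (assumed)

def tet_count(volume, size):
    """Very rough count: fill the volume with ideal tetrahedra of edge size."""
    tet_volume = size**3 / (6 * math.sqrt(2))
    return volume / tet_volume

total = tet_count(refined_volume, smallest_fillet) + \
        tet_count(global_volume, global_size)
print(f"Estimated elements: ~{total/1e6:.1f} million")
```

If the estimate comes out at tens of millions of elements, that is a signal to defeature more aggressively or confine the mesh control to a smaller region before meshing the full assembly.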

Note that performing these steps at the part level has the benefit of fast mesh generation and the ability to quickly experiment with mesh sizes. Mesh controls and other features defined on “child” component studies like this can later be imported to the top level assembly using Import Study Features, described in more detail in Example 2 of this article.

These mesh controls can later be refined further for additional accuracy in prediction of local stresses. I would generally recommend waiting on any additional refinement until a baseline or “first pass” analysis is performed on the top level assembly to verify that the study setup is correct.

Mixing Mesh Quality

For solid mesh studies such as this one, SOLIDWORKS 2020 added the capability of mixed mesh quality.

Before this enhancement, it was necessary to choose on a global level between draft quality and high quality mesh. Draft quality mesh uses linear tetrahedrals that have many fewer nodes and degrees of freedom than high quality mesh, resulting in faster solution times with the tradeoff that it tends to underpredict stresses and displacements.

The global choice meant that draft quality was generally reserved exclusively for validating study setup on a crude first pass analysis.

Now that high quality mesh can be applied selectively to areas of interest, there is the possibility of carefully incorporating draft quality into regions that are sufficiently distant from the area of interest.  The two mesh qualities are color coded by default, and visible in Figure 4 below.

Figure 4. Completed Setup for First Pass Analysis.

Note that the presence of any shell or beam elements will remove the mixed mesh quality capability.  Care must still be exercised to ensure that the mesh refinement is adequate.
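The solve-time difference between the two qualities comes down to node count: draft quality uses 4-node linear tetrahedra, while high quality uses 10-node quadratic tetrahedra with mid-side nodes. The nodes-per-element ratios below are rule-of-thumb values assumed for illustration, not figures from the software.

```python
# Back-of-envelope DOF comparison for draft vs. high quality solid mesh.
# Ratios of nodes to elements in a shared-node tet mesh are assumed
# rule-of-thumb values (roughly 1/5 for linear, ~1.4 for quadratic).

elements = 500_000

linear_nodes = elements // 5           # 4-node linear tets share many nodes
quadratic_nodes = int(elements * 1.4)  # mid-side nodes multiply the count

dof_linear = 3 * linear_nodes          # 3 translational DOF per node
dof_quadratic = 3 * quadratic_nodes

print(f"Draft quality: ~{dof_linear/1e6:.1f}M DOF, "
      f"high quality: ~{dof_quadratic/1e6:.1f}M DOF")
```

Even as a rough estimate, the same element count carries several times more degrees of freedom in high quality, which is why reserving it for the regions of interest pays off on large assemblies.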

This study resulted in 1.5 million degrees of freedom and solved in under 30 minutes on an entry level workstation laptop. The results for stress and displacement, shown at an exaggerated deformation scale, are visible in Figure 5 below.

Figure 5. First Pass Results for Stress and Displacement.

The contact interactions between the mold halves are likely responsible for most of the solve time, but the 1.5 million total degrees of freedom leave plenty of headroom for additional mesh refinement or for reintroducing more high quality mesh later on.

The exaggerated deformation presented is useful to verify the loading setup conditions. In this case, it can be seen that the load and fixture scheme employed is probably not accurately representing the real life loading inside the injection mold machine.

This is because in reality the mold assembly will be squeezed between large and extremely stiff platens, which would greatly restrict the ability of the base plates to deform.  A more realistic analysis could be performed by using a new restraint scheme, or by modeling rigid bodies to represent the molding machine platens and squeezing the mold assembly between those.

Setup problems such as this can be more quickly determined using a “first pass” analysis rather than waiting hours for a very refined mesh study to solve. In fact, it probably could have been identified by running a globally bonded study, which would have solved in only a few minutes.

Example 2: Gantry Shell Mesh Analysis

Consider the example of the gantry crane pictured in Figure 6 below. While not inherently necessary for such an application, this model was purpose built with simulation in mind.

Figure 6. Case Study - Gantry Analysis.

In the case of the truss, geometry like this would often be represented as weldments in SOLIDWORKS, which would automatically convert into beam elements in the simulation. However, in this case, the truss was modeled using surface bodies, which will automatically convert into shell mesh.

The shell mesh provides greater detail and prediction of local stress concentrations than would be possible using a beam mesh. It also enables the use of contact interactions between the frame and the trolley, which slides along it. Contact is also defined on the rollers at either end of the gantry. The remaining geometries such as the trolley and end rails were meshed with the default solid mesh. A Remote Load/Mass is used to represent the weight of the payload. These are useful to represent the effects of excluded components or other cantilevered loadings.

Simplification such as shell mesh definition requires additional setup compared to running an exclusively solid mesh study. Shell thickness, offset and orientation must be defined for the relevant surface bodies, and frequently it is necessary to manually define local contact interactions.

This process is expedited by using the Import Study Features command, which was added in SOLIDWORKS 2018 and is visible in Figure 7 below.

Figure 7. Import Study Features.

Import Study Features allows defining the simulation setup on a study of a child component, and then importing this setup into the top-level assembly. Features to import, such as material/shell definitions, mesh controls, loads and fixtures, can be selectively filtered.

Perhaps the most useful feature is “Propagate … to all instances” which will automatically pattern the imported features to the relevant component instances in the assembly.

Import Study Features also allows for rapid re-use of components which may need to be analyzed in different top level assemblies.

The heavy use of shell mesh on this study and the limited contact areas allowed it to solve quickly. It was easy to test alternate configurations with the trolley at various distances along the gantry.

Figure 8. Gantry Analysis Results.

The results of the analysis with an off-center trolley are depicted in Figure 8 above. The analysis totaled 850k degrees of freedom and solved in just a few minutes.

Productivity Shortcuts & Simulation API

Setting up simulations on larger projects often involves repetition of monotonous tasks such as defining many loads and fixtures. SOLIDWORKS Simulation allows for keyboard shortcuts, mouse gestures and shortcut bar customization for simulation to help speed up the process.

Pinning contact and connector menus so they remain persistent on the screen is another way to save time. Outside of the “Import Study Features” mentioned earlier, there are limited means to pattern loads or connectors.

One solution is to use the SOLIDWORKS Simulation API. The macro recorder is capable of recording simulation actions and outputting a VBA macro with the appropriate API calls. These can form the basis for automation to quickly generate simulation setups and perform detailed results post-processing.

Figure 9 below shows an example VBA macro integration with Microsoft Excel, which coordinates with SOLIDWORKS Simulation to setup and run a study and then extract the results.

Figure 9. Microsoft Excel Simulation API Example.

A variety of useful downloadable Simulation API examples are posted in a SOLIDWORKS blog article.

Simulation 2021 Enhancements

The examples portrayed throughout this article took advantage of a number of the performance enhancements in SOLIDWORKS 2021.

Performance Enhancements

The Blended Curvature Based (BCB) mesher was rearchitected for the 2021 version and now produces much better aspect ratios while generating large meshes very efficiently. In SOLIDWORKS Simulation Professional and higher, the BCB mesher is very effectively multi-threaded.

When meshing large assemblies, the 12-thread CPU in my laptop frequently maintained 100% utilization, as depicted in Figure 10 below, and the mesh generated much faster than in previous versions. 

Figure 10. BCB Mesher Multi-threading in Simulation Professional 2021.

There’s a similar story in terms of solver performance. The FFEPlus solver has substantially improved performance in 2021, primarily due to improvements in multithreading, as visible in Figure 11 below.

Performance increases have also extended to the Intel Direct Sparse solver and results post-processing which can both manage much larger data sets effectively.

Figure 11. FFEPlus Solver Performance in 2021. (Image from SOLIDWORKS Help Files.)

One of the great things about SOLIDWORKS Simulation is that even the base simulation packages are able to analyze large problems effectively – there are no hard restrictions on problem size in terms of components or node/element count.

Performance by Package Level

Up until 2021, all versions of SOLIDWORKS Simulation offered similar performance in terms of mesh and solve time. With the 2021 version and the substantial rearchitecting of the meshers and solvers, for the first time there is a performance gap between the different packages.

SOLIDWORKS Premium and the SOLIDWORKS Simulation Standard are limited to single-core meshing for the new Blended Curvature Based mesher. The solvers for SOLIDWORKS Premium and Simulation Standard are limited to 8 cores / 16 threads – admittedly still a generous limit that is unlikely to be bumped into on a run-of-the-mill system today.

SOLIDWORKS Simulation Professional and higher packages offer unrestricted multi-threading for both meshing and solving. With the advent of affordable many-core CPUs, this distinction will mean better quality of life for performing large assembly analysis with the higher level simulation packages in the future.

Interface Changes

Figure 12. New Settings for Mesh & Contact in 2021.

Interface changes also made their way into Simulation 2021. In the context of large assemblies, one of the most useful options is the ability to quickly toggle all part definitions to solid mesh, as visible in the left of Figure 12 above. This is a great diagnostic tool or fallback to troubleshoot faulty beam or shell conversion, or for cases where the additional detail of solid mesh is required.

Weldments can also then be batch converted to beams, and sheet metal parts to shells.

In the Simulation options, there are many more contact settings exposed that were previously inaccessible to the user – including a gap range for global bonding which can be adjusted to automatically bond over small gaps.

A new feature is stabilization for contact areas. This is said to produce much more accurate stress distributions for certain contact problems, as illustrated in the right of Figure 12 above.

For more detailed information on the Simulation 2021 enhancements, consider taking a look at these SOLIDWORKS Help files.

For an additional resource, consider viewing the associated live presentation this article was based on, which explores shortcuts and the setup process of these analyses.

Conclusion

This article examined two case studies of large assembly analysis in SOLIDWORKS Simulation, discussing the planning process as well as a number of useful simplification and setup techniques.

Enhancements to SOLIDWORKS Simulation for 2021 and recent years have greatly improved the viability of setting up large simulation studies.

Additional resources were presented to learn more about What’s New in 2021 and the Simulation API.

Check out the whitepaper Design Through Analysis: Simulation-Driven Design Speeds System Level Design and Transition to Manufacturing to learn more.

Ryan Navarro
Learn How GE Healthcare Optimizes Products with CFD https://www.engineersrule.com/learn-how-ge-healthcare-optimizes-products-with-cfd/ Fri, 11 Dec 2020 14:59:04 +0000 https://www.engineersrule.com/?p=5962 Learn how engineers at GE Healthcare Anesthesia and Respiratory Care (ARC) Group are successfully using SOLIDWORKS Flow Simulation throughout their device design process.

Watch a demonstration of how GE engineers were able to optimize the liquid and gas exchange paths when transferring an anesthesia agent from a sealed bottle to its vaporizer using the free surface capabilities in SOLIDWORKS Flow Simulation. They also discuss how virtual prototyping with CFD fits into their design process and how it eliminates the need for multiple physical prototypes, speeding development and cutting costs.

The Engineer
Design Through Analysis: Today's Designers Greatly Benefit From Simulation-Driven Product Development https://www.engineersrule.com/design-through-analysis-todays-designers-greatly-benefit-from-simulation-driven-product-development/ Fri, 20 Nov 2020 17:22:35 +0000 https://www.engineersrule.com/?p=5893 In today’s increasingly competitive global market, product designers face mounting pressures to not only create more innovative products but to also deliver designs of higher precision and fidelity—in terms of manufacturability and performance—more quickly and cost-effectively than ever before. Demands for increased innovation, automation, and throughput across manufacturing organizations are already affecting the work of designers who now face greater expectations for more complete designs earlier, with fewer, if any, design changes or manufacturability surprises encountered late in the product development process. What product designers need to respond to this growing demand are integrated, easy-to-use, and automated design simulation and analysis tools, such as those included with SOLIDWORKS® Simulation software.

In this paper you will learn how designers are expected to deliver more robust designs early in the process and how integrated simulation capabilities can help them drive the design creation process to achieve that goal.

The Engineer
Design Through Analysis: Simulation-Driven Design Speeds System Level Design and Transition to Manufacturing https://www.engineersrule.com/design-through-analysissimulation-driven-design-speeds-system-level-design-and-transition-to-manufacturing/ Fri, 13 Nov 2020 18:19:34 +0000 https://www.engineersrule.com/?p=5886 Today’s engineers and chief designers, who lead system-level or large assembly design teams, work under increasing pressure to develop more innovative, better-performing, and easier-to-manufacture products more quickly and cost effectively. As manufacturers pursue strategies that emphasize increased innovation, automation and throughput across their product development and manufacturing organizations to meet increasing global competition, engineers and chief designers are being tasked with achieving a challenging new set of objectives.

Fortunately, engineers and chief designers can overcome these challenges by adding robust CAD-integrated simulation and design analysis capabilities to their toolbox.

This paper explores the range of new challenges that engineers and chief designers face and how SOLIDWORKS Simulation Professional analysis software can help them meet their individual and organizational goals.

Complete the form below to gain access to this paper.

The Engineer
Giaffone Racing: Expanding Into New Racing Markets with Topology Optimization Tools https://www.engineersrule.com/giaffone-racingexpanding-into-new-racing-markets-with-topology-optimization-tools/ Fri, 06 Nov 2020 14:16:09 +0000 https://www.engineersrule.com/?p=5821 Giaffone Racing standardized on the SOLIDWORKS 3D development system in 2006, implementing SOLIDWORKS Professional design, SOLIDWORKS Premium design and analysis, and SOLIDWORKS PDM Professional product data management software.

Giaffone Racing has realized substantial productivity gains since standardizing on integrated SOLIDWORKS solutions, including a 70 percent reduction in development time. With an interest in supporting additive manufacturing techniques, Giaffone's engineers recently decided to add SOLIDWORKS Simulation Professional analysis software so they can utilize the software's new topology optimization tools, after learning from reseller SKA that those capabilities had been added to the latest software release.

In this whitepaper you will learn how Giaffone Racing:

  • Cut two months from suspension upright development cycle
  • Reduced suspension upright weight by 60 percent
  • Entered rally-race market with lighter, stronger, better-looking parts
  • Supported both conventional and additive manufacturing
The Engineer
Use of CFD & FEA Analysis at GE Health Care, Part 1: CFD https://www.engineersrule.com/use-of-cfd-fea-analysis-at-ge-health-care-part-1-cfd/ Wed, 22 Jul 2020 05:10:03 +0000 https://www.engineersrule.com/?p=5309 Within GE Health Care (GEHC), FEA and CFD modeling are used extensively in each modality, and the type of modeling and tools used are dependent on the unique challenges within that modality. While there is traditional analysis of a given design to check for safety factors or thermal margin, the real power is driving designs using FEA and CFD tools.

The modeling tools are often used in conjunction with numerical design of experiments (DOE) to reduce modeling effort and to gain more insight into design parameter interactions. Often the optimal solution isn’t the numerical optimal, but the optimum within an insensitive region based on manufacturing tolerances and noise variables. My personal philosophy is to use numerical modeling as digital experiments, often run in conjunction with lab experiments. The experiments confirm the models and the models inform the experiments.

Note that due to the proprietary nature of the work done within GEHC, only high-level details can be shared.

Types of CFD Modeling at GEHC

CFD modeling spans the range from simple problems, such as pneumatic pressure drops or temperature-limited electronics cooling, to complex ones, such as precise temperature control or compressible flow studies through valves. Additionally, time-dependent transient studies are often needed in modalities such as anesthesia and respiratory care (ARC).

Below is a summary of simulation uses within GEHC:

  • Concept Development: Map design options for NPIs
  • Design Optimization: Hone selected concepts
  • Design Implementation: Optimize system trade-offs
  • Root Cause Analysis: Understand field failures/test results

Modeling Examples From GEHC

Electronics Cooling Example – FloEFD w/ Electronics Cooling Module

This is a typical electronics cooling problem. In this case, a new system on chip module was replacing an obsolete processor, and a new heat sink cooling solution that could fit within the existing space needed to be developed. To solve this problem, the SOLIDWORKS CFD package with the electronics cooling module was utilized to design the heat sink blower solution.

Figure 1. Example of a Typical Electronics Cooling Problem.

Figure 2. Example results.

This example demonstrates the use of two-resistor (2R) chip models, the PWB tool, and post-processing of the temperature and flow results with flow and surface plots.

Valve Characterization Example

This effort consisted of mapping the performance of a high-speed, pulse-width-modulation (PWM) controlled fluid injection valve as a function of inlet pressure, temperature and fluid. The results were used to determine the correct valve geometry, PWM duty cycles and the controls plant model of the valve (partial results shown in Figure 3). The model established the dynamic range capability versus inlet pressure and fluid type, and the impact of noise variables such as temperature, pressure fluctuations and valve opening time.

Figure 3. Some results from high speed CFD modeling.

Ultrasonic Flow Sensor Design

The initial request was to evaluate a design that was developed experimentally. Empirical development led to poor performance and lack of understanding of what was driving it. CFD simulations were used to identify issues with flow field uniformity and transient flow pulsations. Ultimately, a new design was proposed and quantified numerically and experimentally.

Figure 4. Overview of ultrasonic flow sensor simulation project.

Figure 5. Sample CFD results from ultrasonic flow sensor.

In Figures 5 and 6, it can be seen that the flow fields are non-uniform both spatially (steady state) and temporally (transient), with a large difference between the minimum and maximum flows (0.15 lpm to 15 lpm).

Figure 6. Overview of transient analysis of ultrasonic flow sensor.

A new design was proposed where the flow was brought in around the perimeter of the ultrasonic transducers. The passageways around the perimeter were gridded to simulate a "honeycomb" structure to eliminate flow pulsations. As can be seen in Figure 7, the flow fields are very uniform, which was confirmed experimentally.

Figure 7. Design developed through CFD modeling of ultrasonic flow sensor.

Particle Study Capability

In this example, a CFD particle study was used to understand how injected liquid would be entrained by flowing gas and how both fluids would be heated in a mixing chamber.

Figure 8. An example of using particle study.

This model was used to gain qualitative flow interaction knowledge, which helped guide the inlet and mixing design.

Case Study Example of Design Space Mapping and Use of Numerical DOE

CT Detector Temperature Control System Development Example – VCT

In the early 2000s, GE designed a revolutionary 64-slice CT system from the ground up, which presented significant temperature control design challenges. The heat generated by the analog-to-digital electronics needed to be packaged near the temperature-sensitive photodiode and scintillator of the x-ray sensor. The available cooling air ΔT was limited, as the maximum allowable electronics temperature was close to the maximum air temperature. Additionally, altitudes of 0–3,500 m had to be accommodated.

A CT scan cycle consists of rotating from rest to high speed in just a few seconds, which causes a simultaneous rise in cooling air temperature and a change in air velocity near the detector (a convective boundary condition shift). For artifact-free imaging, the x-ray sensor temperature needs to remain essentially constant throughout a scan cycle. To solve this problem, an architecture was proposed, then CFD/electronics modeling was used to map the design space and drive the design details.

These details ranged from the obvious, such as the number of fans, heat sink geometry and ducting, to the non-obvious, such as the placement of the chips, underfilling of chips, circuit board copper layout and flex effective thermal conductivity, to name a few.
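The core transient problem—a heat load under a convection coefficient that jumps as the gantry spins up—can be illustrated with a simple lumped-capacitance model. This is only a sketch with invented values, not the GE model or its data:

```python
# Lumped-capacitance sketch of a detector temperature during a scan cycle.
# All numbers are illustrative assumptions, not design values.

def simulate_scan(t_end=30.0, dt=0.01):
    """Euler integration of m*c*dT/dt = Q - h*A*(T - T_air)."""
    m_c = 50.0        # thermal mass, J/K (assumed)
    q = 20.0          # electronics heat load, W (assumed)
    area = 0.01       # convective area, m^2 (assumed)
    t_air = 25.0      # cooling air temperature, C (assumed)
    temp = 30.0       # initial detector temperature, C
    history = []
    t = 0.0
    while t < t_end:
        # Convection coefficient jumps as the gantry spins up (first ~3 s).
        h = 20.0 if t < 3.0 else 120.0   # W/m^2-K (assumed)
        temp += dt * (q - h * area * (temp - t_air)) / m_c
        history.append((t, temp))
        t += dt
    return history

history = simulate_scan()
print(f"final temperature: {history[-1][1]:.1f} C")
```

Even this crude model shows the key effect: the convective boundary condition shift changes both the steady-state temperature and the rate of approach to it.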

Pictures of the exterior and interior of the GE 64-slice scanner are given in Figure 9. In the rightmost picture of Figure 9, the system is rotating at maximum speed, which translates to a linear velocity of ~35 mph near the x-ray sensors. The transition from stationary to final rotation speed takes only a few seconds, resulting in a large transient shift in the convective boundary condition.

Figure 9. GE VCT 64. (Images taken from public domain.)

System Architecture – Start of the Design Space Mapping

CFD modeling was initiated to both investigate feasibility and to drive the design decisions. Figure 10 shows a sketch of the system and lays out the noise and design variables along with design outputs. The approach from architectural concept, critical variable identification, modeling methodology and sample of numerical DOE results are shown in Figures 11-14.

Figure 10. Sketch of system air flow path.

To start, a simple network representation (Figure 11) was developed and used to drive the numerical DOEs. A global-local modeling approach was used, where flow resistances from local models (numerical wind tunnel studies) were used to obtain global model flow boundary conditions, which were in turn fed into local models. Detailed electronics cooling models were used to determine steady-state and transient temperatures of key components. From the DOE studies, transfer functions were built and used to map the design space, significantly reducing the number of computational runs and providing key insight into variable interactions.
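The transfer-function idea can be sketched in a few lines: fit a simple response surface to a handful of DOE runs, then query the fit instead of re-running the solver. The toy response function and all values below are invented stand-ins for real CFD results:

```python
# Sketch of building a "transfer function" (response surface) from DOE runs.
import numpy as np

# DOE matrix: (fan flow [cfm], heat load [W]) -> peak component temp [C]
doe_runs = [(10, 20), (10, 40), (30, 20), (30, 40), (20, 30)]

def fake_cfd(flow, load):           # placeholder for a real CFD run
    return 25.0 + 1.5 * load / flow ** 0.8

temps = [fake_cfd(f, q) for f, q in doe_runs]

# Fit a linear-plus-interaction model: T = a0 + a1*flow + a2*load + a3*flow*load
A = np.array([[1.0, f, q, f * q] for f, q in doe_runs])
coef, *_ = np.linalg.lstsq(A, np.array(temps), rcond=None)

# The fitted surface now predicts untested combinations cheaply.
predict = lambda f, q: coef @ np.array([1.0, f, q, f * q])
print(f"predicted T at flow=25, load=35: {predict(25, 35):.1f} C")
```

The fitted polynomial plays the role of the transfer function: design-space mapping then queries the cheap surrogate thousands of times rather than the solver.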

Figure 11. Node Network representation.

Figure 12. Global-Local methodology.

Figure 13. Example of DOE results.

Once the design space and component interactions were mapped, detail optimization began.

Figure 14. Example of Numerical Optimization and experimental and CFD Data.

Summary

A combination of CFD (with the electronics cooling module) and conduction heat transfer modeling, using a global-local modeling technique driven by numerical DOE, was used to show and drive design feasibility. The modeling effort took approximately nine months and was confirmed experimentally with a ¼-system bench model, followed by a system prototype. Prototype to production consisted of controls algorithm development only.

To learn more about SOLIDWORKS simulation for product development in health care, check out the whitepaper Simulating for Better Health and the webinar Free Surface Flow Simulation at GE Healthcare.

]]>
Joe Lacey
SOLIDWORKS Simulation Makes Meshing Easy. Too Easy? https://www.engineersrule.com/solidworks-simulation-makes-meshing-easy-too-easy/ Thu, 02 Jul 2020 05:26:00 +0000 https://www.engineersrule.com/?p=5224 Basics of FEA meshing

You can’t run a simulation without meshing; it’s not possible. Meshing is to simulation what chicken is to chicken soup – the core building block. Meshing creates the finite elements that make up a finite element model so that you can do a finite element analysis, or FEA.

What do you have to know about meshing to run an FEA using SOLIDWORKS Simulation? Surprisingly little. But let’s suppose you want to run better simulations, or get the most accurate results.

It is the mesh that determines the accuracy of the results. Read on and you will find enough about meshing to get the most accurate results from your simulations.

What is Meshing?

Meshing is the process of breaking down your model into simple shapes called finite elements. Let’s define basic terms:

  • Finite: a certain amount, as opposed to infinite.
  • Element: A piece or block of a simple, predefined shape.

The elements are the basic components of the model. Meshing takes a complicated problem and breaks it down into a number of pieces (finite) of a shape that is easily calculated (elements).

Simulation software uses equations to solve for things such as stress on complicated shapes. But first it breaks the shape down into thousands or hundreds of thousands of little pieces of a simpler shape. It works like this:
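As a concrete illustration of the idea (not how SOLIDWORKS Simulation is implemented internally), here is a minimal one-dimensional finite element model of a bar under axial load, with hypothetical dimensions and material:

```python
# A minimal 1D illustration of the finite element idea: a bar under axial
# load, split into simple two-node "elements" whose behavior is known.
import numpy as np

E = 200e9          # Young's modulus, Pa (steel, assumed)
A = 1e-4           # cross-section, m^2 (assumed)
L = 1.0            # bar length, m (assumed)
n_elem = 4         # number of finite elements
le = L / n_elem    # element length

# Assemble the global stiffness matrix from identical element matrices.
k = E * A / le * np.array([[1.0, -1.0], [-1.0, 1.0]])
K = np.zeros((n_elem + 1, n_elem + 1))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k

# Fix the left end, pull the right end with 1 kN, solve K u = F.
F = np.zeros(n_elem + 1)
F[-1] = 1000.0
u = np.zeros(n_elem + 1)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

print(f"tip displacement: {u[-1]:.3e} m")   # analytical: FL/(EA) = 5e-05 m
```

Each element contributes a small, easily computed stiffness; assembling and solving the combined system is the “many simple pieces” idea behind every mesh.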

Creating the Mesh

Creating a mesh is easy in SOLIDWORKS Simulation. To get started, right click on the mesh icon in the Simulation Tree and click Create Mesh.

This will open the mesh interface. Think of meshing in SOLIDWORKS Simulation as having two levels: Level 1 we’ll call Quick Mesh and Level 2 is Advanced Mesh. This isn’t official SOLIDWORKS terminology, but rather the two ways to use the meshing interface.

Level 1: Quick Mesh – The Simplest Way to Mesh your Model

With a quick mesh, you don’t look at any options or even numbers. You only see a slider that controls the density, or size, of the elements. To the right is fine and to the left is coarse, referring to the element size. A fine mesh will have a greater number of smaller elements, while a coarse mesh will have fewer, larger elements.

You can change the average size of the elements in a part with a mesh slider. A powerful algorithm makes changing all the element sizes at once super easy. But when you create a mesh with the mesh slider, you are putting a lot of faith in its ability to give you a good mesh.

Level 2: Advanced Mesh – Have Control Over the Mesh

To take your mesh from quick to advanced, we expand the options. Various options give you more control of your mesh than the mesh slider.

The first choice is the mesh algorithm you want to use. This defines the scheme used to build the mesh from the CAD geometry. There are three choices: standard, curvature-based, and blended curvature-based. As you can see, each one of these algorithms offers different settings.

Standard Mesh

This is the original meshing scheme of SOLIDWORKS Simulation and a good starting point. It works well for the simplest geometry.

Curvature-based Mesh

This offers the ability to specify a maximum and minimum element size. While this is great for geometry with a lot of small features, it can add unnecessary elements if used on simple geometry. It is very good at capturing changes in geometry from curved features to prismatic shapes.

Blended Curvature-based Mesh

Introduced to Simulation back in 2016, this is an extension of the Curvature-based mesh in that it offers a more advanced option to capture very small geometric features. This algorithm has one subtle difference, however: the option to “calculate minimum element size.” With this option you can capture small geometric features automatically.

Optional Level 3: Mesh Controls

With SOLIDWORKS Simulation, this is the closest you’ll get to manually building the mesh. Mesh controls are a way to locally define an element size in a particular region. This enables you to focus the resources on a specific area rather than the entire model.

This is especially useful if one area of the model has a small feature such as a radius, like that seen in the example below.

When working with larger and more complicated models, mesh controls go from an optional step to a required one. In the example below of the industrial equipment, a simulation was done on the boom subassembly to ensure it could withstand the required forces during operation. With all the different components of various sizes, eight different mesh controls had to be used to get a mesh that captured all the geometry.

When working with just the one bracket part from the boom, no mesh controls were required because the geometry was simple—meaning it was consistent and uniform. However, the same could not be said when working with the entire boom assembly. The boom assembly contained many components of varying sizes—some small and others large—so mesh controls were required to get a good mesh. Without mesh controls, the mesh would at worst have completely failed to generate, or at best been a bad mesh.

What’s a Good Mesh?

The secret of a good mesh can be hidden in the details, but SOLIDWORKS Simulation makes it easy to find them. Simply right click on “Mesh” and click “details.” You will be presented with a list from which we can determine the quality of the mesh:

  • Maximum aspect ratio
  • Percentage of elements with aspect ratio < 3
  • Percentage of elements with aspect ratio > 10

What’s the Aspect Ratio?

The aspect ratio tells you the shape of the element. A value of 1 is optimum. The bigger the aspect ratio, the worse the shape of the element.

A good way to understand aspect ratio is to think of it as the element shape; however, the aspect ratio reported by Simulation is more than simply the shape. The simplest way to define the aspect ratio is by looking at the ratio of lines drawn normal from a face to the opposite vertex. As you can see, the higher the aspect ratio the more skewed the element.

How does this tell you if you have a good mesh? Since a perfect element has an aspect ratio of 1, you would ideally want all your elements to have an aspect ratio of 1—but that just isn’t reasonable. The key is to make sure that your elements have low aspect ratios, so you look at the summary percentages; specifically, the percentage of elements with aspect ratio greater than 10 or less than 3. By looking at these values, you can tell whether you have a good mesh or whether you need to improve it.
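The summary check described above is simple to express in code; here is a sketch that, given a list of per-element aspect ratios, reports the same three quality numbers (the sample aspect ratios are invented):

```python
# Sketch of the mesh-quality check described above: given per-element
# aspect ratios, report the same summary numbers Simulation shows.
def mesh_quality(aspect_ratios):
    n = len(aspect_ratios)
    return {
        "max_aspect_ratio": max(aspect_ratios),
        "pct_below_3": 100.0 * sum(ar < 3 for ar in aspect_ratios) / n,
        "pct_above_10": 100.0 * sum(ar > 10 for ar in aspect_ratios) / n,
    }

# Invented sample: mostly well-shaped elements, a few skewed ones.
ars = [1.2] * 90 + [4.0] * 8 + [12.0] * 2
q = mesh_quality(ars)
print(q)  # {'max_aspect_ratio': 12.0, 'pct_below_3': 90.0, 'pct_above_10': 2.0}
```

A healthy mesh has a high “below 3” percentage and a near-zero “above 10” percentage, exactly as the mesh details dialog reports.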

Now you know how to create a mesh, improve the mesh with mesh controls and even determine if you have a good mesh. These are the three things that make up the foundation of meshing in SOLIDWORKS Simulation.

Learn more about using SOLIDWORKS for simulation with Design Through Analysis: Simulation-Driven Design Speeds System Level Design and Transition to Manufacturing.

]]>
Stephen Petrock
Simulation Called in for the Coronavirus https://www.engineersrule.com/simulation-called-in-for-the-coronavirus/ Thu, 11 Jun 2020 13:59:39 +0000 https://www.engineersrule.com/?p=5163 You’ve likely seen technical and news articles around the COVID-19 pandemic with pictures and videos comparing simulations of sneezes and coughs, or investigating how masks, face-shields and other protective equipment help people to resist infection.

It’s amazing to see these and realize how mainstream this type of analysis has become, and how the graphics and presentation of these results feels familiar, even to the general public. We also marvel at how quickly these types of analysis have been conducted and presented, and how widely democratized they’ve become, with new results and simulations being presented every day.

Sneeze simulation developed as part of Dassault Systèmes' 3DEXPERIENCE Open COVID-19 community.

This is just one more sign that the use of analysis tools to predict product performance has gone from being a niche application only performed by expert users, to something that is routinely done as a standard part of the design process in all industries. The medical device industry is no different, but the design of medical devices does have several unique characteristics that set it apart in its use of engineering analysis tools.

Analysis Elements Unique to Medical Devices

Complex Material Behavior

Whether you’re trying to model the response of human tissue to your device, or designing a shape-memory alloy for a vascular stent, the types of materials used for medical device design are often complex, and designed to perform very specific, demanding tasks. The analysis techniques we use to analyze steel and aluminum structures just won’t cut it here.

Product Release is Glacially Slow

With the testing, clinical trials and documentation needed to release a medical product, it can take several years to get a product to market. Because of this, you can’t rush a minimum-viable product to market and then rapidly design a secondary iteration to optimize and improve it, like you can in other industries. Analysis helps in two ways: it allows you to fully test out a wide range of design ideas so the released product is as optimized as possible, and by spending up-front time in analysis, you can hopefully reduce the number of physical test and clinical trial cycles.

Failure is Not an Option

The medical device industry is notoriously risk-averse, for good reason. If your toaster breaks, you won’t be happy, but you can go buy a new one—but if your pacemaker quits unexpectedly, the consequences will be much more severe. The industry is well armed for this, with extensive computer analysis, physical testing and clinical trial protocols before any product release, but there are several applications that can’t easily be tested physically, where analysis is used as the sole validation tool.

More Power!

Devices, particularly those involving electronics, are getting smaller and more powerful. Where electronics are involved, that means more heat, and with very strict reliability and touch-temperature requirements to meet, that heat has to be removed safely from the device. Best-guess techniques for thermal management just don’t cut it anymore, and a robust CFD analysis is needed to optimize heat dissipation from the PCB to the environment.

Keep the Noise Down

Electromagnetic noise, that is. Medical devices not only need to be compatible with the radio waves generated by other equipment and our cellphones, but also need to limit the amount of electromagnetic interference they emit. This EMI/EMC testing is expensive and complicated in real life, so being able to predict a successful test is incredibly valuable.

Nonlinear Materials, Everywhere

If there’s one characteristic that rings true for almost any structural analysis of a medical device, it’s that the materials are nonlinear. Most materials in the human body are highly nonlinear in their structural behavior, and most devices that interact with the body are, too. From human tissue to nitinol stents and elastomeric valves, nonlinear materials are everywhere.

To assess these properly, you’re going to want an analysis software with strong nonlinear capability. This means lots of material models, built-in tools for mapping real-life behavior to those material models, and a robust solver that is able to iterate through challenging problems. One example of a software package that meets all these requirements is SIMULIA Abaqus, which combines one of the most powerful nonlinear solvers in the industry with automated contact modeling and an incredible range of material models.

Test, Test, Then Test Again

Because the clinical trial and product release process takes so long, medical device companies want to minimize the number of times a device goes through it. Because of this, the released product needs to be fully optimized – the opposite of other industries where an early product can be released to the market and then improved in later versions.

Because of this, you’re going to want to look for analysis solutions that let you fully explore the design space and provide tools for optimizing to an ideal solution.

This product exploration typically has three potential angles:

  • Parametric iterations – where certain parameters within the design can be varied, and a large batch of analysis cases run with those variations. Key analysis numerical and graphical outputs can be reviewed for each design iteration to investigate the sensitivity of the device to changing parameters.
  • Parametric optimization – similar to the above, but with the ability to home in on the ideal solution to meet a certain set of design goals by varying those parameters.
  • Topological optimization – where the physical shape of the object is changed to provide the ideal shape to meet the design goal. This technology can be found in tools such as Tosca (now also available within the SOLIDWORKS 3D Creator cloud CAD package), and designers can apply these techniques for structural strength or fluid flow. Topological optimization methods were originally developed by mimicking how the human body grows bone, so applying them to medical devices feels appropriate!

Images before and after topology analysis performed with SIMULIA Tosca.

A good design exploration process often involves all three elements, so medical device designers are increasingly looking for a suite of tools that offer all these options.
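A parametric-iteration study can be sketched as a plain parameter sweep. The `stent_fatigue_proxy` function below is a hypothetical surrogate standing in for a real solver run, and all values are invented:

```python
# Toy parametric-iteration sketch: sweep two design parameters, evaluate a
# stand-in objective for each case, and rank the results. A real study
# would launch a solver run per case instead of calling the proxy.
import itertools

def stent_fatigue_proxy(strut_width_mm, strut_thickness_mm):
    # Invented surrogate: thinner struts -> higher alternating strain.
    return 0.4 / (strut_width_mm * strut_thickness_mm)

widths = [0.08, 0.10, 0.12]
thicknesses = [0.06, 0.08]
cases = [
    (w, t, stent_fatigue_proxy(w, t))
    for w, t in itertools.product(widths, thicknesses)
]
best = min(cases, key=lambda c: c[2])
print(f"lowest strain proxy: width={best[0]}, thickness={best[1]}")
```

Parametric optimization replaces the exhaustive sweep with a search algorithm, and topological optimization changes the shape itself rather than a handful of dimensions.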

Safety First

The impact of a failed medical device can be catastrophic, so designers will do everything they can to minimize the risk of failure. This means that rigorous clinical testing will be employed over many years before a device is released; however, there are aspects of device performance—especially how a device performs over a long period of time—that just can’t be feasibly tested in a clinical environment.

Expansion analysis of a vascular stent, performed in SIMULIA Abaqus.

Fatigue-life testing for evaluating the functional life of a vascular stent is one area where the use of analysis is becoming more critical. A stent is typically made of a shape-memory metal alloy, which is compressed and inserted into a patient’s blood vessel, and then expands to its original shape, holding the blood vessel open and improving blood flow. It will remain in the patient forever and needs to continue to perform its function. Increasingly, regulatory bodies such as the FDA will accept finite-element stress results as a primary part of the validation package for a product.

If you review the guidance published by bodies like the FDA, they stress the importance of capturing as many real-life elements as possible, including the temperature-dependent behavior of the material and accounting for the residual stresses created during the manufacturing process. When selecting an analysis package, you should make sure that it can capture all of these real-life aspects.

Keeping Your Cool

Devices are getting smaller and smarter every day, and that leads to big challenges when it comes to keeping electronics cool. All electronic devices give off heat, and the more powerful a device is, the more heat there is to give off. Removing heat is particularly important in the medical device realm, as increased operating temperature normally results in a reduction in reliability, which is unacceptable for most medical applications.

Traditionally, a fan or other forced cooling device was often included to keep things cool, but space, noise or aesthetic concerns increasingly rule this out as an option.

Today, a lot of the cooling strategy starts at the PCB level, with thermal vias, in-plane heat spreaders or thermoelectric coolers being employed to get the heat out of the system as efficiently as possible. Newer technologies, such as heat pipes, are also being employed to aggressively move heat from one area in the system to a place where it can be safely dissipated.

CFD analysis of an electronic device, showing device temperatures and airflow paths.

These thermal problems are so intense that a best-guess approach to a solution is not good enough. Heat management strategies need to be carefully optimized, and a thermal analysis using a computational fluid dynamics (CFD) software package, such as SOLIDWORKS Flow Simulation, is a critical part of that process.
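Before committing to a full CFD study, a first-pass estimate often comes from a series thermal-resistance chain from junction to ambient. The resistance and power values below are illustrative assumptions:

```python
# Back-of-envelope thermal resistance chain from chip junction to ambient,
# the kind of estimate a CFD study later refines. Values are assumptions.
def junction_temp(power_w, t_ambient_c, resistances_c_per_w):
    """Series thermal resistances: Tj = Ta + P * sum(R)."""
    return t_ambient_c + power_w * sum(resistances_c_per_w)

r_chain = [
    2.0,   # junction -> case (C/W)
    1.5,   # case -> board via thermal vias (C/W)
    8.0,   # board/spreader -> ambient by natural convection (C/W)
]
tj = junction_temp(power_w=3.0, t_ambient_c=35.0, resistances_c_per_w=r_chain)
print(f"estimated junction temperature: {tj:.1f} C")  # 69.5 C
```

If even the optimistic hand estimate exceeds the component rating, the cooling strategy needs rework before any detailed CFD is worthwhile.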

When selecting a software package, you’ll want to make sure that it can model all the heat management strategies you’ll need to employ – both now and in the future. The best of them have dedicated sub-models for things like heat-pipes and TECs, as well as the ability to approximate in-board cooling elements.

Electromagnetic Interference and Compatibility

Electromagnetic testing is one of the most important validation steps in the medical device design process and is often one of the most feared. Simple changes to a system can have major impacts on the electromagnetic interference it generates, as well as its vulnerability to outside interference—and wireless transmission is critical to the function of many devices today. On top of that, the testing is expensive and logistically challenging. This combination of complexity and cost makes electromagnetic performance a great candidate for investigation through simulation.

Being able to test a device for EMI/EMC virtually means that design improvements can be rapidly cycled, giving confidence that a product will pass when subjected to validation testing. When selecting an EMI/EMC analysis package, it’s important to be able to consider all aspects of electromagnetics, so look for the different types of solvers and physics available. For example, CST Studio Suite offers finite element, finite integration and transmission line matrix solver techniques, covers low-frequency and high-frequency, and can optionally include thermal and particle calculations too.

Much has changed in the world of medical devices over the past 20 years, and the adoption of analysis technology has a big role to play in the advancement of that change. These tools allow products to be developed more quickly, safely and cost-effectively than ever before, and over the coming years we’ll see FEA, CFD and electromagnetic analysis tools becoming increasingly mainstream.

To learn more about SOLIDWORKS simulation for product development in health care, check out the whitepaper Simulating for Better Health.

]]>
Glenn Whyte
The Simulation Essentials for SOLIDWORKS Professionals https://www.engineersrule.com/the-simulation-essentials-for-solidworks-professionals/ Tue, 02 Jun 2020 05:06:09 +0000 https://www.engineersrule.com/?p=5142 What do you really need to know to use SOLIDWORKS Simulation like a pro? The answer may surprise you. Even those fresh out of college with a mechanical engineering degree in hand will be pleasantly surprised to see how little they have to learn to use SOLIDWORKS Simulation—and we’ll review it all here. Think of this as your crash course in engineering for using SOLIDWORKS Simulation.

Is This Crash Course in Simulation For You?

Do you use SOLIDWORKS? Then, yes, this crash course is most likely for you. Over recent years, Simulation has transitioned from being the final step in the design process to becoming the tool that guides you through the entire design process. It’s not just a validation tool; rather, simulation is a design guide—your “GPS navigation” for CAD, providing step-by-step insight into your design.

Material Science Essentials: Knowing the Material Input and Understanding the Results of your Simulation.

Material Science is the one topic that gives you a good foundation for setting up your simulation and understanding the results. We will introduce the key topics needed so you’ll be ready to run SOLIDWORKS Simulation like a pro.

Please note that for the purposes of this article, when we say Simulation, we are referring specifically to linear static analysis in SOLIDWORKS Simulation. This means that the materials are linear, elastic and isotropic (but those are topics for another article). This article serves as an introduction to the engineering background needed to understand Simulation.

Material Science

Materials define exactly how the CAD model behaves in Simulation. Setting this up in Simulation is incredibly easy. You simply make your selection from the database to apply the material, just like you would at the CAD level. This is shown in the image below. It’s even easier if you apply your materials at the CAD level, because they’ll automatically be populated in Simulation.

Although the database is the same, it will look slightly different. Notice that from within Simulation you’ll see properties that are black, red and blue. These colors indicate what’s required for the study (red) and what might be required (blue) depending on the setup. As you can see in the image below, the line items are the same, but there is that added visual color cue indicating the required values for Simulation.

In my opinion, the three most important numbers for Simulation from the material library are:

  1. Modulus of elasticity (Young’s modulus)
  2. Yield strength
  3. Poisson’s ratio

These are the three properties that you really need to run a Simulation. Here’s what they mean.

Modulus of Elasticity

The modulus of elasticity, also known as Young’s modulus, or E, defines a material’s inherent stiffness—its resistance to deformation under load. What you might loosely call a “stronger” material has a higher modulus of elasticity. For example, steel has a higher modulus of elasticity than aluminum, which has a higher modulus of elasticity than rubber.

This number, or property, is important because it defines exactly how much stress a material undergoes after it is loaded in Simulation, just like in the real world. Stress is what most designers look to as an indicator of failure in their model. Too much stress is bad. How much stress is okay? We answer this question later. (Spoiler Alert! It has to do with the yield strength.)

When you consider this at the atomic level, this is the strength of the bonds between atoms. In the context of the modulus of elasticity, a material is stronger because the atoms hold on to each other better. It takes more force to separate the atoms from one another. This separation, or space between atoms, is what we see as deformation or the change in shape. In other words, when you pull on something, it stretches because the atoms are being separated.

You can think of the modulus of elasticity as the “spring” between the atoms, as illustrated in the image below. The larger the modulus of elasticity, the stiffer the “spring,” which means a greater force is needed to stretch the spring or deform the material.

Stress & Strain

Before we can continue with the modulus of elasticity, we need to introduce the concepts of stress and strain. Every material has a map that defines its behavior when loaded. This map is called the stress-strain curve. It tells us exactly how much stress a material will see when it is “strained,” or loaded.

In linear static Simulation, the stress-strain curve is a straight line, defined entirely by the modulus of elasticity. In other words, a material’s behavior in Simulation can be defined in large part by the modulus of elasticity. See the image showing a stress-strain curve for a material. This outlines the material’s reaction to loading through the relationship of stress and strain via the modulus of elasticity.

  • Stress (σ) is what you, as a designer, want to know to determine part failure. It describes the intensity of the load on an object. It is literally the force over an area.
  • Strain (ε) is what you can measure. It’s defined as the ratio of the change in shape of an object.
  • Modulus of elasticity (E) is used to determine stress from strain. It’s the slope of the stress-strain curve for a material (in a linear static analysis, as in this article). The higher the slope, the stiffer the material. This is shown in the graph below.
  • Hooke’s Law is the equation relating it all together.
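For this linear case, Hooke’s law is simply σ = E·ε. As a quick worked example with illustrative numbers for steel:

```python
# Hooke's law in one line: stress = E * strain. Illustrative numbers only.
E_steel = 200e9          # modulus of elasticity, Pa (typical steel)

strain = 0.001           # measured change in length / original length
stress = E_steel * strain
print(f"stress: {stress / 1e6:.0f} MPa")  # 200 MPa
```

This is the calculation Simulation performs, element by element, to turn the measured change in shape (strain) into the quantity designers care about (stress).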

Yield Strength

The yield strength is used by designers as a measure of pass or fail for the design. Technically, the yield strength is the material property that marks the transition from elastic deformation to plastic deformation. The difference between elastic and plastic is simply temporary versus permanent. Elastic deformation means the part will return to its original shape. Plastic deformation means its shape has been deformed too much and it will not go back to its original shape.

For a designer (in most circumstances), elastic deformation is okay, but plastic deformation is not—it’s considered a failure. Since the yield strength marks this transition, we use it to quantify pass or failure in terms of a value called the factor of safety.

The factor of safety is the ratio of the yield strength to the maximum stress. If this number is larger than 1, the design passes; if it is less than 1, it fails.
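As a quick sketch of that check, with illustrative numbers:

```python
# Factor of safety as described above: yield strength / maximum stress.
def factor_of_safety(yield_strength_mpa, max_stress_mpa):
    return yield_strength_mpa / max_stress_mpa

fos = factor_of_safety(yield_strength_mpa=250.0, max_stress_mpa=180.0)
print(f"FoS = {fos:.2f} -> {'pass' if fos > 1 else 'fail'}")  # FoS = 1.39 -> pass
```

In practice, designs target a factor of safety comfortably above 1 to cover load and material uncertainty.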

A factor of safety plot can be easily shown in Simulation. The easiest way to use this plot is to have it indicate areas below a factor of safety of 1. This makes it obvious where there could be a failure. If you see any red, that’s where you need to focus your attention.

In Simulation, a stress result can be higher than the yield strength. However, it is important to note that such a result is not accurate, because beyond the yield strength the stress-strain curve is no longer linear—meaning you need more than just the modulus of elasticity to get an accurate result. This is where a more advanced nonlinear simulation needs to be used. This is illustrated in the image below.

Poisson’s Ratio

Poisson’s ratio describes how a material changes shape. As the material is stretched in one direction, it needs to contract in another; this is known as Poisson’s effect. When a material deforms, you aren’t adding or reducing the mass, just changing its shape. Poisson’s ratio describes this change of shape. This is illustrated in the image below. As you apply a force to a material it will expand in the direction of the force, and contract perpendicular to the force.

The checkerboard started out as perfect squares. When it deformed from the force, it changed its shape to a rectangle. The edges in the direction of the force (longitudinal) are now longer, while the edges perpendicular to the force (transverse) are now smaller. The ratio of the change of shape, or strain, in the transverse direction to the longitudinal direction is the Poisson’s ratio.
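That ratio is easy to compute from the two strains; the numbers below are illustrative:

```python
# Poisson's ratio from the checkerboard example: transverse strain over
# longitudinal strain (with the sign dropped). Numbers are illustrative.
longitudinal_strain = 0.0010   # stretch along the force
transverse_strain = -0.0003    # contraction perpendicular to it

nu = -transverse_strain / longitudinal_strain
print(f"Poisson's ratio: {nu:.2f}")  # 0.30
```

A value around 0.3 is typical for metals; rubber approaches the incompressible limit of 0.5.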

Putting it All Together in Simulation

Now that you understand what these important terms mean, let’s take a simplified look at the steps for how they are used by Simulation to get results.

  • Step 1: Run Simulation and see how the applied forces change the material shape based on its stiffness from the modulus of elasticity.
  • Step 2: Determine the change in shape to then calculate strain.
  • Step 3: Use Hooke’s Law and the calculated strain to calculate stress.
  • Step 4: Check the factor of safety to see if the part will fail or not.

With a combination of these essential material properties, you can paint the picture of your model’s performance. Is your design strong enough? Will it break? These are all questions you can now answer with Simulation.

This article was just an introduction, meant to build a foundation of understanding for using Simulation. There are even more advanced foundational topics worth exploring, such as meshing. And when it comes to materials in Simulation, there are many more advanced topics covering materials such as composites, hyperelastics and viscoelastics. There are even other material failure modes beyond yield, such as resonance and fatigue. But at the end of the day, you don’t need to know all this to be successful with using Simulation to guide you through the design process.

People spend their entire careers and devote their life’s work to studying these topics. But that is not necessary to use and benefit from Simulation. Behind all the complexity of the numerical methods and engineering concepts in FEA lies a level of elegance and intuitiveness that make SOLIDWORKS Simulation a powerful tool in the hands of SOLIDWORKS designers.

Learn more with the whitepaper Design Through Analysis: Simulation-Driven Design Speeds System Level Design and Transition to Manufacturing.

]]>
Stephen Petrock
Tutorial: Combining Loading Conditions in Fatigue Studies https://www.engineersrule.com/tutorial-combining-loading-conditions-in-fatigue-studies/ Wed, 20 May 2020 00:58:04 +0000 https://www.engineersrule.com/?p=5116 With SOLIDWORKS Simulation, the engineer has many ways to analyze a design against loads. For example, a static simulation will determine the stresses and a factor of safety.

The design may pass a static simulation, but another mode of failure should be considered: fatigue. Fatigue failure occurs when a part has been subjected to loading and unloading for many cycles, to the point where microscopic defects in the material grow into cracks, which eventually cause the part to fail.

It is extremely important to consider this mode of failure if the part will be in use for a long time. SOLIDWORKS Simulation has a fatigue study functionality built in, so it can take loading conditions and determine the consumption of usable life after a set number of cycles.

SOLIDWORKS Simulation fatigue studies work on the principle of high cycle fatigue, in which the stresses experienced by the part result in negligible plastic deformation in any given cycle. Low cycle fatigue, as the name implies, is when significant plastic deformation causes the part to fail in a low number of cycles. Low cycle fatigue is not currently supported by fatigue studies, and the engineer should instead use an analytical strain-based life approach.

We need to know what the load is in order to run the study. Therefore, the first task is to define and run one or more static studies.

If you would like to follow along, you can download the file SimplePulley.SLDPRT (SOLIDWORKS 2020+).

For this exercise, we will investigate how multiple loading conditions can be taken into consideration for this fatigue study. There will be two: the stresses caused by an axially directed load, and the stresses caused by a radially directed load.

First Study: Static - Radial Load

Step 1

After making sure the simulation add-in is active, start a new static study named “Static 1.”

Step 2

For fixtures, add the inner cylindrical face as Fixed Geometry. (This is assumed to be keyed or held mechanically fixed with some other method, but we will neglect the stresses caused by this in this study.)

Step 3

For Loads, add a Force to the recessed groove, selecting “Plane1” as the Selected Direction. Enable the “Normal to Plane” option and enter 10,000N (10kN). The direction does not matter.

Step 4

For the mesh, set up a standard mesh with the slider all the way to fine. Mesh the part.

Step 5

Run the study. You should get a result similar to the one below.

Second Study: Static - Axial Load

In addition to the radial load, the machine part experiences a smaller—but significant—amount of axial load. This will be accounted for in our fatigue study.

Step 6

Similar to the previous part, create a new static study named “Static 2.”

Step 7

The fixture will be identical to the previous study: Fixed Geometry on the internal cylindrical face.

Step 8

The load will differ. Add a Force load to the annular face on the front with a magnitude of 2,000N (2kN) normal to the face.

Step 9

Mesh the part with the exact same settings as the previous study (standard with the slider all the way to fine). This is not coincidental. In order for the fatigue study to run, the meshes need to be the same in all studies that it is considering.

Step 10

Run the study and look at the results. They should be similar to the image below.

We are now at the halfway point. Time for the fatigue study.

Final Study: Fatigue Study

With all the requisite data, we can now move on to the fatigue study.

Step 11

Start a new study and specify a fatigue study. There are several kinds of fatigue study, but in order to use the results from our static studies we will use “Constant Amplitude Events with Defined Cycles.” Other fatigue studies available are variable amplitude studies and harmonic or random vibration.

The first thing we need to do is add an event. An event is analogous to a load in a static study, as it is the main input for a fatigue study. Notice that there are no nodes for “Loads,” “Fixtures,” or “Mesh.” All of that information is defined in their respective studies. (This is also why the mesh must be the same across all studies to consider.)

Step 12

Right Click on the Loading node and click “Add Event…” The property manager will change.

The first box is the number of cycles to test; this could be anywhere from hundreds to millions. For this example, we will subject the part to 100k cycles of the loading from the first study.

The next parameter is the nature of loading. This could either be Fully Reversed, meaning that the load will alternate stress directions, or Zero Based, which is just loaded and unloaded. It is assumed that this machine part will be changing directions and loads, so we will choose Fully Reversed.
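The difference between the two loading types comes down to the stress ratio R = minimum stress / maximum stress: R = -1 for Fully Reversed and R = 0 for Zero Based. A small sketch with a hypothetical peak stress makes the distinction concrete:

```python
# Mean and alternating stress for the two loading types,
# for a hypothetical peak stress of 80 MPa.

def mean_and_alternating(s_max: float, R: float):
    """R is the stress ratio s_min/s_max: -1 fully reversed, 0 zero based."""
    s_min = R * s_max
    return (s_max + s_min) / 2, (s_max - s_min) / 2

print(mean_and_alternating(80.0, R=-1))  # fully reversed: mean 0, alternating 80
print(mean_and_alternating(80.0, R=0))   # zero based: mean 40, alternating 40
```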

The third section is how to tell which study we are considering. For this, make sure the drop down is set to “Static 1.”

All these parameters are described in the following image.

Step 13

Add the second event by repeating step 12, except use 50,000 for the number of cycles, Fully Reversed for the loading type, and select “Static 2” from the drop down.

Now we must verify or enter the S-N material data. It is important that the material has an S-N curve, as this is what SOLIDWORKS checks against to determine the amount of damage sustained by the part. This information can be found in many engineering resources for common materials (such as bronze, aluminum, etc.). If the material is specially alloyed by a company, then that company should have the S-N data for the material they make.
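Behind the scenes, a constant-amplitude fatigue study accumulates damage from each event using the linear damage (Miner’s rule) hypothesis: D = sum of n_i / N_i, where n_i is the number of applied cycles of an event and N_i is the cycles-to-failure read from the S-N curve at that event’s stress level. A sketch, with hypothetical N_i values standing in for the S-N curve lookups:

```python
# Miner's rule cumulative damage. The cycles-to-failure values are
# hypothetical, standing in for lookups on the material's S-N curve.

def miners_damage(events):
    """events: list of (applied_cycles, cycles_to_failure) pairs."""
    return sum(n / N for n, N in events)

events = [
    (100_000, 400_000),  # e.g. a radial-load event
    (50_000, 500_000),   # e.g. an axial-load event
]
D = miners_damage(events)
print(f"{D:.0%} of usable life consumed")  # 35% of usable life consumed
```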

SOLIDWORKS can also derive the S-N curve from the elastic modulus, based on the ASME carbon steel or ASME austenitic steel curves. This should only be done for steels of those types.

As the results of the study rely directly on the S-N curve data, extra effort should be made to ensure that the data is as accurate and trustworthy as possible.

Step 14

Right Click on the node where it shows the name of the part, in this case “SimplePulley,” and click on “Apply/Edit Fatigue Data.”

The material window will pop up with an opportunity to enter or derive an S-N curve; for this example, we will derive the data from the ASME Carbon Steel Curves. Make sure to hit “Apply.”

Step 15

We now have all the information needed to run the study! But first, we will point out a few of the settings you can change in a fatigue study. If you right-click “Fatigue1” (the top node) and click on “Properties…”, you can access more settings controlling how the fatigue study will solve.

In this example, we do not need to change any of these settings, but we wanted to point out the event interaction options.

If we are sure that the loadings we have specified will never happen at the same time, we can select “No interaction.” If there is a chance that the loading conditions can happen at the same time, it is recommended to select “Random interaction.” This is the case for our machine part, so we will leave the setting as is.

Step 16

Right Click and run the study.

These results can be a bit tricky to read, so we will change the chart to be more readable.

Step 17

Right click on the “Results1” plot and hit “Chart Options.” We like to have “Show Max Annotation” enabled, to show the location that is receiving the most damage. We also like to override the automatically defined maximum and minimum values with 100 and 0, respectively. That way, anything that appears red is 100% damaged (at end of life) and blue means 0%.

Lastly, we like to change the numbers from scientific to general, to make it easier to read.

From here, we can analyze the data. By default, it will display the damage percentage (usable life consumed) throughout different regions of the part. We can observe that the part is approximately 40% through its usable life (as denoted by the Max annotation). The location of this damage is where we would expect to find it, at the small radius of the spokes.

Apart from the other fillets, which are also around 40%, the rest of the part is around 15% through its usable life, denoted by the washed-out blue color (values that could be determined more precisely with the probe tool).

With that, we can determine what 100k cycles of primary loading amounts to in time, and decide whether that fits the part’s expected service life.

Learn more about SOLIDWORKS with the whitepaper Understanding Nonlinear Analysis.

]]>
Rob Maldonado
Finite Element Analysis of Pneumatic Tire Loading on Wheel https://www.engineersrule.com/finite-element-analysis-of-pneumatic-tire-loading-on-wheel/ Thu, 30 Apr 2020 15:48:24 +0000 https://www.engineersrule.com/?p=5080 When carrying out a stress analysis, it’s important that the boundary conditions are accurately modeled. For wheels that are fitted with pneumatic tires, it isn’t obvious what these loading conditions are.

Forces act on the rim of the wheel due to both the air pressure in the tire and the reaction force of the ground on the tire. The way that these forces are transferred through the tire into the rim has a significant impact on the stress in the wheel. Although it is possible to directly model the tire, this is generally unnecessary and will significantly increase the complexity of the model. There are, however, established analytical and empirical ways of simplifying the tire into a few boundary conditions that can be applied directly to the rim. The theory behind these is explained by Stearns et al.

The Complexity of Modelling Tire-Rim Interaction

First, let’s look at why we don’t want to model the actual tire using finite elements. There are a few reasons for this.

Firstly, as the toroidal shape of the tire makes contact with the planar ground surface, it must deform significantly to form a flat contact patch. These large deformations require non-linear modeling. Secondly, a tire is not made up of a homogeneous isotropic solid material. Rather, a tire is a composite structure with a rubber matrix surrounding anisotropic textile casings and bead wires. Modeling all of this would require considerable pre-processing and solution time. Although tire interactions are modeled academically, it doesn’t make sense to do this type of work when designing a wheel.

Identifying the Forces Acting on the Rim

Before identifying the forces acting on the rim, the terminology used to refer to parts of the rim should be explained. The key parts of a rim and a tire are labeled below.

The tread is the part of the tire which contacts with the ground, the bead is a wire running around the edges of the tire which contact with the rim, and the sidewall is the vertical section of the tire connecting the bead to the tread.

The bead seats are the sections of the rim where the tire beads rest and through which vertical forces are transferred, and the rim flanges extend vertically to resist horizontal movement of the beads.

Forces act on the rim due to two primary sources: the air pressure within the tire, and the ground reaction forces. The air within the tire exerts a uniform pressure on all internal faces of the tire and rim; this is the inflation pressure, P. Where this pressure acts on the inside of the tread, it is contained by the tire casing and bead, causing internal hoop stresses in the tire but no reaction forces on the rim. However, where the inflation pressure acts on the side wall of the tire, it causes the beads to splay outwards. These sideways forces, Fs, are contained by the rim flange.

Calculating the force of the sidewall on the rim flange involves integrating the pressure over the area of the tire. The integration is quite simple because we’re only interested in the area of the sidewall projected onto the vertical plane. This is an annulus with area π(r2² − r1²), where r2 is the radius of the inner face of the tire tread and r1 is the bead seat radius. The area is multiplied by the pressure P to give the total force acting on each side wall of the tire. The force acting on the rim flange is half of this, because the bottom of the side wall is constrained by the rim while the top of the side wall is constrained by the tread of the tire itself. Therefore, these forces are given by the equation:

Fs = Pπ(r2² − r1²)/2

The other forces acting on the rim result from the ground reaction forces. These may be classified as vertical forces supporting the weight of the vehicle, torque resulting from acceleration and braking forces, and axial forces caused by cornering. These forces are transferred through the bead seat and rim flange, but they are not constant over the circumference of the rim. Instead, they act over a section of the rim, related to the tire’s contact patch and stiffness and given by the angle of loading, θ. The forces are distributed according to a cosine function over this region.

The loading angle depends on the combination of tire and rim, the tire pressure and the ground reaction force. In practice, it is not possible to determine this angle analytically, and an empirical method must be used. One approach is to run the simulation with several different loading angles and observe how this affects the stress in the wheel. It may then be possible to use a worst-case value. Alternatively, the results can be compared with experimental measurements to determine the actual loading angle.
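To get a feel for how a cosine distribution over the loading angle relates the peak load to the total vertical reaction, the normalization can be done numerically. This is an illustrative sketch only; the force, angle and the exact distribution SOLIDWORKS applies for a bearing load are assumptions here, not values from the article:

```python
# Normalize a cosine-distributed load over the loading angle so that its
# vertical resultant equals a required ground reaction.
# All numeric values are hypothetical.
import math

def peak_load(F_vertical: float, theta0_deg: float, steps: int = 10_000) -> float:
    """Line load q(phi) = q0 * cos(pi*phi / (2*theta0)) for |phi| <= theta0.
    Returns q0 such that the integral of q(phi)*cos(phi) dphi = F_vertical."""
    theta0 = math.radians(theta0_deg)
    dphi = 2 * theta0 / steps
    integral = 0.0  # midpoint-rule integration of the vertical component
    for i in range(steps):
        phi = -theta0 + (i + 0.5) * dphi
        integral += math.cos(math.pi * phi / (2 * theta0)) * math.cos(phi) * dphi
    return F_vertical / integral

q0 = peak_load(F_vertical=5000.0, theta0_deg=40.0)
print(round(q0, 1))  # peak line load per radian of arc, ~5892.5 for these inputs
```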

Due to the rotation of the wheel and the periodic nature of the spokes, the stress in the wheel will cycle between two extreme states. In one state, the vertical ground reaction will be directly centered over the spoke and in the other it will fall halfway between the spokes. For every revolution of the wheel, each spoke will experience one cycle which should be taken into account for fatigue calculations.

Additional ground reaction forces also occur when cornering, braking or accelerating. Cornering results in an axial force which is transferred through the flange. This can also be expected to be sinusoidally distributed and act over a similar loading angle to the vertical reaction force.

In summary, there are five forces acting on the rim:

  • Inflation pressure, P, acting uniformly on the internal faces of the rim not in contact with the tire.
  • Side wall pressure reaction, Fs, acting on both rim flanges.
  • Vertical ground reaction, Fv, distributed sinusoidally over both bead seats.
  • Axial cornering reaction, FA, distributed sinusoidally over one of the rim flanges depending on cornering direction.
  • Tangential braking or cornering reaction, FT, acts over the same region of the bead seats as the vertical ground reaction and is also sinusoidally distributed, with the tangential load transfer related to the normal force.

Simulating the Tire Loads in SOLIDWORKS

Before attempting to apply the tire forces to the rim, a few changes should be made to the solid model to simplify the analysis. Firstly, split lines must be added to the bead seats, so that the vertical ground reaction can be applied over the load angle. It also makes sense to cut the wheel in half so that symmetry can be used to simplify the model. Further defeaturing and the creation of surfaces for shell elements may also be desirable.

Here you can see the simple sketch containing a single line used with the Split Line command to split the bead seats, followed by the split lines in the two bead seats.

A static stress analysis is given as an example here. The following fixtures were used:

  • Symmetry fixture on the three cut surfaces. This simply constrains all nodes on the surface so that they are able to move tangential to the surface, but no normal motion is allowed.
  • Roller/Slider fixture on the inner face in contact with the hub.
  • Foundation Bolts through the bolt holes to ground.

Next, the air pressure was applied to the rim: a uniform 50 psi pressure on the internal faces of the rim which were not in contact with the tire.

Next, the side wall reaction force was calculated using the equation for Fs described above. The radius of the inner face of the tire tread is 268 mm and the bead seat radius is 163 mm, resulting in an area of 142,173 mm². The inflation pressure of 50 psi is equal to 0.345 N/mm²; multiplying by the area and halving gives a reaction force of 24,525 N, a surprisingly large force. Because of the symmetry in the model, this force is halved again and then applied separately to each sidewall, setting the direction normal to a reference plane.
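The arithmetic in this step can be checked directly; the dimensions and the rounded pressure value are taken from the text above:

```python
# Check of the sidewall reaction force, Fs = P * pi * (r2^2 - r1^2) / 2.
import math

P = 0.345   # inflation pressure, N/mm^2 (50 psi, rounded as in the text)
r2 = 268.0  # radius of the inner face of the tire tread, mm
r1 = 163.0  # bead seat radius, mm

area = math.pi * (r2**2 - r1**2)  # projected sidewall area, mm^2
Fs = P * area / 2                 # half of the sidewall force reaches the flange
print(round(area), round(Fs))     # 142173 24525
```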

Finally, the ground reaction force is added. Because this is sinusoidally distributed, the easiest way to apply it is using a Bearing Load. Before creating the bearing load, a coordinate system must be created with its z-axis on the axis of rotation for the wheel and the x axis through the center of the distribution.

The model can now be meshed and solved. A coarse mesh is shown, with local refinement after adaptive meshing. Although the polynomial solid elements in SOLIDWORKS Simulation cope reasonably well with thin walled sections, there is still an argument for meshing regions of this model with shell elements to efficiently obtain an accurate solution.

The simulation shows that the maximum von Mises stress is 146 MPa, occurring on the inner radius of the rim flange. This stress is almost entirely caused by the sideways force on the rim flange as the sidewalls attempt to spread outwards as a result of the inflation pressure. In fact, suppressing the ground reaction force produces no visible change in the stress distribution and only reduces the maximum stress by 3%. This shows the critical importance of properly considering boundary conditions when setting up a simulation.

Learn more about SOLIDWORKS with the whitepaper Understanding Nonlinear Analysis.

]]>
Jody Muelaner
Structural Analysis: How Do You Know If Your Part Will Fail? https://www.engineersrule.com/structural-analysis-how-do-you-know-if-your-part-will-fail/ Thu, 02 Apr 2020 13:48:08 +0000 https://www.engineersrule.com/?p=4935

SOLIDWORKS Simulation provides a wide range of tools to simulate stress in mechanical parts. As with any simulation, the results are only as good as the assumptions we make when setting up the model and analyzing the results. This article focuses on the way we interpret the calculated stress to determine whether a part will fail.

Before we get into determining whether your part will fail, it’s important to remember that this is only one aspect of a good stress analysis. It’s also vital that the boundary conditions and mesh realistically simulate the loading of your part.

The first question to ask is whether the boundary conditions accurately represent the way the part will be loaded. The mesh must also be of sufficient quality to provide numerically accurate calculations. Aspect ratio is one important measure of mesh quality: the triangular faces of elements should be as close to equilateral as possible. Very elongated elements with aspect ratios over 3 will reduce the accuracy of the simulation. Similarly, distorted elements, as measured by the Jacobian, may cause the simulation to fail.

Selecting Mesh Details from the context menu of the element in the simulation tree brings up useful information to evaluate mesh quality. Geometry should be simplified, and mesh controls should be added to achieve a reasonable mesh quality.

Another important consideration for the mesh is whether it has sufficient refinement to capture the actual maximum stress at stress concentrations.

There can be something of a trade-off here between removing features to improve mesh quality and maintaining the features that will actually affect the result. This is somewhere that the skill and experience of a stress analyst can be very valuable.

Simulation using finite elements usually isn’t actually the best way to determine the peak stress within stress concentrations. It’s much better to use the FEA to determine the stress field surrounding the stress concentration, and then determine the actual peak stress using an analytical method. Formulas for a wide range of features and loading conditions can be found in reference books such as Roark's Formulas for Stress and Strain, or Peterson's Stress Concentration Factors.

This is a very brief overview of what’s required to perform a good stress analysis. So, assuming you have accurately determined the stress in the part, how do you know whether it will fail?

Failure Criteria

Material failure may occur under static stress, fatigue or buckling. Under static stress conditions, materials generally fail in one of two ways, either by brittle failure (fracture) or by ductile failure (yield).

Mild steel is a typical example of a ductile material and ceramic is an example of a brittle material, although almost any material can behave in a brittle way under conditions such as very low temperature or highly cyclic loading. Similarly, most materials can be ductile at very high temperature. Over the full range of typical conditions, most materials can be considered either brittle or ductile. However, some materials, such as aluminum alloys, are a little less clear: a single large force is likely to result in yielding, while a cyclic fatigue load will result in fracture.

The way that failure is defined may also vary according to the way a part is used. For example, if a shackle on a safety harness yields while arresting a fall, this probably doesn’t constitute a failure. In fact, it may be desirable for the shackle to yield, since this will dissipate some energy, protecting both inline equipment and the falling person from higher peak forces. [Hopefully, the shackle carries a warning that it should not be used after a fall, now that it has a shape different than its design shape. --Ed.]

In this case, a simple failure criterion could be when the average stress over its cross section exceeds the material’s ultimate tensile stress. However, if the same shackle was used as lifting gear, requiring repeated use, then yielding would be considered as a failure of the part. In this case, the same part, undergoing the same loading, can have different failure criteria – defined according to the usage requirements.

Some potential physical mechanisms for failure include yielding, fracture and buckling. A different analysis is required to check for each failure mode.

For a part loaded in pure tension, the yield criterion is simply the yield stress for the material. However, most parts have more complex loadings, resulting in a three-dimensional combination of tension, compression and shear.

The simplest way to deal with this is to resolve the stresses into their principal directions, using Mohr’s circle. If these individual values are less than the material’s yield stress, this theory would assert that it shouldn’t fail. The principal stress approach is a simplification and other failure theories take a more sophisticated approach to determine when yield will occur. They consider the micro-mechanics of materials, involving atoms slipping within the crystal lattice and grain boundaries moving over each other.
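For a plane-stress state, the Mohr’s circle construction reduces to a center-and-radius calculation. The stress values below are hypothetical, chosen to give round numbers:

```python
# Principal stresses from a 2D (plane-stress) state via Mohr's circle.
import math

def principal_stresses(sx: float, sy: float, txy: float):
    """Returns (P1, P2) for normal stresses sx, sy and shear stress txy."""
    center = (sx + sy) / 2                   # center of Mohr's circle
    radius = math.hypot((sx - sy) / 2, txy)  # radius of Mohr's circle
    return center + radius, center - radius

# Hypothetical state (MPa): sigma_x = 80, sigma_y = 20, tau_xy = 40
p1, p2 = principal_stresses(80.0, 20.0, 40.0)
print(p1, p2)  # 100.0 0.0
```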

Different failure criteria should, therefore, be used for ductile and brittle materials. The most common criteria also assume that a material is isotropic, meaning it has the same strength and stiffness in all directions. Metals can generally be considered isotropic, while wood and composites cannot.

The most common failure criteria used for static stress are von Mises, Tresca and maximum normal stress. They all involve first calculating principal stresses and then combining them into a single stress value that represents the stress at a point in the 3D solid. If this combined stress is less than the tensile yield stress for the material, it should not yield.

Von Mises and Tresca are used for ductile materials, while maximum normal stress is used for brittle materials. Von Mises is the most common, used for most static stress analysis. Tresca is very similar, but can give slightly higher stress values under certain circumstances; it is therefore more conservative, resulting in improved safety.

There is no stress plot listed as Tresca in SOLIDWORKS Simulation, but the Stress Intensity (P1-P3) plot gives the same values as the Tresca criterion.
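Given the principal stresses, the three combined measures are straightforward to compute. The principal values below are hypothetical; note that Tresca (stress intensity) comes out higher than von Mises, illustrating why it is the more conservative criterion:

```python
# Von Mises, Tresca (stress intensity) and maximum principal stress,
# computed from principal stresses (hypothetical values, MPa).
import math

def von_mises(p1: float, p2: float, p3: float) -> float:
    return math.sqrt(((p1 - p2)**2 + (p2 - p3)**2 + (p3 - p1)**2) / 2)

def tresca(p1: float, p2: float, p3: float) -> float:
    """Stress intensity: largest principal minus smallest (P1 - P3)."""
    return max(p1, p2, p3) - min(p1, p2, p3)

p = (100.0, 40.0, -60.0)
print(von_mises(*p))  # 140.0
print(tresca(*p))     # 160.0 -- higher, hence more conservative
print(max(p))         # 100.0, the maximum principal stress (P1)
```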

For brittle materials, it is best to use the maximum principal stress (P1). However, brittle materials may require more detailed consideration of their fracture mechanics, which describes the way that cracks propagate and result in sudden and catastrophic failures.

Many theoretical failure criteria have been devised for brittle failure. The below plots show that for von Mises, Tresca and maximum principal stress, the results are very similar but slightly different maximum values are calculated.

In this article, I’ve given a quick overview of some of the most important failure criteria for failure under static stress.

It’s important to also check whether your part might fail under buckling or fatigue. In an ideal world, we would have clear rules for which failure criteria to apply for specific materials, loading conditions and functional requirements. Unfortunately, material science isn’t quite there yet. Instead, a number of different theories are in use and each has strengths and weaknesses.

Although a skilled analyst can consider the relevant criteria for each case, no simulation can be considered an infallible way to determine whether a part will fail. Ultimately, testing is still required. Simulation can, however, dramatically reduce the number of design and test iterations.

Simulation is more useful for providing rich qualitative data showing stress fields than it is at predicting exact failure. This can be invaluable in assisting a designer, or an optimization algorithm, to design parts that put material where it is needed to carry loads.

To learn more about analysis with SOLIDWORKS Simulation, check out the eBook Understanding Nonlinear Analysis.

]]>
Jody Muelaner