September 6, 2019 at 5:23 pm
Raciel
Subscriber
Hello,
I am working on the structural analysis of heat exchangers at very high temperatures. Thermal expansion is an important issue in this kind of device, so both thermal load and internal pressure are considered. I have a relatively big domain with around 3 million nodes. How much memory and how many CPU cores are recommended to obtain my solution? Where can I set the BCSOPTION or DSPOPTION command?
Thank you in advance.
Regards.
Raciel
September 7, 2019 at 10:58 am
peteroznewman
Subscriber
Hello Raciel,
Are you working in Workbench and Mechanical or are you using APDL? What release of ANSYS are you using?
Here are old instructions for setting the number of cores and turning on Distributed ANSYS in Mechanical.
In the new user interface of 2019 R2, it is simpler. They are right on the ribbon.
If you use the settings above, there is almost never a need to use DSPOPTION in Mechanical. (If you ever do need it, you can insert a Commands (APDL) object under the analysis branch in the Mechanical tree and type the command there.)
Distributed solves are almost always faster than Shared Memory solves, so you won't need BCSOPTION.
The only time I have used DSPOPTION is to force the solver to run in-core when its own logic chose out-of-core, but I knew it would fit in-core.
I have a 16 core computer and did some benchmarking on a range of different models, solving them at 2, 4, 8, and 16 cores. There are diminishing returns going from 8 to 16 cores for Structural solutions. CFD models continue to scale well. I have done no testing on Thermal models. I also have two licenses for solving, so I sometimes run two jobs in parallel at 8 cores each. The two jobs running in parallel on 8 cores each finish in nearly half the time compared with running the two jobs sequentially on 16 cores. I also found using 16 cores results in a longer solve time than using 15 cores. It is best to not use every core on the computer for the solver.
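The two-jobs-in-parallel observation is what you would expect from Amdahl's law. Here is a quick sketch; the parallel fraction p = 0.9 and the job time are assumed, illustrative values, not numbers from my benchmarks:

```python
# Illustration of why two 8-core jobs run at the same time can beat
# running them back to back on 16 cores (Amdahl's law).
# p = 0.9 and t1 = 100 are assumptions for illustration only.

def speedup(p, n):
    """Amdahl's law: speedup on n cores with parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9
t1 = 100.0  # serial solve time of one job, arbitrary units

# Two jobs one after the other, each on 16 cores:
t_sequential = 2 * t1 / speedup(p, 16)

# Two jobs simultaneously, each on 8 cores (wall time = one job's time):
t_parallel = t1 / speedup(p, 8)

print(t_sequential, t_parallel)  # the parallel pair finishes sooner
```

The bigger the serial fraction of the solve, the more the two-jobs-at-once strategy wins, because neither job is wasting cores on its serial portions.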
On the topic of memory, I put as much RAM in the computer as Windows 7 would support, 192 GB, so most of my models run in-core. For your models, you can find out whether they are running in-core or out-of-core by solving the model with the Direct Sparse solver (not the Iterative PCG solver), configured under Analysis Settings. After the solver has run, click on the Solution Information folder in Mechanical and, in the Details, set the output to Solver Output so that the solve.out file is shown in the main window. Press Ctrl-F, search for "memory", and click Next until you see something like the text below.
 DISTRIBUTED SPARSE MATRIX DIRECT SOLVER.
 Number of equations =       9261,   Maximum wavefront =   168
 Local memory allocated for solver             =    12.157 MB
 Local memory required for in-core solution    =    11.659 MB
 Local memory required for out-of-core solution =     5.591 MB
 Total memory allocated for solver             =    43.594 MB
 Total memory required for in-core solution    =    41.832 MB
 Total memory required for out-of-core solution =    20.945 MB
 *** NOTE ***                           CP =      1.859  TIME= 06:52:00
 The Distributed Sparse Matrix Solver is currently running in the
 in-core memory mode. This memory mode uses the most amount of memory
 in order to avoid using the hard drive as much as possible, which most
 often results in the fastest solution time. This mode is recommended
 if enough physical memory is present to accommodate all of the solver
 data.
Where it says "Total memory allocated for solver", find that number in your output to get an idea of the minimum amount of RAM you should have, because you want your models to run in-core.
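If you check this often, a small script can pull those memory lines out of solve.out for you. This is just a sketch; the line format is taken from the sample output above and may vary between releases, and the file path is an example:

```python
# Pull the solver memory summary lines out of an ANSYS solve.out file,
# so "allocated" can be compared against "required for in-core".
# Line format assumed to match the sample output shown above.

import re

def solver_memory(path):
    """Return a dict of {description: megabytes} for the memory lines."""
    pat = re.compile(r"^\s*(Total|Local) memory (.+?)\s*=\s*([\d.]+)\s*MB",
                     re.MULTILINE)
    with open(path) as f:
        text = f.read()
    return {f"{scope} memory {desc}": float(mb)
            for scope, desc, mb in pat.findall(text)}

# Example usage (point it at your own solver output file):
# mem = solver_memory("solve.out")
# print(mem["Total memory required for in-core solution"])
```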
If I get the message that the solver is running out-of-core, I go back and change the mesh density to use fewer nodes until the problem runs in-core. Sometimes I can't get there, so I might use the Iterative PCG solver, which requires less memory. But even with a small amount of installed RAM, ANSYS can still provide a solution; it will just take more time.
I maintain accuracy by having closely spaced nodes in areas of high stress (or temperature) gradient, and use larger elements where the gradient is low.
Regards,
Peter
September 13, 2019 at 6:35 pm
Raciel
Subscriber
Hello Peter,
I am working in Mechanical with ANSYS version 18.1. I have verified that the solver automatically runs in-core when possible, and out-of-core when the memory required for in-core mode is higher than the system memory. In those cases I have had an error about low disk space, even though the available space is sufficient.
Anyway, all the geometric domains in my study have around 2.5 million nodes, and I am trying to get them to solve in in-core mode. I have 128 GB of memory, and I have seen that each GB of memory can handle models of around 20,000 nodes.
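As a sanity check, that rule of thumb follows directly from the numbers above (nothing ANSYS-specific here, just the arithmetic):

```python
# Rough nodes-per-GB estimate from the figures in this thread:
# models of about 2.5 million nodes solving in-core on 128 GB of RAM.

nodes = 2_500_000
ram_gb = 128
nodes_per_gb = nodes / ram_gb
print(f"{nodes_per_gb:.0f} nodes per GB")  # prints "19531 nodes per GB"
```

That is about 20,000 nodes per GB, consistent with the estimate above, though the real ratio depends on element type, DOF per node, and the solver chosen.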
Thank you,
Raciel
The topic 'Balance between the computer capability and accuracy of the model' is closed to new replies.