TAGGED: fluent, cfd, ansys, linux, windows-10
-
October 12, 2023 at 12:03 pm - Lachlan Whitehead (Subscriber)
Hi! I'm currently trying to use a high performance computing (HPC) system to run some aerodynamic simulations of a rocket, but I'm running into some inconsistencies between the solutions my PC gives and the solutions the HPC system gives. In effect, when given the same case file (.cas.h5), my PC gives a convergent and expected solution, while the HPC system gives a wildly divergent solution. Both systems run nominally the same version of ANSYS, 2023 R1; however, my PC runs on Windows 10 and the HPC system runs on Linux.
My question is: has anyone ever encountered solution differences this drastic between different operating systems? It seems very strange to me that this would occur. I have built case files from scratch on the HPC system, downloaded them, and run solutions on both machines; again, my PC gives good results and the HPC system gives divergent/poor results.
All help appreciated!
-
October 12, 2023 at 1:43 pm - Rob (Forum Moderator)
There may be a slight difference due to rounding effects (Linux vs Win10 numerical effects), but the solutions should be virtually identical. How many cores are you using on each system?
-
October 12, 2023 at 10:49 pm - Lachlan Whitehead (Subscriber)
I've specified both machines to use 4 cores, though the HPC results also change wildly if I switch it to 8 cores. Just for context of what I mean, here are the 60th-70th iterations of that HPC system solution for drag force:
60 15870.00649096797
61 15126.98161440299
62 14230.27875018823
63 13508.82336605396
64 12435.49345904301
65 10756.44406350329
66 7503.293226205074
67 -108.4567288922171
68 -11433.06256412564
69 -33888.14975064303
70 -86248.22022665566
My PC converges at around this many iterations to something like 25 N. I should also mention my PC uses the student license version, while the HPC system uses the full license.
-
October 13, 2023 at 11:21 am - Rob (Forum Moderator)
To confirm: with 4 cores the results are the same, and with 8 on the cluster they're different? The results above suggest the model is diverging.
The licence only alters the T&Cs and the available mesh & compute limits; the solvers are the same.
-
October 13, 2023 at 11:48 am - Lachlan Whitehead (Subscriber)
No, the results are different (HPC system being divergent) regardless of the number of cores used.
-
October 13, 2023 at 11:51 am - Rob (Forum Moderator)
Are you using any UDFs?
-
October 13, 2023 at 11:56 am - Lachlan Whitehead (Subscriber)
I haven't set up the case with any UDFs; I'm very unfamiliar with them, so I have avoided them until I get time to learn. Is it possible that UDFs are being used in the HPC system's run even though I give it the exact same case file? Sorry for my unfamiliarity.
-
October 13, 2023 at 12:08 pm - Rob (Forum Moderator)
No, the same case should behave the same. It's more in case you had a UDF and didn't transfer it to/from the HPC.
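For anyone reading along, a UDF is just a small C routine hooked into the case and compiled (or interpreted) against Fluent's udf.h header. A minimal sketch of a boundary-profile UDF, with a made-up profile name and made-up numbers purely for illustration, looks something like this:

```c
/* Illustrative only: sets an x-velocity profile on a boundary face zone.
   Built against Fluent's udf.h; the hook name and values are hypothetical. */
#include "udf.h"

DEFINE_PROFILE(inlet_x_velocity, thread, position)
{
    real x[ND_ND];   /* face centroid coordinates */
    face_t f;

    begin_f_loop(f, thread)
    {
        F_CENTROID(x, f, thread);
        /* 20 m/s plus an arbitrary variation with height (y) */
        F_PROFILE(f, thread, position) = 20.0 + 5.0 * x[1];
    }
    end_f_loop(f, thread)
}
```

The case file only stores the hook to such a function; the compiled library itself lives outside the .cas.h5 and has to be present, and rebuilt for the target platform, on whichever machine runs the case, which is why Rob asked.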
-
October 13, 2023 at 12:28 pm - Lachlan Whitehead (Subscriber)
Right, yeah. It's a really weird thing I'm not understanding. If it's any help: I've been trying to solve this inconsistency issue for a while now, and a couple of weeks ago I thought I had found the problem. In the geometry, a single pressure far-field was being used to describe two orthogonal faces of the enclosure (inlet and open air); when I changed this to two separate pressure far-fields, one per face, the divergence no longer occurred in the HPC system case either. (This was just for that specific case, though it made me think I had solved the problem.)
But this wasn't the issue for everything, since when I apply the same split-up far-field procedure to new case files of slightly different geometry, the divergence returns again, just for the HPC solution.
-
October 17, 2023 at 12:33 pm - Rob (Forum Moderator)
Which solver are you running (Density or Pressure)?
-
October 17, 2023 at 10:12 pm - Lachlan Whitehead (Subscriber)
I'm using the pressure-based solver.
-
October 18, 2023 at 11:45 am - Rob (Forum Moderator)
If you read a case & data from one machine into the other, what happens? I.e. is there a spike in the residuals?
-
October 19, 2023 at 7:44 am - Lachlan Whitehead (Subscriber)
As in solve on the working one and then load that on the not-working machine? Or just initialise the case and load it?
-
October 19, 2023 at 1:05 pm - Rob (Forum Moderator)
Load the working case & data onto the cluster and continue the run. Look for any warnings on reading and any spike in residuals.
-
October 21, 2023 at 12:11 am - Lachlan Whitehead (Subscriber)
So I had to stop the simulation from re-initialising in order to try your suggestion, and then I realised that the HPC run was using Standard Initialisation, while my PC was using Hybrid Initialisation. So I had the HPC system use hybrid initialisation, and it all converges now! Thank you so much, Rob, really appreciate your help. While I still have you, could I ask what the difference is between standard initialisation and hybrid initialisation?
-
October 24, 2023 at 9:10 am - Rob (Forum Moderator)
Ah, that shouldn't happen either, unless you've got something odd or missing in the set up or run script.
You're welcome, and since you asked nicely... Standard initialisation applies a single value (usually - you can alter this) for each field over the whole domain. Hybrid solves the Laplace equation (I think) over the domain, so whilst that solution is "wrong" it's often reasonable relative to the converged state. Depending on the physics and boundary conditions either option can be the better choice, but if you have reactions and pressure boundaries you may find hybrid gives a better starting point. From there, with a very stiff solution, the solver may struggle from a poor starting point. Check the Fluent CFD course in Ansys Learning; if it's the same slides as I give when teaching, it's covered there. I give an edited version to a university course every year.
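For reference, the Fluent theory documentation describes hybrid initialisation as solving a Laplace problem for a velocity potential (with a similar one for pressure), using boundary values taken from the inlets and outlets, roughly:

\[ \nabla^{2}\varphi = 0, \qquad \vec{u} = \nabla\varphi \]

so the starting field already carries the general inlet-to-outlet flow direction, rather than the single constant value per field that standard initialisation applies.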
-
The topic ‘Vastly Different Solutions on Different Machines for the Same Set Up’ is closed to new replies.