Fluids

Topics related to Fluent, CFX, Turbogrid and more.

Vastly Different Solutions on Different Machines for the Same Set Up

    • Lachlan Whitehead
      Subscriber

      Hi! I am currently trying to use a high performance computing system to run some aerodynamic simulations of a rocket, but I'm running into some inconsistencies between the solutions that my PC gives compared to the solutions the HPC system gives. In effect, when given the same case file (.cas.h5), my PC gives a convergent and expected solution, while the HPC system gives a wildly divergent solution. Both systems run essentially the same version of ANSYS (2023 R1); however, my PC runs on Windows 10 and the HPC system runs on Linux.

      My question is: has anyone ever encountered differences this wild in solutions between different OSs? It seems very strange to me that this would occur. I have made case files from scratch on the HPC system, downloaded them, and run solutions on both, and again my PC gives good results while the HPC system gives divergent/poor results.

      All help appreciated!

    • Rob
      Forum Moderator

      There may be a slight difference due to rounding effects (LINUX v Win10 numerical effects) but the solution should be virtually identical. How many cores are you using on each system? 
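
      On the rounding point: floating-point addition is not associative, so summing the same numbers in a different order (different partition count, compiler or OS) changes only the last few digits. A small, Fluent-independent Python sketch of the size of difference that is "normal":

      # Partitioned sums mimic a parallel reduction across a different number
      # of cores: the results differ only at round-off level, nothing like the
      # divergence reported further down this thread.
      import random

      random.seed(0)
      values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

      def partitioned_sum(data, nparts):
          """Sum each partition separately, then combine the partial sums."""
          size = len(data) // nparts
          partials = [sum(data[i * size:(i + 1) * size]) for i in range(nparts)]
          partials[-1] += sum(data[nparts * size:])  # remainder, if any
          return sum(partials)

      for nparts in (1, 4, 8):
          print(nparts, repr(partitioned_sum(values, nparts)))
      # The outputs agree to roughly machine precision; anything larger (let
      # alone sign flips) points at the set-up rather than the operating system.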

      • Lachlan Whitehead
        Subscriber

        I've specified 4 cores on both machines, though the HPC results also change wildly if I change its core count (running 8 cores instead, say). Just for context of what I mean, here are the 60th-70th iterations of the HPC system's solution for drag force (iteration number, then force in N):

        60 15870.00649096797
        61 15126.98161440299
        62 14230.27875018823
        63 13508.82336605396
        64 12435.49345904301
        65 10756.44406350329
        66 7503.293226205074
        67 -108.4567288922171
        68 -11433.06256412564
        69 -33888.14975064303
        70 -86248.22022665566

        My PC converges after around this many iterations to something like 25 N. I should also mention that my PC uses the student license version, while the HPC system uses the full license.

    • Rob
      Forum Moderator

      To confirm, with 4 cores the results are the same, with 8 on the cluster they're different? The results above suggest the model is diverging.

      The licence only alters T&Cs and available mesh & compute, the solvers are the same. 

      • Lachlan Whitehead
        Subscriber

        No, the results are different (HPC system being divergent) regardless of the number of cores used. 

    • Rob
      Forum Moderator

      Are you using any UDFs? 

      • Lachlan Whitehead
        Subscriber

        I haven't set up the case with any UDFs; I'm very unfamiliar with them, so I have avoided them until I get time to learn. Is it possible that UDFs are being used on the HPC system even though I give it the exact same case file? Sorry for my unfamiliarity.

    • Rob
      Forum Moderator

      No, the same case should behave the same. It's more in case you had a UDF and didn't transfer it to/from the HPC. 

      • Lachlan Whitehead
        Subscriber

        Right, yeah. It's a really weird thing I'm not understanding. If it's any help: I've been trying to solve this inconsistency issue for a while now, and a couple of weeks ago I thought I had found the problem. In the geometry, there was just one pressure far-field boundary describing two orthogonal faces of the enclosure (inlet and open air), and when I changed this to two separate pressure far-fields, one per face, the divergence no longer occurred in the HPC system case either. (This was just in this specific case - though it made me think I had solved the problem.)

        But this wasn't the whole issue, since when I apply the same split-far-field procedure to new case files with slightly different geometry, the divergence returns again, just for the HPC solution.

    • Rob
      Forum Moderator

      Which solver are you running (Density or Pressure)? 

      • Lachlan Whitehead
        Subscriber

        I'm using pressure based.

    • Rob
      Forum Moderator

      If you read a case & data from one machine to the other what happens? I.e. is there a spike in residuals? 

      • Lachlan Whitehead
        Subscriber

        As in solve on the working one and then load that on the not-working machine? Or just initialise the case and load it?

    • Rob
      Forum Moderator

      Load the working case & data onto the cluster and continue the run. Look for any warnings on reading and any spike in residuals. 
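
      For anyone scripting that check in batch on the cluster, a minimal sketch is below. It assumes the run is driven by a Fluent journal file; the file names are hypothetical and the TUI command spellings are from memory, so verify them against your Fluent version first.

      # continue_run.py - writes a journal that reads the converged case AND
      # data from the working machine and simply continues iterating, with no
      # re-initialisation, so any read warnings or residual spikes show up in
      # the log. Names are placeholders; TUI commands are believed correct but
      # should be checked against your Fluent version.
      from pathlib import Path

      CASE_DATA = "rocket.cas.h5"   # hypothetical name of the converged case/data pair

      lines = [
          "; Read case and data together - do NOT re-initialise afterwards.",
          f'/file/read-case-data "{CASE_DATA}"',
          "/solve/iterate 200",
          '/file/write-case-data "rocket-continued.cas.h5"',
          "/exit yes",
      ]

      Path("continue.jou").write_text("\n".join(lines) + "\n")
      print("Wrote continue.jou; run it on the cluster with something like:")
      print("  fluent 3ddp -g -t4 -i continue.jou > continue.log 2>&1")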

    • Lachlan Whitehead
      Subscriber

      So I had to stop the simulation from re-initialising to follow your suggestion, and then I realised that the HPC run was using standard initialisation, while my PC was using hybrid initialisation. So I had the HPC system use hybrid initialisation and it all converges now! Thank you so much, Rob, really appreciate your help. While I still have you, could I ask what the difference is between standard initialisation and hybrid initialisation?

    • Rob
      Forum Moderator

      Ah, that shouldn't happen either, unless you've got something odd/missing in the set up or run script. 

      You're welcome, and since you asked nicely...  Standard initialisation sets a single value (usually - you can alter this) for each field over the whole domain. Hybrid initialisation solves the Laplace equation (I think) over the domain, so whilst the solution is "wrong" it's often reasonable relative to the converged state. Depending on the physics and boundary conditions either option can be "best", but if you have reactions & pressure boundaries you may find hybrid gives a better starting point. With a very stiff problem, the solver may struggle from a poor starting point. Check the Fluent CFD course in Learning; if it's the same slides as I give when teaching, it's covered - I give an edited version to a University course every year.
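
      To make that concrete for a batch run (since the difference here apparently came from how the cluster run was initialised), below is a sketch of a journal-writing script that initialises with hybrid rather than standard initialisation. The case name is hypothetical and the TUI command spellings are from memory - check them against your Fluent version.

      # fresh_run.py - writes a journal for a from-scratch run that uses hybrid
      # initialisation, matching what fixed the diverging cluster runs above.
      # Names are placeholders; TUI commands are believed correct but unverified.
      from pathlib import Path

      CASE_FILE = "rocket.cas.h5"   # hypothetical case file

      lines = [
          f'/file/read-case "{CASE_FILE}"',
          "; Hybrid initialisation: solves an approximate Laplace-type field first.",
          "/solve/initialize/hyb-initialization",
          "; The standard alternative (uniform values everywhere) would be roughly:",
          ";   /solve/initialize/initialize-flow",
          "/solve/iterate 500",
          '/file/write-case-data "rocket-solved.cas.h5"',
          "/exit yes",
      ]

      Path("fresh_run.jou").write_text("\n".join(lines) + "\n")
      print("Wrote fresh_run.jou")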
