Licensing

The Mechanical APDL distributed solver in 2020 R1 (possibly other versions as well) on Windows 10 may report extremely slow "Communication speed from master to core #" values in the output file (tens of MB/s up to 200-300 MB/s, where 4-5 GB/s is expected) and solve noticeably slower:

Communication speed from master to core  1 = 36.96 MB/sec
Communication speed from master to core  2 = 36.05 MB/sec
Communication speed from master to core  3 = 33.20 MB/sec
Communication speed from master to core  4 = 44.90 MB/sec
Communication speed from master to core  5 = 35.06 MB/sec
Communication speed from master to core  6 = 32.52 MB/sec
Communication speed from master to core  7 = 32.37 MB/sec
Communication speed from master to core  8 = 35.84 MB/sec
Communication speed from master to core  9 = 43.41 MB/sec
Communication speed from master to core 10 = 38.41 MB/sec
Communication speed from master to core 11 = 34.43 MB/sec
Communication speed from master to core 12 = 44.37 MB/sec
Communication speed from master to core 13 = 35.71 MB/sec
Communication speed from master to core 14 = 36.46 MB/sec
Communication speed from master to core 15 = 46.46 MB/sec
Communication speed from master to core 16 = 41.15 MB/sec
Communication speed from master to core 17 = 36.24 MB/sec

Switching to IBM MPI or MS MPI makes no difference. However, running the MAPDL MPI test shows everything working as expected:

latency = 0.3398 microseconds

Bytes      Bandwidth (MB/s)
-------    ----------------
8                23.544842
1024           1613.448247
4096           3162.126055
16384          4319.923306
65536          2234.998517
262144         4293.039927
1048576        4850.385359
4194304        5141.109631

Likewise, /parallel/bandwidth in Fluent prints the expected values (4-5 GB/s). Only distributed Mechanical APDL solving suffers this extreme MPI bandwidth slowdown.
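When triaging output files like the one above, the "Communication speed" lines can be parsed and flagged programmatically. A minimal sketch (the `slow_cores` name and the 1000 MB/s cutoff are my own choices, not part of MAPDL):

```python
import re

# Matches output-file lines like:
#   Communication speed from master to core  3 = 33.20 MB/sec
SPEED_RE = re.compile(
    r"Communication speed from master to core\s+(\d+)\s*=\s*([\d.]+)\s*MB/sec"
)

def slow_cores(output_text, threshold_mb_s=1000.0):
    """Return (core, MB/s) pairs whose master-to-core bandwidth falls
    below the threshold. 1000 MB/s is an arbitrary cutoff; healthy runs
    in this post report roughly 4-5 GB/s."""
    return [
        (int(core), float(speed))
        for core, speed in SPEED_RE.findall(output_text)
        if float(speed) < threshold_mb_s
    ]

sample = """\
Communication speed from master to core 1 = 36.96 MB/sec
Communication speed from master to core 2 = 4403.35 MB/sec
"""
print(slow_cores(sample))  # → [(1, 36.96)]
```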

    • FAQ
      Participant

      Checked the .dll files loaded by ANSYS.exe in Process Explorer and found the following suspect:

      umppc11009.dll - CrowdStrike Falcon Sensor Support Module, CrowdStrike, Inc.
      C:\Windows\System32\umppc11009.dll

      Renaming this file resolved the issue:

      Communication speed from master to core  1 = 3664.57 MB/sec
      Communication speed from master to core  2 = 4403.35 MB/sec
      Communication speed from master to core  3 = 4252.71 MB/sec
      Communication speed from master to core  4 = 2462.62 MB/sec
      Communication speed from master to core  5 = 3032.25 MB/sec
      Communication speed from master to core  6 = 4411.23 MB/sec
      Communication speed from master to core  7 = 4429.14 MB/sec
      Communication speed from master to core  8 = 1096.84 MB/sec
      Communication speed from master to core  9 = 4244.59 MB/sec
      Communication speed from master to core 10 = 4403.03 MB/sec
      Communication speed from master to core 11 = 4393.25 MB/sec
      Communication speed from master to core 12 = 4377.22 MB/sec
      Communication speed from master to core 13 = 4321.73 MB/sec
      Communication speed from master to core 14 = 4430.78 MB/sec
      Communication speed from master to core 15 = 4381.27 MB/sec
      Communication speed from master to core 16 = 4387.57 MB/sec
      Communication speed from master to core 17 = 4299.54 MB/sec

      Solving performance is back to expected, 2-3X faster than before the rename. Presumably something in the security software slows down MPI communication among the ANSYS.exe processes.
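      The Process Explorer step above, inspecting which modules are mapped into a running process, can also be scripted. A hedged sketch of the same idea on Linux, where /proc/<pid>/maps lists mapped libraries (on Windows, Process Explorer or tasklist /m serves this purpose); the "umppc" substring is taken from the DLL name identified in this post:

```python
import os

def mapped_libraries(pid="self"):
    """List shared objects mapped into a process by parsing
    /proc/<pid>/maps (Linux). The Windows analogue in the post is
    inspecting ANSYS.exe's loaded .dlls with Process Explorer."""
    libs = set()
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            fields = line.split()
            # The sixth field, when present, is the backing file path.
            if len(fields) >= 6 and fields[5].startswith("/"):
                libs.add(fields[5])
    return sorted(libs)

# Flag any module whose name contains "umppc" (the CrowdStrike Falcon
# Sensor support DLL identified in the post).
suspects = [p for p in mapped_libraries()
            if "umppc" in os.path.basename(p).lower()]
print(suspects)
```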