TAGGED: mpp
November 23, 2022 at 9:42 pm - Santiago Ruiz, Subscriber
Hi,
I am trying to run LS-DYNA on an HPC system. The computer center provided the following symbolic links to run the software in double precision and single precision:
- ls-dyna_mpp_d_R13_1_0_x64_centos78_ifort190_avx2_openmpi2.1.3.l2a
- ls-dyna_mpp_s_R13_1_0_x64_centos78_ifort190_avx2_openmpi2.1.3.l2a
I haven't been able to run LS-DYNA using these commands. Looking at the online documentation (https://ftp.lstc.com/anonymous/outgoing/support/FAQ/ASCII_output_for_MPP_via_binout and https://lstc.com/download/ls-dyna), I can see that the l2a executable named at the end of these links is used to extract ASCII files from a binout file. Therefore, I am not sure these are the files that should be used to run an analysis.
Could you please help me confirm whether the provided links are the correct ones to run LS-DYNA, or whether I might be missing something in the execution?
November 28, 2022 at 3:55 pm - Ben_Ben, Subscriber
I think you need to define the input file after the command. I use i=input_file to specify it.
Here is the command I use to run it on my HPC:
ls-dyna_smp_d_R13_1_0_x64_centos79_ifort190 i=input_file.k NCPU=12
November 28, 2022 at 6:40 pm - Reno Genest, Ansys Employee
Hello Santiago,
The file ending with .l2a is not the LS-DYNA solver; it is the l2a utility, which converts binout files into the usual ASCII output files. Please take the other file in the folder (the same name without the .l2a suffix).
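For reference, after an MPP run has produced binout files, a typical l2a invocation would look something like the sketch below (assuming you run it from the directory containing the binout files and use the double-precision build from your list):
./ls-dyna_mpp_d_R13_1_0_x64_centos78_ifort190_avx2_openmpi2.1.3.l2a binout*
This writes the ASCII files (glstat, matsum, etc.) next to the binout files, but it is not needed to run the analysis itself.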
Here are the commands I use to run LS-DYNA on Linux:
SMP:
/data2/rgenest/lsdyna/ls-dyna_smp_d_R13_0_0_x64_centos610_ifort190/ls-dyna_smp_d_R13_0_0_x64_centos610_ifort190 i=/data2/rgenest/runs/Test/input.k ncpu=-4 memory=20m
Note that for SMP, we recommend setting ncpu=-4 (that is, a negative number of CPUs). The negative sign means that the solver will use consistency checking to get consistent results with different numbers of cores.
MPP:
To run MPP, we specify the location of the mpiexec or mpirun file first.
Intel MPI:
/data2/rgenest/intel/oneapi/mpi/2021.2.0/bin/mpiexec -np 4 /data2/rgenest/lsdyna/ls-dyna_mpp_d_R13_0_0_x64_centos610_ifort190_sse2_intelmpi-2018/ls-dyna_mpp_d_R13_0_0_x64_centos610_ifort190_sse2_intelmpi-2018 i=/data2/rgenest/runs/Test/input.k memory=20m
Platform MPI:
/data2/rgenest/bin/ibm/platform_mpi/bin/mpirun -np 4 /data2/rgenest/lsdyna/ls-dyna_mpp_d_R13_0_0_x64_centos610_ifort190_sse2_platformmpi/ls-dyna_mpp_d_R13_0_0_x64_centos610_ifort190_sse2_platformmpi i=/data2/rgenest/runs/Test/input.k memory=20m
Open MPI:
/opt/openmpi-4.0.0/bin/mpirun -np 4 /data2/rgenest/lsdyna/ls-dyna_mpp_d_R13_0_0_x64_centos610_ifort190_sse2_openmpi4/ls-dyna_mpp_d_R13_0_0_x64_centos610_ifort190_sse2_openmpi4.0.0 i=/data2/rgenest/runs/Test/input.k memory=20m
You will have to modify the above commands to use your own paths for the LS-DYNA solver, the MPI mpiexec/mpirun executable, and the input file.
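On a cluster with a batch scheduler, the MPP command typically goes inside a job script. Here is a minimal sketch assuming a Slurm scheduler, the Open MPI build from your links, and placeholder paths and module names (your computing center will have the exact ones); the solver is assumed to be the file with the same name as your symbolic link but without the .l2a suffix:
#!/bin/bash
#SBATCH --job-name=lsdyna_mpp
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=02:00:00
# Load the MPI stack that matches the solver build (module name is system specific)
module load openmpi/2.1.3
# Placeholder paths; replace with your own solver and input file locations
SOLVER=/path/to/ls-dyna_mpp_d_R13_1_0_x64_centos78_ifort190_avx2_openmpi2.1.3
INPUT=/path/to/input.k
mpirun -np $SLURM_NTASKS $SOLVER i=$INPUT memory=20m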
Let me know how it goes.
Reno.
November 30, 2022 at 10:35 am - Ben_Ben, Subscriber
Hi Reno,
For an SMP run on 32 cores, should I still set NCPU=-4?
Ben
November 30, 2022 at 6:21 pm - Reno Genest, Ansys Employee
Hello Ben,
To run SMP on 32 cores, you need to set NCPU=-32. But note that SMP does not scale well beyond 8 cores, which means you won't see much speedup running on 32 vs 8 cores. To make full use of all 32 cores, you should run MPP. Also note that your model needs to be large enough for 32 cores: we recommend at least 10,000 elements per core. With fewer elements per core, the communication between the cores becomes the bottleneck. So, to run efficiently on 32 cores, you would want a model with 320,000 or more elements. This is a rule of thumb and the optimum will be model specific. You can benchmark your model and compare calculation times with different numbers of cores (8, 16, 24, and 32, for example) to see what is fastest for you.
You will find more information on MPP here:
https://ftp.lstc.com/anonymous/outgoing/support/PRESENTATIONS/mpp_201305.pdf
https://ftp.lstc.com/anonymous/outgoing/support/FAQ/mpp.getting_started
Note that you should expect slightly different results with different numbers of cores in MPP. This is because the FEM domain is decomposed according to the number of cores requested. So, once you find the number of cores that gives you the best performance, please use that same number of MPP cores when comparing results between different runs.
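If you want to script such a benchmark, a simple shell loop along the following lines is one possible sketch (paths are placeholders; mpirun, solver, and input as in the commands above). It runs the same model on 8, 16, 24, and 32 cores in separate directories:
# Sketch: benchmark the same model with different MPP core counts (placeholder paths)
SOLVER=/path/to/mpp_solver
INPUT=/path/to/input.k
for N in 8 16 24 32; do
    mkdir -p bench_${N}cores
    cd bench_${N}cores
    # each run writes its output (d3hsp, d3plot, binout, ...) into its own directory
    mpirun -np $N $SOLVER i=$INPUT memory=20m > solver_${N}.log 2>&1
    cd ..
done
You can then compare the elapsed time reported at the end of each d3hsp (or solver log) to pick the core count that is fastest for your model.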
Let me know how it goes.
Reno.
December 1, 2022 at 5:06 pm - Ben_Ben, Subscriber
I'm going to give it a go. Just need to set up MPP on my HPC.
Will let you know how it goes,
Ben
The topic ‘Run LS-DYNA in HPC’ is closed to new replies.