TAGGED: ls-dyna-run-time, timestep, timestepsize
-
April 30, 2026 at 6:16 pm
lsdyna928
Subscriber
Hi there,
I have an explicit simulation that is running very slowly. One of the parts in the simulation is rigid, and according to the messag file, this part has the lowest timestep. Based on the LS-DYNA documentation, I decreased the Young's modulus of the rigid part (by assigning different material properties) in order to increase the timestep and reduce the overall runtime. I then reran the simulation; however, it now runs even slower. Could anyone help me understand why reducing the Young's modulus of the rigid part makes the simulation slower instead of faster, even though the documentation says it should make it faster?
I also have another question. The other components of the simulation consist of uniformly distributed SPH particles. I checked the d3hsp file for the "smallest" timestep entries, and they point to a few SPH nodes. My question is: if the SPH particles are distributed uniformly, why are only a few of them controlling the smallest timestep? What could be causing this, and how can I correct it?
I would appreciate any help with these issues.
-
May 1, 2026 at 12:47 pm
Nanda
Ansys Employee
Hello User,
Time step scaling with Young's modulus primarily applies to deformable elements, where the elastic wave speed governs numerical stability. For rigid parts, this mechanism does not apply in the same way. Rigid bodies typically affect the timestep through contact interactions, not through their elastic stiffness. When a rigid part participates in contact:
- The stable timestep is often controlled by the contact penalty stiffness, not the material's Young's modulus.
- Reducing the modulus on a rigid material can increase contact penetration.
- LS‑DYNA may automatically compensate by increasing penalty forces or stiffness, which makes contact enforcement more computationally expensive.
- As a result, even if the nominal timestep increases slightly, the CPU cost per timestep increases, leading to a slower overall run.
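For context on why the modulus trick works only for deformable elements: the explicit stable timestep follows a Courant-type condition, dt ≈ TSSFAC · L / c with wave speed c = sqrt(E/ρ). A minimal Python sketch of that relationship (the material values below are illustrative placeholders, not taken from your model):

```python
import math

def elastic_wave_speed(E, rho):
    """1D elastic wave speed c = sqrt(E / rho)."""
    return math.sqrt(E / rho)

def stable_timestep(char_length, E, rho, tssfac=0.9):
    """Courant-type stable timestep dt = tssfac * L / c for a deformable element."""
    return tssfac * char_length / elastic_wave_speed(E, rho)

# Steel-like deformable element with 1 mm characteristic length (SI units)
dt_stiff = stable_timestep(1.0e-3, 210e9, 7850.0)

# Same element with a 100x softer modulus: wave speed drops by 10x,
# so the stable timestep grows by 10x -- but only for deformable parts.
dt_soft = stable_timestep(1.0e-3, 2.1e9, 7850.0)

print(dt_stiff, dt_soft, dt_soft / dt_stiff)
```

For a rigid part this formula never enters the stability limit, which is why softening its material card buys nothing and can make contact treatment more expensive instead.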
I would recommend using mass scaling via *CONTROL_TIMESTEP (the DT2MS field). I don't know whether increasing SLSFAC under *CONTROL_CONTACT helps in your case, but I would give it a try.
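As a sketch of what those cards look like, here is a hedged example deck fragment; the numeric values are placeholders to show the fields, not recommended settings for your model (a negative DT2MS requests mass scaling toward the target timestep |DT2MS|):

```
*CONTROL_TIMESTEP
$    DTINIT    TSSFAC      ISDO    TSLIMT     DT2MS      LCTM
        0.0       0.9         0       0.0  -1.0E-07       0.0
*CONTROL_CONTACT
$    SLSFAC    RWPNAL    ISLCHK    SHLTHK
        0.1       0.0         2         1
```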
I have never worked with SPH myself, but I gathered some information that was available. It appears that, in SPH formulations, the stable timestep is governed by local particle conditions, not just by uniform particle spacing. Even when particles are initially distributed uniformly, a small number of particles can dominate the timestep if they experience extreme local behaviour.
Common causes include:
- High local velocities, typically near impact regions or contact interfaces with solids.
- Local pressure or density spikes due to compression, shocks, or particle clustering.
- Boundary and free‑surface effects, where particles have incomplete kernel support and may develop nonphysical pressure or sound‑speed values.
- Loss of neighbours or damage/erosion effects, which can reduce the effective smoothing length and collapse the timestep locally.
Because LS‑DYNA always selects the most restrictive local timestep, only these few “worst‑case” particles appear as timestep‑controlling in the d3hsp output.
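The point above can be sketched numerically. For an SPH particle, a Courant-type estimate is dt ≈ CFL · h / (c + |v|), so one particle with a high local velocity or sound speed drags the global minimum down. The particle values below are invented for illustration:

```python
def sph_particle_dt(h, c, v, cfl=0.3):
    """Courant-type SPH timestep estimate: dt = CFL * h / (c + |v|)."""
    return cfl * h / (c + abs(v))

# Uniformly spaced particles (identical smoothing length h), but one near
# the impact front carries a much higher velocity and a stiffer local state.
particles = [
    {"h": 1.0e-3, "c": 1500.0, "v": 1.0},    # quiescent bulk particle
    {"h": 1.0e-3, "c": 1500.0, "v": 2.0},    # quiescent bulk particle
    {"h": 1.0e-3, "c": 2500.0, "v": 400.0},  # particle at the impact front
]
dts = [sph_particle_dt(**p) for p in particles]

# The global timestep is the minimum over all particles, so the single
# worst-case particle controls the run even though the spacing is uniform.
print(min(dts), dts.index(min(dts)))
```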
Regards,
Nanda.
Have a look at our public help documentation website: Ansys Help
For more exciting courses and certifications, hit this link: Ansys Innovation Courses | ANSYS Innovation Space
Guidelines for Posting on Ansys Learning Forum
-
May 6, 2026 at 3:10 am
lsdyna928
Subscriber
Thank you for your explanations. I also used DT2MS to reduce the runtime. The documentation recommends checking the added mass and states that it should remain below 5% of the total mass. I checked the d3hsp file; for each cycle it reports the percentage of added mass, and this percentage seems to decrease at each timestep. My question is: should the added mass remain below 5% at all timesteps, or should I allow more simulation time for it to decrease further? A portion of the d3hsp output is attached below.
1 t 0.0000E+00 dt 6.40E-07 flush i/o buffers 05/05/26 19:35:48
1 t 0.0000E+00 dt 6.40E-07 write d3plot file 05/01/26 19:35:48
problem cycle = 100
time = 6.3460E-05
added mass = 1.5007E-01
percentage increase = 1.9945E+01
problem cycle = 200
time = 2.0746E-04
added mass = 1.5007E-01
percentage increase = 1.9945E+01
problem cycle = 300
time = 2.6146E-04
added mass = 1.5007E-01
percentage increase = 1.9945E+01
problem cycle = 400
time = 3.1546E-04
added mass = 1.5007E-01
percentage increase = 1.9945E+01
...
5000 t 2.6995E-03 dt 6.40E-07 flush i/o buffers 05/01/26 19:37:32
problem cycle = 5100
time = 2.7535E-03
added mass = 1.1533E-01
percentage increase = 1.5327E+01
problem cycle = 5200
time = 2.8075E-03
added mass = 1.1338E-01
percentage increase = 1.5068E+01
problem cycle = 5300
time = 2.8615E-03
added mass = 1.2393E-01
percentage increase = 1.6470E+01
I also have another question. I initially applied a higher DT2MS value, which increased the computational speed significantly, but the added-mass percentage became quite high, so I decreased the value. However, the resulting timestep is now very close to the smallest timesteps reported in the d3hsp file (for about 100 elements). So I wanted to ask whether the process I am following is correct, or whether I may be missing something.
-
May 6, 2026 at 9:18 am
Nanda
Ansys Employee
Hello User,
When using DT2MS, the guideline that added mass should remain below about 5% applies to the entire time interval of the simulation. Even if the added-mass percentage decreases with time (as is commonly observed due to contact stabilization or evolving element stiffness), any period where it is high already affects inertia, wave speeds, accelerations, and contact forces, and those early errors are not "undone" later. Therefore, you should not rely on running the model longer for the added mass to drop, unless the early part of the analysis is known to be non-physical and will be completely disregarded.

Your tuning approach is correct: increase DT2MS to gain speed, observe excessive added mass, then reduce DT2MS until the added-mass level becomes acceptable. If the resulting timestep is now close to the smallest natural element timesteps reported in the d3hsp file, that is actually expected: it indicates that DT2MS is no longer driving the timestep and that mass scaling is minimal, which is desirable. DT2MS should be viewed as a fine-tuning tool rather than a way to force large speedups without consequence. Finally, the global added-mass percentage alone is not sufficient; it is also important to keep an eye on the energy ratio and the energy summary graphs.

Best,
Nanda
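Since d3hsp reports the added-mass percentage at regular cycle intervals, the "below 5% over the whole run" check can be automated by scanning the file. A minimal sketch: the helper names are hypothetical, the regex assumes the "problem cycle" / "percentage increase" line format shown in your snippet, and the embedded sample data is copied from it:

```python
import re

# Sample text in the d3hsp format quoted earlier in this thread.
D3HSP_SNIPPET = """\
problem cycle = 100
time = 6.3460E-05
added mass = 1.5007E-01
percentage increase = 1.9945E+01
problem cycle = 5200
time = 2.8075E-03
added mass = 1.1338E-01
percentage increase = 1.5068E+01
"""

def added_mass_history(text):
    """Return (cycle, percent added mass) pairs from d3hsp-style output."""
    cycles = [int(m) for m in re.findall(r"problem cycle\s*=\s*(\d+)", text)]
    pcts = [float(m) for m in re.findall(
        r"percentage increase\s*=\s*([0-9.E+-]+)", text)]
    return list(zip(cycles, pcts))

def over_limit(history, limit=5.0):
    """Cycles where mass scaling exceeds the guideline (default 5%)."""
    return [(cyc, pct) for cyc, pct in history if pct > limit]

hist = added_mass_history(D3HSP_SNIPPET)
print(over_limit(hist))  # both sampled cycles exceed 5% here
```

In practice you would read the real d3hsp file instead of the embedded string; any cycle returned by the check means mass scaling was significant during that part of the response.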
© 2026 Copyright ANSYS, Inc. All rights reserved.