TAGGED: ansys-hfss, ansys-hpc, batch-hpc
July 15, 2025 at 5:49 pm
umme
Subscriber

I am attempting to run an antenna array simulation using HFSS 2022 R1 on an HPC cluster with two nodes allocated via a Slurm scheduler.

Although an initial MPI test using mpirun.fl successfully executes the hf3d INTEL_MPI -mpi_mem_info command across both nodes and returns memory information from each, the actual simulation (including the G3dMesher and hf3d iterations) appears to run only on the primary node, not utilizing both nodes as expected.

I would appreciate your guidance on whether this behavior is expected, and on how to properly enable distributed parallelism across multiple nodes for this kind of HFSS job.
-
July 22, 2025 at 5:45 pm
Gia
Ansys Employee

If you are an Academic user, please re-post this question in the forum for Installation, Systems, and HPC Job Submission questions:
 https://innovationspace.ansys.com/forum/forums/forum/installation-and-licensing/ansys-products/
If you are a commercial user with access to the Ansys Customer Portal, please submit a new case here: support.ansys.com
-
July 23, 2025 at 5:00 pm
umme
Subscriber

Thank you for your suggestion. I've already posted the question accordingly.