TAGGED: ansys-hpc, hpc, hpc-command-lines
March 7, 2024 at 5:37 pm - Iheanyi Ogbonna (Subscriber)
Hi, I'm currently having issues running a System Coupling simulation on my university's HPC cluster. The simulation runs correctly on my PC, but far too slowly for the nature of the problem and the time span I intend to study: it solves barely 0.1 seconds of physical time per day, whereas I need somewhere between 50 and 100 seconds of results to study the behaviour from rest until the angular velocity reaches a terminal value, hence the need to accelerate result generation with HPC infrastructure.
The error message I get every time I run on the cluster is attached to this message. There was an initial issue where the run sat at "awaiting connection from coupling participants" and the participants never connected, but I believe that was resolved when IPv6 was disabled on the worker nodes. However, I may be out of my depth on this one, and would sincerely appreciate any form of support in resolving it.
Regards
Traceback (most recent call last):
 File "PyLib/kernel/remote/RemoteObjects.py", line 32, in _nodeCaller
 File "PyLib/cosimulation/remoteclient/Client.py", line 82, in makeRemoteCall
RuntimeError
+-----------------------------------------------------------------------------+
| Error when making a remote call to participant Solution 4.                  |
+-----------------------------------------------------------------------------+
Traceback (most recent call last):
 File "PyLib/cosimulation/remoteclient/__init__.py", line 98, in makeRemoteCall
 File "PyLib/kernel/remote/RemoteObjects.py", line 52, in func
 File "PyLib/ComputeNodeCommand.py", line 118, in newfunc
 File "PyLib/ComputeNodeCommand.py", line 85, in raiseException
RuntimeError

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
 File "PyLib/cosimulation/solver/__init__.py", line 150, in solve
 File "PyLib/kernel/util/Memory.py", line 208, in wrapper
 File "PyLib/cosimulation/solver/__init__.py", line 587, in __initializeControlled
 File "PyLib/cosimulation/participantmanager/couplingparticipant/OldApisParticipant.py", line 145, in initializeOutputs
 File "PyLib/cosimulation/participantmanager/couplingparticipant/OldApisParticipant.py", line 386, in __advanceToSyncPoint
 File "PyLib/cosimulation/remoteclient/__init__.py", line 103, in makeRemoteCall
cosimulation.solver.CosimulationError.CosimulationError: Error when making a remote call to participant Solution 4.
Traceback (most recent call last):
 File "PyLib/cosimulation/remoteclient/__init__.py", line 98, in makeRemoteCall
 File "PyLib/kernel/remote/RemoteObjects.py", line 52, in func
 File "PyLib/ComputeNodeCommand.py", line 118, in newfunc
 File "PyLib/ComputeNodeCommand.py", line 85, in raiseException
RuntimeError

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
 File "CommandConsole", line 1, in
 File "PyLib/kernel/commands/CommandDefinition.py", line 74, in func
 File "PyLib/kernel/commands/__init__.py", line 30, in executeCommand
 File "PyLib/kernel/commands/CommandManager.py", line 165, in executeCommand
 File "PyLib/cosimulation/externalinterface/core/solver.py", line 137, in execute
 File "PyLib/cosimulation/solver/__init__.py", line 150, in solve
 File "PyLib/kernel/util/Memory.py", line 208, in wrapper
 File "PyLib/cosimulation/solver/__init__.py", line 587, in __initializeControlled
 File "PyLib/cosimulation/participantmanager/couplingparticipant/OldApisParticipant.py", line 145, in initializeOutputs
 File "PyLib/cosimulation/participantmanager/couplingparticipant/OldApisParticipant.py", line 386, in __advanceToSyncPoint
 File "PyLib/cosimulation/remoteclient/__init__.py", line 103, in makeRemoteCall
cosimulation.solver.CosimulationError.CosimulationError: Error when making a remote call to participant Solution 4.
>>>
now exiting CommandConsole...
Exiting Command Console...
Shutting down System Coupling compute node processes.
March 12, 2024 at 2:58 pm - MangeshANSYS (Ansys Employee)
Hello,
Can you see if setting the environment variable below helps?
ANSYS_RPC_LOCALACCESSONLY=1
March 23, 2024 at 9:38 pm - Iheanyi Ogbonna (Subscriber)
I finally got it to work on one of the university machines.
However, I would like to request the Linux equivalent of this setting, so I can pass it along to the HPC managers.
March 30, 2024 at 4:14 pm - Iheanyi Ogbonna (Subscriber)
Hi @Mangesh Bhide, thank you very much for your previous response. How can this environment variable setting be replicated on the Linux HPC? We still get this error message despite disabling IPv6 on the worker nodes.
I would sincerely appreciate a response on this issue.
Regards
April 30, 2024 at 9:30 am - Richard Martin (Subscriber)
Hi Iheanyi, I am experiencing exactly the same problem above. To set the environment variable in Linux, you would normally use:
export ANSYS_RPC_LOCALACCESSONLY=1
in your terminal or job script. However, I just tried that and the problem persists.
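For completeness, below is a minimal job-script sketch of where that export would typically sit. It assumes a Slurm scheduler; the directives, module name, Ansys version and launch line are placeholders and will differ from cluster to cluster.

#!/bin/bash
#SBATCH --job-name=system_coupling
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --time=24:00:00

# Restrict Ansys RPC connections to local access, as suggested above.
# Variables exported here are inherited by processes launched from this script.
export ANSYS_RPC_LOCALACCESSONLY=1

# Placeholder: load whatever Ansys module your cluster provides.
module load ansys/2024R1

# Placeholder: replace with however you normally launch the coupled analysis,
# e.g. a System Coupling batch run driven by a Python script.
"$AWP_ROOT241/SystemCoupling/bin/systemcoupling" -R run.py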
You said above: "I finally got it to work on one of the university machines." - how did you get it to work? And what version of Ansys are you using?
Thanks.
April 30, 2024 at 5:25 pm - Iheanyi Ogbonna (Subscriber)
Hi Richard,
Pardon my wording; by "machines" I was referring to the university PCs.
I have already added the line to my job script and also passed the information to the HPC manager to implement on the back end if necessary (I'm not familiar with what can be configured there). The problem still persists on the university HPC and has seriously stalled my research. I am awaiting a response from someone at ANSYS, while also working with my university to get expert support on the subject from ANSYS.
I'm still stuck, and there doesn't seem to be an accessible fix in sight.
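One check that may help the HPC manager (a sketch assuming the cluster uses Slurm; other schedulers have equivalents): confirm that the variable exported in the job script actually reaches every node allocated to the job, since the coupling participants run on the worker nodes.

# Run once per allocated node, from inside the job script after the export,
# and print whether the variable is visible on each node.
srun --nodes="$SLURM_JOB_NUM_NODES" --ntasks-per-node=1 \
    bash -c 'echo "$(hostname): ANSYS_RPC_LOCALACCESSONLY=${ANSYS_RPC_LOCALACCESSONLY:-<not set>}"'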
The topic ‘Ansys System Coupling Error on University HPC’ is closed to new replies.