Electronics

Topics relate to HFSS, Maxwell, SIwave, Icepak, Electronics Enterprise and more

Facing issues while setting up Distributed memory simulations in ANSYS EDT HFSS 2020R1


    • mahesh2444
      Subscriber
      Hello,
      I have two machines connected through a router, both with ANSYS EDT 2020R1 installed, and I want to run distributed-memory simulations. For this I need MPI software (Intel or IBM) installed on both machines. I installed Intel MPI first, but it did not work for me: the simulation stalled with the progress window showing
      [project name] - HFSSDesign1 - Setup1: Determining memory availability on distributed machines on [target machine name]
      I don't know why it failed. Searching for this issue, I came across some relevant threads:
      /forum/discussion/5534/best-way-to-create-a-cluster-of-4-computers-for-ansys-electronics-desktopto-share-memory-and-cores
      /forum/discussion/14155/hpc-setup-for-ansys-2020r1
      /forum/discussion/10353/mpi-authentication-in-hpc-using-multiple-nodes-in-ansys-electronics
      /forum/discussion/7313/hfss-hpc-setup-issues
      All of them describe a six-step procedure that says to use IBM Platform Computing MPI. So I removed the Intel MPI libraries from the PC and installed the IBM MPI that ships with the installation.
      To check whether this helps in setting up distributed simulation, I ran the test mentioned in one of the threads above:
      %MPI_ROOT%\bin\mpirun -hostlist localhost:2,:2 %ANSYSEM_ROOT201%\schedulers\diagnostics\Utils\pcmpi_test.exe
      But this didn't work either and threw errors I hadn't seen in the forum:
      C:\Program Files (x86)\IBM\Platform-MPI\bin>%MPI_ROOT%\bin\mpirun -pass -hostlist localhost:2,:2 %ANSYSEM_ROOT201%\schedulers\diagnostics\Utils\pcmpi_test.exe
      Password for MPI runs:
      mpirun: Drive is not a network mapped - using local drive.
      mpid: PATH=C:\Program Files (x86)\IBM\Platform MPI\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;D:\Matlab install\runtime\win64;\Matlab install\bin;C:\Users\HP\AppData\Local\Microsoft\WindowsApps;
      mpid: PWD=C:\Program Files (x86)\IBM\Platform-MPI\bin
      mpid: CreateProcess failed: Cannot execute C:\Program Files (x86)\IBM\Platform-MPI\bin\%ANSYSEM_ROOT201%\schedulers\diagnostics\Utils\pcmpi_test.exe
      mpirun: Unknown error
      I want to know whether I have to perform any user registration for Platform MPI to work on my machines. If yes, please let me know how to do it.
      If someone knows the solution, please reply to this question.
      Thanks,
      Mahesh
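      [Editor's note: the "CreateProcess failed" line shows the literal text %ANSYSEM_ROOT201% was passed through unexpanded, so mpid prepended its working directory to it. A plausible check, sketched below under the assumption that the variable is simply undefined in the shell running mpirun (the install path is taken from the default 2020 R1 location mentioned later in this thread; adjust to the actual install):]

      ```bat
      :: Print the variable; if this echoes the literal "%ANSYSEM_ROOT201%",
      :: it is not defined in this shell.
      echo %ANSYSEM_ROOT201%

      :: Assumed default 2020 R1 install root; verify against the local machine.
      set "ANSYSEM_ROOT201=C:\Program Files\AnsysEM\AnsysEM20.1\Win64"

      :: Re-run the test with both paths quoted so the expanded
      :: "Program Files" paths survive as single arguments.
      "%MPI_ROOT%\bin\mpirun" -pass -hostlist localhost:2,:2 "%ANSYSEM_ROOT201%\schedulers\diagnostics\Utils\pcmpi_test.exe"
      ```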
    • mahesh2444
      Subscriber
      Hello,
      An update on my question. I have two machines: DESKTOP-CLH2LM1 (A) and DESKTOP-B4I9FQ7 (B).
      When I run the test with A as localhost and B as the other machine, the MPI test command prints the "Hello world!" output, indicating a good connection between A and B:
      C:\Users\Mahesh>%MPI_ROOT%\bin\mpirun -pass -hostlist localhost:2,DESKTOP-B4I9FQ7:2 %ANSYSEM_ROOT201%\schedulers\diagnostics\Utils\pcmpi_test.exe
      Password for MPI runs:
      mpirun: Drive is not a network mapped - using local drive.
      Hello world! I'm rank 0 of 4 running on DESKTOP-CLH2LM1
      Hello world! I'm rank 1 of 4 running on DESKTOP-CLH2LM1
      Hello world! I'm rank 2 of 4 running on DESKTOP-B4I9FQ7
      Hello world! I'm rank 3 of 4 running on DESKTOP-B4I9FQ7
      But when I run the same test with B as localhost and A as the other machine, I get the following output:
      C:\Users\HP>%MPI_ROOT%\bin\mpirun -pass -hostlist localhost:2,DESKTOP-CLH2LM1:2 %ANSYSEM_ROOT201%\schedulers\diagnostics\Utils\pcmpi_test.exe
      Password for MPI runs:
      mpirun: Drive is not a network mapped - using local drive.
      ERR-Client: InitializeSecurityContext failed (0x80090308)
      ERR - Client Authorization of socket failed.
      Command sent to service failed.
      mpirun: ERR: Error adding task to job (-1).
      mpirun: mpirun_mpid_start: thread 19792 exited with code -1
      mpirun: mpirun_winstart: unable to start all mpid processes.
      mpirun: Unable to contact remote service or mpid
      mpirun: An mpid process may still be running on DESKTOP-CLH2LM1
      I want to know why this happens and what settings I have to change to get the same output in both directions.
      To test the distributed simulation feature, I started a simulation of the Helical_Antenna example on machine A (the ANSYS 2020 R1 Help advises using it as a test case). I set up an analysis configuration with two machines, machine B first in the list followed by localhost.
      But the simulation steps such as meshing and solving ran only on machine B and did not use any of the hardware on machine A. Why did this happen?
      What settings do I need to change so that both machines are used in the simulation?
      P.S.: Machine A runs Windows 10 Pro and machine B runs Windows 10 Home. There is also one generation of difference between the processors on the two machines. I have disabled the firewalls completely on both machines. They are in the WorkGroup workgroup, not on a domain.

      Thanks,
      Mahesh
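      [Editor's note: error 0x80090308 commonly corresponds to SEC_E_INVALID_TOKEN, i.e. machine A's Platform MPI service rejected the credentials sent from B. One check worth making, sketched under the assumption that this Platform MPI build supports the documented Windows password options (verify locally with mpirun -help), is that the password is cached for the same account on each machine:]

      ```bat
      :: Assumption: -cache prompts once and stores the run-as password,
      :: replacing -pass (prompt every run) in later invocations.
      :: Run on each machine; the account name and password must match on both.
      "%MPI_ROOT%\bin\mpirun" -cache -hostlist localhost:1 "%ANSYSEM_ROOT201%\schedulers\diagnostics\Utils\pcmpi_test.exe"
      ```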
    • ANSYS_MMadore
      Ansys Employee
      Hello Mahesh, do you have the same username and password on each machine? Please note that MPI requires the machines to be on a domain; it does not support a Workgroup environment, so this may not work exactly as expected.
      Regarding listing B first and localhost second: the first machine in the list is responsible for meshing and the adaptive passes before the work is distributed to the other machines listed.

      Thanks,
      Matt
    • mahesh2444
      Subscriber
      Yes, all the machines have the same username and password.
      May I know what the workgroup name should be?
      I have also observed that sweep frequencies are solved locally rather than distributed. Isn't this feature available?
      Thanks,
      Mahesh
    • ANSYS_MMadore
      Ansys Employee
      There is no special requirement for the workgroup name. Can you share a screenshot of the HPC and Analysis Settings you are currently using? Please also click on each of the machines listed in the settings, select Test Machines, and share the output.

      Thank you,
      Matt
    • mahesh2444
      Subscriber
    • mahesh2444
      Subscriber
      Hi,
      I would like to know whether it is possible to solve a single sweep frequency in a distributed manner on two machines simultaneously. I will try to convey my need through the following scenario.
      I am simulating an array antenna at 25 GHz with dimensions of 70 x 20 mm. I unchecked the automatic settings in the HPC and Analysis Settings and set one task per machine, as shown in the image above. During adaptive meshing both machines were used and the mesh passes were computed per the convergence criteria (total memory used by 2 distributed processes: 9.2 GB). But before starting the sweep frequencies, the simulation stopped with a message similar to:
      sweep frequencies require 5.9GB memory per task and requires 11GB memory in total.
      But I have a total of 12.6 GB of memory available across both machines. When I re-ran the design, the simulation completed, consuming 6.27 GB per sweep frequency and stating that it was switching to mixed precision to save memory. During the re-run, only one machine (the first in the list) was used for solving the sweep frequencies.
      Why wasn't the second machine in the list used for solving sweep frequencies?
      With automatic settings enabled, the simulation never completed, stating that more memory was needed.
      So my other question is whether it is possible for HFSS to solve a sweep frequency requiring 12 GB of memory in a distributed manner, just as happened with the adaptive meshing process.
      Thanks,
      Mahesh
    • ANSYS_MMadore
      Ansys Employee
      Can you try solving C:\Program Files\AnsysEM\AnsysEM20.1\Win64\schedulers\diagnostics\Projects\HFSS\OptimTee-DiscreteSweep.aedt to test your setup? This will confirm whether the Sweep in Setup1 distributes across both machines.

      Thanks,
      Matt
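      [Editor's note: the diagnostic project can also be solved without the GUI, which makes the distribution behavior easy to reproduce from a log. The flag names below are an assumption from the 2020 R1 batch-solve documentation, not from this thread; verify them with ansysedt -help before relying on them:]

      ```bat
      :: Assumed flags: -ng (no GUI), -distributed, -machinelist, -batchsolve.
      :: Each machinelist entry is assumed to be host:tasks:cores.
      "C:\Program Files\AnsysEM\AnsysEM20.1\Win64\ansysedt.exe" -ng -distributed ^
        -machinelist list="DESKTOP-CLH2LM1:1:4,DESKTOP-B4I9FQ7:1:4" ^
        -batchsolve "C:\Program Files\AnsysEM\AnsysEM20.1\Win64\schedulers\diagnostics\Projects\HFSS\OptimTee-DiscreteSweep.aedt"
      ```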
    • mahesh2444
      Subscriber
      Thanks, it's working with the direct solver. Would this distribution work the same way with the domain solver too?
      Thanks,
      Mahesh
    • ANSYS_MMadore
      Ansys Employee
      Yes, it should.
    • mahesh2444
      Subscriber
      Hi,
      I am performing reflectarray simulations using the domain solver. The reflectarray and horn are separated by creating FE-BI boxes surrounding each of them. For a clear picture of my simulation setup, see this YouTube video.
      Video summary:
      HFSS uses a hybrid technique here. The entire simulation domain is divided into two FE-BI regions, so we can avoid meshing the space between the horn antenna and the reflectarray, reducing simulation time and memory consumption.
      When I try to simulate the setup described above, only the first machine in the list is used for adaptive meshing and the second machine remains idle. Eventually this causes an out-of-memory issue, leading to abrupt termination of the simulation.
      How can I make both of my machines be used for adaptive meshing, as happened with the direct solver? Please help me.
      Thanks,
      Mahesh
    • mahesh2444
      Subscriber
      Could you please look at my other question?
      The issue with the domain solver still persists in the case of MPI computing.
      Thanks,
      Mahesh
    • mahesh2444
      Subscriber
      Hi,
      My analysis setup is shown below, along with the message displayed in the analysis configuration.
    • ANSYS_MMadore
      Ansys Employee
      I have received this feedback. In short, that is the way HFSS works. DDM divides the whole problem only after an initial mesh is created, which is why we see meshing on only one compute node. After meshing is complete, HFSS knows where to divide the objects for further analysis and solving. At a very high level, the objects are divided where the mesh is minimal. The objects are not divided by geometry parts but electrically, through the mesh. The mesh is generated by determination of the electric field, so this initial mesh is necessary on one node before the problem can be split.

      Please let me know if this helps to explain the difference.

      Thanks,
      Matt
    • mahesh2444
      Subscriber
      I will check this and get back to you, Matt.
    • mahesh2444
      Subscriber
      Could you please look at my other question?
      Thanks,
      Mahesh
  • The topic ‘Facing issues while setting up Distributed memory simulations in ANSYS EDT HFSS 2020R1’ is closed to new replies.