Platform

Topics related to optiSLang, HPC, DesignXplorer, Cloud and more.

*.scl file is not updated, participants are not coupled when running FSI on HPC

    • Jirong
      Subscriber
      Hi All,
      I'm trying to run an FSI problem on HPC via a Slurm script. The two participants (Ansys Mechanical + Fluent) do not seem to be coupled together, and the *.scl file stops updating after the Setup Validation. Although the two participants generate some results, they are solved separately, not coupled.
      I know that Ansys does not officially support Slurm, but I have seen many people share their successful experiences, and our school only uses Slurm. So I would like to see if anyone has suggestions about this. I would appreciate it very much.
      Thanks,
      Jirong
    • Stephen Orlando
      Ansys Employee
      Hi Jirong,

      When submitting a System Coupling job to a job scheduler (System Coupling supports Slurm), you'll need to add the command PartitionParticipants to the .py launch script. The details are discussed more here: https://ansyshelp.ansys.com/account/secured?returnurl=/Views/Secured/corp/v211/en/sysc_ug/sysc_userinterfaces_advtasks_parallel.html

      Please see this tutorial for how to set up the .py launch script. I recommend using the method in this tutorial with .scp files instead of the .scl file method.
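
      Roughly, the run.py from that tutorial has the shape sketched below. Treat this as a sketch from memory rather than a working script: the .scp file names are placeholders, the interface and data-transfer commands are elided, and you should take the exact commands from the tutorial.

      # run.py -- sketch of a System Coupling command-line run script
      # (placeholder file names; take the exact commands from the tutorial)
      AddParticipant(InputFile = 'fluent.scp')      # register the Fluent participant
      AddParticipant(InputFile = 'mechanical.scp')  # register the Mechanical participant
      # ... create the coupling interface and data transfers per the tutorial ...
      # On a cluster, split the scheduler's allocation among the participants
      # before solving; 'SharedAllocateMachines' is the assumed algorithm name.
      PartitionParticipants(AlgorithmName = 'SharedAllocateMachines')
      Solve()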

      If you post your .py script and Slurm script I might be able to provide more suggestions.

    • Jirong
      Subscriber
      Hi Steve,
      Thanks for your information! I'm glad that System Coupling supports Slurm; that's great news for me. But I am using Ansys via the high-performance computer at my university, and I don't have an account to open the links you shared. Is there any other way to view those links?
      BTW, in my FSI directory I created the files following the guidance in the System Coupling User's Guide (Workflows for System Coupling Using the Command Line). I have a *.dat file for Ansys Mechanical, *.cas and *.jou (journal) files for Fluent, and a *.sci file for System Coupling. I didn't notice that any Python script was needed, so I don't have a .py script. My Slurm script is pasted below; any suggestions would be welcome.
      #!/bin/bash 
      #SBATCH --job-name   ANSYS_FSI
      #SBATCH --time     02:00:00     # Walltime
      #SBATCH --ntasks    3
      #SBATCH --mem-per-cpu  16gb        # Memory per CPU
      #SBATCH -o s2.out        # stdout
      #SBATCH -e s2.err        # stderr
      #SBATCH --hint     nomultithread   # No hyperthreading
      cd /panfs/roc/groups/14/tranquil/li000096/ansys/dhp/Slurm/New_large
      module load ansys/20.1
      COMP_CPUS=$((SLURM_NTASKS-1))
      MECHANICAL_CPUS=1
      FLUID_CPUS=$((COMP_CPUS-MECHANICAL_CPUS))
      export SLURM_EXCLUSIVE="" # don't share CPUs
      echo "CPUs: Coupler:1 Struct:$MECHANICAL_CPUS Fluid:$FLUID_CPUS"

      echo "STARTING SYSTEM COUPLER"
      ANSYSDIR=/panfs/roc/msisoft/ansys/20.1/v201

      SERVERFILE=scServer.scs
      srun -N1 -n1 /bin/hostname | sort > my_node_list
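      # Launch System Coupling in the background; it writes $SERVERFILE with
      # its connection details (port@host) once the server is up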
      $ANSYSDIR/aisol/.workbench -cmd ansys.services.systemcoupling.exe -inputFile coupler.sci &
      # Wait till $SERVERFILE is created
      while [[ ! -f "$SERVERFILE" ]] ; do
        sleep 1 # waiting for SC to start
      done
      sleep 1
      # Parse $SERVERFILE: the first line is port@host for the System Coupling
      # server, followed by the participant count and the participant names
      cat $SERVERFILE
      (
      read hostport
      read count
      read ansys_sol
      read tmp1
      read fluent_sol
      read tmp2
      set `echo $hostport | sed 's/@/ /'`
      echo $1 > out.port
      echo $2 > out.host
      echo $ansys_sol > out.ansys
      echo $fluent_sol > out.fluent
      ) < $SERVERFILE
      read host < out.host
      read port < out.port
      read ansys_sol < out.ansys
      read fluent_sol < out.fluent
      echo "Port number: $port"
      echo "Host name: $host"
      echo "Fluent name: $fluent_sol"
      echo "Mechanical name: $ansys_sol"
      echo "STARTING FLUENT"
      fluent 3ddp -g -t$FLUID_CPUS -ssh -mpi=intel -scport=$port -schost=$host -scname="$fluent_sol" -cnf=my_node_list -i fluidFlow.jou > fluent.out || scancel $SLURM_JOBID &
      sleep 2
      echo "STARTING ANSYS"
      # Run Ansys
      ansys201 -b -mpi ibmmpi -np $MECHANICAL_CPUS -scport $port -schost $host -scname "$ansys_sol" -cnf=my_node_list -i structural.dat > struct.out || scancel $SLURM_JOBID &
      # Wait for the background solver processes; without this the batch
      # script exits immediately and Slurm kills the job
      wait
      Thanks,
      Jirong
    • Stephen Orlando
      Ansys Employee
      To get the links to work, launch the Ansys Help from Fluent (or any other Ansys software). This will open the help in a browser. Copy the link I sent into the same browser session.

      Please work through this tutorial, and you'll see how to set up the Python script. When System Coupling is launched from the Slurm script, it automatically launches Fluent and Mechanical, so manually launching them from the Slurm script isn't the correct process.

      https://ansyshelp.ansys.com/account/secured?returnurl=/Views/Secured/corp/v202/en/sysc_tut/sysc_tut_oscplate_cli_fluent.html
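
      With that workflow your Slurm script becomes much shorter, something like the sketch below. I'm assuming the standalone launcher at <install>/SystemCoupling/bin/systemcoupling and its -R option for running a script in batch mode, so verify both against your installation (the path below reuses the ANSYSDIR from your script):

      #!/bin/bash
      #SBATCH --job-name   ANSYS_FSI
      #SBATCH --time     02:00:00
      #SBATCH --ntasks    3
      module load ansys/20.1
      # System Coupling reads the Slurm allocation, partitions it according to
      # PartitionParticipants in run.py, and launches Fluent and Mechanical
      # itself -- no manual participant launches or scServer.scs parsing needed.
      /panfs/roc/msisoft/ansys/20.1/v201/SystemCoupling/bin/systemcoupling -R run.py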
    • Azdine
      Subscriber
      Hi Jirong,

      Did you manage to solve your issue? I have bumped into the same problem. Could you help me work it out?

      Thank you
