
Multiple Nodes Using Fluent on HPC under SLURM

    • Nabilkhalifa
      Subscriber

Hello,

      I have a question about running Fluent under SLURM when requesting 2 nodes with 32 cores per node, 64 cores in total. The batch script I am submitting to SLURM is:

#!/bin/bash
      #SBATCH --job-name=64cores   ## name of the job
      #SBATCH -A hetaha_lab        ## account to charge
      #SBATCH -p standard          ## partition/queue name
      #SBATCH --nodes=2            ## (-N) number of nodes to use
      #SBATCH --mem=15G            ## memory to request per node
      #SBATCH --ntasks-per-node=32 ## number of MPI processes to launch per node
      #SBATCH --cpus-per-task=1    ## CPUs per MPI task
      #SBATCH --error=slurm-%J.err ## error log file

      module load ansys/2019R2

      fluent 3ddp -t$SLURM_NTASKS -g -i jou_64.jou

      =================================

When I do so, Fluent runs on only one machine and caps the number of cores at that machine's maximum. Here is what I get from the Fluent console:

======Console=====

      /data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/bin/fluent -r19.4.0 3ddp -t64 -g -i jou_64.jou
      /data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/cortex/lnamd64/cortex.19.4.0 -f fluent -g -i jou_64.jou (fluent "3ddp -pshmem -host -r19.4.0 -t64 -mpi=ibmmpi -path/data/opt/apps/ansys/2019R2/v194/fluent -ssh")
      /data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/bin/fluent -r19.4.0 3ddp -pshmem -host -t64 -mpi=ibmmpi -path/data/opt/apps/ansys/2019R2/v194/fluent -ssh -cx hpc3-14-09.local:34889:44422
      Starting /data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/lnamd64/3ddp_host/fluent.19.4.0 host -cx hpc3-14-09.local:34889:44422 "(list (rpsetvar (QUOTE parallel/function) "fluent 3ddp -flux -node -r19.4.0 -t64 -pshmem -mpi=ibmmpi -ssh") (rpsetvar (QUOTE parallel/rhost) "") (rpsetvar (QUOTE parallel/ruser) "") (rpsetvar (QUOTE parallel/nprocs_string) "64") (rpsetvar (QUOTE parallel/auto-spawn?) #t) (rpsetvar (QUOTE parallel/trace-level) 0) (rpsetvar (QUOTE parallel/remote-shell) 1) (rpsetvar (QUOTE parallel/path) "/data/opt/apps/ansys/2019R2/v194/fluent") (rpsetvar (QUOTE parallel/hostsfile) "") )"

                   Welcome to ANSYS Fluent 2019 R2

                   Copyright 1987-2019 ANSYS, Inc. All Rights Reserved.
                   Unauthorized use, distribution or duplication is prohibited.
                   This product is subject to U.S. laws governing export and re-export.
                   For full Legal Notice, see documentation.

      Build Time: Apr 17 2019 13:39:08 EDT Build Id: 10133

          --------------------------------------------------------------
          This is an academic version of ANSYS FLUENT. Usage of this product
          license is limited to the terms and conditions specified in your ANSYS
          license form, additional terms section.
          --------------------------------------------------------------

      Host spawning Node 0 on machine "hpc3-14-09" (unix).
      /data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/bin/fluent -r19.4.0 3ddp -flux -node -t64 -pshmem -mpi=ibmmpi -ssh -mport 10.240.58.22:10.240.58.22:36789:0
      Starting /data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/multiport/mpi/lnamd64/ibmmpi/bin/mpirun -e MPI_IBV_NO_FORK_SAFE=1 -e MPI_USE_MALLOPT_MMAP_MAX=0 -np 64 /data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/lnamd64/3ddp_node/fluent_mpi.19.4.0 node -mpiw ibmmpi -pic shmem -mport 10.240.58.22:10.240.58.22:36789:0

      -------------------------------------------------------------------------------
      ID    Hostname   Core  O.S.     PID         Vendor
      -------------------------------------------------------------------------------
      n0-63 hpc3-14-09 64/40 Linux-64 19514-19577 Intel(R) Xeon(R) Gold 6148
      host  hpc3-14-09       Linux-64 19292       Intel(R) Xeon(R) Gold 6148

      MPI Option Selected: ibmmpi
      Selected system interconnect: shared-memory
      -------------------------------------------------------------------------------

      Reading journal file jou_64.jou...

      ====================

Can someone help me figure out how to run Fluent on multiple nodes under SLURM?
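
      For reference, a quick way to confirm what SLURM actually allocated is to print the job's node list from inside the batch script; a minimal sketch using the standard SLURM environment variables:

      ## Diagnostic only: show what SLURM gave this job
      echo "Node list: $SLURM_JOB_NODELIST"   ## compressed form of the allocated nodes
      scontrol show hostnames                 ## expands the list, one hostname per line
      echo "Total tasks: $SLURM_NTASKS"       ## should be 64 for 2 nodes x 32 tasks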

    • JakeC
      Ansys Employee
Hi, you need to tell Fluent which nodes SLURM has picked out for it.
      Try the following:
#!/bin/bash
      #SBATCH --job-name=64cores   ## name of the job
      #SBATCH -A hetaha_lab        ## account to charge
      #SBATCH -p standard          ## partition/queue name
      #SBATCH --nodes=2            ## (-N) number of nodes to use
      #SBATCH --mem=15G            ## memory to request per node
      #SBATCH --ntasks-per-node=32 ## number of MPI processes to launch per node
      #SBATCH --cpus-per-task=1    ## CPUs per MPI task
      #SBATCH --error=slurm-%J.err ## error log file

      module load ansys/2019R2

      FLUENTNODES="$(scontrol show hostnames)"
      FLUENTNODES=$(echo $FLUENTNODES | tr ' ' ',')

      fluent 3ddp -t$SLURM_NTASKS -g-cnf=$FLUENTNODES -i jou_64.jou
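
      If you prefer a hosts file over a comma-separated list, Fluent's -cnf option also accepts a file path; a minimal sketch of that variant (the file name fluent_hosts.$SLURM_JOB_ID is just an example):

      ## Alternative: write one allocated hostname per line, then point -cnf at the file
      FLUENTHOSTS="fluent_hosts.$SLURM_JOB_ID"
      scontrol show hostnames > "$FLUENTHOSTS"
      fluent 3ddp -t$SLURM_NTASKS -g -cnf="$FLUENTHOSTS" -i jou_64.jou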

Thank you,
      Jake