


{"id":158414,"date":"2021-05-12T05:57:48","date_gmt":"2021-05-12T05:57:48","guid":{"rendered":"\/forum\/forums\/topic\/multiple-nodes-using-fluent-on-hpc-under-slurm\/"},"modified":"2021-05-18T13:02:52","modified_gmt":"2021-05-18T13:02:52","slug":"multiple-nodes-using-fluent-on-hpc-under-slurm","status":"closed","type":"topic","link":"https:\/\/innovationspace.ansys.com\/forum\/forums\/topic\/multiple-nodes-using-fluent-on-hpc-under-slurm\/","title":{"rendered":"Multiple Nodes Using Fluent on HPC under SLURM"},"content":{"rendered":"<div class=\"Item-Body\">\n<div class=\"Message userContent\">\n<p>Hello <\/p>\n<p>I have a question regarding running fluent under slurm asking for 2 nodes and 32 core per node having in total 64. The Batch code i am submitting to SLURM is:<\/p>\n<p>#!\/bin\/bash<\/p>\n<p>#SBATCH &#8211;job-name=64cores&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;## Name of the job.<\/p>\n<p>#SBATCH -A hetaha_lab&nbsp;&nbsp;&nbsp;&nbsp;## account to charge <\/p>\n<p>#SBATCH -p standard&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;## partition\/queue name<\/p>\n<p>#SBATCH &#8211;nodes=2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;## (-N) number of nodes to use<\/p>\n<p>#SBATCH &#8211;mem=15G&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;## request NGB of memory<\/p>\n<p>#SBATCH &#8211;ntasks-per-node=32 ## number of processes to launch per node<\/p>\n<p>#SBATCH &#8211;cpus-per-task=1&nbsp;&nbsp;&nbsp;## number of MPI threads<\/p>\n<p>#SBATCH &#8211;error=slurm-%J.err ## error log file<\/p>\n<p>module load ansys\/2019R2<\/p>\n<p>fluent 3ddp -t$SLURM_NTASKS -g -i jou_64.jou<\/p>\n<p>=================================<\/p>\n<p>When i do so fluent runs only on one machine and limits the number of cores to the max of this machine, here is what i get from fluent console<\/p>\n<p>======Console=====<\/p>\n<p>data\/opt\/apps\/ansys\/2019R2\/v194\/fluent\/fluent19.4.0\/bin\/fluent -r19.4.0 3ddp -t64 -g -i jou_64.jou<\/p>\n<p>\/data\/opt\/apps\/ansys\/2019R2\/v194\/fluent\/fluent19.4.0\/cortex\/lnamd64\/cortex.19.4.0 -f fluent -g -i jou_64.jou (fluent &quot;3ddp -pshmem&nbsp;-host -r19.4.0 -t64 -mpi=ibmmpi -path\/data\/opt\/apps\/ansys\/2019R2\/v194\/fluent -ssh&quot;)<\/p>\n<p>\/data\/opt\/apps\/ansys\/2019R2\/v194\/fluent\/fluent19.4.0\/bin\/fluent -r19.4.0 3ddp -pshmem -host -t64 -mpi=ibmmpi -path\/data\/opt\/apps\/ansys\/2019R2\/v194\/fluent -ssh -cx hpc3-14-09.local:34889:44422<\/p>\n<p>Starting \/data\/opt\/apps\/ansys\/2019R2\/v194\/fluent\/fluent19.4.0\/lnamd64\/3ddp_host\/fluent.19.4.0 host -cx hpc3-14-09.local:34889:44422 &quot;(list (rpsetvar (QUOTE parallel\/function) &quot;fluent 3ddp -flux -node -r19.4.0 -t64 -pshmem -mpi=ibmmpi -ssh&quot;) (rpsetvar (QUOTE parallel\/rhost) &quot;&quot;) (rpsetvar (QUOTE parallel\/ruser) &quot;&quot;) (rpsetvar (QUOTE parallel\/nprocs_string) &quot;64&quot;) (rpsetvar (QUOTE parallel\/auto-spawn?) #t) (rpsetvar (QUOTE parallel\/trace-level) 0) (rpsetvar (QUOTE parallel\/remote-shell) 1) (rpsetvar (QUOTE parallel\/path) &quot;\/data\/opt\/apps\/ansys\/2019R2\/v194\/fluent&quot;) (rpsetvar (QUOTE parallel\/hostsfile) &quot;&quot;) )&quot;<\/p>\n<p><\/p>\n<\/p>\n<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Welcome to ANSYS Fluent 2019 R2<\/p>\n<p><\/p>\n<\/p>\n<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Copyright 1987-2019 ANSYS, Inc. 
When I do so, Fluent runs on only one machine and limits the core count to that machine's maximum. Here is what I get in the Fluent console:

======Console=====

/data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/bin/fluent -r19.4.0 3ddp -t64 -g -i jou_64.jou

/data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/cortex/lnamd64/cortex.19.4.0 -f fluent -g -i jou_64.jou (fluent "3ddp -pshmem -host -r19.4.0 -t64 -mpi=ibmmpi -path/data/opt/apps/ansys/2019R2/v194/fluent -ssh")

/data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/bin/fluent -r19.4.0 3ddp -pshmem -host -t64 -mpi=ibmmpi -path/data/opt/apps/ansys/2019R2/v194/fluent -ssh -cx hpc3-14-09.local:34889:44422

Starting /data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/lnamd64/3ddp_host/fluent.19.4.0 host -cx hpc3-14-09.local:34889:44422 "(list (rpsetvar (QUOTE parallel/function) "fluent 3ddp -flux -node -r19.4.0 -t64 -pshmem -mpi=ibmmpi -ssh") (rpsetvar (QUOTE parallel/rhost) "") (rpsetvar (QUOTE parallel/ruser) "") (rpsetvar (QUOTE parallel/nprocs_string) "64") (rpsetvar (QUOTE parallel/auto-spawn?) #t) (rpsetvar (QUOTE parallel/trace-level) 0) (rpsetvar (QUOTE parallel/remote-shell) 1) (rpsetvar (QUOTE parallel/path) "/data/opt/apps/ansys/2019R2/v194/fluent") (rpsetvar (QUOTE parallel/hostsfile) "") )"

             Welcome to ANSYS Fluent 2019 R2

             Copyright 1987-2019 ANSYS, Inc. All Rights Reserved.
             Unauthorized use, distribution or duplication is prohibited.
             This product is subject to U.S. laws governing export and re-export.
             For full Legal Notice, see documentation.

Build Time: Apr 17 2019 13:39:08 EDT  Build Id: 10133

    --------------------------------------------------
    This is an academic version of ANSYS FLUENT. Usage of this product
    license is limited to the terms and conditions specified in your ANSYS
    license form, additional terms section.
    --------------------------------------------------

Host spawning Node 0 on machine "hpc3-14-09" (unix).

/data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/bin/fluent -r19.4.0 3ddp -flux -node -t64 -pshmem -mpi=ibmmpi -ssh -mport 10.240.58.22:10.240.58.22:36789:0

Starting /data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/multiport/mpi/lnamd64/ibmmpi/bin/mpirun -e MPI_IBV_NO_FORK_SAFE=1 -e MPI_USE_MALLOPT_MMAP_MAX=0 -np 64 /data/opt/apps/ansys/2019R2/v194/fluent/fluent19.4.0/lnamd64/3ddp_node/fluent_mpi.19.4.0 node -mpiw ibmmpi -pic shmem -mport 10.240.58.22:10.240.58.22:36789:0

-------------------------------------------------------------------------------
ID     Hostname     Core    O.S.      PID          Vendor
-------------------------------------------------------------------------------
n0-63  hpc3-14-09   64/40   Linux-64  19514-19577  Intel(R) Xeon(R) Gold 6148
host   hpc3-14-09           Linux-64  19292        Intel(R) Xeon(R) Gold 6148

MPI Option Selected: ibmmpi
Selected system interconnect: shared-memory
-------------------------------------------------------------------------------

Reading journal file jou_64.jou...

====================

Can someone help with how to run Fluent on multiple nodes under SLURM?
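For reference, a minimal sketch of the commonly suggested fix: the launch line above never tells Fluent which hosts the job owns, so it spawns all 64 processes on the first node over shared memory (hence the -pshmem in the console). The usual remedy is to hand Fluent a machine file built from the SLURM allocation via its -cnf option. The hostfile name below is illustrative, and this is a sketch to verify against your site's setup, not a confirmed recipe:

#!/bin/bash
#SBATCH --job-name=64cores
#SBATCH -A hetaha_lab
#SBATCH -p standard
#SBATCH --nodes=2
#SBATCH --mem=15G
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=1
#SBATCH --error=slurm-%J.err

module load ansys/2019R2

## Expand the compact nodelist (e.g. hpc3-14-[09-10]) into one hostname
## per line; Fluent spreads its -t processes across the listed hosts.
scontrol show hostnames "$SLURM_JOB_NODELIST" > fluent.hosts.$SLURM_JOB_ID

## -cnf= points Fluent at the machine file instead of letting it default
## to shared memory on the node where this script runs.
fluent 3ddp -t$SLURM_NTASKS -cnf=fluent.hosts.$SLURM_JOB_ID -g -i jou_64.jou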
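If the machine file is picked up, the host table in the startup banner should show ranks on both hpc3-14-09 and the second node, and the "Selected system interconnect" line should no longer read shared-memory. On an InfiniBand cluster the interconnect can additionally be forced with Fluent's -pib flag; treat that flag, like -cnf above, as something to confirm against the 2019 R2 command-line documentation for your installation.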