Licensing

A Workbench LS-DYNA 2021 R1/R2 MPP RSM job fails with the errors below when the number of cores requested is larger than the maximum number of cores available on a single compute node.

Windows HPC:

    External operation: ‘submit’ has failed. This may or may not become a fatal error
    Microsoft.Hpc.Scheduler.Properties.SchedulerException: This job requires at least 64 cores, but the list of candidate nodes that the Job Scheduler service returned for this job contains only 0 cores. The Job Scheduler service determines the candidate node list using the following job properties: NodeGroup, RequestedNodes, MinMemoryPerNode, MaxMemoryPerNode, MinCoresPerNode, MaxCoresPerNode, and ExcludedNodes. Either reduce the number of resources that the job requires, or redefine the relevant job properties, and then submit the job again.
       at Microsoft.Hpc.Scheduler.Store.JobEx.Submit(StoreProperty[] submitProps)
       at CliTools.SubmitJob.Execute(List`1 argsIn)
       at CliTools.CommandVerbList.Execute(List`1 args)
       at CliTools.Program.RunList(CommandVerbList list, String[] args)
    Failed to submit job to cluster

SLURM Linux HPC cluster:

    External operation: ‘submit’ has failed. This may or may not become a fatal error
    sbatch: error: CPU count per node can not be satisfied
    sbatch: error: Batch job submission failed: Requested node configuration is not available
    Failed to submit job to cluster
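For reference, the SLURM rejection above can be reproduced outside of RSM by requesting more tasks on a single node than any node in the cluster provides (a minimal sketch; the 64-task request mirrors the log above, and the per-node core count of the cluster is assumed to be smaller than 64):

    # Ask SLURM for 64 tasks confined to one node; on a cluster whose nodes
    # have fewer than 64 cores, sbatch rejects the request at submission time.
    sbatch --nodes=1 --ntasks-per-node=64 --wrap="hostname"
    # sbatch: error: CPU count per node can not be satisfied
    # sbatch: error: Batch job submission failed: Requested node configuration is not available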

    • FAQ
      Participant

      This is a known bug (#497117) in 2021 R1 and R2 for LS-DYNA MPP RSM job submission using the new RSM API: it sets RSM_HPC_DISTRIBUTED=FALSE by mistake for LS-DYNA MPP jobs, so the scheduler is asked to place all requested cores on a single node. The bug is fixed in 2022 R1. A workaround is to set the environment variable USE_OLD_RSM_API=1 before starting Workbench 2021 R1/R2 on the client machine. For example, create a batch file on Windows with the content below:

          set USE_OLD_RSM_API=1
          start "" "C:\Program Files\ANSYS Inc\v212\Framework\bin\Win64\runwb2.exe"
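
      If Workbench is launched from a Linux client instead, the equivalent (a minimal sketch; the install location /ansys_inc/v212 is an assumption and may differ on your system) is to export the variable in the same shell before running runwb2:

          # Assumed 2021 R2 install path; adjust to your actual ANSYS installation
          export USE_OLD_RSM_API=1
          /ansys_inc/v212/Framework/bin/Linux64/runwb2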