Electronics

Topics related to HFSS, Maxwell, SIwave, Icepak, Electronics Enterprise and more.

How to make the available GPU on my Desktop to be used for direct solver simulations in ANSYS HFSS ?

    • mahesh2444
      Subscriber
      I am having trouble getting the GPU in my desktop to be used.
      My desktop configuration: Intel i7 9th-gen hex-core CPU, 16 GB RAM, Nvidia RTX 2080 Ti GPU.
      How can I force ANSYS EDT to make use of my powerful RTX card?
      I have read all the GPU settings in the ANSYS Help under the HPC Administrator's Guide. It mentions that Tesla and Quadro series cards are recommended, but not compulsory, for hardware acceleration.
      So I am wondering whether there is any way to force HFSS to use my GPU regardless of the conditions listed under "Why the GPU Is Not Used" in the same section of the ANSYS Help. In terms of clock frequency, the RTX card outperforms the CPU in my desktop.
      Please respond if anyone knows how to enable the GPU on my desktop.
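      Before digging into the HFSS HPC options, it can help to confirm that the OS and driver see the card at all. A minimal sketch, assuming the standard `nvidia-smi` query flags; the sample string stands in for the real command output:

```python
# Parse one CSV line of output from:
#   nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
# In practice you would capture the real command output via subprocess;
# the sample line below is illustrative only.
sample = "GeForce RTX 2080 Ti, 11264 MiB"

name, mem = (field.strip() for field in sample.split(","))
print(name, "/", mem)  # card model and total VRAM as the driver reports them
```

      If the driver does not list the card here, no HFSS setting will make it usable.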
    • mahesh2444
      Subscriber
      Could you please look into this?
      Thanks,
      Mahesh
    • ANSYS_MMadore
      Ansys Employee
      The GPU will only be used in specific analyses. If you enable the GPU, it will be used if possible. It cannot be forced: it will be used only if supported, and will not be used if unsupported. You can refer to the HFSS Help documentation for further information on GPU support.
      Thanks,
      Matt
    • mahesh2444
      Subscriber

      (quoting ANSYS_MMadore's reply above)

      Yes, Matt, I agree with you, but the RTX 2080 Ti with 11 GB of VRAM is no lesser card than the Nvidia Quadro K6000 series. So I want to know why it isn't using this card for acceleration.
    • ANSYS_MMadore
      Ansys Employee
      The RTX is not a supported device and is untested. You should use a supported device for GPU acceleration; use of other devices is unsupported.
    • ANSYS_MMadore
      Ansys Employee
      https://ansyshelp.ansys.com/Views/Secured/Electronics/v202/en/Subsystems/HFSS/HFSS.htm#HFSS/GPUAcceleration.htm%3FTocPath%3DHFSS%2520Help%7CHigh%2520Performance%2520Computing%7C_____12
      https://ansyshelp.ansys.com/Views/Secured/Electronics/v202/en/Subsystems/HFSS/HFSS.htm#HPC/EnablingGPU.htm%3FTocPath%3DHFSS%2520Help%7CHigh%2520Performance%2520Computing%7CDistributed%2520Analysis%7CDistributed%2520Analysis%2520Configuration%7C_____4
      https://ansyshelp.ansys.com/Views/Secured/Electronics/v202/en/Subsystems/HFSS/HFSS.htm#../Subsystems/HPC_Admin/Content/GPUs.htm%3FTocPath%3DHPC%2520Administrator's%2520Guide%7CGeneral%2520Considerations%7C_____7
    • mahesh2444
      Subscriber

      (quoting ANSYS_MMadore's reply above)

      I agree with you, Matt, but I was hoping there is a way to use the RTX to speed up simulations, perhaps with the help of someone on this forum. It seems there isn't any way to achieve this.
      Thanks, Matt
    • AndyJP
      Subscriber
      [Post removed by a moderator.] I recommend not reposting the same content.
    • AndyJP
      Subscriber
      Aha, so they no longer allow GeForce, for the reason above ↑. Not very friendly to customers, you know.

      @Rob, if you don't like such posts, just give a clear explanation of WHY GeForce cards worked in previous versions and don't now. Otherwise this topic will come up again and again. Customers want to know!

      GeForce and Quadro are architecturally the same boards with different names and a few ID resistors placed differently.

      The "same content" included the exact requirements for GPU acceleration to work, which the moderators do not tell you here. I would offer to send you a PM with the answer, but there is no PM on this forum.
    • Rob
      Forum Moderator
      The reasoning, very simply, is we support the more powerful boards that are fitted to the bigger clusters, and not the (possibly more common) boards in the gaming PCs. Small changes in chip/board architecture may require a significant code rewrite, and where the RAM is slightly lower the GPU may not be worth using. We no longer support certain operating systems for much the same reason.

      We only test so many boards and chips, so the gaming boards may work but we just haven't tried it. In terms of the moderation, we're fine to discuss where we can, and if we can't, please respect that it's often due to the limitations we work under on here, and don't repost.

      Re the post I removed, the wording was out of line. The above question isn't, but using bold isn't necessarily going to do anything other than annoy people. Remember that we (the community) read the forum in English, but with many different native languages (which is why we allow the Americans to spell colour without the u). What may be acceptable in your location may not be elsewhere.
    • mahesh2444
      Subscriber
      Hey, can you mail me that answer at maheshbabu2444@gmail.com?
      Regards,
      Mahesh
    • mahesh2444
      Subscriber
      Hey, please don't take his comment the wrong way; he is just tired of not being able to use a powerful GeForce card compared to those on the supported cards list.
      If possible, please ask him to contact me personally at the email address I posted in the comment above. As you said, certain things are not allowed on this forum, so we will communicate privately.
      I hope you will understand my concern and help me connect with him.
      Regards,
      Mahesh
    • Rob
      Forum Moderator
      I'll leave the email up, but please keep the comments (or, failing that, the conclusions) on here so others can benefit.
    • mahesh2444
      Subscriber
      Thanks for understanding my concern. I will definitely keep the community updated through my comments in a swift manner.
      I am hoping to connect with him with your help.
      Regards,
      Mahesh
    • AndyJP
      Subscriber
      > The reasoning, very simply, is we support the more powerful boards that are fitted to the bigger clusters, and not the (possibly more common) boards in the gaming PCs. Small changes in chip/board architecture may require a significant code rewrite, and where the RAM is slightly lower the gpu may not be worth using. We no longer support certain operating systems for much the same reason.

      1) GeForce xx70, xx80, xx90, and Titan boards are more powerful than any Quadro or Tesla. You don't need a doctoral degree to check the specs. Quadro's unique feature is extended multi-monitor support, which is not used by HFSS and Maxwell. Ten years ago Quadro provided more (slower) RAM, but today the GeForce boards mentioned have enough onboard RAM. In other words, those GeForce boards are the exact equivalent of Quadro boards and can be converted into each other simply by changing the device ID, without any code rewrite.

      2) Bigger clusters call for more of the cheaper GeForce boards. Every company tries to reduce costs, and paying for unnecessary, expensive CGI features is strongly undesirable.

      3) "Small changes in chip/board architecture may require a significant code rewrite" is simply NOT TRUE. Nvidia, like Intel, does not provide any microcode documentation; it is a highly protected company secret. Developers use standard APIs. The CUDA API became popular precisely because it does not require significant code rewrites. Period.

      > We only test so many boards and chips, so the gaming boards may work but we just haven't tried it.

      We did not ask for any testing or support. Just why do you forcefully block the use of GeForce in the software? You did not do that 5-7 years ago, when the GPU acceleration trend started.
      - Is it your decision to lock the hardware by device ID?
      - Is it a request from Nvidia?

      > and where the RAM is slightly lower the gpu may not be worth using

      Why don't you leave that to the customer?
      GeForce RTX 3080 - 10 GB RAM, $1100.
      Quadro RTX 4000 - 8 GB, $1100.
      Tell me which is more powerful and which has more RAM onboard.

      In short: when these boards rolled out, we could have bought twice as many GeForce boards as Quadros, and consequently used the remaining funds for HFSS licenses. But we did not.

      mahesh2444: since Rob tells me it is just the wording, I will say it here. HFSS 2020 uses the GPU only for models with 100% isotropic dielectrics. Whenever you have anisotropy, it disables the GPU code. When I simulate ferrites in driven modal, I have to move to a single-CPU 8-core workstation, because single-CPU high-clock machines are simply faster in HFSS. A cluster offers a fast sweep, but again, it is better to run one task with 4-6 cores per workstation than to combine all the tasks on one machine.
      When you have all pure dielectrics, you will get a huge benefit from the GPU on a model of any size. Just be sure to use a local license, or tune your licensing server/network well, because you will feel the penalty of license negotiation with the license server.
      Eigenmode still does not support the GPU, to my disappointment.
      Transient seems to support the GPU, but I do not use it enough to check in detail. Transient still does not support any anisotropy, even in the CPU code; try defining a matrix property and it will throw an error.
      I have not tried the GPU with Maxwell yet: I have no license on my workstation, and where we do have a license there is no serious GPU to try. Anyway, Maxwell does not take nearly as much time as HFSS.
    • mahesh2444
      Subscriber
      I am just trying to simulate an antenna on a Rogers substrate (from the material library). Any clue why it isn't using the GPU? I am using a wave port in a driven modal solution. I have limited access to a university workstation where a Tesla card is used for simulations; on that machine GPU acceleration worked. I am wondering why my RTX is not used.
      Regards,
      Mahesh
    • AndyJP
      Subscriber
      GeForce RTX 3090 - 24 GB, $1900, vs. Quadro RTX 5000 - 16 GB, $2700, and Quadro RTX 6000 - 24 GB, $5500.
      Not to mention that Quadro boards are underclocked compared to GeForce.
      Competition in the gaming market simply demands top performance. In the professional market there is no competition, so the boards are overpriced and inferior to their gaming equivalents. CUDA is the same everywhere, or Nvidia would simply lose the competition to AMD. Yes, CUDA versions differ, but all boards support the latest CUDA version at the time of their release, and they are generally up- and down-compatible from the Kepler architecture to this day.
    • AndyJP
      Subscriber
      $5500 Quadro RTX 6000: GPU 1770 MHz, RAM 24 GB 384-bit 1750 MHz, 672 GiB/s, CUDA 7.5, 4608 cores, single 16.3 Tflop, double 509.8 Gflop.
      $3800 Nvidia Titan RTX: GPU 1770 MHz + factory overclock, RAM 24 GB 384-bit 1750 MHz + factory overclock, 672 GiB/s, CUDA 7.5, 4608 cores, single 16.3 Tflop, double 509.8 Gflop.
      $1900 GeForce RTX 3090: GPU 1770 MHz, RAM 24 GB 384-bit 2438 MHz, 936 GiB/s, CUDA 8.6, 10496 cores, single 35.7 Tflop, double 1250 Gflop.
      Tell me, which would you choose if it were not locked out in the software?
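      The single-precision figures in the post above can be sanity-checked from cores and clock: each CUDA core can retire one fused multiply-add (2 FP32 operations) per cycle, so peak TFLOPS ≈ cores × clock (GHz) × 2 / 1000. A minimal sketch (the 1.70 GHz boost clock assumed for the RTX 3090 is an estimate consistent with the quoted 35.7 Tflop figure):

```python
def peak_fp32_tflops(cuda_cores, boost_ghz):
    """Theoretical FP32 peak: 2 ops (one fused multiply-add) per core per cycle."""
    return cuda_cores * boost_ghz * 2 / 1000.0

# Quadro RTX 6000 / Titan RTX: 4608 cores at 1.77 GHz
print(round(peak_fp32_tflops(4608, 1.77), 1))    # ~16.3, matching the post

# GeForce RTX 3090: 10496 cores at an assumed ~1.70 GHz boost
print(round(peak_fp32_tflops(10496, 1.70), 1))   # ~35.7, matching the post
```

      Note this is a theoretical ceiling; a memory-bound solver will be limited by bandwidth (672 vs. 936 GiB/s above) well before it reaches it.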
    • AndyJP
      Subscriber
      mahesh2444, I have the same question. GeForce is superior to most Quadro/Tesla cards, but it appears to be banned from use in EDT/HFSS 2020.
      With my Quadro RTX 5000 I was calculating power splitters on the GPU before finishing a box of popcorn. With a GeForce I wouldn't even have time for a coffee break.
    • Rob
      Forum Moderator
      Looking at the above cards: for home use they're all too expensive. For work use, I'd want to see the service contract, period, and power/heat load, and whether they're supported and fit into the racks. Remember, clusters usually run 24/7 for 2-3 years with a service contract and may be expected to last around 5 years before replacement.
      We haven't banned any cards; we simply haven't coded for them. However, we do code to take advantage of certain features and board architectures. I don't know how the cards work: in Ansys we have a department for that, and at home a friend builds PCs. My knowledge stops at "electric goes in one end, heat and pretty pictures come out of various bits of the card"; I use the pretty pictures to figure out how to cool the hot bits based on what temperature they're happy at.
      So, by all means help mahesh2444 to see if there's a driver or library fudge to let the GeForce card work, and report back whether the smoke escapes from the board or it works. Leave the banned cards and the like for another platform or the playground.
    • AndyJP
      Subscriber
      > we simply haven't coded for them.

      That's the same thing: if you hardcoded a narrow list of cards, you have banned all other cards. You don't need to code a card list at all. CUDA code works with any card supporting the compiled version; AFAIK the driver reports the supported version number. If it was compiled for the Turing (TU) architecture with CUDA 7.5, it should and will work on ANY Turing-based card, and on many others, depending on Nvidia's code compatibility list.

      > certain features and board architecture

      Again: the exact board architecture is a top secret protected by Nvidia with machine guns and ninjas. All that developers have on hand are standard APIs, and unofficial hacks that cannot be used legally.

      > library fudge to let the GeForce card work,

      I want to know myself. In the post you deleted, I was speculating about the reasons. It may be a limited upward compatibility for older GeForce cards. That is a tricky thing, as I understand it (I may be mistaken here).
      Starting from Kepler, Nvidia boards have good upward compatibility in the SDK (compiler); i.e., code compiled for CUDA up to 7.5 will work on Kepler cards, which natively support version 3.5. Starting from CUDA SDK 11 (CUDA 8.6), Kepler was removed from the compatibility list. But that is not the case with HFSS 2020, which simply could not have used CUDA 8.6, released later. Therefore any board with a Kepler or newer architecture should be compatible with HFSS, or at least every Quadro board, which is what we see.
      But Nvidia's compatibility policies are tricky. It may happen that code compiled with SDK 10 opts out Kepler-class GeForce device IDs.
      That does not explain why Maxwell, Pascal, Volta, and native Turing-class GeForce boards do not work in HFSS 2020. Personally, I could not find a Pascal-based GTX card around, but it seems other users find them not working with HFSS either.
      I would like to see this topic clarified.
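      The compatibility rules discussed in the post above can be summarized roughly as follows. This is a sketch of my understanding of Nvidia's documented model, not of anything HFSS-specific: compiled SASS (cubin) binaries run only on the same major compute-capability generation, while PTX embedded in the binary can be JIT-compiled by the driver for any equal-or-newer capability:

```python
def cubin_runs_on(compiled_cc, device_cc):
    """SASS (cubin) binary compatibility: same major architecture,
    and the device's minor revision must be >= the one targeted."""
    return compiled_cc[0] == device_cc[0] and device_cc[1] >= compiled_cc[1]

def ptx_runs_on(compiled_cc, device_cc):
    """PTX forward compatibility: the driver JIT-compiles embedded PTX
    for any device of equal or newer compute capability."""
    return device_cc >= compiled_cc

print(cubin_runs_on((7, 5), (8, 6)))  # False: Turing cubin, Ampere device
print(ptx_runs_on((7, 5), (8, 6)))    # True: PTX is forward-compatible
print(cubin_runs_on((3, 5), (3, 7)))  # True: same Kepler generation, newer minor
```

      So whether a given card runs an application depends on which architectures and PTX versions the vendor chose to embed at compile time, which is consistent with the observation that some GeForce generations stopped working between releases.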
  • The topic ‘How to make the available GPU on my Desktop to be used for direct solver simulations in ANSYS HFSS ?’ is closed to new replies.