{"id":6819,"date":"2025-11-07T13:58:52","date_gmt":"2025-11-07T02:58:52","guid":{"rendered":"https:\/\/www.leapaust.com.au\/blog\/?p=6819"},"modified":"2025-11-07T16:02:12","modified_gmt":"2025-11-07T05:02:12","slug":"how-to-approach-hardware-selection-in-2025","status":"publish","type":"post","link":"https:\/\/www.leapaust.com.au\/blog\/dem\/how-to-approach-hardware-selection-in-2025\/","title":{"rendered":"How to approach hardware selection in 2025 for Ansys CFD solvers (all budgets considered  &#8211; from laptops to clusters!)"},"content":{"rendered":"<div id=\"bsf_rt_marker\"><\/div>\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"6819\" class=\"elementor elementor-6819\" data-elementor-post-type=\"post\">\n\t\t\t\t<div class=\"elementor-element elementor-element-7ed579e7 e-flex e-con-boxed e-con e-parent\" data-id=\"7ed579e7\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-c21b6d2 elementor-widget elementor-widget-text-editor\" data-id=\"c21b6d2\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><em>Note: This was written in October 2025, but this is an ever-changing space, so please contact your local LEAP support engineers if you would like to discuss your specific needs!<\/em><\/p><p>Working at the bleeding-edge of engineering simulations, LEAP\u2019s engineers are frequently asked for advice from our clients across many different industries, who are looking to successfully navigate the balance between cost and optimal solver performance when purchasing new hardware. The following advice is primarily based on LEAP\u2019s accumulated recent experience with a particular focus on CFD workloads (Ansys Fluent), as FVM codes are capable of scaling incredibly well if the system is well designed. 
Other tools such as Ansys Mechanical or Electromagnetics have very similar requirements, with a few notable differences\/exceptions:<\/p><ul><li>FEA codes generally do not scale as well as FVM, so do not expect to see the same degree of speedup as shown in some of the graphs below.<\/li><li>Mech and EM both perform a large quantity of I\/O during the solve, thus it is imperative that the system has a very fast storage device (i.e. NVMe SSD) used as the scratch\/project directory.<\/li><li>Ansys EM (HFSS and Maxwell 3D) often require significantly more memory capacity than CFD.<\/li><\/ul><p><strong>General high-level advice to optimise your solver performance:<\/strong><\/p><ul><li>Memory bandwidth is the key performance specification in most situations. Populate every memory channel available in your system, and try not to exceed a ratio of ~4:1 cores per memory channel (or ~2:1 if looking for absolute maximum performance).<\/li><li>DDR5 memory is imperative \u2013 try to aim for high clock speeds (5600 MT\/s +).<\/li><li>Look for CPUs with larger cache pools \u2013 this increases the <em>effective<\/em> memory bandwidth.<\/li><li>High base CPU clock-speeds are important \u2013 do not buy low-wattage CPUs with high core-counts.<\/li><li>Ansys solvers should always be run on physical cores rather than individual threads (we often advise turning off multi-threading on dedicated workstations \/ servers).<\/li><li>Ensure your working directories and scratch locations are on high-speed NVMe storage devices.<\/li><li>Ensure you have a discrete GPU for pre\/post processing (except for headless solver nodes).<\/li><\/ul>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-1c39ae1 elementor-widget elementor-widget-image\" data-id=\"1c39ae1\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img 
fetchpriority=\"high\" decoding=\"async\" width=\"927\" height=\"190\" src=\"https:\/\/www.leapaust.com.au\/blog\/wp-content\/uploads\/2023\/12\/HP-Z-1.png\" class=\"attachment-full size-full wp-image-4972\" alt=\"\" srcset=\"https:\/\/www.leapaust.com.au\/blog\/wp-content\/uploads\/2023\/12\/HP-Z-1.png 927w, https:\/\/www.leapaust.com.au\/blog\/wp-content\/uploads\/2023\/12\/HP-Z-1-300x61.png 300w, https:\/\/www.leapaust.com.au\/blog\/wp-content\/uploads\/2023\/12\/HP-Z-1-768x157.png 768w\" sizes=\"(max-width: 927px) 100vw, 927px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-591a470 elementor-widget elementor-widget-text-editor\" data-id=\"591a470\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h5><strong>Small systems \u2013 laptop\/desktop for solving on 4 cores:<\/strong><\/h5><p>Here our goal is to obtain the maximum possible performance on 4 cores.<\/p><p><strong>Primary considerations for a laptop:<\/strong><\/p><ul><li>Choose a chassis with ample cooling (sorry, no thin\/light ultrabook style laptops here\u2026).<\/li><li>Look for CPUs with a suffix beginning with \u201cH\u201d \u2013 these are the high-power versions, typically with TDPs of 35 W or higher. Note that this is no longer a blanket rule, as some newer AMD laptop CPUs are branded \u201cAI\u201d or \u201cAI Max\u201d and no longer carry the H or U suffix.<\/li><li>Ensure that the CPU has at least 4 \u201cperformance\u201d cores \u2013 we do not want to run simulations on \u201cefficiency\u201d cores.<\/li><li>There is no benefit to super high core-counts. We do not recommend using HPC packs on laptops due to memory bandwidth constraints (most only have 2 memory channels). 
However, high-end CPUs often have increased clock-speeds and cache in addition to high core-counts, so you may find that the \u201cbest\u201d laptop CPUs have significantly more cores than you\u2019ll end up solving on.<\/li><li>AMD parts from 6000 series onwards support DDR5 \u2013 prefer Zen 5 cores, e.g. 9000 series or \u201cRyzen AI Max\u201d (which also has quad-channel memory).<\/li><li>Intel parts from 12<sup>th<\/sup>\u00a0generation onwards support DDR5 \u2013 prefer \u201cSeries 2\u201d e.g. 265HX or 285HX. (<a href=\"https:\/\/www.intel.com\/content\/dam\/support\/us\/en\/documents\/processors\/core\/Intel-Core-Comparsion.xlsx\">https:\/\/www.intel.com\/content\/dam\/support\/us\/en\/documents\/processors\/core\/Intel-Core-Comparsion.xlsx<\/a>).<\/li><li>We recommend a bare minimum of 32 GB of RAM, preferably 64 GB for the ability to launch larger jobs.<\/li><li>NVMe storage \u2013 at least 1 TB to fit OS, software, and project data.<\/li><li>Discrete GPU preferred for display\/visualisation purposes (Nvidia preferred \u2013 wider support amongst Ansys tools). However, some modern high-end laptop CPUs have suitable built-in graphics \u2013 such as the \u201cAMD Ryzen AI Max\u201d range.<\/li><\/ul><p><strong>For a gaming\/consumer-class 4+ core build:<\/strong><\/p><ul><li>\u2018Gaming\u2019 type systems are often quite suitable.<\/li><li>High clock speeds (base &gt; 4 GHz, turbo &gt; 5 GHz).<\/li><li>There is no benefit to super high core-counts. We do not recommend using HPC packs on gaming systems due to memory bandwidth constraints (only 2 memory channels available). However, high-end CPUs often have increased clock-speeds and cache in addition to high core-counts, so you may find that the \u201cbest\u201d gaming CPUs have significantly more cores than you\u2019ll end up solving on.<\/li><li>DDR5 memory (faster is better \u2013 e.g. 
6000 MT\/s), generally 64 GB is suitable \u2013 depending on the size of your simulation models.<\/li><li>For AMD systems, do not use more than 2 memory modules \u2013 the memory controller is not capable of running multiple modules per channel at maximum speed. This will likely limit you to 128 GB (2x 64 GB) when using consumer-grade CPUs.<\/li><li>NVMe storage \u2013 at least 1 TB to fit OS, software, and project data.<\/li><li>Discrete GPU for display\/visualisation purposes (Nvidia preferred \u2013 wider support amongst Ansys tools), e.g. Nvidia RTX 5060 Ti 16 GB, or Nvidia RTX 4000 Ada.<\/li><\/ul><p><strong>Both AMD and Intel have suitable options in this category:<\/strong><\/p><ul><li>AMD Ryzen 9000 series (Zen 5).<\/li><li>AMD 3D V-cache CPUs (9800X3D, 9950X3D) are particularly suitable as CFD workloads benefit greatly from large cache pools.<\/li><li>Intel Core Ultra 7 or 9.<\/li><\/ul>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-42a563f elementor-widget elementor-widget-text-editor\" data-id=\"42a563f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h5><strong>Small-medium systems \u2013 desktop\/workstation for ~12 solver cores:<\/strong><\/h5><p>While it can be tempting to use a \u2018gaming\u2019 type system as above, there are a couple of new considerations here:<\/p><ul><li>It is best to avoid hybrid processor architectures with \u201cperformance\u201d and \u201cefficiency\u201d cores \u2013 we want our simulations to be running on 12 identical high-performance cores.<\/li><li>Memory bandwidth needs to be considered. Gaming\/consumer CPUs only have 2 memory channels, which will cause a bottleneck when distributing a simulation across 12 cores (6 cores per channel). 
We prefer to limit the ratio of cores\/channel to ~4:1 (or preferably even ~2:1) as mentioned above.<\/li><\/ul><p><strong>For a dedicated workstation-class 12+ core build:<\/strong><\/p><ul><li>AMD Threadripper:<ul><li>Non-\u201cPro\u201d versions are suitable, as 4 memory channels are sufficient for these relatively low core counts.<\/li><li>Prefer 9000X series for Zen 5 cores and higher memory speeds.<\/li><\/ul><\/li><li>Intel Xeon:<ul><li>Xeon-W (Sapphire Rapids).<\/li><li>Many options, prioritise higher base clock speeds.<\/li><\/ul><\/li><li>Make sure to use at least 4 memory channels:<ul><li>i.e. populate the motherboard with 4 individual memory modules.<\/li><li>For example, if you want 128 GB of RAM, make sure to use 4x 32 GB DIMMs.<\/li><\/ul><\/li><li>NVMe storage \u2013 at least 1 TB to fit OS, software, and project data.<\/li><li>Consider a second storage device to handle a large quantity of project files.<\/li><li>Discrete GPU for display\/visualisation purposes (Nvidia preferred \u2013 wider support amongst Ansys tools), e.g. 
Nvidia RTX 5060 Ti 16 GB, or Nvidia RTX 4000 Ada.<\/li><\/ul>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-eeec29e elementor-widget elementor-widget-text-editor\" data-id=\"eeec29e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h5><strong>Medium systems \u2013 workstations for ~36 solver cores:<\/strong><\/h5><p>Similar requirements to the workstation-class build in the previous section, with a few modifications.<\/p><p><strong>For a dedicated workstation-class 36+ core build:<\/strong><\/p><ul><li>AMD Threadripper Pro:<ul><li>Do not use the non-\u201cPro\u201d versions \u2013 these only have 4 memory channels.<\/li><li>Prefer 9000WX series for Zen 5 cores and higher memory speeds.<\/li><\/ul><\/li><li>Intel Xeon:<ul><li>Xeon-W (Sapphire Rapids).<\/li><li>Many options, prioritise higher base clock speeds.<\/li><\/ul><\/li><li>Can also use server-grade parts (e.g. Xeon 6, AMD Epyc).<\/li><li>Make sure to use at least 8 memory channels:<ul><li>i.e. populate the motherboard with 8 individual memory modules.<\/li><li>For example, if you want 256 GB of RAM, make sure to use 8x 32 GB modules.<\/li><\/ul><\/li><li>NVMe storage \u2013 at least 1 TB to fit OS, software, and project data.<\/li><li>Consider a second storage device to handle a large quantity of project files.<\/li><li>Discrete GPU for display\/visualisation purposes (Nvidia preferred \u2013 wider support amongst Ansys tools), e.g. 
Nvidia RTX 5060 Ti 16 GB, or Nvidia RTX 4000 Ada.<\/li><\/ul>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-ae286d9 elementor-widget elementor-widget-text-editor\" data-id=\"ae286d9\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h5><strong>Large systems \u2013 workstations \/ servers \/ clusters for ~128+ cores:<\/strong><\/h5><p>Multi-CPU systems requiring server-grade parts. Larger core counts will require clustered nodes with high-speed interconnects. Our goal is to ensure that total system memory bandwidth is sufficient to support the increasing number of cores.<\/p><ul><li><strong>General advice:<\/strong><ul><li>With 12 channels of DDR5, we recommend using up to 48-core parts to maintain good scaling (~4:1 ratio of cores to channels).<\/li><li><strong>To maximise price-to-performance<\/strong>, you may wish to use 2x 64-core CPUs to squeeze the most out of 3 HPC packs without distributing your job across multiple nodes, but please note that the scaling is likely to drop off significantly due to the core-to-channel ratio exceeding ~4:1.<\/li><li><strong>To maximise outright performance<\/strong>, ideal scaling can be obtained by using many smaller CPUs and distributing jobs across multiple nodes \u2013 using a core-to-channel ratio of ~2:1 can maintain near-linear scaling performance from 1 to 100s (or 1000s) of cores. 
However, this can get very expensive very quickly\u2026!<\/li><li>Prioritise models with high base clock speeds and large cache.<\/li><li>I\u2019m sure you get the point by now, but make sure to populate every single memory channel available (8 per CPU for mid-range Intel, 12 per CPU for high-end Intel and AMD)<\/li><li>In multi-node configurations, make sure to use high-speed interconnects such as InfiniBand.<\/li><\/ul><\/li><\/ul>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-994e194 e-flex e-con-boxed e-con e-parent\" data-id=\"994e194\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t<div class=\"elementor-element elementor-element-41e99ce e-con-full e-flex e-con e-child\" data-id=\"41e99ce\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-b2a08c2 elementor-widget elementor-widget-text-editor\" data-id=\"b2a08c2\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<ul><li><strong>AMD Epyc:<\/strong><ul><li>9005 series (5<sup>th<\/sup> gen) preferred, although 9004 (4<sup>th<\/sup> gen) still suitable<\/li><li>Notable performance gains can be obtained with 3D V-Cache \u201cX\u201d series parts, which have additional L3 cache \u2013 see dedicated section below.<\/li><\/ul><\/li><\/ul><ul><li><strong>Intel Xeon:<\/strong><ul><li>Xeon 6 (Granite Rapids) preferred, although Sapphire Rapids still suitable.<\/li><\/ul><\/li><\/ul>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-2f0827e e-con-full e-flex e-con e-child\" data-id=\"2f0827e\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-e3c4725 elementor-widget 
elementor-widget-image\" data-id=\"e3c4725\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"247\" height=\"265\" src=\"https:\/\/www.leapaust.com.au\/blog\/wp-content\/uploads\/2023\/12\/cluster-computing-1.jpg\" class=\"attachment-full size-full wp-image-4973\" alt=\"\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-379efe2 e-flex e-con-boxed e-con e-parent\" data-id=\"379efe2\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-115c29d elementor-widget elementor-widget-text-editor\" data-id=\"115c29d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h5><strong>Notes on AMD 3D V-Cache<\/strong><\/h5><p>Single node performance can be significantly improved \u2013 many benchmarks (note: older gen 7003X series shown) infer speedups of 10 to 30%, with one extreme even reaching to 80%:<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-b633976 elementor-widget elementor-widget-image\" data-id=\"b633976\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"792\" height=\"558\" src=\"https:\/\/www.leapaust.com.au\/blog\/wp-content\/uploads\/2023\/12\/hardware-graph.png\" class=\"attachment-full size-full wp-image-4974\" alt=\"Ansys Fluent Single-Node Performance\" srcset=\"https:\/\/www.leapaust.com.au\/blog\/wp-content\/uploads\/2023\/12\/hardware-graph.png 792w, 
https:\/\/www.leapaust.com.au\/blog\/wp-content\/uploads\/2023\/12\/hardware-graph-300x211.png 300w, https:\/\/www.leapaust.com.au\/blog\/wp-content\/uploads\/2023\/12\/hardware-graph-768x541.png 768w\" sizes=\"(max-width: 792px) 100vw, 792px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-4bb19c4 e-flex e-con-boxed e-con e-parent\" data-id=\"4bb19c4\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-5fb1361 elementor-widget elementor-widget-text-editor\" data-id=\"5fb1361\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Multi-node performance can exhibit super-linear scaling \u2013 i.e. the speedup factor can exceed the number of nodes. As more nodes are added to the system, the total cache capacity increases accordingly, thus a higher proportion of the simulation is able to reside in cache rather than DRAM \u2013 one example below shows a speedup of 11x for a system consisting of 8 nodes:<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9e47d51 elementor-widget elementor-widget-image\" data-id=\"9e47d51\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"768\" height=\"634\" src=\"https:\/\/www.leapaust.com.au\/blog\/wp-content\/uploads\/2023\/12\/Ansys-Fluent-multi-node-superlinear-scaling-on-AMD.png\" class=\"attachment-full size-full wp-image-4975\" alt=\"\" srcset=\"https:\/\/www.leapaust.com.au\/blog\/wp-content\/uploads\/2023\/12\/Ansys-Fluent-multi-node-superlinear-scaling-on-AMD.png 768w, 
https:\/\/www.leapaust.com.au\/blog\/wp-content\/uploads\/2023\/12\/Ansys-Fluent-multi-node-superlinear-scaling-on-AMD-300x248.png 300w\" sizes=\"(max-width: 768px) 100vw, 768px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-19deb23 elementor-widget elementor-widget-text-editor\" data-id=\"19deb23\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h5><strong>GPU Solver Considerations<\/strong><\/h5><p>GPUs can offer dramatic speedups for codes that are optimised to run in massively parallel environments. Particle-type solvers such as LBM, DEM, SPH etc. or raytracing solvers such as SBR+ are well suited to GPUs as parallelisation of the code is trivial; however, this is considerably more difficult to achieve with traditional CFD and FEA methods such as FVM and FEM.<\/p><p>Ansys have recently released a native multi-GPU (FVM) solver for Fluent which currently supports features such as sliding mesh, scale-resolved turbulence, conjugate heat transfer, non-stiff reacting flows, and mild compressibility. Benchmarks reveal that one high-end GPU (e.g. Nvidia RTX 6000 Pro or A100\/H100) can offer similar performance to ~500-1000 CPU cores \u2013 thereby offering significant hardware cost savings and enabling users to run large simulations in-house rather than having to rely on external clusters \/ cloud compute.<\/p><p>Ansys Rocky is another tool which is well suited to GPUs. 
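To put the quoted "~500-1000 CPU cores per high-end GPU" range in perspective, a trivial back-of-envelope calculation can be sketched as follows (the function name and GPU counts are ours, purely illustrative; the equivalence range comes from the benchmarks quoted above):

```python
# Rough CPU-core equivalence of a multi-GPU Fluent setup, using the
# ~500-1000 cores-per-high-end-GPU range quoted above.
# Illustrative arithmetic only - real equivalence varies by model and case.

def gpu_equivalent_cores(n_gpus: int, low: int = 500, high: int = 1000) -> tuple[int, int]:
    """Return the (low, high) range of CPU cores that n_gpus roughly replaces."""
    return n_gpus * low, n_gpus * high

lo, hi = gpu_equivalent_cores(4)
print(f"4 high-end GPUs ~ {lo}-{hi} CPU cores")  # 4 GPUs ~ 2000-4000 cores
```

In other words, even a single multi-GPU node can stand in for a cluster-scale CPU core count for the supported feature set.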
Rocky is primarily a DEM solver, capable of handling complex solid motion involving millions of particles, but it also has an SPH solver for modelling the fluid motion of free-surface flows.<\/p><p><strong>Please see our separate blog for more information on GPU selection for Fluent and Rocky!<\/strong><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6f8df592 elementor-widget elementor-widget-text-editor\" data-id=\"6f8df592\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h5><strong>Additional notes on Operating Systems:<\/strong><\/h5><p>The vast majority of Ansys tools can be run on either Windows or Linux (for a full list of supported platforms, please check the Ansys website:\u00a0<span style=\"text-decoration: underline;\"><a href=\"https:\/\/www.ansys.com\/it-solutions\/platform-support\" target=\"_blank\" rel=\"noopener\">Platform Support and Recommendations | Ansys<\/a><\/span>). We generally recommend performing pre- and post-processing on local Windows machines for a number of reasons:<\/p><ul><li>Ansys SpaceClaim and Discovery are currently only available on Windows.<\/li><li>Display drivers tend to be more mature\/robust on Windows.<\/li><li>Input latency can be an issue if trying to manipulate large models over a remote connection to a workstation\/server.<\/li><li>Pre\/post can be a waste of compute resources on a server \u2013 it is often desirable to keep as close to 100% solver uptime as possible.<\/li><\/ul><p>Windows is also suitable for solving on small to moderately large machines (e.g. ~32-core workstations); however, we generally recommend dedicated Linux servers for larger solver machines and clusters \u2013 particularly if a multi-user environment is desired (e.g. 
queuing systems etc.).<\/p><p><strong>\u00a0<\/strong><\/p><p><strong>Additional notes on Cloud Computing:<\/strong><\/p><p><strong>Stay tuned for our separate blog on Cloud Computing coming online in November 2025!<\/strong><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Working at the bleeding-edge of engineering simulations, LEAP\u2019s engineers are frequently asked for advice from our clients across many different industries, who are looking to successfully navigate the balance between cost and optimal solver performance when purchasing new hardware. The following advice is primarily based on LEAP\u2019s accumulated recent experience with a particular focus on CFD workloads (Ansys Fluent), as FVM codes are capable of scaling incredibly well if the system is well designed.<\/p>\n","protected":false},"author":4,"featured_media":4831,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","neve_meta_reading_time":"","footnotes":""},"categories":[323,161],"tags":[171,174,339,200,435,439,281,282,492,501],"class_list":["post-6819","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cfd","category-dem","tag-ansys","tag-ansys-cfd","tag-ansys-rocky","tag-dem","tag-hardware","tag-hpc","tag-rocky","tag-rocky-dem","tag-solver-performance","tag-tips"],"_links":{"self":[{"href":"https:\/\/www.leapaust.com.au\/blog\/wp-json\/wp\/v2\/posts\/6819","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.leapaust.com.au\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.leapau
st.com.au\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.leapaust.com.au\/blog\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.leapaust.com.au\/blog\/wp-json\/wp\/v2\/comments?post=6819"}],"version-history":[{"count":5,"href":"https:\/\/www.leapaust.com.au\/blog\/wp-json\/wp\/v2\/posts\/6819\/revisions"}],"predecessor-version":[{"id":6825,"href":"https:\/\/www.leapaust.com.au\/blog\/wp-json\/wp\/v2\/posts\/6819\/revisions\/6825"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.leapaust.com.au\/blog\/wp-json\/wp\/v2\/media\/4831"}],"wp:attachment":[{"href":"https:\/\/www.leapaust.com.au\/blog\/wp-json\/wp\/v2\/media?parent=6819"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.leapaust.com.au\/blog\/wp-json\/wp\/v2\/categories?post=6819"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.leapaust.com.au\/blog\/wp-json\/wp\/v2\/tags?post=6819"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
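The cores-per-memory-channel sizing rule that runs through the advice above can be condensed into a few lines of Python (a minimal sketch; the helper name and system labels are ours, while the ~4:1 and ~2:1 ratios are the article's guidance):

```python
# Sketch of the sizing heuristic described above: populate every memory
# channel, and keep the cores-per-channel ratio at ~4:1 (or ~2:1 when
# chasing maximum per-core performance). Names are illustrative only.

def max_recommended_cores(memory_channels: int, max_performance: bool = False) -> int:
    """Upper bound on solver cores for a given number of populated channels."""
    cores_per_channel = 2 if max_performance else 4
    return memory_channels * cores_per_channel

# Examples mirroring the system classes discussed above:
for label, channels in [("gaming desktop", 2), ("Threadripper", 4),
                        ("Threadripper Pro / Xeon-W", 8), ("12-channel server CPU", 12)]:
    print(f"{label}: up to {max_recommended_cores(channels)} solver cores "
          f"({max_recommended_cores(channels, max_performance=True)} for max per-core perf)")
```

Note that the 12-channel case reproduces the "up to 48-core parts" recommendation for large systems, and the 2-channel case explains why laptops and gaming desktops top out at a handful of solver cores.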