ANSYS Hardware Recommendations

We understand that choosing hardware for cutting-edge computer-aided engineering software can be challenging: hardware technology improves rapidly, which makes it difficult to stay up to date with industry best practices. This page is designed to help you get started with your hardware selection, maximise the performance of the various ANSYS physics solvers, and keep pace with changes in each new software release.

Note that ANSYS is vendor neutral and partners closely with all of the major hardware vendors. Towards the bottom of this page you can also find vendor-specific hardware suggestions.

If you would like to talk to our engineers about how to choose the right hardware configuration for your needs, please contact LEAP today.

EBU (Electronics)

Intel Xeon Broadwell – E5-26xx v4 series (choose a high clock speed, though not necessarily the highest available – optimise based on cost per core and HPC licence availability)
Note: HFSS is primarily an in-core solver, so clock speed and RAM speed are important

  • Turn off Hyper-threading
  • Leave Turbo boost on

Memory (the fastest available, currently 2400 MHz)

  • To operate at maximum speed all memory channels in both processors should be populated with equal amounts of memory.
  • 8 GB per core for HFSS, Maxwell, and other electronics packages, or a minimum of 128 GB – 256 GB total
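
The per-core memory rule above translates into a quick sizing check. This is a minimal sketch: the function name and its defaults are our own illustration, while the 8 GB per core figure and the 128 GB floor come from the guidance above.

```python
def recommended_ram_gb(cores, gb_per_core=8, minimum_gb=128):
    """Suggested total RAM (GB) for a solve node: per-core rule with a floor."""
    return max(cores * gb_per_core, minimum_gb)

print(recommended_ram_gb(16))  # 128 (16 cores x 8 GB)
print(recommended_ram_gb(32))  # 256
print(recommended_ram_gb(8))   # 128 (floor applies: 8 x 8 = 64 < 128)
```

Remember to spread the resulting total evenly across all memory channels of both processors, as noted above, so the memory runs at full speed.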

Interconnect

  • QDR or FDR InfiniBand interconnect if you will be running in parallel across 2 or more nodes on a cluster (10 Gb Ethernet at a minimum)

Hard Drives

  • If possible, use SSDs for the solving drive and/or multiple 15,000 RPM SAS HDDs (2 or 4) in a RAID 0 array. A local disk is recommended for the solver's temporary file I/O during solves. (Please see below for more hard drive information)

Graphics

  • Nvidia Quadro M4000, M5000, K5200, Nvidia Quadro M6000

GPU (HFSS, Maxwell Transient, Matrix Solvers)
Nvidia Tesla Kepler series (K20, K20X, K40, K80) – the best speed-up is seen with the Kepler series
Nvidia Quadro K5000, K5200, K6000

CFD (Fluids)

Intel Xeon Broadwell – E5-26xx v4 series (choose a high clock speed, though not necessarily the highest available – optimise based on cost per core and HPC licence availability)

  • Turn off Hyper-threading
  • Leave Turbo boost on

Memory (the fastest available, currently 2400 MHz)

  • To operate at maximum speed all memory channels in both processors should be populated with equal amounts of memory.
  • 4 GB per core for CFD

Interconnect

  • QDR or FDR InfiniBand interconnect if you will be running in parallel across 4 or more nodes

Hard Drives

  • CFD solvers do not perform much disk I/O, so SATA drives are sufficient

Graphics

  • Nvidia Quadro M4000, K5200, Nvidia Quadro M6000

GPU (CFX does not support the use of GPU. GPU is only applicable for Fluent)

  • Nvidia Quadro K5000, K5200, K6000
  • Kepler, Fermi and Tesla cards (K80 recommended for Fluent, if applicable). Note: please consult the ANSYS Help for the card models supported for Fluent GPU acceleration

Mechanical

Intel Xeon Broadwell – E5-26xx v4 series (choose a high clock speed, though not necessarily the highest available – optimise based on cost per core and HPC licence availability)

  • Turn off Hyper-threading
  • Leave Turbo boost on

Memory (the fastest available, currently 2400 MHz)

  • To operate at maximum speed all memory channels in both processors should be populated with equal amounts of memory.
  • 8 GB per core

Note: Purchasing more RAM will alleviate I/O bottlenecks, as (1) the solver can use this memory to avoid doing as much I/O, and (2) the operating system can use available RAM to cache or buffer the I/O that MAPDL writes.

Interconnect

  • QDR or FDR InfiniBand interconnect if you will be running in parallel across 2 or more nodes

Hard Drives

  • If possible, use SSDs for the solving drive and/or multiple 15,000 RPM SAS HDDs (2 or 4) in a RAID 0 array.

Note: The faster the hard drives, the faster the simulation. For DMP, the program writes one set of files for each core it runs on, so SSDs with lower seek times help avoid waiting while the drive seeks to read/write all of those different files. (See below for more information about hard drives)

Graphics

  • Nvidia Quadro K5200, Nvidia Quadro M6000

GPU

  • We recommend the Kepler K40 (or the K80 if models exceed 5 million DOF)
  • Nvidia M4000, K5000, K5200, K6000

Additional Hard drive information (Electronics and Mechanical)

I/O to the hard drive is the third component of a balanced system. A well-balanced system can extend the size of models that can be solved if it uses properly configured I/O components. If ANSYS simulations are able to run with all file I/O cached by a large amount of physical memory, then disks can be chosen for storage capacity rather than performance.

A good rule of thumb is to have 10 times more disk space than physical memory available for your simulation. With today's large-memory systems, this can easily mean disk storage requirements of 500 GB to 1 TB. However, if you routinely use physical disk storage to solve large models, a high-performance file system can make a huge difference in simulation time.
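
The 10x rule of thumb above can be sketched as a quick calculation. The function name and the example RAM sizes are illustrative; the factor of 10 comes from the guidance above.

```python
def suggested_scratch_disk_gb(ram_gb, factor=10):
    """Disk space (GB) to budget for simulation scratch files: ~10x physical RAM."""
    return ram_gb * factor

print(suggested_scratch_disk_gb(64))   # 640
print(suggested_scratch_disk_gb(100))  # 1000, i.e. ~1 TB
```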

A high-performance file system could consist of solid-state drives (SSDs) or conventional spinning hard disk drives (HDDs). SSDs typically offer superior performance over HDDs, but bring other factors to consider, such as cost and mean time to failure. Like HDDs, multiple SSDs can be combined in a RAID 0 array. Maximum performance is obtained when the ANSYS simulation runs on a RAID 0 array of 4 or more disks that is a separate disk partition from other system file activity; for example, on a Windows desktop the drive containing the operating system files should not be part of the RAID 0 configuration. This is the optimal recommended configuration. Many hardware vendors do not currently offer separate RAID 0 arrays in their advertised configurations; even so, a standard RAID 0 configuration is still faster for ANSYS simulations than a single drive.

Servers versus Workstations

  • If you have (or will obtain in the future) 32 or more parallel licences (HPC Packs or Enterprise licences), and one or more users need to submit jobs at a higher core count than is available on current workstations, we tend to recommend servers (or a cluster); a supported job scheduler is then required.
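
As a rough illustration of why 32 licences is the tipping point: ANSYS HPC Packs have historically enabled cores on a quadrupling scale (1 pack for 8 cores, 2 packs for 32, 3 packs for 128). The formula below is our own sketch of that pattern, not an official licensing rule; confirm current pack entitlements with your ANSYS licensing documentation.

```python
def hpc_pack_cores(packs):
    """Parallel cores enabled by n stacked HPC Packs (8, 32, 128, ... pattern)."""
    return 2 * 4 ** packs

for n in (1, 2, 3):
    print(n, hpc_pack_cores(n))  # 1 8, then 2 32, then 3 128
```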

Suggested Hardware Configurations from major vendors:

DELL
Workstations
Dell Precision T5810, T7810, T7910

Servers
Dell PowerEdge R630, M630 (blade), R730, C6320.

HP
Workstations
HP Z640 or Z840

Servers
HP BL460c (or the HP WS460c Graphics Expansion blade, which is focused on GPU and graphics rendering), Apollo 2000

LENOVO
Workstations
ThinkStation P700, P900
ThinkPad P70

Supported Job Schedulers

  • Microsoft HPC Server 2008 R2 SP4 (support for the 2008 Server version will be dropped in the second quarter of 2016) and HPC Server 2012 R2 with Patch 3 (2012 R2 recommended)
  • IBM Platform LSF (Linux)
  • Altair PBS Pro (Linux)
  • Univa (SGE) (Linux)
  • Torque with Moab (Linux) – open-source software from Adaptive Computing; for support and setup please visit their website