Choosing a Computer to Run MATLAB and Simulink Products
Predicting how MATLAB will perform while running an application on a particular computer is difficult. MathWorks offers this general guidance on platform selection criteria and emphasizes that it is not a substitute for testing your application on a particular computer.
MATLAB performance is similar on Windows^{®}, Mac OS^{®} X, and Linux^{®}, although differences can occur among platforms for the following reasons:
 MathWorks builds its products with a different compiler on each platform, and each has its own performance characteristics.
 MathWorks incorporates third-party libraries into its products that may perform differently on each platform.
 The operating systems perform differently, especially in the case of disk- or graphics-intensive operations.
In general, performance differences in operating system releases (for example, between Windows 7 and Windows 8) are negligible.
Each component of a typical computer configuration has an impact on MATLAB performance.
Central Processing Unit (CPU)
Computers with more CPU cores can outperform those with a lower core count, but results will vary with the MATLAB application. MATLAB automatically uses multithreading to exploit the natural parallelism found in many MATLAB applications. But not all MATLAB functions are multithreaded, and the speedup varies with the algorithm. For additional capability, Parallel Computing Toolbox offers parallel programming constructs that more directly leverage multiple computer cores.
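As a hedged sketch of the kind of construct Parallel Computing Toolbox provides, a parfor loop distributes independent iterations across CPU cores (the workload shown is illustrative; without Parallel Computing Toolbox the loop simply runs serially):

```matlab
% Illustrative only: distribute independent iterations across CPU cores.
% Requires Parallel Computing Toolbox for actual parallel execution.
n = 200;
results = zeros(1, n);
parfor k = 1:n
    % Each iteration is independent, so iterations can run on different workers.
    results(k) = max(abs(eig(rand(100))));
end
```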
MATLAB performance is dependent on the presence of floating-point hardware. On many CPUs, the number of floating-point units (FPUs) equals the number of CPU cores. However, on some processors, a single FPU may be shared between multiple CPU cores, potentially creating a performance bottleneck.
Simultaneous multithreading (hyper-threading) gives the appearance that a computer has twice as many cores as it actually has. These virtual cores may modestly improve overall system performance, but they are likely to have little effect on the performance of MATLAB applications. When you use a tool such as Windows Task Manager, MATLAB may appear to use only half of the CPU cores available on the computer, when in fact the "unused" half consists of the virtual cores created by hyper-threading.
Memory
Your computer can suffer performance degradation due to thrashing when MATLAB and the programs you run concurrently with it use more than the available physical memory and your computer must resort to virtual memory. If, while running a MATLAB application, you find your computer is using little of the CPU, you may be experiencing thrashing. To detect thrashing on a Windows platform, use Windows Performance Monitor. On a Mac, use Activity Monitor.
Hard Disk
The hard disk speed is a significant factor in MATLAB startup time. Once MATLAB is running, disk speed is only a factor if a MATLAB application's performance profile is dominated by file I/O, or if your system is using virtual memory (see Memory section). For disk-intensive MATLAB applications, or to improve the startup time of MATLAB, you can take advantage of technologies such as solid-state drives or RAID.
Graphics Processing Unit (GPU) for display
MATLAB graphics are rendered using OpenGL technology, so a graphics card with superior OpenGL support can outperform a lesser card. Up-to-date drivers are recommended for the best visual appearance and robustness.
Graphics Processing Unit (GPU) for computation
To speed up computation, Parallel Computing Toolbox leverages NVIDIA GPUs with compute capability 3.0 or higher. For releases R2017b and earlier, compute capability 2.0 is sufficient. For releases R2014a and earlier, compute capability 1.3 is sufficient.
For the compute capabilities of all NVIDIA GPUs, see CUDA GPUs (NVIDIA). MATLAB does not support computation acceleration using AMD or Intel GPUs at this time.
Benchmarking Your Program
MATLAB provides a built-in benchmarking utility called bench that provides a general sense of MATLAB performance on a particular computer, but it cannot reliably predict how any particular MATLAB application will run. Use the MATLAB function timeit to help produce reliable and repeatable performance benchmarks. Use gputimeit to benchmark GPU code.
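A minimal sketch of benchmarking the same operation on the CPU and the GPU with timeit and gputimeit (matrix size and operation are illustrative; the GPU path requires Parallel Computing Toolbox and a supported GPU):

```matlab
% Time an operation reliably on the CPU and the GPU.
A = rand(1000);                 % CPU data
tCPU = timeit(@() A * A);       % timeit handles warm-up and repetition

G = gpuArray(A);                % same data, transferred to the GPU
tGPU = gputimeit(@() G * G);    % gputimeit synchronizes the GPU correctly

fprintf('CPU: %.4f s, GPU: %.4f s\n', tCPU, tGPU);
```

Using gputimeit rather than timeit for GPU code matters because GPU operations run asynchronously; gputimeit waits for the device to finish before stopping the clock.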
Run MATLAB Functions on a GPU
MATLAB Functions with gpuArray Arguments
Hundreds of functions in MATLAB^{®} and other toolboxes run automatically on a GPU if you supply a gpuArray argument.
Whenever you call any of these functions with at least one gpuArray as a data input argument, the function executes on the GPU. The function generates a gpuArray as the result, unless returning MATLAB data is more appropriate (for example, size). You can mix inputs using both gpuArray and MATLAB arrays in the same function call. To learn more about when a function runs on the GPU or the CPU, see Special Conditions for gpuArray Inputs. gpuArray-enabled functions include the discrete Fourier transform (fft), matrix multiplication (mtimes), left matrix division (mldivide), and hundreds of others. For more information, see Check gpuArray-Supported Functions.
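The behavior described above can be sketched in a few lines (array sizes are illustrative):

```matlab
% A gpuArray data input makes a gpuArray-enabled function run on the GPU.
x = gpuArray(rand(1, 2^16));   % transfer data to the GPU
y = fft(x);                    % executes on the GPU; y is a gpuArray
z = y .* 2;                    % mixing gpuArray and MATLAB data is allowed
result = gather(z);            % bring the result back to the CPU if needed
```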
Check gpuArray-Supported Functions
If a MATLAB function has support for gpuArray objects, you can consult additional GPU usage information on its function page. See GPU Arrays in the Extended Capabilities section at the end of the function page.
Several MATLAB toolboxes include functions with built-in gpuArray support. To view lists of all functions in these toolboxes that support gpuArray objects, see the gpuArray-supported function list for each toolbox. Functions in the lists with information indicators have limitations or usage notes specific to running the function on a GPU. You can check the usage notes and limitations in the Extended Capabilities section of the function reference page. For information about updates to individual gpuArray-enabled functions, see the release notes.
You can browse gpuArray-supported functions from all MathWorks^{®} products at the following link: gpuArray-supported functions. Alternatively, you can filter by product. On the Help bar, click Functions. In the function list, browse the left pane to select a product, for example, MATLAB. At the bottom of the left pane, select GPU Arrays. If you select a product that does not have gpuArray-enabled functions, then the GPU Arrays filter is not available.
Deep Learning with GPUs
For many functions in Deep Learning Toolbox, GPU support is automatic if you have a suitable GPU and Parallel Computing Toolbox™. You do not need to convert your data to gpuArray. Many Deep Learning Toolbox functions run on the GPU by default if one is available.
For more information about automatic GPU support in Deep Learning Toolbox, see Scale Up Deep Learning in Parallel, on GPUs, and in the Cloud (Deep Learning Toolbox).
For advanced networks and workflows that use networks defined as dlnetwork (Deep Learning Toolbox) objects or model functions, convert your data to gpuArray. Use functions with dlarray support (Deep Learning Toolbox) to run custom training loops or prediction on the GPU.
Check or Select a GPU
If you have a GPU, then MATLAB automatically uses it for GPU computations. You can check and select your GPU using the gpuDevice function. If you have multiple GPUs, then you can use gpuDeviceTable to examine the properties of all GPUs detected in your system. You can use gpuDevice to select one of them, or use multiple GPUs with a parallel pool. For examples, see Identify and Select a GPU Device and Use Multiple GPUs in Parallel Pool. To check whether your GPU is supported, see GPU Support by Release.
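For example, the inspection and selection steps look like this (the device index is illustrative):

```matlab
d = gpuDevice        % display properties of the currently selected GPU
gpuDeviceTable       % list all detected GPUs and their properties
gpuDevice(2)         % select the GPU with device index 2, if one exists
```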
For deep learning, MATLAB provides automatic parallel support for multiple GPUs. See Deep Learning with MATLAB on Multiple GPUs (Deep Learning Toolbox).
Use MATLAB Functions with the GPU
This example shows how to use gpuArray-enabled MATLAB functions to operate on gpuArray objects. You can check the properties of your GPU using the gpuDevice function.
Create a row vector that repeats values from -15 to 15. To transfer it to the GPU and create a gpuArray object, use the gpuArray function.
To operate on gpuArray objects, use any gpuArray-enabled MATLAB function. MATLAB automatically runs calculations on the GPU. For more information, see Run MATLAB Functions on a GPU. For example, use diag, expm, mod, round, abs, and fliplr together.
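A sketch of these steps (the exact chaining of the functions is illustrative):

```matlab
% Create data on the CPU and transfer it to the GPU.
X = [-15:15 0 -15:15 0 -15:15];          % values repeating from -15 to 15
gpuX = gpuArray(X);                      % transfer to the GPU

% Chain gpuArray-enabled functions; every step runs on the GPU.
gpuE = expm(diag(gpuX, -1)) * expm(diag(gpuX, 1));
gpuM = mod(round(abs(gpuE)), 2);         % still a gpuArray
F = fliplr(gpuM);                        % gpuArray result, ready to plot
```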
Plot the results.
If you need to transfer the data back from the GPU, use gather. Transferring data back to the CPU can be costly, and is generally not necessary unless you need to use your result with functions that do not support gpuArray.
In general, running code on the CPU and the GPU can produce different results due to numerical precision and algorithmic differences between the GPU and CPU. Answers from the CPU and GPU are both equally valid floating-point approximations to the true analytical result, having been subjected to different round-off behavior during computation. In this example, the results are integers and round eliminates the round-off errors.
Sharpen an Image Using the GPU
This example shows how to sharpen an image using gpuArrays and GPU-enabled functions.
Read the image, and send it to the GPU using the gpuArray function.
Convert the image to doubles, and apply convolutions to obtain the gradient image. Then, using the gradient image, sharpen the image by a given sharpening factor.
Resize, plot, and compare the original and sharpened images.
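The steps above can be sketched as follows. The image file name, kernel sizes, and sharpening factor are illustrative assumptions (peppers.png ships with MATLAB), and imresize/imshow assume Image Processing Toolbox:

```matlab
% Read an image and transfer it to the GPU.
image = gpuArray(imread('peppers.png'));

% Convert to double and build a gradient image from two mean filters.
dimage = im2double(image);
gradient = convn(dimage, ones(3)./9, 'same') ...
         - convn(dimage, ones(5)./25, 'same');

% Sharpen by adding a multiple of the gradient (factor is illustrative).
amount = 5;
sharpened = dimage + amount .* gradient;

% Resize and display the original and sharpened images side by side.
imshow(imresize([dimage, sharpened], 0.5));
```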
Compute the Mandelbrot Set Using GPU-Enabled Functions
This example shows how to use GPU-enabled MATLAB functions to compute a well-known mathematical construction: the Mandelbrot set. Check your GPU using the gpuDevice function.
Define the parameters. The Mandelbrot algorithm iterates over a grid of real and imaginary parts. The following code defines the number of iterations, grid size, and grid limits.
You can use the gpuArray function to transfer data to the GPU and create a gpuArray, or you can create an array directly on the GPU. Parallel Computing Toolbox provides GPU versions of many array-creation functions, such as zeros. For more information, see Create GPU Arrays Directly.
Many MATLAB functions support gpuArrays. When you supply a gpuArray argument to any GPU-enabled function, the function runs automatically on the GPU. For more information, see Run MATLAB Functions on a GPU. Create a complex grid for the algorithm, and create the array for the results. To create this array directly on the GPU, use a creation function such as ones and specify "gpuArray" as the array type.
The following code implements the Mandelbrot algorithm using GPU-enabled functions. Because the code uses gpuArrays, the calculations happen on the GPU.
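A sketch of the iteration, with illustrative grid limits and iteration counts (the variable names are assumptions, not the original example's):

```matlab
% Parameters (illustrative values).
maxIterations = 200;
gridSize = 1000;

% Build the complex grid and the result array directly on the GPU.
x = gpuArray.linspace(-2, 0.5, gridSize);
y = gpuArray.linspace(-1.25, 1.25, gridSize);
[xGrid, yGrid] = meshgrid(x, y);
z0 = complex(xGrid, yGrid);
count = ones(size(z0), 'gpuArray');

% Mandelbrot iteration; every operation runs on the GPU.
z = z0;
for n = 0:maxIterations
    z = z.*z + z0;               % z_{n+1} = z_n^2 + c
    inside = abs(z) <= 2;        % points that have not escaped yet
    count = count + inside;      % count iterations before escape
end
count = log(count);

% Gather only for display.
imagesc(gather(count));
axis image off
```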
When the computations are done, plot the results.
Work with Sparse Arrays on a GPU
The following functions support sparse gpuArray objects.
abs acos acosd acosh acot acotd acoth acsc acscd acsch angle asec asecd asech asin asind asinh atan atand atanh bicg bicgstab ceil cgs classUnderlying conj cos cosd cosh cospi cot cotd coth csc cscd csch ctranspose deg2rad diag  end eps exp expint expm1 find fix floor full gmres gpuArray.speye imag isaUnderlying isdiag isempty isequal isequaln isfinite isfloat isinteger islogical isnumeric isreal issparse istril istriu isUnderlyingType length log log2 log10 log1p lsqr minus mtimes mustBeUnderlyingType ndims nextpow2 nnz  nonzeros norm numel nzmax pcg plus qmr rad2deg real reallog realsqrt round sec secd sech sign sin sind sinh sinpi size sparse spfun spones sprandsym sqrt sum tan tand tanh tfqmr times (.*) trace transpose tril triu uminus underlyingType uplus 
You can create a sparse gpuArray either by calling sparse with a gpuArray input, or by calling gpuArray with a sparse input. For example,
Sparse gpuArray objects do not support indexing. Instead, use find to locate nonzero elements of the array and their row and column indices. Then, replace the values you want and construct a new sparse gpuArray.
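Both creation routes and the find-then-rebuild pattern can be sketched as follows (the matrix and the value update are illustrative):

```matlab
% Two equivalent ways to create a sparse gpuArray.
s = sparse(gpuArray(eye(4)));      % sparse() on a gpuArray input
g = gpuArray(sparse(eye(4)));      % gpuArray() on a sparse input

% Sparse gpuArrays do not support indexing; rebuild the array instead.
[i, j, v] = find(g);               % rows, columns, and nonzero values
v = v * 10;                        % modify the values you want
g2 = sparse(i, j, v, size(g,1), size(g,2));   % new sparse gpuArray
```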
Work with Complex Numbers on a GPU
If the output of a function running on the GPU could potentially be complex, you must explicitly specify its input arguments as complex. This applies to gpuArray inputs and to functions called in code run by arrayfun.
For example, if creating a gpuArray that might have negative elements, use G = gpuArray(complex(p)); then you can successfully execute sqrt(G).
Or, within a function passed to arrayfun, if x is a vector of real numbers and some elements have negative values, sqrt(x) generates an error; instead, call sqrt(complex(x)).
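Both cases can be sketched in a few lines (the sample values are illustrative):

```matlab
% sqrt of negative reals errors on the GPU unless the input is complex.
p = [-1, 4, -9];
G = gpuArray(complex(p));     % explicitly make the input complex
r = sqrt(G);                  % succeeds: 0+1i, 2+0i, 0+3i

% The same rule applies inside arrayfun.
r2 = arrayfun(@(x) sqrt(complex(x)), gpuArray(p));
```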
If the result is a gpuArray of complex data and all the imaginary parts are zero, these parts are retained and the data remains complex. This could have an impact when using sort, isreal, and so on.
The following table lists the functions that might return complex data, along with the input range over which the output remains real.
Function  Input Range for Real Output
acos(x)  -1 <= x <= 1
acosh(x)  x >= 1
asin(x)  -1 <= x <= 1
atanh(x)  -1 <= x <= 1
log(x)  x >= 0
log10(x)  x >= 0
log1p(x)  x >= -1
log2(x)  x >= 0
power(x,y)  x >= 0
sqrt(x)  x >= 0
Special Conditions for gpuArray Inputs
GPU-enabled functions run on the GPU only when the data is already on the GPU. For example, the following code runs on the GPU because the data (the first input) is on the GPU:
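A minimal sketch of this condition (the function and array size are illustrative):

```matlab
% The first input is a gpuArray, so sum executes on the GPU.
X = gpuArray(rand(1000, 1));
s = sum(X);          % runs on the GPU; s is a gpuArray
```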
Acknowledgments
MAGMA is a library of linear algebra routines that take advantage of GPU acceleration. Linear algebra functions implemented for gpuArray objects in Parallel Computing Toolbox leverage MAGMA to achieve high performance and accuracy.
NVIDIA GPU Support from GPU Coder
GPU Coder™ generates optimized CUDA^{®} code from MATLAB^{®} code for deep learning, embedded vision, and autonomous systems. The generated code can be compiled and executed on NVIDIA^{®} GPUs. Generated CUDA code calls optimized NVIDIA CUDA libraries including cuDNN, cuSolver, and cuBLAS.
You can use the generated CUDA code within MATLAB to accelerate computationally intensive portions of your MATLAB code on NVIDIA GPUs such as NVIDIA Titan^{®} and NVIDIA Tesla^{®} GPUs. GPU Coder lets you incorporate legacy CUDA code into your MATLAB algorithms and the generated code.
You can deploy a variety of trained deep learning networks, such as YOLO, ResNet50, SegNet, and MobileNet, from Deep Learning Toolbox™ to NVIDIA GPUs. You can generate optimized code for preprocessing and postprocessing along with your trained deep learning networks to deploy complete algorithms.
When used with Embedded Coder^{®}, GPU Coder lets you verify the numerical behavior of the generated code via software-in-the-loop (SIL) testing on NVIDIA GPUs.
GPU Coder also supports embedded NVIDIA Tegra^{®} platforms such as the NVIDIA Drive PX2, and the Jetson^{®} TK1, Jetson TX1, Jetson TX2, Jetson Xavier, and Jetson Nano developer kits.
Code Generation and GPU Support
Generate portable C/C++/MEX functions and use GPUs to deploy or accelerate processing
Audio Toolbox™ includes support to accelerate prototyping in MATLAB^{®} and to generate code for deployment.
GPU Code Acceleration. To speed up your code while prototyping, Audio Toolbox includes functions that can execute on a graphics processing unit (GPU). You can use the gpuArray (Parallel Computing Toolbox) function to transfer data to the GPU and then call the gather (Parallel Computing Toolbox) function to retrieve the output data from the GPU. For a list of Audio Toolbox functions that support execution on GPUs, see Function List (gpuArray support). You need Parallel Computing Toolbox™ to enable GPU support.
C/C++ Code Generation. After you develop your application, you can generate portable C/C++ source code, standalone executables, or standalone applications from your MATLAB code. C/C++ code generation enables you to run your simulation on machines that do not have MATLAB installed and to speed up processing while you work in MATLAB. For a list of Audio Toolbox functions that support C/C++ code generation, see Function List (C/C++ Code Generation). You need MATLAB Coder™ to generate C/C++ code.
GPU Code Generation. After you develop your application, you can generate optimized CUDA^{®} code for NVIDIA^{®} GPUs from MATLAB code. The code can be integrated into your project as source code, static libraries, or dynamic libraries, and can be used for prototyping on GPUs. You can also use the generated CUDA code within MATLAB to accelerate computationally intensive portions of your MATLAB code in machine learning, deep learning, or other applications. For a list of Audio Toolbox functions that support GPU code generation, see Function List (GPU Code Generation). You need MATLAB Coder and GPU Coder™ to generate CUDA code.
Functions
codegen  Generate C/C++ code from MATLAB code
gather  Transfer distributed array or gpuArray to local workspace
gpuArray  Array stored on GPU
MATLAB GPU Computing Support for NVIDIA CUDA-Enabled GPUs
Perform MATLAB computing on NVIDIA CUDA-enabled GPUs
MATLAB^{®} enables you to use NVIDIA^{®} GPUs to accelerate AI, deep learning, and other computationally intensive analytics without having to be a CUDA^{®} programmer. Using MATLAB and Parallel Computing Toolbox™, you can:
 Use NVIDIA GPUs directly from MATLAB with over 500 built-in functions.
 Access multiple GPUs on desktop, compute clusters, and cloud using MATLAB workers and MATLAB Parallel Server™.
 Generate CUDA code directly from MATLAB for deployment to data centers, clouds, and embedded devices using GPU Coder™.
 Generate NVIDIA TensorRT™ code from MATLAB for low-latency and high-throughput inference with GPU Coder.
 Deploy MATLAB AI applications to NVIDIAenabled data centers to integrate with enterprise systems using MATLAB Production Server™.
“Our legacy code took up to 40 minutes to analyze a single wind tunnel test; by using MATLAB and a GPU, computation time is now under a minute. It took 30 minutes to get our MATLAB algorithm working on the GPU—no low-level CUDA programming was needed.”
Christopher Bahr, NASA
Develop, Scale, and Deploy Deep Learning Models with MATLAB
MATLAB allows a single user to implement an end-to-end workflow to develop and train deep learning models using Deep Learning Toolbox™. You can then scale training using cloud and cluster resources using Parallel Computing Toolbox and MATLAB Parallel Server, and deploy to data centers or embedded devices using GPU Coder.
Develop Deep Learning and Other Computationally Intensive Analytics with GPUs
MATLAB is an end-to-end workflow platform for AI and deep learning development. MATLAB provides tools and apps for importing training datasets, visualization and debugging, scaling the training of CNNs, and deployment.
Scale up to additional compute and GPU resources on desktop, clouds, and clusters with a single line of code.
Scale MATLAB on GPUs With Minimal Code Changes
Run MATLAB code on NVIDIA GPUs using over 500 CUDA-enabled MATLAB functions. Use GPU-enabled functions in toolboxes for applications such as deep learning, machine learning, computer vision, and signal processing. Parallel Computing Toolbox provides gpuArray, a special array type with associated functions, which lets you perform computations on CUDA-enabled NVIDIA GPUs directly from MATLAB without having to learn low-level GPU computing libraries.
Engineers can use GPU resources without having to write any additional code, so they can focus on their applications rather than performance tuning.
Using parallel language constructs such as parfor, you can perform calculations on multiple GPUs. Training a model on multiple GPUs is a simple matter of changing a training option.
MATLAB also lets you integrate your existing CUDA kernels into MATLAB applications without requiring any additional C programming.
Deploy Generated CUDA Code from MATLAB for Inference with TensorRT
Use GPU Coder to generate optimized CUDA code from MATLAB code for deep learning, embedded vision, and autonomous systems. The generated code automatically calls optimized NVIDIA CUDA libraries, including TensorRT, cuDNN, and cuBLAS, to run on NVIDIA GPUs with low latency and high throughput. Integrate the generated code into your project as source code, static libraries, or dynamic libraries, and deploy it to run on GPUs such as the NVIDIA Volta^{®}, NVIDIA Tesla^{®}, NVIDIA Jetson^{®}, and NVIDIA DRIVE^{®}.
GPU Support by Release
To use your GPU with MATLAB^{®}, you must install a recent graphics driver. Best practice is to ensure you have the latest driver for your device. Installing the driver is sufficient for most uses of GPUs in MATLAB, including gpuArray and GPU-enabled MATLAB functions. You can download the latest drivers for your GPU device at NVIDIA Driver Downloads.
Supported GPUs
To see support for NVIDIA^{®} GPU architectures by MATLAB release, consult the following table.
The cc numbers show the compute capability of the GPU architecture. To check your GPU compute capability, see the ComputeCapability property in the output of the gpuDevice and gpuDeviceTable functions. Alternatively, see CUDA GPUs (NVIDIA).
GPU architectures by compute capability: Ampere (cc 8.x), Turing (cc 7.5), Volta (cc 7.0, cc 7.2), Pascal (cc 6.x), Maxwell (cc 5.x), Kepler (cc 3.0, 3.2, 3.5, 3.7), Fermi (cc 2.x), and Tesla (cc 1.3). Each MATLAB release supports the following CUDA^{®} Toolkit version:

MATLAB Release  CUDA Toolkit Version
R2021b  11.0
R2021a  11.0
R2020b  10.2
R2020a  10.1
R2019b  10.1
R2019a  10.0
R2018b  9.1
R2018a  9.0
R2017b  8.0
R2017a  8.0
R2016b  7.5
R2016a  7.5
R2015b  7.0
R2015a  6.5
R2014b  6.0
R2014a  5.5
R2013b  5.0
R2013a  5.0
R2012b  4.2
R2012a  4.0
R2011b  4.0
 Built-in binary support.
 Support for Kepler and Maxwell GPU architectures will be removed in a future release. At that time, using a GPU with MATLAB will require a GPU device with compute capability 6.0 or greater. MATLAB generates a warning the first time you use a Kepler or Maxwell GPU.
 Supported via forward compatibility. Optimized device libraries must be compiled at run time from an unoptimized version. Support can be limited, and you might see errors and unexpected behavior. For more information, see Forward Compatibility for GPU Devices.
 By default, this architecture is not supported. You can enable support by enabling forward compatibility for GPU devices. You might see errors and unexpected behavior. For more information, see Forward Compatibility for GPU Devices.
CUDA Toolkit
If you want to generate CUDA kernel objects from CU code, or use GPU Coder™ to compile CUDA-compatible source code, libraries, and executables, you must install a CUDA Toolkit. The CUDA Toolkit contains CUDA libraries and tools for compilation. You do not need the toolkit to run MATLAB functions on a GPU or to generate CUDA-enabled MEX functions.
Task  Requirements
Run MATLAB functions on a GPU, or generate CUDA-enabled MEX functions  Get the latest graphics driver at NVIDIA Driver Downloads. You do not need the CUDA Toolkit as well.
Create CUDA kernel objects from CU code*, or compile code with GPU Coder  Install the version of the CUDA Toolkit supported by your MATLAB release.
* To create CUDA kernel objects in MATLAB, you must have both the CU file and the corresponding PTX file. Compiling the PTX file from the CU file requires the CUDA Toolkit. If you already have the corresponding PTX file, you do not need the toolkit.
For more information about generating CUDA code in MATLAB, see Run MEX-Functions Containing CUDA Code and Run CUDA or PTX Code on GPU. Not all compilers supported by the CUDA Toolkit are supported in MATLAB.
The toolkit version that you need depends on the version of MATLAB you are using. Check which version of the toolkit is compatible with your version of MATLAB in the table in Supported GPUs. Recommended best practice is to use the latest version of your supported toolkit, including any updates and patches from NVIDIA.
For more information about the CUDA Toolkit and to download your supported version, see CUDA Toolkit Archive (NVIDIA).
Forward Compatibility for GPU Devices
Note
Starting in R2020b, forward compatibility for GPU devices is disabled by default.
In R2020a and earlier releases, you cannot disable forward compatibility for GPU devices.
Forward compatibility allows you to use a GPU device with an architecture that was released after your version of MATLAB was built, by recompiling the device libraries at runtime.
When forward compatibility is enabled, the CUDA driver recompiles the GPU libraries the first time you access a device with an architecture newer than your MATLAB version. Recompilation can take up to an hour. Increase the CUDA cache size to prevent a recurrence of this delay. For instructions, see Increase the CUDA Cache Size.
When forward compatibility is disabled, you cannot perform computations using a GPU device with an architecture that was released after the version of MATLAB you are using was built. You must enable forward compatibility if you want to use this GPU device in MATLAB.
Caution
Enabling forward compatibility can result in wrong answers and unexpected behavior during GPU computations.
How successfully the device libraries recompile can vary depending on the device architecture and the CUDA version that MATLAB uses. In some cases, forward compatibility does not work as expected and recompilation of the libraries results in errors.
For example, forward compatibility from CUDA version 10.0–10.2 (MATLAB versions R2019a, R2019b, R2020a, and R2020b) to Ampere (compute capability 8.x) has only limited functionality.
You can enable forward compatibility for GPU devices using the following methods.
 Use the parallel.gpu.enableCUDAForwardCompatibility function. Enabling forward compatibility using this method does not persist between MATLAB sessions.
 Set the environment variable MW_CUDA_FORWARD_COMPATIBILITY to 1. This preserves forward compatibility between MATLAB sessions. If you change the environment variable while MATLAB is running, you must restart MATLAB to see the effect. On the client, you can use setenv to set environment variables. You can then copy environment variables from the client to the workers so that the workers perform computations in the same way as the client. For more information, see Set Environment Variables on Workers.
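Both methods can be sketched as follows (the function and environment-variable names are as documented for Parallel Computing Toolbox):

```matlab
% Enable forward compatibility for the current MATLAB session only.
parallel.gpu.enableCUDAForwardCompatibility(true)

% Or set the environment variable so the setting can persist;
% restart MATLAB after changing it for the change to take effect.
setenv('MW_CUDA_FORWARD_COMPATIBILITY', '1')
```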
Increase the CUDA Cache Size
If your GPU architecture does not have built-in binary support in your MATLAB release, the graphics driver must compile and cache the GPU libraries. This process can take up to an hour the first time you access the GPU from MATLAB. To increase the CUDA cache size and prevent a recurrence of this delay, set the environment variable CUDA_CACHE_MAXSIZE to a minimum of 536870912 (512 MB). On the client, you can use setenv to set environment variables. You can then copy environment variables from the client to the workers so that the workers perform computations in the same way as the client. For more information, see Set Environment Variables on Workers.
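For example, from within MATLAB:

```matlab
% Set the CUDA cache size to 512 MB before the first GPU access.
setenv('CUDA_CACHE_MAXSIZE', '536870912')   % 512*1024*1024 bytes
getenv('CUDA_CACHE_MAXSIZE')                % verify the value
```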