MFEM v4.8.0
Finite element discretization library

mfem::BatchedLinAlg Class Reference
Class for performing batched linear algebra operations, potentially using accelerated algorithms (GPU BLAS or MAGMA). Accessed using static member functions. More...
#include <batched.hpp>
Public Types

enum Backend { NATIVE, GPU_BLAS, MAGMA, NUM_BACKENDS }
    Available backends for implementations of batched algorithms.
enum Op { N, T }
    Operation type (transposed or not transposed).
Static Public Member Functions

static void AddMult(const DenseTensor &A, const Vector &x, Vector &y, real_t alpha=1.0, real_t beta=1.0, Op op=Op::N)
    Computes $y \leftarrow \alpha \,\mathrm{op}(A)\, x + \beta y$, where $\mathrm{op}(A)$ is $A$ for Op::N and $A^T$ for Op::T.
static void Mult(const DenseTensor &A, const Vector &x, Vector &y)
    Computes $y = A x$.
static void MultTranspose(const DenseTensor &A, const Vector &x, Vector &y)
    Computes $y = A^T x$.
static void Invert(DenseTensor &A)
    Replaces the block diagonal matrix $A$ with its inverse $A^{-1}$.
static void LUFactor(DenseTensor &A, Array<int> &P)
    Replaces the block diagonal matrix $A$ with its LU factors; the pivots are stored in P.
static void LUSolve(const DenseTensor &A, const Array<int> &P, Vector &x)
    Replaces $x$ with $A^{-1} x$, using the LU factors and pivots computed by LUFactor().
static bool IsAvailable(Backend backend)
    Returns true if the requested backend is available.
static void SetActiveBackend(Backend backend)
    Set the default backend for batched linear algebra operations.
static Backend GetActiveBackend()
    Get the default backend for batched linear algebra operations.
static const BatchedLinAlgBase & Get(Backend backend)
    Get the BatchedLinAlgBase object associated with a specific backend.
Detailed Description
The static member functions will delegate to the active backend (which can be set using SetActiveBackend(), see BatchedLinAlg::Backend for all available backends and the order in which they will be chosen initially). Operations can be performed directly with a specific backend using Get().
Definition at line 31 of file batched.hpp.
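As an illustration only, here is a minimal usage sketch. The sizes and values are arbitrary, and it assumes that the umbrella mfem.hpp header provides BatchedLinAlg and that x and y store one contiguous block per matrix of the DenseTensor:

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   const int m = 3, n = 4;       // n diagonal blocks, each of size m x m
   DenseTensor A(m, m, n);       // the block diagonal matrix
   Vector x(m * n), y(m * n);    // block vectors, one block per matrix

   A = 0.0;
   x = 1.0;
   for (int k = 0; k < n; ++k)
   {
      for (int i = 0; i < m; ++i) { A(i, i, k) = 2.0; }  // each block = 2*I
   }

   // Delegates to the active backend (MAGMA, GPU BLAS, or native kernels).
   BatchedLinAlg::Mult(A, x, y);   // y = A x, block by block
   y.Print();

   return 0;
}
```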
Member Enumeration Documentation

Backend

Available backends for implementations of batched algorithms.

The initially active backend will be the first available backend in this order: MAGMA, GPU_BLAS, NATIVE.

Enumerator | Description
---|---
NATIVE | The standard MFEM backend, implemented using mfem::forall kernels. Not as performant as the other backends.
GPU_BLAS | Either cuBLAS or hipBLAS, depending on whether MFEM is using CUDA or HIP. Not available otherwise.
MAGMA | MAGMA backend, only available if MFEM is compiled with MAGMA support.
NUM_BACKENDS | Counter for the number of backends.
Definition at line 38 of file batched.hpp.
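A short sketch of selecting a backend explicitly, mirroring the default ordering above (the helper name SelectBatchedBackend is hypothetical):

```cpp
#include "mfem.hpp"

// Prefer MAGMA, then GPU BLAS, otherwise keep the native kernels.
void SelectBatchedBackend()
{
   using namespace mfem;
   if (BatchedLinAlg::IsAvailable(BatchedLinAlg::MAGMA))
   {
      BatchedLinAlg::SetActiveBackend(BatchedLinAlg::MAGMA);
   }
   else if (BatchedLinAlg::IsAvailable(BatchedLinAlg::GPU_BLAS))
   {
      BatchedLinAlg::SetActiveBackend(BatchedLinAlg::GPU_BLAS);
   }
   // Subsequent static BatchedLinAlg calls use GetActiveBackend().
}
```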
Op

Operation type (transposed or not transposed).

Enumerator | Description
---|---
N | Not transposed.
T | Transposed.
Definition at line 53 of file batched.hpp.
Member Function Documentation

AddMult() [static]

Computes $y \leftarrow \alpha \,\mathrm{op}(A)\, x + \beta y$, where $\mathrm{op}(A)$ is $A$ for Op::N and $A^T$ for Op::T.
Definition at line 54 of file batched.cpp.
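An illustrative sketch of the call variants; A, x, and y are assumed to be set up as in the earlier example:

```cpp
#include "mfem.hpp"

// Illustrative only: A is a DenseTensor of square blocks, x and y are
// block vectors of matching size.
void AddMultVariants(const mfem::DenseTensor &A, const mfem::Vector &x,
                     mfem::Vector &y)
{
   using namespace mfem;
   BatchedLinAlg::AddMult(A, x, y);            // y += A x  (alpha = beta = 1)
   BatchedLinAlg::AddMult(A, x, y, 0.5, 2.0);  // y = 0.5*A*x + 2.0*y
   BatchedLinAlg::AddMult(A, x, y, 1.0, 0.0, BatchedLinAlg::Op::T); // y = A^T x
}
```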
Get() [static]

Get the BatchedLinAlgBase object associated with a specific backend.
This allows the user to perform specific operations with a backend different from the active backend.
Definition at line 103 of file batched.cpp.
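A sketch of forcing one operation onto a specific backend, assuming the returned BatchedLinAlgBase exposes member versions of the same operations (the helper name is hypothetical):

```cpp
#include "mfem.hpp"

// Run one product with the NATIVE backend regardless of the active backend.
void NativeMult(const mfem::DenseTensor &A, const mfem::Vector &x,
                mfem::Vector &y)
{
   using namespace mfem;
   const BatchedLinAlgBase &native = BatchedLinAlg::Get(BatchedLinAlg::NATIVE);
   native.Mult(A, x, y);  // y = A x, computed by the NATIVE implementation
}
```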
GetActiveBackend() [static]

Get the default backend for batched linear algebra operations.
Definition at line 98 of file batched.cpp.
Invert() [static]

Replaces the block diagonal matrix $A$ with its inverse $A^{-1}$.
Definition at line 71 of file batched.cpp.
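An illustrative sketch of inverting the blocks of a copy and applying the result (the helper name is hypothetical):

```cpp
#include "mfem.hpp"

// Apply the block-wise inverse of A to x without modifying A itself.
void ApplyBlockInverse(const mfem::DenseTensor &A, const mfem::Vector &x,
                       mfem::Vector &y)
{
   using namespace mfem;
   DenseTensor Ainv(A);              // keep A intact; invert a copy
   BatchedLinAlg::Invert(Ainv);      // each block is replaced by its inverse
   BatchedLinAlg::Mult(Ainv, x, y);  // y = A^{-1} x, block by block
}
```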
IsAvailable() [static]

Returns true if the requested backend is available.
The available backends depend on which third-party libraries MFEM is compiled with, and whether the CUDA/HIP device is enabled.
Definition at line 87 of file batched.cpp.
LUFactor() [static]

Replaces the block diagonal matrix $A$ with its LU factors; the pivots are stored in P.
Definition at line 76 of file batched.cpp.
LUSolve() [static]

Replaces $x$ with $A^{-1} x$.

The LU factors and pivots of $A$ should be computed beforehand by calling LUFactor().
Definition at line 81 of file batched.cpp.
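A sketch of the factor/solve workflow, assuming LUFactor sizes and fills the pivot array itself (the helper name is hypothetical):

```cpp
#include "mfem.hpp"

// Factor the blocks of A once, then solve A x = b in place on x.
void BlockLUSolve(mfem::DenseTensor &A, mfem::Vector &x)
{
   using namespace mfem;
   Array<int> P;                     // pivot indices, filled by LUFactor
   BatchedLinAlg::LUFactor(A, P);    // A now stores the block LU factors
   BatchedLinAlg::LUSolve(A, P, x);  // x <- A^{-1} x using the stored factors
}
```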
Mult() [static]

Computes $y = A x$.
Definition at line 60 of file batched.cpp.
MultTranspose() [static]

Computes $y = A^T x$.
Definition at line 65 of file batched.cpp.
SetActiveBackend() [static]

Set the default backend for batched linear algebra operations.
Definition at line 92 of file batched.cpp.