Releases: romeric/Fastor
Fastor V0.6.4
Fastor V0.6.4 is an incremental change over V0.6.3. This release includes bug fixes and some new features:
- Sleef backend for SIMD implementation of trigonometric and hyperbolic functions
- New tensor functions: `squeeze`, `reshape` and `flatten` [d8acd9f]. Refer to the Wiki page for the documentation; a short sketch follows this list
- More general support for complex-valued arithmetic and complex-valued tensor algebra such as `lu`, `solve`, etc. [c03811b]
- Add top-level CMake file for distribution of Fastor [8bc161e]. Contributed by @mablanchard
- Fix compile issue with type name printing [13b2a1c]. Contributed by @matthiasneuner
- Fix bug in determinant [e96e63f]. Contributed by @wermos
- Implement Singular-Value-Decomposition `svd` and Signed Singular-Value-Decomposition `ssvd` for small square matrices [6de6662]
- Fix multiple bugs with `TensorMap`s [5980b41]
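A minimal sketch of the new tensor functions named above; the exact template signatures may differ slightly, so refer to the Wiki page for the authoritative forms:

```cpp
#include <Fastor/Fastor.h>
using namespace Fastor;

int main() {
    Tensor<double,2,1,3> A; A.random();
    auto B = squeeze(A);      // drops the singleton axis: 2x1x3 -> 2x3
    auto C = reshape<3,2>(A); // the same 6 entries viewed as 3x2
    auto v = flatten(A);      // collapses to a rank-1 tensor of 6 entries
    print(B, C, v);
    return 0;
}
```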
Fastor V0.6.3
Fastor V0.6.3 is another incremental change over the V0.6 release, which introduced a significant overhaul in Fastor's internal design and exposed API. This release mainly includes internal changes:
New features and improvements
- Continuous integration support via Travis CI on Linux. We test against `GCC-5` to `GCC-latest` and default `Clang` using both scalar and SIMD implementations
- Continuous integration support via AppVeyor CI for MSVC builds on Windows. We test against Visual Studio 2019 under debug for now. Our test cases take excessively long under release and eventually time out, although they build fine
- Unit tests are now built using CMake instead of raw Makefiles
- `lut_inverse` and `ut_inverse` have been renamed to `tinverse` taking `UpLoType`, similar to linear algebra computation types #87
- Single tensor expression `einsum` for inner and permuted inner product of a single tensor expression #80
- Explicit `einsum`, allowing the user to specify the shape of the tensor contraction output. `einsum` can now permute and can deal with inner and permuted inner products of tensors and tensor expressions #91 (see the sketch after this list)
- A new `permute` function that closely resembles NumPy's `permute` option and implements contiguous writes (instead of contiguous reads), which results in about 15-20% performance improvement. This function is not identical to `permutation`
- All remaining mathematical functions are now implemented: `cbrt`, `exp2`/`expm1`, `log10`/`log2`/`log1p`, `asinh`/`acosh`/`atanh`/`atan2`, `erf`/`lgamma`/`tgamma`/`hypot`, `round`/`floor`/`ceil`, `min`/`max`, etc. Where applicable, SIMD versions of these are implemented. The SIMD math layer has been cleaned up and reworked
- Element-wise unary boolean operators `!(Expression)`, `isinf(Expression)`, `isnan(Expression)` and `isfinite(Expression)` are implemented #90
- Element-wise binary math functions `min(a,b)`/`max(a,b)`, `pow`/`hypot`/`atan2` are now available
- Fastor now uses `alignas` instead of compiler-specific macros for memory alignment #98
- Fastor-specific and user-controllable macros are now moved to `config.h` and `macros.h` under the `config` folder, previously named `commons` #58
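A minimal sketch of implicit versus explicit `einsum` as described above; the `OIndex` output-index helper and `.random()` initialiser are taken to be the documented API, but treat the exact signatures as illustrative:

```cpp
#include <Fastor/Fastor.h>
using namespace Fastor;

enum {I, J, K};

int main() {
    Tensor<double,2,3> A; A.random();
    Tensor<double,3,4> B; B.random();
    // implicit einsum: the repeated index J is contracted -> 2x4
    auto C = einsum<Index<I,J>, Index<J,K>>(A, B);
    // explicit einsum: OIndex fixes the output index order -> 4x2
    auto D = einsum<Index<I,J>, Index<J,K>, OIndex<K,I>>(A, B);
    return 0;
}
```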
Bug fixes
- Bug fix in expression binding policy that resulted in segfaults #95
- Fix for assigning `cv`-qualified `TensorMap` to `Tensor` #94 by @feltech
- Detect the correct SIMD type for `cv`-qualified tensors #99
- Bug fix for nested boolean expressions such as `!isfinite(Expression)` or `!(a>b)` #93
- Fix overflow in boolean views #100
- Fix detecting the correct language standard under MSVC 7592ea7
- Fix regression in abstract permutation #96
Fastor V0.6.2
Fastor V0.6.2 is another incremental change over the V0.6 release, which introduced a significant overhaul in Fastor's internal design and exposed API. This release includes:
- SIMD support for complex numbers and complex-valued arithmetic starting from SSE2 all the way to AVX512. The SIMD implementation for complex numbers is written with optimisation and specifically `FMA` in mind, and it delivers performance similar to Intel's MKL JIT for complex matrix-matrix multiplication and so on. Comprehensive unit tests are added for SIMD complex-valued arithmetic
- `conj` function introduced for computing the conjugate of a complex-valued tensor expression
- `arg` function introduced for computing the argument or phase angle of a complex-valued tensor expression
- `ctranspose` and `ctrans` functions introduced for computing the conjugate transpose of a complex-valued tensor expression (see the sketch after this list)
- All boolean tensor methods such as `isequal`, `issymmetric`, etc. are now implemented as free functions working on tensor expressions instead of tensors. There is no longer an underscore in the name of these functions, that is, the `is_equal` method of the tensor is now transformed to `isequal` working on expressions
- Performance optimisations for creating tensors of tensors (such as `Tensor<Tensor<double,3,3>,2,2>`) or tensors of any non-primitive types (such as `Tensor<std::vector<double>,2,2>`). The `matmul` and `tmatmul` functions have been specifically tuned to work well with such composite types
- Fix an issue in `tmatmul` that was causing compilation errors on Windows with MSVC 2019
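A minimal sketch of the new complex-valued routines listed above (`conj`, `arg`, `ctrans`); the tensor shapes and the `fill` initialiser here are illustrative:

```cpp
#include <Fastor/Fastor.h>
#include <complex>
using namespace Fastor;

int main() {
    Tensor<std::complex<double>,2,2> A;
    A.fill(std::complex<double>(1.0, -2.0));
    Tensor<std::complex<double>,2,2> B  = conj(A);   // element-wise conjugate
    Tensor<double,2,2>               ph = arg(A);    // element-wise phase angle
    Tensor<std::complex<double>,2,2> Ah = ctrans(A); // conjugate (Hermitian) transpose
    print(B, ph, Ah);
    return 0;
}
```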
Fastor V0.6.1
Fastor V0.6.1 is an incremental change over the V0.6 release, which introduced a significant overhaul in Fastor's internal design and exposed API. This release includes:
- `lu` function introduced for LU decomposition of 2D tensors (see the sketch after this list). Multiple variants of LU decomposition are available, including no pivoting, partial pivoting with a permutation vector and partial pivoting with a permutation matrix. This is perhaps the most performant implementation of the LU decomposition available today for small matrices of up to `64x64`. If no pivoting is used, the performance is unbeaten for all sizes up to the stack limit; however, given that the implementation is based on compile-time loop recursion for sizes up to `32x32`, and beyond that uses block recursion which in turn uses block-triangular inversion, compilation would be quite time-consuming for bigger sizes
- `ut_inverse` and `lut_inverse` for fast triangular inversion of upper and unit lower matrices using block-wise inversion
- `tmatmul` function, equivalent to BLAS's `TRMM`, for triangular matrix-matrix (or vector) multiplication, which allows either or both operands to be upper/lower triangular. The function can be used to specify which matrix is lower/upper at compile time, like `tmatmul<matrix_type::lower_tri,matrix_type::general>(A,B)`. A proper 2X speed-up over `matmul` when one operand is triangular, and 4X when both are triangular, can be achieved for bigger sizes
- `det`/`determinant` can now be computed for all sizes using the LU decomposition [default for matrix sizes bigger than `4x4`]. `inv`/`inverse` and `solve` can be performed with any variant of the LU decomposition
- There is now a unified interface for choosing the computation type of linear algebra functions, for instance `det<DetCompType::BlockLU>(A)` or `inv<InvCompType::SimpleLUPiv>(A)` or `solve<SolveCompType::BlockLUPiv>`, etc.
- `tril`/`triu` functions added for getting the lower/upper part of a 2D tensor
- Comprehensive unit tests and benchmarks are added and are available for these newly added (and some old) routines
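A minimal sketch of the LU routines and the unified computation-type interface described above; the `LUCompType::BlockLUPiv` enumerator is an assumption here, named by analogy with the `Det`/`Inv`/`SolveCompType` enums shown above:

```cpp
#include <Fastor/Fastor.h>
using namespace Fastor;

int main() {
    Tensor<double,8,8> A; A.random();
    Tensor<double,8,8> L, U, P;
    // LU with partial pivoting and a permutation matrix, i.e. P*A = L*U
    lu<LUCompType::BlockLUPiv>(A, L, U, P); // enumerator name assumed
    // the unified computation-type interface, here applied to solve
    Tensor<double,8> b; b.random();
    Tensor<double,8> x = solve<SolveCompType::BlockLUPiv>(A, b);
    return 0;
}
```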
Fastor V0.6
Fastor V0.6 is a major release that brings a lot of fundamental internal redesign and performance improvements. This is perhaps the biggest release since the inception of project Fastor. The following is a list of the changes and new features released in this version:
- The whole of Fastor's expression template engine has been reworked to facilitate arbitrary restructuring of expressions. Most users will not notice this change as it pertains to internal re-architecturing, but the change is quite significant. The main driver for this has been to introduce linear algebra expressions and chain them with other element-wise operations.
- A series of linear algebra expressions are introduced as a result, with less verbose names, and the other existing linear algebra routines are now moved to a dedicated linear algebra expression module. This lays out the basic building blocks of Fastor's tensor algebra library
- Multiplication operator `%` introduced that evaluates lazily and takes any expression (see the sketch at the end of this list)
- Greedy-like matmul implemented. Operations like `A%B%C%D%...` will be evaluated in the most efficient order
- `inv` function introduced for lazy inversion. Extremely fast matrix inversion up to stack size `256x256`
- `trans` function introduced for lazy transpose. Extremely fast AVX512 8x8 double and 16x16 float transpose using explicit SIMD introduced
- `det` function introduced for lazy determinant
- `solve` function introduced for lazy solve. `solve` has the behaviour that if both inputs are `Tensor`s it evaluates immediately, and if either one of the inputs is an expression it delays the evaluation. `solve` is now also able to solve matrices up to stack size `256x256`
- `qr` function introduced for QR factorisation using modified Gram-Schmidt factorisation, which has the potential to be easily SIMD-vectorised in the future. The scalar implementation at the moment has good performance
- `absdet` and `logdet` functions introduced for lazy computation of the absolute value and the natural logarithm of a determinant
- `determinant`, `matmul`, `transpose` and most verbose linear algebra functions can now take expressions but evaluate immediately
- `einsum`, `contraction`, `inner`, `outer`, `permutation`, `cross`, `sum` and `product` now all work on expressions. `einsum`/`contraction` for expressions also dispatches to the same operation-minimisation algorithms as the non-expression version, hence the above set of new functions is as fast for expressions as for tensor types. A `cross` function for the cross product of vectors is introduced as well
- Most linear algebra operations like `qr`, `det`, `solve` take optional parameters (class enums) to request the type of computation, for instance `det<DetCompType::Simple>`, `qr<QRCompType::MGSR>`, etc.
- MKL (JIT) backend introduced, which can be used in the same way as libxsmm
- The backend `_matmul` routines are reworked and specifically tuned for AVX512, and `_matmul_mk_smalln` is cleaned up and made uniform for up to 5×`SIMDVector::Size`. Most matmul routines are now available at SSE2 level when it makes sense. `matmul` is now as fast as the dedicated MKL JIT API
- AVX512 `SIMDVector` for `int32_t` and `int64_t` introduced. `SIMDVector` for `int32_t` and `int64_t` is now activated at SSE2 level as well
- Most intrinsics are now activated at SSE2 level
- All views are now reworked, so there is no need for the `FASTOR_USE_VECTORISED_EXPR_ASSIGN` macro unless one wants to vectorise strided views
- Multi-dimensional `TensorFixedViews` introduced. This makes it possible to create arbitrary-dimensional tensor views with compile-time-deducible sizes. This, together with dynamic views, completes the whole set of view expressions of Fastor
- `diag` function introduced for viewing the diagonal elements of 2D tensors; it works just like other views in that it can appear on either side of an equation (can be assigned to)
- Major bug fix for in-place division of all expressions by integral numbers
- A lot of new features, traits and internal development tools have been added.
- As a result, Fastor now requires a C++14-supporting compiler
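A minimal sketch of the lazy `%` operator and the greedy evaluation order described in the list above:

```cpp
#include <Fastor/Fastor.h>
using namespace Fastor;

int main() {
    Tensor<double,8,16> A; A.random();
    Tensor<double,16,4> B; B.random();
    Tensor<double,4,8>  C; C.random();
    // '%' builds a lazy expression; on assignment the chain is
    // evaluated in the cheapest order, here (A % B) % C
    Tensor<double,8,8> D = A % B % C;
    return 0;
}
```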
The next few releases from here on will be incremental and will focus on ironing out corner cases while new features will be continuously rolled out.
Fastor V0.5.1
Although carrying a minor tag, Fastor V0.5.1 includes some major changes, especially in API design, performance and stability:
- `SIMDVector` has been reworked to fix the long-standing issue of falling back to non-SIMD code for non-64-bit types (see the sketch after this list). The fall-back is now always to the correct scalar type where a scalar specialisation is available, i.e. `float, double, int32_t, int64_t`, and to a fixed array of size 1 holding the type for other cases. The API is now a lot closer to `Vc` and `std::experimental::simd`. `SIMDVector` for floating-point types is now also activated at the `SSE2` level, allowing any compiler that automatically defines `SSE2` without `-march=native` to vectorise Fastor's code, since all compilers these days define `__SSE2__` at `-O2`/`-O3` levels
- Fix a long-standing bug in network tensor contraction. Rework opmin_meta/cost models to be truly compile-time recursive in terms of depth-first search. Strided contractions for networks have been completely removed, and for pairs they are deactivated. Tensor contraction of networks now dispatches to by-pair `einsum`, which has many specialisations including dispatching to matmul. More than an order of magnitude performance gain in certain cases.
- Extremely fast `matmul`/`gemm` routines. Fastor now provides potentially the fastest `gemm` routine for small to medium-sized tensors of single and double precision as far as static dispatch is concerned. Benchmarks have been added here. Many flavours of matmul implementations are now available, for different sizes and with remainder handling and masked loading/storing
- `AVX512` support for single and double floats
- Better macro handling through a series of new `FASTOR_...` macros
- Accurate `timeit` function based on `rdtsc`, together with memory clobber and serialisation for further accuracy
- Quite a few bugs and compiler warnings have been fixed along the way
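A minimal sketch of the reworked `SIMDVector` mentioned above; the default ABI parameter and the stream output are assumptions based on the `Vc`/`std::experimental::simd`-like API described in the list:

```cpp
#include <Fastor/Fastor.h>
#include <iostream>
using namespace Fastor;

int main() {
    // broadcasts the scalar across all lanes; for types without a SIMD
    // specialisation this falls back to the correct scalar type
    SIMDVector<double> a(2.0), b(3.0);
    auto c = a * b + a; // maps to an FMA where the ISA provides one
    std::cout << c << "\n";
    return 0;
}
```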
Fastor V0.5
Fastor V0.5 is one hell of a release as it brings a lot of new features, fundamental performance improvements, improved flexibility working with Tensors and many bug fixes:
New Features
- Improved IO formatting. Flexible, configurable formatting for all derived tensor classes
- Generic matmul function for AbstractTensors and expressions
- Introduce a new Tensor type `SingleValueTensor` for tensors of any size and dimension that have all their values the same. It is extremely space-efficient as it stores a single value under the hood. It provides a more optimised route for certain linear algebra functions. For instance, matmul of a `Tensor` and a `SingleValueTensor` is O(n) and transpose is O(1)
- New evaluation methods for all expressions, `teval` and `teval_s`, that provide fast evaluation of higher-order tensors
- `cast` method to cast a tensor to a tensor of a different data type
- `get_mem_index` and `get_flat_index` to generalise indexing across all tensor classes. Eval methods now use these
- Binary comparison operators for expressions that evaluate lazily. Also binary comparison operators for SIMDVectors
- Constructing column-major tensors is now supported by using `Tensor(external_data,ColumnMajor)`
- `tocolumnmajor` and `torowmajor` free functions
- `all_of`, `any_of` and `none_of` free-function reducers that work on boolean expressions
- Fixed views now support the `noalias` feature
- `FASTOR_IF_CONSTEXPR` macro for C++17
Performance and other key improvements
- `Tensor` class can now be treated as a compile-time type as it can be initialised as `constexpr` by defining the macro `FASTOR_ZERO_INITIALISE`
- Higher-order einsum functions now dispatch to matmul whenever possible, which is much faster
- Much faster generic permutation, contraction and einsum algorithms, now based on recursive templates, that definitely beat the speed of hand-written C code. `CONTRACT_OPT` is no longer necessary
- A much faster loop-tiling-based transpose function. It is at least 2X faster than implementations in other ET libraries
- Introducing the libxsmm backend for matmul. The switch from in-built to libxsmm routines for matmul can be configured by the user using `BLAS_SWITCH_MATRIX_SIZE_S` for square matrices and `BLAS_SWITCH_MATRIX_SIZE_NS` for non-square matrices. Default sizes are 16 and 13, respectively. libxsmm brings substantial improvements for bigger matrices
- Condensed unary ops and binary ops into a single, more maintainable macro
- `FASTOR_ASSERT` is now a macro over `assert`, which optimises better in release builds
- Optimised `determinant` for 4x4 cases. Determinant now works on all types and not just float and double
- `all` is now an alias to `fall`, which means many tensor view expressions can now be dispatched to tensor fixed views (see the sketch after this list). The implication of this is that expressions like `a(all)` and `A(all,all)` can just return the underlying tensor as opposed to creating a view with unnecessary sequences and offsets. This is much faster
- Specialised constructors for many view types that construct the tensor much faster
- Improved support for the `TensorMap` class to behave exactly the same as the `Tensor` class, including views, block indexing and so on
- Improved unit testing under many configurations (debug and release)
- Many `Tensor`-related methods and functionalities have been moved into separate files that are now usable by other tensor-type classes
- Division of an expression by a scalar can now be dispatched to multiplication, which creates the opportunity for FMA
- Cofactor and adjoint can now fall back to a scalar version when SIMD types are not available
- Documentation is now available under Wiki pages
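A minimal sketch of the `all` alias and fixed views described above; the `fseq` compile-time sequence shown here is assumed from Fastor's view API:

```cpp
#include <Fastor/Fastor.h>
using namespace Fastor;

int main() {
    Tensor<double,4,4> A; A.random();
    // A(all,all) now short-circuits to the underlying tensor
    Tensor<double,4,4> B = A(all, all);
    // a compile-time (fixed) view of the top-right 2x2 block
    Tensor<double,2,2> C = A(fseq<0,2>(), fseq<2,4>());
    return 0;
}
```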
Bug fixes
- Fix a bug in the `product` method of the Tensor class (99e3ff0)
- Fix AVX store bug in backend matmul 3k3 (8f4c6ae)
- Fix bug in tensor matmul for matrix-vector case (899c6c0)
- Fix a bug in SIMDVector under scalar mode with mixed types (f707070)
- Fix bugs with math functions on SIMDVector with size>256 not compiling (ca2c74d)
- Fix bugs with matrix-vector einsum (8241ac8, 70838d2)
- Fix a bug with strided_contraction when the second matrix disappears (4ff2ea0)
- Fix a bug in 4D tensor initializer_list constructor (901d8b1)
- Fixes to fully support SIMDVector fallback to scalar version
- and many more undocumented fixes
Key changes
- Complete re-architecturing of the directory hierarchy of Fastor. Fastor should now be included as `#include <Fastor/Fastor.h>`
- `TensorRef` class has now been renamed to `TensorMap`
- Expressions now evaluate based on the type of their underlying derived classes rather than the tensor they are being assigned to
There are many more major and minor undocumented changes.
Fastor V0.4
This release brings new features, improvements and bug fixes to Fastor:
- Lots of changes to support MSVC. Thanks to @FabienPean.
- Permutation and einsum functions for generic tensor expressions.
- A `TensorRef` class that wraps over existing data and exposes Fastor's functionality over raw data (see the sketch at the end of this section).
- Some more tensor functions can work on tensor expressions.
- Linear algebra functions for high order tensors operating on the last two indices (similar to how NumPy operates).
- More variants of tensor cross product are now available for high order tensors.
- Bug fixes in backend trace and transpose.
- Bug fix in ordering of tensor networks.
- Bug fix in computing cost models.
and much more!
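A hypothetical minimal usage of the `TensorRef` class described above, assuming a raw-pointer constructor; note that `TensorRef` was later renamed to `TensorMap` in V0.5:

```cpp
#include <Fastor/Fastor.h>
using namespace Fastor;

int main() {
    double data[6] = {0, 1, 2, 3, 4, 5};
    // expose raw memory as a 2x3 tensor without copying
    // (TensorRef was later renamed to TensorMap in V0.5)
    TensorRef<double,2,3> A(data); // pointer constructor assumed
    Tensor<double,3,2> At = transpose(A);
    return 0;
}
```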
Fastor V0.3.1
Bug fix release.
Fastor V0.3
This release brings lots of fundamental new features, performance improvements and bug fixes to Fastor, in particular:
- Tensor views provide the ability to index, slice and broadcast multi-dimensional tensors using ranges/sequences/other tensors, much like NumPy/MATLAB arrays. See the documentation (and the sketch after this list).
- The evaluation mechanism in Fastor so far used `static_cast`ing for chaining operations in the corresponding `eval` functions. This used to generate a lot of unnecessary type-conversion code. Starting from `V0.3`, the `eval` functions are well-informed, leading to faster and much cleaner code and helping the compiler optimise much more.
- Support for FMA. The `matmul`, `norm` and `inner` functions and multiple other tensor overloads now use FMA instructions when available.
- Support for `norm`, `inner`, `sum` and `product` functions for any type of expression.
- Bug fix in generic transpose and 2D SP transpose methods.
- Code splitting and plugins for cleaner maintainable code base.
- Division instructions can safely be dispatched to multiplication while hoisting the reciprocal out of the loop for expressions of type `Expr / Scalar`. `FASTOR_FAST_MATH` and `FASTOR_UNSAFE_MATH` are introduced. The `FASTOR_UNSAFE_MATH` flag turns `Expr / Scalar` expressions into approximate reciprocal and multiplication intrinsics, which can harm accuracy. `FASTOR_FAST_MATH` is just a placeholder macro activated by default under `-Ofast`.
- Lots of new test cases introduced.
- New benchmark problems for views and finite difference introduced.
- `scalar_type` was not correctly implemented for expressions. Now fixed.
- The equal-rank tensor assignment restriction is now relaxed, so that expressions and views of any rank can be assigned to expressions of a different rank, as long as their size (capacity) is equal.
- Many functions are decorated `inline` and `constexpr`. This helps the compiler generate very compact code and aggressively eliminate dead code.
- Low- and high-rank tensors can be created using brace initialisers.
- Fix the `SP`/`DP` bug in `matmul`.
- Introduce the now highly recommended `-DNDEBUG` flag to most Makefiles.
- Lots of other minor improvements and bug fixes.
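A minimal sketch of the tensor views from the first item in this list; the runtime `seq` sequence shown here is assumed from Fastor's view API:

```cpp
#include <Fastor/Fastor.h>
using namespace Fastor;

int main() {
    Tensor<double,4,4> A; A.zeros();
    // slice rows 0..1 and columns 2..3, NumPy style; views are writable,
    // so they can appear on the left-hand side of an assignment
    A(seq(0,2), seq(2,4)) = 1.0;
    Tensor<double,2,2> B = A(seq(0,2), seq(2,4));
    return 0;
}
```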
As a final note, while compiling views mixed with other complex expressions it is really beneficial to add the inlining flags to the compiler, such as `-finline-limit=n` for GCC, `-mllvm -inline-threshold=n` for Clang and `-inline-forceinline -inline-factor=n` for ICC. Although overlapping assignments are provided for convenience, it helps the compiler a lot with inlining if `-DFASTOR_NO_ALIAS` is issued. Also, for 1D and 2D views, `-DFASTOR_USE_VECTORISED_ASSIGN` can cut down runtimes by a factor of 2-4, if the compiler is successful at inlining.