# Use Matlab API to Increase Performance of Java, C++, or Python Applications

MATLAB (https://www.mathworks.com/) – hereon referred to as “Matlab” – has come a long way since its early days, especially in execution time for matrix and other numerical operations. Its speed is nearly unmatched by the math libraries available for other programming languages. As such, it has become another important toolbox for the engineer, whatever software development language he or she is using for a given application.

MathWorks has made it very easy to interface Java, C, C++, Python, and Fortran applications with standard and custom (user-developed) Matlab functions. This article shows an example of a Java application test bench which performs matrix multiplication using: 1) Matlab’s lightning-fast matrix multiplication capability, 2) the very fast Efficient Java Matrix Library (EJML), and 3) the conventional method of matrix multiplication. Matlab is one to two orders of magnitude faster than EJML, which in turn is one to two orders of magnitude faster than the conventional method.

A video overview is shown below – followed by a similar written discussion. At the end of this article, you can watch the code walk-through video as well as download the source code configured as an Apache NetBeans 12.2 Java project. The only item you will need to provide is your own Matlab “engine.jar,” which is included with your licensed copy of Matlab.

A simple high-level block diagram of the application-to-Matlab interface is shown below. Note that the software application can access not only the standard Matlab functions, but any of the Matlab toolbox functions (for those toolboxes that the engineer has purchased), AND any custom scripts, functions, and classes that were developed by the engineer.

The following is an example of a simple Java application which performs matrix multiplication (the user selects the matrix size) with three methods: 1) the conventional way of multiplying matrices, 2) the Efficient Java Matrix Library (EJML), and 3) the Matlab engine, with the two matrices multiplied in the Matlab workspace (“A * B”). While the EJML matrix multiplication algorithm is very fast, the Matlab algorithm is much faster. I suspect that MathWorks uses multithreading / parallelization to achieve these blistering speeds. Note that the CPU was an 8-core Intel i9 processor.
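For reference, the “conventional way” here is the standard triple-nested loop. A minimal sketch (the class and method names are my own, not necessarily those used in the downloadable project) might look like this:

```java
/** Conventional O(n^3) matrix multiplication: a straightforward triple-nested loop. */
public class ConventionalMultiply {

    public static double[][] multiply(double[][] a, double[][] b) {
        int rows = a.length;        // rows of A
        int cols = b[0].length;     // columns of B
        int inner = b.length;       // shared inner dimension
        double[][] c = new double[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                double sum = 0.0;
                for (int k = 0; k < inner; k++) {
                    sum += a[i][k] * b[k][j];   // dot product of row i and column j
                }
                c[i][j] = sum;
            }
        }
        return c;
    }
}
```

This single-threaded loop is what the optimized libraries are being compared against; its cubic growth in work is why the timing gap widens so dramatically at 5,000 x 5,000.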

The following is the same plot but with specific numbers for the multiplication of matrices with 1,000 x 1,000 elements, 2,500 x 2,500 elements, and 5,000 x 5,000 elements. For example, for a 5,000 x 5,000 matrix multiplication operation, the conventional approach takes about 1,064 seconds. The EJML algorithm is much faster at 76 seconds. But the Matlab algorithm is even faster at just over 2 seconds.

You may say, “Well, I don’t multiply large matrices very often …”. But you may want to perform real-time machine vision (perception) applications, which require processing large matrices of image pixel elements in very short amounts of time. Ah – now I have your attention. That will be the next article.

The following is a simple Java test bench project layout for demonstrating the interface with the Matlab engine API. It’s a simple setup – there’s a driver class, a timer class, a class for building the two matrices to be multiplied, and the three different matrix multiplication classes (conventional, EJML, and Matlab). The source code is available as a download at the end of this article.

The Java-to-Matlab interface (not a Java Interface) class, which connects to the Matlab workspace, invokes four basic calls to the Matlab engine, as shown below.

The Java method that performs the direct interface with the Matlab workspace is shown below.
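For readers who can’t run the video or download right away, here is a minimal sketch of what such a method can look like using MathWorks’ documented `MatlabEngine` API. This is my own illustrative version, not the project’s exact code, and it requires “engine.jar” from a licensed Matlab installation on the classpath:

```java
import com.mathworks.engine.MatlabEngine;

/**
 * Minimal sketch of a Java-to-Matlab workspace interface.
 * Requires engine.jar from a licensed Matlab installation on the classpath.
 */
public class MatlabMatrixMultiply {

    public static double[][] multiply(double[][] a, double[][] b) throws Exception {
        MatlabEngine eng = MatlabEngine.startMatlab();  // launch the Matlab engine
        try {
            eng.putVariable("A", a);       // push operand A into the Matlab workspace
            eng.putVariable("B", b);       // push operand B into the Matlab workspace
            eng.eval("C = A * B;");        // let Matlab perform the multiplication
            return eng.getVariable("C");   // pull the result back into Java
        } finally {
            eng.close();                   // shut the engine down
        }
    }
}
```

In a real benchmark you would start the engine once and reuse it across cycles rather than starting and closing it per call – engine startup takes several seconds and would otherwise swamp the measurement.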

Note that after the Matlab engine is launched, there is a “warm-up” period during which some form of optimization appears to take place, so the Matlab algorithm needs to be run several times before it reaches full speed. In the plot below, the Matlab engine is allowed to warm up for 20 cycles, and then all three algorithms are tested together for the next 40 cycles. Once warmed up, the Matlab algorithm continues to perform at these blistering speeds until the Matlab engine is shut down (it could run for weeks at the same high performance) – thus the warm-up phase is only required once, right after the engine has been started.
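The warm-up-then-measure pattern described above can be sketched as a small helper. This is a hypothetical utility of my own, not the test bench’s actual timer class; the real project times the Matlab, EJML, and conventional algorithms together:

```java
/**
 * Sketch of the warm-up-then-measure benchmarking pattern:
 * run untimed warm-up cycles first, then record per-cycle durations.
 */
public class WarmupBench {

    /** Runs task for warmupCycles untimed cycles, then returns
     *  nanosecond durations for the next measuredCycles cycles. */
    public static long[] run(Runnable task, int warmupCycles, int measuredCycles) {
        for (int i = 0; i < warmupCycles; i++) {
            task.run();                            // untimed warm-up cycles
        }
        long[] durations = new long[measuredCycles];
        for (int i = 0; i < measuredCycles; i++) {
            long start = System.nanoTime();
            task.run();                            // timed measurement cycle
            durations[i] = System.nanoTime() - start;
        }
        return durations;
    }
}
```

With warm-up handled separately, the recorded cycles reflect steady-state performance rather than one-time startup and optimization costs.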

The following Java console output shows part of the run-time results from the Java test bench project.