


Notes on Using MPS (Metal Performance Shaders) with DJL's PyTorch Backend

Created on Tue Oct 31 2023 • Last Updated on Tue Oct 31 2023

🏷️ Java | DJL | Machine Learning | PyTorch | MPS

Warning

This article was automatically translated by OpenAI (gpt-4o). It may be edited eventually, but please be aware that it may contain incorrect information at this time.

DJL (Deep Java Library) has supported MPS since version 0.20.0.
I couldn't find any samples, so I decided to try it out and take some notes.

The sample code is here. I tested it on an Apple M2 Pro (32 GB of memory) running macOS 13.5.2.

It seems you can create a Device instance with Device.of("mps", 0) as shown below:

import ai.djl.Device;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.types.Shape;

public class Main {
    public static void main(String[] args) {
        int dimension = 1024;
        // Select the Apple GPU through Metal Performance Shaders
        Device device = Device.of("mps", 0);
        //Device device = Device.cpu();
        System.out.println(device.isGpu()); // false (MPS is not reported as a GPU device)
        // NDArrays created by this manager are allocated on the MPS device
        try (NDManager manager = NDManager.newBaseManager(device)) {
            NDArray array1 = manager.randomUniform(0, 1, new Shape(dimension, dimension));
            NDArray array2 = manager.randomUniform(0, 1, new Shape(dimension, dimension));
            // All of these operations run on the selected device
            NDArray result = array1.add(array2).mul(10).matMul(array1.transpose()).div(5);
            System.out.println(result);
        }
    }
}

Although MPS itself is not a GPU device type (which is why isGpu() returns false), the MPS API runs the computation on the Apple GPU. Therefore, while code using MPS is running, the % GPU value in Activity Monitor will be greater than 0.
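
To make that easier to observe, here is a rough sketch that runs the same matrix multiplication on the CPU and then on the MPS device. This is not part of the original sample: the Benchmark class name and the 4096 dimension are arbitrary choices, and the toFloatArray() call is only there to force the result to be materialized before the timer stops.

import ai.djl.Device;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.types.Shape;

public class Benchmark {
    public static void main(String[] args) {
        int dimension = 4096;
        for (Device device : new Device[] {Device.cpu(), Device.of("mps", 0)}) {
            try (NDManager manager = NDManager.newBaseManager(device)) {
                NDArray a = manager.randomUniform(0, 1, new Shape(dimension, dimension));
                NDArray b = manager.randomUniform(0, 1, new Shape(dimension, dimension));
                a.matMul(b); // warm-up so one-time initialization does not skew the timing
                long start = System.nanoTime();
                NDArray c = a.matMul(b);
                c.toFloatArray(); // read the result back to make sure the computation has finished
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println(device + ": " + elapsedMs + " ms");
            }
        }
    }
}

If MPS is actually being used, Activity Monitor should show % GPU above 0 only during the second iteration.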
