How to use a Rust library inside a Scala app

MOIA Engineering
Jan 24, 2023

by: Bogdan Kolbik and Fabian Braun

We at MOIA use both Scala and Rust together in one application, and it’s great. Here’s why we do it and how we do it.

Why

It all started with an application for optimizing vehicle routes. Optimization problems are quite tricky: there can be many feasible solutions, but it’s hard to find the best one. Moreover, to ensure that your solution really is the best one, you usually must compare it to every other possible solution. In our case, that could take years. Instead, we search for a fixed amount of time and return the best solution found so far. In such a setting, it’s important to perform as many solution evaluations as possible before the time is up.
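This "search until the deadline, keep the best so far" idea can be sketched as a time-boxed loop. The `cost` and `next_candidate` functions below are hypothetical stand-ins for the real evaluation and neighborhood logic:

```rust
use std::time::{Duration, Instant};

/// Hypothetical stand-in for evaluating a candidate solution (lower is better).
fn cost(solution: u64) -> u64 {
    solution.wrapping_mul(2654435761) % 1000
}

/// Hypothetical stand-in for deriving the next candidate from the current one.
fn next_candidate(solution: u64) -> u64 {
    solution.wrapping_add(1)
}

/// Evaluate candidates until the time budget runs out and
/// return the best solution seen so far, together with its cost.
fn solve_within(budget: Duration) -> (u64, u64) {
    let deadline = Instant::now() + budget;
    let mut best = 0u64;
    let mut best_cost = cost(best);
    let mut candidate = best;
    while Instant::now() < deadline {
        candidate = next_candidate(candidate);
        let c = cost(candidate);
        if c < best_cost {
            best = candidate;
            best_cost = c;
        }
    }
    (best, best_cost)
}

fn main() {
    let (best, best_cost) = solve_within(Duration::from_millis(50));
    println!("best={best} cost={best_cost}");
}
```

The answer quality depends on how many iterations fit into the budget, which is exactly why raw per-evaluation speed matters so much here.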

To hit the ground running we started off with a Scala application using a generic open-source solver written in Java. It allowed us to deliver decent optimization quality quickly when MOIA launched the service. But later we hit a glass ceiling: At some point, we observed that the open-source solver was becoming a limiting factor instead of enabling us. We decided to write our own optimizer — tailored to our specific needs — in Rust.

We chose Rust for the following reasons:

  • High runtime-performance.
  • Low-level control over object creation. Memory safety without a garbage collector.
  • An expressive language that we enjoy programming, coming from Scala.
  • Previous success rewriting another optimizer in Rust at MOIA.

Minimizing the rewrite scope

Our optimization service is pretty big, and rewriting big services has a problem: it takes a lot of time. Most of the rewrite won’t even provide business value (i.e., wasted time), but we must do it because we’re changing the language, right? Still, we would like to minimize the scope of the rewrite.

Another problem with big rewrites is the big-bang change. When we replace one service with another, something will definitely go wrong. The bigger the change, the more things will go wrong. So, again, we want to minimize the scope of changes that happen all at once.

How can we minimize the scope of a rewrite?

If we look at our application, we can see the solver core (aka “optimizer”) and a lot of infrastructure code (aka “coordinator”). The coordinator does everything before and after the optimization: serving gRPC calls, aggregating data from DynamoDB tables, consuming events, etc.

Rewriting the optimizer would provide a lot of benefits. Rewriting the coordinator would double the effort and provide no extra benefits.

This looks like a great rewrite boundary.

How about a separate service?

We could build the new optimizer as a separate service called by the coordinator. That’s a valid approach, but it brings some networking-specific problems: How will the coordinator find and access the optimizer? How do we make sure only the coordinator can call the optimizer? What do we do if the optimizer isn’t responding? Surely these are all resolvable issues, but it is much more complex than the single function call we had before.

It would be nice to keep that simplicity if possible.

Plan

Write the optimizer as a stateless function in Rust. Use the existing coordinator code in Scala. Make them work together as parts of a single application.

A schema: the old optimizer gets replaced by the new optimizer. The infrastructure logic remains unchanged.

How

Calling Rust with JNI

The viability of our plan depends on whether we can call non-Java code from Scala. And yes, we can do it.

The JVM provides the Java Native Interface (JNI), an interface that lets Java work with binary dynamic libraries. It looks like a relic from the past designed to simplify migration from C/C++, but it is still used a lot, mostly by Android developers.

Implementing the JNI call

JNI cannot call arbitrary libraries, only those that were built with JNI in mind. The intended usage looks like this:

  1. Write a Java class to be our interface.
  2. Generate a C++ header file from that Java class.
  3. Write a C++ implementation for the header file.

That looks quite complicated, especially if we remember that we are going to call Rust from Scala and not C++ from Java. Luckily, with Rust, we can use macros provided by robusta_jni to make it a bit simpler.

To implement a JNI integration we need to:

  1. Provide a native API from Rust, wrapped by robusta macros.
  2. Create a matching class in Scala with methods marked as @native.

With this approach, we must manually make sure that the interfaces on both sides match, but that is the price we pay for a simpler process.

The Scala-side API looks like this:

package io.moia.vso.breakoptimizer

class NativeOptimizer() {
  @native def solve(message: String): String
}

Our Rust-side API is mostly boilerplate code adjusted to match the Scala interface.

use robusta_jni::bridge;

#[bridge]
mod jni {
    // --snip--

    #[derive(Signature, TryIntoJavaValue, IntoJavaValue, FromJavaValue, TryFromJavaValue)]
    #[package(io.moia.vso.breakoptimizer)]
    pub struct NativeOptimizer<'env: 'borrow, 'borrow> {
        #[instance]
        raw: AutoLocal<'env, 'borrow>,
    }

    impl<'env: 'borrow, 'borrow> NativeOptimizer<'env, 'borrow> {
        pub extern "jni" fn solve(message: String) -> JniResult<String> {
            // --snip--
        }
    }
}

Here we have only one function with a single argument and no state stored between the calls. With robusta, we can do much more but we avoid that for simplicity.

Connecting to the application with the Scala wrapper library

The next problem is: how should we organize the code? On the one hand, we want both the Scala and Rust APIs to be in the same repository to keep them in sync. On the other hand, the optimizer and the coordinator are different applications and we want to store them separately.

We used a Trojan horse approach for this. We created a Scala library that works as a thin wrapper around the Rust application. It has no business logic; it just provides an API to call the Rust code, and it is small enough to be in the same repository as the optimizer.

A Trojan horse illustration with Rust code pretending to be a Scala library.

The outside application can install and use this library as a usual Scala dependency, but behind the interface, the library extracts the OS-specific Rust binary from resources and calls it.

This way the coordinator doesn’t even know it uses Rust. It only sees the Scala interface.

The Rust code is compiled twice: as a Linux library and as a Mac library. Both libraries are added as static resources to the Scala library. The resulting Scala library is uploaded to JFrog Artifactory.

Passing data with Protobuf

Both optimization input and output are complex structures with nested objects and arrays. We need to map these values between JVM and Rust. Theoretically, we can do this with JNI, but it looked like a lot of library-specific effort.

Instead, we pass a single protobuf-serialized message. We use protobuf a lot in the company anyway, so it is a well-known tool. Also, it would be useful if we later decide to turn the optimizer into a separate service.

We store protobuf definitions in the same repository and generate messages for both languages. This way we can easily have messages of any complexity and keep them in sync between Scala and Rust parts.
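The contract between the two sides boils down to “one serialized message in, one serialized message out”. Here is a toy sketch of that shape, with hand-rolled structs and a comma-separated text encoding standing in for the generated protobuf messages and their binary serialization:

```rust
/// Toy stand-ins for the generated protobuf request/response messages.
struct SolveRequest {
    stops: Vec<String>,
}

struct SolveResponse {
    route: Vec<String>,
}

/// Toy decoding in place of protobuf deserialization.
fn decode_request(message: &str) -> SolveRequest {
    SolveRequest {
        stops: message.split(',').map(str::to_owned).collect(),
    }
}

/// Toy encoding in place of protobuf serialization.
fn encode_response(response: &SolveResponse) -> String {
    response.route.join(",")
}

/// The whole cross-language surface: one serialized message in, one out.
fn solve(message: String) -> String {
    let request = decode_request(&message);
    // Hypothetical "optimization": reverse the stop order.
    let route = request.stops.into_iter().rev().collect();
    encode_response(&SolveResponse { route })
}

fn main() {
    println!("{}", solve("depot,a,b".to_string())); // prints "b,a,depot"
}
```

Because the boundary is a single opaque message, the nested structure of the real input and output lives entirely in the shared protobuf definitions, and neither side needs JNI-level mappings for it.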

We don’t use the generated protobuf messages in the external Scala API, though. Instead, we have handmade case classes and map the data using teleproto.

Consequences

Error handling

When Rust code panics, the Scala application crashes and there is nothing we can do about it. So, we need to handle Rust errors gracefully.

Our rule is “Don’t Panic”: never do any unsafe operations and have a fallback for every case. This turned out to be surprisingly easy because the optimizer doesn’t rely on any external services, and its input is already validated and typed.

If we missed something and an error occurs, we try to catch it and forward it to the Scala application. This is why our API returns JniResult<String>. In the success case, it is a serialized response message. In the failure case, it is an error message; robusta will handle this message and throw an exception we can catch on the Scala side.
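One way to implement this catch-and-forward behavior is to wrap the inner solver in std::panic::catch_unwind and turn any panic payload into an error value. This is a sketch, not MOIA's actual code: the real API returns robusta's JniResult, modeled here as a plain Result<String, String>:

```rust
use std::panic;

/// Inner solver that might panic despite our best efforts.
fn solve_inner(message: &str) -> String {
    if message.is_empty() {
        panic!("empty input");
    }
    format!("solved: {message}")
}

/// Outer wrapper: convert any panic into an error value that the JVM
/// side can turn into a catchable exception.
fn solve(message: &str) -> Result<String, String> {
    panic::catch_unwind(|| solve_inner(message)).map_err(|cause| {
        // Panics raised with a string literal carry a &str payload.
        match cause.downcast_ref::<&str>() {
            Some(reason) => format!("optimizer panicked: {reason}"),
            None => "optimizer panicked".to_string(),
        }
    })
}

fn main() {
    println!("{:?}", solve("route x"));
    println!("{:?}", solve(""));
}
```

The Ok branch carries the serialized response and the Err branch carries the error message, mirroring the two cases of JniResult<String> described above.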

Observability

The optimizer provides a stateless function and has no external dependencies, but we still want some observability.

The only side effect the optimizer has is writing logs. We had to make sure both the optimizer and the coordinator write logs in the same format so Kubernetes can handle them as a single stream. We also use env_logger to keep the log level in sync.

As for metrics, the optimizer collects them and returns them as a part of the response message. Later the Scala application can decide how to record them.

Shadow mode

The change is encapsulated within the coordinator, so we could run the new optimizer in shadow mode for some time. This allowed us to find bugs before they caused any harm and to make the switch only once we were confident enough, reducing the scope of change even further.

Result

This combination has been running in production for a year now and has never caused any problems after its initial development. And the new optimizer runs faster: we managed to reduce the deadlines several times without compromising solution quality.

In summary, JNI + Protobuf is a viable approach to combining Scala and Rust to get the best of both languages, and it is especially useful for a gradual migration between languages.
