The Rust programming language is often viewed as the successor to C/C++, poised to eventually dominate the development of safety-critical kernels and low-level applications. Its efficiency and versatility, however, extend far beyond safety-critical work, positioning it for success in many other fields. Rust holds significant potential for developing web applications, especially API layers, where speed is also of great importance.
In the development of web and mobile applications, creating the frontend applications that run in the user’s browser or on their mobile phone represents only half of the work. During operation, these interfaces interact with one or more business systems, giving rise to various middleware or backend for frontend (BFF) layers. Several common issues arise when trying to integrate a frontend application with a system primarily organized around business processes and logic.
One fundamental issue is that frontend developers prefer to work with a unified API, such as REST or GraphQL, which returns only the necessary data, in the required form, for a particular function of the frontend application. In contrast, business systems expose varied and often overly “chatty” APIs, and when multiple systems must be integrated, even their protocols may differ. Performance issues also frequently arise because business systems are often not designed to handle tens of thousands of requests per second, necessitating some form of data caching.
Another challenge arises when data and processes from different systems need to be combined and standardized. For example, different systems may provide user identification, flight information, and reservations, while offers from a CRM system come from a fourth source, and these link to contents from a CMS system. Managing this complexity on the frontend is not advisable, not least for security reasons; it’s crucial that the business logic remains hidden. This is where the BFF layer (or middleware) plays its role, acting as an intermediary between the frontend and business systems.
What Makes a Good BFF?
From a customer experience perspective, the most critical expectation of a BFF layer is speed. Technically, this means operating with the shortest possible response time and utilizing the available hardware resources as efficiently as possible. It’s vital that the BFF layer minimally increases the latency in serving requests from the customer to the business systems.
From a business standpoint, security is also crucial: the BFF layer itself should be as secure as possible, and it should also protect the business systems from both overload attempts (e.g., through efficient caching, limiting the number of concurrent requests) and malicious activities (e.g., by checking and pre-filtering incoming data).
The first crucial decision in developing a BFF layer is selecting the programming language. PHP, Python, Java, C#, Go, and JavaScript / TypeScript are all viable options, each with its advantages and disadvantages. Rust has recently joined this list, and at first glance it seems a steep choice: a relatively new, low-level language that is not particularly easy to learn.
However, its popularity is rapidly growing in areas where performance and security are critical: cloud providers base their fundamental systems on it (see Amazon Firecracker), it’s a common choice in blockchain system development, and it’s beginning to infiltrate operating system development. In the past few years, many have started using it for web development as well, where it has developed a particularly robust ecosystem.
Challenges of a Sports Betting System
At Mito Digital, the final push towards Rust adoption came from implementing a sports betting system. This system needed to manage rapidly changing data for tens of thousands of betting events. The business system couldn’t handle the thousands of requests per second from users, as it wasn’t designed for this. This led to the need for a unique web application that stores sports event data in memory and serves user requests directly from there.
We only request the complete event database from the business system once a day and then the changes every few seconds. Our application receives the data in XML format, processes it, stores it in in-memory data structures, and indexes it from several perspectives for efficient searching: sometimes using simple B-Tree indexes, other times using a full-text search engine (tantivy).
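As an illustration of this kind of store, here is a simplified sketch with a primary map and a secondary B-Tree index keyed by start time; the `SportsEvent` type and its fields are hypothetical, not our production schema:

```rust
use std::collections::BTreeMap;

// Hypothetical, simplified event record; the real schema is far richer.
#[derive(Debug, Clone)]
struct SportsEvent {
    id: u64,
    league: String,
    start_ts: i64, // Unix timestamp of the event start
}

// Primary map plus a secondary B-Tree index keyed by start time,
// so "events in a time window" becomes a cheap ordered range scan.
struct EventStore {
    by_id: BTreeMap<u64, SportsEvent>,
    by_start: BTreeMap<(i64, u64), u64>, // (start_ts, id) -> id
}

impl EventStore {
    fn new() -> Self {
        Self { by_id: BTreeMap::new(), by_start: BTreeMap::new() }
    }

    fn insert(&mut self, ev: SportsEvent) {
        self.by_start.insert((ev.start_ts, ev.id), ev.id);
        self.by_id.insert(ev.id, ev);
    }

    // All events starting in [from, to), in chronological order.
    fn starting_between(&self, from: i64, to: i64) -> Vec<&SportsEvent> {
        self.by_start
            .range((from, 0)..(to, 0))
            .filter_map(|(_, id)| self.by_id.get(id))
            .collect()
    }
}

fn main() {
    let mut store = EventStore::new();
    store.insert(SportsEvent { id: 1, league: "NB I".into(), start_ts: 100 });
    store.insert(SportsEvent { id: 2, league: "NB I".into(), start_ts: 300 });
    assert_eq!(store.starting_between(100, 300).len(), 1);
}
```

The composite `(start_ts, id)` key keeps entries unique even when several events share a start time, while still allowing an ordered range scan.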
Interestingly, the first version of this system was developed in Go. We faced two main issues: slow XML processing and high memory usage, pushing the limits of our available hardware.
The new Rust-based implementation solved both issues: XML processing time and memory usage both dropped to a fraction of their previous levels.
Getting acquainted with Rust was relatively quick, as its basic structures are similar to previously known languages (Go, C#, PHP, JavaScript). The real novelty was the borrow checker. This compile-time check ensures Rust’s two unique features: safe memory management without a garbage collector and essentially risk-free concurrent programming (fearless concurrency). Getting used to the borrow checker took some time, but once we overcame this, we could proceed with development without significant problems.
Rust allows for both synchronous and asynchronous programming. The asynchronous programming keywords async/await might be familiar from C# and JavaScript, and they work similarly here. Several async runtime implementations exist for Rust; we chose tokio-rs, as the Warp and Axum web frameworks we prefer are built on it. Thanks to asynchronous programming, the application only needs to run a few threads concurrently: one thread handles specific background tasks (such as regularly downloading data changes in the sports betting case), and roughly one thread per CPU core is launched by the tokio runtime to asynchronously serve incoming web requests.
This Is Where ArcSwap Comes In
In a multi-threaded environment, synchronizing access to shared data can be a significant issue. If multiple threads modify the same memory area simultaneously, the result can easily be a system crash or a substantial security breach. Most systems avoid this problem with locks and mutexes, ensuring that only one thread works with a data structure at a time.
In our case, ArcSwap allowed us to handle these race conditions mostly in a lockless manner, enabling threads serving client requests and background tasks to work without blocking each other, limited only by the available CPU performance.
ArcSwap is a data structure referencing another data structure via a pointer, which can be swapped out using atomic operations. Once packaged in an ArcSwap, the referenced data structure becomes read-only, allowing multiple threads to safely read it concurrently without locks. If the data needs to be modified, a cheap copy-on-write duplicate is made, where we perform the necessary modifications, then swap the data in the ArcSwap with the new version in a single atomic CPU operation, discarding the old one. At this point, the memory occupied by the discarded data is immediately freed, without waiting for a garbage collector run. This is a significant reason for the drastic reduction in memory usage.
Beyond ArcSwap, a plethora of ready-made concurrent data structures can be found on crates.io. This is the central repository for Rust’s package manager, Cargo, playing the same role as npm and https://npmjs.org for JavaScript, or Composer and https://packagist.org for PHP. For example, crossbeam provides communication channels similar to Go’s channels, dashmap provides a concurrent HashMap data structure, and evmap offers a lock-free, eventually consistent HashMap implementation.
In statically typed programming languages, a significant problem can be the need to produce a lot of boilerplate code during implementation. This can be the case when converting a data structure to and from JSON, or when linking a REST API endpoint with the URL routing layer. Most languages use some form of annotation to eliminate this, from which boilerplate code can be generated at runtime through reflection.
Rust addresses this problem with macros. Like C macros, they are expanded into code before the actual compilation step, which makes the process significantly simpler and more comfortable than, for example, Go’s go generate, and avoids the runtime overhead of reflection. With derive macros, for instance, converting a Rust data structure to and from JSON takes a single attribute (see serde_json). Macros greatly simplify many routine operations in Rust, allowing developers to focus on solving the task at hand.
Observability is crucial in a microservices-based system. The tokio tracing library makes both logging and tracing easy to implement. Since Rust is statically compiled, there is no dynamic runtime instrumentation as in Java or C#, but macros can insert the necessary code for logging and metrics with minimal effort. The collected data can be forwarded to almost any Application Performance Monitoring (APM) tool through various adapters. For example, log data can go to ElasticSearch or Grafana Loki, tracing spans to any OpenTelemetry-compatible collector, and metrics to Prometheus.
“ Rust’s major advantage in Kubernetes or serverless environments is that the resulting executable has minimal runtime dependencies. ”
If compiled for a statically linked musl libc environment, the container needs to contain only the executable and some additional configuration (e.g., timezone, locale data), making the entire container only a few MB in size. The language’s runtime overhead is very small, and a Rust application starts up in moments compared to a .NET or Java application (useful, for example, in the case of AWS Lambda cold starts).
How Fast Is It?
If the response can be served from an in-memory data structure and doesn’t require generating a large JSON, it can serve tens of thousands of requests per second with a response time of a few milliseconds, comparable to the speed of serving static files from a web server. If producing the response is more complex or requires generating a larger JSON response, the rate might drop to a few thousand requests per second on a 4 vCPU machine, with response times in the 10-100ms range. If the request cannot be served from memory, then the response time of the called backend service will be the determining factor, not the Rust-based application.
The number of concurrent requests is not a problem due to asynchronous operation: each incoming connection consumes only a minimal amount of memory, and running out of file descriptors for sockets is likely to be a problem before memory usage becomes an issue.
“ Overall, our experience with Rust has been very positive, and it’s not as difficult to use as we initially feared. ”
There are still shortcomings: the language is young, so there are significantly fewer mature tools available than for, say, C# or Java. Finding Rust developers is also not easy; we tend to train backend developers from other languages in-house. Fortunately, the language’s popularity is rapidly growing according to the TIOBE index and GitHub data, so these issues should resolve over time.
Sándor Apáti
works at Mito Digital as a Software Architect. Over the past 25 years, he has gained significant experience in backend and system development, with his professional focus besides Rust being on DevOps and the cloud. He is a certified AWS Solutions Architect Professional and Advanced Networking Specialist.