Vaadin Flow is a full-stack Java web framework that runs all UI logic server-side on the JVM — eliminating the need for REST APIs or duplicate JavaScript. This lets teams deliver full-stack features in minutes or hours instead of days, speeding up development for internal tools, enterprise dashboards, and modern business apps.
Because Vaadin relies on the JVM to run server-side UI logic, manage user sessions, and process UI events in real time, your choice of JVM isn’t just a runtime detail – it directly affects memory use, responsiveness, and scalability.
Generally speaking, all the leading JVM vendors provide solid options for running modern, business-grade Vaadin applications. In this post, we compare them with a focus on practical concerns like memory pressure, CPU usage, garbage collection pauses, and startup performance, especially in containerized and cloud environments where every millisecond and megabyte counts.
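To make that concrete, here is a minimal Vaadin Flow view; the route, class name, and labels are illustrative rather than taken from a real app. The click listener is plain Java running on the server, and Vaadin synchronizes the resulting UI changes to the browser without a hand-written REST endpoint or any duplicated JavaScript.

```java
import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.notification.Notification;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.component.textfield.TextField;
import com.vaadin.flow.router.Route;

// Illustrative Vaadin Flow view: all of this logic lives and runs on the server.
@Route("hello")
public class HelloView extends VerticalLayout {

    public HelloView() {
        TextField name = new TextField("Your name");
        Button greet = new Button("Greet", click ->
                // Plain Java event handler - no REST endpoint or duplicated JS required
                Notification.show("Hello, " + name.getValue()));
        add(name, greet);
    }
}
```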
What it takes to run a Vaadin app at scale
Keeping the UI state in JVM memory for each user session can mean hundreds of KB per user on top of a ~200 MB baseline for the app. Modern servers handle that comfortably, but as user counts grow, garbage collection efficiency and memory footprint become critical for scalability and smooth UI updates. Here are the typical numbers:
| Metric | Typical Value | Notes |
| --- | --- | --- |
| Memory usage per user session cache | 50 KB – 1 MB | Depends on UI complexity and session data. Most apps stay well below 500 KB. |
| Baseline server memory usage (idle app) | 100 MB – 200 MB heap | For modern apps on JDK 17+, with default GC and minimal traffic. |
| Concurrent users per 1 GB heap | 1,000 – 2,000 users | When memory use per session is <500 KB. |
| Server response latency | 20–200 ms | With optimized GC and stable connections (using WebSocket). |
| Cold startup time (typical) | 2–4 seconds | Sub-second startups are possible with CRaC or warmed JVMs. |
| Startup time with CRaC | <300 ms (best case) | Requires build-time setup and support in the base image. |
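As a rough illustration of how those figures combine, the sketch below estimates how many concurrent sessions fit in the configured heap. The 200 MB baseline and 500 KB per-session values are assumptions lifted from the table; profile your own application (heap dumps, APM, or a load test) before using an estimate like this for capacity planning.

```java
// Back-of-the-envelope capacity estimate based on the figures in the table above.
// The baseline and per-session sizes are assumptions - measure your own app.
public class SessionCapacityEstimate {

    public static void main(String[] args) {
        long maxHeapBytes = Runtime.getRuntime().maxMemory();  // respects -Xmx and container limits
        long baselineBytes = 200L * 1024 * 1024;               // ~200 MB idle baseline (assumed)
        long perSessionBytes = 500L * 1024;                    // ~500 KB of UI state per session (assumed)

        long headroom = Math.max(0, maxHeapBytes - baselineBytes);
        long estimatedSessions = headroom / perSessionBytes;

        System.out.printf("Max heap: %d MB -> roughly %d concurrent sessions%n",
                maxHeapBytes / (1024 * 1024), estimatedSessions);
    }
}
```

With a 1 GB heap this works out to roughly 1,700 sessions, consistent with the 1,000–2,000 range in the table.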
In cloud deployments (e.g., containers and Kubernetes), CPU usage and startup/warm-up times likewise determine how quickly new instances can absorb load changes.
Choosing the right JVM vendor for your Vaadin app
While all modern JVMs are based on OpenJDK, not all are created equal—especially when it comes to running demanding server-side Vaadin applications. From faster warm-ups to smarter garbage collection, each vendor brings something unique to the table. Some focus on low memory usage, others on low latency or long-term support, and a few offer commercial features for mission-critical deployments.
Below we summarize how major Java runtimes compare on these points for business Vaadin applications:
- Eclipse Adoptium (Temurin): The default community OpenJDK build, Temurin uses the standard HotSpot VM (G1 GC by default) and delivers baseline performance and memory management equivalent to Oracle’s reference JDK. It has no proprietary optimizations for GC or startup, but its broad industry backing and TCK certification ensure reliability; it’s essentially a safe, well-tested choice with predictable behavior for Vaadin servers.
- Alibaba Dragonwell: Dragonwell adds unique enhancements targeting large-scale services. Its G1ElasticHeap can return unused heap memory to the OS, trimming the JVM’s memory footprint, and its JWarmup feature pre-compiles hot code based on prior profiling to reduce warm-up time and avoid CPU spikes during just-in-time compilation. These optimizations can benefit Java apps by reducing garbage buildup and latency as user count grows, though Dragonwell is primarily Linux (x86_64) only.
- Amazon Corretto: Corretto is Amazon’s OpenJDK distribution, hardened by running at AWS scale. It includes patches (contributed back upstream) aimed at improving performance and stability under heavy workloads. For example, Amazon has tuned garbage collection scheduling and memory management to prevent out-of-memory errors in large services. In practice, Corretto behaves like vanilla OpenJDK in memory footprint and startup, but with Amazon’s extra bug fixes and optimizations, it handles high load scenarios gracefully; the tradeoff is that official support is focused on AWS environments.
- Azul (Zulu & Platform Prime): Azul’s Zulu builds are TCK-tested OpenJDK binaries similar in performance to other standard distributions. The standout is Azul Platform Prime (Azul Zing), a specialized JVM designed for maximum throughput and consistent low latency under stress. The platform uses the C4 pauseless GC to virtually eliminate stop-the-world pauses, which means even data-intensive Vaadin apps won’t suffer noticeable GC hiccups. It also features ReadyNow! technology to accelerate warm-up (using profile data to JIT-compile code eagerly) and a Falcon JIT with an option to offload compilation threads, keeping CPU utilization smooth during peak loads. Azul Prime is a commercial product (with higher memory overhead for its GC) — it excels in demanding, latency-sensitive deployments, whereas Azul’s free Zulu offering covers more routine needs.
- BellSoft Liberica JDK: Liberica is known for its small footprint and cloud-ready packaging. It provides one of the smallest base images among JDKs (thanks to a slimmed-down JRE), which helps reduce container memory usage and startup disk overhead. In runtime behavior it’s equivalent to other OpenJDK-based JVMs (using G1 GC, etc.), but BellSoft also offers an optional build with CRaC (Coordinated Restore at Checkpoint) support. CRaC can snapshot a running Java instance and restore it in milliseconds, drastically improving startup times – BellSoft demonstrated up to 164× faster startup with CRaC in a Spring app. This is a big advantage for Vaadin applications that need to scale up quickly (see the CRaC sketch after this list).
- Red Hat Build of OpenJDK: Red Hat’s distribution closely tracks OpenJDK but distinguishes itself by including the Shenandoah GC – a low-pause garbage collector ideal for large heaps. For Vaadin applications maintaining extensive in-memory UI state, Shenandoah can greatly reduce GC pause times (improving responsiveness) by doing most GC work concurrently. This comes with a slight throughput cost and higher CPU usage during collection, but it keeps latency consistent. Red Hat’s JDK is production-hardened (Red Hat runs it for its middleware), yet a practical consideration is that official binaries/support are limited to RHEL and Windows platforms. In summary, it’s a strong choice for minimizing GC pauses in large-scale Vaadin apps when your deployment aligns with Red Hat’s platform support.
- Oracle (Oracle JDK/OpenJDK): Oracle’s JDK is the baseline for performance — it uses the highly optimized HotSpot engine and G1 garbage collector by default, offering robust throughput and mature tuning for typical enterprise loads. It also includes Oracle’s ZGC (Z Garbage Collector) for those who need very low latency with huge heaps. In a Vaadin context, Oracle JDK will perform on par with the community builds for memory and CPU usage. Oracle JDK is well-tested and reliable; just weigh whether any of the vendor-specific extras described above (such as CRaC builds or pauseless collectors) matter for your Vaadin deployment’s needs.
- SAP SapMachine: SapMachine is SAP’s downstream OpenJDK tailored for its enterprise cloud platform (SAP BTP). It inherits standard HotSpot behavior and GC algorithms from OpenJDK, so memory and CPU characteristics are similar to other distributions. The real focus is on stability and diagnostic enhancements for large, long-running business applications. SapMachine is proven in SAP’s own high-user-count services, and SAP sometimes includes patches for issues encountered in their products (with many fixes contributed upstream). For a Vaadin app, this means you get a reliable, up-to-date JVM that handles heavy loads well. It doesn’t offer builds for older Java versions like 8 (only 17 and newer), but it is a solid choice if you value SAP’s testing pedigree.
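Because CRaC comes up for several of these distributions, here is a minimal sketch of how an application coordinates with it, assuming the org.crac library is on the classpath and a CRaC-enabled JDK (such as Liberica’s CRaC build) is in use. The connection-pool calls are hypothetical placeholders for whatever your app keeps open; the Resource callbacks and global-context registration are the org.crac coordination points.

```java
import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

// Sketch of CRaC coordination: release external resources before the checkpoint
// and re-create them after restore. "connectionPool" is a hypothetical placeholder
// for whatever your app keeps open (DB pools, schedulers, sockets).
public class PoolCracResource implements Resource {

    // Keep the resource reachable for the app's lifetime; the context may hold it only weakly.
    private static final PoolCracResource INSTANCE = new PoolCracResource();

    public static void register() {
        Core.getGlobalContext().register(INSTANCE);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        // connectionPool.close();   // drop sockets/file handles before the snapshot is taken
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        // connectionPool.reopen();  // re-establish connections in the restored JVM
    }
}
```

On CRaC-enabled builds the checkpoint is typically taken externally (for example with jcmd’s JDK.checkpoint command after starting the JVM with -XX:CRaCCheckpointTo) and restored with -XX:CRaCRestoreFrom; the exact tooling varies by JDK build.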
With multiple vendors now offering OpenJDK builds—each with subtle differences in garbage collection behavior, warm-up time, and resource efficiency—picking the right JVM for a Vaadin workload can meaningfully impact both developer experience and production cost.
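One practical way to quantify those differences on your own workload is to watch the standard GC MXBeans during a representative load test. The small utility below is an illustrative sketch, not vendor-specific tooling: it prints the active collectors with their cumulative collection counts and accumulated collection time, and it runs unchanged on any of the distributions above.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints each active garbage collector with its cumulative collection count and
// accumulated collection time. Logging this periodically during a load test gives
// a first, vendor-neutral comparison point across JVM builds and GC choices.
public class GcReport {

    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%-30s collections=%d, time=%d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

For per-pause detail, JDK Flight Recorder (included in modern OpenJDK builds) is the natural next step.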
Comparison of major JVM vendors
| JVM Vendor | GC / Memory Behavior | Startup / Warm-up | Container / Cloud Fit | Best For |
| --- | --- | --- | --- | --- |
| Eclipse Temurin | Standard G1 GC, stable memory use | Moderate startup, no warm-up optimizations | Broad support, good base image compatibility | Reliable default for most Vaadin apps |
| Dragonwell | ElasticHeap GC returns memory to OS | JWarmup reduces warm-up overhead | Linux only, tuned for Alibaba Cloud | Large-scale, memory-sensitive deployments |
| Amazon Corretto | Hardened HotSpot with AWS-scale tuning | Standard startup with upstream patches | AWS-optimized, no-cost LTS | Apps deployed on AWS or needing long-term LTS |
| Azul Prime | C4 GC with ultra-low latency, ReadyNow! profile warm-up | Fast startup and near-zero GC pauses under load | Commercial only, excellent in cloud VMs | High-throughput, GC-sensitive Vaadin apps |
| BellSoft Liberica | Low-memory JDK, optional CRaC support | Very fast with CRaC, small container images | Ideal for containers and microservices | Lightweight containers, CRaC-accelerated apps |
| Red Hat OpenJDK | Shenandoah GC for low-pause collections | Standard startup, RHEL-tuned | Strong in RHEL/OpenShift stacks | Large, long-lived Vaadin sessions |
| Oracle JDK | Mature G1/ZGC, predictable performance | Stable startup, commercial support | Standard enterprise environments | Reference-quality JVM with commercial backing |
| SAP SapMachine | OpenJDK GC with SAP-specific diagnostics | Standard startup, LTS support | Enterprise-grade, SAP ecosystem focused | SAP-integrated Vaadin apps |
JVM features that matter in business applications
Beyond performance, business application teams care deeply about licensing clarity, security patching, and long-term support. The table below compares the major JVM vendors from an operational perspective and highlights which ones offer commercial backing, predictable update policies, and strong ecosystem fit for serious production use.
| JVM vendor | Licensing | Support & updates | Ecosystem fit | Business value |
| --- | --- | --- | --- | --- |
| Eclipse Temurin | Free, Open (GPL+CE) | Community-driven, Adoptium LTS builds | Broad IDE/build tool support | Safe default, vendor-neutral choice |
| Dragonwell | Free, Open (GPL+CE) | Maintained by Alibaba, Linux-only | Alibaba Cloud integration | Tuned for large-scale, in-house infra |
| Amazon Corretto | Free, Open (GPL+CE) | Amazon-backed LTS, quarterly patches | Strong AWS alignment | No-cost, AWS-optimized production runtime |
| Azul Prime | Commercial license | Premium support, fast security response | Azul tools (JMC, ReadyNow, Cloud Compiler) | Best for latency-sensitive enterprise apps |
| BellSoft Liberica | Free and commercial options | Regular LTS & security fixes | Includes JavaFX, CRaC builds, Alpine support | Good fit for slim, cloud-native deployments |
| Red Hat OpenJDK | Free with RHEL / subscription | Enterprise-grade, RHEL lifecycle aligned | Tight integration with Red Hat stack | Best for Red Hat/OpenShift shops |
| Oracle JDK | Commercial (subscription) | Full Oracle support & update services | Industry standard, full TCK compliance | Best for long-term Oracle customers |
| SAP SapMachine | Free, Open (GPL+CE) | SAP-maintained LTS releases | Runs SAP software stack | Ideal for SAP-aligned enterprise systems |
The right JVM + Vaadin = faster delivery
Server-side Java remains one of the most robust and productive ways to build modern web applications—especially when using Vaadin. By keeping UI logic, state, and rendering all on the JVM, Vaadin eliminates the complexity of full-stack JavaScript and lets teams focus on delivering business value fast.
No matter which JVM you choose, the strength of Vaadin lies in its ability to build rich, reactive UIs without sacrificing the power, safety, and scalability of the Java ecosystem. Server-side Java lets you keep logic, state, and security under full control while benefiting from decades of performance tuning, modern tooling, and predictable behavior across JVM vendors.
Choosing the right JVM sharpens that advantage. It helps your Vaadin apps start faster, run leaner, and scale smoother, whether you’re deploying to bare metal, containers, or the cloud. And with the right combination, you get what every team wants: fewer surprises in production, happier users, and more time to focus on building real business value.