Working Smarter

Here are some trending topics in the distributed-systems space that you may like:
Virtual Threads are picking up fast — they simplify concurrency in microservices, often replacing reactive stacks for I/O-bound work. Spring Boot 3.2+ can flip them on with spring.threads.virtual.enabled=true.
Jakarta EE 11 is out — adds Jakarta Data 1.0, alignment with virtual threads, and many spec updates, good baseline if you’re on the EE/MicroProfile side.
MicroProfile 7.1 — moves from “Metrics” to Telemetry (logs + metrics + tracing) and updates OpenAPI; depends on Jakarta EE Core Profile.
Faster startup options for microservices:
- CRaC (Checkpoint/Restore) now has Spring Boot support docs; dramatic warm starts for JVM apps.
- GraalVM native image continues to mature in Spring Boot 3.x for ultra-fast cold starts & small memory.
Caching: what’s current
- Caffeine remains the go-to in-memory cache for Java (high hit-rates, adaptive eviction). Recent releases add Jakarta-namespace niceties and fixes. Use directly or behind Spring Cache.
- JCache (JSR-107) is stable/older but still supported in Spring (@CacheResult, etc.). Most teams prefer Spring Cache + Caffeine/Redis over raw JCache APIs now.
- Redis/Valkey as the distributed layer — Redisson keeps shipping (3.50.0 in 2025); gives locks, maps, rate-limiters, etc., beyond plain caching.
- Observability first — Micrometer Observation/Tracing integrates metrics + traces (“SLF4J for observability”), aligning with MicroProfile Telemetry.
Rock the boat with Virtual Threads
I have picked “Virtual Threads” for this session and will explore them in detail:
- Introduced in Java 19 (preview) and standardized in Java 21, Virtual Threads are lightweight threads managed by the JVM rather than the OS.
- They sit on top of carrier threads (traditional OS threads), scheduled by the JVM.
- Think of them as “green threads” or “fibers” — but fully integrated into Java’s Thread API (so you don’t need new abstractions).
- Platform Threads: each incoming request gets mapped to an OS thread → heavy memory (1–2 MB per stack), context switches, limited concurrency.
- Virtual Threads (Loom): created by the JVM, not the OS. Cost is ~kilobytes per stack, not megabytes. They run on carrier threads (a small pool of OS threads). If a virtual thread blocks (e.g., DB query, HTTP call), the JVM parks it and frees the carrier.
- When the result arrives, the JVM resumes the parked virtual thread on any available carrier.
- This makes blocking code (imperative style) scalable without adopting reactive APIs.
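A minimal sketch of this behavior on plain JDK 21 (no frameworks): each task below blocks in a sleep, yet ten thousand virtual threads finish on a handful of carrier threads.

```java
// Minimal sketch, Java 21+: blocking parks the virtual thread,
// not the carrier, so 10,000 sleepers are cheap.
import java.time.Duration;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger done = new AtomicInteger();
        Thread[] threads = new Thread[10_000];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = Thread.ofVirtual().start(() -> {
                try {
                    // The JVM parks this virtual thread and frees its carrier.
                    Thread.sleep(Duration.ofMillis(100));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.incrementAndGet();
            });
        }
        for (Thread t : threads) t.join();
        System.out.println("completed: " + done.get()); // completed: 10000
    }
}
```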
Why They Matter
Traditional microservices in Java use:
- Platform threads (OS threads) → limited, expensive to block (e.g., waiting on DB/HTTP calls).
- Reactive programming (Project Reactor, RxJava, Vert.x) → avoids blocking, but adds complexity and a different programming model.
Virtual Threads simplify this:
- You can write blocking-style code (imperative, easy to read/maintain).
- JVM turns that into highly scalable concurrency under the hood.
- You don’t need reactive frameworks for I/O-bound workloads anymore.
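To make that concrete, here is a hedged sketch: fetchUser below is a stand-in for a real blocking DB/HTTP call, but the shape (plain sequential code, one virtual thread per task) is the point.

```java
// Sketch, Java 21+: newVirtualThreadPerTaskExecutor spawns one virtual
// thread per submitted task, so blocking code scales without reactive APIs.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BlockingStyleDemo {
    // Stand-in for a blocking I/O call (DB query, HTTP request).
    static String fetchUser(int id) throws InterruptedException {
        Thread.sleep(50); // virtual thread parks; carrier is freed
        return "user-" + id;
    }

    public static void main(String[] args) throws Exception {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<String>> futures = new ArrayList<>();
            for (int i = 0; i < 1_000; i++) {
                final int id = i;
                futures.add(executor.submit(() -> fetchUser(id)));
            }
            System.out.println("first: " + futures.get(0).get());
            System.out.println("count: " + futures.size());
        } // try-with-resources shuts the executor down and awaits completion
    }
}
```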
How They Fit Microservices
1. HTTP Request Handling
- Each request → handled by a virtual thread.
- Instead of Tomcat/Jetty allocating expensive worker threads, you can spawn thousands/millions of lightweight threads.
- Works seamlessly with Spring Boot 3.2+ (spring.threads.virtual.enabled=true).
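In a Spring Boot 3.2+ application this is a one-line opt-in (application.properties):

```properties
# Run request handling and task execution on virtual threads (Spring Boot 3.2+)
spring.threads.virtual.enabled=true
```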
2. Database Calls
- JDBC calls (blocking) → park the virtual thread instead of hogging a carrier thread.
- Example: 50k concurrent DB queries → only ~200–500 carrier threads in use.
3. External API Calls
- Microservices often chain multiple downstream calls (REST, gRPC).
- Instead of complex async callbacks, write simple RestTemplate/WebClient-style blocking code.
- Loom makes it scalable.
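A hedged sketch of chaining downstream calls: the two fetch methods below are made-up stand-ins for real REST/gRPC client calls, but the control flow (plain sequential blocking code per request) is what matters.

```java
// Sketch, Java 21+: each request chains two "downstream calls" sequentially.
// Blocking is cheap because each request runs on its own virtual thread.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ChainedCallsDemo {
    // Stand-ins for blocking REST/gRPC client calls.
    static String fetchOrder(int id) throws InterruptedException {
        Thread.sleep(30);
        return "order-" + id;
    }
    static String fetchShipping(String order) throws InterruptedException {
        Thread.sleep(30);
        return order + ":shipped";
    }

    public static void main(String[] args) throws Exception {
        try (ExecutorService ex = Executors.newVirtualThreadPerTaskExecutor()) {
            String result = ex.submit(() -> {
                String order = fetchOrder(42);   // call #1 blocks, thread parks
                return fetchShipping(order);     // call #2 chains on the result
            }).get();
            System.out.println(result); // order-42:shipped
        }
    }
}
```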
4. Background Jobs & Batch Work
Use structured concurrency (StructuredTaskScope, a preview API as of Java 21) to fire multiple parallel tasks, wait for results, and automatically cancel the others if one fails.
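Since StructuredTaskScope requires --enable-preview on Java 21, here is an approximation of the same pattern with stable APIs only: submit subtasks to a virtual-thread executor, join them, and cancel the siblings if any one fails. The task method is an illustrative placeholder.

```java
// Sketch, Java 21+: approximates StructuredTaskScope.ShutdownOnFailure
// with stable APIs: run subtasks in parallel, cancel the rest on failure.
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FanOutDemo {
    static int task(int n, long delayMs) throws InterruptedException {
        Thread.sleep(delayMs);
        if (n < 0) throw new IllegalArgumentException("task failed: " + n);
        return n * n;
    }

    public static void main(String[] args) throws Exception {
        try (ExecutorService ex = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<Integer>> subtasks = List.of(
                ex.submit(() -> task(3, 20)),
                ex.submit(() -> task(4, 20)));
            int total = 0;
            try {
                for (Future<Integer> f : subtasks) total += f.get();
            } catch (ExecutionException e) {
                subtasks.forEach(f -> f.cancel(true)); // cancel siblings on failure
                throw e;
            }
            System.out.println("sum of squares: " + total); // sum of squares: 25
        }
    }
}
```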
Where to Use Them
Microservice APIs: REST controllers, gRPC endpoints, GraphQL resolvers; anywhere with high concurrency + blocking I/O (DB, cache, HTTP).
Service-to-Service Calls : Internal downstream calls where latency hiding is key.
Transactional Workflows: Where code readability is important, but still needs massive concurrency.
Thread-per-Task Workloads : Each user request, job, or task can map to a thread — without overhead concerns.
Long-Polling & Streaming : Chat apps, push notifications, SSE — virtual threads can “wait” cheaply.
Limitations
CPU-bound tasks → Virtual threads don’t magically speed up compute-heavy work. Use parallel streams or structured concurrency there.
Thread-local misuse → Be careful: virtual threads can be mounted/unmounted across carriers. Prefer ScopedValue (a preview API since Java 21) over ThreadLocal for request context.
Older libs → Some blocking I/O libraries may not yet be optimized. Stick to JDK I/O, modern JDBC drivers, and async-friendly clients.
Monitoring → Millions of threads are normal now; thread counts in monitoring tools will look “scary” unless you tune alerts.
Please leave comments if you like the content.