Batch AI workloads should not be slow, expensive, or fragile.
Solvyr runs early-stage infrastructure for batch AI workloads such as embeddings, document pipelines, classification, and large-scale reprocessing. We use distributed GPUs in people’s homes to explore a lower-cost, failure-tolerant model for offline inference.
We are currently validating real batch workloads with early partners across document pipelines, embeddings, and reprocessing jobs.
No public signup. No marketplace. No generic GPU cloud pitch. Just focused discussions around real batch workloads.
Who this is for
Example workload
100k documents → embeddings → 30 to 60 seconds per batch → retry-safe → good fit.
How it works (high level)
Small work units
Jobs are broken into bounded tasks that run in seconds to a few minutes. This makes retries and recovery practical.
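To make this concrete, here is a minimal sketch of what a bounded work unit could look like. It is illustrative only, not Solvyr's actual API; the names (Task, split_job, batch_size) and the 500-document batch size are assumptions.

from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    doc_ids: list          # the bounded slice of documents this task owns
    max_runtime_s: int     # hard runtime bound so a retry stays cheap

def split_job(doc_ids, batch_size=500, max_runtime_s=120):
    """Break a large job into bounded tasks that each run for seconds to minutes."""
    return [
        Task(task_id=f"task-{i}",
             doc_ids=doc_ids[i:i + batch_size],
             max_runtime_s=max_runtime_s)
        for i in range(0, len(doc_ids), batch_size)
    ]

# Example: a 100k-document embedding job becomes 200 independent tasks,
# any one of which can be retried without redoing the rest.
tasks = split_job([f"doc-{n}" for n in range(100_000)])
print(len(tasks))  # 200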
Built for failure, not perfect hardware
Nodes pull work, tasks can be reassigned, and the system is designed around unreliable execution environments.
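A minimal sketch of that pull model, under the same illustrative assumptions as the snippet above: workers lease tasks instead of having work pushed to them, a coordinator re-queues leases that expire when a node disappears, and completed tasks are acknowledged so they are never reassigned. This assumes tasks are idempotent, so an occasional duplicate run is harmless. Class and method names (Coordinator, pull, ack) are hypothetical.

import time
from collections import deque

class Coordinator:
    def __init__(self, tasks, lease_s=180):
        self.pending = deque(tasks)
        self.leased = {}          # task_id -> (task, lease_expiry)
        self.done = set()
        self.lease_s = lease_s

    def pull(self):
        """A worker asks for the next task; nothing is pushed to nodes."""
        self.reclaim_expired()
        if not self.pending:
            return None
        task = self.pending.popleft()
        self.leased[task.task_id] = (task, time.time() + self.lease_s)
        return task

    def ack(self, task_id):
        """Worker reports success; the task will not be handed out again."""
        self.leased.pop(task_id, None)
        self.done.add(task_id)

    def reclaim_expired(self):
        """Tasks whose lease ran out (node died or went offline) return to the queue."""
        now = time.time()
        for task_id, (task, expiry) in list(self.leased.items()):
            if now > expiry:
                del self.leased[task_id]
                self.pending.append(task)

In a real deployment this state would be persisted and workers would execute the bounded tasks from the previous sketch; the point is that recovery is a queue operation, not a hardware guarantee.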
Deliberately constrained
Early stage. Limited workload classes. No self-serve UI. We prioritize survivability, repeatability, and learning over feature breadth.
What you get from an early conversation
A quick assessment of whether your workload is likely to fit, what the main technical constraints are, and whether it makes sense to run a bounded pilot.
About
Jan — Founder
Nearly seven years at Anker from early-stage to IPO, leading sales and go-to-market across Europe, MEA, and KSA. Now focused on workload economics, cost structures, and validating real demand for batch AI infrastructure.
Maksym — Founding Engineer
15+ years in GPU software engineering, including 8+ years at Intel across GPU drivers, deep learning infrastructure, and distributed systems. Led development of C++ drivers integrated into TensorFlow for the Nervana ASIC. Focused on reliable execution on unstable hardware.
Current stage
Running focused technical pilots with teams that have real batch workloads and want to test fit before scale.