We built the infrastructure
so you don't have to.
ResonTech is an end-to-end ML ecosystem — a SaaS platform for running inference and training models at scale. Three GPU pool types. One platform. Zero overhead.
From a hackathon
to the ecosystem.
ResonTech started at ETHKyiv 2025. Sven, Kyrylo, Petro, and Ivan built a prototype decentralized GPU marketplace on Ethereum — idle GPUs as liquid compute with state channel settlement. Won the DeAI and State Channels tracks.
But we learned something unexpected: the hardest part wasn't the payment layer. It was the orchestration. Getting distributed jobs to actually run — reliably, fast, automatically — was unsolved for most ML teams.
We pivoted. Dropped the blockchain rails. Rebuilt from scratch as a proper ML infrastructure platform. Dmytro Horskyi joined and the vision grew: not just a GPU marketplace, but an end-to-end ML ecosystem covering every stage of the lifecycle.
Today, ResonTech runs beta training jobs on the public pool and is actively building out managed clusters — dedicated nodes with SLA-backed uptime for teams serious about ML infrastructure.
ETHKyiv — Where It Started
Sven, Kyrylo, Petro, and Ivan built the first prototype of a decentralized GPU marketplace on Ethereum at ETHKyiv 2025. Won the DeAI and State Channels tracks.
From Blockchain to Real Infrastructure
The real bottleneck wasn't payments — it was orchestration. We pivoted to proper ML infrastructure. Dmytro Horskyi joined as Head of Infrastructure.
Platform Build-out
Built the core kernel, public GPU pool, per-account S3 bucket with presigned-URL data access, and pay-as-you-go billing. Python SDK and web platform shipped. Foundation for everything that followed.
First Beta Training Jobs
Public pool opened to beta testers. First real training jobs ran on the cluster — DeepLab and image model fine-tuning. Zero lost checkpoints across all beta runs.
Managed Clusters
Beta testing managed clusters and inference endpoints. Dedicated nodes, isolated kernel, and first live inference runs — all in active testing with early users.
Eliminate the infrastructure tax
on AI development.
AI teams waste enormous time managing infrastructure instead of doing science. Configuring Kubernetes clusters. Debugging NCCL errors. Recovering from spot instance preemptions. Reconciling GPU billing. Hours that should be spent on model architecture and iteration.
Our mission is to eliminate that entirely. Submit a job. Get your model back. That's it.
Five people who care
deeply about compute.
The core team behind the ResonTech ecosystem.

Sven Möller
CEO & Co-Founder
Enterprise strategy · AI adoption · digital infrastructure
Manager at EY. Led transformation programs at UBS, Credit Suisse, and Allianz. Grew a Swisscom tech subsidiary to CHF 2M+ in B2B revenue in its first year. Co-founded TantumPay. Deploys private AI stacks for enterprises.

Kyrylo Gorokhovskyi
CTO & Co-Founder
Software architecture · AI for eHealth & GovTech · serial founder
20+ years in software development. Has managed engineering teams of 100+ and delivered nationwide and European-scale projects. Expert in AI for eHealth, dataspaces, GovTech, and MilTech. Lecturer at Kyiv-Mohyla Academy.

Petro Yaremenko
COO & Co-Founder
Distributed systems · blockchain & open source · hackathon champion
7× hackathon winner (ETHKyiv 2024 & 2025, Solana Global Mobile, Kumekathon ×2). Expert in distributed systems — built blockchain infrastructure, open-source protocols, and production-grade backends long before the AI wave.

Ivan Volkov
Lead Architect & Co-Founder
Systems architect · privacy-preserving ML · confidential compute
Architected v1 of Reson.tech. Won the ETHKyiv 2025 DeAI track and the Solana Kumekathon. Built CUDA pipelines, multi-GPU provisioning, NVFlare federated learning, and TEE-backed encrypted memory in live GPU training.

Dmytro Horskyi
Head of Infrastructure
Distributed systems · big data · Rust · enterprise engineering
25+ years in enterprise infrastructure. Designed large-scale distributed systems for banking and financial institutions. Deep expertise in big data, Rust, and high-throughput data pipelines.
What we stand for.
Zero Infrastructure Headaches
Researchers should spend time on models, not kubectl. Every abstraction removes another operational burden.
Radical Transparency
Clear pricing. Honest SLAs. Real-time cost visibility. No surprise bills for idle GPUs.
Fault Tolerance as Default
Hardware fails. We build as if it will. Checkpoint recovery and automatic rescheduling are requirements, not features.
Open Ecosystem
Every major framework, every GPU vendor, your existing workflows. No SDK wrapping. No lock-in.
Ready to train faster?
Zero infrastructure. Just models.
Your team writes model code. We handle everything else.