Enterprises are starting to reconsider AWS. The economics, reliability, and compliance needs of AI workloads are pushing teams to look at decentralized infrastructure instead. In his BeInCrypto interview, Jack Collier, CGO of io.net, outlined why organizations are switching to io.net:

- Cost: io.net delivers up to 70% lower GPU pricing than hyperscalers.
- Adoption: UC Berkeley ran 12,696 uninterrupted GPU hours for navigation AI research. Leonardo.ai (acquired by Canva) serves 16M+ creators on io.net GPUs.
- Consolidation: IO Intelligence delivers compute, models, RAG, and orchestration through a single API, eliminating the need for fragmented vendor contracts.
- Compliance: SOC 2–aligned infrastructure with regional GPU selection and on-chain verification.

Full interview 👇
io.net
Technology, Information and Internet
New York, NY 5,188 followers
The intelligent stack for powering AI workloads.
About us
io.net is the intelligent stack for powering AI. It offers on-demand access to GPUs, inference, and agent workflows through a unified platform that eliminates complexity and reduces cost.

io.cloud delivers on-demand, high-performance GPUs. Developers and enterprises can train, fine-tune, and deploy models on fast, reliable clusters spun up in minutes.

io.intelligence is a simple, comprehensive AI toolkit. It provides a single API to run open-source models, deploy custom agents, and evaluate performance without changing your integration.

Teams use io.net to move fast, cut infrastructure costs, and scale AI systems with full control.
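The "single API, without changing your integration" claim can be sketched in Python. Everything concrete here is a placeholder: the endpoint URL, model names, and OpenAI-style request shape are assumptions for illustration, not details confirmed by this page. The point is the pattern — one request format, with only the `model` field changing between models.

```python
import json

# Hypothetical sketch of a "single API" integration. The endpoint,
# model names, and header shapes below are placeholders (not taken
# from io.net docs); the pattern is what matters: one OpenAI-style
# chat-completions request, swapping only the `model` field.
API_URL = "https://api.example-io-intelligence.invalid/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str) -> dict:
    """Assemble one request; only `model` changes between models."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# The same integration code targets two different open-source models.
req_a = build_request("llama-3.3-70b", "Summarize my GPU bill.", "sk-demo")
req_b = build_request("qwen-2.5-72b", "Summarize my GPU bill.", "sk-demo")
print(req_a["body"])
```

Swapping models is then a one-string change, which is the practical meaning of "evaluate performance without changing your integration".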
- Website
- https://io.net
- Industry
- Technology, Information and Internet
- Company size
- 51-200 employees
- Headquarters
- New York, NY
- Type
- Privately Held
- Founded
- 2022
- Specialties
- Cloud computing, GPU Cloud, AI, MLOps, Cloud Infrastructure, Accelerated computing, DePIN, Crypto, crypto network, solana, and filecoin
Locations
- Primary: 447 Broadway St, Manhattan, New York, NY 10013, US
- 500 Folsom St, Suite 17, San Francisco, CA, US (California HQ)
Updates
-
Google’s launch of Jules makes one thing clear: every major platform will have agents. That accelerates adoption, but it also frames a deeper choice for builders: Do agents live only inside closed ecosystems, bound to one company’s stack? Or do they run on open infrastructure, with global scale, cost efficiency, and freedom to evolve on their own terms? We believe the second path is the only way agents become more than product features. It’s how they become the foundation for new systems and industries. #ai #depin #google
-
How do you balance speed, reliability, and efficiency for AI workers? CPO @Raj_TheBUIDLer on @ionet’s approach:

⚡ Remote inventory testing across regions
💳 Flexible hourly-to-weekly payment models
✅ Proof-of-concept runs to earn developer trust

That’s why developers rely on io.net.
-
io.net at TOKEN2049 Singapore. CEO Gaurav Sharma, CGO Jack Collier, and CPO Raj Karan joined discussions on the future of decentralized compute and AI infrastructure. Panels, keynotes, and conversations all converged on one theme: decentralized networks are becoming increasingly central to how AI will be built and scaled. Great to connect with so many teams and partners driving this ecosystem forward.
-
io.net reposted this
Just wrapped up a chat with BeInCrypto about my path through crypto and why I joined io.net. I’ve been lucky to work at Circle and Near Protocol before this, and each step has only made me more convinced that decentralisation can solve real problems.

That’s what pulled me into IO: decentralised compute actually works right now. Instead of future promises, we’re:

⚡ Connecting unused GPUs around the world into a network that start-ups and enterprises can use on-demand
⚡ Helping teams like Leonardo.ai and UC Berkeley cut huge chunks off their compute & inference bills
⚡ Building IO Intelligence, which makes it way easier for developers to spin up & integrate open-source AI models

At the end of the day, compute is the bottleneck for so many AI projects. We’re solving that in a way that’s open, fair, and global.

Full interview here if you want the deep dive 👇
🔗 https://lnkd.in/e9GKb3dN

#ai #crypto #compute #DePIN
-
Come hear io.net CEO Gaurav Sharma speak as an AI leader at the LongHash Ventures Web3 Forum, part of TOKEN2049, on September 30th. Link 👇
Introducing the AI Lineup at LongHash Web3 Forum, part of TOKEN2049 Singapore

We’re excited to welcome these leaders in Web3 AI to our Singapore stage:

• Jeremy Millar, Chairman & Board Member at Theoriq
• Jansen Teng, Co-Founder of Virtuals Protocol
• Michael Heinrich, CEO & Co-Founder of 0G Labs
• Gaurav Sharma, CEO at io.net

Plus a fireside chat hosted by Shi Khai WEI, featuring Yau Teng Yan, Founder of Chain of Thought.

🗓 30 September 2025, 10:00 AM – 4:30 PM
📍 National Gallery Singapore
🔗 RSVP here: https://luma.com/p4l772o7

#LongHashWeb3Forum #Web3 #AI #DigitalAssets #CryptoInvesting
-
The OpenAI–Nvidia $100B agreement highlights how AI compute is consolidating at the very top of the market. When a single deal secures this much hardware, the ripple effects are immediate:

• Scarcity deepens for the rest of the industry
• Costs rise for startups and independent labs
• Innovation slows as access gets bottlenecked

This is less about one company and more about the structural challenge in AI infrastructure. As centralized contracts tighten supply, the broader ecosystem needs to rethink how access to GPUs is provisioned and priced.
-
Heading to KBW? Hear our CGO, Jack Collier, speak on September 25th. Link 👇

#koreablockchainweek #depin #kbw
-
Most teams think MLOps challenges are solved with better tooling. The real problems appear when systems hit production.

1. Latency kills adoption 📊
Benchmarks measure accuracy, not speed. ⚡ Gartner reports 70% of cloud AI workloads exceed the <50ms threshold required for interactive applications. A response that takes 2.5s instead of 250ms drives churn and higher infra bills.

2. Data pipeline drag 🖥️
In large-scale deployments, 60–70% of GPU cycles are lost waiting for data ingestion, preprocessing, and retrieval. Models look compute-heavy on paper, but in reality they are I/O-bound.

3. Multi-model orchestration 🤖
Modern agents rarely use one model. A single loop might involve:
– Reasoning from an LLM
– Encoding from a vision model
– Retrieval from a database
Without orchestration across these calls, costs compound and reliability falls apart.

The takeaway: MLOps will not succeed with more dashboards. It requires infrastructure that:
⚡ Closes the latency gap
⚡ Optimizes data flow
⚡ Orchestrates across models

This is where projects succeed or fail.
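The orchestration point above can be sketched in plain Python. The function names (`retrieve`, `encode_image`, `llm_reason`) are hypothetical stand-ins for real model and database calls, with sleeps simulating their latency; the pattern shown — running the independent retrieval and vision steps concurrently instead of sequentially — is one way per-loop latency and cost stop compounding.

```python
import asyncio
import time

# Hypothetical stand-ins for real model/service calls; each sleep
# simulates network + inference latency.
async def retrieve(query: str) -> str:
    await asyncio.sleep(0.2)          # e.g. vector-DB lookup
    return f"docs for {query!r}"

async def encode_image(path: str) -> str:
    await asyncio.sleep(0.3)          # e.g. vision-model embedding
    return f"embedding of {path!r}"

async def llm_reason(context: str, image_repr: str) -> str:
    await asyncio.sleep(0.2)          # e.g. LLM completion
    return f"answer using [{context}] and [{image_repr}]"

async def agent_loop(query: str, image: str) -> str:
    # Retrieval and vision encoding are independent, so run them
    # concurrently; only the LLM call depends on both results.
    context, image_repr = await asyncio.gather(
        retrieve(query), encode_image(image)
    )
    return await llm_reason(context, image_repr)

start = time.perf_counter()
answer = asyncio.run(agent_loop("gpu pricing", "chart.png"))
elapsed = time.perf_counter() - start
# Sequential awaits would cost roughly 0.2 + 0.3 + 0.2 = 0.7s;
# overlapping the first two steps brings the loop to roughly 0.5s.
print(f"{answer} ({elapsed:.2f}s)")
```

Real orchestration layers add retries, batching, and cross-model routing on top of this, but the latency arithmetic — overlap whatever is independent — is the same.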
-