Industry Feed

Gcore enhances Everywhere Inference with flexible deployment options, including cloud, on-premise, and hybrid

Contributed Content / Press Release January 17, 2025

Gcore, a global edge AI, cloud, network, and security solutions provider, announced a major update to Everywhere Inference, formerly known as Inference at the Edge. This update offers greater flexibility in AI inference deployments, delivering ultra-low latency experiences for AI applications. Everywhere Inference now supports multiple deployment options including on-premise, Gcore’s cloud, public clouds, or a hybrid mix of these environments.

Gcore developed this update to its inference solution to address changing customer needs. With AI inference workloads growing rapidly, Gcore aims to empower businesses with flexible deployment options tailored to their individual requirements. Everywhere Inference leverages Gcore’s extensive global network of over 180 points of presence, enabling real-time processing, instant deployment, and seamless performance across the globe. Businesses can now deploy AI inference workloads across diverse environments while ensuring ultra-low latency by processing workloads closer to end users. It also enhances cost management and simplifies regulatory compliance across regions, offering a comprehensive and adaptable approach to modern AI challenges.

Seva Vayner, Product Director of Edge Cloud and Edge AI at Gcore, commented: “The update to Everywhere Inference marks a significant milestone in our commitment to enhancing the AI inference experience and addressing evolving customer needs. The flexibility and scalability of Everywhere Inference make it an ideal solution for businesses of all sizes, from startups to large enterprises.”

The new update enhances deployment flexibility by introducing smart routing, which automatically directs workloads to the nearest available compute resource. Additionally, Everywhere Inference now offers multi-tenancy for AI workloads, leveraging Gcore’s unique multi-tenancy capabilities to run multiple inference tasks simultaneously on existing infrastructure. This approach optimizes resource utilization for greater efficiency.
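
Gcore has not published implementation details of smart routing; purely as an illustration, the decision it describes (send each workload to the nearest available compute resource, optionally pinned to a region for compliance) can be sketched as a latency-ordered selection over eligible regions. All names and fields below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    latency_ms: float   # measured latency from the client to this region
    available: bool     # whether the region has free inference capacity
    jurisdiction: str   # e.g. "EU" or "US", used for data-residency pinning

def route(regions, required_jurisdiction=None):
    """Pick the lowest-latency available region, optionally restricted
    to a jurisdiction for compliance with local data regulations."""
    candidates = [
        r for r in regions
        if r.available
        and (required_jurisdiction is None or r.jurisdiction == required_jurisdiction)
    ]
    if not candidates:
        raise RuntimeError("no eligible region for this workload")
    return min(candidates, key=lambda r: r.latency_ms)

regions = [
    Region("frankfurt", 12.0, True, "EU"),
    Region("ashburn", 85.0, True, "US"),
    Region("amsterdam", 9.0, False, "EU"),  # nearest, but currently full
]
```

With these sample regions, `route(regions)` skips the unavailable Amsterdam node and selects Frankfurt, while `route(regions, required_jurisdiction="US")` pins the workload to Ashburn regardless of latency.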

These new features address common challenges faced by businesses deploying AI inference. Balancing multiple cloud providers and on-premises systems for operations and compliance can be complex. The introduction of smart routing enables users to direct workloads to their preferred region, helping them stay compliant with local data regulations and industry standards. Data security is another key concern; with Gcore's new flexible deployment options, businesses can securely isolate sensitive information on-premises, enhancing data protection.

The content on this page is provided by the featured companies. NewscastStudio cannot guarantee the accuracy or veracity of any claims about products or services made in this content. The views expressed in this content do not necessarily reflect the views of NewscastStudio or its team. This content may contain trademarks owned by third parties, and those marks are the property of those companies.