Inspur, a leading data center and AI full-stack solutions provider, today revealed its latest Open Computing innovations, continuing to advance open computing systems. Offerings include an OCP Cloud Optimized Rack Solution for SAP HANA® workloads. The Inspur NF8260M5 server integrates OCP-recognized hardware optimized with 2nd Gen Intel Xeon Scalable processors and Intel Optane persistent memory, and is certified for SAP HANA®. Shared open networking solutions with SONiC, delivered through two switch offerings, will be a new contribution to the OCP community. The recently introduced Inspur OAI (Open Accelerator Infrastructure) UBB system MX1, a 21-inch full-rack OAM solution, delivers breakthrough efficiency, flexibility and management.
OCP Cloud Optimized Rack Solutions
The first high-density, cloud-optimized four-socket platform for the CSP market, jointly announced by Inspur and Intel at last year’s OCP Global Summit, the Inspur NF8260M5 delivers energy efficiency that enables the highest number of cores in a single rack while lowering TCO. This cloud-optimized solution has been adopted by customers in hyperscale, Fintech Cloud, enterprise and other segments with demanding computing workloads.
This high-density 2U 4-socket Inspur NF8260M5 server is validated for the second-generation Intel Xeon Scalable processor, optimized with Intel Optane persistent memory, and certified for SAP HANA®.
The Inspur OCP Cloud Optimized Rack Solution for SAP HANA is a fully integrated rack solution combining OCP-recognized hardware and validated software to deliver double the database capacity at a lower price. Customers running multiple older-generation SAP HANA nodes, where memory capacity is the limiting factor, can consolidate multiple scale-out analytics nodes into a single scale-up analytics node with greater in-memory database capacity.
Additionally, Inspur has joined the SONiC community and will provide shared open networking solutions with SONiC through two switch offerings: the Inspur SC5630EL 25GE high-density TOR switch and the Inspur SC8661SL 100GE spine-leaf switch with hot-swappable interface cards.
OAI UBB system delivers breakthrough efficiency, flexibility and management
Artificial intelligence (AI) and other emerging technologies and applications are creating workloads in data centers that are growing ever more diverse and complex. What’s more, Internet companies are struggling with AI’s increasing hardware complexity, where integrating an AI accelerator typically takes 6 to 12 months, and there is a growing imperative to improve the flexibility of AI hardware. The OAI specification—led by Baidu, Facebook and Microsoft in the OCP community—unifies the technical specifications of the accelerator module and simplifies the design complexity of the AI accelerator system, thereby shortening time to market for the hardware system.
The recently launched OAI UBB system also delivers breakthrough efficiency, flexibility and management. The 21-inch full-rack OAI solution provides simplified inter-module communication for scale-up and input/output bandwidth for scale-out, supporting disparate network architectures through direct OAM connection. Inspur supports two OCP interconnect topologies: Hybrid Cube Mesh (HCM) and Fully Connected (FC).
Inspur offers an open rack solution that extends the foundation of Intel Xeon Scalable processors with built-in AI acceleration from integrated Intel Deep Learning Boost, with dual San Jose Nodes connecting 8x OAMs and supporting large-cluster training. With flexible GPU pooling, both the OAM and UBB modules are designed to universally support different types of AI accelerators and to run various AI applications such as AI cloud, deep learning training, and image recognition.
The new UBB system highlights Inspur’s longstanding commitment to leadership in open computing technologies. Moreover, with these launches, Inspur is enabling its global internet customers to leverage OAI to bring their innovative new AI solutions to market quickly.
“Inspur is committed to pursuing ongoing technology advancements that will drive the development of next-generation AI applications,” said Peter Peng, Inspur Group senior vice president. “Our membership in OCP and other global open computing standards organizations underpins Inspur’s leadership in promoting open source technology in the data center in order to help our customers accelerate innovation in the development of new and useful AI applications.”
“Intel Optane persistent memory can cost-effectively increase the data capacity of in-memory analytics including SAP HANA, and dramatically increase performance over traditional storage solutions for large data sets,” said Alper Ilkbahar, vice president and general manager of the Memory and Storage Products Group at Intel. “Inspur has been a strong partner and innovative leader with Intel Optane persistent memory technology in both cloud and enterprise systems.”
Intel, the Intel logo, Optane and Xeon are trademarks of Intel Corporation or its subsidiaries.
Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world’s top 3 server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to www.inspursystems.com.