AI Server

The world's first ASIC-based AI system customized for inferencing, built in Saudi Arabia

  • Lenovo x Deer AI Server solution built with Google Coral intelligence: 2U rack server delivering powerful performance as part of the Google Coral platform.
  • Full x86 architecture: Up to 2x 3rd generation Intel® Xeon® Scalable processors, up to 40 cores, up to 270W TDP
  • Network interface: LOM adapter installed in the OCP 3.0 slot; PCIe adapters
  • Ports: Front: 1x USB 3.1 G1, 1x USB 2.0 with XClarity Mobile support, 1x VGA (optional), 1x external diagnostics handset port; Rear: 3x USB 3.1 G1, 1x VGA, 1x RJ-45 (management), 1x serial port (optional)
  • Memory: 32x DDR4 memory slots; maximum 8TB using 32x 256GB 3DS RDIMMs; supports up to 16x Intel® Optane™ Persistent Memory 200 Series modules (PMem)
  • GPUs: Up to 8x single-width GPUs or 3x double-width GPUs
  • Power: Dual redundant power supplies (up to 1800W Platinum)
  • OS support: Microsoft, SUSE, Red Hat, VMware. Visit lenovopress.com/osig for details.

For Compute-Intensive Workloads

The Lenovo x Deer AI Server is an ideal 2-socket 2U rack server for organizations from small businesses to large enterprises that need industry-leading AI inference capability, reliability, management, and security, along with the performance and flexibility for future growth. The Lenovo x Deer AI Server is based on the Google Edge tensor processing unit (TPU), the 3rd generation Intel Xeon Scalable processor family (formerly codenamed "Ice Lake"), and the Intel Optane Persistent Memory 200 Series. The underlying SR650 V2 platform is designed to handle a wide range of workloads, such as databases, virtualization and cloud computing, virtual desktop infrastructure (VDI), infrastructure security, systems management, enterprise applications, collaboration/email, streaming media, web, and HPC.

Powerful ML inferencing performance using the Coral Edge TPU

The Lenovo x Deer AI Server is a complete x86-architecture server that additionally benefits from Google Edge TPU machine-learning (ML) accelerators. This combines high-performance ML inferencing with the ease of development afforded by the familiar x86 platform, and it can be scaled from 12 to 48 ASICs per server. Each Edge TPU is capable of performing 4 trillion operations per second (4 TOPS) using 2 watts of power, i.e. 2 TOPS per watt. For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 frames per second in a power-efficient manner. More performance benchmarks are available at coral.ai.
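
As an illustration of how inferencing on the Edge TPU works in practice, the sketch below runs an image-classification model through Google's PyCoral library. It is a minimal example, not the server's bundled tooling; the model, labels, and image file names are placeholders.

    # Minimal image classification on a Coral Edge TPU using PyCoral.
    # Assumes a compiled *_edgetpu.tflite model and labels file are on disk.
    from PIL import Image
    from pycoral.adapters import classify, common
    from pycoral.utils.dataset import read_label_file
    from pycoral.utils.edgetpu import make_interpreter

    # Placeholder file names; use any Edge TPU-compiled classification model.
    interpreter = make_interpreter('mobilenet_v2_1.0_224_quant_edgetpu.tflite')
    interpreter.allocate_tensors()
    labels = read_label_file('imagenet_labels.txt')

    # Resize the input image to the dimensions the model expects.
    image = Image.open('parrot.jpg').resize(common.input_size(interpreter), Image.LANCZOS)
    common.set_input(interpreter, image)

    # Run inference on the Edge TPU and print the top result.
    interpreter.invoke()
    for c in classify.get_classes(interpreter, top_k=1):
        print(labels.get(c.id, c.id), f'{c.score:.4f}')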

Accelerated machine learning with the Coral Edge TPU

Edge TPU is Google’s purpose-built application-specific integrated circuit (ASIC) designed to run AI at the edge. It delivers high performance in a small physical and power footprint, and provides ML acceleration – speeding up processing efficiency, lowering power demands and making it easier to build connected devices and intelligent applications.
  • Efficient

    Balances power and performance with local applications

  • Private

    Keeps user data private by performing all inferences locally

  • Fast

    Runs AI at lightning-fast inference speeds

  • Offline

    Deploys in the field where connectivity is limited

On-device intelligence for diverse applications

With 12 to 48 onboard ML accelerators, the server provides on-device intelligence for a diverse spread of scenarios, from AI-powered security checks and smart healthcare diagnoses to suspect identification and tailored advertising. Some popular uses for the Google Edge TPU include the following (see the sketch after this list):

  • Pose estimation

    Estimates the poses of people in an image by identifying various body joints

  • Object detection

Draws a bounding box around the location of each recognized object in an image

  • Key-phrase detection

    Listens to audio samples and quickly recognizes known words and phrases

  • Image segmentation

    Identifies various objects in an image and their location on a pixel-by-pixel basis.
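
Of these uses, object detection is a representative example. The sketch below, again using the PyCoral library, runs a detection model on a single image and prints each detected object's label, score, and bounding box; the model, labels, and image file names are placeholders.

    # Object detection on a Coral Edge TPU using PyCoral.
    from PIL import Image
    from pycoral.adapters import common, detect
    from pycoral.utils.dataset import read_label_file
    from pycoral.utils.edgetpu import make_interpreter

    # Placeholder file names; use any Edge TPU-compiled detection model.
    interpreter = make_interpreter('ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite')
    interpreter.allocate_tensors()
    labels = read_label_file('coco_labels.txt')

    # Resize the image into the model's input tensor, remembering the scale.
    image = Image.open('street.jpg')
    _, scale = common.set_resized_input(
        interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))

    # Run inference and map detections back to original image coordinates.
    interpreter.invoke()
    for obj in detect.get_objects(interpreter, score_threshold=0.5, image_scale=scale):
        print(labels.get(obj.id, obj.id), f'{obj.score:.2f}', obj.bbox)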

  • ML accelerator: Google Edge TPU
  • Performance: 48-192 TOPS
  • Power consumption: 6-24 W per unit
  • Supported framework: TensorFlow Lite
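
Because the server aggregates many Edge TPUs, throughput scales by distributing work across them. The sketch below is a minimal example using PyCoral's device utilities: it enumerates the installed Edge TPUs and pins one interpreter to each. The model file name is a placeholder, and the work-distribution logic is left to the application.

    # Enumerate the Edge TPUs in the system and pin one interpreter to each.
    from pycoral.utils.edgetpu import list_edge_tpus, make_interpreter

    MODEL = 'mobilenet_v2_1.0_224_quant_edgetpu.tflite'  # placeholder model

    tpus = list_edge_tpus()  # one entry per installed Edge TPU
    print(f'Found {len(tpus)} Edge TPUs')

    interpreters = []
    for index, tpu in enumerate(tpus):
        # ':<index>' selects the Nth Edge TPU for this interpreter.
        interpreter = make_interpreter(MODEL, device=f':{index}')
        interpreter.allocate_tensors()
        interpreters.append(interpreter)
    # Batches can now be sharded across `interpreters`, e.g. from a thread pool.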

Enhanced software support

  • Works with Windows & Linux

    Integrates with Windows hosts and any Debian-based Linux system.

  • Neural network tools for users

    Supports TensorFlow Lite for easy building and deployment of ML models.

  • Popular pre-compiled models

    Helps with running popular models*:

    • MobileNet
    • Inception
    • EfficientNet-EdgeTPU

    *For more details, please visit https://coral.ai/models/
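
The Edge TPU runs quantized TensorFlow Lite models that have been passed through Google's Edge TPU compiler. As a sketch of that workflow, the example below uses the standard TensorFlow converter API to produce a fully int8-quantized TensorFlow Lite model; the model and calibration data are placeholders, and the final offline compilation step is shown as a comment.

    # Convert a Keras model to a fully int8-quantized TensorFlow Lite model,
    # the format required before compiling for the Edge TPU.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.MobileNetV2(weights=None)  # placeholder model

    def representative_dataset():
        # Placeholder calibration data; use real samples in practice.
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    with open('model_quant.tflite', 'wb') as f:
        f.write(converter.convert())

    # Then compile offline for the Edge TPU:
    #   edgetpu_compiler model_quant.tflite  ->  model_quant_edgetpu.tflite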

Further applications

  • For Developers

    • A platform with ML inferencing capabilities
    • Highly compatible with current solutions and peripherals
  • For Education

    • AI/ML/neural networks
    • Electronics learning
    • Programming/coding study
  • For Commercial Use

    • Low-power edge device for inferencing through computer vision
    • Comprehensive configurations to meet a wide range of usage scenarios

Dedicated Thermal Monitoring

To provide stable performance, the Lenovo x Deer AI Server employs dedicated thermal-monitoring technology. When set to cool the whole system, the cooling fans adjust their speed according to the temperature of the M.2 SSD – a significant improvement over previous generations, where only the CPU was monitored. The BIOS also offers three distinct cooling modes, so you are free to choose the one that best suits your needs or situation.

Features