AI Management Console

The Gemini AI Console builds GPU management nodes that enable enterprise organizations to collaborate across AI projects. It simplifies the underlying infrastructure so that time and human resources can be focused on core algorithms, helping companies extract better business opportunities from massive data more efficiently.

Data scientists and developers can quickly and easily launch a cluster environment pre-loaded with large datasets and AI tools through the Gemini AI Console portal. With Gemini's unique GPU partitioning technology, GPU utilization reaches its peak!

Download the Gemini AI Console whitepaper

Gemini AI Console Features

Heterogeneous/Hybrid Cloud Platform

GPU Partitioning Shared Across Multiple Containers

Jobs/Pipelines to Build an MLOps Environment

AI Frameworks and Development Tools

API/Web UI & Monitoring

3-Tier Role and Multi-Tenancy Management

Benefits

Reduce IT complexity and optimize GPU management

  • AI Console helps IT administrators manage physical and virtual resources, from a single GPU to hundreds of GPU and CPU servers, in one platform.

Improve R&D efficiency and shorten development time

  • Prepare complex infrastructure environments through a simple browser interface for deploying Big Data and AI computing tools, helping data scientists focus on AI algorithm development and training.
  • Through exclusive GPU partitioning and job-scheduling technologies, users can greatly reduce time spent waiting for resources and accelerate development.

Support different computing architectures and heterogeneous environments to meet future architecture needs

  • In addition to container services, a single platform can connect to existing external storage so that data can be used without being moved. AI Console can also connect to public clouds such as Google Cloud Platform and Linode to ensure uninterrupted access to resources.
  • Integrates seamlessly with the OpenStack virtual machine management platform, so users can develop with VMs according to their needs.
  • Painlessly connect new computing nodes to expand your AI services in the future.

GPU Partitioning - Gemini's exclusive multi-container GPU sharing technology

  • A GPU can be divided into as many as eight ⅛ dedicated partitions
    • Supports ½, ¼, and ⅛ partitioning; each partitioned GPU has exclusive computing resources, ensuring that workloads do not interfere with one another.
  • Simultaneous execution increases GPU utilization
    • After partitioning, multiple containers or projects can run on the same GPU card at the same time, with less preemption and higher resource utilization.
  • Reduce GPU wait time and increase productivity
    • Multiple projects can share resources simultaneously, increasing fairness, reducing project queue times and total task time, and thereby boosting company productivity.
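To make the partitioning model above concrete, here is a minimal toy sketch of how fractional shares (½, ¼, ⅛) of one card might be tracked and handed out to containers. This is purely illustrative and not the actual Gemini implementation, which is proprietary; all class and container names are hypothetical.

```python
from fractions import Fraction

# Partition sizes supported in this sketch, mirroring the ½ ¼ ⅛ scheme.
ALLOWED = {Fraction(1, 2), Fraction(1, 4), Fraction(1, 8)}

class GPUCard:
    """Toy model of one GPU card split into dedicated fractional shares."""

    def __init__(self, name):
        self.name = name
        self.free = Fraction(1)   # the whole card starts out free
        self.assignments = {}     # container name -> allocated fraction

    def allocate(self, container, share):
        share = Fraction(share)
        if share not in ALLOWED:
            raise ValueError(f"unsupported partition size: {share}")
        if share > self.free:
            return False          # not enough room left on this card
        self.free -= share
        self.assignments[container] = share
        return True

card = GPUCard("gpu0")
assert card.allocate("train-job", Fraction(1, 2))
assert card.allocate("notebook-a", Fraction(1, 4))
assert card.allocate("notebook-b", Fraction(1, 8))
assert card.allocate("notebook-c", Fraction(1, 8))
# The card is now fully partitioned among four containers,
# so a further request is rejected rather than preempting anyone.
assert not card.allocate("late-job", Fraction(1, 8))
```

Because each share is subtracted from a fixed budget, every container keeps an exclusive slice of the card, which is the isolation property the bullets above describe.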