Gemini AI Console

The Gemini AI Console builds GPU management nodes that enable enterprise organizations to collaborate on AI projects across teams. It simplifies the underlying infrastructure so that time and human resources stay focused on core algorithms, helping companies mine better business opportunities from massive data more efficiently.
Through the Gemini AI Console portal, data scientists and developers can quickly and easily open a cluster environment pre-loaded with large volumes of data and AI tools. With Gemini's unique GPU partitioning technology, GPU utilization reaches its peak!

Download Gemini AI Console Whitepaper

Product Intro

Benefits

Gemini AI Console is an artificial intelligence management platform designed specifically for collaborative sharing among multiple users, teams, and workloads.


  • Cloud-native Kubernetes resource pools
  • Multi-tenancy / multi-project management
  • Support for different workloads
  • Built-in Jupyter Notebook web editor in containers
  • Built-in private image registry

Maximize GPU resources in Kubernetes and derive business value from deep learning training and inference.


  • GPU Partitioning: share GPU resources among multiple containers (see the sketch after this list)
  • Jupyter to Job: run notebook services and GPU computing jobs separately
  • Cloud-native Kubernetes resource pools
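As a rough sketch of how a partitioned GPU share might be requested on Kubernetes, the snippet below creates a pod whose container asks for a fraction of a GPU through an extended resource. The resource name gemini.example/gpu-percent is a hypothetical placeholder, not the product's actual resource key; consult the whitepaper for the real interface.

    # Minimal sketch: request a fractional GPU share for one container.
    # Assumes a cluster with Gemini's GPU partitioning installed; the
    # extended resource name "gemini.example/gpu-percent" is hypothetical.
    from kubernetes import client, config

    config.load_kube_config()  # use the current kubeconfig context

    pod = client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(name="train-worker"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image="tensorflow/tensorflow:latest-gpu",
                    command=["python", "train.py"],
                    resources=client.V1ResourceRequirements(
                        # Hypothetical extended resource: 30% of one GPU.
                        limits={"gemini.example/gpu-percent": "30"},
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="ai-team", body=pod)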

Its automated features free IT staff, architects, and data scientists from the burden of manual scheduling and management.


  • Test with a single job
  • Connect jobs into a single pipeline
  • Repeat training runs with a Pipeline Template
  • Schedule timed training runs with the Scheduler, as sketched below
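The Scheduler's own interface is not shown here; as a generic illustration of timed training on plain Kubernetes primitives, the sketch below creates a CronJob that launches a GPU training job every night at 02:00. The namespace, image, and schedule are illustrative values.

    # Sketch: a recurring training run expressed as a Kubernetes CronJob.
    # This only illustrates the underlying idea; the Gemini Scheduler may
    # manage timed jobs through its own abstractions.
    from kubernetes import client, config

    config.load_kube_config()

    cron = client.V1CronJob(
        api_version="batch/v1",
        kind="CronJob",
        metadata=client.V1ObjectMeta(name="nightly-training"),
        spec=client.V1CronJobSpec(
            schedule="0 2 * * *",  # every day at 02:00
            job_template=client.V1JobTemplateSpec(
                spec=client.V1JobSpec(
                    template=client.V1PodTemplateSpec(
                        spec=client.V1PodSpec(
                            restart_policy="Never",
                            containers=[
                                client.V1Container(
                                    name="train",
                                    image="pytorch/pytorch:latest",
                                    command=["python", "train.py"],
                                    resources=client.V1ResourceRequirements(
                                        limits={"nvidia.com/gpu": "1"},
                                    ),
                                )
                            ],
                        )
                    )
                )
            ),
        ),
    )

    client.BatchV1Api().create_namespaced_cron_job(namespace="ai-team", body=cron)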

Functions

  • Multi-cloud resource management
  • GPU sharing and notebook development with dedicated GPU jobs
  • Automated MLOps environment derived from Jobs and Pipelines
  • AI frameworks and development tools from the marketplace
  • API / Web UI operation and monitoring
  • Three-tier roles and multi-tenancy management

Maximize GPU resource utilization

GPU Partitioning

CUDA GPUs

A pure software solution: all it requires is a CUDA-capable GPU.

No need to change the code

Users can run on a partitioned GPU without changing a single line of code.
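For instance, an ordinary PyTorch script like the one below runs unmodified inside a container that was allocated only a slice of a GPU; the partitioning layer caps the memory and compute the container sees.

    # Ordinary CUDA code, unchanged: inside a partitioned container this
    # script simply uses whatever GPU share it was allocated.
    import torch

    assert torch.cuda.is_available(), "no CUDA device visible"

    x = torch.randn(4096, 4096, device="cuda")
    y = torch.randn(4096, 4096, device="cuda")
    z = x @ y  # matrix multiply on the container's GPU slice
    print(z.norm().item())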

Effective resource isolation

Container resources are isolated and independent, ensuring that one container's resources are not interfered with by other containers.

Flexible scheduling

Users can set a minimum/maximum QoS, and GPU quotas can be raised flexibly and automatically, reducing the overhead of manual management; one possible shape of such a configuration is sketched below.
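As a purely hypothetical illustration of how a min/max GPU QoS range could be expressed, the snippet below attaches invented annotation keys (gemini.example/gpu-min and gemini.example/gpu-max) to a pod's metadata; the product's real configuration surface may look quite different.

    # Hypothetical sketch: a min/max GPU QoS range as pod annotations.
    # The annotation keys are invented for illustration only.
    from kubernetes import client

    meta = client.V1ObjectMeta(
        name="elastic-trainer",
        annotations={
            "gemini.example/gpu-min": "0.3",  # guaranteed share of one GPU
            "gemini.example/gpu-max": "0.8",  # ceiling it may burst up to
        },
    )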

Increased container usage

Kubernetes can run more machine learning containers at the same time, with less preemption and shorter queuing times.

Improved resource utilization

Total utilization can approach the sum of the individual container utilizations. For example, three containers that each keep a GPU 30% busy can together drive one shared GPU toward 90% utilization, instead of leaving three dedicated GPUs mostly idle.

Development notebooks run separately from GPU computing

Jupyter to Job

Jupyter development services created through the AI Console all include the built-in Jupyter to Job plug-in, which dispatches GPU jobs from within Jupyter and lets users browse job logs, which is helpful for debugging and tracking.
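The plug-in's exact interface is not documented here; conceptually, dispatching a job from a notebook cell and reading back its log resembles the sketch below, which uses plain Kubernetes Jobs rather than the plug-in's own API.

    # Conceptual sketch of "Jupyter to Job": a notebook cell submits a
    # job to the cluster, then fetches the pod log for debugging.
    from kubernetes import client, config

    config.load_kube_config()
    batch, core = client.BatchV1Api(), client.CoreV1Api()

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="notebook-train"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[client.V1Container(
                        name="train",
                        image="pytorch/pytorch:latest",
                        command=["python", "-c", "print('training...')"],
                    )],
                )
            )
        ),
    )
    batch.create_namespaced_job(namespace="ai-team", body=job)

    # Kubernetes labels a Job's pods with job-name=<job>, so the log can
    # be browsed once the pod has run:
    pods = core.list_namespaced_pod("ai-team", label_selector="job-name=notebook-train")
    for p in pods.items:
        print(core.read_namespaced_pod_log(p.metadata.name, "ai-team"))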

Customer Sharing

"Our requirement is to distribute GPU resources fairly and to facilitate management. After the introduction of Gemini’s system, AI artificial intelligence teaching will be simplified. Teachers and students can focus on the establishment and training of AI models without spending a lot of time on the operation of the system. It can speed up stepping into the application areas of AI artificial intelligence and machine learning."


Takming University of Science and Technology Computer Center

Related Products and Solutions

GPU Management and AI Development

Machine Learning and DevOps