Catalog Details
CATEGORY: deployment
CREATED BY:
UPDATED AT: May 17, 2024
VERSION: 1.0
What this pattern does:
A batch workload is a process typically designed to have a start and a completion point. You should consider batch workloads on GKE if your architecture involves ingesting, processing, and outputting data instead of using raw data. Areas like machine learning, artificial intelligence, and high performance computing (HPC) feature different kinds of batch workloads, such as offline model training, batched prediction, data analytics, simulation of physical systems, and video processing.

By designing containerized batch workloads, you can leverage the following GKE benefits:
- An open standard, broad community, and managed service.
- Cost efficiency from effective workload and infrastructure orchestration and specialized compute resources.
- Isolation and portability of containerization, allowing the use of cloud as overflow capacity while maintaining data security.
- Availability of burst capacity, followed by rapid scale-down of GKE clusters.
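As a minimal sketch, a containerized batch workload of this kind is commonly expressed as a Kubernetes Job. The resource name, container image, and Spot node selector below are illustrative placeholders (they are not defined by this pattern), and the Spot selector only applies if the cluster has a Spot node pool:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-processor                  # illustrative name
spec:
  completions: 10                        # Job succeeds after 10 Pods finish successfully
  parallelism: 3                         # up to 3 worker Pods run at once
  backoffLimit: 4                        # retry budget before the Job is marked failed
  template:
    spec:
      restartPolicy: Never
      nodeSelector:
        cloud.google.com/gke-spot: "true"   # optional: schedule onto Spot VMs for cost efficiency
      containers:
        - name: worker
          image: us-docker.pkg.dev/my-project/batch/processor:1.0   # placeholder image
          resources:
            requests:
              cpu: "500m"
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi
```

Setting `completions` and `parallelism` lets GKE fan the work out across nodes and scale the cluster up for the burst, then back down once the Job completes.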
Caveats and Considerations:
Ensure the components of the batch pipeline are networked correctly (Services, DNS, and any required network policies) so that data can flow efficiently between the ingestion, processing, and output stages.
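One way to tighten that networking, sketched below with hypothetical names, labels, and port: a NetworkPolicy that allows only the Job's worker Pods (which carry the `job-name` label set by the Job controller) to reach a data-staging component.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-workers-to-staging          # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: data-staging                   # hypothetical label on the staging component
  ingress:
    - from:
        - podSelector:
            matchLabels:
              job-name: batch-processor   # Pods created by the Job sketched above
      ports:
        - protocol: TCP
          port: 8080                      # hypothetical staging port
```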
Compatibility: