Posts

What AI Data Centres Do & Who Can Get Jobs

🌐 What AI Data Centres Do

AI data centres are specialized facilities designed to support the massive computational needs of artificial intelligence. They differ from traditional data centres in scale, architecture, and purpose.

Core Functions

- Training AI models: running large-scale computations for deep learning and generative AI.
- Inference & deployment: serving AI applications in real time (e.g., chatbots, recommendation engines).
- Data management: handling huge volumes of structured and unstructured data efficiently.
- High-performance infrastructure: equipped with GPUs, TPUs, and advanced networking to accelerate workloads.
- Cooling & energy optimization: AI workloads consume enormous power, so these centres use advanced cooling and sustainability strateg...

Uber's Architectural Redesigns for Risk Management

Here are the key lessons from Uber's architectural redesigns for risk management, synthesized from their engineering blogs and public case studies.

🚦 Lesson 1: Orchestrate Risk Across Services, Not Just Within Them

The first major lesson came from addressing the "blast radius" problem. In a monorepo architecture, a single bad commit could potentially break thousands of services at once.

- The Problem: Traditional safety checks (pre-commit tests, per-service health metrics) were insufficient. If a change passed initial tests but failed in production, automated deployment pipelines could rapidly propagate the failure to hundreds of critical services before anyone noticed.
- The Solution: Uber introduced a cross-cutting service deployment orchestration layer. This system acts as a global gatekeeper, coordinating rollouts across all services affected by a single commit.
- How It Works:
    - Service Tiering: Services are classified into tiers from 0 (most critical, e.g., ...
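To make the gatekeeper idea concrete, here is a minimal sketch of tier-aware rollout gating. The class names (`Service`, `RolloutGate`) and the per-tier error budgets are hypothetical stand-ins, not Uber's actual implementation; the point is only the pattern of checking the most critical tiers first and halting the entire rollout on any regression.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    tier: int          # 0 = most critical
    error_rate: float  # observed post-deploy error rate

@dataclass
class RolloutGate:
    """Global gatekeeper: coordinates a rollout across every service
    touched by a single commit, halting all of them if any service
    exceeds its tier's error budget."""
    # Stricter budgets for more critical tiers (illustrative values).
    budgets: dict = field(default_factory=lambda: {0: 0.001, 1: 0.01, 2: 0.05})

    def check(self, services: list) -> bool:
        # Evaluate tier by tier, most critical first, so a bad commit
        # is caught before it spreads across the fleet.
        for svc in sorted(services, key=lambda s: s.tier):
            budget = self.budgets.get(svc.tier, 0.10)
            if svc.error_rate > budget:
                # Blast radius contained: stop the whole rollout.
                print(f"HALT: {svc.name} (tier {svc.tier}) exceeded budget")
                return False
        return True

gate = RolloutGate()
fleet = [Service("payments", 0, 0.0005), Service("eats-menu", 2, 0.02)]
print(gate.check(fleet))   # healthy fleet: rollout proceeds

fleet.append(Service("rider-app", 0, 0.01))  # critical service regressing
print(gate.check(fleet))   # one tier-0 regression halts everything
```

The key design choice the excerpt describes is that the gate operates per commit, across services, rather than per service: a single failed check blocks the rollout for every service the commit touched.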

Agentic AI Plumbing

In the rapidly evolving landscape of Agentic AI (systems where AI agents take autonomous actions), these five acronyms represent the "new plumbing" of the internet. They are open-source protocols that allow different AI agents, tools, and businesses to talk to each other, negotiate, and even spend money securely. Here is the breakdown of the agentic AI stack:

1. MCP (Model Context Protocol)

Role: The "USB-C" of AI.
What it does: Developed by Anthropic (and adopted by Google, OpenAI, and Microsoft), MCP allows an AI model to safely "plug in" to your data and tools.
Example: Instead of writing custom code to let an agent read your Google Drive or Slack, you use an MCP server. It provides the context (data) and tools (capabilities) the agent needs to work.

2. A2A (Agent-to-Agent Protocol)

Role: The "Common Language" for agents.
What it does: Launched by Google and now part of the Linux Foundation, A2A defines how one AI agent talks to anoth...
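The "server that exposes tools an agent can discover and call" pattern behind MCP can be sketched in a few lines. This is a toy illustration only: the `tool` registry, the `handle` dispatcher, and the JSON wire format below are simplified stand-ins, not the real MCP specification or SDK (which uses JSON-RPC messages defined by the protocol).

```python
import json

TOOLS = {}

def tool(fn):
    """Register a function as a callable tool (stand-in for an MCP
    server declaring its capabilities)."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def list_files(folder: str) -> list:
    # Stand-in for "let an agent read your drive": returns fake names.
    return [f"{folder}/notes.txt", f"{folder}/report.pdf"]

def handle(request: str) -> str:
    """Dispatch a JSON request {"tool": ..., "args": {...}} to a
    registered tool, the way a server routes an agent's tool call."""
    msg = json.loads(request)
    if msg["tool"] == "__list__":
        return json.dumps(sorted(TOOLS))          # tool discovery
    result = TOOLS[msg["tool"]](**msg["args"])    # tool invocation
    return json.dumps({"result": result})

# An agent first discovers what the server offers, then calls a tool.
print(handle('{"tool": "__list__"}'))
print(handle('{"tool": "list_files", "args": {"folder": "drive"}}'))
```

The two-step flow (discover tools, then invoke one by name with structured arguments) is the core idea the "USB-C of AI" analogy is pointing at: any agent that speaks the protocol can use any server's tools without custom glue code.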