Nvidia and ABB Robotics Bridge the Sim-to-Real Gap with AI

Nvidia and ABB Robotics have announced a joint platform combining Omniverse simulation with ABB’s RobotStudio engineering software, targeting a persistent challenge in industrial automation: getting robots to perform in the real world the way they do in digital environments. The integrated offering, RobotStudio HyperReality, is scheduled for release in the second half of 2026.

NHTSA probes Waymo after AV hits child near Santa Monica school

School-bus compliance troubles

This incident follows separate scrutiny of Waymo’s behavior around school buses. A televised incident in Atlanta in September prompted an NHTSA probe into school-bus stop compliance, and in December the company announced a voluntary software recall after its robotaxis illegally passed stopped school buses in multiple states. Austin Independent School District reported 19 instances of Waymo vehicles “illegally and dangerously” passing stopped buses since the start of the 2025–2026 school year and urged the company to suspend operations during loading and unloading windows.

Operations, safety record, and prior recalls

Waymo says its robotaxis surpassed 100 million autonomous miles in July 2025 and are adding roughly 2 million miles per week. The service operates paid rides in Atlanta, Austin, Los Angeles, Phoenix, and San Francisco, with more than 10 million paid trips completed. In May 2025, Waymo recalled 1,212 vehicles over collision risks with chains, gates, and similar barriers; the underlying fix had already shipped in a November 2024 software update, and the recall formalized it. The company plans to expand to additional U.S. cities including Nashville, Las Vegas, San Diego, Detroit, Washington, D.C., Miami, Dallas, Seattle, Houston, Orlando, San Antonio, Baltimore, Philadelphia, Pittsburgh, St. Louis, and Denver, and targets a 2026 London deployment after initiating testing in Tokyo.

The Editor’s Take: For riders and nearby road users, expect AVs to drive even more conservatively around schools—slower approach speeds, wider buffers near occlusions, and more frequent full stops. For developers, the near-term work is clear: tighten school-zone geofencing, increase occlusion penalties in the planner, lower speed caps under partial visibility, and validate end-to-end perception-to-brake latency against child-pedestrian benchmarks; those changes will likely trade throughput for safety but should reduce impact velocity in worst-case scenarios.
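One of the planner changes described above, lowering speed caps under partial visibility, can be illustrated with a toy policy. Everything here is a hypothetical sketch for illustration: the `speed_cap_mph` function, its thresholds, and the clamping behavior are assumptions, not Waymo's actual planner logic.

```python
# Hypothetical sketch: derive a conservative speed cap from school-zone state
# and how much of the scene is visible (1.0 = fully visible, 0.0 = fully occluded).
# All constants are illustrative assumptions.

def speed_cap_mph(in_school_zone: bool, visible_fraction: float) -> float:
    """Return a speed cap that shrinks as occlusion grows."""
    base = 15.0 if in_school_zone else 25.0
    # Clamp visibility to [0, 1], then scale the cap down with occlusion,
    # but never drop below a crawl speed of 5 mph.
    visibility = max(0.0, min(1.0, visible_fraction))
    return max(5.0, base * visibility)
```

A policy like this deliberately trades throughput for safety: at 40% visibility in a school zone it caps speed at 6 mph, which is exactly the kind of conservatism riders should expect near schools.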

OpenClaw boom exposes agentic AI security blind spots

OpenClaw’s viral growth has validated agentic AI, and it has also exposed how little traditional enterprise defenses can see when autonomous assistants run on unmanaged devices. Researchers are already finding leaked keys, chat logs, and open consoles, while Cisco and IBM warn that this deployment model requires production-grade controls now.

MoonshotAI Kimi K2.5 Tech Report Details Latency and Inference Upgrades

MoonshotAI released a technical report for Kimi K2.5 that documents architecture refinements, inference optimizations, and qualifier handling informed by user feedback. The report targets lower latency and more efficient deployment paths; the full PDF is hosted on GitHub.

SAN FRANCISCO, CA — MoonshotAI today published the Kimi K2.5 technical report, presenting iterative architecture changes and practical optimizations aimed at lowering inference latency and improving deployment flexibility. The document, hosted as a PDF on GitHub, emphasizes changes driven by user feedback and points developers to updated qualifiers and documentation for production use.

Technical analysis: architecture and inference

The report outlines incremental architecture refinements rather than a wholesale redesign. MoonshotAI describes modifications to transformer blocks and runtime execution that concentrate on operator fusion, optimized kernel scheduling, and memory layout improvements to reduce per-token latency. The paper frames these changes around practical inference metrics—latency and throughput—rather than model-size headlines, and documents the engineering trade-offs between single-request latency and batch throughput.
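The single-request-latency versus batch-throughput trade-off the report documents can be sketched with a toy cost model: each batched forward step pays a fixed overhead plus per-sequence work, so larger batches raise per-request latency while raising aggregate throughput. The function names and the cost constants (`fixed_ms`, `per_seq_ms`) below are invented for illustration and do not come from the report.

```python
# Toy cost model of the latency/throughput trade-off in batched inference.
# Constants are illustrative assumptions, not measurements from Kimi K2.5.

def batch_latency_ms(batch_size: int, fixed_ms: float = 8.0, per_seq_ms: float = 1.5) -> float:
    """Time for one batched step: fixed kernel-launch overhead plus per-sequence work."""
    return fixed_ms + per_seq_ms * batch_size

def throughput_seq_per_s(batch_size: int) -> float:
    """Sequences completed per second at a given batch size."""
    return 1000.0 * batch_size / batch_latency_ms(batch_size)
```

Under this model a batch of 1 finishes in 9.5 ms, while a batch of 32 takes 56 ms per request but completes far more sequences per second, which is why the report treats the two metrics as distinct engineering targets.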

Quantization, deployment, and qualifiers

Kimi K2.5 includes guidance on quantization and runtime configurations for both edge and server-class deployments. The authors recommend specific qualifier settings (detailed in the repository documentation) that affect numerical precision, memory footprint, and kernel choice. The report stresses that qualifier selection alters latency and accuracy trade-offs, and it provides reproducible scripts and examples to help engineers measure inference time across common hardware targets.
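To make the precision/memory trade-off concrete, here is a minimal sketch of how a qualifier choice might map to weight-memory footprint. The `QualifierConfig` class, the precision names, and the bytes-per-weight figures are generic quantization arithmetic assumed for illustration; the actual qualifier settings live in MoonshotAI's repository documentation.

```python
# Hedged sketch: mapping a precision "qualifier" to an approximate weight-memory
# footprint. Names and structure are illustrative, not MoonshotAI's API.
from dataclasses import dataclass

# Standard storage costs per weight for common precisions.
BYTES_PER_WEIGHT = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

@dataclass
class QualifierConfig:
    precision: str           # one of BYTES_PER_WEIGHT's keys
    n_params_billions: float # model size in billions of parameters

    def weight_memory_gb(self) -> float:
        """Approximate weight memory in GB, ignoring activations and KV cache."""
        return self.n_params_billions * BYTES_PER_WEIGHT[self.precision]
```

For example, a 7B-parameter model drops from 14 GB of weights at fp16 to 3.5 GB at int4, which is the kind of footprint shift that determines whether an edge or server-class deployment path is viable.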

Evaluation and benchmarks

Rather than exposing only peak scores, the report supplies benchmark samples that show how changes influence stable latency under realistic loads. It also documents latency variance and inference cost implications when switching between different runtime qualifiers. This focus on operational metrics helps teams anticipate performance in production settings.
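Measuring "stable latency under realistic loads" in practice means summarizing per-request samples into median and tail percentiles rather than a single peak number. The sketch below uses only the Python standard library; the function name and the choice of p50/p95 are my assumptions, not metrics prescribed by the report.

```python
# Minimal tail-latency summary using the Python standard library.
import statistics

def latency_summary(samples_ms: list[float]) -> dict[str, float]:
    """Summarize per-request latencies into median, 95th percentile, and mean."""
    # quantiles(..., n=100) yields 99 cut points: index 49 is p50, index 94 is p95.
    qs = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": qs[49], "p95": qs[94], "mean": statistics.fmean(samples_ms)}
```

Tracking p95 alongside the median is what surfaces the latency variance the report warns about when switching between runtime qualifiers: two configurations with identical medians can have very different tails.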

The Editor’s Take: MoonshotAI’s Kimi K2.5 report is a pragmatic, developer-focused update. By foregrounding latency, qualifier-driven behavior, and reproducible deployment guidance, the report gives engineers the technical levers needed to tune inference performance for specific hardware and use cases. Expect quicker iteration cycles for deployment testing, but plan for thorough validation when changing qualifiers that affect numerical precision.


Starbucks builds AI ordering companion for ‘vibe’ coffee

At its investor day in New York, Starbucks said it is developing an AI “ordering companion” that converts mood and taste prompts into orderable drinks, but offered no launch date. CEO Brian Niccol also previewed voice-first ordering and a drive‑thru pilot that pipes natural-language conversations directly into the POS.
