News

Google Opal Gets an Agent Brain - No-Code AI Workflows Just Became Autonomous

Google adds an agent step to Opal, its no-code AI mini-app builder, powered by Gemini 3 Flash. The agent picks its own tools, remembers user preferences across sessions, and routes itself through workflows.

Google's experimental no-code platform Opal just took a significant step past "drag and drop." The company announced on February 24 that Opal now includes an agent step - a workflow node powered by Gemini 3 Flash that can analyze a goal, decide which tools it needs, and execute a multi-step plan without the user wiring up every connection manually.

It is the difference between building a pipeline yourself and telling a pipeline what you want done.

TL;DR

  • Google added an agent step to Opal, its no-code visual AI workflow builder, powered by Gemini 3 Flash
  • The agent autonomously selects tools like Veo (video generation), Web Search, and Google Sheets based on the user's stated goal
  • Three new capabilities ship alongside: persistent memory (via Google Sheets), dynamic routing (agent picks the next step), and interactive chat (agent asks the user for missing info)
  • Available to all Opal users immediately - no waitlist, no pricing tier gating
  • Opal is live in 160+ countries after expanding from its US-only launch in July 2025

What the Agent Step Actually Does

Previously, Opal let you chain together prompts and model calls in a visual editor. You picked the model, defined the input, connected the output to the next step. It worked, but every decision in the workflow was yours.

The new agent step changes that. Instead of selecting a specific model in the "generate" step, you select an agent. You describe your objective, and the agent - running on Gemini 3 Flash - figures out the execution plan. Need to research a topic? It triggers Web Search. Need a video clip? It calls Veo. Need to store data? It writes to Google Sheets.

Google demonstrated this with a "Visual Storyteller" mini-app where the agent autonomously determines what details it needs, suggests plot points, and generates content - shifting from rigid templates to dynamic, user-driven workflows.
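Opal exposes none of this as code, which is the point of a no-code tool, but the pattern underneath is a familiar one: a planner maps a stated goal to tool calls, then executes them in order. The sketch below is purely illustrative; the tool names, the `plan()` stub, and the keyword matching are our assumptions standing in for the model's actual reasoning, not anything Opal or Gemini exposes.

```python
# Illustrative sketch of an "agent step": a goal goes in, a planner
# (here a crude keyword stub standing in for the model) picks tools,
# and the agent executes the resulting plan. All names are hypothetical.

from typing import Callable

# Registry of tools the agent may choose from, keyed by capability.
TOOLS: dict[str, Callable[[str], str]] = {
    "web_search": lambda q: f"search results for {q!r}",
    "video_gen": lambda p: f"video clip for {p!r}",
    "sheet_write": lambda row: f"stored {row!r}",
}

def plan(goal: str) -> list[tuple[str, str]]:
    """Stand-in for the model's planning call: map a goal to tool steps."""
    steps = []
    if "research" in goal:
        steps.append(("web_search", goal))
    if "video" in goal:
        steps.append(("video_gen", goal))
    steps.append(("sheet_write", goal))  # always persist the result
    return steps

def run_agent(goal: str) -> list[str]:
    """Execute the planned steps in order and collect each tool's output."""
    return [TOOLS[name](arg) for name, arg in plan(goal)]

print(run_agent("research sea otters and make a video"))
```

In the real product the planning call is a Gemini 3 Flash inference, not keyword matching, and the tools are Google services rather than lambdas; the shape of the loop is the same.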

Three New Tools That Matter

The agent step ships with three capabilities that push Opal past simple prompt chaining:

Memory - The agent can persist information across sessions using Google Sheets as a backing store. A shopping list app remembers what you bought last week. A writing assistant remembers your style preferences. This is crude compared to purpose-built vector databases, but it works out of the box with zero configuration - which is the entire point of a no-code tool.

Dynamic Routing - Using an "@ Go to" tool, the agent can evaluate what it has done so far and choose which step to execute next. You define multiple paths and describe the criteria in natural language. The agent reads the conditions and transitions accordingly. This turns linear workflows into branching state machines without requiring users to understand what a state machine is.
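Conceptually, each branch carries a natural-language condition and the model picks the branch whose condition the current state satisfies. The sketch below fakes that model call with substring matching; the `choose()` stub and branch names are our assumptions, not Opal's "@ Go to" internals.

```python
# Sketch of "@ Go to"-style routing: branches are described in natural
# language, and a (stubbed) model call picks which step runs next.
# choose() is a hypothetical stand-in for the Gemini routing decision.

def choose(state: str, branches: dict[str, str]) -> str:
    """Stub for the model: pick the branch whose condition matches state."""
    for name, condition in branches.items():
        if condition in state:
            return name
    return "fallback"

# Each key is a workflow step; each value is its plain-language condition.
branches = {
    "ask_followup": "missing details",
    "generate": "ready",
}

state = "story outline ready"
print(choose(state, branches))
```

Swap the substring check for a model call that reads the conditions, and you have a branching workflow whose routing logic was never written as code.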

Interactive Chat - The agent can pause execution to ask the user a question. Missing a key detail? The agent surfaces a prompt. Multiple valid options? The agent presents them as choices. This is a small feature with outsized impact - it means workflows no longer need every input defined upfront.
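The pause-and-ask pattern amounts to checking required fields and yielding a question for each gap instead of guessing. In this sketch the question callback is injected so the "chat" can be any frontend; the field names and `fill_inputs` helper are illustrative, not part of Opal.

```python
# Sketch of pause-and-ask: when a required input is missing, the agent
# surfaces a question rather than inventing a value. The ask callback
# stands in for the chat UI; all names here are hypothetical.

from typing import Callable

REQUIRED = ["topic", "tone"]

def fill_inputs(given: dict[str, str],
                ask: Callable[[str], str]) -> dict[str, str]:
    """Pause on each missing required field and ask the user for it."""
    filled = dict(given)
    for field in REQUIRED:
        if field not in filled:
            filled[field] = ask(f"What {field} should I use?")
    return filled

# The user supplied a topic but no tone, so the agent asks one question.
inputs = fill_inputs({"topic": "sea otters"}, ask=lambda q: "playful")
print(inputs)
```

The upshot is the one the article notes: the workflow starts with whatever the user gave it and fills the rest interactively, instead of demanding a complete form upfront.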

Where This Fits

Opal is not the first no-code AI builder, and the agent step does not make it the most powerful. Competitors like Lovable, Replit, and the SoftBank-backed Emergent all operate in adjacent spaces. Zapier and n8n have been doing workflow automation for years, increasingly with AI components bolted on.

What makes Opal's approach interesting is the integration depth. The agent has native access to Google's model stack - Gemini for reasoning, Veo for video, Search for retrieval - without the user needing API keys, OAuth flows, or external service accounts. For someone who has never written a line of code, that elimination of configuration overhead is the product.

The risk is the same one every no-code platform carries: the gap between demo and production. A "Visual Storyteller" demo is compelling. A business-critical workflow that needs to handle edge cases, retry failures, and audit its decisions is a different conversation entirely.

The Quiet Infrastructure Play

Opal has been on a steady geographic expansion since its US-only launch in July 2025. It reached 15 markets by October, went global across 160+ countries by November, and integrated into the Gemini web app with a visual editor in December. The agent step is the latest in a cadence that suggests Google Labs is treating this as a real product pipeline, not a one-off experiment.

Running the agent on Gemini 3 Flash is a deliberate choice. Flash is Google's speed-optimized model - fast inference, lower cost, good enough reasoning for most tool-selection tasks. It signals that Google is designing for high-volume, low-latency use cases where non-technical users will be generating many small workflows rather than a few complex ones.

For developers who have been building their own AI agents with frameworks like LangChain or CrewAI, Opal's agent step is not a replacement. It is Google's bet that the much larger market of non-developers wants the same capabilities without writing any code at all.

Whether that bet pays off depends on how far "describe what you want" can actually get you when the workflows get complicated.


About the author

Sophie, AI Infrastructure & Open Source Reporter, is a journalist and former systems engineer who covers AI infrastructure, open-source models, and the developer tooling ecosystem.