spdup.net

Tech news

Google Antigravity AI Code Editor Reviewed – Performance with Gemini 3


Introduction

Google’s latest foray into AI‑assisted development arrives under the name Antigravity. Marketed as a next‑generation code editor that combines autocomplete, an AI agent, and a project‑level agent manager, Antigravity is built on the same technology stack that powered the earlier Windsurf editor. This review examines how the product measures up to its promises, especially when paired with the new Gemini 3 model.

Background and Acquisition

The visual language and core features of Antigravity are unmistakably derived from Windsurf. After Google acquired the Windsurf codebase and hired its founding team, those engineers were integrated into DeepMind. The remaining Windsurf product was later sold to Cognition, the makers of Devin, leaving Google with the underlying engine and a roadmap for a rebranded experience.

Key points from the acquisition:

  • Google obtained the complete Windsurf source code and key personnel.
  • The former Windsurf team joined DeepMind, where Antigravity was developed.
  • Cognition now maintains the legacy Windsurf product, while Google focuses on Antigravity.

Installation and User Interface

Antigravity is available for macOS, Windows, and Linux. After downloading, the installer offers to import settings from Windsurf, easing the transition for existing users.

The UI mirrors Windsurf’s layout:

  • File Explorer on the left with the same colorful icons that were exclusive to Windsurf.
  • Editor pane in the center where code is written.
  • Agent panel on the right, where prompts are entered and responses displayed.
  • A Settings dialog that replicates Windsurf’s configuration options, including tooltips that simply swap the name of Windsurf’s agent, “Cascade”, for the generic term “agent”.

Overall, the interface feels like an older version of Windsurf that has been lightly refreshed rather than a ground‑up redesign.

Agent Manager – A Low‑Budget Verdant?

One of Antigravity’s touted new features is the Agent Manager, intended to help developers oversee multiple AI agents across projects. The concept is reminiscent of Verdant, a VS Code‑based environment praised for its intuitive agent workflow.

Comparative observations:

  • Verdant offers a clean, project‑centric view with clear inbox, task, and thread navigation.
  • Antigravity’s manager provides similar sections but lacks the polish and cohesion of Verdant.
  • The UI feels bolted on, with inconsistent styling and limited feedback mechanisms.

While functional, the Agent Manager falls short of the seamless experience that Verdant sets as a benchmark.

Benchmark Testing

To evaluate practical performance, a series of benchmark prompts was run through Antigravity powered by Gemini 3. The tasks included:

  • Implementing a Go TUI calculator
  • Building a simple “Godo” game in Go
  • Completing a long‑running spell‑checking benchmark
  • Developing small applications (Nux, Tari) that require multi‑step reasoning
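The review does not reproduce the benchmark prompts themselves, so as a rough illustration of the scale of the first task, here is a sketch of the arithmetic core such a calculator would need, minus the terminal UI (which a real solution would likely build with a TUI library). This is an illustrative assumption, not the actual benchmark solution; the `evalExpr` function and its space-separated input format are inventions for this sketch:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// evalExpr evaluates a space-separated arithmetic expression such as
// "3 + 4 * 2", honoring * and / before + and - but without parentheses.
func evalExpr(input string) (float64, error) {
	tokens := strings.Fields(input)
	if len(tokens) == 0 || len(tokens)%2 == 0 {
		return 0, fmt.Errorf("malformed expression: %q", input)
	}
	// First pass: collapse runs of * and / into single values.
	var nums []float64
	var ops []string
	cur, err := strconv.ParseFloat(tokens[0], 64)
	if err != nil {
		return 0, err
	}
	for i := 1; i < len(tokens); i += 2 {
		rhs, err := strconv.ParseFloat(tokens[i+1], 64)
		if err != nil {
			return 0, err
		}
		switch op := tokens[i]; op {
		case "*":
			cur *= rhs
		case "/":
			if rhs == 0 {
				return 0, fmt.Errorf("division by zero")
			}
			cur /= rhs
		case "+", "-":
			nums = append(nums, cur)
			ops = append(ops, op)
			cur = rhs
		default:
			return 0, fmt.Errorf("unknown operator %q", op)
		}
	}
	nums = append(nums, cur)
	// Second pass: apply + and - left to right.
	result := nums[0]
	for i, op := range ops {
		if op == "+" {
			result += nums[i+1]
		} else {
			result -= nums[i+1]
		}
	}
	return result, nil
}

func main() {
	for _, expr := range []string{"3 + 4 * 2", "10 / 4 - 1"} {
		if v, err := evalExpr(expr); err == nil {
			fmt.Printf("%s = %g\n", expr, v) // e.g. 3 + 4 * 2 = 11
		}
	}
}
```

Even this stripped-down core involves operator precedence and error handling, which helps explain why the agent’s single minor error on this task was an acceptable result.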

Results:

  • Go TUI calculator: Completed with one minor error that was easily fixed.
  • Godo game: Failed after multiple attempts; the agent could not produce a working solution.
  • Spell‑checking benchmark: Timed out and produced errors, indicating difficulty with long‑running tasks.
  • Nux and Tari apps: Similar failures, with the agent aborting after a few prompts.

The pattern shows that Antigravity handles simple, single‑step tasks reasonably well but struggles with complex, multi‑step workflows. Errors often required manual intervention, reducing overall productivity.

Technical Shortcomings

Several technical issues emerged during testing:

  • Browser integration: Antigravity can invoke a browser to verify task completion, a feature inherited from earlier Gemini models. In practice, the browser checks were superficial and missed obvious UI bugs.
  • Token‑saving heuristics: The agent frequently truncates context to save tokens, which degrades the quality of generated code.
  • Buggy agent harness: Despite Gemini 3’s capabilities, the surrounding harness introduces instability, leading to crashes and incomplete outputs.
  • Inconsistent UI: Elements feel retrofitted, creating a disjointed experience that resembles a quick prototype rather than a polished product.

Comparison with Existing Google Tools

Google already offers several AI‑enhanced development solutions:

  • Firebase Studio: Provides a lightweight UI with VS Code integration.
  • Gemini Code Assist extension for VS Code: Delivers autocomplete and agentic suggestions directly within the popular editor.
  • Gemini CLI: Enables command‑line interactions with Gemini models for code generation.

Antigravity overlaps heavily with these tools but does not deliver a clear advantage. Its unique selling point—an integrated editor with a built‑in agent manager—fails to justify the additional learning curve given the availability of more mature alternatives.

Conclusion

Google’s Antigravity is essentially a rebranded Windsurf editor with a superficial UI overhaul and an added, but under‑engineered, Agent Manager. While it can handle straightforward coding prompts when paired with Gemini 3, it falters on more demanding, multi‑step tasks and exhibits a range of usability bugs.

For developers already invested in VS Code or Firebase Studio, Antigravity offers little incentive to switch. The product feels rushed, and the integration of Gemini 3 does not compensate for the underlying instability.

In its current state, Antigravity is an interesting experiment but not a viable replacement for established AI‑assisted development tools. Future iterations will need a more cohesive UI, robust agent orchestration, and deeper integration with Google’s existing ecosystem before it can be considered a competitive offering.
