March 2026 update
agentic retrieval • energy optimization • a roadmap for the climate AI ecosystem

An energy-transparent, future-proof retrieval engine for climate insights

This demo previews ChatNetZero’s upcoming backend overhaul: agentic retrieval, enhanced mathematical capabilities, and estimated energy use per query.
Why this update is needed
In 2023, Data-Driven EnviroLab and Arboretica launched ChatNetZero to give climate professionals AI built for climate evidence. Since then, AI has shifted from static Q&A to reasoning, live search, and agentic workflows, and users now expect analysis, calculations, comparison, and source verification. ChatNetZero 3.0 is our rebuild for that reality: stronger capability with truth-first outputs, traceable sourcing, and energy-efficient design.
What’s new

Four upgrades we’re showcasing

These changes are designed to improve analysis in an era of rapid AI advancement, reduce compute and energy costs, and make answers more trustworthy as our knowledge base grows.
From classical RAG to agentic retrieval (coming soon)
Classical RAG works well for many use cases, but it becomes slow and computationally heavy when the knowledge base is large and frequently updated.
We are upgrading ChatNetZero with an agentic retrieval architecture that first comprehends both the content and context of documents, then dynamically identifies the most relevant knowledge to retrieve from only those sources.
  • Smaller, temporary per-answer retrieval contexts instead of loading everything every time
  • Lower compute cost while keeping the same answer depth and quality
  • Continuous backend updates that save energy by avoiding re-embedding the entire corpus
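The two-stage idea behind agentic retrieval can be sketched as follows: a cheap first pass over lightweight document summaries selects candidate sources, and passage-level retrieval runs only within those sources. This is an illustrative sketch, not ChatNetZero's actual implementation; all function names, document IDs, and the word-overlap scoring are assumptions.

```python
# Hypothetical two-stage agentic retrieval: a cheap pass over document
# summaries narrows the corpus before passage-level search.

def select_sources(query_terms, summaries, k=2):
    """Rank documents by overlap between the query and their summaries."""
    scored = []
    for doc_id, summary in summaries.items():
        overlap = len(query_terms & set(summary.lower().split()))
        scored.append((overlap, doc_id))
    scored.sort(reverse=True)
    return [doc_id for overlap, doc_id in scored[:k] if overlap > 0]

def retrieve(query, summaries, passages, k=2):
    """Retrieve passages only from the documents chosen in the first stage."""
    terms = set(query.lower().split())
    context = []
    for doc_id in select_sources(terms, summaries, k):
        for passage in passages[doc_id]:
            if terms & set(passage.lower().split()):
                context.append((doc_id, passage))
    return context

# Toy corpus (illustrative data only).
summaries = {
    "nzt": "net zero targets pledges countries companies tracker",
    "esg_ball": "ball corporation esg report emissions packaging",
}
passages = {
    "nzt": ["Over 140 countries have announced net zero targets."],
    "esg_ball": ["Ball Corporation reports Scope 1 and 2 emissions annually."],
}

print(retrieve("which countries have net zero targets", summaries, passages))
```

Because the second stage never touches unselected documents, the per-answer context stays small even as the corpus grows, which is what keeps compute cost roughly flat.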
Energy consumption per query
We will report an estimated energy cost for each user query, using the approach developed in our research and in the paper "How Hungry is AI?". The goal is to make ChatNetZero a climate- and energy-responsible tool for researchers.
The energy consumption of each query will be reported with a detailed breakdown by step, to provide researchers with a benchmark for designing energy-efficient AI systems.
The energy consumption data are based on estimates as of January 2026.
Example output
Step                  Running time (s)   Est. energy (Wh)
Entity detect         0.269              0.001
Retrieve              0.237              0.001
Hallucination check   10.765             0.028
LLM inference         4.074              0.435
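One simple way to produce a breakdown like the one above is to multiply each step's measured runtime by an assumed average power draw for the hardware that step runs on (low for CPU-bound steps, high for GPU inference). The wattages below are placeholders chosen to reproduce the example figures, not measured values, and this is only a sketch of the estimation idea.

```python
# Runtime-based energy estimate: energy (Wh) = power (W) x time (h).
# Per-step wattages are hypothetical, not measured.

STEP_POWER_W = {
    "entity_detect": 13.0,        # lightweight CPU model
    "retrieve": 15.0,             # index lookup on CPU
    "hallucination_check": 9.5,   # local CPU-based verification
    "llm_inference": 384.0,       # GPU-backed generation
}

def energy_wh(step, seconds):
    """Convert a step's runtime to watt-hours under its assumed power draw."""
    return STEP_POWER_W[step] * seconds / 3600.0

timings = {"entity_detect": 0.269, "retrieve": 0.237,
           "hallucination_check": 10.765, "llm_inference": 4.074}

breakdown = {step: round(energy_wh(step, t), 3) for step, t in timings.items()}
print(breakdown)
print("total Wh:", round(sum(breakdown.values()), 3))
```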
Dynamic data updates
We are increasing our data update cadence to support a larger and more timely knowledge base—while preserving provenance and traceability. This includes regular ingestion of:
  • Latest company reports
  • Research PDFs and method documentation
  • Net Zero Tracker (NZT) snapshot data
Native analytical capabilities
General-purpose LLMs are not optimized for reliable aggregation, counting, and measurement. We are developing native analytical methods to answer questions like “how many,” “how much,” “which ones,” and “find all cases” with stronger grounding, explicit referencing, and hallucination checks.
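A grounded "how many" answer can be computed over structured records rather than generated by an LLM, with every contributing record attached as an explicit reference. The field names and records below are illustrative assumptions, not ChatNetZero's schema.

```python
# Hypothetical native analytical answer: count over structured records and
# return the sources that ground the count, instead of asking an LLM to count.

records = [
    {"entity": "Company A", "has_net_zero_target": True,  "source": "report_a.pdf"},
    {"entity": "Company B", "has_net_zero_target": False, "source": "report_b.pdf"},
    {"entity": "Company C", "has_net_zero_target": True,  "source": "report_c.pdf"},
]

def count_with_references(rows, predicate):
    """Count matching rows and return the sources that ground the count."""
    matches = [r for r in rows if predicate(r)]
    return len(matches), sorted(r["source"] for r in matches)

count, refs = count_with_references(records, lambda r: r["has_net_zero_target"])
print(f"{count} entities have a net zero target (sources: {', '.join(refs)})")
```

Because the count is an aggregation over vetted records, it cannot hallucinate a number, and the reference list makes the answer directly verifiable.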
Energy Efficiency

Promoting energy-efficient AI design

We are constantly developing algorithmic and architectural best practices to balance performance and energy consumption across AI workflows.
Smart Router. Use a lightweight routing agent to identify the most efficient path for each query, avoiding unnecessary heavy AI reasoning when a faster grounded method can answer just as well.
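A routing agent in this spirit can be sketched as a cheap classifier that sends each query down the lightest path able to answer it. The cue phrases and route names here are illustrative assumptions, not the production router's logic.

```python
# Minimal sketch of a smart router: classify the query cheaply and reserve
# heavy LLM reasoning for open-ended questions. Cues and routes are hypothetical.

ANALYTICAL_CUES = ("how many", "how much", "count", "which ones", "find all")

def route(query):
    q = query.lower()
    if any(cue in q for cue in ANALYTICAL_CUES):
        return "sql_agent"        # grounded aggregation, no LLM generation
    if q.endswith("?") and len(q.split()) <= 6:
        return "cached_template"  # reuse a stored reasoning template
    return "llm_reasoning"        # fall back to full agentic reasoning

print(route("How many companies have net zero targets?"))  # sql_agent
print(route("What is Scope 3?"))                           # cached_template
print(route("Compare Ball and Baxter decarbonization strategies since 2020"))
```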
Caching the thinking process. Save common question types and use cases as reusable reasoning templates, helping us achieve faster, higher-quality answers while using significantly less energy than general-purpose agents.
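One way to cache a "thinking process" is to normalize each query into a question shape and reuse the stored reasoning plan whenever that shape recurs, so the expensive planning pass runs only on first sight. The shape function and plan steps below are illustrative assumptions.

```python
# Sketch of reasoning-template caching: queries that share a shape reuse one plan.
import re

template_cache = {}

def shape(query):
    """Replace entity-like tokens with a placeholder to get a reusable key."""
    return re.sub(r"\b[A-Z][a-zA-Z]*\b", "<ENTITY>", query).lower()

def plan(query, build_plan):
    key = shape(query)
    if key not in template_cache:
        template_cache[key] = build_plan(query)   # expensive only on first sight
    return template_cache[key]

build_calls = []
def expensive_planner(q):
    build_calls.append(q)
    return ["retrieve sources", "extract target year", "check hallucination"]

plan("Does Ball have a net zero target?", expensive_planner)
plan("Does Baxter have a net zero target?", expensive_planner)
print("planner invocations:", len(build_calls))  # 1: second query hit the cache
```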
Algorithmic hallucination check. Use local, CPU-based checks to catch hallucinated content before it reaches the user, reducing the need for extra GPU-heavy verification steps.
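One cheap, CPU-only check of this kind is to verify that every number an answer cites actually appears in the retrieved source text; it is plain string matching, with no extra model call. This is a sketch of the general idea under that assumption, not ChatNetZero's actual verification pipeline.

```python
# CPU-based hallucination check sketch: flag cited numbers absent from sources.
import re

def unsupported_numbers(answer, sources):
    """Return numbers in the answer that appear in no source passage."""
    cited = set(re.findall(r"\d+(?:\.\d+)?", answer))
    supported = set()
    for src in sources:
        supported |= set(re.findall(r"\d+(?:\.\d+)?", src))
    return sorted(cited - supported)

sources = ["Ball Corporation targets a 55% emissions cut by 2030."]
good = "Ball targets a 55% reduction by 2030."
bad = "Ball targets a 70% reduction by 2030."

print(unsupported_numbers(good, sources))  # []
print(unsupported_numbers(bad, sources))   # ['70']
```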
Reuse source embedding. Climate documents and datasets are vetted by climate experts and then reused for querying, avoiding repeated source-fetching costs. New documents can also be added or removed individually, which is more energy-efficient than rebuilding a legacy RAG system from scratch.
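Incremental add and remove can be sketched with an embedding store keyed by content hash: unchanged documents are never re-embedded, and removing one document touches one entry rather than rebuilding the index. The class, the toy embedding function, and the document IDs are all illustrative assumptions.

```python
# Sketch of an incrementally updatable embedding store keyed by content hash.
import hashlib

class IncrementalIndex:
    def __init__(self, embed):
        self.embed = embed            # embedding function, computed once per doc
        self.vectors = {}             # content-hash -> (doc_id, vector)

    def add(self, doc_id, text):
        key = hashlib.sha256(text.encode()).hexdigest()
        if key not in self.vectors:   # unchanged content is never re-embedded
            self.vectors[key] = (doc_id, self.embed(text))

    def remove(self, doc_id):
        self.vectors = {k: v for k, v in self.vectors.items() if v[0] != doc_id}

calls = []
def toy_embed(text):
    calls.append(text)
    return [len(text)]               # placeholder for a real embedding model

index = IncrementalIndex(toy_embed)
index.add("nzt_2026", "Net Zero Tracker snapshot, February 2026")
index.add("nzt_2026", "Net Zero Tracker snapshot, February 2026")  # no-op
print("embedding calls:", len(calls))            # 1
index.remove("nzt_2026")
print("documents indexed:", len(index.vectors))  # 0
```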
Optimize response. Use concise, accurate language and avoid unnecessary embellishment so the system does not spend extra energy generating low-value text.
Architecture

New retrieval system

From vector database embedding to vectorless retrieval.
User Query → Router → Tool Layer (Retrieval Agent, SQL Agent, Other Tools) → Clarify / Reasoning Loop → Synthesizer → Prompt Engineer + Hallucination Check → Final Answer, with the Chain of Thought surfaced alongside the answer
Roadmap

The Ecosystem of Trusted Climate AI

Mapping the future of climate intelligence through specialized tools and scientific integrity.
  • Official Sources: verified intelligence from official documents and trusted NGO data (Report Scout, City Scout)
  • Climate-Specific AI: LLM applications fine-tuned for climate nuance and policy language
  • Domain Workflows: customized sustainability data extraction and tracking logic (GreenSearch, ChatNetZero; more to be announced)
  • Research Foundation: built on scientific integrity and peer-reviewed methodology
Demo

Test agentic retrieval

Full release coming soon.
Try sample questions
Enter a question to preview a side-by-side comparison of answers and estimated energy consumption.
In this demo, the knowledge base includes Net Zero Tracker data (as of February 2026), high-level reports and literature, and two sample corporate ESG reports (Ball Corporation and Baxter International). Contact us if you would like to test additional documents.
Sample questions
  • Reduce greenwashing
  • Advanced analytical insights
  • National climate policies (early beta)
  • Deep dive into ESG reports, charts and figures (early beta)
Stay in the loop

Get notified when the upgraded ChatNetZero ships

We’re migrating retrieval, embedding, and data management. Share your details in the signup form and we’ll notify you when the full platform is live.
Please use this form to sign up for ChatNetZero 3.0 updates.