March 2026 update
agentic retrieval • energy optimization • a roadmap for the climate AI ecosystem

A future-proof retrieval engine for climate insights

This demo previews ChatNetZero’s upcoming backend overhaul: agentic retrieval, enhanced mathematical capabilities, and estimated energy use per query.
Why this update is needed
In 2023, Data-Driven EnviroLab and Arboretica launched ChatNetZero to give climate professionals AI built for climate evidence. Since then, AI has shifted from static Q&A to reasoning, live search, and agentic workflows, and users now expect analysis, calculations, comparison, and source verification. ChatNetZero 3.0 is our rebuild for that reality: stronger capability with truth-first outputs, traceable sourcing, and energy-efficient design.
What’s new

Four upgrades we’re showcasing

These changes are designed to improve analysis in an era of rapid AI advancement, reduce compute and energy costs, and make answers more trustworthy as our knowledge base grows.
From classical RAG to agentic retrieval
Classical RAG works well for many use cases, but it becomes slow and computationally heavy when the knowledge base is large and frequently updated.
We are upgrading ChatNetZero with an agentic retrieval architecture that first comprehends both the content and context of documents, then dynamically identifies the most relevant knowledge to retrieve from only those sources.
  • Smaller, temporary per-answer retrieval contexts (instead of loading everything every time)
  • Lower compute cost with the same answer depth and quality
  • Continuous backend updates without re-embedding the entire corpus, saving energy
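To make the routing idea concrete, here is a minimal sketch of how a distribution agent could select only the relevant document databases for a query. All names (`DocDatabase`, `route_query`, the database list) are illustrative assumptions, not ChatNetZero's actual API.

```python
# Hypothetical sketch of agentic retrieval routing (names are illustrative,
# not ChatNetZero's actual implementation). The agent inspects lightweight
# per-database summaries and loads only the sources relevant to the query.
from dataclasses import dataclass

@dataclass
class DocDatabase:
    name: str
    topics: set       # topic tags produced during document comprehension
    keywords: set     # keywords extracted at ingestion time

def route_query(query_terms: set, databases: list) -> list:
    """Select only databases whose topics or keywords overlap the query."""
    selected = []
    for db in databases:
        if query_terms & (db.topics | db.keywords):  # any overlap at all
            selected.append(db.name)
    return selected

dbs = [
    DocDatabase("company_reports", {"scope 3", "emissions"}, {"supplier"}),
    DocDatabase("nzt_snapshots", {"targets"}, {"net zero"}),
    DocDatabase("city_plans", {"adaptation"}, {"resilience"}),
]
print(route_query({"scope 3", "supplier"}, dbs))  # only company_reports is loaded
```

Because only the matching databases are loaded per answer, the retrieval context stays small and the rest of the corpus never has to be touched for that query.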
Energy consumption per query
We will report an estimated energy cost for each user query (and, where appropriate, a CO₂e estimate) using the approach developed in our research. The goal is to make ChatNetZero a climate- and energy-responsible tool for researchers.
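As a rough illustration of what a per-query estimate could look like, the sketch below multiplies token counts by an assumed energy-per-token constant and a grid carbon intensity. Both constants are placeholder assumptions for demonstration, not figures from our research.

```python
# Illustrative per-query energy estimate. Both constants are placeholder
# assumptions for demonstration, not ChatNetZero's published methodology.
ENERGY_PER_TOKEN_WH = 0.0003   # assumed Wh consumed per processed token
GRID_CO2E_G_PER_WH = 0.4       # assumed grid intensity, g CO2e per Wh

def estimate_query_footprint(prompt_tokens: int, output_tokens: int) -> dict:
    """Rough energy (Wh) and CO2e (g) estimate for a single query."""
    total_tokens = prompt_tokens + output_tokens
    energy_wh = total_tokens * ENERGY_PER_TOKEN_WH
    co2e_g = energy_wh * GRID_CO2E_G_PER_WH
    return {"energy_wh": round(energy_wh, 4), "co2e_g": round(co2e_g, 4)}

print(estimate_query_footprint(1200, 800))  # ~0.6 Wh, ~0.24 g CO2e
```

A real estimate would also account for retrieval and indexing overhead, which is one reason the agentic architecture's smaller retrieval contexts matter for the energy budget.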
Dynamic data updates
We are increasing our data update cadence to support a larger and more timely knowledge base—while preserving provenance and traceability. This includes regular ingestion of:
  • Latest company reports
  • Research PDFs and method documentation
  • Net Zero Tracker (NZT) snapshot data
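One way to preserve provenance across frequent updates is to attach metadata to every ingested document, as in the hypothetical record below. The field names and URL are illustrative, not the actual schema; hashing the text lets changed documents be detected without reprocessing unchanged ones.

```python
# Hypothetical shape of a provenance-preserving ingestion record (field names
# are illustrative, not the actual schema).
import hashlib
from datetime import datetime, timezone

def make_ingestion_record(doc_id: str, source_url: str, text: str) -> dict:
    """Attach provenance metadata to an ingested document."""
    return {
        "doc_id": doc_id,
        "source_url": source_url,                              # where it came from
        "sha256": hashlib.sha256(text.encode()).hexdigest(),   # change detection
        "ingested_at": datetime.now(timezone.utc).isoformat(), # snapshot time
    }

rec = make_ingestion_record("nzt-2026-03", "https://example.org/nzt.csv", "sample text")
print(rec["sha256"][:12])
```

With records like this, every answer can point back to the exact document version it drew on, even as the underlying corpus is refreshed.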
Native analytical capabilities (coming next)
General-purpose LLMs are not optimized for reliable aggregation, counting, and measurement. We are developing native analytical methods to answer questions like “how many,” “how much,” “which ones,” and “find all cases” with stronger grounding, explicit referencing, and hallucination checks.
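The core idea can be sketched as counting done in code rather than by the model, with every match carrying its source so the total is verifiable. This is an assumed design for illustration, not the shipped implementation; the records and citation strings are invented examples.

```python
# Minimal sketch of grounded counting with explicit references — an assumed
# design, not ChatNetZero's shipped implementation. The count is computed
# deterministically, and each match carries the citation that grounds it.
def count_with_citations(records: list, predicate) -> dict:
    """Count matching records and return the citations behind the count."""
    matches = [r for r in records if predicate(r)]
    return {
        "count": len(matches),
        "citations": [r["source"] for r in matches],  # every claim is traceable
    }

companies = [  # invented example data
    {"name": "A Corp", "scopes": {1, 2, 3}, "source": "NZT snapshot, row 14"},
    {"name": "B Ltd",  "scopes": {1, 2},    "source": "NZT snapshot, row 87"},
    {"name": "C Inc",  "scopes": {1, 2, 3}, "source": "NZT snapshot, row 203"},
]
result = count_with_citations(companies, lambda r: r["scopes"] == {1, 2, 3})
print(result["count"])  # 2, each backed by a citation
```

Answering "how many companies have targets covering Scopes 1–3?" this way means the number can be audited against its citations instead of taken on trust.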
Architecture

New retrieval system

From vector-database embedding to vectorless retrieval.
[Diagram] A user query is routed by a Distribution Agent, which consults per-document indexes (text, number, page) and metadata (category, keywords, topics) to select only the relevant document databases. Retrieval then runs RAG (prompt engineering + LLM + hallucination checks) over those selected sources alone.
Roadmap

The Ecosystem of Trusted Climate AI

Mapping the future of climate intelligence through specialized tools and scientific integrity.
Official Sources
Verified intelligence from official documents and trusted NGO data.
Report Scout • City Scout
Climate-Specific AI
LLM applications fine-tuned for climate nuance and policy language.
Domain Workflows
Customized sustainability data extraction and tracking logic.
GreenSearch • ChatNetZero (more to be announced)
Research Foundation
Built on scientific integrity and peer-reviewed methodology.
Demo

Test agentic retrieval

Full release coming soon.
Try sample questions
Enter a question to preview a side-by-side comparison of answers and estimated energy consumption.
Sample questions
  • Which documents should be consulted to assess a firm’s Scope 3 coverage?
  • Find all examples of “supplier engagement” claims in recent reports.
  • How many companies have targets covering Scopes 1–3, according to NZT?
Stay in the loop

Get notified when the upgraded ChatNetZero ships

We’re migrating retrieval, embedding, and data management. Leave your email and we’ll notify you when the full platform is live.
Email