
Washing: What It Is and How To Avoid It

  • Writer: Weave.AI Team
  • Apr 24
  • 4 min read

Updated: Apr 28




Companies are often tempted to exaggerate their capabilities to win new customers, appease regulators, stay out of trouble, and attract investors. This practice, often referred to as ‘washing’, covers the many ways companies produce content that makes their capabilities seem greater than they are or paints the company in an overly positive light.


Washing can be difficult to identify, and the consequences of missing it are significant. Investors who cannot separate washing from genuine information may make bad investments based on false claims, expose themselves to risk, and overlook real opportunities.

 

Analysts face a significant challenge in cutting through washing. To make well-informed decisions, they must sift through vast amounts of data. Even AI-powered search tools leveraging large language models are not immune, as they often accept information at face value without adequately evaluating its credibility.


What is Washing? 


Washing is a deceptive marketing tactic that overstates the role or capabilities of a company or product.


  • AI Washing - Investors are pouring money into AI technology. However, in this competitive climate, some organizations exaggerate their AI capabilities to differentiate themselves, often overstating the sophistication or impact of their solutions.


  • Greenwashing - Many company executives are eager to capitalize on the growing interest in sustainability, producing marketing materials that portray their practices as environmentally friendly even though these claims often lack specific, verifiable evidence or adherence to recognized standards and international certifications. Greenwashing is a widely acknowledged problem, and companies that misrepresent their environmental practices now face legal consequences in several jurisdictions.


  • Innovation Washing - Organizations present their initiatives as transformative or groundbreaking despite offering no genuinely novel developments. This marketing approach can mislead stakeholders, causing them to overestimate the organization’s true level of innovation.


Companies often employ ‘washing’ techniques to exaggerate claims regarding their data, product capabilities, scale, performance, and customer base.


Why Most Research (Including Semantic Search) Can’t Spot Washing


Washing practices are inherently misleading and introduce significant risks. Washing tactics also homogenize company messaging, making it challenging to identify the solutions and organizations that are truly differentiated.


Three commonly used analyst research techniques are susceptible to washing in different ways:


  • Manual Analyst Research: Analyst firms subscribe to vast quantities of research yet are able to manually review only a very small fraction of it. The sheer volume of content makes it next to impossible for analysts to scrutinize or verify every single claim that companies make. Consider greenwashing: Zero Carbon Analytics explains that there are eleven common forms of greenwashing that can be present in a typical sustainability report. It could take hours to properly identify greenwashing for a single company report, let alone all of the greenwashing happening across a whole sector over a period of months or years.


  • Traditional Search: Traditional search technology relies heavily on keyword matching, a principle that, paradoxically, rewards rather than penalizes washing tactics. Take artificial intelligence (AI) as an example: although it is a widely discussed topic, its technical intricacies remain beyond the grasp of many, especially senior leaders and non-technical financial analysts who may not be equipped to critically evaluate AI claims. As a result, companies can saturate their messaging with AI-related keywords to boost their search rankings, confident that they will not be rigorously challenged (see the sketch after this list).


  • Semantic Search (LLMs): AI-based search, despite its capacity to retrieve information based on more sophisticated prompts, is not impervious to making mistakes. Microsoft conducted tests to evaluate the robustness of AI queries and found that they could be manipulated with relative ease. Likewise, many multimodal large language models (MLLMs) have demonstrated similar vulnerabilities.

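To see why keyword matching rewards washing, consider a minimal, hypothetical term-frequency ranker (a toy written for this post, not how any real search engine works). A document stuffed with AI buzzwords outranks one that makes a concrete, verifiable claim but repeats the keywords less often.

```python
from collections import Counter

def keyword_score(document: str, query_terms: list[str]) -> int:
    """Toy relevance score: count how often the query terms appear in the document."""
    words = Counter(document.lower().split())
    return sum(words[term] for term in query_terms)

query = ["ai", "machine", "learning"]

# A specific, verifiable claim that mentions the buzzwords only once.
substantive = ("Our machine learning model cut invoice processing time from "
               "4 hours to 20 minutes for 12 enterprise customers.")

# Keyword-stuffed marketing copy with no verifiable claim at all.
stuffed = ("AI AI AI machine learning AI platform with AI insights powered by "
           "machine learning and AI innovation")

print(keyword_score(substantive, query))  # lower score
print(keyword_score(stuffed, query))      # higher score, despite saying nothing concrete
```

A ranker like this can be gamed simply by repeating the right words, which is exactly the behavior washing exploits.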

Cutting Through Washing With Weave.AI


Text-based analysis is not sufficient to discern truth from washing. Much like early search engines were fooled by keyword stuffing before Google introduced PageRank, legacy AI and NLP approaches are susceptible to language that sounds impressive but lacks material substance. Weave.AI’s technology overcomes this by leveraging our proprietary Weave.AI Knowledge Graph (WKG) to move beyond word-level analysis and assess the materiality and significance of claims in ways that are both quantifiable and investible.


Large Language Models (LLMs) alone are incapable of solving the washing problem because they fundamentally rely on statistical associations within text rather than external validation of claims. While LLMs can analyze sentiment, detect linguistic patterns, and even generate seemingly coherent insights, they lack a built-in mechanism to verify factual accuracy, assess real-world credibility, or infer materiality beyond the text itself. Consequently, LLMs are easily misled by persuasive but empty statements that sound impressive but provide no substantive proof of impact. Without grounding outputs in a structured knowledge graph that connects claims to external data, industry benchmarks, and historical context, LLMs risk amplifying misinformation rather than mitigating it.


Our knowledge graph acts as a truth filter, looking beyond words to map relationships, validate context, and assess credibility. For example, generic claims like "our revolutionary AI is transforming business" may sound compelling but lack actionable insight. However, a claim stating that “an AI platform was adopted by Fortune 500 firms such as Microsoft and Goldman Sachs, reduced document processing time by 75%, and secured $10M in annual recurring revenues within 18 months” carries significant weight. Our AI automatically structures this information into a knowledge graph, ensuring that verifiable, high-impact signals—such as adoption by large enterprises, specific ROI-driven outcomes, and concrete financial metrics—are surfaced while empty rhetoric is deprioritized.
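Weave.AI's WKG itself is proprietary, but the idea can be illustrated with a deliberately simplified sketch (all entity names, patterns, and weights below are invented for illustration): extract the concrete, externally checkable signals from a claim, such as named adopters and quantified outcomes, and score claims by how many of those signals they carry.

```python
import re
from dataclasses import dataclass, field

# Hypothetical set of resolvable organizations; a real system would resolve
# entities against an external knowledge base, not a hard-coded list.
KNOWN_ENTERPRISES = {"Microsoft", "Goldman Sachs"}

@dataclass
class Claim:
    text: str
    entities: set = field(default_factory=set)   # named, checkable organizations
    metrics: list = field(default_factory=list)  # quantified outcomes (%, $, durations)

def extract(text: str) -> Claim:
    """Pull concrete, externally checkable signals out of a claim."""
    claim = Claim(text=text)
    claim.entities = {e for e in KNOWN_ENTERPRISES if e in text}
    claim.metrics = re.findall(r"\$?\d+(?:\.\d+)?\s*(?:%|M\b|million|months?|hours?)", text)
    return claim

def materiality_score(claim: Claim) -> int:
    """Toy heuristic: more verifiable entities and quantified metrics, more material."""
    return 2 * len(claim.entities) + len(claim.metrics)

generic = extract("Our revolutionary AI is transforming business.")
specific = extract("Our AI platform was adopted by Microsoft and Goldman Sachs, "
                   "reduced document processing time by 75% and secured $10M in "
                   "annual recurring revenues within 18 months.")

print(materiality_score(generic))   # 0: no checkable signals
print(materiality_score(specific))  # high: named adopters plus hard numbers
```

The real system structures these signals into a knowledge graph and cross-validates them against external data; the point of the sketch is only that substance, not phrasing, should drive the score.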


By scaling this approach across millions of documents and billions of claims, our AI safeguards investors, analysts, and regulators from misleading narratives, ensuring that only substantive, verifiable claims influence decision-making. Just as Google revolutionized search by looking outside individual webpages for credibility, our AI revolutionizes document intelligence by looking outside individual reports to cross-validate claims against real-world data, industry benchmarks, and historical precedent. This capability represents one of our most differentiated technological capabilities and is essential for mitigating the increasingly sophisticated forms of "washing" that pervade today’s corporate and investment landscape.




 
 
 
