The following is a guest post and opinion from J.D. Seraphine, Founder and CEO of Raiinmaker.
X’s Grok AI cannot seem to stop talking about “white genocide” in South Africa; ChatGPT has become a sycophant. We have entered an era where AI isn’t just repeating existing human knowledge; it seems to be rewriting it. From search results to instant messaging platforms like WhatsApp, large language models (LLMs) are increasingly becoming the interface we interact with most.
Whether we like it or not, there’s no ignoring AI anymore. However, given the innumerable examples in front of us, one cannot help but wonder whether the foundation these models are built on is not only flawed and biased but also intentionally manipulated. At present, we are not just dealing with skewed outputs; we are facing a much deeper challenge: AI systems are beginning to reinforce a version of reality shaped not by truth but by whatever content gets scraped, ranked, and echoed most often online.
Today’s AI models aren’t just biased in the traditional sense; they are increasingly being trained to appease, align with general public sentiment, avoid topics that cause discomfort, and, in some cases, even overwrite inconvenient truths. ChatGPT’s recent “sycophantic” behavior isn’t a bug; it is a reflection of how models are now being tailored for engagement and retention.
At the other end of the spectrum are models like Grok that continue to produce outputs laced with conspiracy theories, including statements questioning historical atrocities like the Holocaust. Whether AI becomes sanitized to the point of emptiness or remains subversive to the point of harm, either extreme distorts reality as we know it. The common thread here is clear: when models are optimized for virality or user engagement over accuracy, the truth becomes negotiable.
When Data Is Taken, Not Given
This distortion of truth in AI systems isn’t just a result of algorithmic flaws; it starts with how the data is collected. When the data used to train these models is scraped without context, consent, or any form of quality control, it comes as no surprise that the large language models built on top of it inherit the raw data’s biases and blind spots. We have seen these risks play out in real-world lawsuits as well.
Authors, artists, journalists, and even filmmakers have filed complaints against AI giants for scraping their intellectual property without their consent, raising not just legal concerns but moral questions as well—who controls the data being used to build these models, and who gets to decide what’s real and what’s not?
A tempting solution is to say we simply need “more diverse data,” but that alone is not enough. We need data integrity. We need systems that can trace the origin of the data, validate the context of each input, and invite voluntary participation rather than operate in silos. This is where decentralized infrastructure offers a path forward. In a decentralized framework, human feedback isn’t just a patch; it is a key developmental pillar, with individual contributors empowered to help build and refine AI models through real-time on-chain validation. Consent is explicitly built in, and trust becomes verifiable.
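To make that idea concrete, here is a minimal sketch, in Python, of what a consent-bearing, provenance-tracked contribution record could look like: the contributor attests to a hash of their content and the license they actually granted, and anyone downstream can verify both. The field names and the toy HMAC signing scheme are illustrative assumptions, not a description of Raiinmaker’s protocol or any real on-chain system.

```python
# Minimal sketch: a consent-bearing training record with verifiable provenance.
# All names are illustrative; a real system would anchor these records on-chain
# and use proper public-key signatures rather than this toy shared-secret scheme.
import hashlib
import hmac
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class ConsentRecord:
    contributor_id: str   # who supplied the data
    content_hash: str     # SHA-256 of the contributed content
    license: str          # terms the contributor actually agreed to
    timestamp: float      # when consent was granted
    signature: str        # contributor's attestation over the record


def make_record(contributor_id: str, content: str, license: str, secret_key: bytes) -> ConsentRecord:
    content_hash = hashlib.sha256(content.encode()).hexdigest()
    payload = json.dumps(
        {"contributor_id": contributor_id, "content_hash": content_hash, "license": license},
        sort_keys=True,
    )
    signature = hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()
    return ConsentRecord(contributor_id, content_hash, license, time.time(), signature)


def verify_record(record: ConsentRecord, content: str, secret_key: bytes) -> bool:
    # Checks that (a) the content matches the hash the contributor attested to,
    # and (b) the attestation over the consent payload is valid.
    if hashlib.sha256(content.encode()).hexdigest() != record.content_hash:
        return False
    payload = json.dumps(
        {
            "contributor_id": record.contributor_id,
            "content_hash": record.content_hash,
            "license": record.license,
        },
        sort_keys=True,
    )
    expected = hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.signature)


if __name__ == "__main__":
    key = b"contributor-secret"  # stand-in for a real wallet key
    record = make_record("alice", "My original essay.", "train-with-attribution", key)
    print(asdict(record))
    print(verify_record(record, "My original essay.", key))       # True
    print(verify_record(record, "A scraped copy, edited.", key))  # False
```

The specifics would differ in production, but the principle is the point: provenance and consent travel with the data, instead of being stripped away at scraping time.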
A Future Built on Shared Truth, Not Synthetic Consensus
The reality is that AI is here to stay, and we don’t just need AI that’s smarter; we need AI that is grounded in reality. The growing reliance on these models in our day-to-day lives, whether through search or app integrations, is a clear indication that flawed outputs are no longer just isolated errors; they are shaping how millions interpret the world.
A recurring example of this is Google Search’s AI Overviews, which have become notorious for absurd suggestions. These aren’t just odd quirks; they point to a deeper issue: AI models are producing confident but false outputs. It’s critical for the tech industry as a whole to recognize that when scale and speed are prioritized above truth and traceability, we don’t get smarter models; we get convincing ones that are trained to “sound right.”
So, where do we go from here? To course-correct, we need more than just safety filters. The path ahead of us isn’t just technical—it’s participatory. There is ample evidence that points to a critical need to widen the circle of contributors, shifting from closed-door training to open, community-driven feedback loops.
With blockchain-backed consent protocols, contributors can verify how their data is used to shape outputs in real time. This isn’t just a theoretical concept; projects such as the Large-scale Artificial Intelligence Open Network (LAION) are already testing community feedback systems where trusted contributors help refine responses generated by AI. Platforms such as Hugging Face are working with community members who test LLMs and contribute red-team findings in public forums.
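As a rough illustration of what “verify how their data is used” could mean in practice (and not a description of how LAION or Hugging Face actually operate), a trainer could publish a Merkle root over the hashes of every consented record in a training run; each contributor then receives a short inclusion proof they can check for themselves. The sketch below assumes hypothetical record identifiers.

```python
# Sketch: contributors check that their record was included in a published
# training manifest via a Merkle inclusion proof. Purely illustrative.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


def inclusion_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    # Returns (sibling_hash, sibling_is_right) pairs from leaf to root.
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], sibling > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof


def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = h(leaf)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root


if __name__ == "__main__":
    manifest = [b"record-alice", b"record-bob", b"record-carol"]
    root = merkle_root(manifest)          # published alongside the training run
    proof = inclusion_proof(manifest, 0)  # handed to the contributor
    print(verify_inclusion(b"record-alice", proof, root))    # True
    print(verify_inclusion(b"record-mallory", proof, root))  # False
```

Anchored on a public chain, such a root gives contributors an audit trail: if a record’s proof does not verify against the published root, that data was not part of the run the trainer claims it was.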
Therefore, the challenge in front of us isn’t whether it can be done—it’s whether we have the will to build systems that put humanity, not algorithms, at the core of AI development.