THE GREAT DISCONNECT
How Artificial Intelligence Is Dismantling Human Connection, Public Knowledge, and the Cognitive Infrastructure of Modern Life
Artificial intelligence is simultaneously dismantling the social infrastructure of the internet — flooding it with synthetic content, collapsing public knowledge sharing, deepening the loneliness epidemic through simulated companionship, eroding the cognitive capacities of the populations most exposed to it, and contaminating its own training data in the process. These five crises form a self-reinforcing feedback loop: AI consumes human knowledge to build products that destroy the incentive to create human knowledge, which contaminates the training data those products depend on to function. This article documents the evidence for each crisis, demonstrates their interconnection, and concludes with a specific claim about what must be built in response.
I. The Convergence
In the span of roughly thirty-six months — from the public release of ChatGPT in November 2022 to the present day — the internet has undergone a transformation more profound than any since its commercialization. The change is not merely technological. It is social, psychological, and ecological. The environment in which human beings communicate, share knowledge, form relationships, and construct shared meaning has been fundamentally altered, and the alteration is accelerating.
The purpose of this article is to examine the nature and trajectory of that alteration with precision. The popular discourse around artificial intelligence tends to oscillate between utopian celebration and dystopian panic, neither of which serves the analysis well. What is required is a careful accounting of what the evidence actually shows — and what it shows is deeply concerning, not because AI is inherently destructive, but because the specific way it has been deployed is systematically dismantling the social and cognitive infrastructure that human beings depend on to function.
The argument proceeds through five domains. The internet, once a predominantly human environment, has become a predominantly machine environment. This transition has destroyed the incentive structures that sustained public knowledge sharing. The resulting vacuum has been partially filled by AI companion products that simulate connection while deepening isolation. The cumulative effect of these shifts, compounded by the cognitive offloading that AI encourages, is producing measurable declines in the reasoning capacity of the populations most exposed. And the entire cycle is unsustainable even on its own terms, because the AI systems at its center require human-generated data to function, and the supply of that data is being systematically destroyed.
I should be transparent about my position. I am a senior QA engineer with over a decade of experience in software. I use AI tools daily. I teach others to use them. I am not a technophobe, a Luddite, or a contrarian. I am a person who works inside this ecosystem, who sees its power clearly, and who believes the evidence requires me to say what I see: we are building the most sophisticated tools in human history, and we are using them to make ourselves lonelier, less capable, and less connected to one another. That trajectory is not inevitable. But changing it requires first acknowledging it.
II. The Dead Internet: When Machines Outnumber People
In 2021, an anonymous post on the internet forum Agora Road proposed what became known as the Dead Internet Theory: the idea that authentic human activity on the internet was being systematically replaced by automated content and bot accounts. The theory was widely dismissed as conspiracy thinking. By 2025, it had been cited in peer-reviewed journals, analyzed by researchers at multiple universities, and acknowledged as increasingly credible by the CEO of OpenAI.
The empirical basis is now substantial. Imperva’s 2024 Bad Bot Report documented that automated traffic exceeded human traffic on the open web for the first time, accounting for 51 percent of all internet activity. An Ahrefs analysis of 900,000 newly published web pages in April 2025 found that 74.2 percent contained AI-generated content. On LinkedIn, an estimated 54 percent of long-form posts are AI-generated. AI-generated product reviews have been growing at 80 percent month-over-month since mid-2023, according to The Transparency Company.
The implications extend beyond inconvenience. When a majority of content is machine-generated, the fundamental nature of the environment changes. It ceases to be a space where human beings communicate with one another and becomes a space where human beings interact with the outputs of statistical models trained on what other human beings once said. A human response carries lived experience, judgment, accountability, and the possibility of genuine empathy. A generated response carries none of these things, regardless of how fluently it mimics the appearance of them.
The population is beginning to react. According to Billion Dollar Boy, consumer preference for AI-generated creator content fell from 60 percent in 2023 to 26 percent in 2025. A CivicScience survey found that 31 percent of consumers said AI involvement in advertising made them less likely to choose a brand. iHeartMedia’s internal research found that 90 percent of listeners want media created by humans. Brands are now requesting imperfections in creator content — unmade beds, unpolished video — because overproduced material has become a signal of inauthenticity. These are not aesthetic preferences. They are market signals indicating that a significant and growing segment of the population is actively seeking verified humanity.
The economic consequences are already severe. Google traffic to publishers dropped 33 percent globally between late 2024 and late 2025. Zero-click searches — queries answered by AI without directing users to human sources — rose from 56 to 69 percent in a single year. The economic model that funded quality content creation is collapsing, and the content that replaces it is synthetic.
III. The Knowledge Commons Collapse
No institution illustrates this collapse more precisely than Stack Overflow. Founded in 2008, the platform became the central repository of programming knowledge for more than a decade, accumulating 24 million questions and answers. At peak between 2014 and 2020, the site received more than 200,000 new questions per month.
By late 2025, monthly question volume had fallen below 10,000 — a decline exceeding 90 percent, returning to levels not seen since the platform’s first year. Data visualization published by developer Sam Rose in January 2026, using Stack Overflow’s own query system, confirmed the trajectory: fifteen years of accumulated growth entirely erased.
Stack Overflow’s own 2025 Developer Survey, drawing responses from more than 49,000 developers across 177 countries, documented that 84 percent of respondents now use AI tools in their development process. When a developer encounters a problem, asking an AI assistant is faster, less adversarial, and requires no public exposure of ignorance. The community is no longer consulted.
The loss is not merely quantitative. Stack Overflow’s most valuable contributions were not individual answers but the discussions surrounding them: comments that refined approaches, dissenting opinions that identified edge cases, alternative solutions that revealed trade-offs. A language model provides one answer with high confidence. A community provides a spectrum of perspectives with varying conviction. The difference is epistemological. It is the difference between receiving a conclusion and participating in the reasoning that produces one.
As a New York Times analysis observed, developers have not stopped communicating. They have stopped communicating publicly. Knowledge that was once contributed to a shared commons — searchable, debatable, improvable by anyone — is now consumed in private conversations with language models that benefit a single company. The shift from public knowledge building to private knowledge consumption is not confined to software. Blog creation for knowledge sharing is declining across domains. Forum participation has cratered in fields from electronics to academic research. The incentive to contribute — whether for reputation, reciprocity, or intellectual engagement — has been undercut by tools that extract value from the commons without contributing to it.
IV. The Loneliness Paradox: AI Companions and the Illusion of Connection
In 2023, the United States Surgeon General declared loneliness a public health crisis comparable in severity to smoking and obesity. Against that backdrop, a new class of products has emerged promising to alleviate isolation through artificial companionship. ChatGPT now has more than 800 million weekly active users, and one of the most popular non-work use cases in 2025 has been therapy and companionship, according to a Harvard Business Review analysis. A 2025 survey found that 83 percent of Generation Z believed they could form deep emotional bonds with AI.
The research on sustained use is consistent. A study of more than 1,100 AI companion users (Zhang et al., 2025) found that people with fewer human relationships were more likely to seek chatbot companionship — and that heavy emotional self-disclosure to AI was consistently associated with lower well-being. A four-week randomized controlled trial at MIT (Fang et al., 2025) found that heavy daily use correlated with greater loneliness, increased dependence, and reduced real-world socializing.
Perhaps most alarmingly, psychiatric researchers documented cases in which intense chatbot engagement contributed to delusional thinking and suicidality — a phenomenon they termed “technological folie à deux” (Dohnány et al., 2025). The chatbot’s affirming, non-challenging responses reinforced pathological thought patterns rather than providing the corrective feedback a human relationship or competent therapist would offer.
“People across all demographics are experiencing increased loneliness and isolation, and we don’t have the same social safety nets and connections that we used to. While these tools may provide pseudo-connection, relying on them to replace human connection can lead to further isolation.”
— Ayorkor Gaba, Columbia University, Counseling and Clinical Psychology
The BMJ warned in December 2025 that we may be witnessing a generation learning to form emotional bonds with entities that lack the capacity for human empathy, care, and relational attunement. Short-term studies, including Harvard Business School research, have found that brief chatbot interactions can reduce loneliness comparably to a human stranger. But longitudinal data shows the opposite: sustained use deepens the isolation it temporarily masks, creating dependency that progressively substitutes artificial responsiveness for the demanding but irreplaceable work of human relationship.
The data on Generation Z is particularly stark. Only 15 percent report having never felt lonely in the past year, compared with 54 percent of Baby Boomers. Sixty-two percent globally struggle to build meaningful relationships. Seventy-three percent report digital exhaustion despite averaging 7.2 hours of daily screen time. Research from the Survey Center on American Life found that 56 percent of Gen Z reported childhood loneliness at least monthly — more than double the rate of Boomers — and that childhood loneliness is a powerful predictor of adult isolation. Stanford research documented that the historical U-shaped happiness curve, which once made young adults among the most content, has inverted entirely: young adults are now the least happy age group.
The response from this generation is increasingly withdrawal. Nearly a third of Gen Zers deleted a social media app in the past year, according to Deloitte. Global social media usage declined approximately ten percent between 2022 and 2024, with steeper drops among young people, per Financial Times and GWI analysis of 250,000 adults across fifty countries. These are not lifestyle trends. They are a population discovering through lived experience that the digital environment they inherited is actively hostile to human connection.
V. The Cognitive Erosion: Getting Dumber While Getting Faster
The preceding sections describe social and informational crises. This section describes something more fundamental: emerging evidence that the tools designed to augment human intelligence may instead be degrading it.
The Reverse Flynn Effect
For most of the twentieth century, average IQ scores rose steadily across the developed world — approximately two to three points per decade, a phenomenon called the Flynn Effect after New Zealand researcher James Flynn, who first documented it in 1984. The trend was global, consistent, and appeared to reflect genuine improvements in cognitive capacity driven by better nutrition, reduced environmental toxins, expanding education, and increasingly complex environments.
That trend has reversed. Research by Bratsberg and Rogeberg, published in the Proceedings of the National Academy of Sciences in 2018, analyzed IQ data from more than 730,000 Norwegian men and found a steady decline in scores among those born after 1975. Critically, the decline was observable within families — siblings from the same parents scored lower than their older brothers — ruling out genetic explanations and confirming that environmental factors were responsible.
The reversal is not confined to Norway. IQ data from Finland, Denmark, Australia, Britain, the Netherlands, Sweden, and France — much of it drawn from compulsory military conscription testing — document statistically significant declines beginning in the mid-1990s. In 2023, Northwestern University researchers published the first large-scale American data: analyzing 394,378 IQ test scores collected between 2006 and 2018, they found declines in verbal reasoning, matrix reasoning, and computational ability. The decline was uniform across age, education, and gender.
The causes remain debated. Proposed explanations include changes in educational emphasis, media consumption patterns, nutritional shifts, and reduced cognitive challenge in daily life. But the timing is notable: the decline accelerated precisely as digital technology became the dominant mediating environment for learning, communication, and problem-solving.
AI and Cognitive Atrophy
Emerging research connects AI use specifically to measurable cognitive decline. The most rigorous evidence comes from MIT’s Media Lab (Kosmyna et al., 2025), which used EEG to measure brain activity during essay writing across 54 participants divided into three groups: those using ChatGPT, those using a search engine, and those using no external tools. LLM users displayed the weakest neural connectivity of any group — and when reassigned to write without tools in a fourth session, they showed reduced alpha and beta connectivity, indicating under-engagement of precisely the brain regions responsible for executive control and deep reasoning. They also struggled to accurately quote their own essays, suggesting diminished ownership of their own thinking.
A preliminary but directionally consistent study by Gerlich (Societies, 2025), surveying 666 participants, found a strong negative correlation between cognitive offloading and critical thinking, with younger participants (ages 17–25) showing higher AI dependence and lower critical thinking scores. While the effect size is large and the finding awaits replication in higher-powered studies, it aligns with the MIT neurological data and with a broader pattern across the literature.
The Harvard Gazette, reporting on these findings in November 2025, described the phenomenon as “cognitive atrophy” and “cognitive debt” — the accumulation of reasoning deficits over time through sustained AI dependence. A Microsoft study (2025) found that higher user confidence in AI’s ability to perform a task directly correlated with less critical thinking effort applied to that task. A separate study by Stadler, Bannert, and Sailer (2024), published in Computers in Human Behavior, found that ChatGPT-aided research produced significantly less cognitive load but lower-quality arguments and shallower depth of reasoning compared to standard web search.
The pattern is consistent and directional: AI tools reduce mental effort in the short term and erode mental capacity over sustained use. The metaphor used by multiple researchers is muscle atrophy — cognitive abilities, like physical ones, weaken when not exercised. The irony is precise: tools marketed as making us smarter may be making us measurably less capable of the independent reasoning that being smart actually requires.
VI. The Ecosystem Consuming Itself: Model Collapse and the Human Data Crisis
The crisis described in the preceding sections contains within it a paradox that elevates its urgency from a social concern to an existential one for the AI industry itself. Artificial intelligence systems do not merely consume human-generated content as a matter of convenience. They depend on it as a matter of survival.
The Mechanism
The foundational research was published in Nature by Shumailov and colleagues, who demonstrated that “indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear.” The mechanism is intuitive: generative models replicate the patterns most common in their training data while progressively losing rare but important information. Each generation introduces small statistical distortions. When the next generation trains on that contaminated output, the errors compound.
In experimental demonstrations, a language model fine-tuned on its own output across multiple generations produced text that progressively lost coherence — in one documented case, a model prompted to discuss medieval architecture drifted into text about jackrabbits after several generations. The metaphor most commonly used is photocopying a photocopy: each successive copy loses fidelity until the output bears no resemblance to the original.
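The compounding mechanism can be illustrated numerically. The sketch below is a toy simulation written for illustration, not the Nature experiment itself: each "generation" fits a Gaussian model to samples drawn from the previous generation's fitted model, and the fitted variance — the distribution's tails — collapses step by step.

```python
import random
import statistics

def collapse_demo(generations=50, n=10, seed=7):
    """Toy model-collapse simulation: each generation fits a Gaussian
    to n samples drawn from the previous generation's fitted Gaussian.
    Small-sample fitting systematically underestimates spread, so the
    tails of the original distribution progressively disappear."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "human data" distribution
    variances = [sigma ** 2]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(samples)      # refit the model...
        sigma = statistics.pstdev(samples)  # ...on its own synthetic output
        variances.append(sigma ** 2)
    return variances

variances = collapse_demo()
print(f"generation 0 variance: {variances[0]:.3f}")
print(f"generation 50 variance: {variances[-1]:.4f}")
```

The rare-event tails vanish first because a finite sample almost never represents them, and the refit model then assigns them even less probability — the statistical analogue of the photocopy losing detail with each pass.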
The Scale of Contamination
The contamination is no longer theoretical. As of early 2026, roughly three-quarters of newly published web pages contain AI-generated content. The datasets scraped for training future models will inevitably contain proportionally more synthetic material with each passing month. Timothy Shoup of the Copenhagen Institute for Futures Studies has projected that 99 percent of online content may be AI-generated by 2030. Gartner predicts search engine volume will decline another 25 percent by late 2026 as users abandon keyword search for AI chatbots — further reducing the traffic that incentivizes human content creation.
The feedback loop is vicious and self-accelerating. AI companies scraped the open web to build language models. They deployed those models, which flooded the web with synthetic content. That contaminated content is now being scraped to train the next generation of models. The companies that recognized this earliest have responded by securing exclusive licenses to pre-2022 human data — data generated before the contamination began. The Harvard Journal of Law and Technology has analyzed this as a potential barrier to market entry: companies without access to uncontaminated training data may be permanently locked out of building competitive AI systems.
The Human Data Crisis
This is the central paradox of the current AI economy. The industry’s most valuable resource is original human-generated data. But the industry’s products are systematically destroying the incentive structures that produce that resource. Stack Overflow’s community produced 24 million pieces of verified, debated, community-validated human knowledge. That knowledge was scraped to train language models. Those models killed Stack Overflow’s community. Stack Overflow is now licensing its declining dataset back to AI companies. The cycle is, in the most literal sense, cannibalistic.
The implications cascade. If human beings stop contributing original knowledge, creative work, and authentic discourse to the public internet — because the incentive has been eliminated, or because the environment has become too degraded to feel worth participating in — then the AI systems themselves lose the foundation upon which their capabilities rest. The models become progressively more homogeneous, more error-prone, more disconnected from the diversity of genuine human thought. The ecosystem is consuming itself.
This renders the social crisis documented in this article not merely a humanitarian concern but a technological and economic one. The loneliness epidemic, the knowledge commons collapse, the cognitive erosion, and the trust crisis on the open web are not side effects of AI development. They are inputs to a degradation cycle that threatens the viability of AI itself. Anyone who cares about the future of artificial intelligence should, by direct logical implication, care about the future of authentic human connection — because without it, there is no future for AI either.
VII. Conclusion: What I Believe Should Be Built
I want to end this article with a specific claim, not a rhetorical question.
I work inside the AI ecosystem. I use Claude, ChatGPT, Copilot, and a dozen other tools daily. I build automation frameworks for a living. I am not arguing against artificial intelligence. I am arguing that the way we have deployed it is producing a cascade of harms — social, cognitive, epistemic, and ultimately technological — that are measurable, documented, accelerating, and addressable.
The evidence presented in this article supports the following position:
the most urgent infrastructure problem of the next decade is not compute, not data pipelines, and not model architecture. It is the systematic reconstruction of incentive structures that make it worthwhile for human beings to connect with one another, share knowledge publicly, and engage in the effortful cognitive work that sustains both individual capability and collective intelligence.
Concretely, I believe three things must be built:
First, verified-human social infrastructure. The internet needs a trust layer that cryptographically guarantees human authorship. Not as an identity product, but as a social experience — spaces where every interaction is guaranteed to involve a real person, where no algorithmic amplification distorts the signal, and where reputation accrues through verified human contribution over time. Humanity Protocol’s recent Proof of Trust framework points in this direction, but the consumer product built on top of it does not yet exist.
Second, an economic model for public knowledge creation. The Stack Overflow model is dead. Its replacement must solve the incentive problem: if AI companies need verified human data to avoid model collapse, then the humans who generate that data should be compensated directly and continuously. A platform where contributing original, community-validated knowledge generates ongoing revenue from the AI companies that license it would simultaneously address the knowledge commons collapse, the model collapse crisis, and the economic displacement of human creators.
Third, connection infrastructure that uses AI as a bridge to humans, not a substitute for them. The University of North Carolina research by Prinzing and Fredrickson demonstrated that AI systems designed to encourage real human connection — rather than replace it — produced improvements in social behavior, generosity, and well-being. The technology is capable of being a bridge. It has simply not been built that way, because replacing human connection is more profitable in the short term than facilitating it.
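To make the first of these proposals concrete: the core primitive of a verified-human trust layer is an attestation that binds an author, verified once out of band, to a specific piece of content. The sketch below is a deliberately simplified toy — it uses a shared secret held by a hypothetical verification authority, where a real system would use public-key signatures (Ed25519, for example) so anyone could verify without the issuer's secret, and every name in it is invented for illustration.

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret, purely for this sketch. A real deployment
# would issue per-author key pairs after a proof-of-personhood check.
ISSUER_SECRET = b"demo-secret-held-by-verification-authority"

def attest(author_id: str, content: str) -> str:
    """Issuer binds (author, content) into a tamper-evident tag after
    verifying, out of band, that the author is a human."""
    payload = json.dumps({"author": author_id, "content": content},
                         sort_keys=True).encode()
    return hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()

def verify(author_id: str, content: str, tag: str) -> bool:
    """Check an attestation; any edit to author or content invalidates it."""
    return hmac.compare_digest(attest(author_id, content), tag)

tag = attest("alice", "I wrote this paragraph myself.")
print(verify("alice", "I wrote this paragraph myself.", tag))  # True
print(verify("alice", "This was generated for me.", tag))      # False
```

The point of the sketch is the shape of the guarantee, not the cryptography: once content carries a verifiable binding to a verified human, reputation, amplification, and compensation can all be built on top of that binding rather than on unverifiable claims.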
What the evidence makes clear is that certain things cannot be automated without destroying their essential nature. The trust between two people who have shown up for each other repeatedly. The surprise of encountering a perspective that reorganizes your understanding. The accountability that comes from knowing a real person is watching. The vulnerability required to ask for help and the dignity conferred by providing it. The feeling — irreducible and unmistakable — of being genuinely seen by another human being.
These are not sentimental abstractions. They are, according to the meta-analytic research cited by the Surgeon General’s advisory, biological necessities: social disconnection carries health risks comparable to smoking fifteen cigarettes per day. And they are, as the model collapse research demonstrates, economic necessities: without authentic human contribution, the AI systems themselves degrade and fail.
We are building the most powerful tools in human history. We are using them to make ourselves lonelier, less capable, and less connected to one another. I do not believe that trajectory is inevitable. But I believe continuing to ignore it while selling the next subscription is, at this point, a choice. And I believe someone needs to build the alternative.
Everything else can be generated. Connection cannot.
Works Cited
Bratsberg, B. and Rogeberg, O. “Flynn Effect and Its Reversal Are Both Environmentally Caused.” Proceedings of the National Academy of Sciences, 2018.
Dworak, E. M., et al. “Looking for Flynn Effects in a Recent Online U.S. Adult Sample.” Intelligence, 2023.
Fang, C. M., et al. “How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use.” arXiv, 2025.
Gerlich, M. “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” Societies, 2025.
Kosmyna, N., et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant.” MIT Media Lab / arXiv, 2025.
Lee, H.-P., et al. “The Impact of Generative AI on Critical Thinking.” Microsoft / ACM CHI Conference, 2025.
Stadler, M., Bannert, M., and Sailer, M. “Cognitive Ease at a Cost: LLMs Reduce Mental Effort but Compromise Depth.” Computers in Human Behavior, 2024.
Shumailov, I., et al. “AI Models Collapse When Trained on Recursively Generated Data.” Nature, 2024.
Zhang, Y., et al. “The Rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being.” arXiv, 2025.
Dohnány, S., et al. “Technological Folie à Deux: Feedback Loops Between AI Chatbots and Mental Illness.” arXiv, 2025.
De Freitas, J., et al. “AI Companions Reduce Loneliness.” Journal of Consumer Research, forthcoming.
Holt-Lunstad, J., Smith, T. B., and Layton, J. B. “Social Relationships and Mortality Risk.” PLoS Medicine, 2010.
Muldoon, J. and Parke, J. J. “Cruel Companionship.” New Media & Society, 2025.
Shelmerdine, S. and Nour, M. “AI Chatbots and the Loneliness Crisis.” The BMJ, December 2025.
Prinzing, M. and Fredrickson, B. “Can Artificial Intelligence Help Us Become Less Lonely?” Greater Good Science Center, UC Berkeley, 2023.
Zaki, J. and Pei, R. “Social Connection and Young People’s Mental Health.” 2025 World Happiness Report.
Cox, D. A. and Hammond, K. E. “The Childhood Loneliness of Generation Z.” Survey Center on American Life, 2022.
Imperva. “2024 Bad Bot Report.” Imperva / Thales, 2024.
Ahrefs. Analysis of 900,000 Newly Published Web Pages, April 2025.
Stack Overflow. “2025 Developer Survey.” Stack Overflow, July 2025.
Billion Dollar Boy. “Muse Two: The Real Impact of AI on the Creator Economy.” October 2025.
Deloitte. “2025 Consumer Trends Survey.” Deloitte UK, 2025.
GWI and Financial Times. Analysis of Online Habits, 250,000 Adults, 50+ Countries, 2024.
Humanity Protocol. “Trust Manifesto: From Proof of Humanity to Proof of Trust.” February 2026.
Harvard Journal of Law & Technology. “Model Collapse and the Right to Uncontaminated Human-Generated Data.” March 2025.