Looking Things Up Isn't Learning
Search returns answers. Learning is the ability to produce them. The two are not the same thing, and a generation of students raised on Google is paying for the difference.
By Clevriq Admin
A Search Engine Is Not a Tutor
A child sits down to study cell biology the night before a unit test. She types "difference between mitosis and meiosis" into Google. The first result is an AI Overview with a four-line summary. Below it, an SEO-bloated page with the same answer in slightly different words. Below that, a YouTube clip. She reads the snippet. It makes sense. She closes the tab. The next morning, in front of the test paper, the page is blank.
This is not a discipline failure. It is a pedagogical mismatch. A search engine is a phenomenal tool for finding information. It is, by design, a poor tool for learning it. To the student, the two activities feel similar (she did, after all, read the answer), but in cognitive terms they are almost opposites.
The Google effect
In 2011, Betsy Sparrow at Columbia, Jenny Liu at Wisconsin, and Daniel Wegner at Harvard ran a series of simple experiments. Participants read trivia statements and typed them into a computer. Half were told the computer would save what they typed. Half were told it would erase it. Then everyone tried to recall the statements from memory.
The "saved" group remembered substantially less. They were also better at remembering where on the computer the information had been stored than what it actually was.
Sparrow and her co-authors called this the Google effect. When we expect that a fact will be available later, the brain quietly decides it doesn't need to hold on to the fact itself. We remember the path to the answer instead of the answer.
> The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.

Sparrow, Liu, and Wegner, Science, 2011
This is fine for a phone number. It is not fine for the periodic table.
Retrieval is the thing
The most consistent finding in the cognitive science of learning is forty years old and almost everyone ignores it. You remember things you have effortfully retrieved, not things you have effortlessly read.
In a now-classic experiment, Henry Roediger and Jeffrey Karpicke had students study a science passage and then either re-read it or take a short test on it. On a delayed test a week later, the students who had been tested, and so had been forced to pull the answers out of their own heads, outperformed the re-readers by a wide margin, even though they reported feeling less confident at the time. A 2011 follow-up, also published in Science, pitted retrieval practice against concept mapping, the gold-standard "active" study technique. Retrieval still won, again on a delayed test.
Robert Bjork calls these effortful struggles desirable difficulties. The feeling of slight strain when you try to recall something is not friction in the way of learning. It is the learning. Re-reading and Googling produce the opposite effect, sometimes called the illusion of fluency. It is the smooth, easy feeling of comprehension that students reliably mistake for mastery.
A search bar is the perfect generator of fluency illusions. You did read the answer. You did understand it for a moment. You will not be able to produce it on Wednesday.
What's actually on the page
Even setting the cognitive science aside, the modern search results page is not the encyclopedia your parents Googled in 2008.
A 2024 study by researchers at Leipzig University and Bauhaus-Universität Weimar tracked Google, Bing, and DuckDuckGo for a full year across 7,392 product review queries. The conclusion confirmed what most adults already feel: SEO spam is winning, and higher-ranked pages are on average more optimised, more monetised with affiliate links, and lower in linguistic diversity than the rest of the web.

Google's own response to the LLM era, AI Overviews, launched in May 2024 and almost immediately began advising users to put glue on pizza so the cheese would stick, and to eat one small rock per day for the minerals. The latter was lifted, in earnest, from a satirical article in The Onion. Google quietly rolled the feature back for many queries. The errors were funny. The underlying problem was not: the snippet at the top of the page is sometimes a hallucination, presented with the same authority as a sourced answer.
Meanwhile, more than half of all Google searches now end without a click. The user reads the snippet, accepts the answer, and bounces. The snippet feels like a complete reply. It is, by design, a fragment torn from context, and for a student trying to learn ionic versus covalent bonding, that matters. The chapter exists for a reason. Each idea depends on the previous one. A snippet is a leaf without the tree.
The student can't tell what's good
A reasonable response is, "Fine, students just need to evaluate sources." It turns out they can't, and most of them have never been taught how.
In a multi-year project at Stanford, Sam Wineburg and the Stanford History Education Group ran tasks measuring how well middle-schoolers, high-schoolers, and university undergraduates evaluated online information. The headline finding from their 2016 report, based on close to 8,000 student responses, was that more than 80% of middle-schoolers could not tell that a clearly labelled "sponsored content" item was an advertisement and not a real news article. Their later work showed that even Stanford history PhDs evaluated unfamiliar websites less effectively than professional fact-checkers, who used a technique the team called lateral reading: leave the suspect site immediately and check what other sources say about it.
The expert move is to leave the page. The student move, the Ctrl-F, scroll-the-snippet move, is to stay on it.
The cost of every search
There is also the matter of attention. Gloria Mark's lab at UC Irvine has spent two decades observing knowledge workers and students at their desks. Her landmark observational study, published at CHI 2005, found that interrupted work, on average, took more than 23 minutes to resume. Her 2008 follow-up confirmed the same picture experimentally. Interrupted people do complete the immediate task, but at the cost of higher stress, higher effort, and a wake of unfinished side-quests.
A "quick Google" is not quick. The query takes seconds. The recovery does not. The open browser is also the most distraction-rich surface ever engineered. A study session built around tabs is a study session that ends, on average, somewhere it didn't intend to go.
What real study requires
Stack the cognitive-science findings against the search-engine experience and the mismatch is almost embarrassing.
| Real study needs | A search engine offers |
|---|---|
| Effortful retrieval | Effortless lookup |
| Spaced practice over weeks | One-shot answer, now |
| Prerequisite-aware sequence | Independent, decontextualised snippets |
| Feedback on what you got wrong | No model of what you know |
| A source you can trust without checking | Top result optimised for clicks |
| Quiet, single-task focus | Tabs, ads, related searches, autoplay |
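The left-hand column is not exotic; its first two rows can be sketched in a few lines. Here is a minimal Leitner-box scheduler, a classic spaced-retrieval technique (the class names, box count, and intervals are illustrative, not anything from the studies cited here):

```python
# A minimal Leitner-box scheduler: effortful retrieval plus spacing.
# Box numbers and review intervals are illustrative assumptions.
from datetime import date, timedelta

INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}  # days until next review

class Card:
    def __init__(self, prompt, answer):
        self.prompt, self.answer = prompt, answer
        self.box = 1                # every card starts in box 1
        self.due = date.today()

    def review(self, recalled_correctly, today=None):
        """Record one retrieval attempt and reschedule the card."""
        today = today or date.today()
        if recalled_correctly:
            self.box = min(self.box + 1, 5)  # promote: longer gap next time
        else:
            self.box = 1                     # lapse: back to daily review
        self.due = today + timedelta(days=INTERVALS[self.box])

card = Card("mitosis vs meiosis?",
            "mitosis: two identical diploid cells; meiosis: four haploid cells")
card.review(True)    # recalled: moves to box 2, due in 3 days
card.review(False)   # forgotten: back to box 1, due tomorrow
```

The point of the sketch is the asymmetry: a successful, effortful recall buys a longer gap, and a failure forces the card back into frequent rotation. A search bar has no equivalent of either move.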
None of this is Google's fault. Google was built to be a search engine, and is the best one ever made. The fault is in our quiet expectation that it should also be a tutor, an expectation the company itself is now reinforcing with AI Overviews that pose as instruction. They are not instruction. They are summaries of the open web, weighted by what other websites link to, sometimes hallucinated, and almost never built for the chapter your student is sitting in tonight.
How we are thinking about it
We built Clevriq on the opposite premise. A study session shouldn't begin with a query box. It should begin with the system already knowing what the student knows and what they don't. Every question the student attempts is graded, mapped against a per-chapter knowledge graph, and used to update a Bayesian estimate of mastery on each sub-skill it tested. The next question is chosen to maximise information about a gap, not to surface the most popular page on the web. When the student is stuck, the doubt assistant works inside the chapter. It will not pull a snippet from a Class 8 textbook into a Class 5 lesson, because the chapter scope is enforced.
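The mastery-update step can be pictured with a sketch in the style of Bayesian Knowledge Tracing. Everything here, function name, parameter values, the specific model, is an illustrative assumption, not Clevriq's actual implementation:

```python
# Bayesian mastery update for one sub-skill, BKT-style.
# All parameter values are illustrative assumptions.

def update_mastery(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Return updated P(mastered) after one graded answer.

    p_slip:  chance a student who has mastered the skill still answers wrong
    p_guess: chance a student who has not mastered it answers right anyway
    p_learn: chance the attempt itself taught the skill
    """
    if correct:
        # Bayes' rule: P(mastered | correct answer)
        evidence = p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess
        posterior = p_mastery * (1 - p_slip) / evidence
    else:
        # Bayes' rule: P(mastered | wrong answer)
        evidence = p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess)
        posterior = p_mastery * p_slip / evidence
    # Fold in the chance that the attempt itself produced learning
    return posterior + (1 - posterior) * p_learn

# The estimate rises and falls with the evidence, answer by answer
p = 0.3
for correct in [True, True, False, True]:
    p = update_mastery(p, correct)
```

Because the estimate moves on every answer, the system always has a current, per-sub-skill picture of the student, which is exactly the "model of what you know" that the table above says a search engine lacks.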
None of this is a substitute for searching the web. It is a substitute for studying on the web. The student who needs to find a definition can still Google it in three seconds. The student who needs to learn cell biology should be doing something else entirely, and we think it should be measured, sequenced, and effortful, because that is the only kind of study the cognitive science has ever endorsed.
References
- Sparrow, B., Liu, J., and Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776-778.
- Roediger, H. L., and Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.
- Karpicke, J. D., and Blunt, J. R. (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science, 331(6018), 772-775.
- Bjork, E. L., and Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, and J. R. Pomerantz (Eds.), Psychology and the Real World: Essays Illustrating Fundamental Contributions to Society. Worth Publishers.
- Bevendorff, J., Wiegmann, M., Potthast, M., and Stein, B. (2024). Is Google getting worse? A longitudinal investigation of SEO spam in search engines. In Advances in Information Retrieval, ECIR 2024. Springer.
- Wineburg, S., McGrew, S., Breakstone, J., and Ortega, T. (2016). Evaluating Information: The Cornerstone of Civic Online Reasoning. Stanford History Education Group.
- Wineburg, S., and McGrew, S. (2019). Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information. Teachers College Record, 121(11), 1-40.
- Mark, G., González, V. M., and Harris, J. (2005). No task left behind? Examining the nature of fragmented work. Proceedings of CHI 2005, 321-330.
- Mark, G., Gudith, D., and Klocke, U. (2008). The cost of interrupted work: More speed and stress. Proceedings of CHI 2008, 107-110.