Search vs discovery vs ChatGPT? If we want to research something online, we have at least three distinct strategies. With the search strategy, we formulate keywords and read the documents that match. When we rely on discovery, we go to sources with authority and interest in the subject area, hoping to stumble upon relevant texts. With large language models we blend both approaches, describing our observations to the model and relying on its ability to discover things for us. Which is better?
Artificial intelligence is unreliable
If we can use artificial intelligence, we should probably use it. Large language models tend to be incredibly creative, producing information that is not readily available in a simple search or discovery process. This creativity comes at the cost of reliability.
Many companies simply close their networks to artificial intelligence, since corporate secrets might leak out through overly explicit queries. Even when AI models like ChatGPT are available, the first answer they provide is often irrelevant, and we need to understand the subject well enough to ask for proper corrections and disambiguation. Even then, the answer we get will probably not work, and we will need to guide the AI further, accurately describing each issue we find. Once we finally get a good result, we often cannot use it, as there are tools that detect AI-generated content and apply penalties to it.
Search and discovery as a virtuous cycle
Using search strategies, we can overcome these AI limitations and get a balanced representation of the subject. While AI will usually present very few alternative perspectives, in search we are likely to see not just the relevant perspectives but also how popular they are. Here I assume we use 3-5 keywords and read 3-5 pages of search results.
In search, we first formulate keywords and then read the documents that match them. If the documents are good, we often go on to other documents on the same sites, and then we slip into discovery mode. Suddenly we encounter new keywords that we did not predict. With these keywords, we search more…
There is a virtuous cycle of search and discovery. The more we search, the more likely we are to discover something new. The more we discover, the more likely we are to use the relevant keywords and search more.
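To make the cycle concrete, here is a toy sketch in Python. The tiny hardcoded corpus and the search stub are my own stand-ins for a real search engine and real reading; the point is only the loop in which discovered keywords feed the next round of searches.

```python
# A toy model of the search-discovery cycle. CORPUS and search() are
# stand-ins for a real search engine and real reading, not actual tools.
CORPUS = [
    "speed reading subvocalization",
    "subvocalization working memory",
    "working memory chunking",
]

def search(keyword):
    # Stand-in for a search engine: documents mentioning the keyword.
    return [doc for doc in CORPUS if keyword in doc]

def research(seed, rounds=3):
    known, frontier = {seed}, {seed}
    for _ in range(rounds):
        discovered = set()
        for keyword in frontier:
            for doc in search(keyword):
                discovered |= set(doc.split())  # "reading" the document
        frontier = discovered - known           # keywords we did not predict
        known |= discovered                     # they drive the next round
    return known

print(research("chunking"))
# Starting from one keyword, the loop surfaces "working memory",
# then "subvocalization", then "speed reading".
```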
Keywords vs context
The search and discovery cycle might not produce any relevant results; this is less likely to happen with ChatGPT. The reason often lies in how we describe what we are looking for. In search and discovery mode, we combine keywords describing our subject with keywords describing the sources of authority, like Wikipedia. We are quite likely to make mistakes and define the wrong set of keywords.
Large language models like ChatGPT rely on context instead of keywords. We describe the situation, and the model builds a context vector. This vector is very similar to a set of keywords but less explicit. Then, as we are presented with results, we can give instructions that fine-tune this context. Because the context is not explicit, it is easier to fine-tune than a fixed set of keywords.
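As a loose analogy, and only an analogy (real models use learned embeddings, not word counts), one can picture a context vector with a bag-of-words sketch like the following, where closeness to a keyword set is measured by cosine similarity.

```python
# A toy analogy for a context vector: word counts plus cosine similarity.
# Real language models build far richer, learned representations.
from collections import Counter
import math

def vectorize(text):
    return Counter(text.lower().split())

def norm(u):
    return math.sqrt(sum(x * x for x in u.values()))

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    denom = norm(u) * norm(v)
    return dot / denom if denom else 0.0

# The "context" built from a description, compared against keyword sets.
context = vectorize("how do I research an unfamiliar topic online")
print(cosine(context, vectorize("research topic online")))  # high overlap
print(cosine(context, vectorize("pasta cooking recipes")))  # no overlap
```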
We can easily inspect the keywords and logical expressions we use. The context used by artificial intelligence will probably remain hidden. Moreover, the next time we ask the same questions, we may get very different answers, generated from a very different context. Like a discovery process, artificial intelligence can be very effective, but the results of its usage are not guaranteed to converge.
Transparency of a message
When we formulate search keywords, we describe a very specific message. We control this message down to the level of which keywords to use and which to avoid, perhaps even some logic about which keywords matter more than others. Simply writing the expression “(A OR (B AND C)) AND D” tells the search engine that A alone carries as much weight as B and C combined, and that D is required regardless of the rest of the phrase.
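For illustration, the logic of that exact expression can be checked in a few lines of Python; this is a sketch of the boolean semantics only, not of any real search engine's query parser.

```python
# A minimal sketch of the expression (A OR (B AND C)) AND D evaluated
# against a document; real engines parse far richer query syntax.
def matches(doc, a, b, c, d):
    words = set(doc.lower().split())
    return (a in words or (b in words and c in words)) and d in words

doc = "spaced repetition improves long term memory retention"
print(matches(doc, a="mnemonics", b="spaced", c="repetition", d="memory"))  # True
print(matches(doc, a="mnemonics", b="spaced", c="chunking", d="memory"))    # False
```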
This transparency of our search efforts enables convergence. We do not necessarily converge to the correct solution, but we will be exposed to several competing options and perspectives. Compare this to large language models, and especially to discovery. Both are likely to introduce filter bubbles we cannot control or understand. Large language models will likely follow the most popular and thus most probable solutions, while discovery will present the solutions that the editors of the relevant resources find appropriate. As readers, we are unlikely to see the editorial guidelines and reasoning behind this filter bubble.
The transformational power of discovery
I usually focus on keywords as the guiding force of our aggregation of knowledge. Most authors, however, do not naturally think in terms of keywords. Instead, they try to deliver a message. Something big gets lost in translation between the message and the keywords: the experience, or the motivation. We can try to deliver emotions as small icons, but this is not the way authors deliver messages. A powerful message is transformational, and it is much more than a set of keywords.
While keywords describe the search activity very well, the discovery activity is often driven by some powerful personal transformation that escapes formalization. We may try to encode it anyway, using emotionally charged “anchoring” visualizations and keywords corresponding to the anchors. Yet this is just a pale reflection of the real thing, like a small photo of an art piece.
Mix and match
If we use only search, we are not likely to venture beyond the keywords we can think of. Such an approach may converge quickly, but the solution we converge to is likely to be limited by our understanding of the subject.
When we use discovery mode and read great blogs or the works of good scientists and journalists, we are likely to gain access to the wider set of ideas available to these people. In this scenario, we are not likely to converge, but we are very likely to learn something new and get out of our filter bubble.
Then, when we come to discuss the subject with artificial intelligence, we will be better able to judge the quality of its answers and guide its context toward ever better ones. Using artificial intelligence effectively is a form of art somewhere between debate skills and programming. If we accept the first answer the AI brings, we are likely to get a very limited result. When we fine-tune the conversation using the information we acquired through search, discovery, or experimentation, we may get a much better answer. People who win competitions using AI often experiment with hundreds of prompt and guidance lines, fine-tuning the context for an effective result.
Hands-on experimentation
To be honest, all of these methods provide only theories. If we get to proper articles, we might also get the scientific justifications behind these theories. Yet theoretical results rarely apply directly in practical situations.
Take, for example, code snippets. ChatGPT is likely to provide great and insightful pieces of code. That does not guarantee the code compiles or works. If we actually try to run the code, we are likely to fail and need some fine-tuning to produce working code. Something similar applies to literary questions, where ChatGPT may “forget” some important details.
The scientific method we rely on requires experimentation to complete the loop. Discovery, search, or large language models (LLMs) will provide interesting theories and predict some workable solutions, but they can also be wrong. Unless we actually implement the solutions, we will not even see their limitations. While we can speed up the theoretical research, we need to work hands-on to produce something we can eventually use.