Quickdraw and the Connection Between Words and Symbols
Updated: May 15
Image recognition has been around for decades, but when we communicate with AI, our preferred medium is usually words, whether spoken or written. Some apps and tools are changing that, including Quickdraw, an experimental game in which a neural network tries to guess what you’re drawing. Although it may seem insignificant, this game is not only an experimental project but also a useful tool for understanding the relationship between symbols and language.
How Quickdraw Unintentionally Explores Semiotics
In Quickdraw, you are given a prompt, usually a common item or concept, and 20 seconds to draw it. The AI sometimes guesses it within seconds, but if you don’t envision the item the way past players have drawn it, the AI may not be able to guess it at all. Probably the most fascinating feature is that after you’ve drawn a set of six images, the game lets you view the past drawings the AI used to connect image and word, as well as what the AI was comparing your drawing to when it guessed correctly.
These doodles are typically what we might think of as symbols, while we tend to think of words as something completely different, but this game shows how closely an image and a word can be linked. In fact, words themselves are symbols: an alphabet is a group of symbols that can be arranged to represent certain sounds, which we have collectively agreed stand for a concept, whether concrete or abstract.
Semiotics, also called semiology, is the study of symbols and broader concepts such as signification, which is the idea that one thing stands in for something else. Semiotics can often sound very academic, theoretical, and detached from real-world, practical experience, but the reality is that we use semiotic concepts every day, not just when using direct visual representations like the doodles in Quickdraw, but every time we speak or write something down. Natural language processing often focuses on the practice of language rather than the theory behind it; understanding deeper linguistic theories such as semiotics could help us improve NLP processes and truly understand what we are analyzing when looking at qualitative data.
How Is a Word a Symbol?
When we write down a word, it is really just a few markings on paper, but we don’t think of it that way because we have assigned it a meaning. The word “dog” as I have just typed it is just a collection of pixels on a screen, but the shape of those pixels has a predetermined meaning, and because of that meaning you are probably picturing a dog as you read. It may be a golden retriever, a husky, or any other kind of dog, but you are thinking of it because of a collection of small symbols that together represent something else. On the most basic level, this is how language works.
This is Not a Pipe
These ideas come into play in the famous 1929 painting The Treachery of Images by Belgian artist René Magritte. The canvas shows a pipe above the words “Ceci n’est pas une pipe” (“This is not a pipe”). If we aren’t thinking about language in an abstract way, the picture is quite confusing: it clearly displays a pipe, yet the text says it does not. The answer is that it does not display a literal pipe, but a representation, or symbol, of one. The artwork makes us stop and really consider what we do every day when we speak or look at a painting. Our minds are incredibly complex and have developed shortcuts so we can understand each other.
In semiotics, the painted pipe is called the signifier, and the concept of a pipe it evokes is the signified (the physical object itself is sometimes called the referent). In this sense, the word is a somewhat arbitrary stand-in for the thing: it represents what it represents only because we have all agreed to it. In this view, language and individual words may seem less important, but all it means is that there is a nuance we can be aware of, and this kind of thinking can make us more attuned to what we really mean when we speak. It can also help us understand linguistic shifts that change the meaning of words. A great example is the word “tablet.” Does it mean an ancient block of clay, perhaps bearing one of the earliest samples of writing, or a high-tech, touch-screen device? At some point we decided the word could refer to both, and from then on context became more important than ever to understanding what a tablet is.
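The point about context can be made concrete with a toy sketch. The snippet below picks a sense of “tablet” by counting overlap between the surrounding words and a keyword list per sense, a drastically simplified version of Lesk-style word-sense disambiguation; the sense names and keyword sets are invented for illustration, not drawn from any real lexicon.

```python
# Toy word-sense disambiguation for "tablet" via keyword overlap.
# The sense keywords below are illustrative assumptions, not a real lexicon.
SENSES = {
    "clay tablet": {"clay", "ancient", "cuneiform", "inscribed", "museum"},
    "touch-screen tablet": {"screen", "app", "battery", "charge", "device"},
}

def disambiguate(sentence: str) -> str:
    """Pick the sense whose keywords overlap most with the sentence's words."""
    words = set(sentence.lower().split())
    return max(SENSES, key=lambda sense: len(SENSES[sense] & words))

print(disambiguate("the museum displayed an ancient clay tablet"))
print(disambiguate("my tablet needs a charge before the flight"))
```

Real disambiguation systems use far richer context than bag-of-words overlap, but even this sketch shows that the symbol “tablet” only resolves to a concept once its neighbors are taken into account.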
On Quickdraw, one reason the AI may struggle to identify a simple doodle is that the player envisions the item differently than previous players did, so the AI can’t match the two symbols. This isn’t much different from variations in handwriting, and after enough attempts the AI can learn to recognize all the symbols within a category. In the following screenshot, we can clearly tell that the AI didn’t recognize the previous doodle of a baseball because the player didn’t draw the stitch pattern often seen on baseballs. In the future, the AI may be better able to identify a baseball even without this iconic pattern; its understanding of the symbol is widening, just as words take on new meanings.
The Future of Symbols
With the rise of emoticon use in text, visual media, and improved image recognition, it is possible that future NLP analyses could include doodles just like those in Quickdraw. To us it may feel like those symbols are not “really” text, but the line between text and doodles is less defined than we may have realized. Such advancements could widen the range of textual analysis into visual art, and those who have trouble expressing themselves in words alone could contribute significantly more to documents included in a sentiment analysis.
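As a rough illustration of how a sentiment analysis might put pictographic symbols on the same footing as words, the sketch below scores emoji with the same tiny hand-made lexicon it uses for words; every token and score here is an invented assumption, not a real sentiment resource.

```python
# Sketch: treating emoji as sentiment-bearing tokens alongside words.
# Lexicon entries and scores are invented for illustration only.
LEXICON = {
    "love": 1.0, "great": 0.8, "terrible": -0.9, "sad": -0.7,
    "😊": 0.9, "😢": -0.8,  # pictographic symbols scored like any other word
}

def sentiment(text: str) -> float:
    """Average the lexicon scores of known tokens; 0.0 if none are known."""
    tokens = text.lower().split()
    scores = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment("great movie 😊"))              # positive
print(sentiment("that ending was terrible 😢"))  # negative
```

Once a symbol has a place in the lexicon, whether it arrived as letters or as a little picture makes no difference to the analysis, which is exactly the blurred line the paragraph above describes.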