Are you fascinated by the intersection of cognitive science, philosophy, artificial intelligence, and human decision-making? Do you enjoy tackling complex questions about how minds, machines, and societies interact? The Stanford Symbolic Systems Essay Competition invites high school students to explore these themes through original essays bridging disciplines and challenging conventional thinking.
This competition seeks thought-provoking analyses on topics such as rationality and decision-making, the ethics of AI, the nature of human cognition, and the role of symbolic reasoning in intelligence. Whether you take a philosophical, computational, or psychological approach, we welcome essays that push the boundaries of understanding in the field.
A winning essay will be thorough, well-researched, and provide a complete analysis of the issue, taking counterarguments into consideration, if applicable.
Winners will receive recognition and the opportunity to publish their work in Machina, the journal of Stanford’s Symbolic Systems Program.
Submission Deadline: April 19, 2025
Eligibility: Open to all high schoolers
Length: Essays should be 8-10 pages long
Submission: To submit, email a PDF of your essay to both sdm1@stanford.edu and hansonhu@stanford.edu.
Please use the subject line “Machina Submission – [LAST NAME], [FIRST NAME]” (ex. “Machina Submission – Stanford, Leland”).
Please title your essay PDF “[LAST NAME]_[FIRST NAME]_submission” (ex. Stanford_Leland_submission).
Join us in shaping the conversation at the frontier of mind, computation, and society!
Some sample questions we would like you to think about:
Should an AI system be granted legal rights if it passes the Turing Test?
How should we balance algorithmic decision-making with human intuition in high-stakes fields like medicine and criminal justice?
What if AI developed its own language that humans could never understand?
Should AI be taxed as a worker or treated as a tool? How should society adapt to increasing automation?
Is AI art art? If an AI-generated work of art sells for millions, who deserves the profit—the AI, the programmer, or no one?
Can AI help solve systemic biases in society, or will it only reinforce them?
If an AI is programmed to act in self-preservation, does that make it alive?
Could AI ever develop humor? What would that reveal about human cognition?
Does language shape reality, or does reality shape language?
To what extent is our sense of self shaped by the language we use to describe ourselves?
Does the way we describe consciousness shape how we experience it?
If a being lacked language but displayed intelligence, would we consider it conscious?
Do certain languages encourage more ethical thinking, or is morality independent of language?
If free will is an illusion, does moral responsibility still exist?
Is it ethical to use psychological enhancement (e.g., nootropics, brain stimulation) to make people more intelligent or empathetic?
Cyberpunk worlds are defined by high technology, extreme income inequality, corporate dominance, and declining living standards for the masses. With big tech CEOs gaining unprecedented wealth and power, and little standing in their way, what—if anything—could prevent our world from fully becoming cyberpunk?
Feel free to frame your entry around one of these questions or around another topic of your choice related to Symbolic Systems.