
i'm sick of the term agi being tossed around so loosely by researchers who have no idea what barriers we'll have to overcome before we get there. this would be a laughingstock in other sciences, yet seems to be perfectly acceptable within some circles in ai research. this is an ongoing dump from november 2023 of puzzles that we'll need to solve, in one way or another, before we get agi. as new research comes out, problems that have been solved will be crossed off.

discrete vs continuous representation of tokens
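to pin down the dichotomy, a tiny sketch (toy sizes, entirely my own illustration): models compute in continuous embedding space but have to read and emit discrete ids, and the hop back to discrete is non-differentiable.

```python
import torch
import torch.nn as nn

# toy sizes, purely illustrative: vocab of 100, embedding dim of 16
tokens = torch.tensor([17, 4, 92])    # discrete: integer ids, nothing to differentiate
vecs = nn.Embedding(100, 16)(tokens)  # continuous: vectors gradients can flow through
print(tokens.shape, vecs.shape)       # torch.Size([3]) torch.Size([3, 16])
# generation has to cross back to the discrete side via argmax/sampling,
# and that non-differentiable step is where much of the trouble lives
```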

perpetual inference. even when given no input, humans can think and reflect, updating our "weights" and "biases"; models have no analogous process when given genuinely nothing to respond to.
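the naive version is easy to write down, which is part of why the gap is interesting. a minimal sketch, assuming a toy next-token model of my own construction (nothing here is a known working recipe): with no prompt at all, the model samples its own next token and immediately takes a gradient step on it. loops like this are known to collapse into degenerate repetition rather than anything resembling reflection.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, dim = 100, 32  # toy sizes, made up for illustration

class LastTokenLM(nn.Module):
    """predict the next token from only the last one; a stand-in for a real model."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.out = nn.Linear(dim, vocab)
    def forward(self, tok):             # tok: (batch,) of token ids
        return self.out(self.emb(tok))  # (batch, vocab) logits

model = LastTokenLM()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

tok = torch.randint(vocab, (1,))  # no external input: seed with noise
for _ in range(100):              # "perpetual" loop, truncated here
    logits = model(tok)
    nxt = torch.distributions.Categorical(logits=logits).sample()
    loss = loss_fn(logits, nxt)   # train on the model's own sample
    opt.zero_grad(); loss.backward(); opt.step()
    tok = nxt                     # the output becomes the next "input"
```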

continual learning without needing to redeploy a model

stable memory. neural embedding seems like a poor substitute.
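to spell out what the substitute actually is: embedding memory stores facts as points and recalls by nearest neighbor, so recall is a similarity margin, not an address. a toy sketch with made-up stand-in vectors (a real system would use a learned encoder):

```python
import numpy as np

rng = np.random.default_rng(0)
facts = ["paris is in france", "parrots can mimic speech", "mars is red"]
embed = {f: rng.normal(size=64) for f in facts}  # stand-ins for learned embeddings

def recall(query_vec):
    # nearest neighbor by cosine similarity: the closest point wins,
    # whether or not it is the memory you meant to retrieve
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(facts, key=lambda f: cos(embed[f], query_vec))

noisy = embed["paris is in france"] + 1.2 * rng.normal(size=64)
print(recall(noisy))  # usually the paris fact, but only by cosine margin;
# writing new facts shifts the geometry under every old one
```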

agency, goals? arguably most people don't have any intrinsic motivation either, so i'll tag this as a question for now.

better methods for learning. this ties into a previous essay on coding llms: humans learn from far richer input than any llm. we get touch, hearing, smell, emotion, and many more channels than computers have access to. next to that, having access to all text online seems like a lackluster substitute. you would not expect an olympic archer to read about archery for 40 years and then shoot gold, so why do we expect llms to?

uncertainty, beyond the surface level approaches that we have so far
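for reference, the kind of surface-level approach i mean: read uncertainty straight off the softmax of a single forward pass. the logits below are made up.

```python
import torch

logits = torch.tensor([[2.0, 1.0, 0.5, 0.1]])  # made-up logits for one prediction
probs = torch.softmax(logits, dim=-1)
entropy = -(probs * probs.log()).sum(-1)       # predictive entropy in nats
print(float(entropy))
# high entropy reads as "unsure", but this only measures the shape of one
# forward pass; a model can be confidently wrong, and nothing here asks
# whether it knows what it doesn't know
```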

interpretability. none of this means anything if we don't know how a model arrives at a given output.

modular learning. a human learning how to drive doesn't impact their ability to take the derivative of an equation, and vice versa.
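the failure mode on the model side is catastrophic forgetting, and it shows up even in a linear model. a toy illustration (both "tasks" are made-up regression targets sharing one set of weights): fit task a, fine-tune on task b, and task a degrades.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net, mse = nn.Linear(8, 1), nn.MSELoss()
opt = torch.optim.SGD(net.parameters(), lr=0.05)
xa, xb = torch.randn(256, 8), torch.randn(256, 8)
ya, yb = xa @ torch.randn(8, 1), xb @ torch.randn(8, 1)  # two unrelated made-up tasks

def fit(x, y, steps=500):
    for _ in range(steps):
        opt.zero_grad()
        mse(net(x), y).backward()
        opt.step()

fit(xa, ya)
print("task a loss after learning a:", float(mse(net(xa), ya)))  # near zero
fit(xb, yb)
print("task a loss after learning b:", float(mse(net(xa), ya)))  # much worse:
# learning b overwrote the weights a depended on
```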

not agi specific, but rl breaks down for real-world tasks. much of classic rl rests on q-learning, and q-learning rests on a q-table, a lookup that can't cover continuous real-world state spaces; deep variants swap the table for function approximation, but those are notoriously unstable (see the sketch below). how do researchers make agi claims while knowing this? or are they simply unaware? that seems even more concerning.
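here's the sketch: tabular q-learning on a toy 10-state chain, entirely my own construction. the value function is literally a lookup keyed by discrete (state, action) pairs, so a continuous real-world state has nowhere to go without function approximation layered on top.

```python
import random
from collections import defaultdict

n_states, n_actions = 10, 2        # toy chain: states 0..9, actions 0=left, 1=right
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration
q = defaultdict(float)             # the q-table: one entry per discrete (state, action)

def step(s, a):
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == n_states - 1 else 0.0), s2 == n_states - 1

def greedy(s):  # argmax over actions, breaking ties randomly
    vals = [q[(s, a_)] for a_ in range(n_actions)]
    return random.choice([a_ for a_, v in enumerate(vals) if v == max(vals)])

for _ in range(500):
    s, done = 0, False
    while not done:
        a = random.randrange(n_actions) if random.random() < eps else greedy(s)
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(q[(s2, a_)] for a_ in range(n_actions))
        # the q-learning update: pull q(s,a) toward r + gamma * max_a' q(s',a')
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

print(q[(0, 1)] > q[(0, 0)])  # True: moving right dominates from the start state
```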

self-training. humans can independently come up with hypotheses, think them over, run appropriate experiments, and learn from the results. this is a relatively difficult challenge. cmu has a primitive version of this in NELL (Never-Ending Language Learning), but it's not the exact approach i'd imagine for agi.
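a primitive, fully made-up toy of the loop i mean (nothing to do with NELL's actual architecture): the world hides a rule, the agent proposes candidate rules, probes the world, and keeps whatever survives the evidence.

```python
import random

def secret(n):  # the hidden rule the agent has to discover
    return n % 3 == 0

# candidate hypotheses, deliberately simple: "divisible by k" for k in 2..9
candidates = [lambda n, k=k: n % k == 0 for k in range(2, 10)]

for _ in range(200):
    n = random.randrange(1, 100)  # "experiment": probe the world
    outcome = secret(n)
    # "learn": discard every hypothesis the evidence falsifies
    candidates = [h for h in candidates if h(n) == outcome]

print(len(candidates), "surviving hypothesis(es)")  # converges to just n % 3 == 0
```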

memory abstraction. something like this exists in work on semantic consolidation, but only loosely.

note that these are based on the assumption that we're modeling agi after human intelligence.

12/9/2025