De Kai says the problem with AI is that it isn't mixing system 1 and system 2. I agree but have a completely different spin. The issue isn't that researchers don't want to do it, but that they have no idea how to make the two systems communicate.
This communication problem is a good way of explaining the weaknesses in Go AI I wrote about previously. The issue isn't narrowly that humans can beat the superhuman computers using certain very specific strategies. That's equivalent to a human chess grandmaster losing to a trained chimpanzee. The issue is that there are systemic weaknesses in the computer's play which make it much weaker overall, with losing to humans being a particularly comical symptom. Go is a perfect test case for communication between system 2 and system 1. The hand-coded system 2 for it is essentially perfect, at least as far as tactics go, and any ability to bridge the two would result in an immediate, measurable, dramatic improvement in playing strength. Anyone acting smart because they know this is a core issue for AI should put up or shut up: either come up with a technique which works on this perfectly engineered test case, or admit you're just as flummoxed as everybody else about how to do it.
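To make concrete how narrow the existing channel is, here's a rough sketch of the interface in AlphaZero-style engines: the learned system 1 hands the hand-coded search a probability per move and a single win estimate, and that's the whole conversation. Everything in it (DummyNet, the position handling) is a made-up stand-in for illustration, not any real engine's code.

```python
# A rough sketch of the narrow channel between the learned "system 1" and the
# hand-coded "system 2" search in AlphaZero-style engines. DummyNet and the
# position handling are made-up stand-ins, not any real engine's code.
import random

class DummyNet:
    """Stand-in for the learned evaluator: position in, (move priors, win estimate) out."""
    def evaluate(self, position, legal_moves):
        priors = {m: 1.0 / len(legal_moves) for m in legal_moves}  # uniform guess
        value = random.uniform(-1.0, 1.0)                          # fake win estimate
        return priors, value

def expand(node, net):
    # The entire conversation between the two systems happens on the next line:
    # one probability per legal move and a single scalar. The search never gets to ask why.
    priors, value = net.evaluate(node["position"], node["legal_moves"])
    node["children"] = {m: {"prior": p, "visits": 0, "value_sum": 0.0}
                        for m, p in priors.items()}
    return value  # backed up the tree by the hand-coded search logic

root = {"position": "empty 19x19 board", "legal_moves": ["D4", "Q16", "C3"]}
print(expand(root, DummyNet()), root["children"])
```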
On top of that we have no idea how to make system 2 dynamic. Right now it's all painstakingly hand-coded processes like alpha-beta pruning (sketched below) or test-driven development. I suspect there's a system 3: while system 2 reasons within models, system 3 comes up with the models in the first place. Despite almost all educational programs essentially claiming to promote system 3, there's no evidence that any of them actually do. It doesn't help that we have no idea how to test for it. The educational instruction we have puts a heavy emphasis on system 2, with the occasional bit of system 1 for very specific skills. Students can reliably be taught very specific world models and practiced in them. That helps them do tasks, but there's no evidence that it improves their ability to generalize.
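To pin down what I mean by a painstakingly hand-coded system 2 process, here's a minimal, self-contained sketch of the alpha-beta pruning mentioned above. The toy game tree at the bottom is made up purely for illustration; a real engine generates positions and evaluations from the game itself.

```python
# A minimal sketch of alpha-beta pruning over an explicit game tree
# (nested lists with integer leaves). Every rule here had to be worked
# out and written down by hand: that's what a static system 2 looks like.
def alpha_beta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, int):           # leaf: static evaluation of the position
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:           # opponent would never allow this line, stop searching it
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

tree = [[3, 5], [6, [9, 1]], [1, 2]]    # hypothetical two-ply position
print(alpha_beta(tree))                  # prints 6: best guaranteed outcome for the maximizer
```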
This gap between training specific skills and teaching generalization creates practical problems in highly skilled tasks like music. To a theorist, looking at what a self-taught person does is much more likely to turn up something novel and interesting than looking at someone who has been through standard training, even though the latter person will be objectively better overall. The problem is that a self-taught person will be 99% reinventing standard theory badly and, if they're lucky, 1% doing something novel and interesting. Unfortunately they'll have no idea which part is that 1%. Neither will a standard instructor, who will view the novel bit as just as bad as the vast bulk of what the student is doing and train them out of it. It takes someone with a deep understanding of theory and a strong system 3 of their own to notice something interesting in what a self-taught person has done, point out what it is, and encourage them to polish that thing rather than abandon it. I don't have any suggestions on how to make this process work better other than that everyone should get at least some personalized instruction from someone truly masterful. I'm even more at a loss for how to make AI better at it. System 3 is deep in the recesses of the human mind, the most difficult and involved thing our brains have gained the ability to do, and we may be a long way away from having AI do it well.