I spent the past few days in the woods with a group of people interested in AI. The agenda included two days of discussion and scenario planning for “what would the world look like if intelligence was too cheap to meter?” The group introduced me to some new perspectives – I hadn’t spoken about AI in a large group before, let alone one of experts from different disciplines. There were neuroscientists, some hackers, some Google and Anthropic foundation model people, economists, UX researchers, and product people who’ve done big projects at startups or FAANG.
I was, of course, the least expert person there. I’ve been shying away from going deep on LLMs because I find it a little daunting to keep up with the pace of new developments. But in preparation for this event, I did a lot of reading and thinking so that I could contribute.
There will be a full “report” coming out from the retreat shortly, but here’s some bits and bobs that I found interesting.
Gordon Brander wrote a piece comparing LLMs to intuition instead of “intelligence.” I like the analogy: the prediction of the next word in the sentence as an “educated guess” feels like dealing with other people’s intuitions. “The brain is a machine for jumping to conclusions,” and so is an LLM, it seems.
I was pretty impressed to see other people’s queries – they’re so much more sophisticated in style than mine. As part of a team brainstorming exercise, Gordon wrote a query that “reminded” Claude about a bunch of theory in the space of scenario planning, then told it to “ack and await further instructions” as he continued to add context. He then got Claude to write some pretty reasonable stories about utopian and dystopian worlds, grounded in the theory he highlighted at the top. Little tricks seem to really work – things like “Reminder: you are an expert in XYZ.” I predict there will be people who call themselves “model whisperers” and pride themselves on that ability. Even the models’ creators (a staff researcher from Anthropic was there) don’t know all of the tricks. So much emergent behavior!
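For the curious, the “ack and await” pattern above can be sketched as code. This is a minimal, hypothetical sketch of how you might assemble such a conversation – every name here is mine, and a real version would send each turn to a chat API rather than simulating the model’s acknowledgements:

```python
# Sketch of the "ack and await further instructions" prompting pattern.
# All names are hypothetical; in a live session the assistant turns
# would come back from the model, not be filled in locally.

def build_context_messages(system_reminder, context_chunks, final_task):
    """Build a chat transcript that feeds context in pieces.

    Each chunk becomes a user turn ending with an instruction to
    acknowledge and wait; the final user turn carries the actual task.
    """
    messages = []
    for chunk in context_chunks:
        messages.append({
            "role": "user",
            "content": f"{chunk}\n\nAck and await further instructions.",
        })
        # Placeholder for the model's reply in a real session.
        messages.append({"role": "assistant", "content": "Acknowledged."})
    messages.append({"role": "user", "content": final_task})
    # The "Reminder: you are an expert..." trick lives in the system prompt.
    return {"system": system_reminder, "messages": messages}

prompt = build_context_messages(
    "Reminder: you are an expert in scenario planning.",
    ["Theory note 1: ...", "Theory note 2: ..."],
    "Write a short utopian and a short dystopian scenario based on the theory above.",
)
```

The point of the pattern is just to let you pile up context over several turns before asking for the real output.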
Neuromorphic chips seem incredible: why not run something like a recurrent neural network (RNNs are loosely brain-like) on hardware that works like a brain?
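If, like me, you’re fuzzy on what “recurrent” actually means: the same cell is applied at every time step, with a hidden state carrying memory forward – that loop is the loose analogy to the brain’s recurrent wiring. A toy sketch (shapes and names are my own, not any particular architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
W_h = rng.normal(size=(4, 4)) * 0.1   # hidden-to-hidden weights
W_x = rng.normal(size=(4, 3)) * 0.1   # input-to-hidden weights

def rnn_step(h, x):
    """One recurrent update: new state from old state plus new input."""
    return np.tanh(W_h @ h + W_x @ x)

h = np.zeros(4)                       # initial hidden state
for x in rng.normal(size=(5, 3)):     # a sequence of 5 inputs
    h = rnn_step(h, x)                # the state persists across steps
```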
Watch OpenAI’s demo day, and look at Ben Thompson’s take. So many friends and familiar faces, but also I’m 1) pessimistic about the future of AI dev tools companies like LangChain and 2) curious to learn more about the relationship between OpenAI and MSFT with respect to enterprise products (not just APIs).
We used to use metaphors from steam engines and other past machines to refer to the mind, but in this case, the machine is directly inspired by the mind. It may seem dumb that I didn’t know this, but I didn’t appreciate the extent to which the “neural” in neural networks is brain-like. I didn’t know the original AlexNet was so heavily inspired by neuroscience.
Huang’s Law: Jensen Huang of Nvidia’s version of Moore’s law, describing the rapid growth of GPU performance.
We spent a lot of time brainstorming what the “intelligence too cheap to meter” world would look like. One idea that kept coming up was the resurgence of rural spaces and in-person activities. I have been excited about the New Ruralism for a while now – maybe AI is a catalyst?
Hot bots on dating apps are exactly as bad as you’d imagine. If you’re showing signs of flagging on Tinder, they’ll send a passel of hot bots at you to try to make you believe that love is indeed possible.
Scientists are anecdotally really optimistic about AI helping accelerate scientific progress. No matter how the regulation shakes out, no matter how the winner-takes-all foundation model dynamics play out, scientists are excited. Someone go build cool tools for science!
This is where you go to get the full list of generative AI events in SF. Some people recommend going!