4 Comments
Gideon Futerman

I should say, this is my first time trying to publish fairly unpolished writing and thoughts - writing more like I speak than in a more polished essay/paper format. So if people have feedback on the style, length of the pieces, etc., I'd be really interested to hear it.

Gaurav Yadav

It would have been helpful if you’d bolded certain sections to make it easier to parse — which is what most people probably do with pieces like this anyway to figure out what they want to read.

This was an enjoyable read, and I agree with the main beats. For a while, I ignored post-AGI thinking because (a) extinction risks haven’t been solved, so there’s no point in thinking about a post-AGI world yet, and (b) I assumed AGI would set the tone for the world in a way that humans wouldn’t have any meaningful say. I buy this less now, and I think this piece solidified some of the vague intuitions I’ve had recently. Great to see you on Substack, by the way. Please keep writing!

Amos Wollen

Some thoughts for Substack, specifically:

1. Article summaries at the beginning are in fact helpful, and *should* be more widely used; but they're so uncommon on Substack that most readers will find them out of place (it would be a bit like including an abstract). In particular, most Substack users like to read blogs that don't feel like academic essays or reports — Dan Williams is a Substack writer who pretty routinely manages to do the sort of thing you want to do in a Substack-compatible way, so I recommend checking out his blog to see how he does it.

2. More paragraph breaks!

3. For increased informality, consider adopting a moderate anti-hedging heuristic — of course, you should hedge when necessary (and always when making predictions etc.), but many of the sentences included qualifiers that bogged them down without adding relevant content.

4. Otherwise, I really like the tone — the writing sounds like you, and your writing feels measured and trustworthy.

DeepLeftAnalysis🔸

What would it mean for an AI to be interpretable? Is AI at present interpretable?
