Discussion about this post

Noah Birnbaum

Great post.

Three things that I think were a bit understated (though I realize that there are a lot of things to get through here):

1) A bit more engagement with those who worry that there is a race with China and that who wins it actually matters. I (mostly) don't see this as a reason not to work on AI - I just see it as a reason to work on more robust interventions (i.e., ones that work in both worlds), which I think exist. Not really discussing it seems somewhat (likely accidentally) intellectually dishonest.

2) Getting a bit clearer on who 80k is deferring to. In the past, 80k seemed to think that superforecasters are the people to defer to for predictions, but they seem to think this has changed (and even if superforecasters have updated, as mentioned in the piece, the update would likely be marginal - see the decision trees/conditional probabilities in the XPT). While there is good reason to update against superforecasters (e.g., the new FRI report reflecting on the XPT, showing that long-term and short-term predictions aren't correlated and that they already got some things wrong), I think it should be made more explicit who 80k is actually relying on (it doesn't seem to be the 0.38% number - but let me know if it is) and why. I guess 80k could be appealing to object-level reasoning (e.g., the probability given in the Benjamin Todd post on TAI by 2030), but relying so heavily on object-level reasoning here seems unusual, and I would like to know more about why it makes sense in this case.

3) While the difficulty of making progress on this issue was discussed, imo (mostly on vibes) it wasn't discussed hard enough - and the T in ITN stands for something. Actually doing things that land us in a good future (and are robust to bad ones) seems just insanely difficult here.

Also, it was frequently mentioned that 80k is extremely uncertain about AI being an x-risk, but it's unclear to me how true this can be given 80k's shift of resources to AI.

This is not to say that 80k shouldn't be certain; rather, being extremely uncertain about this claim (which is probably the largest reason to prioritize AI) might still mean that AI would be the top area, but it's less clear that they should have put fewer resources into other causes on the basis of a <1% probability (but maybe they disagree, and I would be curious to know!).

Curious to hear what other people think here and any critiques people have of these framings.

Anoop Kumar U

I have an important question in my mind, and I think that this is the correct space and time to ask it.

Humans already know the threats if such power-seeking large AI models are left to their own devices. Researchers have already confirmed the power-seeking behavior of AI models, and the makers of such models are aware of it.

So, my viewpoint is that they will take this into account and deliver AI models into society in a way that mitigates power-seeking behaviors. Or, say, by limiting the capacity of AI models to human assistance only, like what we have today. Large language models like ChatGPT, Gemini, Claude, etc., are great at helping us improve our understanding, but they cannot act on deeper motives of their own because of the way they are delivered. They function within strict boundaries set by their creators.

Therefore, if we can deliver even larger models, like a future AGI, with similar safeguards, why should we be concerned about an existential threat from AI?

Please, someone answer this question clearly. I am seeking clarification. Only if it is a real threat should we go deeper into the topic, right? That is where I am right now.
