Models in AI safety

/*
If the USA unilaterally pauses AI development at a national level and doesn't coordinate with China, is that good or bad?

"pause" doesn't have to be a literal pause, could also be a meaningful slowdown. What I have in mind is a significant pause or slowdown to put serious effort into solving AI safety, not just a short pause/slight slowdown.

The model was written manually with point probabilities and then run through Squiggle AI to generate boilerplate and documentation; I then edited the documentation by hand to improve it. Squiggle AI also turned all my point probabilities into ranges, which is fine I guess (see the illustrative snippet after this comment block).
*/
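
// Illustration only: a rough sketch of the point-probability-to-range conversion
// mentioned above, using hypothetical variables not used elsewhere in the model.
pExamplePoint = 0.2          // a manually chosen point probability
pExampleRange = 0.15 to 0.25 // Squiggle's `a to b` gives a distribution whose 90% CI spans 0.15 to 0.25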

// == Part 1: P(doom) ==