/*
  A model for the probability of solving AI alignment.

  The basic setup:
  1. It will cost some amount to solve alignment; the cost is distributed over
     multiple orders of magnitude.
  2. Some amount has already been spent, and some amount will be spent in the
     future.
  3. If the amount spent exceeds the cost, then alignment is solved.
*/
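/*
  A minimal sketch of the setup above in Squiggle. All numbers below are
  placeholder assumptions for illustration (in billions of dollars), not
  estimates from this model:
*/
sketchCost = 1 to 1000    // cost to solve alignment; spans orders of magnitude
sketchSpent = 0.5 to 2    // amount already spent
sketchFuture = 5 to 50    // amount expected to be spent in the future
sketchSurplus = sketchSpent + sketchFuture - sketchCost
// P(alignment solved) = P(total spending exceeds the cost)
sketchPSolved = 1 - cdf(sketchSurplus, 0)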
/*
Cost to morally offset the harm of an LLM subscription by donating to AI safety
organizations.
Variables and their values were written by me; scaffolding and docs were initially written by AI and then heavily edited by me.
Note: Any reference to "AI companies" refers specifically to frontier AI companies that are working toward AGI/ASI.
*/
@name("Revenue to Valuation Ratio")

/*
  If the USA unilaterally pauses at a national level and doesn't coordinate
  with China, is that good or bad? "Pause" doesn't have to be a literal pause;
  it could also be a meaningful slowdown. What I have in mind is a significant
  pause or slowdown to put serious effort into solving AI safety, not just a
  short pause or slight slowdown.

  The model was written manually with point probabilities and then run through
  Squiggle AI to generate boilerplate + documentation; I then manually edited
  the documentation to make it better. Squiggle AI also turned all my point
  probabilities into ranges, which is fine I guess.
*/

// == Part 1: P(doom) ==