Squiggle Hub

quri

// How much time would it take to migrate Squiggle Hub from GraphQL (slow) to RSC (10x faster)?

// searching in VS Code with ` query [A-Z]` regex; not counting tests
queries = 24
mutations = 25

minPerQuery() = mx(
  // some queries are made of nested fragments; this estimate comes from converting the frontpage, user page, definitions, and groups, and I should get faster with each query
  7 to 15,
  // are there any queries where I might have to load them in portions, dataloader style? I hope not, but not sure
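The preview cuts off inside the `mx(...)` call. As a minimal sketch, here is how a per-query estimate like this could roll up into a total migration time; the function name and the conversion are my assumptions, not the model's actual code:

```squiggle
// hypothetical roll-up; assumes the per-query estimate resolves to a distribution in minutes
totalHours(minPer) = minPer * (queries + mutations) / 60
totalHours(7 to 15) // with 49 operations, roughly 6 to 12 hours of mechanical conversion
```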
// for https://github.com/quantified-uncertainty/squiggle/discussions/3066

helpers = {
  @doc("Takes a function and produces a numerical derivative function for it.")
  derivative(f, eps) = {|x| (f(x + eps) - f(x)) / eps}

  optimalAllocation(
    totalUtilityFunctions,
    budget,
    step
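The `optimalAllocation` signature is truncated in this preview. A small usage sketch of the `derivative` helper above, assuming the `helpers` record is closed as written:

```squiggle
// numerical derivative of x^2 at x = 3; eps is the finite-difference step
f = {|x| x ^ 2}
fPrime = helpers.derivative(f, 1e-6)
fPrime(3) // ≈ 6
```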
prioritization = "
## Little things
- Slava:
  - [x] PR Reviews.
  - [x] Danger.location.
  - [x] Tooltips.
  - [ ] Clean up Playground settings for @scale tags (1 day of work). 
- Ozzie: Other ops/emails. Outreach, relative values estimates maybe. 

## Things to do
/*
 Simple example of the Drake Equation
*/

@hide
@doc("Simple helper to construct symbolic lognormal dists")
sTo(p5, p95) = Sym.lognormal({ p5: p5, p95: p95 })

@name("Number of new stars formed per year")
r_star = sTo(1, 10)
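The preview stops after the first factor. The remaining Drake-equation factors would follow the same `sTo` pattern; the ranges below are illustrative placeholders of mine, not the model's actual values:

```squiggle
@name("Fraction of stars with planetary systems")
f_p = sTo(0.1, 0.5) // illustrative range
@name("Habitable planets per star with planets")
n_e = sTo(0.5, 3) // illustrative range
partialProduct = r_star * f_p * n_e // further factors (f_l, f_i, f_c, L) omitted
```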
stages = {
  s1: {
    name: "",
    description: "1. It will become possible and financially feasible to build APS systems.",
    default: "65%",
  },
  s2: {
    name: "",
    description: "2. There will be strong incentives to build APS systems | (1)",
    default: "80%",
/*
Taking models from this post, by Nuno
https://forum.effectivealtruism.org/posts/BDXnNdBm6jwj6o5nc/five-slightly-more-hardcore-squiggle-model
*/

/* 
Part 1: AI timelines at every point in time
*/
part1 = {
  _sigma(slope, top, start, t) = {
/*
Describe your code here
*/

import "hub:quri/exports-example" as foo

foo
// Exports are new in Squiggle Hub. To export a variable, just prefix it with the keyword "export".
// You can later import a variable with the syntax: 
// import "hub:quri/exports-example" as exportsExample.
// The "hub" part is useful to indicate that this data comes from Squiggle Hub. We might allow other data sources in the future.

export a = normal(2, 5)
export b = [2,3,4]
/*
  Tried moving this model from Guesstimate, by Adrian Cockcroft - @adrianco
  https://www.getguesstimate.com/models/1307

  Model of a storage service showing how cache hits and misses
  contribute to a multi-modal response time distribution. The
  positions of the modes depend on the relative response times of
  each path through the model. The amplitude of each mode depends on
  the cache hit rates through the system. Mean/Median/Percentiles are
  a poor and unstable characterization of the distribution.
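The cache-hit/miss mixture described above can be sketched in a few lines; the names and rates here are mine, not the model's:

```squiggle
hitRate = 0.85
cacheHitMs = 1 to 5    // fast path, served from cache
cacheMissMs = 20 to 80 // slow path through backing storage
responseTimeMs = mx(cacheHitMs, cacheMissMs, [hitRate, 1 - hitRate])
```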
/*
 (Experiment on AI Timelines)
*/

//algorithmicImprovements

epochComputeTrends = {
  /** Doubling time of the training compute of milestone systems, since 2010 */
  deepLearningComputeDoublingTimeInYears: 0.5,
  deepLearningComputeInFlopsIn2023: 1e22,
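The two constants above imply a simple extrapolation; this is my reading of the trend, and it assumes the `epochComputeTrends` record is closed as shown:

```squiggle
t = epochComputeTrends // assumes the record above is closed
flops(year) = t.deepLearningComputeInFlopsIn2023 * 2 ^ ((year - 2023) / t.deepLearningComputeDoublingTimeInYears)
```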
/*
From Nuno Sempere
https://github.com/quantified-uncertainty/squiggle-models/blob/master/ukraine-ceasefire/ceasefire.squiggle
*/

/** Likelihood that a ceasefire will start */
numSuccesses = 0
numFailures = 138 // no ceasefire so far
numFutureTrials = 172 // days since the 24th of February
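These variables suggest Laplace's rule of succession; a sketch of how they would combine (my reconstruction, since the preview cuts off here):

```squiggle
// Laplace's rule: (successes + 1) / (trials + 2)
pPerDay = (numSuccesses + 1) / (numSuccesses + numFailures + 2)
pCeasefireInPeriod = 1 - (1 - pPerDay) ^ numFutureTrials // chance of at least one ceasefire
```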
/*
 We're considering LLM integration in Squiggle Hub.
These are some estimates of how much that would cost.
I'm assuming that "Running an LLM" means "Reading a full squiggle file" and then providing some feedback.
One main question here is how big these files are, and how frequently this will get run.
*/

/** Public GPT Pricing, from OpenAI */
models = {
  gpt4: {
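The pricing record is cut off mid-definition. A hedged sketch of the cost arithmetic such a model typically feeds into; the prices and volumes below are placeholders of mine, not the model's values:

```squiggle
// cost per run = tokens read * price per token; all inputs are illustrative
costPerRun(tokens, pricePer1kTokens) = tokens / 1000 * pricePer1kTokens
monthlyCost = costPerRun(2k to 10k, 0.03) * (500 to 5000) // runs per month, placeholder
```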
/*
 Some simple attempts 
*/

monthlySpend = 15k to 25k
amountInBankJuly = 355k
inBank(t: [0, 1.5]) = amountInBankJuly - monthlySpend * t * 12
inBankYear(t: [2023.5, 2025]) = inBank(t - 2023.5)

linear = [[2023.5, 2 to 2.4], [2024, 1.5 to 2.5], [2025, 0.2 to 3]]
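A derived quantity worth making explicit from the inputs above is the implied runway:

```squiggle
// runway in years from the same inputs; 355k / (15k–25k * 12) ≈ 1.2 to 2 years
runwayYears = amountInBankJuly / (monthlySpend * 12)
```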
/*
A quick attempt at making marginal utility curves for funding a project.
This assumes that a project produces discrete amounts of value with each hire;
i.e. money given to an org only produces value when it enables the next hire.

One annoying thing is that this doesn't allow for uncertainty of the costs for each hire;
that would require a lot more computation, which didn't seem worth it at this stage.
*/

hires = [
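The `hires` list is truncated in this preview; here is the shape it presumably takes, with illustrative numbers of mine, not the model's:

```squiggle
hiresExample = [
  { cost: 80k, value: 100k to 300k }, // first marginal hire
  { cost: 90k, value: 50k to 200k }   // second hire, diminishing returns
]
```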
/*
Improving Karma: $8mn of possible value (my estimate)
By Nathan Young
https://forum.effectivealtruism.org/posts/YajssmjwKndBTahQx/improving-karma-usd8mn-of-possible-value-my-estimate
*/

// Here is a link, but I've also put the code here (and I explain it below).

//critical stuff
pc_to(a, b) = truncate(a to b, 0, 1)
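A quick usage note on `pc_to`: `a to b` builds a distribution whose 90% CI runs from a to b, and the truncation clips it to the unit interval, so it behaves like an uncertain probability:

```squiggle
pSuccess = pc_to(0.2, 0.9) // 90% CI from 0.2 to 0.9, clipped to [0, 1]
```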
/*
 Model By Ben West, here: 
 https://forum.effectivealtruism.org/posts/ZtZmkgDW6MH8AEEK6/how-much-do-markets-value-open-ai.

  Summary: A BOTEC indicates that Open AI might have been valued at 220-430x their 
  annual recurring revenue, which is high but not unheard of. Various factors make 
  this multiple hard to interpret, but it generally does not seem consistent with
  investors believing that Open AI will capture revenue consistent with creating
  transformative AI.
*/
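The headline multiple is simply valuation over annual recurring revenue; a sketch with placeholder inputs, not Ben West's actual figures:

```squiggle
impliedMultiple(valuation, arr) = valuation / arr
impliedMultiple(28B, 100M) // 280x, within the 220–430x range above
```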