OpenAI Email Archives (from Musk v. Altman and OpenAI blog) - Part III

Date: Dec 16, 2024
Slug: open_ai_email_archives_musk_altman3
Status: Published
Tags: AI
Type: Post
Subject: Top AI institutions today
Andrej Karpathy to Elon Musk, (cc: Shivon Zilis) - Jan 31, 2018 1:20 PM
The ICLR conference (which is the top deep-learning-specific conference; NIPS is larger, but more diffuse) released their decisions for accepted/rejected papers, and someone made some nice plots that show where the current deep learning / AI research happens. It's an imperfect measure because not every company might prioritize paper publications, but it's indicative.
Here's a plot that shows the total number of papers (broken down by oral/poster/workshop/rejected) from any institution:
Long story short, Google is dominating with 83 paper submissions. The academic institutions (Berkeley / Stanford / CMU / MIT) are next, in the 20-30 range each.
Just thought it was an interesting snapshot of where all the action is today. The full data is here: http://webia.lip6.fr/~pajot/dataviz.html
- Andrej
Elon Musk to Greg Brockman, Ilya Sutskever, Sam Altman, (cc: Sam Teller, Shivon Zilis, fw: Andrej Karpathy) - Jan 31, 2018 2:02 PM
OpenAI is on a path of certain failure relative to Google. There obviously needs to be immediate and dramatic action or everyone except for Google will be consigned to irrelevance.
I have considered the ICO approach and will not support it. In my opinion, that would simply result in a massive loss of credibility for OpenAI and everyone associated with the ICO. If something seems too good to be true, it is. This was, in my opinion, an unwise diversion.
The only paths I can think of are a major expansion of OpenAI and a major expansion of Tesla AI. Perhaps both simultaneously. The former would require a major increase in funds donated and highly credible people joining our board. The current board situation is very weak.
I will set up a time for us to talk tomorrow. To be clear, I have a lot of respect for your abilities and accomplishments, but I am not happy with how things have been managed. That is why I have had trouble engaging with OpenAI in recent months. Either we fix things and my engagement increases a lot or we don’t and I will drop to near zero and publicly reduce my association. I will not be in a situation where the perception of my influence and time doesn’t match the reality.
Elon Musk to Andrej Karpathy - Jan 31, 2018 2:07 PM
fyi
What do you think makes sense? Happy to talk by phone if that’s better.
Greg Brockman to Elon Musk, (cc: Ilya Sutskever, Sam Altman, [redacted], Shivon Zilis) - Jan 31, 2018 10:56 PM 📎
Hi Elon,
Thank you for the thoughtful note. I have always been impressed by your focus on the big picture, and agree completely we must change trajectory to achieve our goals. Let's speak tomorrow, any time 4p or later will work.
My view is that the best future will come from a major expansion of OpenAI. Our goal and mission are fundamentally correct, and that will increasingly be a superpower as AGI grows near.
Fundraising
Our fundraising conversations show that:
- Ilya and I are able to convince reputable people that AGI can really happen in the next ≤10 years
- There's appetite for donations from those people
- There's very large appetite for investments from those people
I respect your decision on the ICO idea, which matches the evolution of our own thinking. Sam Altman has been working on a fundraising structure that does not rely on a public offering, and we will be curious to hear your feedback.
Of the people we've been talking to, the following people are currently my top suggestions for board members. Would also love suggestions for your top picks not on this list, and we can figure out how to approach them.
- <redacted>
- <redacted>
- <redacted>
- <redacted>
- <redacted>
- <redacted> (she heads Partnership on AI, originally created by Demis to steal OpenAI's thunder – would bring a lot of outside credibility)
The next 3 years
Over the next 3 years, we must build 3 things:
- Custom AI hardware (such as <redacted> computer)
- Massive AI data center (likely multiple revs thereof)
- Best software team, mixing between algorithm development, public demonstrations, and safety
We've talked the most about the custom AI hardware and AI data center. On the software front, we have a credible path (self-play in a competitive multiagent environment) which has been validated by Dota and AlphaGo. We also have identified a small but finite number of limitations in today's deep learning which are barriers to learning from human levels of experience. And we believe we uniquely are on trajectory to solving safety (at least in broad strokes) in the next three years.
We would like to scale headcount in this way:
- Beginning of 2017: ~40
- End of 2018: 100
- End of 2019: 300
- End of 2020: 900
█████████████ █████████████████ ██████
Moral high ground
Our biggest tool is the moral high ground. To retain this, we must:
- Try our best to remain a non-profit. AI is going to shake up the fabric of society, and our fiduciary duty should be to humanity.
- Put increasing effort into the safety/control problem, rather than the fig leaf you've noted in other institutions. It doesn't matter who wins if everyone dies. Related to this, we need to communicate a "better red than dead" outlook — we're trying to build safe AGI, and we're not willing to destroy the world in a down-to-the-wire race to do so.
- Engage with government to provide trusted, unbiased policy advice — we often hear that they mistrust recommendations from companies such as ████████████.
- Be perceived as a place that provides public good to the research community, and keeps the other actors honest and open via leading by example.
The past 2 years
I would be curious to hear how you rate our execution over the past two years, relative to resources. In my view:
- Over the past five years, there have been two major demonstrations of working systems: AlphaZero [DeepMind] and Dota 1v1 [OpenAI]. (There are a larger number of breakthroughs of "capabilities" popular among practitioners, the top of which I'd say are: ProgressiveGAN [NVIDIA], unsupervised translation [Facebook], WaveNet [DeepMind], Atari/DQN [DeepMind], machine translation [Ilya at Google — now at OpenAI], generative adversarial network [Ian Goodfellow at grad school — now at Google], variational autoencoder [Durk at grad school — now at OpenAI], AlexNet [Ilya at grad school — now at OpenAI].) We benchmark well on this axis.
- We grew very rapidly in 2016, and in 2017 iterated to a working management structure. We're now ready to scale massively, given the resources. We lose people on comp currently, but pretty much only on comp. I've been resuming the style of recruiting I did in the early days, and believe I can exceed those results.
- We have the most talent dense team in the field, and we have the reputation for it as well.
- We don't encourage paper writing, and so paper acceptance isn't a measure we optimize. For the ICLR chart Andrej sent, I'd expect our (accepted papers)/(people submitting papers) to be the highest in the field.
- gdb
Andrej Karpathy to Elon Musk - Jan 31, 2018 11:54 PM
Working at the cutting edge of AI is unfortunately expensive. For example, DeepMind's operating expenses in 2016 were at around $250M USD (does not include compute). With their growing team today it might be ~0.5B/yr. But then Alphabet in 2016 reported ~20B net income so it's still fairly cheap even if DeepMind had no revenue of its own. In addition to DeepMind, Google also has Google Brain, Research, and Cloud. And TensorFlow, TPUs, and they own about a third of all research (in fact, they hold their own AI conferences).
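The cost arithmetic in the email above can be sanity-checked in a few lines (the figures are the email's own rounded estimates, not audited numbers):

```python
# Back-of-the-envelope check of the figures quoted in the email above
# (rounded estimates from the email, not audited financials).

deepmind_annual_cost = 0.5e9   # ~$0.5B/yr estimated DeepMind run rate
alphabet_net_income = 20e9     # ~$20B Alphabet net income (2016, as cited)

# DeepMind's cost as a fraction of Alphabet's net income:
share = deepmind_annual_cost / alphabet_net_income
print(f"DeepMind costs ~{share:.1%} of Alphabet's net income")  # ~2.5%
```

At roughly 2.5% of net income, the email's point stands: even a half-billion-dollar lab is "fairly cheap" for Alphabet.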
I also strongly suspect that compute horsepower will be necessary (and possibly even sufficient) to reach AGI. If historical trends are any indication, progress in AI is primarily driven by systems - compute, data, infrastructure. The core algorithms we use today have remained largely unchanged from the ~90s. Not only that, but any algorithmic advances published in a paper somewhere can be almost immediately re-implemented and incorporated. Conversely, algorithmic advances alone are inert without the scale to also make them scary.
It seems to me that OpenAI today is burning cash and that the funding model cannot reach the scale to seriously compete with Google (an 800B company). If you can't seriously compete but continue to do research in the open, you might in fact be making things worse and helping them out "for free", because any advances are fairly easy for them to copy and immediately incorporate, at scale.
A for-profit pivot might create a more sustainable revenue stream over time and would, with the current team, likely bring in a lot of investment. However, building out a product from scratch would steal focus from AI research, it would take a long time and it's unclear if a company could "catch up" to Google scale, and the investors might exert too much pressure in the wrong directions.
The most promising option I can think of, as I mentioned earlier, would be for OpenAI to attach to Tesla as its cash cow. I believe attachments to other large suspects (e.g. Apple? Amazon?) would fail due to an incompatible company DNA. Using a rocket analogy, Tesla already built the "first stage" of the rocket with the whole supply chain of Model 3 and its onboard computer and a persistent internet connection. The "second stage" would be a full self driving solution based on large-scale neural network training, which OpenAI expertise could significantly help accelerate. With a functioning full self-driving solution in ~2-3 years we could sell a lot of cars/trucks. If we do this really well, the transportation industry is large enough that we could increase Tesla's market cap to high O(~100K), and use that revenue to fund the AI work at the appropriate scale.
I cannot see anything else that has the potential to reach sustainable Google-scale capital within a decade.
- Andrej
Elon Musk to Ilya Sutskever, Greg Brockman - Feb 1, 2018 3:52 AM
[Forwarded previous message from Andrej]
Andrej is exactly right. We may wish it otherwise, but, in my and Andrej’s opinion, Tesla is the only path that could even hope to hold a candle to Google. Even then, the probability of being a counterweight to Google is small. It just isn’t zero.
Subject: AI updates
Shivon Zilis to Elon Musk, (cc: Sam Teller) - Mar 25, 2018 11:03 AM
OpenAI
Fundraising:
- No longer doing the ICO / “instrument to purchase compute in advance” type structure. Altman is thinking through an instrument where the 4-5 large corporates who are interested can invest with a return capped at 50x if OpenAI does get to some semblance of money-making AGI. They apparently seem willing just for access reasons. He wants to discuss with you in more detail.
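For illustration only (this sketch is not from the emails, and the figures are hypothetical examples, not actual OpenAI terms), a 50x return cap of the kind described above works like this:

```python
def capped_return(investment: float, uncapped_value: float,
                  cap_multiple: float = 50.0) -> float:
    """Payout under a capped-return structure: the investor receives the
    value of their stake, but never more than cap_multiple times the
    original investment. (Hypothetical illustration, not actual terms.)"""
    return min(uncapped_value, cap_multiple * investment)

# A $10M stake that would otherwise be worth $1B is capped at $500M:
print(capped_return(10e6, 1e9))    # 50 * 10e6 = 500000000.0
# A $10M stake worth $200M is below the cap and pays out in full:
print(capped_return(10e6, 200e6))  # 200000000.0
```

Any value above the cap would flow to the non-profit rather than the investor, which is the mechanism that made the structure compatible with the mission.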
Formal Board Resignation:
- You're still technically on the board so need to send a quick one liner to Sam Altman saying something like “With this email I hereby resign as a director of OpenAI, effective Feb 20th 2018”.
Future Board:
- Altman said he is cool with me joining then having to step off if I become conflicted, but is concerned that others would consider it a burned bridge if I had to step off. I think best bet is not to join for now and be an ambiguous advisor but let me know if you feel differently. They have Adam D’Angelo as the potential fifth to take your place, which seems great?
TeslaAI
Andrej has three candidates in pipeline, may have 1-2 come in to meet you on Tuesday. He will send you a briefing note about them. Also, he’s working on starter language for a potential release that will be ready to discuss Tuesday. It will follow the “full-stack AI lab” angle we talked about but, if that doesn’t feel right, please course correct... is tricky messaging.
Cerebras
Chip should be available in August for them to test, and they plan to let others have remote access in September. The Cerebras guy also mentioned that a lot of their recent customer interest has been from companies upset about the Nvidia change in terms of service (the one that forces companies away from consumer-grade GPUs to enterprise Pascals / Voltas). Scott Gray and Ilya continue to spend a bunch of time with them.
Subject: The OpenAI Charter
Sam Altman to Elon Musk, (cc: Shivon Zilis) - Apr 2, 2018 1:54 PM
We are planning to release this next week--any thoughts?
The OpenAI Charter
OpenAI's mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically-valuable creative work — benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:
Broadly Distributed Benefits
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always assiduously act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
Long-Term Safety
We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be "a better-than-even chance of success in the next 2 years".
Technical Leadership
To be effective at addressing AGI's impact on society, OpenAI must be on the cutting edge of AI capabilities — policy and safety advocacy alone would be insufficient.
We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise.
Cooperative Orientation
We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges.
We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
Elon Musk to Sam Altman - Apr 2, 2018 2:45 PM
Sounds fine
Subject: AI updates (continuation)
Shivon Zilis to Elon Musk, (cc: Sam Teller) - Apr 23, 2018 1:49 AM (continued from thread with same subject above)
Updated info per a conversation with Altman. You’re tentatively set to speak with him on Tuesday.
Financing:
- He confirmed again that they are definitely not doing an ICO but rather equity that has a fixed maximum return.
- Would be a rather unique subsidiary structure for the raise which he wants to walk you through.
- Wants to move within 4-6 weeks on first round (probably largely Reid money, potentially some corporates).
Tech:
- Says Dota 5v5 is looking better than anticipated.
- The sharp rise in Dota bot performance is apparently causing people internally to worry that the timeline to AGI is sooner than they’d thought before.
- Thinks they are on track to beat Montezuma’s Revenge shortly.
Time allocation:
- I’ve reallocated most of the hours I used to spend with OpenAI to Neuralink and Tesla. This naturally happened with you stepping off the board and related factors — but if you’d prefer I pull more hours back to OpenAI oversight please let me know.
- Sam and Greg asked if I’d be on their informal advisory board (just Gabe Newell so far), which seems fine and better than the formal board given potential conflicts? If that doesn’t feel right let me know what you’d prefer.
Subject: Re: OpenAI update 📎
Sam Altman to Elon Musk - Dec 17, 2018 3:42 PM
Hey Elon–
In Q1 of next year, we plan to do a final Dota tournament with any pro team that wants to play for a large cash prize, on an unrestricted game. After that, we'll call model-free RL complete, and a subset of the team will work on re-solving 1v1 Dota with model-based RL.
We also plan to release a number of robot hand demos in Q1–Rubik's cube, pen tricks, and Chinese ball rotation. The learning speed for new tasks should be very fast. Later in the year, we will try mounting two hands on two arms and see what happens...
We are also making fast progress on language. My hope is that next year we can generate short stories and a good dialogue bot.
████████████████████████████████████████████████████████████████████████████ The hope is that we can use this unsupervised learning to build models that can do hard things, eg never make an image classification mistake a human wouldn't, which to me would imply some level of conceptual understanding.
We are also making good progress in the multi-agent environment, with multiple agents now collaborating to build simple structure, play laser tag, etc.
Finally, I am working on a deal to move our computing from Google to Microsoft (in addition to our own data centers).
Also happy to talk about fundraising (we should have enough for the next ~2 years even with aggressive growth) and evolving hardware thoughts if helpful, would prefer not to put either of those in email but maybe next time you're at Pioneer?
Sam
Elon Musk to Sam Altman - Dec 17, 2018 3:47 PM
Sounds good
Could probably meet Wed eve in SF
Subject: I feel I should reiterate 📎
Elon Musk to Ilya Sutskever, Greg Brockman, (cc: Sam Altman, Shivon Zilis) - Dec 26, 2018 12:07 PM
My probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%. Not 1%. I wish it were otherwise.
Even raising several hundred million won't be enough. This needs billions per year immediately or forget it.
Unfortunately, humanity's future is in the hands of <redacted>. [Link to article]
█████████████████████████████████████████████████████████████████████████████████████████████████████████████████
OpenAI reminds me of Bezos and Blue Origin. They are hopelessly behind SpaceX and getting worse, but the ego of Bezos has him insanely thinking that they are not!
I really hope I'm wrong.
Elon
Subject: OpenAI
Sam Altman to Elon Musk, (cc: Sam Teller, Shivon Zilis) - Mar 6, 2019 3:13 PM
Elon—
Here is a draft post we are planning for Monday. Anything to add/edit?
TL;DR:
- We've created the capped-profit company and raised the first round, led by Reid and Vinod.
- We did this in a way where all investors are clear that they should never expect a profit.
- We made Greg chairman and me CEO of the new entity.
- We have tested this structure with potential next-round investors and they seem to like it.
Speaking of the last point, we are now discussing a multi-billion dollar investment which I would like to get your advice on when you have time. Happy to come see you some time you are in the bay area.
Sam
Draft post[3]
Subject: Bloomberg: AI Research Group Co-Founded by Elon Musk Starts For-Profit Arm
Elon Musk to Sam Altman - Mar 11, 2019 3:04 PM
Please be explicit that I have no financial interest in the for-profit arm of OpenAI
AI Research Group Co-Founded by Elon Musk Starts For-Profit Arm
Bloomberg
OpenAI, the San Francisco-based artificial intelligence research group co-founded by Elon Musk and several other prominent Silicon Valley entrepreneurs, is starting a for-profit arm that will allow it to raise more money. Read the full story
Shared from Apple News
Sam Altman to Elon Musk - Mar 11, 2019 3:11 PM
on it
iMessage between Elon Musk and Sam Altman 📎
Elon Musk to Sam Altman - October 23, 2022 3:06 AM
Elon here
New Austin number
I was disturbed to see OpenAI with a $20B valuation. De facto. I provided almost all the seed, A and most of B round funding.
https://www.theinformation.com/articles/openai-valued-at-nearly-20-billion-in-advanced-talks-with-microsoft-for-more-funding
This is a bait and switch
iMessage between Sam Altman and Shivon Zilis 📎
Sam Altman - 8:07 AM
got this from elon, what do you suggest: <screenshot of iMessage between Elon and Shivon from October 23>
you've offered the stock thing to him in the past and he said no right?
i don't know what he means by A and B round
Shivon Zilis
It's unclear what the actual issue is here. Not having stock, it still being the same entity in public belief that he initially funded (OpenAI name), or just disagreement with the direction
It was offered to him and was declined at the time. I don't recall what actual path was decided on for that. I thought you asked him directly at one point if I'm not mistaken?
Call if you'd like additional context, but overall recommendation is don't text back immediately
Sam Altman - 9:13 PM
quick read if you have a second?
I agree this feels bad—we offered you equity when we established the cap profit, which you didn't want at the time but we are still very happy to do if you'd like. We saw no alternative to a structure change given the amount of capital we needed and still to preserve a way to 'give the AGI to humanity' other than the capped profit thing, which also lets the board cancel all equity if needed for safety. Fwiw I personally have no equity and never have. Am trying to navigate tricky tightrope the best I can and would love to talk about how it can be better any time you are free. Would also love to show you recent updates.
Sam Altman - 10:50 PM
(i sent)
got this back: I will be in SF most of this week for the Twitter acquisition. Let's talk Tues or Wed.
Shivon Zilis
Sorry was asleep! Long few days. Great
- ^ Emails and messages from the OpenAI blog posts are indicated with a 📎 emoji. Unless otherwise indicated, the source for an email is the Musk v. Altman court documents.
- ^ Edited to add a collapsible section; the original was just a large text block.
- ^ Edited to add a collapsible section.