OpenAI Email Archives (from Musk v. Altman and OpenAI blog) - Part II

date: Dec 16, 2024
slug: open_ai_email_archives_musk_altman2
status: Published
tags: AI
summary:
type: Post

Subject: Bi-weekly updates 📎

Ilya Sutskever to: Greg Brockman, [redacted], Elon Musk - Jun 12, 2017 10:39 PM

This is the first of our bi-weekly updates. The goal is to keep you up to date, and to help us make greater use of your visits.
Compute:
  • Compute is used in two ways: to run a single big experiment quickly, and to run many experiments in parallel.
  • 95% of progress comes from the ability to run big experiments quickly; the ability to run many experiments in parallel is much less useful.
  • In the old days, a large cluster could help you run more experiments, but it could not help with running a single large experiment quickly.
  • For this reason, an academic lab could compete with Google, because Google's only advantage was the ability to run many experiments. This is not a great advantage.
  • Recently, it has become possible to combine 100s of GPUs and 100s of CPUs to run an experiment that's 100x bigger than what is possible on a single machine while requiring comparable time. This has become possible due to the work of many different groups. As a result, the minimum necessary cluster for being competitive is now 10–100x larger than it was before (a minimal sketch of the data parallelism behind this follows this list).
  • Currently, every Dota experiment uses 1000+ cores, and it is only for the small 1v1 variant, and on extremely small neural network policies. We will need more compute just to win the 1v1 variant. To win the full 5v5 game, we will need to run fewer experiments, where each experiment is at least 1 order of magnitude larger (possibly more!).
  • TLDR: What matters is the size and speed of our experiments. In the old days, a big cluster could not let anyone run a larger experiment quickly. Today, a big cluster lets us run a large experiment 100x faster.
  • In order to be capable of accomplishing our projects even in theory, we need to increase the number of our GPUs by a factor of 10x in the next 1–2 months (we have enough CPUs). We will discuss the specifics in our in-person meeting.
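The compute points above rest on synchronous data parallelism: shard a batch across many workers, have each compute a gradient on its shard, and average. The averaged update is mathematically identical to one large-batch step, which is why a big cluster now buys speed on a single experiment rather than just breadth. A minimal numpy sketch, with a toy linear model standing in for a real network and a plain mean standing in for a cluster all-reduce:

```python
import numpy as np

def grad_linear_mse(w, X, y):
    # Gradient of mean squared error for the linear model X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 8))            # one "big" batch
y = X @ rng.normal(size=8)
w = np.zeros(8)

n_workers = 4
shards_X = np.array_split(X, n_workers)   # shard the batch across workers
shards_y = np.array_split(y, n_workers)

# Each worker computes a gradient on its own shard; averaging them (the
# "all-reduce" step on a real cluster) reproduces the full-batch gradient.
local = [grad_linear_mse(w, Xs, ys) for Xs, ys in zip(shards_X, shards_y)]
assert np.allclose(np.mean(local, axis=0), grad_linear_mse(w, X, y))
```

Each worker touches a quarter of the data, so wall-clock time per step drops roughly fourfold, minus communication cost; that cost is why interconnect bandwidth comes up again in the next email.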
Dota 2:
  • We will solve the 1v1 version of the game in 1 month. Fans of the game care about 1v1 a fair bit.
  • We are now at a point where a single experiment consumes 1000s of cores, and where adding more distributed compute increases performance.
Rapid learning of new games:
  • Infra work is underway
  • We implemented several baselines
  • Fundamentally, we're not where we want to be, and are taking action to correct this.
Robotics:
  • Current status: the HER algorithm (https://www.youtube.com/watch?v=Dz_HuzgMzxo) can very rapidly learn to solve many low-dimensional robotics tasks that were previously unsolvable. It is non-obvious, simple, and effective (its core relabeling trick is sketched after this list).
  • The above will be deployed on the robotic hand: [Link to Google Drive] [this video is human-controlled, not algorithm-controlled; you need to be logged in to the OpenAI account to see it].
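For readers unfamiliar with HER, its core trick is goal relabeling: a failed episode is replayed as if the goal had always been the state the agent actually reached, so sparse-reward failures still produce learning signal. A toy sketch with an assumed (state, action, achieved goal, desired goal) transition format; rewards are computed on the current achieved goal for brevity, and none of this is OpenAI's actual implementation:

```python
import random

def her_relabel(episode, k=4):
    """episode: list of (state, action, achieved_goal, desired_goal) tuples.
    Returns transitions relabeled with the 'future' strategy."""
    out = []
    for t, (s, a, achieved, desired) in enumerate(episode):
        # The original transition, rewarded against the real goal.
        out.append((s, a, desired, float(achieved == desired)))
        for _ in range(k):
            # k extra copies, rewarded against goals reached later on.
            future = random.choice(episode[t:])[2]
            out.append((s, a, future, float(achieved == future)))
    return out

# Toy 1-D episode: the agent wanted to reach 5 but only reached 3.
episode = [(i, +1, i + 1, 5) for i in range(3)]
buffer = her_relabel(episode)
print(len(buffer), "transitions,", sum(r for *_, r in buffer), "with reward 1")
```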
Self play as a key path to AGI:
  • Self play in multiagent environments is magical: if you place agents into an environment, then no matter how smart (or not smart) they are, the environment will provide them with the exact level of challenge, which can be faced only by outsmarting the competition. So for example, if you have a group of children, they will find each other's company to be challenging; likewise for a collection of super intelligences of comparable intelligence. So the "solution" to self-play is to become more and more intelligent, without bound.
  • Self-play lets us get "something out of nothing." The rules of a competitive game can be simple, but the best strategy for playing this game can be immensely complex. [motivating example: https://www.youtube.com/watch?v=u2T77mQmJYI].
  • We are training agents in simulation to develop very good dexterity via competitive fighting, such as wrestling. Here is a video of ant-shaped robots that we trained to struggle: <redacted> (a toy self-play loop is sketched below).
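The "exact level of challenge" property is easy to see in miniature: if the opponent pool consists only of frozen past copies of the learner, the curriculum hardens automatically as the learner improves, with no external difficulty schedule. A toy sketch in which a single scalar "skill" stands in for a policy and a fixed nudge stands in for real RL updates:

```python
import random

skill = 0.0
snapshots = [skill]                      # frozen past selves as opponents
for step in range(1, 1001):
    opponent = random.choice(snapshots)  # challenge tracks our own history
    # Crude stand-in for policy improvement: move skill toward just beating
    # the sampled opponent; a real system would run RL updates here.
    skill += 0.1 * (opponent + 1.0 - skill)
    if step % 100 == 0:
        snapshots.append(skill)          # freeze a new, stronger opponent

print(f"skill grew from 0.00 to {skill:.2f} with no external curriculum")
```

Because every new snapshot raises the average opponent, the fixed point keeps moving upward: the unbounded improvement the email describes.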
We have a few more cool smaller projects. Updates to be presented as they produce significant results.

Elon Musk to: Ilya Sutskever, (cc: Greg Brockman, [redacted]) - Jun 12, 2017 10:52 PM

Thanks, this is a great update.

Elon Musk to: Ilya Sutskever, (cc: Greg Brockman, [redacted]) - Jun 13, 2017 10:24 AM

Ok. Let's figure out the least expensive way to ensure compute power is not a constraint...

Subject: The business of building AGI 📎

Ilya Sutskever to: Elon Musk, Greg Brockman - Jul 12, 2017 1:36 PM

We usually decide that problems are hard because smart people have worked on them unsuccessfully for a long time. It’s easy to think that this is true about AI. However, the past five years of progress have shown that the earliest and simplest ideas about AI — neural networks — were right all along, and we needed modern hardware to get them working.
Historically, AI breakthroughs have consistently happened with models that take 7–10 days to train. This means that hardware defines the surface of potential AI breakthroughs. This is a statement about human psychology more than about AI: if experiments take longer than this, it's hard to keep all the state in your head and iterate and improve; if experiments are shorter, you'll just use a bigger model.
It’s not so much that AI progress is a hardware game, any more than physics is a particle accelerator game. But if our computers are too slow, no amount of cleverness will result in AGI, just like if a particle accelerator is too small, we have no shot at figuring out how the universe works. Fast enough computers are a necessary ingredient, and all past failures may have been caused by computers being too slow for AGI.
Until very recently, there was no way to use many GPUs together to run faster experiments, so academia had the same “effective compute” as industry. But earlier this year, Google used two orders of magnitude more compute than is typical to optimize the architecture of a classifier, something that usually requires lots of researcher time. And a few months ago, Facebook released a paper showing how to train a large ImageNet model with near-linear speedup to 256 GPUs (given a specially-configured cluster with high-bandwidth interconnects).
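For context, the Facebook result mentioned here (Goyal et al., 2017) combined the high-bandwidth cluster with large-batch training tricks, the best known being the linear learning-rate scaling rule, which by itself is one line:

```python
def scaled_lr(base_lr: float, base_batch: int, batch: int) -> float:
    # Linear scaling rule: grow the learning rate with the global batch size.
    return base_lr * batch / base_batch

# The paper's ImageNet recipe: lr 0.1 at batch 256, so at batch 8192
# (256 GPUs x 32 images each) training targets lr 3.2, reached via warmup.
print(scaled_lr(0.1, 256, 8192))  # -> 3.2
```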
Over the past year, Google Brain produced impressive results because they have an order of magnitude or two more GPUs than anyone. We estimate that Brain has around 100k GPUs, FAIR has around 15–20k, and DeepMind allocates 50 per researcher on question asking, and rented 5k GPUs from Brain for AlphaGo. Apparently, when people run neural networks at Google Brain, it eats up everyone’s quotas at DeepMind.
We're still missing several key ideas necessary for building AGI. How can we use a system's understanding of “thing A” to learn “thing B” (e.g. can I teach a system to count, then to multiply, then to solve word problems)? How do we build curious systems? How do we train a system to discover the deep underlying causes of all types of phenomena — to act as a scientist? How can we build a system that adapts to new situations it hasn't been trained on precisely (e.g. being asked to apply familiar concepts in an unfamiliar situation)? But given enough hardware to run the relevant experiments in 7–10 days, history indicates that the right algorithms will be found, just like physicists would quickly figure out how the universe works if only they had a big enough particle accelerator.
There is good reason to believe that deep learning hardware will speed up 10x each year for the next four to five years. The world is used to the comparatively leisurely pace of Moore’s Law, and is not prepared for the drastic changes in capability this hardware acceleration will bring. This speedup will happen not because of smaller transistors or faster clock cycles; it will happen because like the brain, neural networks are intrinsically parallelizable, and new highly parallel hardware is being built to exploit this.
Within the next three years, robotics should be completely solved, AI should solve a long-standing unproven theorem, programming competitions should be won consistently by AIs, and there should be convincing chatbots (though no one should pass the Turing test). In as little as four years, each overnight experiment will feasibly use so much compute capacity that there’s an actual chance of waking up to AGI, given the right algorithm — and figuring out the algorithm will actually happen within 2–4 further years of experimenting with this compute in a competitive multiagent simulation.
To be in the business of building safe AGI, OpenAI needs to:
  • Have the best AI results each year. In particular, as hardware gets exponentially better, we’ll have dramatically better results. Our DOTA and Rubik’s cube projects will have impressive results for the current level of compute. Next year’s projects will be even more extreme, and what’s realistic depends primarily on what compute we can access.
  • Increase our GPU cluster from 600 GPUs to 5000 GPUs ASAP. As an upper bound, this will require a capex of $12M and an opex of $5–6M over the next year. Each year, we’ll need to exponentially increase our hardware spend, but we have reason to believe AGI can ultimately be built with less than $10B in hardware.
  • Increase our headcount: from 55 (July 2017) to 80 (January 2018) to 120 (January 2019) to 200 (January 2020). We’ve learned how to organize our current team, and we’re now bottlenecked by number of smart people trying out ideas.
  • Lock down an overwhelming hardware advantage. The 4-chip card that <redacted> says he can build in 2 years is effectively TPU 3.0 and (given enough quantity) would allow us to be on an almost equal footing with Google on compute. The Cerebras design is far ahead of both of these, and if they’re real then having exclusive access to them would put us far ahead of the competition. We have a structural idea for how to do this given more due diligence, best to discuss on a call.
2/3/4 will ultimately require large amounts of capital. If we can secure the funding, we have a real chance at setting the initial conditions under which AGI is born. Increased funding needs will come in lockstep with increased magnitude of results. We should discuss options to obtain the relevant funding, as that's the biggest piece that's outside of our direct control.
Progress this week:
  • We’ve beaten our top 1v1 test player (he’s top 30 in North America at 1v1, and beats the top 1v1 player about 30% of the time), but the bot can also be exploited by playing weirdly. We’re working on understanding these exploits and cracking down on them.
    • Repeated from Saturday, here’s the first match where we beat our top test player: https://www.youtube.com/watch?v=FBoUHay7XBI&feature=youtu.be&t=345
    • Every additional day of training makes the bot stronger and harder to exploit.
  • Robot getting closer to solving Rubik’s cube.
    • The improved cube simulation teleoperated by a human: <redacted>.
  • Our defense against adversarial examples is starting to work on ImageNet (the classic attack such defenses are measured against is sketched after this list).
    • We will completely solve the problem of adversarial examples by the end of August.
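For context on the adversarial-examples items: the canonical attack such defenses are measured against is the fast gradient sign method of Goodfellow et al., which perturbs an input by epsilon in the direction that most increases the loss. A minimal PyTorch sketch with a stand-in linear "classifier"; the email does not say what OpenAI's defense was:

```python
import torch

def fgsm(model, x, y, epsilon=0.03):
    # Perturb x by epsilon in the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

model = torch.nn.Linear(10, 3)            # stand-in classifier
x, y = torch.randn(1, 10), torch.tensor([0])
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())            # perturbation bounded by epsilon
```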
████████████████████████████████████
██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████
███████████████████████████████████████████████████████████████████████████████████████████████████████████████

iMessages on OpenAI for-profit 📎

Shivon Zilis to: Greg Brockman - Jul 13, 2017 10:35 PM

How did it go?

Greg Brockman to: Shivon Zilis - Jul 13, 2017 10:35 PM

Went well!
ocean: agreed on announcing around the international; he suggested playing against the best player from the winning team which seems cool to me. I asked him to call <redacted> and he said he would. I think this is better than our default of announcing in advance we’ve beaten the best 1v1 player and then having our bot playable at a terminal at TI ████ ██████████ █████████████████ ██.
gpus: said do what we need to do
cerebras: we talked about the reverse merger idea a bit. independent of cerebras, turned into talking about structure (he said non-profit was def the right one early on, may not be the right one now — ilya and I agree with this for a number of reasons). He said he’s going to Sun Valley to ask <redacted> to donate.

Shivon Zilis to: Greg Brockman - Jul 13, 2017 10:43 PM

<redacted> and others. Will try to work it for ya.

Subject: biweekly update

Ilya Sutskever to Elon Musk, Greg Brockman - Jul 20, 2017 1:56 PM

  • The robot hand can now solve a Rubik's cube in simulation:
https://drive.google.com/a/openai.com/file/d/0B60rCy4P2FOIenlLdzN2LXdiOTQ/view?usp=sharing (needs OpenAI login)
The physical robot will do the same in September
  • 1v1 bot is no longer exploitable
It can no longer be beaten using “unconventional” strategies
On track to beat all humans in 1 month
  • Athletic competitive robots:
https://drive.google.com/a/openai.com/file/d/0B60rCy4P2FOIZE4wNVdlbkx6U2M/view?usp=sharing (needs OpenAI login)
  • Released an adversarial example that fools a camera from all angles simultaneously:
https://blog.openai.com/robust-adversarial-inputs/
  • DeepMind directly used one of our algorithms to produce their parkour results:
DeepMind's results: https://deepmind.com/blog/producing-flexible-behaviours-simulated-environments/
DeepMind's technical papers explicitly state they directly used our algorithms
Our blogpost about our algorithm: https://blog.openai.com/openai-baselines-ppo/ (DeepMind used an older version; the clipped objective at PPO's core is sketched after this list).
  • Coming up:
Designing the for-profit structure
Negotiate merger terms with Cerebras
More due diligence with Cerebras
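The algorithm DeepMind reused is PPO (Schulman et al., 2017), whose core is the clipped surrogate objective. A minimal sketch of just that loss, with random tensors standing in for a real rollout batch; the Baselines post linked above has the full implementation:

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    ratio = torch.exp(logp_new - logp_old)  # pi_new(a|s) / pi_old(a|s)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # Pessimistic (elementwise minimum) objective, negated for descent.
    return -torch.min(ratio * advantages, clipped * advantages).mean()

# Toy usage with random values standing in for a rollout batch.
logp_old = torch.randn(64)
logp_new = logp_old + 0.1 * torch.randn(64)
print(ppo_clip_loss(logp_new, logp_old, torch.randn(64)))
```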

Subject: Beijing Wants A.I. to Be Made in China by 2030 - NYTimes.com 📎

Elon Musk to: Greg Brockman, Ilya Sutskever - Jul 21, 2017 3:34 AM

They will do whatever it takes to obtain what we develop. Maybe another reason to change course. [Link to news article]

Greg Brockman to: Elon Musk, (cc: Ilya Sutskever) - Jul 21, 2017 1:18 PM

100% agreed. We think the path must be:
  1. AI research non-profit (through end of 2017)
  2. AI research + hardware for-profit (starting 2018)
  3. Government project (when: ??)
█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████
  • gdb

Elon Musk to: Greg Brockman, (cc: Ilya Sutskever, [redacted]) - Jul 21, 2017 1:18 PM

Let's talk Sat or Sun. I have a tentative game plan that I'd like to run by you.

Subject: Tomorrow afternoon 📎

Elon Musk to: Greg Brockman, Ilya Sutskever, Sam Altman, (cc: [redacted], Shivon Zilis) - Aug 11, 2017 9:17 PM

████████████████████████████████ Are you guys able to meet or do a conf call tomorrow afternoon?
Time to make the next step for OpenAI. This is the triggering event.

Subject: OpenAI notes

Shivon Zilis to Elon Musk, (cc: Sam Teller) - Aug 28, 2017 12:01 AM

Elon,
As I'd mentioned, Greg had asked to talk through a few things this weekend. Ilya ended up joining, and they pretty much just shared all of what they are still trying to think through. This is the distillation of that random walk of a conversation... came down to 7 unanswered questions with their commentary below. Please note that I'm not advocating for any of this, just structuring and sharing the information I heard.
1. Short-term control structure?
  • Is the requirement for absolute control? They wonder if there is a scenario where there could be some sort of creative overrule provision if literally everyone else disagreed on direction (not just the three of them, but perhaps a broader board)?
2. Duration of control and transition?
  • *The* non-negotiable seems to be an ironclad agreement to not have any one person have absolute control of AGI if it's created. Satisfying this means a situation where, regardless of what happens to the three of them, it's guaranteed that power over the company is distributed after the 2-3 year initial period.
3. Time spent?
  • How much time does Elon want to spend on this, and how much time can he actually afford to spend on this? In what timeframe? Is this an hour a week, ten hours a week, something in between?
4. What to do with time spent?
  • They don't really know how he prefers to spend time at his other companies and how he'd want to spend his time on this. Greg and Ilya are confident they could build out SW / ML side of things pretty well. They are not confident on the hardware front. They seemed hopeful Elon could spend some time on that since that's where they are weak, but did want his help in all domains he was interested in.
5. Ratio of time spent to amount of control?
  • They are cool with less time / less control, more time / more control, but not less time / more control. Their fear is that there won't be enough time to discuss relevant contextual information to make correct decisions if too little time is spent.
6. Equity split?
  • Greg still instinctually anchored on equal split. I personally disagree with him on that instinct and he asked for and was receptive to hearing other things he could use to recalibrate his mental model.
  • Greg noted that Ilya in some ways has contributed millions by leaving his earning potential on the table at Google.
  • One concern they had was the proposed employee pool was too small.
7. Capitalization strategy?
  • Their instinct is to raise much more than $100M out of the gate. They are of the opinion that the datacenter they need alone would cost that so they feel more comfortable raising more.
Takeaways:
Unsure if any of this is agreeable, but just from listening to all of the data points they threw out, the following would satisfy their current sticking points:
  • Spending 5-10 hours a week with near full control, or spend less time and have less control.
  • Having a creative short-term override just for extreme scenarios that was not just Greg / Sam / Ilya.
  • An ironclad 2-3yr minority control agreement, regardless of the fates of Greg / Sam / Ilya.
  • $200M-$1B initial raise.
  • Greg and Ilya's stakes end up higher than 1/10 of Elon's but not significantly (this remains the most ambiguous).
  • Increasing employee pool.

Elon Musk to Shivon Zilis, (cc: Sam Teller) - Aug 28, 2017 12:08 AM

This is very annoying. Please encourage them to go start a company. I've had enough.

iMessages on majority equity and board control 📎

Shivon Zilis to: Greg Brockman - Sep 4, 2017 8:19 PM

Actually I'm still slightly confused on the proposed detail around the share % and board control
Given it sounds like proposal is that Elon always gets max(3 seats, 25% of seats) and all the power rests with the board

Greg Brockman

Yes. Though I am guessing he intended your overrule provision for first bit but I'm not sure

Shivon Zilis

So what power does having a certain % of shares have?
Sounds like intention is static board members, or at least board members coming statically from certain pools
But yeah would be curious to hear the specifics. Also I guess for even board sizes a 50% means no action?

Greg Brockman

I think it would grow to at least 7 pretty quick. The question is not that but when does it transition to traditional board if in fact transitions
And he sounded fairly non-negotiable on his equity being between 50-60 so moot point of having majority

Subject: Re: Current State 📎

Elon Musk to: Ilya Sutskever, (cc: Greg Brockman) - Sep 13, 2017 12:40 AM

Sounds good. The three common stock seats (you, Greg and Sam) should be elected by common shareholders. They will de facto be yours, but not in the unlikely event that you lose the faith of a huge percentage of common stockholders over time or step away from the company by choice.
I think that the Preferred A investment round (supermajority me) should have the right to appoint four (not three) seats. I would not expect to appoint them immediately, but, like I said, I would unequivocally have initial control of the company, though this will change quickly.
The rough target would be to get to a 12 person board (probably more like 16 if this board really ends up deciding the fate of the world) where each board member has a deep understanding of technology, at least a basic understanding of AI and strong & sensible morals.
Apart from the Series A four and the Common three, there would likely be a board member with each new lead investor/ally. However, the specific individual new board members can only be added if all but one existing board member agrees. Same for removing board members.
There will also be independent board members we want to add who aren't associated with an investor. Same rules apply: requires all but one of existing directors to add or remove.
I'm super tired and don't want to overcomplicate things, but this seems approx right. At the sixteen person board level, we would have 7/16 votes and I'd have a 25% influence, which is my min comfort level. That sounds about right to me. If everyone else we asked to join our board is truly against us, we should probably lose.
As mentioned, my experience with boards (assuming they consist of good, smart people) is that they are rational and reasonable. There is basically never a real hardcore battle where an individual board vote is pivotal, so this is almost certainly (sure hope so) going to be a moot point.
As a closing note, I've been really impressed with the quality of discussion with you guys on the equity and board stuff. I have a really good feeling about this.
Lmk if above seems reasonable.
Elon

Subject: Honest Thoughts

Ilya Sutskever to Elon Musk, Sam Altman, (cc: Greg Brockman, Sam Teller, Shivon Zilis) - Sep 20, 2017 2:08 PM

Elon, Sam,
This process has been the highest stakes conversation that Greg and I have ever participated in, and if the project succeeds, it'll turn out to have been the highest stakes conversation the world has seen. It's also been a deeply personal conversation for all of us.
Yesterday while we were considering making our final commitment given the non-solicit agreement, we realized we'd made a mistake. We have several important concerns that we haven't raised with either of you. We didn't raise them because we were afraid to: we were afraid of harming the relationship, having you think less of us, or losing you as partners.
There is some chance that our concerns will prove to be unresolvable. We really hope it's not the case, but we know we will fail for sure if we don't all discuss them now. And we have hope that we can work through them and all continue working together.
Elon:
We really want to work with you. We believe that if we join forces, our chance of success in the mission is the greatest. Our upside is the highest. There is no doubt about that. Our desire to work with you is so great that we are happy to give up on the equity, personal control, make ourselves easily firable — whatever it takes to work with you.
But we realized that we were careless in our thinking about the implications of control for the world. Because it seemed so hubristic, we have not been seriously considering the implications of success.
The current structure provides you with a path where you end up with unilateral absolute control over the AGI. You stated that you don't want to control the final AGI, but during this negotiation, you've shown us that absolute control is extremely important to you.
As an example, you said that you needed to be CEO of the new company so that everyone will know that you are the one who is in charge, even though you also stated that you hate being CEO and would much rather not be CEO.
Thus, we are concerned that as the company makes genuine progress towards AGI, you will choose to retain your absolute control of the company despite current intent to the contrary. We disagree with your statement that our ability to leave is our greatest power, because once the company is actually on track to AGI, the company will be much more important than any individual.
The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So are we. So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.
We have a few smaller concerns, but we think it's useful to mention them here:
In the event we decide to buy Cerebras, my strong sense is that it'll be done through Tesla. But why do it this way if we could also do it from within OpenAI? Specifically, the concern is that Tesla has a duty to shareholders to maximize shareholder return, which is not aligned with OpenAI's mission. So the overall result may not end up being optimal for OpenAI.
We believe that OpenAI the non-profit was successful because both you and Sam were in it. Sam acted as a genuine counterbalance to you, which has been extremely fruitful. Greg and I, at least so far, are much worse at being a counterbalance to you. We feel this is evidenced even by this negotiation, where we were ready to sweep the long-term AGI control questions under the rug while Sam stood his ground.
Sam:
When Greg and I are stuck, you've always had an answer that turned out to be deep and correct. You've been thinking about the ways forward on this problem extremely deeply and thoroughly. Greg and I understand technical execution, but we don't know how structure decisions will play out over the next month, year, or five years.
But we haven't been able to fully trust your judgements throughout this process, because we don't understand your cost function.
We don't understand why the CEO title is so important to you. Your stated reasons have changed, and it's hard to really understand what's driving it.
Is AGI truly your primary motivation? How does it connect to your political goals? How has your thought process changed over time?
Greg and Ilya:
We had a fair share of our own failings during this negotiation, and we'll list some of them here (Elon and Sam, I'm sure you'll have plenty to add...):
During this negotiation, we realized that we have allowed the idea of financial return 2-3 years down the line to drive our decisions. This is why we didn't push on the control — we thought that our equity is good enough, so why worry? But this attitude is wrong, just like the attitude of AI experts who don't think that AI safety is an issue because they don't really believe that they'll build AGI.
We did not speak our full truth during the negotiation. We have our excuses, but it was damaging to the process, and we may lose both Sam and Elon as a result.
There's enough baggage here that we think it's very important for us to meet and talk it out. Our collaboration will not succeed if we don't. Can all four of us meet today? If all of us say the truth, and resolve the issues, the company that we'll create will be much more likely to withstand the very strong forces it'll experience.
  • Greg & Ilya

Elon Musk to Ilya Sutskever (cc: Sam Altman; Greg Brockman; Sam Teller; Shivon Zilis) - Sep 20, 2017 2:17PM

Guys, I've had enough. This is the final straw. Either go do something on your own or continue with OpenAI as a nonprofit. I will no longer fund OpenAI until you have made a firm commitment to stay or I'm just being a fool who is essentially providing free funding for you to create a startup. Discussions are over

Elon Musk to Ilya Sutskever, Sam Altman (cc: Greg Brockman, Sam Teller, Shivon Zilis) - Sep 20, 2017 3:08PM

To be clear, this is not an ultimatum to accept what was discussed before. That is no longer on the table.

Sam Altman to Elon Musk, Ilya Sutskever (cc: Greg Brockman, Sam Teller, Shivon Zilis) - Sep 21, 2017 9:17 AM

i remain enthusiastic about the non-profit structure!

Subject: Non-profit

Shivon Zilis to Elon Musk, (cc: Sam Teller) - Sep 22, 2017 9:50 AM

Hi Elon,
Quick FYI that Greg and Ilya said they would like to continue with the non-profit structure. They know they would need to provide a guarantee that they won't go off doing something else to make it work.
Haven't spoken to Altman yet but he asked to talk this afternoon so will report anything I hear back.
If anything I can do to help let me know.

Elon Musk to Shivon Zilis (cc: Sam Teller) - Sep 22, 2017 10:01 AM

Ok

Shivon Zilis to Elon Musk, (cc: Sam Teller) - Sep 22, 2017 5:54 PM

From Altman:
Structure: Great with keeping non-profit and continuing to support it.
Trust: Admitted that he lost a lot of trust with Greg and Ilya through this process. Felt their messaging was inconsistent and felt childish at times.
Hiatus: Sam told Greg and Ilya he needs to step away for 10 days to think. Needs to figure out how much he can trust them and how much he wants to work with them. Said he will come back after that and figure out how much time he wants to spend.
Fundraising: Greg and Ilya have the belief that 100's of millions can be achieved with donations if there is a definitive effort. Sam thinks there is a definite path to 10's of millions but TBD on more. He did mention that Holden was irked by the move to for-profit and potentially offered a more substantial amount of money if OpenAI stayed a non-profit, but hasn't firmly committed. Sam threw out a $100M figure for this if it were to happen.
Communications: Sam was bothered by how much Greg and Ilya kept the whole team in the loop on happenings as the process unfolded. Felt like it distracted the team. On the other hand, apparently in the last day almost everyone has been told that the for-profit structure is not happening, and he is happy about this at least since he just wants the team to be heads down again.
Shivon

Subject: ICO

Sam Altman to Elon Musk (cc: Greg Brockman, Ilya Sutskever, Sam Teller, Shivon Zilis) - Jan 21, 2018 5:08 PM

Elon—
Heads up, spoke to some of the safety team and there were a lot of concerns about the ICO and possible unintended effects in the future.
Planning to talk to the whole team tomorrow and invite input. Going to emphasize the need to keep this confidential, but I think it's really important we get buy-in and give people the chance to weigh in early.
Sam

Elon Musk to Sam Altman (cc: Greg Brockman, Ilya Sutskever, Sam Teller, Shivon Zilis) - Jan 21, 2018 5:56 PM

Absolutely
