
Friday, July 31, 2020

Tesla and The March of Nines to Full Self Driving

“It always seems impossible until it's done.” ~ Nelson Mandela




Tesla is working on full self-driving (FSD) cars. Some have said this is impossible. When it is done, it will join the growing list of things Tesla has achieved that were once branded impossible. These once-impossible achievements were not always delivered on the promised timeline, but they nonetheless arrived. Trent Eady (the same person tweeting to Elon Musk in the image above) said it well when they wrote, “If Musk promises you the moon in six months and delivers it in three years, keep things in perspective: you’ve got the moon.” How long will the FSD moon take to be delivered? That's what we'll explore below.

In early July of 2020, at the World Artificial Intelligence Conference, Musk said, “I’m extremely confident that Level 5 autonomy, or essentially complete autonomy, will happen, and I think it will happen very quickly. I remain confident that we will have the basic functionality for Level 5 autonomy complete this year.”

There’s a massive amount of work with each order of magnitude of reliability. This is the long 'March of the Nines'.

In the Q2 Financial update later in July, Musk reiterated his confidence in FSD, “The car will seem to have just like a giant improvement. We’ll probably roll it out later this year. [It will] be able to do traffic lights, stops, turns, everything pretty much. Then it will be a long march of nines, essentially. How many nines of reliability are okay? So it’s definitely way better than human, but how much better than human does it need to be? That’s actually going to be the real work. There’s just a massive amount of work with each kind of order of magnitude of reliability.”

What Are Nines?

Musk mentioned the "nines of reliability." What are these nines? There are plenty of systems where 99% reliability is sufficient. If a video game crashes occasionally, it might be annoying, but no real damage is done. Something like a flight control system, on the other hand, needs to be 99.999% reliable or better. Because it's tedious to say, “ninety-nine point nine nine nine," the verbal shorthand is to ignore the decimal point and just say the number of nines, e.g., 99.999% is called five-nines. It would be nice if we had 100% reliable systems, but that is impossible. Failures occur, components age, cosmic rays flip bits... so you have a backup, but the backup could fail too, so you have a backup for the backup, but that could fail too... Each layer of backup improves the overall system reliability, but, short of an infinite number of backups, it is always possible that all of the backups fail at once, either coincidentally or due to a common cause.

Why Nines Matter

Here's a simple example of why 99% is not good enough. There are about 150 billion credit card transactions each year, totaling about $10 trillion. If these transactions were correct 99% of the time, that would be 1.5 billion transactions (~$100 billion) with an error each year. A system at this scale needs to be far better than 99% reliable. Five-nines (99.999%) of reliability would reduce the annual error rate to “only” 1.5 million errors per year (~$100 million). Seven-nines would reduce it to 15,000 errors (still ~$1 million in annual errors). This is a system where it literally pays to improve reliability.
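To make the arithmetic concrete, here's a quick sketch in Python using the article's round figures (~150 billion transactions, ~$10 trillion per year) of how error counts scale with each added nine:

```python
# Annual credit card volume (the article's approximate figures).
TRANSACTIONS = 150e9   # ~150 billion transactions per year
DOLLARS = 10e12        # ~$10 trillion per year

def errors_at(nines: int) -> tuple[float, float]:
    """Return (error count, error dollars) at a given number of nines."""
    failure_rate = 10 ** -nines   # e.g., five-nines -> 1e-5 failure rate
    return TRANSACTIONS * failure_rate, DOLLARS * failure_rate

for n in (2, 5, 7):
    count, dollars = errors_at(n)
    print(f"{n} nines: {count:,.0f} errors, ${dollars:,.0f} in error volume")
```

Each extra nine cuts the error count by a factor of ten, which is exactly why Musk describes reliability work as proceeding "order of magnitude" by order of magnitude.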

What is the March and Why is it So Long?

There are a few ways to look at this and it is different for any effort. Generally speaking, the more complex the system, the more difficult it is to improve the reliability. In a complex system, it can be hard to see the 2nd and 3rd order effects of potential changes.

There are several ways to view this; let's look at the 80/20 Rule.

The 80/20 Rule or Pareto Principle has many applications. For our purposes, we'll consider software development and we'll call feature-complete the 80% mark of the effort for a highly reliable application. Let's say that 80% effort took 8 months. That's an average of 10% each month, so the project should be 100% complete in just 2 more months, right? Unfortunately, the last 20% does not scale linearly like the first 80%. This last 20% is where all the hard problems live. These are the bugs that only show up intermittently, the race conditions, the new feature that would require nearly a complete rewrite, the scalability problems that only show up during a beta at your biggest customer's site... That remaining 20% becomes its own 100% effort. Another eight months later you might have 80% of that 20% done, and the cycle repeats. The number of iterations you go through depends on the level of dependability that your application needs. Let's look at a progression and see how long it would take for this fictional application to reach five-nines.

Cycle    Reliability %       Nines
1        80                  ~1
2        96                  1
3        99                  2
4        99.8                ~3
5        99.97               3
6        99.99               4
7        99.999              5
8        99.9997             5
9        99.99995            6
10       99.99999            7
11       99.999998           ~8
12       99.9999996          8
13       99.9999999          9
14       99.99999998         ~10

According to our 80/20-rule table, it will take 7 development cycles to hit five-nines. In this example, each cycle was 8 months, so that's 56 months or 4 years, 8 months.

Imagine the conversation where you were 8 months into a project and you were 80% done and then you told your boss or customer that the last 20% will take 4 more years. They might think you're sandbagging them. It's hard to believe that it could take as long to go from 99.99% to 99.999% as it did to go from zero to 80% but this is why this is often referred to as “the long tail."
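The table above comes from a simple model: each 8-month cycle completes 80% of whatever work remains, so the unfinished fraction shrinks to one-fifth of its previous value every cycle. A minimal sketch:

```python
# 80/20 model: each cycle finishes 80% of the remaining work, so the
# unfinished fraction after n cycles is 0.2^n.
def reliability_after(cycles: int, per_cycle: float = 0.8) -> float:
    """Percent reliability after a number of 80/20 development cycles."""
    remaining = (1 - per_cycle) ** cycles
    return 100 * (1 - remaining)

for c in range(1, 8):
    print(f"cycle {c} (~{c * 8} months): {reliability_after(c):.5f}% reliable")
```

The table rounds each cycle's figure; the underlying curve makes the "long tail" obvious, since every additional nine costs roughly the same calendar time as the first 80% did.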

The 80/20 rule is straightforward, but as I mentioned at the start, no two projects are the same. Progress is made in fits and starts, and the 80/20 rule is just a rule of thumb and only one possible model.

If the problem that you're tackling has a long tail, then early progress must not be linearly extrapolated to determine a likely completion date.


Another, more academic, way to view the long tail is the Empirical Rule, also known as the "68–95–99.7 rule." You can find tons of equations on this in project planning books, but we'll keep it simple here. With this method, each iteration accounts for another standard deviation of inputs, defects, system behaviors... on a normal continuous probability distribution curve.

Cycle    Reliability %       Nines
1        68                  0
2        95                  1
3        99.7                2
4        99.99               4
5        99.9999             6
6        99.9999998          ~9

If our hypothetical application follows the empirical rule, we'd achieve five-nines in just 5 cycles or 3 years, 4 months. Remember when it seemed like we could be done in just 10 months? If the problem that you're tackling has a long tail, then early progress, although great, should not be linearly extrapolated to determine a likely completion date.
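The figures in that table come straight from the normal distribution: the share of outcomes within k standard deviations of the mean is erf(k/√2). A quick sketch:

```python
import math

def coverage(k: float) -> float:
    """Percent of a normal distribution within +/- k standard deviations."""
    return 100 * math.erf(k / math.sqrt(2))

for sigma in range(1, 7):
    print(f"{sigma} sigma: {coverage(sigma):.7f}%")
```

One standard deviation covers ~68.27%, two cover ~95.45%, three cover ~99.73%, which is where the "68–95–99.7" name comes from; by six sigma the coverage is already past eight nines.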

How Good Are Human Drivers?

The goal is for AI driving systems to be better than human drivers. In our article, “AI Driver: Safer Is Not Enough," we discussed why self-driving cars will need to be more than just a little better than human drivers. But let's just look at human drivers and see where that bar is set.

Despite the accident reports that cause mile-long traffic jams that seem to happen all too often, human drivers do a remarkably good job, all things considered. Humans have poor reaction time, are unable to look in multiple directions simultaneously, are distractable, have several blind spots, occasionally fall asleep at the wheel, drink & drive, have medical issues... yet humans are only involved in an injury collision about once every 1 million miles, and a fatal crash only once per 100 million miles or so driven. This is an injury collision avoidance performance of six-nines and a fatal collision avoidance rating of eight-nines.
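Converting those per-mile rates into nines is a one-line sketch: one event per N miles is an avoidance rate of 1 − 1/N, which is log₁₀(N) nines.

```python
import math

def nines(miles_per_event: float) -> float:
    """Nines of per-mile avoidance for one event per `miles_per_event` miles."""
    return math.log10(miles_per_event)   # e.g., 1e6 miles -> 6 nines

print(f"injury avoidance: {nines(1e6):.0f} nines")   # 1 per ~1 million miles
print(f"fatal avoidance:  {nines(1e8):.0f} nines")   # 1 per ~100 million miles
```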

Applying the Nines to Tesla Full Self Driving

Musk did not promise FSD by the end of 2020; he said he was confident that they would have “basic functionality for Level 5" by the end of the year, and then “the real work" begins. Musk stated, “There’s just a massive amount of work with each kind of order of magnitude of reliability." I think Musk's assessment of the “real work" effort after feature-complete is accurate and an under-appreciated aspect of system development; remember our simple project that took 8 months to reach feature-complete and another 3 to 4 years to reach five-nines. As we see from the human driving data, FSD will need at least six-nines to be as good as a human.

Every Tesla made today has eight cameras, a front-facing radar, and ultrasonic sensors. These sensors are important, but the heart of the system is a deep learning artificial intelligence. All of the various sensor data, GPS info, navigation, speed data, and more are streamed to the AI system where it attempts to make sense of the world around it, make real-time decisions, and get you to your destination without an insurance claim or a hospital visit.

The hard part of a self-driving system is not simply staying in a well-marked lane; it's dealing with all of the edge-cases. Computer systems interacting with each other can have a massive number of edge-cases. Self-driving systems have to interact with the real world, a smorgasbord of edge-cases: occluded signage, rain, snow, dirty cameras, construction, something falling off a truck, potholes, animal crossings, people in costumes, unpredictable human drivers, bicyclists, runners, scooters, skateboarders...

Some have asserted that the tail is so long that it will be impossible for an AI system to drive a car until AI has common sense and understands things like a person looking at their phone is not paying attention and that a person in a costume is still a person. Plus, many situations at intersections are resolved with eye contact and hand waves; how will an AI navigate those? These certainly are difficult problems, but that's what makes engineering interesting. They will be solved without requiring an AI to be conscious; the only question is when.

When Will Tesla Achieve Level 5? 

To know when you're done with a project, you have to know the goal. Going through this, we've established some of the criteria:
  • Hit feature-complete, so the "real work" can begin
  • Better than a human driver (better than six-nines)
  • Able to handle novel situations safely
Using the methods we've outlined above, we need to know how long it took to get to feature-complete. Let's assume Musk is correct and feature-complete will happen in December of 2020. Now we need to know when Autopilot development started. This is a more complicated question. Musk first mentioned Autopilot publicly in May of '13. Certainly, work had started by the time it was being discussed publicly. Using this starting point, it would be 91 months to go from zero to feature-complete. However, Tesla initially worked with Mobileye on the Autopilot 1 system. Autopilot 2 started shipping in October of 2016. This is when the sensor suite that's in production now was first seen. Using this starting point, it would be 50 months from start to feature-complete.

Andrej Karpathy became Tesla's director of artificial intelligence in June 2017. With Karpathy's arrival, the direction for Autopilot development was shifted greatly with more operations moved into a unified neural net backbone with multiple heads (dubbed the hydranet). Using Karpathy's arrival as the starting date would yield 42 months.

In April of 2019, Tesla released Hardware 3 and referred to it as their FSD computer. This is a Tesla-designed custom system-on-a-chip inference engine. Tesla claims that the new system was 21 times faster than their previous vendor-supplied solution. This is when Tesla said that they had the hardware platform that they required for FSD to be achieved. Based on this date, the time to feature-complete would be 20 months.

Now, which of these dates should we select as our start? I don't want to keep "moving the goalpost" and allow any significant event to be a restart point, yet I don't want false starts or work by suppliers to count against the time either. Given these competing goals, I'm selecting Andrej Karpathy's start date as the legitimate beginning of Tesla's current FSD direction. (Let me know which date you'd select.)

Okay, so given Karpathy's start date and a possible feature-complete of December 2020, that's 42 months to feature-complete. So looking at the two models we have above, how long would it take to reach the goal of better than six-nines?

The 80/20 rule would need 9 iterations (8 more after December). That would be 8 * 42 months or 28 years to get to six-nines. By this model, the steering wheel could be deleted from the parts list in December of 2048. Let's look at the other model.

The empirical model gets to six-nines in only 5 iterations (4 more after December). That would be 4 * 42 months or 14 years before you could fall asleep and wake up safe and sound at your destination.

It's possible that we won't see self-driving cars until 2034, but let's use the more recent HW3 date. You could make a case that until this hardware was available, the AI was severely limited and this bottleneck hampered progress. Using this more optimistic date, it was 20 months from power-on to feature complete. And since we're going for the optimistic model, let's use the empirical model. Five iterations (4 more after December) would be 4 * 20 months or 6 years 8 months. That puts the 'sitting in New York and summon your car from LA' date as August of 2027.
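The date arithmetic above can be sketched as follows. The cycle lengths and cycle counts are the article's; the rest is just calendar math:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the first of the month `months` after date `d`."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

feature_complete = date(2020, 12, 1)   # Musk's projected feature-complete
scenarios = [
    ("Karpathy start, 80/20 model", 42, 8),     # 8 more 42-month cycles
    ("Karpathy start, empirical model", 42, 4), # 4 more 42-month cycles
    ("HW3 start, empirical model", 20, 4),      # 4 more 20-month cycles
]
for label, cycle_months, extra_cycles in scenarios:
    done = add_months(feature_complete, cycle_months * extra_cycles)
    print(f"{label}: {done:%B %Y}")
```

This reproduces the December 2048, December 2034, and August 2027 estimates from the three scenarios above.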

Before you assume these models are accurate, let me assure you, they. are. not. These are rules of thumb based primarily on people debugging complex systems, not deep learning AI. They are based on projects that occur within a handful of years on a single generation of hardware. AI is still a nascent field, and major breakthroughs are still occurring. Moore's Law yields periodic doubling of computing performance; in a mature technology, you don't see a 21x performance boost like Tesla's HW3 effort. And neural nets improve along a non-linear "S"-shaped sigmoid curve, which means they can go from incompetence to mastery quickly.


The point of this long entry was to attempt to determine when we might see robo-taxis on the road. Toward this effort, we've generated estimates ranging from 2027 to 2048. This ~20-year window seems large, but if you're reading this, robo-taxis are likely to arrive within your lifetime. What I can guarantee is that new driver-assist features will continue to roll out and improve each year. And when self-driving cars happen, it will be a step-function in human history. Self-driving cars will join the list of humanity's greatest breakthroughs along with the wheel, electricity, and powered flight; they will save more lives than penicillin, and yet they will quickly be as taken for granted as the self-piloting elevator.

Disclosure: I'm long Tesla stock.

Monday, June 22, 2020

Elon's Estimates - Mistaking A Clear View For A Short Distance


Elon Musk is known for many things: Zip2, PayPal, SpaceX, Tesla, Boring Co...
But he is also known for his over-enthusiastic estimates of when a technology can be delivered. Other than Model Y, every one of Tesla's vehicles has been late to market. In December of 2015, Musk said that full self-driving would be available in 2 years; progress has been made, but it is still not here. And more recently, on the Joe Rogan podcast, Musk predicted that within 5 to 10 years people will be able to directly communicate thoughts via brain implants rather than using the slow analog process of speech or writing.

For followers of Musk (fans and detractors alike), this is known as MST or Musk Standard Time. Converting from MST to a Gregorian calendar is not an easy task. It involves leap years and slide rules and it is not possible in all instances.

I don't point this out for ridicule; rather it is to ask the question: Why does Musk continue to make bold predictions on unrealistic timelines?

In short, I think that he is falling into the trap that Paul Saffo warned against:

       Never mistake a clear view for a short distance. ~Paul Saffo

Musk has a clear view of his plan. He's well aware that there will be challenges, but he has built teams and achieved many things that were deemed previously impossible. Create a door-to-door driving directions website - Check; Create an internet payment system - Check; Make sexy fast electric cars that blow away gas cars costing 10 times as much - Check; Create giant energy storage systems that change the way energy is bought and sold - Check; Land rockets on autonomous drone ships at sea - Check; Launch the largest network of low Earth orbit satellites that has ever existed to bring internet access to every square millimeter of the planet - Now underway and looking for Beta customers.

Class ½ Impossibilities

So it is not naiveté that brings Musk to these optimistic timelines. Rather, it's a series of successes. You look at the problem and ask yourself if engineering and innovation can achieve it, or would it require magic. If the answer is the former, then it can be accomplished. Cars will be self-driving, the only question is when. Humans will land on Mars, the question is will it be in this generation or another. When done, these will be incredible feats, but it will not be magic that brought them into existence. These accomplishments will be the product of hard-fought breakthroughs. If you have a vision, a roadmap, the ability to raise capital, the ability to attract great talent, and the ability to adapt based on feedback and learnings, the 'impossible' can be achieved. And maybe, just maybe, the people that accomplish it will be called sorcerers.

The Dunning-Kruger Effect is when someone has little skill or expertise in an area and assumes it will be easy for them. Their lack of knowledge gives them undeserved overconfidence. What Musk 'suffers' from is almost the opposite of this effect. He knows it will be Hell; it's just that he's been on the trail through Hell so many times that he could be a tour guide. And the one sure way to fail is to assume it's not achievable.

Musk has been on the trail through Hell so many times that he could be a tour guide.

What is the opposite of the Dunning-Kruger Effect? Would it be The Kruger-Dunning Effect or perhaps the Regurk-Gninnud Effect? 😃 Knowing something will be hard and doing it anyway is how great things are achieved.

The future will not be bound to a timeline. It is fickle and does not give up its secrets easily, but this should never stop the quest for a better tomorrow.

This post started off with a quote from Paul Saffo and I'll end it here with a quote from one of his contemporaries:

        "The best way to predict the future is to invent it." ~Alan Kay 

For more on Musk's Moonshot management style, check out this article.

Friday, June 12, 2020

Prius vs Model 3

The Toyota Prius was a landmark vehicle. At its introduction, it was the biggest advancement in car tech in decades. Worldwide sales of the Prius passed the 1 million milestone in May 2008, jumped the 2 million mark in September 2010, and reached 3 million in June 2013. It was selling well, it was a halo brand for Toyota and branched off many variants: Prius V, Prius C, Prius-Plug-In, and most recently Prius Prime.

Hybrid Technology Never Crossed the Chasm

Prius was the flagbearer hybrid brand in the industry. A hybrid vehicle from any manufacturer was compared to the industry benchmark, the Prius. Toyota put hybrid tech into many of its other vehicles too, including Lexus models, for a total of 44 different hybrid models sold around the globe.

As I write this in 2020, Toyota has sold over 15 million hybrid electric vehicles. Despite this success, hybrid vehicles have remained a niche product. Hybrid tech has a loyal following, but it has not crossed the chasm to become mainstream.

Will EVs Suffer The Same Fate?

This made me wonder if EVs would suffer the same fate of being relegated to a niche market. As one (far from conclusive) indicator, I decided to compare the sales of the flagship EV (Tesla Model 3) to sales of the flagship hybrid (Toyota Prius). 

Model 3 has been on sale for 11 quarters now, so we put the first 11 quarters of cumulative Prius sales next to the first 11 quarters of cumulative Model 3 sales. Here's that chart:


As you can see, during this time window, the Model 3 is selling significantly better than the Prius did. This does not guarantee that EVs will go mainstream, but it looks like the technology has a shot and, as we wrote here, this could be the decade that it happens.

In the final quarter of Model 3 sales, Model Y was included in the data. I would have preferred to chart just the Model 3, but Tesla lumped Model Y and Model 3 sales together in their Q1 2020 report. However, Model Y has just begun its production ramp, so its volume is not yet significant.

I thought it was important to make this comparison now, since I expect Q2 2020 numbers to be skewed by the pandemic (for Tesla, the rest of the auto industry, and most of the economy).

Will EVs go mainstream? Magic 8-Ball says 'Signs Point To Yes!'

https://en.wikipedia.org/wiki/Toyota_Prius#Sales
https://en.wikipedia.org/wiki/Tesla_Model_3#Deliveries

Thursday, June 4, 2020

2020s The Decade Of The EV


The decade* has not gotten off to a good start: a pandemic, giant killer hornets, racial strife, Ebola outbreak, Michigan dam breaches, Puerto Rico earthquakes, Australian bushfires, Cyclone Amphan, Cyclone Harold, Taal volcano eruption, Brazilian floods & mudslides...

Some of these disasters could leave an indelible mark on this decade; and while I hope that we learn our lessons from these tragedies and improve our society, that's a topic for another forum. This blog is about electric cars and EVs are sure to leave their mark on the 2020s.

Technologies frequently limp along for 10 or 20 years before the stars align and they suddenly become an "overnight success". This decade will be the one where EVs hit this overnight success tipping-point and become the norm. By the end of the decade, new car sales will be dominated by electric vehicles. When you are car shopping in 2029, considering a gas-powered car would be like considering a flip-phone in today's smartphone world.

Source: BloombergNEF
Why do I make this assertion?
  • First, EVs are more fun to drive (they are quieter, smoother, quicker) 
  • Gas prices are volatile and change with the whims of politics, saber-rattling, hurricane refinery outages... Electricity prices are far more stable and you can even generate it yourself from your own roof.
  • Battery prices have and will continue to drop. Batteries are the most expensive component in electric cars today and their price of manufacture has continued to drop. More battery factories are being built today than ever before in history.
  • EVs will be more affordable than gas cars by 2026. Today, if you consider fueling and maintenance, EVs are already cheaper from a long-term total cost of ownership perspective. However, for many people today, the initial sticker shock drives them away from an EV purchase. Following the trend of battery costs, the sticker price for EVs will continue to drop. 
  • Charging speeds will increase. As battery tech improves, the causes of battery degradation will be mitigated and batteries will continue to toughen up and become tolerant to higher charging rates and more heat.
  • Ranges will increase. As battery tech improves, more energy will fit in the same space with less weight. This will be driven by both technology improvements and cost reductions.
  • Charging infrastructure will continue to proliferate. Unless you drive an EV, you are likely unaware of all of the charging infrastructure that already exists. Take a look at the map on plugshare.com; there are many places you can plug in. And as more people start driving EVs, more infrastructure will be deployed at businesses that want to attract EV drivers and by utilities that want to sell electricity.
  • Electric fuel is cheaper. As I write this, gasoline prices are cheaper than they have been in decades. However, even at $1 per gallon, charging overnight at offpeak rates, I'm paying ~70% less per mile than a similar gas-powered car (25MPG @ $1 per gallon compared to $0.05 per kWh @ 4 miles per kWh). 
  • Update: @KennyBSAT pointed out that I forgot to mention the variety of vehicles that will become available during this decade with choices that can "carry more people or a bunch of stuff or tow, all while maintaining range." Good point!
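The fuel-cost bullet above works out as follows; this is a quick sketch using the stated assumptions (25 MPG gas at $1/gallon vs. an EV at 4 miles/kWh on $0.05/kWh off-peak power):

```python
# Per-mile fuel cost under the bullet's assumptions.
gas_per_mile = 1.00 / 25     # $1/gallon at 25 MPG -> $0.04 per mile
ev_per_mile = 0.05 / 4       # $0.05/kWh at 4 miles/kWh -> $0.0125 per mile

savings = 1 - ev_per_mile / gas_per_mile
print(f"EV fuel cost is about {savings:.2%} lower per mile")  # ~70% lower
```

Note this is with gas at historically low prices; at more typical gas prices the per-mile savings are even larger.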

Monday, May 25, 2020

What is Tesla's Project Dojo?


Tesla has made significant investments in artificial intelligence (AI). AI is the key to Tesla's full self-driving (FSD) future. Yet Elon Musk has also called AI humanity's “biggest existential threat.” How do you reconcile this dichotomy? The answer is simple: narrow AI vs. general AI. A narrow AI is trained for a particular task, such as playing a specific game or processing language. These narrow intelligences are not transferable; a narrow chess AI will not know anything about checkers despite the two games sharing a board. A general AI (sometimes called Strong AI or Artificial General Intelligence (AGI)), on the other hand, is the hypothetical ability of a system to learn any intellectual task that a human could learn. Skills an AGI learned in one arena could be applied in new areas, and an artificial superintelligence could quickly develop. An artificial superintelligence may find humans irrelevant or, worse, a threat. This is the “existential threat” that concerns Musk. 

So Tesla's FSD system will be a narrow AI, able to drive your car and you'll even be able to tell it where you'd like to go. You won't, however, be able to chat with the FSD AI about your day, but at least you'll know it won't decide that the best way to reduce traffic accidents is to kill all humans. 


Tesla's AI investments to date include creating an AI software development and validation team, creating a data labeling team, and creating an FSD hardware team to design their own custom neural network inference engine. Next on Tesla's AI investment list is "Project Dojo."


Project Dojo

We've been given a few hints about Dojo: Musk talked about it in the 2019 financial call and Tesla's Director of Artificial Intelligence and Autopilot, Andrej Karpathy, has talked about it at multiple AI conferences. We'll discuss how neural nets work and then move into some wild speculation; but first, we have to acknowledge the Dad Joke that is the name Project Dojo. We know that Project Dojo is intended to vastly improve the Autopilot Neural Network training. If you want to train, where do you go? A Dojo, of course. 



Before we get into Dojo we need to cover a few basics about neural networks. There are two fundamental phases to neural networks (NN): Training and Inference.

Training

NNs have to be trained, and training is a massive undertaking. This is when the digital ocean of data that is the training dataset must be digested. It takes terabytes of data and exaflops of compute to train a complex NN. Through training, the NN forms the "weights" of its nodes. When the training is complete, the resulting NN is tested: a test dataset that was not part of the training dataset, where the expected results are known, is thrown at the network, and if the NN is properly trained, it infers the correct answer for each test. Since Project Dojo is all about training, we'll dig more into this later. Depending on the use case, there may be several stages of simulation and testing before the NN is deployed. Deploying the NN leads us to our next phase, Inference.

Inference

When a neural network receives input, it infers things about the input based on its training; this is known as “inference.” These inferences may or may not be correct. Compared to training, the storage and compute power needed for inference is significantly lower. However, in real-time applications, the inference needs to happen within milliseconds; whereas training can take hours, days, or weeks.

Unlike training, inference doesn't modify the neural network based on the results. So when the NN makes a mistake, it is important that these are captured and fed back to the training phase. This brings us to a third (optional) phase, Feedback.

Feedback

You may have heard the phrase "Data is the new Oil." Nowhere is this more applicable than AI training datasets. If you want an AI that performs well, you have to give it a training set that covers many examples of all of the types of situations that it may encounter. After you have deployed the AI, you have to collect the situations where it did the wrong thing, label it with the expected result, and add this (and perhaps hundreds or thousands of examples like it) to the training dataset. This allows the AI to iteratively improve. However, it means that your training dataset grows with each iteration and so does the amount of computing horsepower needed for training.
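The feedback loop described above can be sketched schematically. This is a toy example, not Tesla's actual pipeline; the "model" here just memorizes labeled examples, but it shows how the training set grows with each deploy-collect-retrain round:

```python
def train(dataset):
    # Toy "model": it simply memorizes every labeled example it has seen.
    return set(dataset)

def deploy_and_collect(model, world):
    # Situations the model handles incorrectly come back for labeling.
    return [x for x in world if x not in model]

def feedback_loop(training_set, world, rounds=3):
    # Deploy, collect failures, fold them into the training set, retrain.
    for _ in range(rounds):
        model = train(training_set)
        training_set = training_set + deploy_and_collect(model, world)
    return train(training_set)

model = feedback_loop(["stop sign"], ["stop sign", "deer", "costume"])
print(sorted(model))   # ['costume', 'deer', 'stop sign']
```

Each round the dataset gets bigger, which is exactly why the compute needed for training keeps growing and why a dedicated training supercomputer starts to make sense.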


Tesla's Autopilot Flywheel 

Now that we've ever so briefly covered AI basics, let's look at how these apply to Tesla's FSD.

Let's start with Deploying the Neural Net. Every car that Tesla makes today is a connected car that receives over-the-air updates. This allows the cars to receive new software versions frequently. When a new version of Autopilot is deployed, Tesla collects data about its performance. The AI makes predictions such as the path of travel, where to stop, et cetera. If Autopilot is driving and you disengage it, this may be because it was doing something incorrectly. These disengagements are reported back to Tesla (assuming you have data sharing enabled). The report could be a small file that only has the data labels and a few details or it could be streams of sensor data and clips of video footage depending on the type of disengagement and the types of situations that Tesla is currently adding to their training set.

Even if Autopilot is not engaged, it is running in "shadow mode." In shadow mode, it is still making predictions and taking note when you, the human driver, don't follow those predictions. For example, if it predicts that the road bends to the left, but you go straight, this would be noted and potentially reported back to the mothership. If Autopilot infers that a traffic light is green but you stop, this data would again likely be noted and potentially reported back.
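Shadow mode can be sketched as a simple comparison between the system's predictions and what the driver actually did; this is hypothetical logic for illustration, not Tesla's code:

```python
def shadow_mode_events(predictions, driver_actions):
    """Yield (index, predicted, actual) wherever the human disagreed."""
    for i, (predicted, actual) in enumerate(zip(predictions, driver_actions)):
        if predicted != actual:
            yield (i, predicted, actual)

# e.g., the system inferred "stop" at a light, but the driver went.
events = list(shadow_mode_events(
    ["left", "straight", "stop"],
    ["left", "straight", "go"],
))
print(events)   # [(2, 'stop', 'go')]
```

Only the disagreements need to be flagged for possible upload, which keeps the reporting lightweight relative to streaming everything.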

Tesla has about a million vehicles on the road today collectively driving about 15 billion miles each year. The bulk of these cars are from Tesla's Fremont factory. Tesla now has a second factory, Giga Shanghai, putting cars on the road. Soon Giga Berlin and Giga Austin (or will it be Tulsa?) will join them. All of this will result in a large amount of data for the training dataset.

The bigger the training set, the longer it takes to process. However, with a system like this, the best way to improve it is to quickly iterate (deploy it, collect errors, improve, repeat). If training takes months, this slows down the flywheel. How do you resolve this? With a supercomputer dedicated to AI training. This is Project Dojo: make a training system that can drink in the oceans of data and produce a trained NN in days instead of months.


A Cerebras Wafer Scale Engine

Cerebras

At the start, I promised some speculation. As promised, here it is.

The size of the chips used for AI training has been increasing every year. From 2013 to 2019, AI chips increased by about 50% in size. A startup called Cerebras saw this trend and extrapolated it to its natural conclusion of 1 chip per wafer. For comparison, the Cerebras chip is 56 times bigger than the largest GPU made in 2019, it has 3,000 times more on-chip memory, and it has more than 10,000 times the memory bandwidth.

This wafer-scale chip is an AI training accelerator and my conjecture is that a Cerebras chip will be at the heart of Project Dojo. This wafer-scale chip is the biggest (literally and figuratively) breakthrough in AI chip design in a long time.

There is one (albeit tenuous) thread that connects Tesla and Cerebras: both are part of ARK Invest's disruption portfolio. ARK has investments in both companies and meets with their management teams. When two companies could benefit from working together, and their collaboration would benefit a mutual investor like ARK, you can bet introductions would be made.

Thursday, January 16, 2020

10 Years of Trading Tesla (TSLA)



Tesla's stock has been on a tear recently. I've been buying (and occasionally selling) the stock since its IPO in 2010. Below is a brief history of my trading activity.

Of course, I have no way of knowing what the stock will do tomorrow, so don't take this as stock advice.

I bought my first shares soon after the IPO. The stock opened at about $20 and had a dip over the next few weeks. In late June and early July of 2010, I bought at $18, at $17.84, and, at the best price I got, at $16.01 per share.

I held these shares for nearly six years, until early 2016. Why did I sell them then? Two reasons. First, after a stock has had a good run (from $18 to $249, nearly 14x, in this case), I like to take out my initial stake so that no matter what happens to the stock after that, I will always be net positive. The second reason I sold was that we were going to buy a new car in 2016. I didn't sell all of my shares.
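The "take out your initial stake" rule reduces to a one-line formula: sell initial_investment / current_price shares. The numbers below are hypothetical round figures, not my actual position sizes.

```python
# "Take out the initial stake" rule, with hypothetical round numbers.
def shares_to_sell(initial_investment: float, current_price: float) -> float:
    """How many shares to sell so the original cash outlay is recovered."""
    return initial_investment / current_price

# Say $1,800 bought 100 shares at $18; after the run to $249 you only
# need to sell a small slice to get the original cash back out.
n = shares_to_sell(1_800, 249.0)
print(round(n, 1))  # ~7.2 shares sold; the other ~92.8 ride risk-free
```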

My timing to sell was great. The stock dipped later in 2016 and I was able to buy the shares back at a lower price.

After taking delivery of the car in the fall of 2016, my view of the company changed. This was not my first EV (it was my third, actually). I knew that EVs were the future of personal transportation, but Tesla was light-years ahead of everyone else. There was no other car that could compare. After owning a Tesla, all other cars (electric or not) seem like relics from a bygone era. They didn't unlock as you walked up to them, you had to push a button or turn a key to start and stop them, they had tiny screens, they didn't have vast free Supercharging networks, they didn't have 200+ miles of range, they didn't receive firmware updates over-the-air...

Based on this two-pronged belief (1: EVs are the future. 2: Only Tesla has cracked the code), throughout 2017 and 2018, I was buying TSLA whenever the price dropped below $300. At the end of 2018, I sold a portion of my shares at $375. The reason we sold this time was, once again, to buy a Tesla.

Again, my sell timing was lucky. We sold near a local maximum. Soon after we sold, the SEC became concerned with Musk's infamous 420 tweet. This, and other concerns, drove the stock price down in the first half of 2019. This allowed me to buy shares back in the $200s; I even picked up some in May of 2019 for $185 per share. I had just sold for $375 and now I was able to buy it at half that price. How great is that? I understand that an investor would not be happy if they had bought at $375 and saw their investment halved. I, on the other hand, was convinced that this slump in the stock price was temporary. Issues like this get resolved, and Tesla still made the best vehicles in a fast-growing category.

Now, it's early 2020 and the stock is over $500 per share. Again, I am taking some profits for the same two reasons I did initially. One, to remove my seed funds. Doing this allows me to sleep soundly at night. TSLA is a volatile stock. If it goes up, I still own shares and I'll share in the rewards. But if it goes down, I'm not concerned. By removing the money I initially put into it (plus a little), I am guaranteed that, even if the stock goes to zero, I've made money on my Tesla trades. And the second reason is to again buy a Tesla product. This time we are getting Powerwalls installed on our home. More on that in later posts.

It only seems right that, after making money on their stock, I share the profits with them by buying their products. I've certainly done the same with Amazon, Netflix, and Google.

I'm still holding TSLA; I'm long the stock.

http://ts.la/patrick7819