9 months ago I started a startup.
Today I am officially declaring it a failure!
I've taken some time to write about what we were doing and why I think we failed. I like to keep my audience up to date on my whereabouts, and I think it's important to talk about our failures.
To understand why we failed, first I need to explain what we did:
#### The Problem ####
Within only the last few years, coding assistants like GitHub Copilot and Cursor's Copilot++ have taken the coding space by storm. These tools are great at filling out boilerplate, writing documentation, and handling almost all of the tedious tasks that come with being a programmer. I have even used Copilot++ to fully implement some of my research experiments.
These tools are unbelievably useful for common tasks - ideas that are similar to what has been done before. However, **they utterly fail when you want to work on the cutting edge, or when true novelty is involved**.
Do you want to build upon niche research ideas? Do you want to write code that pilots space rovers? Do you simply want to use the experimental branch of your favorite library or API? Sorry, there's not enough data for that, so these tools will fail. It is impossible to train one coding assistant model that will be good at everything, especially when some of those things are novel concepts.
#### The Vision ####
The only way to overcome this is for coding assistants to learn continually, on the fly. If your work builds on previous research, your assistant should learn about those papers. It should learn from the documentation of the libraries you use. It should remember the history of your projects. And most importantly, it should continually learn from its interaction with you and never stop improving!
The coding assistant should not forget its learnings just because information has left its context window (and no, a larger context is not enough). Our startup was about figuring out how to make this possible. We were tackling the problems of memory and continual learning.
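To make the idea concrete, here is a minimal, hypothetical sketch of a retrieval-style memory that persists across sessions. This is not our actual system: the class, the file name, the `libfoo` example, and the toy bag-of-words "embedding" are placeholders for illustration, and a real assistant would pair something like this with proper embeddings and continual fine-tuning.

```python
# Hypothetical sketch (not our actual system): a tiny persistent memory
# for a coding assistant, so learnings survive beyond one context window.
import json
import math
import re
from collections import Counter
from pathlib import Path


def embed(text: str) -> Counter:
    """Toy bag-of-words "embedding" -- a stand-in for a real embedding model."""
    return Counter(re.findall(r"[a-zA-Z_]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


class AssistantMemory:
    """Stores notes (docs, papers, past interactions) on disk and recalls the most relevant ones."""

    def __init__(self, path: str = "assistant_memory.json"):
        self.path = Path(path)
        self.notes = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, note: str) -> None:
        """Append a new learning and persist it so it survives across sessions."""
        self.notes.append(note)
        self.path.write_text(json.dumps(self.notes))

    def recall(self, query: str, k: int = 3) -> list:
        """Return the k stored notes most similar to the query."""
        q = embed(query)
        return sorted(self.notes, key=lambda note: cosine(q, embed(note)), reverse=True)[:k]


if __name__ == "__main__":
    memory = AssistantMemory()
    memory.remember("The experimental branch of libfoo renamed connect() to open_session().")
    print(memory.recall("how do I connect with libfoo?"))
```

The point of the sketch is only that what the assistant learns should outlive any single context window; actually making that learning robust is the hard research problem.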
#### Why We Failed ####
Going into the startup, I constantly heard the same advice: "Do everything you can to get an MVP out in the first few months, and then iterate." For most startups, this is probably great advice. It's all about getting to your first customers ASAP because that's when you start getting user feedback, and it's when investors take interest. For us, this was a terrible idea (but we did try it). Getting a quick MVP out would mean forsaking the idea of solving these problems at the root, and instead looking to bandage them with big models and hard-coded logic. It would mean ending up with a mediocre product in one of the most crowded spaces in the AI startup boom, and who would want to fund that?
This approach may work well when your differentiator is something that can be developed in a couple of months, but that was not the case for us. Tackling the problems of memory and continual learning in AI requires a deep tech approach. To solve them, you need to hire a talented research team, you need clear thinking, you need to run a lot of experiments, and you need time. And to hire that research team, you need money.
In summary:
We needed time and money to build the tech to differentiate ourselves.
To get the money we needed investors.
To get investors we needed first customers.
To get our first customers we needed time and money to build the tech to differentiate ourselves.
You might see the problem now.
The problem was that we chose an area where we didn't have an entry point. If we already had (1) the tech going into the startup, or (2) the connections/recognition to raise money up front and hire a research team, I think we could have succeeded. So perhaps the 9-month lesson is: if you are going to shoot for the stars and attempt to build something that pushes at the boundaries of current tech, make sure you have a foothold first! If you are in a crowded space and you don't have that foothold, maybe just choose a different idea? lol
#### What's Next ####
We open-sourced part of our codebase; check it out here: github.com/intractai/IntractCodeAPI
I still think this idea can work in the future, but for now, I'm looking for a job!
If you're looking for an ML research engineer (especially for RL) in the US, Canada, Japan, or remote, reach out!
Thinking about doing a video about getting a Master's in ML, given how many people ask me about this.
Which type of video would you prefer:
(I will have a separate video about my master's research)
Hey everyone,
I wanted to give you all a quick update on why I haven't posted a video in 2 months and where I've been:
About 2 months ago I started a new internship, and at the same time I've continued working on my Master's thesis full-time so that I can graduate in ~1 month. I'm also preparing a conference submission to ICLR based on the same research.
Given that I'll be graduating soon, I've been sketching out plans for the next chapter. I plan on starting a startup, and I want that to be ready to go once I graduate. I'd love to talk more about this on my channel once I have some time, but it is still in the very early stages.
And of course, I've been taking some time to chill - I can't just do work all day.
So that's what I've been up to... a lot of stuff. But I'm writing this to let you all know I have every intention of making some new videos once I find the time. I love this channel a lot, and I can't wait to talk about my own work + some other cool ideas I have planned.
It will just have to wait until I'm not drowning in work.
Thank you so much to everyone who has stuck with me, and to the people who have continued to find and watch my content. It means a lot.
What would you think of a new "Project Spotlight" series, where I go in depth on a recent AI project that catches my eye? Generally I would be looking for open-source projects on GitHub that we can use or look through.
I just sent out emails to everyone who won something from the NVIDIA-sponsored raffle. Make sure to check your spam folder; something tells me emails with multiple mentions of a "raffle" won't have the best time with spam filters.
BIG NEWS
Hey everyone! After months of soul searching, I've made a tough decision - I'm leaving this channel to pursue my true calling: anime reviews!
Join me on this new adventure and check out my first anime review:
bit.ly/3KnUa98
What underrated ML paper do you think I should cover?
Old or new, I'm on the hunt for hidden gems.
Share your suggestions in the comments below!
Last chance to sign up for Nvidia's GTC conference and catch lots of awesome AI talks happening tomorrow! It's also your last chance to be eligible for my RTX 3080 Ti giveaway sponsored by Nvidia:
GTC Signup: nvda.ws/408jS7w
Raffle Signup: forms.gle/NyxKRw2tF6Un9CaT6
I like to talk about AI, Machine Learning, and especially Reinforcement Learning! Most of my content is educational and focused on research that I find interesting.