Congrats on the launch, and I really want this to succeed. I think I write too much trivial code that could be replaced by AI like this, like simple auth stuff or JSON parsing logic in my projects.
However, I just installed it in VS Code and tried it in my Django project.
1. I like the refactoring suggestions. Really cool, and it had some awesome suggestions right off the bat.
2. I tried to give it instructions to write a method, and it simply failed. I tried something like: "write a method to parse json object and parse non-zero values from it". All it did was add a method called parseJson(json) with pass as the body :(
I tried a few different variations to get a method written by Mutable but got nothing except new lines.
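For context, the method I was hoping for is only a few lines of Python. This is just my own sketch of what that prompt was asking for (the name parse_nonzero is mine, not anything the tool produced):

```python
import json

def parse_nonzero(json_str):
    """Parse a JSON object string and keep only entries whose value is non-zero.

    Hypothetical sketch of the prompt "write a method to parse json object
    and parse non-zero values from it" -- not actual tool output.
    """
    obj = json.loads(json_str)
    # Keep any key whose value compares unequal to zero.
    return {key: value for key, value in obj.items() if value != 0}
```

So for example parse_nonzero('{"a": 0, "b": 2}') gives back just the "b" entry.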
Anyway, I wish you success. I would love to see more players in this space; I feel a strong need for this kind of AI to supplement my day-to-day coding.
Thank you for trying us out, and we appreciate your congratulations. We are moving very fast, so you should notice steady improvements all the time. We are serious about training our own LLMs and believe a competitive one can be trained on a seed startup's budget.
Feedback that goes directly to us? We've thought about having a feature where you give us feedback and a ticket is filed with the code snippet in question (and the AI suggestion). Is that what you mean?
I noticed the docs mention it sends up to 1000 lines per request and reads the contents of the immediate files.
One thing I have noticed with Copilot when working on large codebases is that the suggestions aren't that useful. For generic code it is fine, but when writing code that depends almost entirely on internal modules, it was pretty useless and generated so much noise that I rarely used it and finally uninstalled it.
I have been wondering if taking a hybrid approach of creating a mini model of the entire project locally and using it with Codex as a complementary source would greatly improve the quality of suggestions. I don't know enough about all the existing tools, so this might be something already implemented. Just wanted to share this with someone who works in this space.
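To make the idea concrete, the local "mini model" could start as something much simpler than a trained model: an index of the project's own symbols that gets prepended to the prompt so a generic model sees internal context. Everything below is a hypothetical sketch of my own, not any existing tool's implementation:

```python
import ast
import pathlib

def project_symbols(root):
    """Collect function and class names from local .py files, forming a
    crude local 'index' of the project. Purely illustrative."""
    names = set()
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text())
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that don't parse cleanly
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                names.add(node.name)
    return sorted(names)

def augmented_prompt(user_prompt, root):
    """Prepend local symbols so a generic completion model can reference
    internal modules instead of hallucinating generic names."""
    symbols = ", ".join(project_symbols(root)[:50])  # cap for prompt size
    return f"# Project symbols: {symbols}\n{user_prompt}"
```

A real version would need embeddings or a fine-tuned model rather than a name list, but even this level of context might cut down on the generic noise.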
That is a great point. We want to scale to entire codebases so that this will no longer be an issue. I think you're on to something when you say that companies, teams and ultimately individuals will have their own models. We're really excited about this future and can't wait to build it.
Thanks for your comment. Our vision of this space goes beyond autocompletion, for example currently we have documentation and the code edit functionality. We are also building out our own LLMs to support other functionality like AI refactoring.
Thanks for your question! I am not super familiar with Private Hub, but I envision companies wanting their own networks (which don't necessarily have to be on-prem) for many reasons. This allows for fine-tuning the network on company/team code idioms, lower latency, and even fine-tuning for very company-specific applications (say, translating their APIs from one language they develop in to all the other languages they support).
Tabnine is my personal favorite as it works with most IDEs including IntelliJ, Vim and even VSCode and the suggestions are high-quality. I actually moved away from Copilot to Tabnine across all IDEs and haven't gone back since.
I am personally quite interested to see how many Codex-wrapper-for-VSCode projects we see. It's the one project every single engineer on HN has considered. (The same goes for the GPT-3 text-generation flavor, the "autocomplete for Gmail/Chrome/whatever" ones.)
What makes it interesting is everyone knows there will be thousands. So the people who decide to continue anyway will presumably have some significant ideas for post-processing.
It's interesting in a game theory sense.
Yes... many of these will be the obvious (generate 5 suggestions and rank them with a different neural net, similar to what Google is doing internally, etc.)
But a small portion will do interesting things. (I know my personal daydreaming session on the topic ended with many pages of possible approaches.)
There are MANY novel applications or approaches possible beyond simply "wrap Codex, add stack-specific context to the prompt, re-rank with secondary model".
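As a rough sketch, the "generate N candidates, re-rank with a secondary model" pipeline is structurally just this. Every function here is a stand-in of my own invention, not a real API:

```python
def generate_candidates(prompt, n=5):
    """Stand-in for a Codex-style completion call made n times with
    temperature > 0 to get diverse candidate snippets."""
    return [f"candidate {i} for {prompt!r}" for i in range(n)]

def rerank_score(prompt, candidate):
    """Stand-in for the secondary model: e.g. a small classifier scoring
    how well a candidate fits the surrounding code. Here, a toy heuristic
    that prefers shorter candidates."""
    return -len(candidate)

def best_suggestion(prompt, n=5):
    """Generate n candidates and surface only the top-ranked one."""
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=lambda c: rerank_score(prompt, c))
```

The interesting design space is entirely inside rerank_score: project context, type-checking the candidates, running tests against them, and so on.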
I think some of the wrappers will actually succeed. But there will be so many (I predict it will become a popular course project for CS classes, etc.).