Sure, makes sense. The original argument was that an ecosystem like Python's could have served as a substitute for Rust's, as an example, and that the statement was therefore baseless. I think the analogy with yours would be an equally good screenwriter or scientific collaborator: you need one, but technically maybe not exactly the one you had.
That said, not trying to undercut anyone's appreciative statements, especially the one from OP in his repo towards Rust! Obviously people (and programming languages, for that matter) aren't simple drop-in replacements for each other in real life. You have to be inspired and empowered by them.
That personal motivation or rapport component is probably what was being missed in the post to which I replied.
Yeah, you're right. Probably I didn't get the correct linguistic context.
But you never know: maybe in another language he would have done a much better project, maybe not. In alternative universes everything would probably be different. If Python didn't exist, this project might not exist either, and it would be at least a little different in any case, so he could have opened the full version to Python codebases too, and that applies to all the other languages.
And Rails could have been written in C. The reason why it wasn't should be clear. Different languages enable different ways of thinking, and what is hard to express in one can come out easily in another.
This is valid for both programming and human languages.
Not everyone has infinite time for development. If Rust was the only language they could get to the performance they want, with all the features, in the time available then the statement is valid.
Note that they talk about the ecosystem, not just the language. If you want a high performance, compiled language, with a good ecosystem of packages that you can leverage then Rust is a great choice. Arguably Go is the only other language that would fit the bill, but for some people the simpler type system doesn't allow the abstractions they want.
Ah, got it! You were being literal and he was not. That's where the confusion is coming from. Your comment came across as mildly aggressive which is probably why you're getting downvoted, but you were just pointing out that what he said made no sense in literal terms which is true.
I don't think it was intended to be interpreted literally. The intent and the feeling behind what he was communicating was represented clearly.
...as risky as installing a proprietary editor plugin which updates automatically, yes.
Also, AFAIK most understandings of MIT, BSD, and Apache 2.0 licenses require you to acknowledge the copyright holders of the source code you compile into your binary, even if the licenses permit binary distribution. I can't find your "Copyright (c) 2018 Tokio Contributors" or "Copyright (c) 2014 The Rust Project Developers" that I'd expect based on `strings TabNine | grep github`. Maybe you've got a lawyer that suggests otherwise? Your plea of "trust me, I have good hygiene" carries less weight when I have to `strings` your stuff to know what shoulders of which giants you're standing on.
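For anyone who wants to run the same check themselves, the `strings`-style scan is easy to reproduce. A rough Python equivalent (the 4-character minimum mirrors the `strings` default; the path and patterns are just for illustration):

```python
import re

def find_attributions(path):
    """Pull printable ASCII runs out of a binary and keep the ones that
    look like copyright notices or GitHub URLs, roughly what
    `strings <binary> | grep -e Copyright -e github` would show."""
    with open(path, "rb") as f:
        data = f.read()
    # Runs of 4+ printable ASCII chars, matching the `strings` default.
    runs = re.findall(rb"[\x20-\x7e]{4,}", data)
    return sorted({r.decode("ascii") for r in runs
                   if b"Copyright" in r or b"github" in r})
```

An empty result on a binary you know embeds MIT/BSD code is exactly the smell described above.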
> ...as risky as installing a proprietary editor plugin which updates automatically, yes.
Can't you make the same complaint about any auto-update functionality in any software? Even if it's BSD licensed, you're still counting on whoever has the authority to push an update not to push malicious code.
This doesn't seem to have anything to do with the fact that his code is proprietary nor his monetisation strategy, so why are you singling him out for those?
Proprietary - Can't patch out the autoupdate, which I might be tempted to do if something else in my toolchain did things at someone else's leisure.
DRM/monetisation - the product as of my comment didn't seem to acknowledge the open source works compiled into the binary, and I didn't think that was a good look for someone with the authority to push out malicious code.
As risky as ones that don't update automatically, too. Just because a plugin doesn't update automatically doesn't mean it can't still do network access. Unless you're actually sandboxing all your IDE plugins and denying most of them network access (and verifying, for every new IDE plugin you install, whether it's allowed network access), but I don't believe that's how IDE plugins generally work.
MIT only requires source attribution. It's the BSD licenses that require attribution for binary forms of redistribution. Still, it is good manners and good cover-your-arse practice to attribute whatever free software work they used (Google does this with their giant "open source licenses" page).
Well, you could argue that the notice is "present" (in a very esoteric sense) in a binary distribution of the software because it was present in the source code used to build it. You could also argue that a compiled version of a program isn't a "copy or substantial portion" of the Software (compilation is effectively a form of translation, which is a derivative work under the Copyright Act in the US -- and not just a copy).
Personally I would still include it in both, but I always had the impression that MIT was looser than BSD-2-Clause about this. BSD-2-Clause explicitly states that binary distribution needs to include the notice in "the documentation and/or other materials provided with the distribution", and I have a feeling that the license authors might've had a reason to want to be explicit about it.
Wait, is the auto-update all that's needed for network? I assumed it was license validation or something. If it's just updating, couldn't you provide a different method of updating, like manual update checking, and then people's concerns would be solved?
> Finally, TabNine will work correctly if you deny it network access (say, by blacklisting update.tabnine.com).
Just to clarify - would it still work if I deny network access for the TabNine binary _after_ validating my license key? Or is the key validation invoked on every launch (hence requiring network access)?
I agree with your concerns - I wonder what could be written to alleviate them? This brings up an interesting problem.
Ie, could we write a monitoring proxy such that, when enabled, all traffic goes through it? This proxy would let the end user monitor 100% of traffic, every HTTP request, and could even have a secondary documentation flow that explains the I/O for security-minded individuals.
Then you'd shut off remote network access to the binary, monitor all traffic, and feel secure knowing that it's only sending what it says it's sending, and why.
With that said, I imagine you could do the same thing with a sniffer. Perhaps a documentation standard could be built into requests/responses, so a monitoring program like Wireshark could sniff the I/O and see what it is.
Do you have any thoughts on how someone could both network-license, and make you feel secure in their I/O? Ie, no trust needed?
I don't think a DRM solution that is both robust against an adversary and inspectable by a stakeholder can be engineered. Software can't look out for both the person running it and the person selling it simultaneously when their needs are mutually exclusive. Cory Doctorow has some eloquent writing on the topic.
In this particular case, the use of TLS (good!) makes it relatively challenging to inspect. Assuming the author isn't shipping a cert in his binary (it doesn't look like it), I'd have to spin up a new VM, load a custom root cert, mess with a TLS-terminating proxy / forwarding solution, and hope he's not using a secondary stream cipher on top of TLS. Maybe I get lucky and https://mitmproxy.org/ or something just works out of the box. In any case, it's a lot of effort to know he's not siphoning up all the source code on the local machine and using it to train v2 of his project. And the more robust the DRM solution, the less feasible it is to inspect.
A combo of two applications: a main app and a network agent. The main app writes each request (registration check or update) to a file, in JSON or another text format, for user inspection. It launches the agent, which reads the same file, applies the operations, sends them to the 3rd party, and writes the result into another file. The main app reads that the second it appears. To keep it simple and avoid deletions, the files might be numbered, with old exchanges kept unless the admin/owner deletes them.
With such a setup, users can see exactly what data is outgoing, can reasonably believe that what's incoming is harmless, the main app gets no network access, the agent has no access to secrets or the system, and the agent can be open source (entirely or mostly).
So, there's a quick brainstorm from how I did privilege-minimization for high-assurance security. This is basically a proxy architecture. That's a generic pattern you can always consider since it can help protect lots of risky apps both ways.
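A minimal sketch of that split in Python (the file names and JSON shapes are purely illustrative, and the `send` callable stands in for whatever the open-source agent would actually do on the network):

```python
import itertools
import json
import os

EXCHANGE_DIR = "exchanges"   # plain-text audit trail the user can inspect
_counter = itertools.count()

def write_request(payload):
    """Main app: record an outgoing request as readable JSON.
    The main app itself never touches the network."""
    os.makedirs(EXCHANGE_DIR, exist_ok=True)
    n = next(_counter)
    with open(os.path.join(EXCHANGE_DIR, f"{n:06d}.request.json"), "w") as f:
        json.dump(payload, f, indent=2)
    return n

def agent_handle(n, send):
    """Agent: read request n, forward it via `send` (the only code with
    network access), and write the reply beside it for inspection."""
    with open(os.path.join(EXCHANGE_DIR, f"{n:06d}.request.json")) as f:
        request = json.load(f)
    response = send(request)
    with open(os.path.join(EXCHANGE_DIR, f"{n:06d}.response.json"), "w") as f:
        json.dump(response, f, indent=2)
    return response
```

Old exchange files just accumulate, numbered, until the owner deletes them, so the whole history stays auditable without any extra tooling.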
I wish someone would figure out the right UX for partial autocompletion. E.g. I type "wo" and my phone suggests ("would", "work", "wonder"); there should be an easy way to say I'm trying to type "working" rather than tapping the "work" autocomplete, then backspace, then "ing".
I'd imagine TabNine has this problem in spades, since it does such long autocompletes. It could suggest "unsigned long long" when I've typed "unsi" and I really want "unsigned long int". Seems like a tough UX problem. ¯\_(ツ)_/¯
Xcode has handled this for years. In Xcode, when autocompletion is presented, hitting Tab will complete the longest unique prefixed subword for the currently-selected tab item. If this results in only having one completion option left, then it completes the whole thing (e.g. adding method arguments and whatnot). Similarly, hitting Return will just complete the whole entry instead of the longest unique prefixed subword.
By that I mean if you have 2 autocompletion options `addDefaultFoo()` and `addDefaultBar()`, and you type `add` to get those options, hitting Tab will fill in `addDefault`, and then hitting Tab again will fill in the rest of the selection.
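That Tab behavior is simple to model. A sketch in Python (assuming plain string candidates; the Return/selection half would need editor state this toy doesn't have):

```python
import os

def tab_complete(typed, candidates):
    """Xcode-style Tab: extend `typed` to the longest prefix shared by all
    matching candidates; if only one candidate matches, complete it fully."""
    matches = [c for c in candidates if c.startswith(typed)]
    if not matches:
        return typed
    if len(matches) == 1:
        return matches[0]
    return os.path.commonprefix(matches)
```

With the example above, the first Tab on `add` yields `addDefault`, and once the user disambiguates (say by typing `F`), the next Tab completes the whole entry.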
Sounds like what you want is fuzzy searching (say fzf ) over autocomplete suggestion results. You could type the prefix, and then fuzzy search by typing the suffix to get your desired word (while letting autocomplete fill in the middle of the word).
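A toy version of that narrowing step, where the in-order subsequence test is the essence of fzf-style matching (everything here is illustrative, not how fzf itself scores):

```python
def fuzzy_narrow(prefix, pattern, candidates):
    """Keep candidates that start with `prefix` and contain the characters
    of `pattern` in order somewhere after it, fzf-style."""
    def is_subsequence(pat, s):
        it = iter(s)
        return all(ch in it for ch in pat)   # `in` consumes the iterator
    return [c for c in candidates
            if c.startswith(prefix) and is_subsequence(pattern, c[len(prefix):])]
```

So typing `wo` then fuzzy-typing `ing` keeps "working" and "wondering" while dropping "would" and "work".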
UX-wise, holding Tab would be best: tab => use the completion (like it works now); hold tab => use this completion but show me further possible completions of that word, and if there aren't any, just keep the caret there (for me to finish writing it manually).
Whilst they're doing that how about adding caret-placement sensitivity:
When I click just after the initial letter, e.g. "w|orking" (the pipe representing the caret), the chances of me wanting to type "worked" are pretty slim; instead it should offer "Dorking" (a UK placename), "borking" and such, according to my frecency scores.
Similarly if I click to place the caret at "work|s" I'm probably after "words" or "worts" (beer stuff), or similar. Again "working|" and I'm probably going to change to a different suffix - works, workers, worked.
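For the suffix-preserving case, a crude sketch of the idea (the lexicon is a plain list here; a frecency-ranked one would slot in the same way):

```python
def caret_suggestions(word, caret, lexicon):
    """Assume a click at `caret` means 'I want to change what's just before
    here': keep the suffix after the caret and free up the start, so a caret
    at index 1 in "working" suggests "Dorking", "borking", etc."""
    suffix = word[caret:]
    return [w for w in lexicon
            if w != word and w != suffix and w.endswith(suffix)]
```

The "work|s" and "working|" cases would be the mirror image, anchoring on the prefix before the caret instead.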
I'm amazed that gboard (Google's Android keyboard) doesn't already do that? Perhaps I missed a setting.
It turns out that iOS prediction will make a provisional guess based on what you typed and will go back and adjust its autocorrections as you type subsequent words. You can see this more clearly if you use dictation, but allegedly it won't do as well if you use the word corrections every time.
The way I use autocomplete is that I type the entire word I mean really quickly. I get most (or all) of it wrong, but the autocompleter has enough information to substitute that with the correct word. It's much faster than the read-evaluate-correct loop you're describing.
I've been using TabNine for a few weeks, and it's really cool how well it works. My first "woah" moment with it was writing a function where the first thing I wanted to do was take the length of the array, and once I started typing
it suggested the entire completion of "= len(bar)". It has a really cool way of picking up your coding style that makes it stand out to me.
Thinking about it more, I wonder how useful that type of autocompletion is for those who can type fast. I wonder how much time it takes my brain to context switch away from "code authoring and typing mode" to recognize the " = len(bar)" in the autocomplete options list. It seems like it would be faster to just type out the " = len(bar)" for those who type a solid 60+ words a minute?
I'm trying it out now. If it works well $30 is nothing for this magic. Especially in VSCode, my favorite editor. I have a problem with many languages not having the support I need. And I also don't have the best memory, so autocompletion makes me much faster and costs me less frustration with Googling.
First impression is that this is insanely fast and is actually giving recommendations based on context, without setting up additional files. So, it's doing exactly as advertised.
I'm using this in Vim and would like to know if there's a way to configure it such that the dropdown does not show up until I hit <C-n> or <C-p>? I realize that this is supposed to be a zero-config tool, and I'm asking for a configuration!
Great job with pricing as well. Going to use this for a week before I commit to the license but $29 is a no-brainer for how much use I'll get out of this autocomplete.
I've prototyped something like this in the past using n-grams and it was surprisingly effective. Where I think it gets really interesting is when you marry ML/NLP tactics with traditional static code analysis.
So you can imagine the ML engine generating the suggestions with the static analyzer ranking the suggestions intelligently.
It's kind of similar to the original AlphaGo where you have the model generate the potential moves that are then ranked by the Monte Carlo Tree Search algorithm.
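A toy of that generator/ranker split with bigrams (the `static_score` callable is a stand-in for a real static analyzer; everything here is illustrative):

```python
from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Count which token follows which: the n-gram 'move generator'."""
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, prev, static_score, k=3):
    """Generate candidates from the bigram model, then re-rank them with a
    static-analysis score (here just a caller-supplied callable)."""
    candidates = model[prev].most_common(10)
    ranked = sorted(candidates,
                    key=lambda c: (static_score(c[0]), c[1]),
                    reverse=True)
    return [tok for tok, _ in ranked[:k]]
```

The point of the split is that the ranker can veto statistically likely but semantically invalid suggestions, the same way MCTS prunes the policy network's moves.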
It looks like it is probably OK. The vim plugin it is based on seems to have already been designed to run using a client/server architecture. The plugin is the client, and it gets its completions from a server.
He just changed it so that it uses TabNine as that server.
Not cool in my book regardless of legality. Rebranding it to tabnine-vim alone is confusing, since none of the legwork for vim support belongs to TabNine. At the very least the original copyright notice should be left intact in the README (iiuc this is required by GPLv3).
It includes a copy of GPL. The README tells you what it applies to, and what it does not. It tells you where to find the original project it was forked from. And all the files that the TabNine guy did not write contain their original copyright notices from the YouCompleteMe authors.
Let me be not the first to say ... nice! You've ticked a lot of boxes for me, Rust to boot. And the price is reasonable. I echo some of the privacy concerns, but I am not a purist who will not use proprietary development tools -- many of which are from small shops. I had questions that I'm sure I'll get answered after I install the trial extension:
I noticed in some of the examples that the autocompletions were multi-word (for the language involved). This makes sense and I have no real problem with that in limited cases. What I wonder is: have you found any issues with autocompletions resulting in less DRY code?
- and -
Since it's not parsing, is it possible to tell it to not show autocompletions based on a pattern? This is no deal breaker, it just annoys me when code comments accidentally invoke the drop-down and I'd imagine that a similar problem could happen with strings.
Awesome plugin Jacob. I have a question about the full version: is it per editor? E.g., I use Sublime Text most of the time but occasionally vim; do I need to buy 2 licenses?
Also are these licenses transferrable between machines (work vs home)?
I've been using TabNine for a couple months now and it's been really great. It "just works" and I don't ever have to worry about it even when opening large projects. It's always fast and high-quality. It really feels like it's just part of Sublime Text in a way that's very rare for a plugin.
> TabNine builds an index of your project, reading your .gitignore so that only source files are included.
Heads up, it's not necessarily uncommon for JS developers to include node_modules in their git repos. If you're developing something like an Electron project or a website instead of a library, it's even sometimes advised to do so -- there's a line of thought that your static dependencies should be tracked as part of your source control.
It might not be a terrible idea to have an alternate config for this that allows excluding other directories. Even if a developer doesn't include their dependencies, they might have old code that they don't want integrated into their suggestions if they're in the middle of a refactor or something.
It's become less popular with the introduction first of `npm shrinkwrap` and then `package-lock.json`. At one point in time, it was recommended behavior in the official npm documentation for site deployments, because there wasn't a way to checksum dependencies.
They've since switched to recommending private repositories like Artifactory instead, which to be fair is usually better for very large organizations nowadays. But that wasn't always the case, and even as recent as 2013, it was the flat-out prevailing advice from package managers like Bower, and there are organizations who are still using and maintaining codebases that were set up in 2013.
You won't see a lot of projects on Github that rely on it, because:
A) Usually Open Source projects are designed to be built on multiple environments/OSes.
However, you want to be careful not to make the mistake of assuming that every project has the same concerns as a standard Open Source project. Especially if an Org is going all in on standardizing dev environments through Vagrant or Docker, the question becomes, "why would you want an extra checkout/build step on top of that?"
Freezing a dependency tree isn't the point. The point is to avoid making a network request and to know that your dependencies will still be there 5 years from now.
Remember that one of the benefits of Git is that it's distributed. Even if you are hosting your own npm mirror, relying on it gets rid of that distributed advantage. It doesn't help you to be able to clone from the person next to you if you can't build without making a network request.
I'm not saying that this should be the norm for everyone. It obviously shouldn't be the norm for libraries. But it's not inherently a crazy or harmful idea.
Depends on if you want to bother setting up Artifactory. The problem with having your dependencies outside of your project directory is you're now relying on a network request and a build step to get your stuff up and running.
It's obviously not right for every project; I wouldn't classify it as default behavior or even standard behavior. But if you're already using Vagrant/Docker to standardize environments across your entire stack, there's an argument to be made that there's really no need not to have your dependencies precompiled and local to the project.
If you can get rid of complexity, it's worth considering whether or not doing so might be a good idea. Across standardized environments, fetching dependencies is extra complexity.
Afaik, most language communities with a package manager are fine with the network request, since it should really only occur on the initial pull and on library updates; not sure what they do with Vagrant, but I imagine just keeping the libs locally and copying them in on Vagrant build.
Eg in pythonland, I’m pretty sure I’ve never seen a repo with packages stored in the repo.
So what happened in jsland that makes the difference?
Python installs its packages system-wide with pip, so you'd never be able to commit those. The default for Ruby gems is also system-wide (although it seems like members of the community are starting to shift away from that).
Node installs packages locally to the project itself. This was partially a direct response to languages like Ruby and Python; the early community felt like system-wide dependencies were usually bad practice. So you can install packages globally in Node, but it's not the default.
When you move away from global dependencies to storing everything in a local folder, suddenly you have the ability to commit things. And at the time, there weren't a ton of resources for hashing a dependency; managers like Yarn didn't exist. So checking into source turns out to be an incredibly straightforward answer to the question of, "how do I guarantee that I will always get the same bytes out?"
People are free to fight me on it, but I would claim that this was not particularly controversial when Node came out, and it is a recent trend that now package managers are advising Orgs to just use lockfiles by default. Although to be fair, a lot of the community ignored that advice back then too, so it's never been exactly common practice in Open Source JS code.
>Python installs its packages system-wide with pip
Standard practice atm is to install packages locally to a project by using venv, or rather pipenv. Afaik, lockfiles remain sufficient. I assume Ruby is in a similar state, but I'm not familiar with its ecosystem.
>And at the time, there weren't a ton of resources for hashing a dependency
I suppose that'd be a big reason, but isn't that basically equivalent to version pinning? (What's the point of versioning, if multiple different sources can be mapped to the same project-version in the npm repo?)
It seems odd to me because it seems like it’d screw with all the tooling around vcs (eg github statistics), conflates your own versioning with other projects, and is the behavior you’d expect when package management doesn’t exist like in a C/++ codebase.
rust/python/ruby/haskell don't see this behavior commonly, specifically because using the package manager is generally sufficient. That JS would commonly use npm only for the initial fetch seems like a huge indictment of npm; it's apparently failing at half its job? It seems really weird to me that the JS community would accept a package manager that isn't managing packages, to the point that adding packages to your VCS becomes the norm, instead of getting fed up with npm.
Adding to it is that, afaik, package management is mostly a solved problem for the common case, and there are enough examples to copy from that I'd expect npm to be in a decent state... but apparently it's not trusted at all?
> Standard practice atm is to install packages locally to a project by using venv, or rather pipenv.
Thanks for letting me know. This is a good thing to know, it makes me more likely to jump back into Python in the future.
I suppose it is to a certain point an indictment of npm; certainly I expected more people to start doing this after the left-pad fiasco. But it's also an indictment of package managers in general.
So let's assume you're using modern NPM or an equivalent. You have a good package manager with both version pinning and (importantly) integrity checks, so you're not worried about it getting compromised. You maintain a private mirror that you host yourself, so you're not worried that it'll go down 5-10 years from now or that the URLs will change. You know that your installation environment will have access to that URL, and you've done enough standardization to know that recompiling your dependencies won't produce code that differs from production. You also only ever install packages from your own mirror, so you don't need to worry about a package that's installed directly from a Github repo vanishing either.
Even in that scenario, you are still going to have to make a network request when your dependencies change. No package manager will remove that requirement. If you're regularly offline, or if your dependencies change often, that's not a solved problem at all. A private mirror doesn't help with that, because your private mirror will still usually need to be accessed over a network (and in any case, how many people here actually have a private package mirror set up on their home network right now?) A cache sort of helps, except on new installs you still have the question of "how do I get the cache? Is it on a flash drive somewhere? How much of the cache do I need?"
If you're maintaining multiple versions of the same software, package install times add up. I've worked in environments where I might jump back and forth between a "new" branch and an "old" branch 10 or 15 times a day. And to avoid common bugs in that environment, you have to get into the habit of re-fetching dependencies on every checkout. When Yarn came out, faster install times were one of its biggest selling points.
I don't think it's a black-and-white thing. All of the downsides you're talking about exist. It does bloat repo size, it does mess with Github stats (if you care about those). It makes tools like this a bit harder to use. Version conflation doesn't seem like a real problem to me, but it could be I suppose. If you're working across multiple environments or installing things into a system path it's probably not a good idea.
But there are advantages to knowing:
A) 100% that when someone checks out a branch, they won't be running outdated dependencies, even if they forget to run a reinstall.
B) If you checkout a branch while you're on a plane without Internet, it'll still work, even if you've never checked it out before or have cleared your package cache.
C) Your dependency will still be there 5 years from now, and you won't need to boot up a server or buy a domain name to make sure it stays available.
So it's benefits and tradeoffs, as is the case with most things.
I understand that the tradeoffs exist; my surprise is mainly that what would be an uncommon workaround in pythonland for workload-specific tasks (e.g. most projects don't have differing library versions across branches, at least not for very long) is common practice in jsland.
Although one factor I just realized is that pip also ships pre-compiled binaries (wheels) instead of the actual source, when available, which would generally be pretty dumb to want in your repo, since it's developer-platform specific; assuming JS only has text files, it would be a more viable common-case strategy in that ecosystem.
Regarding B and C, it's not like you're wiping out your libraries every commit; the common case is install once on git clone, and again only on the uncommon library update. A and B are a bit of an obtuse concern for most projects; I can see them happening and being useful, but e.g. none of my public Python project repos have issue A or B (they're not big enough to have dependency upgrades last more than a day, on a single person, finished in a single go), and for C, it's much more likely my machine(s) will die long before all the PyPI mirrors do;
Which I'm pretty sure is true of like 99% of packages on PyPI, and on npm; which makes the divergent common practice weird to me. It makes sense in a larger team environment, but if npm tutorials are also recommending it (or node_modules/ isn't in standard .gitignores), it's really weird.
And now that you’ve pointed it out, I’m pretty sure I’ve seen this behavior in most js projects I’ve peeked at (where there’ll be a commit with 20k lines randomly in the history), which makes me think this is recommended practice
At one point it did not, and the default behavior when one did `npm install` was to use quite permissive package.json rules that allowed minor and patch updates. I remember being bitten by this a few times years ago, particularly when semver was more poorly understood.
Lot of surprise about something that I thought was not particularly controversial to say. Google has been using vendored dependencies in version control for years. It's also going to be the default behavior in Jai.
Is there something I'm missing that makes those examples particularly abnormal? Has consensus radically shifted since the last time I looked into this?
I've been using it for Python mainly but I find it's really helpful. It can often infer arguments for functions or functions to use based on the variable names. I've used Ruby lots in the past and I think it would work just as well based on my experience. I would give the free version a try and see what you think.
I just installed it in Sublime Text 3. TabNine seems to expand the first autocompletion only if the character to the left of the cursor is not whitespace; i.e. if I'm typing "let v = |" (where | is the cursor) and TabNine shows me a list of autocompletions, pressing the Tab key inserts \t instead of the first suggestion.
Any thoughts on how this performs vs deoplete? I've really enjoyed deoplete. It makes my coding quite a bit faster. However I've recently become pretty frustrated with all the gocode forks and go module interaction, so there's definitely room for improvement.
TabNine and something like deoplete are not directly comparable, in my opinion. Deoplete is a completion framework (with dictionary-based, language-specific sources) and TabNine is an intelligent language-agnostic completion system. You could theoretically have TabNine support deoplete (it is currently YCM-based). As the author mentioned in another reply, dictionary-based completion systems are good for API exploration, while TabNine is for more contextual completion.
I must say that I never found a "clever" autocomplete that really suited me, I just ended up using a rather dumb "hippie-expand" in Emacs that basically tries to complete the word under the cursor using anything it finds in the current file or, failing that, any other open file. It's very dumb but it works regardless of language (including completing plain text in emails for instance) and it's predictable.
I'm pretty interested in your project, the way it seems to be able to learn from the way you type matches my workflow better than the usual "clever" auto-expander. I also have no issues paying for good tools (and $29 is really negligible as far as I'm concerned when it's for productivity tools, my keyboard alone costs an order of magnitude more).
However, and I know I'm probably in the minority here, I won't even consider using your program if I can't get the source. I'm not even asking for a FLOSS license or anything; even if it just came with a tarball that I can't redistribute, I would consider it. But as it stands I would be completely relying on you maintaining the code and porting it to whatever platform I may want to use later. For instance, it seems that you don't provide binaries for the BSDs: https://github.com/zxqfl/tabnine-vim/tree/master/binaries/0.... . I'm sure I could get it to work with Linux binary compatibility on FreeBSD, but why even bother? What if Apple releases an ARM-based desktop a few years from now and you've stopped maintaining your project? Then I have to replace it with something else if I have to code on a Mac. The price is a non-issue, but having to work around the closed-source nature of the software is not something I want to bother with.
Again, I know that I'm probably in the minority and that many people on HN have no issues using mostly closed source development stacks but I genuinely wonder if you'd have much to lose if you kept the same business model but provided the source. I mean, if people want to pirate your program I'm sure they'll find a way even if it's just the binary, so I doubt you gain much from that. Then the risk is people stealing your code but is there really that much secret sauce in an autocomplete program? If people really care won't they just reverse-engineer it anyway? Aren't they even more likely to try and reverse-engineer it if it's the only way to get an open source version that they control?
Maybe I'm overthinking this.
Anyway, I hope I don't appear too negative, that's just my opinion. I'm happy to see people working on improving our code editing experience in any way or form, sometimes it feels like we're still in the stone age with our dumb ASCII files and relatively primitive tooling.
I'm also a big fan of emacs' dumb autocompletion, mainly dabbrev-expand. (Which hippie-expand uses.) I sometimes try other autocompletion methods, including those that use a proper cross-reference. But most of the time I just fall back to dabbrev-expand when I'm in the flow of typing. The main reason is predictability. It will reliably paste words and identifiers that are close above, so reliably that I usually don't slow down to check if it picked the right one.
And it works everywhere. It will also complete this long name I just typed in a markdown document into the filename when creating a new file, and into the class name after that. Yes, there are better methods (like templates) for many use cases if you bother to set them up. But it's amazing how far this single stupid tool already takes you.
TabNine seems to take this one step further. It's really exciting that this concept is getting more mindshare. I'm not going to use it (license), but next time I think about upgrading my autocompletion I'll have a better idea of which direction to take it. I'm always toying with the idea of implementing my own.
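The core of dabbrev-expand fits in a few lines; a rough Python rendition of the nearest-word-wins rule (purely a sketch, not what Emacs actually does internally):

```python
import re

def dabbrev(prefix, buffer_text, point):
    """Complete `prefix` with the word in the buffer nearest to the cursor
    position `point` that starts with it. Nearest-wins is exactly what makes
    the behavior predictable: it usually picks the identifier just above."""
    candidates = [(abs(m.start() - point), m.group())
                  for m in re.finditer(r"[A-Za-z_][A-Za-z0-9_]*", buffer_text)
                  if m.group().startswith(prefix) and m.group() != prefix]
    return min(candidates)[1] if candidates else prefix
```

Hippie-expand is essentially this plus a list of fallback sources (other buffers, file names, whole lines) tried in order.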
As long as there is demand, he'd most probably maintain the project but it's not a disaster if he decides not to.
If there's no demand for this tool in a few years, it could mean two things: either people think it's not worth it (in which case you don't lose anything by not using it), or there are better/cheaper alternatives (and you can use them).
I find it interesting that this quote will become less and less absurd as technology continues to improve.
The confusion stems from the fact that a human can tolerate a certain amount of "wrong" and still give the "right answer". For example you don't need to speak with perfect grammar to be understood. Humans won't choke on syntax errors the same way a browser chokes on malformed html.
Machines are much more rigid and can't understand context and intent. But this is starting to slowly change in the age of machine learning. For example if I make a small typo, I expect an autocompleter to still understand what I was trying to type. It wouldn't be too absurd to believe that in a not too distant future, it would also be able to autocomplete away common/obvious bugs. Maybe it can even autocomplete/rewrite code from near pseudocode if the intent is clear enough.
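Typo tolerance of that sort is already cheap to sketch with plain edit distance; a minimal illustration (the vocabulary and the two-typo threshold are arbitrary choices, not anything TabNine is known to do):

```python
def edit_distance(a, b):
    """Classic single-row dynamic-programming Levenshtein distance."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def forgiving_complete(typed, vocabulary, max_typos=2):
    """Typo-tolerant completion: match `typed` against each word's prefix of
    the same length, tolerating a couple of edits."""
    scored = [(edit_distance(typed, w[:len(typed)]), w) for w in vocabulary]
    return [w for d, w in sorted(scored) if d <= max_typos]
```

So a mistyped "lenght" still surfaces "length", which is the modest version of the "understand what I was trying to type" behavior described above.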
YouCompleteMe knows the specifics of the language you're in, so it will be much better at simple syntax; this can learn how you generally do things and your repeated patterns, which usually gets a good bit of the syntax down too.
It's also a very competitive price. $29 is excellent for a piece of software that helps my day-to-day. It's really a sweet spot between very reasonable and a bit pricey. I'm so happy about this project, and hope it works well (I'll be trying it tonight after work).
On that same note, I wish we as a community were more willing to pay for our tools. If we were, I think more neat and productive projects like this might exist. Yet developers seem to be historically cheap, and our love for open source (which I do share) seems to get mixed up with our unwillingness to spend money on our tooling.