Just because it is not helpful to your workflow, @OldDesigner, does not mean it is not a helpful tool for many others' workflows. As the support on this thread shows, that seems to be the case. I do not think anyone is advocating replacing the current point-and-click design with natural language. In fact, it could be an option in preferences that you can turn on or off. I BELIEVE that is how Todoist works, if I remember correctly, but either way, that would be a great way to implement it so OF is a better task management tool for ALL.
Actually, if you check, I was replying to someone else, and I never intimated that you were condescending or anything else. But never mind.
I actually used Todoist for over a year (Pro plan). As for speculating, it was just that: I did not say I was right, merely that it could be the case.
Really, I just put forward another side: personally, I do not think natural language input is that important (for me), and I detailed why. Sorry for swimming against the collective groupthink.
You know what they say about assumptions. The assumption that I haven't used any of those systems is incorrect. My reply and arguments are based both on the idea behind these systems and on real-world usage of them. Instead of making (incorrect) assumptions, it is better to ask.
And that’s the other problem: not everyone lives in an English-speaking country. The applications you mention have been designed primarily for English-speaking countries, which makes their natural language input very hard or even impossible to use with any non-English language. They may do natural language input well for English, but that doesn’t mean it works as well with other languages. This is extremely important for OmniFocus, since it supports many languages. What good would natural language input do if it isn’t available in one’s native language?
Languages have a certain amount of complexity that is not always easy to capture in code and algorithms. Dutch is such a language, which is why "support for Dutch" mostly means the text has been translated and a dictionary provided. Grammar checking and all the other fancy features are too difficult, so you do not get those. English is actually one of the easiest languages.
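You can see this unevenness directly in Apple's own frameworks. A minimal Swift sketch, assuming Swift 4 or later with Foundation; the exact scheme lists are whatever the OS version actually reports, and the point is only that they differ per language:

```swift
import Foundation

// Ask the OS which linguistic analysis schemes it supports for
// English versus Dutch. The lists depend on the OS version; the point
// is that they are not the same for every language.
let englishSchemes = NSLinguisticTagger.availableTagSchemes(forLanguage: "en")
let dutchSchemes = NSLinguisticTagger.availableTagSchemes(forLanguage: "nl")

print("English:", englishSchemes.map { $0.rawValue })
print("Dutch:  ", dutchSchemes.map { $0.rawValue })
```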
However, the main point remains that it isn’t really necessary for OmniFocus to implement it, since it is a standard feature of the operating system it runs on, and it isn’t going to solve the speed of data entry at all.
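For example, the OS-provided NSDataDetector already pulls natural-language dates out of free text, without the app shipping its own parser. A minimal sketch; the input string is just a hypothetical task title:

```swift
import Foundation

// The system's built-in NSDataDetector parses natural-language dates
// out of ordinary text. Force-try is used only for brevity here.
let detector = try! NSDataDetector(types: NSTextCheckingResult.CheckingType.date.rawValue)
let input = "Submit the report tomorrow at 5pm"  // hypothetical task title
let range = NSRange(input.startIndex..<input.endIndex, in: input)

for match in detector.matches(in: input, options: [], range: range) {
    if let date = match.date {
        print("Detected date:", date)  // resolved relative to the current time
    }
}
```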
Question back to you: have you tried using the natural language input of these (and other) apps with Siri and also with a language that isn’t English?
A quick word of advice here: be more careful with this subject. The world has far more languages than English, with Chinese having the largest number of speakers. It can come across as rather arrogant if you only take English into consideration.
It is actually the other way around: you guys have a very different idea of what natural language input is about than what it really is, as well as of the real problem that is causing the slow data entry. And apparently you have also missed the point of the part of my reply you are referring to here. The problem is that some people here are oversimplifying things. Implementing something like natural language input takes quite a lot of effort because of all the other things you have to take into account. Autocomplete was just an example because it was a rather simple one: the developer of the app can disable autocomplete for the input box (see the sketch below). The point was that the developer needs to be aware of the feature and program the app to disable it. However, as I stated earlier, autocomplete could be a very nice addition to have, so the developer also has to decide whether disabling it is a good idea at all.
Or in other words: things aren’t as easy and simple as you think. There is more than meets the eye.
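To make the autocomplete example concrete, here is a minimal UIKit sketch; the field name is hypothetical. Disabling it is a one-line change, but it is still a deliberate decision the developer has to make:

```swift
import UIKit

// Opting a (hypothetical) quick-entry field out of the system's
// autocorrect and spell checking. Leaving the defaults in place keeps
// the system behaviour, which may be the better choice if
// autocomplete actually speeds up entry.
let taskTitleField = UITextField()
taskTitleField.autocorrectionType = .no
taskTitleField.spellCheckingType = .no
```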
I think the biggest problem here is that (some of) you are looking at this from a user perspective, whereas I’m looking at it from a software and systems engineering perspective (it is my day job). I know a lot about the technology behind the feature you are using, which is why I’m stepping on the brakes here. Some of you seem to have entered a discussion that is above your heads; there are a lot of things you seem unaware of.
You are wrong in this case. Natural language input, speech recognition, etc. all require a lot of computing power. The small devices they run on may pack quite a punch, but it is nowhere near enough. That is why a lot of what you do with Siri, Alexa, Google, etc. goes to their servers and is processed there. The way OmniFocus syncs its database has got nothing to do with that; you simply need to make certain information available to the natural language input service. If you read up on Siri, you will find that some things it does locally on the device and other things are sent to Apple’s servers. It simply comes down to the processing power required. This is going to shift more in the future now that Apple (and others) are designing their SoCs with machine learning, deep learning, and neural networks in mind (Apple made a big deal out of it when introducing the iPhone 8 and X).
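Both constraints (per-locale availability and the dependence on Apple's service) are visible directly in the Speech framework. A minimal sketch, assuming iOS 10 or later:

```swift
import Speech

// Which locales get speech recognition at all, and whether a given
// recognizer is currently usable. isAvailable drops to false when the
// server-backed service cannot be reached.
let locales = SFSpeechRecognizer.supportedLocales()
print("Supported locales:", locales.map { $0.identifier }.sorted())

if let dutch = SFSpeechRecognizer(locale: Locale(identifier: "nl-NL")) {
    print("nl-NL recognizer available right now:", dutch.isAvailable)
} else {
    print("No speech recognizer exists for nl-NL")
}
```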