I wonder which will come first: AI that can truly teach itself about our world, or hand-defined algorithms and other software solutions to most of these context problems. Of course, if an AI becomes intelligent enough, the problem solves itself.
To me it seems that AI on that level will arrive sooner than solutions to many of these context problems, given how much difficulty the best-funded algorithms in the world have answering a question we humans find very simple.
That's exactly what gave rise to the first "AI Winter". People had to hard-code things. But with neural networks, and especially deep learning, the assumption became that there's no need to write programs for pattern recognition anymore; the "AI" just learns it on its own.
That wishful thinking has turned out to be dangerous, though, as we have moved toward an ML-dominated world where we often don't even know how the ML algorithms produce specific results.
Add to that their bizarre behavior (like this example with Siri) and you'll see that the chances of another AI Winter are not low. If we have to develop solutions to context problems one by one, much of the hype and interest in "AI" could evaporate. We'd basically be back to square one.