
The gap between the levels of abstraction that humans and machines operate at is much bigger than most AI researchers think. No amount of computing various kinds of gradients can compensate for that. The next AI breakthrough will be a radical development in knowledge modeling and representation.



I suspect the AI community was used to (for decades) solving _no_ problems. Now that _some_ problems are solved in their field (e.g. facial recognition for social networking purposes, playing Pong, playing chess), the thinking is that now all problems are going to be solvable. I think we are going to learn that this isn't the case. Perhaps there is just a threshold of problem hardness beyond which we can't yet get at any particular point in technology-time, or perhaps there's a hard barrier waiting to be discovered beyond which current approaches can never take us, regardless of cleverness or hardware speed/density.


>> I suspect the AI community was used to (for decades) solving _no_ problems.

Where's that coming from? There have certainly been some important advances recently, but to claim that no progress was made is strange.

Just to give a few blatant examples: Deep Blue defeated Kasparov in 1997; Chinook had fully solved draughts (checkers) by 2007; TD-Gammon was playing backgammon consistently at world-champion level by 1995; and two computer programs, JACK and WBRIDGE5, won five bridge championships between 2001 and 2007. All of those are at least 10 years older than AlphaGo/Zero, and each has a very long history, going back to the 1950s in the case of draughts AI [1].

You probably haven't heard about most of them because they were not advertised by a company with the media clout of Google or Facebook, but they were important successes that solved hard problems. There are many, many more results all over the AI bibliography.

And, just to settle this once and for all: this bibliography starts a lot earlier than 2012.

_______________

[1] All of the above is covered in "AI: A Modern Approach", 3rd ed.


Wasn't it sort of the same way during the first AI boom? Expert systems provided some real value in limited domains, and sexy prototypes working with a simplified blocks world hinted at something powerfully world-changing. Then...

> the thinking is that now all problems are going to be solvable

...and then failure and the "AI winter" for a generation after the initial promise had been discredited.


Either humans will drop various language variations to accommodate AI or it will take a looooooong time.


Probably some convergence between the two. You're probably used to changing some syntax around to get Siri/Alexa/* to understand you. That puts some mental tax on you, but you get used to it. Devices will seek to lower that mental tax, but ultimately you'll probably get used to it enough that the tax will feel negligible, and devices won't need to evolve the syntax much past a certain point.

What seems missing in a lot of these threads is the idea of "context", and I think that's where there's lots of room for innovation. Current voice-assistants work "okay" for single-sentence queries, but if the device doesn't understand (or if I fail to phrase things in a way that it's expecting), it doesn't ask clarifying questions, and it doesn't use past exchanges to inform future ones (beyond perhaps some voice training data). It also limits the kinds of things it can do by requiring that all of the necessary information be presented in one utterance. It also raises the "mental tax" on doing "real things" because I know I have to say a long phrase just right or start over, and that's sure to raise anyone's anxiety-levels...
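
To make the "context" point concrete, here's a minimal sketch of what carrying state across turns could look like. The names (DialogueState, handle, REQUIRED) are purely hypothetical and not any real assistant's API; the point is only that the agent remembers slots between utterances and asks a targeted clarifying question instead of making you start over:

    from dataclasses import dataclass, field

    @dataclass
    class DialogueState:
        # Slots gathered so far; persists across utterances.
        slots: dict = field(default_factory=dict)

    # Which slots an intent needs before the agent can act.
    REQUIRED = {"set_reminder": ["what", "when"]}

    def handle(intent: str, heard: dict, state: DialogueState) -> str:
        """Merge newly heard slots into the running state, then either
        act or ask a clarifying question for the first missing slot."""
        state.slots.update(heard)
        missing = [s for s in REQUIRED[intent] if s not in state.slots]
        if missing:
            return f"Okay. {missing[0].capitalize()}?"  # clarify, don't restart
        return f"Reminder set: {state.slots['what']} at {state.slots['when']}"

    # Two-turn exchange: no need to cram everything into one utterance.
    state = DialogueState()
    print(handle("set_reminder", {"what": "call mom"}, state))  # Okay. When?
    print(handle("set_reminder", {"when": "5pm"}, state))       # Reminder set: call mom at 5pm

Even something this crude lowers the "mental tax": a half-understood utterance becomes a follow-up question rather than a failure.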


> Current voice-assistants work "okay" for single-sentence queries

They might work "okay" if you're a native speaker. As an ESL speaker with an accent, I find it's nowhere near close. It's a total PITA beyond "what time is it".


Not only do you need to be a native speaker, you also can't speak with an accent. Most speech recognition currently only works with very "clean" language, free of regional expressions or accents.


Good point. I changed Siri to Spanish to practice my speaking and comprehension, and it was a huge challenge. I assumed it was because Spanish support wasn't as developed as English, but I guess there's just not enough training data for accents in any language.


Your Google search queries use English words, but they don't resemble English. (Or at least, I hope they don't.) Humans adapt to tech just fine.


My money is on humans adapting to AI.



