The Stupidity of Computers (nplusonemag.com)
181 points by mgunes on Feb 7, 2014 | 94 comments



The Never-Ending Language Learner (NELL) is able to build ontologies largely unsupervised, with roughly 87% accuracy, from a base set of seed facts and access to the internet, indefinitely. [1]

Whilst it is unable to assign one symbol to multiple nouns, I think these are more engineering issues than anything. The overall architecture of NELL can be made smarter with horizontally scalable knowledge inference additions.

I think articles like this are going to be out of date fairly soon (if not already out of date privately).

[1] http://en.wikipedia.org/wiki/Never-Ending_Language_Learning


> Accumulated errors, such as the deduction that Internet cookies were a kind of baked good, led NELL to deduce from the phrases "I deleted my Internet cookies" and "I deleted my files" that "computer files" also belonged in the baked goods category.

That is hilarious!

By the way, there is another interesting never-ending learner, which is based on random images from the internet: http://www.neil-kb.com/


Yes, 87% is quite impressive, but that remaining 13% is all-important. The errors sadly propagate (as above); more importantly, they also tend to strike at random.

E.g. a human might not be expected to know much medical terminology, and so could easily get the names/locations/functions of muscles and organs mixed up, whereas a computer might be brilliant at that but then suddenly not know what a door knob is (door handle? knob of butter? etc...)

So ultimately, to make a computer seem intelligent the problem is twofold: getting a high percentage of right answers, whilst also getting the "right" wrong answers.

After all, we don't criticize a human too much for not knowing something like the exact geographic location of a small African nation. But we definitely would if they thought that it was on Mars! It's exactly the same with computers; they're just far more likely to make wildly incorrect mistakes than understandably incorrect ones. So finding a way to mitigate computer mistakes, to make them milder, is every bit as important as eliminating them entirely - a potentially impossible task.


If you think about the way that humans learn, we get formal education that the computer doesn't get. We are corrected for our mistakes by other people. This work did not originally do that. The newer versions of it do get corrected by people, but that doesn't mean it couldn't instead be corrected by more instances of itself operating over different datasets via a consensus algorithm. The algorithm will also probably get flak for not being unsupervised, when human learning is actually partially supervised.

One could argue that humans have a similar problem of errors propagating. Areas that we feel strongly about can bias us against learning in fields such as religion, politics, and, of course, programming language design.

I think that it is possible that humans would make some similarly poor responses in areas that we don't talk about often.


I agree absolutely. I believe that future AIs will be educated in much the same way we educate children. Perhaps less education will be needed - possibly we could start them off at a higher age bracket, for example. But ultimately I'm certain the first "strong" AIs will be educated/supervised to some degree.

My point was more that perhaps we're a bit too focused on the wrong metric for success. The current criteria set is {positives, negatives, false positives, false negatives}, and we try to optimise for high/low degrees of one or the other in order to determine whether a particular approach is successful or not.

What is then overlooked is that perhaps we don't need to have a near-perfect positive rate, but instead to achieve an acceptably-incorrect false positive or false negative rate - where the answer may be wrong, but it's not too far wrong. Much like a human might pin a country like India in the wrong place on the map, but wouldn't ever put it in the middle of the Indian Ocean.

In summation: Perhaps the key for computers to appear intelligent, is not to be perfectly correct, but to be not too disastrously incorrect.


Perhaps you can extend the F1-score[1] to be an F1,w-score that weights errors based on some measure of distance from correctness.

[1] http://en.wikipedia.org/wiki/F1_score
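
Something like this, maybe - a rough Python sketch, where `severity` stands in for whatever domain-specific distance-from-correctness heuristic you could come up with (everything here is made up for illustration):

    def weighted_f1(examples, severity):
        # examples: (predicted, actual) pairs; None means "no answer"/"nothing there".
        # severity: hypothetical heuristic returning how bad a given error is, in [0, 1]
        # (e.g. misplacing India slightly might score 0.1; the Indian Ocean, 1.0).
        tp = fp = fn = 0.0
        for predicted, actual in examples:
            if predicted == actual:
                if actual is not None:        # correct answer
                    tp += 1.0
            elif actual is None:              # spurious answer
                fp += severity(predicted, actual)
            elif predicted is None:           # missed answer
                fn += severity(predicted, actual)
            else:                             # wrong answer: partially both
                fp += severity(predicted, actual)
                fn += severity(predicted, actual)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0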


Sounds like a good idea. Finding that distance heuristic will be a challenge though!


I agree. In fact, errors like thinking an internet cookie is a baked good sound EXACTLY like the type of mistake a small child would make.


That reminds me that as a small kid there were certain spots in my environment beyond which, I was convinced, infinite wilderness began or the world would just end (for example behind forests or hedges). Growing older, I was often sobered when I discovered there was just bog-standard urban landscape.

Yet the machine seems to come from an entirely different direction. It has many more facts accumulated than a child and can articulate and process them with absolute precision.


I remember visiting my childhood elementary school and seeing that the "forest" behind the building was just a scraggly patch of trees with a chain-link fence on the other side. It seemed so much bigger back then...


> they're just far more likely to make wildly incorrect mistakes than understandably incorrect ones

Have you spoken to some members of the general public and asked what they think? People believe all sorts of wacky things. We are just a lot more tolerant of the mistakes people make. http://www.buzzfeed.com/mjs538/things-americans-believe-in


> So finding a way to mitigate computer mistakes, to make them milder, is every bit as important as eliminating them entirely - a potentially impossible task.

Seems like they should run multiple instances against different but overlapping corpuses and see what they agree on. For all we know, any single person would forget what a door knob is if they had all the information in the world in their brain.

But maybe it already works that way internally, I don't know.
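
A toy version of that consensus idea - the fact format, threshold, and instances are all made up for illustration:

    from collections import Counter

    def consensus_facts(instances, min_agreement=0.7):
        # instances: one set of learned facts per learner, each trained on a
        # different (but overlapping) corpus. Keep only the facts that enough
        # of the instances independently agree on.
        votes = Counter(fact for facts in instances for fact in facts)
        quorum = min_agreement * len(instances)
        return {fact for fact, count in votes.items() if count >= quorum}

    instances = [
        {("cookie", "is_a", "baked_good"), ("paris", "is_a", "city")},
        {("file", "is_a", "baked_good"), ("paris", "is_a", "city")},
        {("paris", "is_a", "city")},
        {("cookie", "is_a", "baked_good"), ("paris", "is_a", "city")},
    ]
    print(consensus_facts(instances))  # only ('paris', 'is_a', 'city') reaches quorum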


13% error rate in a self-driving car, hilarity ensues...


Exactly. A human would usually make "acceptably incorrect" decisions, such as not indicating, or speeding, but these mistakes generally don't end in disaster; only occasionally does a really bad mistake end in an accident. So out of that 13%, maybe only <0.1% overall would end up as accidents, the rest being recoverable.

Whereas the uncontrolled randomness of computer mistakes could be horrendous. Hence why I'm saying an interesting avenue to look at in AI, might be to achieve acceptably incorrect answers (acceptable negatives), rather than just aiming for high true-positive and low false-positive rates.


Leave room for a margin of error, same as humans do.


Well, reading through their site, it seems it gets some categories of facts right nearly all the time, and does poorly at some other categories. This should be expected, because some things are hard to understand even for a human, especially if you don't already know a lot about the subject. However, it does seem to assign a confidence estimate to every fact it learns. So it does get some facts wrong, but at least it attaches a probability that the fact is wrong and doesn't assume it is 100% true.


Thanks to both of you for these, very interesting. I am looking at this problem myself at the moment and hadn't found either of them on my own.


> Whilst it is unable to assign one symbol to multiple nouns, I think these are more engineering issues than anything. The overall architecture of NELL can be made smarter with horizontally scalable knowledge inference additions.

It is not a trivial problem. [1] In fact, representing knowledge as some ontology of human-language strings with fixed schemas has fundamental limitations. In many situations it simply fails; disambiguation is just one of them. Others include trustworthiness, temporal information, newly invented phrases, etc.

[1] http://en.wikipedia.org/wiki/Word-sense_disambiguation


As an Aspie, I like the fact that my computer is "stupid".

It understands my commands literally and executes them exactly the way I specified them, down to the last typo. It doesn't try to second-guess my intentions, read my body language, or do any of the thousand other things that neurotypical people do to drive me crazy. After a full, stressful day of interacting with people, interacting with the "stupid" Terminal is a breath of fresh air. I'm sure a lot of other autistic people like computers for the same reason.

If my computer ever began to interpret my words and actions like an actual specimen of homo sapiens does, I'd probably throw it on the ground and destroy it with a jackhammer. When I buy an Intel processor, it's because I want it to crunch numbers for me, not because I want a clone of that thing in the movie "Her".


> Amazon also had your entire purchasing history, and its servers could instantly compute recommendations you would be likely to accept.

I wish. I have long been surprised at how poor Amazon recommendations are given that they have a large pool of actionable data: things I have actually purchased. What else could be a stronger signal?

I have even informed them which purchases not to use as a basis for recommendations, and flagged when recommendations they have made are not interesting (at least for those made from my page while signed in; there's no way to do so with the emails they send).

I preorder plenty of technical books, so Amazon could easily suck more money out of me if they kept me informed of upcoming books by authors I have purchased from, or in the specific areas I purchase in. Instead, I seem to get weekly emails about the latest "popular" books in a very general area (e.g. programming).


Yeah, I have a long purchasing history of sci-fi novels. But then I buy a book about UX and another about value investing, and suddenly my recommendations are all replaced by Warren Buffett and web design books...

Sci-fi novels are an area where I need a large amount of discoverability to find new authors - recommendations are very useful. Not so much when I break my trend and buy a stock market textbook.


From personal experience I suggest you never buy a children's book on your own account.

There doesn't seem to be a way to tell Amazon "Hey this purchase was a one-off, don't use it for recommendations".


Sure there is: in the top left there is a link for "<your name>'s Amazon"; click that, then in the toolbar in the middle click "Improve your recommendations". You'll get a list of all your purchases, each with a checkbox to disregard it.


Or when you buy some newborn-baby clothes or equipment as a gift for a relative and all you see on Amazon thereafter is childcare products.


Did you find a good book on UX? I'd be interested in the title!


Sure, it was Don't Make Me Think by Steve Krug. Quite good, although just a basic short intro book on web usability. I'm generally hopeless at design, so it's given me quite a few ideas. Namely remove 80% of the wall of text on my site!

I've also picked up The Design of Everyday Things by Donald A. Norman. Haven't got around to reading it yet as it's not about the web, more just everyday design and usability, but it came highly recommended.


I had the exact opposite reaction - I was surprised at how accurate their recommendations were, at least for instant video.


Good article, but I disagree with his basic conclusion. To illustrate what I mean, a couple of quotes:

"... will increase the hold that formal ontologies have on us. They will be constructed by governments, by corporations, and by us in unequal measure,..."

"We will define and regiment our lives, including our social lives and our perceptions of ourselves, in ways that are conducive to what a computer can “understand.” Their dumbness will become ours."

I think the opposite is in fact happening. Academically, for example, there used to be a fixed system - in the UK you did GCSEs, A Levels, a bachelor's degree, etc. - which was something like a formal ontology constructed by government: physics shall be divided from chemistry, and both shall be graded from A to E. Now we have all kinds of new forms of education, like online courses, Wikipedia and so on, so you can pretty much find an educational form to fit what you want to do. We're moving from a single formally approved ontology to many competing ones where you can choose.

I'm not sure where posting on Hacker News fits into this. Maybe it is regimenting our lives, including our social lives and our perceptions of ourselves, in ways that are conducive to what a computer can understand!


I've never heard of the magazine n+1, but if this is typical of it, I'm deeply impressed. Particularly, some of the author's conclusions are brilliant.

His only failure is when he tries to prognosticate. Ontologies are reductive only when they're not contextual, for example.


I generally like the magazine. It isn't mainly about technology, though, if that's what you're looking for: it's more of a literature/theory/politics/arts/culture magazine. Articles mainly about science/technology are maybe one per issue on average, though usually pretty good ones (with some misses).

Another new-ish magazine that I mentally place in vaguely the same genre is The New Inquiry: http://thenewinquiry.com


Mainly not about tech? It's such a perfect summary of what has happened in the history of information processing. I'd like to see more of this kind.


This article is; I just meant that the magazine overall is not mainly about tech. Here's their latest ToC, for example: http://nplusonemag.com/print-issue-18/


It's unbelievable. Even in the tech world, I haven't seen reporting of such good quality so far. Why is it not more popular? Maybe because it's paid.

Thank you so much. Please introduce more to us later.


I had a subscription for a couple years and liked it quite a bit. Much of the writing is impeccably crafted. As though the author rewrote the piece 8 times to say exactly what they wanted to say.


n+1 is great. Sometimes it's a nice antidote to tech since they are also very interested in literature and culture. Their print version isn't half bad, either.


His argument seems to be "computers cannot do X yet, therefore computers will never be able to do X".


> Consider how difficult it is to get a computer to do anything. To take a simple example, let’s say we would like to ask a computer to find the most commonly occurring word on a web page, perhaps as a hint to what the page might be about.

Um, has the OP ever tried getting a human to easily perform such a task?


Pointing out the problems with the proposition just illustrates the author's point about the semantic gulf. It's relatively easy for a human not only to see the problem but to point it out snarkily. But short of the singularity, we may not see a computer doing so in the near future.

Abandoning meta-snark despite its pleasures, what is interesting is the way in which the semi-absurd example implied a relevant example, which I now realize is why I wasn't bothered by the semi-absurdity. I didn't really take the words literally.

What was evoked seems to have been the idea of how unlikely it would be that a computer could read the pseudocode and determine that it finds the most frequent words on a page without being given the answer ahead of time.


I wasn't (only) meaning to be snarky, but the proposition he makes is itself a bit of snark and, more to the point, begs the question. If you're assuming that, given a non-trivial webpage, all educated adult humans will agree as to what it's about...well, how can the computational scientist argue against that? If anything, the author himself should be keenly aware of how humans fail at general interpretation, even after having 12 years of education (K-12)...unless he's never had a comments section for his articles.


Meta: your comment finally solidified the proper meaning and usage of "begs the question". I'd sort of understood it, but hadn't seen it used in modern text like that.


I think with that he was trying to show how hard a time computers have working out what a webpage's subject is. Humans have it easy: just read the title of the page or the title of the article.

"In fact, the least commonly occurring words on a page are frequently more interesting: words like myxomatosis or hermeneutics. To be more precise, what you really want to know is what uncommon words appear on this page more commonly than they do on other pages. The uncommon words are more likely to tell you what the page is about."


but realize how long it takes for humans to learn to speak/comprehend complete sentences.. heck to be potty trained even.


>but realize how long it takes for humans to learn to speak/comprehend complete sentences.. heck to be potty trained even.

interesting that a kitten or a puppy gets potty trained much faster than a human child.


I always wonder why computers should be intelligent. In essence they can add numbers pretty quickly. If you interpret the result of many such additions in a certain way you get a text, a video or an operating system. Wouldn't it make more sense to build a machine for thinking instead of reusing one we built for calculating?


That's one of the core high-level AI debates going back decades, whether a fundamentally different hardware substrate is needed for intelligent machines, or if it's instead more of a 'software' problem. Biologically-inspired AI tends to want to build things that look more like networks of neurons as the hardware substrate (among other things), while some other branches of AI find existing computing hardware to be generally fine as a substrate.

Imo it comes down to more of a pragmatic question than a philosophical one, though. Unless we actually discover a super-Turing computational system, any machine will be doing some kind of computation that in principle could be performed by any other Turing-equivalent machine. Of course not all Turing-equivalent machines are equally easy to build everything on top of, which is where the "equivalent in principle" part falls short. But alternate hardware models (assuming not super-Turing ones) don't in themselves get you anything that in some inherent or philosophical sense can't be done by a pile of NAND gates.


Special purpose hardware is faster than software, but it's generally much more expensive. It also doesn't allow for much experimentation or change. A general purpose computer can emulate anything any other computer can do, and designing software is much easier than designing hardware.


Arithmetic can simulate physics. Wouldn't it make more sense to simulate the thinking machine in a standard computer?


ELIZA was actually an AI platform, which made it possible to write all kinds of AI bots. The "doctor" program was a tiny example program, like a hello-world for the platform which was capable of much more. Here's a description of how ELIZA worked: https://csee.umbc.edu/courses/331/papers/eliza.html
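
For a sense of how little machinery the keyword-and-template approach needs, here's a toy sketch in that spirit (not Weizenbaum's actual DOCTOR script - the real one also does things like swapping "my" for "your" - just an illustration):

    import random
    import re

    # Toy ELIZA-style rules: match a keyword pattern, slot part of the
    # user's input into a canned response template.
    RULES = [
        (re.compile(r"\bi need (.*)", re.I),
         ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (re.compile(r"\bi am (.*)", re.I),
         ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (re.compile(r"\bmy (mother|father)\b", re.I),
         ["Tell me more about your {0}."]),
    ]
    FALLBACK = ["Please go on.", "I see.", "What does that suggest to you?"]

    def respond(line):
        for pattern, templates in RULES:
            match = pattern.search(line)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(FALLBACK)

    print(respond("I am worried my computer is smarter than me"))
    # -> e.g. "Why do you think you are worried my computer is smarter than me?"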


If you're like me and did not previously know about SHRDLU, then I highly recommend watching this demonstration:

http://www.youtube.com/watch?v=bo4RvYJYOzI


I learned about it from a 1977 magazine about robots when I was 6 (which I probably still have lying around somewhere). I've been fascinated by computers ever since, and I'm a great admirer of Terry Winograd.


Humans have enough trouble understanding human language, especially in textual form. Context, body language, facial expression, shared experiences, cultural memes, power relationships, and internal motivations all play a role in how language must be interpreted. Ambiguity, emotion, connotation, and double meanings all play critical roles. It's highly unlikely (I would say impossible) that computers can ever be made to fully understand human language.

To the extent that humans and computers are able to communicate with each other, it will be via a cooperatively developed human-computer language that will be influenced as much by the computers as by the humans.

That shared compromise is true today via programming languages and the stilted way we must interact with Google Voice Search and Siri. And while those contours will change over the forthcoming decades, it will continue to be true for all time.

As humans augment themselves with digital computing power and as computer technology itself evolves, things will become very different, but the fundamental disconnect will always be there.


A human-computer language is an interesting idea, but it defeats much of the purpose of natural language: the convenience of being able to talk to a computer normally, or of processing large amounts of text that wasn't intended to be read by a computer. And there is so much data to learn from written in natural language, whereas there would be very little in the new language.


What about this?

Building Brains to Understand the World's Data (google tech talk):

https://www.youtube.com/watch?v=4y43qwS8fl4

The result is http://www.groksolutions.com/


Surprising that the article doesn't talk about the modern day equivalent of Ask Jeeves:

http://www.wolframalpha.com/input/?i=How+old+is+President+Cl...


Some coconut farmers use trained monkeys to harvest crops. I am sure a monkey would make a weak opponent in a fencing match, but they are pretty good with planning and motor skills, in addition to being great climbers. In this sense, the computers (and AI) of our time are very useful. Furthermore, computers and programs are super flexible; they do not have hard-coded limits to their capabilities as humans and monkeys have. As humans, we might be the all-powerful masters for now, but compared to the improving AI we'll soon be what SHRDLU is to Watson.


Is there anything that grasshoppers eat that isn't kosher? Maybe Watson is much smarter than we think.


My first experience with AI was playing against a chess computer in the 1980s and losing. I was young, didn't understand computers, and wasn't a very good chess player, but after I kept losing I had this eerie feeling about the machine, like it was an actual sentient being. I've been aware of the power of anthropomorphism ever since, and have noticed the readiness of humans to ascribe feelings and thoughts to things that don't actually have them. The computer is the medium and tool for capturing the intelligence and data of the programmers behind it, so I think the right question is whether Watson is self-aware. There is no evidence to support that, and it is doubtful it will happen any time soon, but that's almost irrelevant. The combination of the user, the computer, the programmers behind them, and the world's data working together is a new kind of intelligence, and is almost frightening in its power.


They aren't careful eaters, so some grasshoppers almost certainly eat things that aren't kosher.


According to Wikipedia all insects except locusts are non-kosher and some grasshoppers are omnivorous. So there must be some grasshoppers that don't "keep kosher".

https://en.wikipedia.org/wiki/Kosher#Prohibited_foods https://en.wikipedia.org/wiki/Grasshopper#Diet_and_digestion


The stupidity of computers is starting to get frustrating. As a software developer working on a distributed storage system, I've profiled my placement algorithm and, after some thought, have decided there is a better strategy based on a common use-case scenario. I create a branch in my repository and try out my changes. In the meantime, my colleagues have made changes to the development branch I diverged from. My computer doesn't seem to be able to run the differential of the profiling function against my diverged branches and tell me from a simple query whether my changes validate my assumptions (nor whether the combined changes would continue to validate my assumptions before I merge them... it couldn't even construct the profiling function for me).

Ontologies seem like just the tip of the iceberg.

It seems more likely to me that instead of building a general AI, we'll end up being able to map and simulate a human brain within a computer.


Well written article. I wrote a long comment about my views, sat for a bit, re-read the article, then deleted my banter because it wasn't adding anything interesting, but I do want to say these themes have been on my mind lately and I appreciate seeing them discussed so well.


A reductive ontology of the world emerges, containing aspects both obvious and dubious.

Now I'm contemplating what an opposite-of-reductive ontology would look like (if that even makes semantic sense). An ontology that enriches rather than simplifies. It's hard to think about.


What I got out of the article is that computers are good at doing what they are programmed to do. The problem is not bad computers, but rather the fact that we do not know how to program a computer for general intelligence.


There are real, absolute limits on what you can program a set of transistors and logic gates to do. Changing those limits will require changing what a "computer" is. At which point, we'll probably call it something else.


What are those limits? One of the big ideas out of Wolfram's book A New Kind of Science is that, once a rather small threshold is reached, a system made of simple rules becomes universal. And since by definition any universal system can emulate any other universal system, there cannot be a thing computable by one universal system that is not computable by another. (One of the implications of this idea he suggests is that the weather, which he argues is a universal system made of simple rules, is able to do no less sophisticated computations than the human brain.)

He develops this argument more than I have time to do here: http://www.wolframscience.com/nksonline/page-822-text (note that this is near the end of the main section of the book and rests on ideas established earlier).

Wolfram himself seems pretty bullish on the idea of making computers that think sometime in the future in the section I linked to, in a talk he gave at HAL's birthday party called Hal Isn't Here back in '97, and much more recently in some comments on the movie Her.
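
A concrete example of how low that threshold is: elementary cellular automaton Rule 110, an eight-entry lookup table over three-cell neighbourhoods, which Matthew Cook proved to be Turing-universal. A few lines suffice to run it:

    RULE = 110  # bit i of 110 gives the new cell value for neighbourhood pattern i

    def step(cells):
        n = len(cells)
        return [(RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
                for i in range(n)]

    # Start from a single live cell; the update rule is trivial, the behaviour isn't.
    row = [0] * 31 + [1] + [0] * 32
    for _ in range(20):
        print("".join(".#"[c] for c in row))
        row = step(row)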


This is not an original idea by any means; it's called the Church-Turing thesis, and has been around for 50+ years.

As for your question "what are those limits?": In a nutshell, no computer program can ever fully answer questions about the properties of other computer programs. Unlike what GP is implying, that is a hard limit we can never ever get rid of. So if weather is universal, we will never be able to fully understand it!


Certainly Wolfram didn't invent the notion of a universal system, nor claim to, nor was that intended to be implied above, as bad as my writing might be. Indeed, on page 1125 of NKS he discusses Church's and Turing's contributions.

What Wolfram repeatedly points out throughout his book is that the threshold for such universality is much lower than one might suspect given the complications involved in, say, a Turing machine. And because that threshold is so low, there exists the possibility that much of the natural world that is not obviously simple is exhibiting universal computation.


Yes, there are limits, and we aren't anywhere near those limits. Our best algorithms are still primitive and inefficient.


What are these limits and how do you know they exist?


There's a whole subfield of computer science about it: http://en.wikipedia.org/wiki/Theory_of_computation

On the other hand, "All problems in computer science can be solved by another layer of indirection." (http://www.dmst.aueb.gr/dds/pubs/inbook/beautiful_code/html/...) So if that applies to this problem (after enough levels of indirection) then maybe I'm wrong.


Can we create digital logic circuitry that embodies ALL the functionality of a neuron, including interconnections (and here you have to compensate for circuitry limitations that the neurons don't have), in the same size? I think you can start from there to understand current technology's limitations.

I agree we will likely not be calling these things "computers", if we ever invent them.


Even if you remove the size restriction and the physical limitations of manufactured circuitry, we're lacking the perfectly defined model of how a neuron actually works. We have plenty of knowledge about neurons, but nowhere near enough to simulate them realistically. You'd need to be able to account for every possible state a neuron could ever be in.

But a neuron is more than just an input-output state machine, it's affected by levels of oxygen, glucose, and any number of hormones, proteins, and other chemicals in the bloodstream. An adult human's neurons are each individually shaped by their entire existence up to that point. Alcohol consumption, sun exposure, antidepressant medication, hydration levels, exercise levels. It all affects how they work.

And that's just one neuron. Simulating the brain as a solution to this problem is, I think, out of the question.


It wasn't necessary to simulate a feather in a bird's wing in order to fly. In fact, practical flight really only happened once people started taking away features observed in nature.


Defining a computer in terms of transistors and logic gates is analogous to defining a brain in terms of neurons and synapses.

You arbitrarily impose limits in the definition of a computer and lift those limits in the definition of a brain.


>An adult human's neurons are each individually shaped by their entire existence up to that point. Alcohol consumption, sun exposure, antidepressant medication, hydration levels, exercise levels. It all affects how they work.

Possibly, but why on Earth would you want to simulate that? Just simulate what the neuron is supposed to be doing or would be doing under ideal circumstances.


> Just simulate what the neuron is supposed to be doing or would be doing under ideal circumstances.

We still don't know that either.


Couldn't you make the same argument about the absolute limits of neurons and synapses?


Ditto; for one thing, computers are still improving. Who knows what the limits are.

What I think is the problem is that humans do not have a clear explanation for how our intelligence works. Is it really that much of a wonder we can't emulate it?


Two comments:

1. Machine learning is moving more and more towards indirect programming i.e. you program the computer with a learning algorithm and let it work out what to do. Google reinforcement learning, or machine learning. This greatly reduces the programming bottleneck.

2. People underestimate how much processing power the human brain has. Think 100,000,000,000 neurons, each with 1,000 active connections on average and perhaps 10,000 latent connections (which are being updated via Hebbian learning). The connections (axons and dendrites) are the active processing units. The cycle time is .01 seconds or so. Only the very largest computers are anywhere near this processing power (~10^16 operations/second). My current desktop is about 10,000 times less powerful. Now imagine trying to build a tractor with a 1/100 horsepower motor - such a difference is beyond being a gap, it is a qualitative difference.
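
Working that estimate through (same numbers as above; the desktop figure is just backed out of the "10,000 times less powerful" comparison):

    neurons = 1e11             # ~100 billion neurons
    active_connections = 1e3   # active connections per neuron
    cycle_time = 0.01          # seconds per update, i.e. ~100 Hz

    brain_ops = neurons * active_connections / cycle_time
    print(f"~{brain_ops:.0e} connection-updates per second")    # ~1e+16

    desktop_ops = brain_ops / 10_000                             # per the comparison above
    print(f"desktop equivalent: ~{desktop_ops:.0e} ops/second")  # ~1e+12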

Given the limited processing power available it is amazing computers can do what they can. Back in the 1980s a large bank was run on the equivalent of (1/10 of a millimeter of brain tissue)^3.


There are some caveats to comparing the computing power of the brain to computers. Computers have vastly more serial processing power than the brain: transistors are thousands to millions of times faster than neurons, and signals propagate through the brain relatively slowly (neurons fire only about 200 times per second). For problems that can be done iteratively, computers are vastly superior.

Second, computers are intentionally designed to be general purpose, and they sacrifice a lot of potential speed to do this. If you were to build some algorithms into the hardware, they would get vastly more performance (e.g. bitcoin mining), but that's extremely expensive. However, being general purpose has a lot of advantages. Computers will always be faster than neurons at many things.


I'd like to place a bet, but unfortunately, there's no point in placing a bet stating that by a certain date AIs would beat median human intelligence by huge margins. It would be like placing a bet on a prediction of any other extinction event. Impossible to collect, even if the prediction is right.

On the other hand, just for the fun of it, a couple of other predictions. Here: "true, human-level AIs will not be developed until 8Tb RAM sticks become a commodity". Another: "a true, high-fidelity, multi-sensorial brain-computer interface will never be built".


It'll be interesting to see what happens with neural networks. Whether copying the brain is the "shortcut" to true AI, or in fact the only way to achieve it.


It is possible to bet on the apocalypse, though it's not very practical: http://lesswrong.com/lw/ie/the_apocalypse_bet/


Computers aren't stupid, they're only really good at math. And trying to understand human language using math is hard.


Trying to understand human language using anything is hard. There isn't an alternative, non-math solution or we would be using it. Calling it math is a needless abstraction.


Technically they are terrible at math too. All they are good at is doing as they're told.


My smart, faraway and very old relative once said during the dot-com boom: "Computers are just tools, nothing else; people seem to see something magical in computers that is just not there." She was a very clever old lady. Still, I love their logical magic and hope computers will never cease to amaze me.


Hmm...

Wall of text.. ok, challenge accepted!

read

scroll

read

scroll

read

scroll

read

MORE!?...

scroll

NOT MORE I GET IT!

...computers are stupid but at least they don't give up!


I thought it was a very well-written, engaging, and complete overview of the state of "general" AI. There is a lot of field to cover. I don't really see anything that you could cut here.


I didn't think it was a bad article, my only point was that robots and AI never give up - but humans do!


The HN-stasi have spoken: "Your text shall be white!"


Yep... really interesting article but just too long


After all, who has five minutes to spare these days to enrich their knowledge of the world around us?

Everyone knows that if the information is important enough, the author will find a way to fit it into 140 characters. Well 110, with 30 characters worth of tags.


I just gave up looking for the words that appear the most...



