The last discussion was mostly about how the title was linkbait. I want to hear people's opinions on whether they think it's appropriate for a browser (Chrome) to be designed such that it doesn't operate independently, i.e. that it can be crashed (or "self-destruct bug", insert your own word here) by a remote server at any time.
To my knowledge, Firefox doesn't do that. Safari doesn't do that. Internet browsers are probably the #1 most important app on a computer these days; browser reliability is vital.
Chrome Sync is, AFAIK, not a push service.
Something polled a Google server, it returned a bad answer, it crashed the browser.
Why is this important?
Because it's entirely possible that Firefox or Safari, for example, could have been crashed by contacting the safebrowsing server, and the safebrowsing server returning an answer that crashes it.
Firefox also does remote firefox update checks and plugin update checks, etc.
None of the browsers you mention are "independent" of internet servers anymore. They are meant to function independently, as is Chrome, but exactly the right remote bug could likely crash all of them.
Why is it possible to receive a response that crashes the browser? I'll allow (with reservations) your premise that browsers aren't so independent anymore. But it seems to me that input from an external source (even trusted) ought to be validated, and malformed input should raise a warning or something. The fact that crashing is a possible behavior upon receiving unexpected input is surprising to me.
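To make "validate, and treat malformed input as a recoverable condition" concrete, here's a minimal C++ sketch. The "key=value" payload shape and the parse_entry function are hypothetical illustrations, not Chrome's actual sync protocol; the point is only that rejecting bad input with std::nullopt lets the caller degrade gracefully (say, disable sync) instead of crashing:

```cpp
#include <cstddef>
#include <optional>
#include <string>
#include <utility>

// Hypothetical sync payload: the client expects "key=value".
// Returning std::nullopt on malformed input lets the caller warn the
// user or disable the feature rather than assume the data is valid.
std::optional<std::pair<std::string, std::string>>
parse_entry(const std::string& payload) {
    std::size_t eq = payload.find('=');
    if (eq == std::string::npos || eq == 0 || eq + 1 == payload.size())
        return std::nullopt;  // malformed: reject, don't guess
    return std::make_pair(payload.substr(0, eq), payload.substr(eq + 1));
}
```

The caller then decides the policy (warn, retry, disable the feature) at the point where that decision makes sense.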
Well that's the point of a bug, isn't it? Chrome didn't handle malformed input appropriately and crashed because of it.
It wasn't a command that shut down Chrome, it was just poor handling of an edge-case which resulted in unexpected behavior (crash).
It's a sad truth that most programs will explode if you fling garbage at them. When push comes to shove, many development timelines don't have room to bulletproof against everything.
At some point, the results of what the side process did have to get communicated to the parent process. You can reduce the size of the channel, but you can't close it completely. Bugs happen. Bugs in never-executed code happen and tend to stick around longer.
> 2) It is a design choice on the Chrome team to fail fast and hard. It's better to get crash reports to our automated crash servers with diagnostics and stack traces than to have reports in the field of weird behavior with no way to debug.
It's a design choice that your product, which some people (myself included) pay money for, crashes hard so that you can get better diagnostics? Sounds like misplaced priorities.
This sort of catch-all, keep-going error handling could leave your browser in a completely unknown state. It could start making bad requests, making the wrong requests, start leaking info, or more likely crash elsewhere but with a much less clean crash log.
When you don't know how to handle an error such that it bubbles all the way up to the top, often the best thing to do is crash. At least then you might get the logs that allow you to fix it and turn around the fix quickly and with confidence.
Crashes suck, but crashing and not knowing why sucks more.
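As a sketch of what "fail fast and hard" can look like in practice, here's a hypothetical CHECK-style macro, in the spirit of (but not copied from) what Chrome actually uses: on a violated invariant it logs the condition and location, then aborts so the crash dump points at the real bug instead of some later symptom.

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical fail-fast check: on failure, log the condition with its
// file and line, then abort so the resulting crash report carries a
// stack trace at the point where the invariant actually broke.
#define CHECK(cond)                                                   \
    do {                                                              \
        if (!(cond)) {                                                \
            std::fprintf(stderr, "CHECK failed: %s at %s:%d\n",       \
                         #cond, __FILE__, __LINE__);                  \
            std::abort(); /* crash here, with context, not later */   \
        }                                                             \
    } while (0)

// Illustrative use: bail out immediately on impossible input.
int double_positive(int v) {
    CHECK(v > 0);
    return v * 2;
}
```

The trade-off is exactly the one debated above: the user sees a crash, but the developer gets an actionable report.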
This. I can't believe how many conversations I have had to have in my (short) career trying to convince people that catch(...) { /* ignore */ } is not an error-handling strategy; it is an error-ignoring strategy, and it opens you up for hilarious ROFLExploits, heisenbugs, and all manner of fun things. As a user an app crash sucks (I know that well; all apps I use have crashed before), but in order to fix it the developers generally need either a repro (almost no one provides these, at least not reasonable ones; this case was an exception, as the repro was trivial) or a crash dump (less helpful, as not all the info you need is always there). If you have a catch-all you have neither of these; at best you have vague reports that sometimes, when users do X, Y and Z and have been using your product for 18 hours, 'weird shit' starts happening. This is NOT the kind of bug you want to investigate if you value your sanity.
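To make the anti-pattern concrete, here's a minimal C++ sketch (the parse_count function is hypothetical, not anyone's actual code). The swallowing version "handles" the error by making it indistinguishable from a legitimate result, so no log, dump, or repro ever surfaces:

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

// Hypothetical parser: throws on garbage input.
int parse_count(const std::string& s) {
    std::size_t pos = 0;
    int v = std::stoi(s, &pos);  // throws std::invalid_argument on garbage
    if (pos != s.size())
        throw std::invalid_argument("trailing junk");
    return v;
}

// The error-IGNORING "strategy": swallow everything and return a default.
// The caller can no longer tell a real 0 apart from malformed input.
int parse_count_swallowed(const std::string& s) {
    try {
        return parse_count(s);
    } catch (...) {
        return 0;  // the failure silently disappears here
    }
}
```

With this in place, parse_count_swallowed("0") and parse_count_swallowed("junk") return the same value, which is exactly the "weird shit sometimes happens" failure mode described above.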
Sorry, I removed the point because I realized it wasn't relevant to this particular bug. This bug isn't a case of an assertion failing (which is the "crash hard" bit). It was just a logical failure. A bug.
(Incidentally, eliminating these kind of rarely executed branches is a bugaboo of mine. They frequently have problems.)
Among other things, that try..catch won't do anything for SIGSEGV (int *p = 0; *p = 1;) or SIGFPE (1/0). You can handle the signals, or you could miss them, like you did just now. That wouldn't be a design issue but an implementation bug.
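A quick sketch of this point, assuming POSIX signals (SIGFPE is raised synchronously via raise here, which is what makes the siglongjmp out of the handler safe; the function name is mine):

```cpp
#include <setjmp.h>  // sigsetjmp/siglongjmp (POSIX)
#include <signal.h>

static sigjmp_buf recovery;

extern "C" void crash_handler(int) {
    siglongjmp(recovery, 1);  // safe here: the signal was raised synchronously
}

// Demonstrates that catch(...) never sees SIGFPE (or SIGSEGV);
// only an installed signal handler gets control.
bool handler_saw_signal_catch_did_not() {
    signal(SIGFPE, crash_handler);
    bool caught_by_cpp = false;
    if (sigsetjmp(recovery, 1) == 0) {
        try {
            raise(SIGFPE);         // stands in for a real 1/0 fault
        } catch (...) {
            caught_by_cpp = true;  // never runs: signals are not C++ exceptions
        }
        return false;              // not reached: the handler longjmps out first
    }
    return !caught_by_cpp;         // the handler got it; catch(...) did not
}
```

(For a real hardware fault, as opposed to raise, you can do very little safely inside the handler beyond logging and dying fast.)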
On Windows you can catch segfaults with __try and __except (structured exception handling).
Trust me when I say you don't want to be writing code in a stack frame above someone else who catches and ignores exceptions using them.
There are actually some cases where windows will catch and discard segfaults if you have them in response to certain window messages. That bug was hard to track down, let me tell you.
In what way was it 'designed' to not operate independently? The browser is - in theory - perfectly fine operating when sync is broken (and in fact wouldn't have triggered this were sync fully unreachable). This was simply a bug in sanitizing input, nothing further. Not different in flavour to input sanitization problems within the javascript engine, which have been known to occur as well.
To your knowledge there are no crash bugs in the Firefox sync code? How much are you willing to bet you're right? I seriously doubt an entire module like that has absolutely nothing wrong with it.
It's pretty ridiculous of you to point at a single mistake in implementation and blame the entire sync feature.
I hope you are kidding.
JITs often have a large number of crashing bugs, usually even more than static compilers (because it is often hard to reproduce every set of circumstances that causes something to happen, unlike with static compilers).
The point is that the probability that JS code will segfault the browser is dramatically less than the probability that C++ code will segfault the browser.
I don't believe this for a second.
I might believe it if you said "the probability that commonly used JS code will segfault the browser is dramatically less than the probability that browser-specific C++ code will segfault the browser". Which would be a very different claim.
Remember that most JS is popular JS, with some small amount of custom lines. JS seems less crashy because people don't use as much "random" JS in general.
It was a Chrome Sync bug. If you had Chrome Sync enabled, it crashed. Believe it or not, Chrome Sync is actually a very useful feature for those of us who go back and forth between computers on a daily basis.
We back up so much of our tools and data, but without a working browser, we're sunk. Especially non-technical people.
That a huge percentage of the internet clients in the world can be simultaneously removed from accessing the internet, either intentionally or accidentally, is troubling me this morning.
> that it can be crashed (or self destruct bug, insert your own word here) by a remote server at any time.
It isn't by design that syncing can affect the whole browser; it's a bug in the syncing code which should have been handled. There is no self destruct bug, and calling it that is incorrect. Are you aware of the fix?
I think cross-device syncing is valuable functionality that I enjoy. Obviously this comes with the risk that if bad data is sent that my browser can't handle, it may cause a crash. The fact that syncing is elective and can be toggled off is a great feature.
I think it's a bit sensationalist to still refer to it as "being crashed at any time", rather than saying it "may crash due to a bug".
FWIW the same sort of bug could have occurred in the "auto-update" features of any browser. The problem was the client's failure to handle an unexpected response from the server.
I think your outrage is misplaced here. Certainly a browser should not crash if a given server happens to be offline, but we can easily extend that to say that a browser should not crash, period, can we not? But that is a rather arbitrarily high bar. Browsers are in the business of connecting to sites over the internet and if it is possible for such communication to crash the browser (which is likely to be true for almost any browser) then that's a fairly equivalent problem.