I'm going through the Coursera machine learning class right now and I have to say that the professor glosses over several details and often makes comments like "if you're not familiar with calculus..." and "if you're not familiar with statistics...", which caught me off guard at first. I really doubt that actual Stanford students enrolled in a machine learning course would be tripped up by the incredibly basic operations he uses (e.g., taking the partial derivative of a polynomial function).
Also, there has been no acknowledgement of how contrived the exercises are. For instance, exercise one gives a data set of the profitability of a company's existing stores versus the population of the city in which each store is located (in units of $10,000 and 10,000 people, respectively). The population values range from 5 to 23, with most of them concentrated below 10. We fit a straight line to the data using least squares, then use that line to predict the profitability of two new locations--in cities with populations of 35 and 75. I understand that this is an intro course, but there is not a word about how ridiculous this extrapolation is.
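To make the extrapolation point concrete, here is a minimal sketch in Python/NumPy with made-up numbers roughly shaped like the exercise data (not the actual course data set): fit the line by least squares, then predict at populations of 35 and 75.

    # Minimal sketch (made-up data): fit a line by least squares and show
    # how far outside the observed range the requested predictions fall.
    import numpy as np

    # Hypothetical data: population in units of 10,000, profit in units of
    # $10,000, mostly concentrated below 10, topping out around 23.
    population = np.array([5.2, 5.8, 6.1, 6.5, 7.0, 7.4, 8.3,
                           8.5, 9.1, 10.2, 13.5, 18.9, 22.7])
    profit     = np.array([1.8, 2.0, 2.3, 1.5, 3.1, 2.9, 3.8,
                           4.1, 4.5, 5.9, 7.2, 12.0, 17.6])

    # Design matrix with an intercept column; solve least squares directly.
    X = np.column_stack([np.ones_like(population), population])
    theta, *_ = np.linalg.lstsq(X, profit, rcond=None)

    # Populations 35 and 75 lie far beyond anything the model has seen,
    # so the fitted slope is being trusted well outside its support.
    for p in (35.0, 75.0):
        print(f"pop={p:5.1f} (x10k): predicted profit = "
              f"{theta[0] + theta[1] * p:.2f} (x$10k)")

Both prediction points sit well to the right of the entire training range, which is exactly the kind of caveat I would have expected at least one sentence about.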
I don't mean to be overly negative. I am enjoying the course, but I am a bit surprised by how basic it is. Let me say that I do like the course's approach to ML, which is to formulate a parameterized cost function and then minimize it by some general method, rather than the typical statistics-course approach of solving ordinary least squares directly, which gives an "exact solution" (given the data) but does not carry over to more general models.
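For anyone reading along, here is a small sketch (Python/NumPy, synthetic data of my own, not the course's code) contrasting the two approaches: minimizing the squared-error cost with gradient descent, versus solving the normal equation in closed form.

    # Sketch of the two approaches on synthetic data:
    # (1) minimize the squared-error cost J(theta) by gradient descent
    #     (the general method the course emphasizes), and
    # (2) solve ordinary least squares in closed form (the "exact solution").
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(5, 23, size=50)
    y = 1.2 * x - 3.0 + rng.normal(scale=1.0, size=50)
    X = np.column_stack([np.ones_like(x), x])   # intercept + feature

    # (1) Gradient descent on J(theta) = (1/2m) * ||X @ theta - y||^2
    theta = np.zeros(2)
    m, alpha = len(y), 0.005
    for _ in range(30000):
        grad = X.T @ (X @ theta - y) / m
        theta -= alpha * grad

    # (2) Normal equation: theta = (X^T X)^{-1} X^T y
    theta_exact = np.linalg.solve(X.T @ X, X.T @ y)

    print("gradient descent:", theta)
    print("normal equation :", theta_exact)   # should agree closely

The course's feature-scaling advice matters here, by the way: without scaling, gradient descent needs a small learning rate and many iterations because the intercept and the population feature live on very different scales. The payoff of the iterative approach is that the same recipe works once the cost function is no longer a simple quadratic.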
I know this is foundational material, and overall I am impressed by the approach of the course, but I would expect more comments on the weaknesses of the naïve methods we are employing at this early stage and how they will eventually be improved. I find it very helpful when professors at least reference more advanced methods or provide pointers for further reading by the interested student. Admittedly, that is more often a feature of graduate courses, but encouraging students to go beyond the material is an important aspect of good teaching. I have watched the videos for several other online courses, and I appreciate the fact that Coursera lets me hand in assignments for grading, which vastly increases my engagement with the material. This, in fact, is the most valuable resource the program offers. The lectures themselves are fine--if a bit dry--but a good book or a set of well-prepared notes (not slides) would probably serve just as well if accompanied by the assignment grader.
All in all, this is great. The more people who know about machine learning (and have access to higher education in general), the better.
I'm pretty rusty on my math, so I guess it happens to hit the sweet spot for me at the moment. Once I get back up to speed on calculus, I might feel differently about it.
IIRC from a Stanford student's comment on HN, Stanford offers two versions of its machine learning course: one that is more math-focused, and a more applied one designed for all majors. The ML course offered through Udacity is the latter.