## Sunday, February 23, 2014

### Week 5 : Zipfian Academy - Graphs and Community Detection

The update for last week will be short and quick. Doing these blog posts is getting much harder.

We started the week looking at unsupervised learning techniques like k-means and hierarchical clustering. We also visited dimensionality reduction techniques like SVD and NMF. By mid-week, we switched gears to graph analysis and covered, in no particular order, BFS, DFS, A*, Dijkstra, and community detection in graph networks.
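Implementing k-means from scratch is a good way to internalize what it's actually doing; here's a minimal numpy sketch (toy data, not from the sprint) that alternates the assignment and centroid-update steps:

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Plain k-means: alternate between assigning points to the nearest
    centroid and moving each centroid to its cluster mean."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each point to its closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties out.
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Two well-separated blobs should be recovered cleanly.
np.random.seed(42)
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 10])
centroids, labels = kmeans(X, k=2)
```

Real implementations (e.g. scikit-learn's) add smarter initialization like k-means++, but the core loop is exactly this.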

Takeaways from the week:
• We had several guest lectures this week. @kanjoya is working on the cutting edge of Natural Language Processing. They help their clients derive actionable intelligence from emotions and intuition. The speaker discussed the general NLP landscape: tools and techniques. I found it interesting that some of their training data comes from The Experience Project.
• @geli gave an interesting talk. They've basically built an OS for energy systems and hope to revolutionize the energy management space.
• @thomaslevine's talk was on open data initiatives around the country. Open Data is one of those things cities like to talk about, but very few of them are doing it well.
• Things were switched around this week. We ended the week working on a dataset from one of the partner companies. The dataset recorded mobile ads served to users at various locations; we were supposed to do some exploration and find the best locations to serve ads to users. The dataset had a couple million records. Trying to wrangle gigabyte-sized data on just 4 GB of RAM is definitely not fun. I ordered a 16 GB RAM kit and should get it by this weekend. If you are thinking of enrolling in the course, you should shoot for at least 8 GB of RAM.
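For anyone in the same RAM bind: pandas can stream a large CSV in chunks instead of loading it whole. A sketch with made-up column names (the real dataset's schema isn't shown here), using a tiny in-memory stand-in for the big file:

```python
import io
import pandas as pd

# Tiny stand-in for the multi-gigabyte impressions file (hypothetical
# column names); in practice you'd point read_csv at the real file path.
csv = io.StringIO("location,clicked\nSF,1\nNY,0\nSF,1\nLA,0\nNY,1\nSF,0\n")

# Stream in chunks so only one chunk sits in RAM at a time, accumulating
# per-location click totals as we go.
totals = None
for chunk in pd.read_csv(csv, chunksize=2):
    part = chunk.groupby("location")["clicked"].agg(["sum", "count"])
    totals = part if totals is None else totals.add(part, fill_value=0)

totals["ctr"] = totals["sum"] / totals["count"]
best_location = totals["ctr"].idxmax()
```

With a real file you'd also pass `usecols=` to skip columns you don't need, which cuts memory further.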

## Saturday, February 15, 2014

### Week 4 : Zipfian Academy - Oh SQL, Oh SQL... MySQL and some NLP too

So things totally ramped up this week. We started out by scraping, parsing, and cleaning data from the NYTimes API, then jsonified and stored the data in MongoDB. The next day we ported the same dataset to a few SQL tables and implemented the Naive Bayes algorithm in SQL to classify which labels an article would fall under. We continued with diagnostics like confusion matrices, confusion tables, false alarm rate, hit rate, precision, recall, ROC curves, etc. Other topics covered include NLTK, tokenization, TF-IDF, n-grams, regular expressions, and feature selection using Chi-Squared and Mutual Information. We ended the week by working on another past Kaggle competition - the StumbleUpon Evergreen Classification Challenge.
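The SQL version isn't reproduced here, but the same Naive Bayes idea fits in a few lines of Python. A from-first-principles multinomial sketch with Laplace smoothing, on toy documents I made up:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Multinomial Naive Bayes training: just class priors and
    per-class word counts."""
    priors = Counter(labels)
    word_counts = defaultdict(Counter)
    vocab = set()
    for doc, label in zip(docs, labels):
        word_counts[label].update(doc.split())
        vocab.update(doc.split())
    return priors, word_counts, vocab

def predict_nb(doc, priors, word_counts, vocab):
    """Pick the class with the highest log posterior."""
    n_docs = sum(priors.values())
    best, best_lp = None, -math.inf
    for label, prior in priors.items():
        total = sum(word_counts[label].values())
        lp = math.log(prior / n_docs)
        for w in doc.split():
            # Laplace smoothing so unseen words don't zero out the class.
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = ["stocks fell sharply", "markets rally on earnings",
        "team wins the championship", "coach praises the team"]
labels = ["business", "business", "sports", "sports"]
model = train_nb(docs, labels)
pred = predict_nb("the team wins", *model)
```

The SQL implementation is the same arithmetic expressed as GROUP BY aggregates over a word-count table, which is what makes it such a nice first-principles exercise.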

We are at the halfway point of the structured part of the class. Just in case you're thinking of doing this, my schedule these days is about 12 - 15 hrs/day during the week doing daily sprints (data scrubbing, transformation / machine learning challenges), reading data science materials, and attending lectures. Over the weekend, I'd say about 10 hrs/day closing the loop on a few of the sprints from the current week and doing more data science readings for the following week. You basically live and breathe data science... all day long, all week long.

Highlights from the week:
• We had two guest lectures this week. They were on Naive Bayes and feature extraction in NLP. Zipfian also added a guest lecturer to their roster. The new instructor is a Deep Learning expert and I'm really excited to explore working on new datasets with Neural Networks.
• Implementing things from first principles gives you a better understanding of how some of these algorithms work and what may be going on under the hood when they fail.
• My team also took the top spot in the Kaggle competition for the second week in a row. The problem we worked on was a classification problem using AUC (Area Under the Curve) as the evaluation metric. We achieved an AUC of $\approx 0.8895$, which is about 0.008 off the leading Kaggle submission on the public leaderboard.
• Cross-validating on your training set is always a good idea
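As a footnote on the AUC metric above: it's just the probability that a randomly chosen positive example is scored above a randomly chosen negative one, which you can compute directly. A brute-force sketch on made-up scores:

```python
def auc(y_true, scores):
    """AUC as the Mann-Whitney rank statistic: the fraction of
    (positive, negative) pairs where the positive scores higher
    (ties count half). Assumes binary 0/1 labels."""
    pairs = wins = 0.0
    for yi, si in zip(y_true, scores):
        if yi != 1:
            continue
        for yj, sj in zip(y_true, scores):
            if yj != 0:
                continue
            pairs += 1
            if si > sj:
                wins += 1
            elif si == sj:
                wins += 0.5
    return wins / pairs

# 3 of the 4 (positive, negative) pairs are ordered correctly -> 0.75.
score = auc([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.6])
```

The O(n²) pair loop is only for clarity; real implementations sort once and use ranks.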

## Thursday, February 13, 2014

### Some Things you should be reading

An Intro to Statistical Learning has become my favorite statistical learning book. It was written by two masters of the field, along with others. You don't often see such brilliance and clarity in a book.

A Few Useful Things to Know about Machine Learning is a really good paper you should look at if you're interested in the field of Data Science.

Some Practical Machine Learning Tricks is a summary of tips and tricks to always keep in mind when working on machine learning problems.

A collection of videos on different Machine Learning topics from Caltech

## Saturday, February 8, 2014

### Week 3 : Zipfian Academy - Multi-armed bandits and some Machine Learning

We started the week by finishing off the session on Bayesian statistics with a study of Bayesian A/B testing techniques. Some of the strategies covered are extensions of the multi-armed bandit problem: epsilon-greedy, Bayesian bandits, and UCB1. These algorithms typically outperform traditional A/B testing. We officially started machine learning this week with the treatment of linear regression, multiple linear regression, heteroscedasticity/homoscedasticity, and multicollinearity. Other topics we covered include Lasso / Ridge regression, cross-validation, overfitting, bias / variance, and gradient descent. We capped off the week by working on data from one of the past Kaggle competitions - Blue Book for Bulldozers.
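A minimal sketch of the gradient descent exercise (my own toy example, not the class solution): batch updates on the least-squares cost for linear regression, where the gradient of $J(w) = \frac{1}{2m}\|Xw - y\|^2$ is $\frac{1}{m}X^T(Xw - y)$:

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, n_iters=2000):
    """Batch gradient descent on the least-squares cost: step against
    the gradient (1/m) * X^T (Xw - y) until (near) convergence."""
    m, n = X.shape
    w = np.zeros(n)
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / m
        w -= lr * grad
    return w

# Recover a known line y = 1 + 2x (intercept handled via a ones column).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
X = np.column_stack([np.ones_like(x), x])
y = 1 + 2 * x
w = gradient_descent(X, y)
```

Swapping in the logistic loss only changes the gradient's residual term (sigmoid(Xw) - y); the update loop stays identical, which is the point of implementing it once from first principles.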

A few takeaways from this week:
• There were a few algorithms I had always only sort of understood. Some of these algorithms become very clear once you implement them from first principles and then apply them to a dataset. We implemented a gradient descent function and then used it to minimize the cost function of both linear and logistic regression problems (I'll probably have a more detailed blog post on this). Working on some regularization with Lasso and Ridge also gave me a better understanding of how they both work.
• We had a visit from @StreetLightData. Very cool problem they're working on. They essentially model mini migration patterns in cities / across the country. They feed data from cell signals, GPS, census data (demographics / geo), and traffic data into their systems to extract insights used for marketing and planning.
• Always remember 80-20. Data scientists spend 80% of their time cleaning datasets and extracting features (or at least more than half their time) and about 20% of their time doing modeling and parameter tuning. Forget those datasets you used in stats class; real-world data can be really messy.
• $k$-fold cross-validation helps you prevent overfitting, gives you an estimate of your prediction error, and helps you understand how stable / robust your model is:
$$CV_{(k)} = \frac{1}{k}\sum_{i=1}^k MSE_{i}$$
where MSE is Mean Squared Error
• My team took the top spot in the Kaggle competition we worked on. We had an RMSLE (Root Mean Squared Log Error) of $\approx 0.43$, which is about $0.2$ off the winning Kaggle submission. Decent for a few hours of work. It does look like working on Kaggle competitions may become a mainstay / regular end-of-week exercise.
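The $k$-fold formula above translates almost line-for-line into code. A sketch using a plain least-squares model on synthetic data (all names and data here are my own, not from the sprint):

```python
import numpy as np

def kfold_cv_mse(X, y, fit, predict, k=5):
    """Estimate prediction error as the average of the k held-out MSEs,
    i.e. CV_(k) = (1/k) * sum_i MSE_i."""
    folds = np.array_split(np.arange(len(y)), k)
    mses = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.hstack([folds[j] for j in range(k) if j != i])
        model = fit(X[train_idx], y[train_idx])
        pred = predict(model, X[test_idx])
        mses.append(np.mean((y[test_idx] - pred) ** 2))
    return np.mean(mses)

# Least-squares fit/predict helpers for a quick smoke test.
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda w, X: X @ w

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(60), rng.normal(size=60)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=60)
cv_err = kfold_cv_mse(X, y, fit, predict)  # should land near the noise variance
```

For anything real you'd shuffle before splitting (or use a library splitter) so ordered data doesn't leak structure into the folds.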

### Math equations in Blogger with MathJax / Latex

After a few attempts, I was able to get this working. Go to Template > Edit HTML and paste the piece of code below just after your `<head>` tag.
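The snippet itself didn't survive in this copy of the post; for reference, the usual MathJax include for Blogger around this time looked something like this (assuming the stock TeX configuration served from the MathJax CDN):

```html
<script type="text/javascript"
  src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
```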

Testing it out:
$$RSS = \sum_{i=1}^n(y_{i} - \hat{y}_{i})^{2}$$

### Week 2 : Zipfian Academy - Are you Frequentist or Bayesian?

We started the week with a tour de force of matplotlib and then switched gears to statistics. For the rest of the week, we covered hypothesis testing, goodness of fit (Kolmogorov-Smirnov test), distributions, confidence intervals, p-values, t-tests, frequentist vs. Bayesian A/B testing, and MCMC.
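The Bayesian A/B idea reduces to a few lines once you lean on Beta-Binomial conjugacy: with a Beta(1, 1) prior, each variant's posterior conversion rate is Beta(1 + conversions, 1 + misses), and sampling both posteriors estimates P(B beats A). A sketch with made-up conversion counts:

```python
import numpy as np

rng = np.random.default_rng(0)
conv_a, n_a = 40, 1000   # variant A: 40 conversions out of 1000 (made up)
conv_b, n_b = 58, 1000   # variant B: 58 conversions out of 1000 (made up)

# Posterior draws for each conversion rate under a uniform Beta(1,1) prior.
samples_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, size=100_000)
samples_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, size=100_000)

# Monte Carlo estimate of P(rate_B > rate_A).
p_b_better = (samples_b > samples_a).mean()
```

This is the same answer pymc would give for this model; the sampler only becomes necessary once the model stops being conjugate.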

A few notes from this week:
• There's a huge debate between the Frequentist and Bayesian schools of thought, with proponents on both sides. Just in case you're on the fence, here's an Open Letter on why you should think about going Bayesian.
• The goal in Bayesian inference is to get a good handle on the posterior distribution over the input parameters. Some of the math can get pretty beefy, so you would normally use a package like pymc or rjags to fit your distributions / models. Some good resources for pymc and MCMC are this Stats Book for Hackers and this set of videos from mathematicalmonk.
• Some EDA (Exploratory Data Analysis) tools you should have in your workflow include Raw and CartoDB.
• We had two pretty good talks at one of the local meetup groups: @nitin on LearnDataScience and @Udacity on their Data Science course development.

## Sunday, February 2, 2014

### Learning to Learn (Again)

I thought this would be an interesting segue into the Week 2 update, given that I'm currently knee-deep in a Data Science bootcamp program. Two weeks in, and I'm trying to prioritize and balance breadth / depth for some of the topics we've covered so far.

I read a few studies that say starting from first principles and then working on understanding the more difficult / complex concepts works most of the time. This is sort of like an SVD / PCA of learning - learning only the important things, which will help you reconstruct the rest of the knowledge when you need to use it. Obviously there are other schools of thought about learning.

Anyway, I came across this guy's extreme learning experiment. He is attempting to learn and become fluent in 4 languages (Spanish, Portuguese, Mandarin Chinese, and Korean) within a year, 12 weeks at a time. Looks like he is halfway through. He also recently completed another project (the MIT Challenge).