Another chapter of the Udacity course is data visualization. What I learned there is important: you can acquire, munge and analyze data, but it is worth nothing if the information is not communicated to others. The easiest way to do that is data visualization. I learned about Napoleon's march on Russia and why you should think twice before posting a hue-colored diagram.
Lesson no 3 is about data analysis. If you were able to collect data and prepare it, it's time to draw conclusions. How do you use datasets? How do you predict the future? This is what I hoped to learn next.
Are you going to play with data? First, you have to wrangle the data: prepare it and make it useful. The practice session Wrangling Subway Data in the Udacity course was a nice way to apply my knowledge in practice. The subway data is a good sample dataset, consisting of several columns in a CSV file.
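A first wrangling step on such a CSV usually looks like the sketch below. The column names and values are my own illustrative assumptions, not the actual Udacity subway dataset schema:

```python
import io
import pandas as pd

# Hypothetical excerpt of a subway ridership CSV
# (made-up stations and counts, for illustration only).
csv_data = io.StringIO(
    "station,date,entries,exits\n"
    "R001,2011-05-01,4388,2911\n"
    "R001,2011-05-02,4917,3204\n"
    "R002,2011-05-01,1204,998\n"
)

df = pd.read_csv(csv_data, parse_dates=["date"])

# A simple aggregation: total entries per station.
per_station = df.groupby("station")["entries"].sum()
print(per_station["R001"])  # 4388 + 4917 = 9305
```

In the real exercise you would point `pd.read_csv` at the downloaded file instead of an in-memory string.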
Continue reading: Data wrangling in practice
Data science in progress. Right now I am reading and applying rules from lesson no 2: data preparation, so-called data munging. This is not something you want to do as a data scientist; it's just indispensable to prepare your data for later processing. As Nick says:
More than 50 percent of the time is just combing through the data and guessing whether it is OK – Nick
What can you learn in the lessons from the Data Wrangling section?
First I was reminded about common data formats: CSV, XML and JSON – these three are the most popular in the data world. Thankfully Pandas offers a good way of consuming and producing these formats.
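A minimal sketch of that round trip, with a tiny made-up table (the cities and numbers are illustrative assumptions):

```python
import io
import pandas as pd

# A tiny illustrative table (made-up values).
df = pd.DataFrame({"city": ["Warsaw", "Krakow"],
                   "population": [1765000, 767000]})

# Pandas produces and consumes the popular formats directly.
csv_text = df.to_csv(index=False)             # DataFrame -> CSV text
json_text = df.to_json(orient="records")      # DataFrame -> JSON text
df_back = pd.read_csv(io.StringIO(csv_text))  # CSV text -> DataFrame

# For XML, newer pandas versions offer read_xml/to_xml.
```

The `orient="records"` argument makes `to_json` emit a list of row objects, which is the shape most APIs expect.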
SQL found here
It was brand new to me that after forming a dataframe I can play with it as with an SQL table using the pandasql library. The Udacity course shows the potential of these SQL extensions on the Aadhaar dataset, containing our dear registered friends from India. Using pandasql I can query the dataset freely using SQL-92 syntax, including filtering (WHERE) and grouping (GROUP BY).
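Under the hood pandasql copies the dataframe into an in-memory SQLite database and runs the query there. The same idea can be sketched with the standard library's sqlite3 – the Aadhaar-style column names and rows below are simplified assumptions, not the real dataset:

```python
import sqlite3
import pandas as pd

# Made-up rows loosely inspired by the Aadhaar dataset (not real data).
df = pd.DataFrame({
    "state": ["Gujarat", "Gujarat", "Kerala"],
    "registrar": ["A", "B", "A"],
    "enrolments": [120, 80, 200],
})

# pandasql does essentially this behind the scenes:
conn = sqlite3.connect(":memory:")
df.to_sql("aadhaar", conn, index=False)

query = """
    SELECT state, SUM(enrolments) AS total
    FROM aadhaar
    WHERE enrolments > 50
    GROUP BY state
"""
result = pd.read_sql_query(query, conn)
```

With pandasql itself you would skip the manual sqlite3 setup and call `sqldf(query, locals())`, referring to the dataframe by its Python variable name.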
To process data you have to get it first. Sometimes it is available on some endpoint. In Python you can easily call APIs using requests. It is as easy as

import requests
import json

body = requests.get(url)          # url points at the endpoint
response = json.loads(body.text)  # or simply: body.json()

so you can easily get all the info from any available endpoint and parse the JSON response. In the course this was used when querying the OpenFM API.
I miss you, value!
Missing values are another challenge you will meet here when dealing with data preparation. In the pythonic world such values usually show up as None or NaN in a dataframe. What then? We can impute, i.e. make an educated guess at what to fill in. In the baseball dataset I was encouraged to use the mean value for imputation. But think twice! Imputation can lead to misleading conclusions. Here it was done using numpy's mean function.
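Mean imputation can be sketched in a few lines. The batting averages below are made-up illustrative values, not the course's baseball data:

```python
import numpy as np
import pandas as pd

# Hypothetical batting averages with one gap (illustrative values only).
avg = pd.Series([0.250, 0.300, np.nan, 0.275])

# numpy's mean over the observed values only
# (np.mean over data containing NaN would itself return NaN).
mean_value = np.mean(avg.dropna())

# Fill the gap with the mean, as in the lesson.
imputed = avg.fillna(mean_value)
print(mean_value)  # 0.275
```

Note how the imputed value sits exactly at the center of the observed data – which is also why mean imputation shrinks the variance and can make conclusions look firmer than they are.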
I've just started a nice online course at Udacity: Introduction to Data Science. So what are my first impressions, after having used Pandas and its stack for some months?
Data science. I have read so much about it lately. The topic lands in several places. People say: hype. But it is attractive! Why? I'll tell you in a moment. Besides – just between us – how long can that scout wait for new data.. "Pan da 3?" (in Polish, "will you give three?") Pandas.