{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "\"Open\n", "\n", "| - | - | - |\n", "|-----------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------|\n", "| [Exercise 1 (cities)](<#Exercise-1-(cities)>) | [Exercise 2 (powers of series)](<#Exercise-2-(powers-of-series)>) | [Exercise 3 (municipal information)](<#Exercise-3-(municipal-information)>) |\n", "| [Exercise 4 (municipalities of finland)](<#Exercise-4-(municipalities-of-finland)>) | [Exercise 5 (swedish and foreigners)](<#Exercise-5-(swedish-and-foreigners)>) | [Exercise 6 (growing municipalities)](<#Exercise-6-(growing-municipalities)>) |\n", "| [Exercise 7 (subsetting with loc)](<#Exercise-7-(subsetting-with-loc)>) | [Exercise 8 (subsetting by positions)](<#Exercise-8-(subsetting-by-positions)>) | [Exercise 9 (snow depth)](<#Exercise-9-(snow-depth)>) |\n", "| [Exercise 10 (average temperature)](<#Exercise-10-(average-temperature)>) | [Exercise 11 (below zero)](<#Exercise-11-(below-zero)>) | [Exercise 12 (cyclists)](<#Exercise-12-(cyclists)>) |\n", "| [Exercise 13 (missing value types)](<#Exercise-13-(missing-value-types)>) | [Exercise 14 (special missing values)](<#Exercise-14-(special-missing-values)>) | [Exercise 15 (last week)](<#Exercise-15-(last-week)>) |\n", "| [Exercise 16 (split date)](<#Exercise-16-(split-date)>) | [Exercise 17 (cleaning data)](<#Exercise-17-(cleaning-data)>) | |\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Pandas (continues)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creation of dataframes\n", "\n", "The DataFrame is essentially a two dimensional object, and it can be created in three different ways:\n", "\n", "* out of a two dimensional NumPy array\n", "* out of given columns\n", "* out of given rows\n", "\n", "### Creating DataFrames from a NumPy array\n", "\n", "In the following example a DataFrame with 2 rows and 3 column is created. The row and column indices are given explicitly." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df=pd.DataFrame(np.random.randn(2,3), columns=[\"First\", \"Second\", \"Third\"], index=[\"a\", \"b\"])\n", "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that now both the rows and columns can be accessed using the special `Index` object:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.index # These are the \"row names\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.columns # These are the \"column names\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If either `columns` or `index` argument is left out, then an implicit integer index will be used:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df2=pd.DataFrame(np.random.randn(2,3), index=[\"a\", \"b\"])\n", "df2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now the column index is an object similar to Python's builtin `range` type:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df2.columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating DataFrames from columns\n", "\n", "A column can be specified as a list, an NumPy array, or a Pandas' Series. The names of the columns can be given either with the `columns` parameter, or if Series objects are used, then the `name` attribute of each Series is used as the column name." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s1 = pd.Series([1,2,3])\n", "s1" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s2 = pd.Series([4,5,6], name=\"b\")\n", "s2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Give the column name explicitly:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.DataFrame(s1, columns=[\"a\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use the `name` attribute of Series s2 as the column name:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.DataFrame(s2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If using multiple columns, then they must be given as the dictionary, whose keys give the column names and values are the actual column content." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.DataFrame({\"a\": s1, \"b\": s2})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating DataFrames from rows" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can give a list of rows as a parameter to the DataFrame constructor. Each row is given as a dict, list, Series, or NumPy array. If we want to give names for the columns, then either the rows must be dictionaries, where the key is the column name and the values are the elements of the DataFrame on that row and column, or else the column names must be given explicitly. 
An example of this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df=pd.DataFrame([{\"Wage\" : 1000, \"Name\" : \"Jack\", \"Age\" : 21}, {\"Wage\" : 1500, \"Name\" : \"John\", \"Age\" : 29}])\n", "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame([[1000, \"Jack\", 21], [1500, \"John\", 29]], columns=[\"Wage\", \"Name\", \"Age\"])\n", "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
Note that when the rows are given as dictionaries, the order of the resulting columns is not necessarily the same as the order in the parameter list. In this case you can use the `columns` parameter to specify the exact order.\n", "\n", "In the earlier case, however, where we created a DataFrame from a dictionary of columns, recent versions of Python and Pandas preserve the insertion order of the parameter dictionary, so the columns appear in the order in which they were given.\n", "
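" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A small demonstration, reusing the rows from above: the `columns` parameter fixes the column order explicitly." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rows = [{\"Wage\": 1000, \"Name\": \"Jack\", \"Age\": 21}, {\"Wage\": 1500, \"Name\": \"John\", \"Age\": 29}]\n", "pd.DataFrame(rows, columns=[\"Name\", \"Age\", \"Wage\"])  # enforce this exact column order" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Columns listed in `columns` but missing from the rows would be created as columns of missing values.\n", "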
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the sense of information content the order of columns should not matter, but sometimes you want to specify a certain order to make the Frame more readable, or to make it obey some semantic meaning of column order." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 1 (cities)
\n", "\n", "Write function `cities` that returns the following DataFrame of top Finnish cities by population:\n", "\n", "```\n", " Population Total area\n", "Helsinki 643272 715.48\n", "Espoo 279044 528.03\n", "Tampere 231853 689.59\n", "Vantaa 223027 240.35\n", "Oulu 201810 3817.52\n", "```\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 2 (powers of series)
\n", "\n", "Make function `powers_of_series` that takes a Series and a positive integer `k` as parameters and returns a DataFrame. The resulting DataFrame should have the same index as the input Series. The first column of the dataFrame should be the input Series, the second column should contain the Series raised to power of two. The third column should contain the Series raised to the power of three, and so on until (and including) power of `k`. The columns should have indices from 1 to k.\n", "\n", "The values should be numbers, but the index can have any type.\n", "Test your function from the `main` function. Example of usage:\n", "\n", "```\n", "s = pd.Series([1,2,3,4], index=list(\"abcd\"))\n", "print(powers_of_series(s, 3))\n", "```\n", "Should print:\n", "```\n", " 1 2 3\n", "a 1 1 1\n", "b 2 4 8\n", "c 3 9 27\n", "d 4 16 64\n", "```\n", "\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 3 (municipal information)
\n", "\n", "In the `main` function load a data set of municipal information from the `src` folder (originally from [Statistics Finland](https://pxnet2.stat.fi/PXWeb/pxweb/en/)). Use the function `pd.read_csv`, and note that the separator is a tabulator.\n", "\n", "Print the shape of the DataFrame (number of rows and columns) and the column names in the following format:\n", "```\n", "Shape: r,c\n", "Columns:\n", "col1 \n", "col2\n", "...\n", "```\n", "\n", "Note, sometimes file ending `tsv` (tab separated values) is used instead of `csv` if the separator is a tab.\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Accessing columns and rows of a dataframe\n", "\n", "Even though DataFrames are basically just two dimensional arrays, the way to access their elements is different from NumPy arrays. There are a couple of complications, which we will go through in this section.\n", "\n", "Firstly, the bracket notation `[]` does not allow the use of an index pair to access a single element of the DataFrame. Instead only one dimension can be specified.\n", "\n", "Well, does this dimension specify the rows of the DataFrame, like NumPy arrays if only one index is given, or does it specify the columns of the DataFrame?\n", "\n", "It depends!\n", "\n", "If an integer is used, then it specifies a column of the DataFrame in the case the **explicit** indices for the column contain that integer. In any other case an error will result. For example, with the above DataFrame, the following indexing will not work, because the explicit column index consist of the column names \"Name\" and \"Wage\" which are not integers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "try:\n", " df[0]\n", "except KeyError:\n", " import sys\n", " print(\"Key error\", file=sys.stderr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following will however work." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[\"Wage\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As does the fancy indexing:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[[\"Wage\", \"Name\"]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If one indexes with a slice or a boolean mask, then the **rows** are referred to. Examples of these:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[0:1] # slice" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[df.Wage > 1200] # boolean mask" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If some of the above calls return a Series object, then you can chain the bracket calls to get a single value from the DataFrame:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[\"Wage\"][1] # Note order of dimensions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But there is a better way to achieve this, which we will see in the next section." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 4 (municipalities of finland)
\n", "\n", "Load again the municipal information DataFrame. The rows of the DataFrame correspond to various geographical areas of Finland. The first row is about Finland as a whole, then rows from Akaa to Äänekoski are municipalities of Finland in alphabetical order. After that some larger regions are listed.\n", "\n", "Write function `municipalities_of_finland` that returns a DataFrame containing only rows about municipalities.\n", "Give an appropriate argument for `pd.read_csv` so that it interprets the column about region name as the (row) index. This way you can index the DataFrame with the names of the regions.\n", "\n", "Test your function from the `main` function.\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 5 (swedish and foreigners)
\n", "\n", "Write function `swedish_and_foreigners` that\n", "\n", "* Reads the municipalities data set\n", "* Takes the subset about municipalities (like in previous exercise)\n", "* Further take a subset of rows that have proportion of Swedish speaking people and proportion of foreigners both above 5 % level\n", "* From this data set take only columns about population, the proportions of Swedish speaking people and foreigners, that is three columns.\n", "\n", "The function should return this final DataFrame.\n", "\n", "Do you see some kind of correlation between the columns about Swedish speaking and foreign people? Do you see correlation between the columns about the population and the proportion of Swedish speaking people in this subset?\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 6 (growing municipalities)
\n", "\n", "Write function `growing_municipalities` that gets subset of municipalities (a DataFrame) as a parameter and returns the proportion of municipalities with increasing population in that subset.\n", "\n", "Test your function from the `main` function using some subset of the municipalities.\n", "Print the proportion as percentages using 1 decimal precision.\n", "\n", "Example output:\n", "\n", "```\n", "Proportion of growing municipalities: 12.4%\n", "```\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Alternative indexing and data selection\n", "\n", "If the explanation in the previous section sounded confusing or ambiguous, or if you didn't understand a thing, you don't have to worry.\n", "\n", "There is another way to index Pandas DataFrames, which\n", "\n", "* allows use of index pairs to access a single element\n", "* has the same order of dimensions as NumPy: first index specifies rows, second columns\n", "* is not ambiguous about implicit or explicit indices\n", "\n", "Pandas DataFrames have attributes `loc` and `iloc` that have the above qualities.\n", "You can use `loc` and `iloc` attributes and forget everything about the previous section. Or you can use these attributes\n", "and sometimes use the methods from the previous section as shortcuts if you understand them well.\n", "\n", "The difference between `loc` and `iloc` attributes is that the former uses explicit indices and the latter uses the implicit integer indices. Examples of use:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.loc[1, \"Wage\"]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.iloc[-1,-1] # Right lower corner of the DataFrame" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.loc[1, [\"Name\", \"Wage\"]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With `iloc` everything works like with NumPy arrays: indexing, slicing, fancy indexing, masking and their combinations. With `loc` it is the same but now the names in the explicit indices are used for specifying rows and columns. Make sure your understand why the above examples work as they do!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 7 (subsetting with loc)
\n", "\n", "Write function `subsetting_with_loc` that in one go takes the subset of municipalities from Akaa to Äänekoski and restricts it to columns: \"Population\", \"Share of Swedish-speakers of the population, %\", and \"Share of foreign citizens of the population, %\".\n", "The function should return this content as a DataFrame. Use the attribute `loc`.\n", "\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 8 (subsetting by positions)
\n", "\n", "Write function `subsetting_by_positions` that does the following.\n", "\n", "Read the data set of the top forty singles from the beginning of the year 1964 from the `src` folder. Return the top 10 entries and only the columns `Title` and `Artist`. Get these elements by their positions, that is, by using a single call to the `iloc` attribute. The function should return these as a DataFrame.\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary statistics\n", "\n", "The summary statistic methods work in a similar way as their counter parts in NumPy. By default, the aggregation is done over columns." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wh = pd.read_csv(\"https://www.cs.helsinki.fi/u/jttoivon/dap/data/fmi/kumpula-weather-2017.csv\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wh2 = wh.drop([\"Year\", \"m\", \"d\"], axis=1) # taking averages over these is not very interesting\n", "wh2.mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `describe` method of the `DataFrame` object gives different summary statistics for each (numeric) column. The result is a DataFrame. This method gives a good overview of the data, and is typically used in the exploratory data analysis phase." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wh.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 9 (snow depth)
\n", "\n", "Write function `snow_depth` that reads in the weather DataFrame from the `src` folder and returns the maximum amount of snow in the year 2017.\n", "\n", "Print the result in the `main` function in the following form:\n", "```\n", "Max snow depth: xx.x\n", "```\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 10 (average temperature)
\n", "\n", "Write function `average_temperature` that reads the weather data set and returns the average temperature in July.\n", "\n", "Print the result in the `main` function in the following form:\n", "```\n", "Average temperature in July: xx.x\n", "```\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 11 (below zero)
\n", "\n", "Write function `below_zero` that returns the number of days when the temperature was below zero.\n", "\n", "Print the result in the main function in the following form:\n", "\n", "```\n", "Number of days below zero: xx\n", "```\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Missing data\n", "\n", "You may have noticed something strange in the output of the `describe` method. First, the minimum value in both precipitation and snow depth fields is -1. The special value -1 means that on that day there was absolutely no snow or rain, whereas the value 0 might indicate that the value was close to zero. Secondly, the snow depth column has count 358, whereas the other columns have count 365, one measurement/value for each day of the year. How is this possible? Every field in a DataFrame should have the same number of rows. Let's use the `unique` method of the Series object to find out, which different values are used in this column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wh[\"Snow depth (cm)\"].unique()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `float` type allows a special value `nan` (Not A Number), in addition to normal floating point numbers. This value can represent the result from an illegal operation. For example, the operation 0/0 can either cause an exception to occur or just silently produce a `nan`. In Pandas `nan` can be used to represent a missing value. In the weather DataFrame the `nan` value tells us that the measurement from that day is not available, possibly due to a broken measuring instrument or some other problem.\n", "\n", "Note that only float types allow the `nan` value (in Python, NumPy or Pandas). So, if we try to create an integer series with missing values, its dtype gets promoted to `float`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.Series([1,3,2])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.Series([1,3,2, np.nan])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For non-numeric types the special value `None` is used to denote a missing value, and the dtype is promoted to `object`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.Series([\"jack\", \"joe\", None])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Pandas excludes the missing values from the summary statistics, like we saw in the previous section. Pandas also provides some functions to handle missing values.\n", "\n", "The missing values can be located with the `isnull` method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wh.isnull() # returns a boolean mask DataFrame" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is not very useful as we cannot directly use the mask to index the DataFrame. We can, however, combine it with the `any` method to find out all the rows that contain at least one missing value:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wh[wh.isnull().any(axis=1)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `notnull` method works conversively to the `isnull` method." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `dropna` method of a DataFrame drops columns or rows that contain missing values from the DataFrame, depending on the `axis` parameter." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wh.dropna().shape # Default axis is 0, i.e. rows are dropped" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wh.dropna(axis=1).shape # Drops the columns containing missing values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `how` and `thresh` parameters of the `dropna` method allow one to specify how many values need to be missing in order for the row/column to be dropped.\n", "\n", "The `fillna` method allows one to fill the missing values with some constant or interpolated values. The `method` parameter can be:\n", "\n", "* `None`: use the given positional parameter as the constant to fill missing values with\n", "* `ffill`: use the previous value to fill the current value\n", "* `bfill`: use the next value to fill the current value\n", "\n", "For example, for the weather data we could use forward fill:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wh = wh.fillna(method='ffill')\n", "wh[wh.isnull().any(axis=1)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `interpolate` method, which we will not cover here, offers more elaborate ways to interpolate the missing values from their neighbouring non-missing values." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 
Exercise 12 (cyclists)
\n", "\n", "Write function `cyclists` that does the following.\n", "\n", "Load the Helsinki bicycle data set from the `src` folder (https://hri.fi/data/dataset//helsingin-pyorailijamaarat). The dataset contains the number of cyclists passing by measuring points per hour. The data is gathered over about four years, and there are 20 measuring points around Helsinki. The dataset contains some empty rows at the end. Get rid of these. Also, get rid of columns that contain only missing values. Return the cleaned dataset. \n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 13 (missing value types)
\n", "\n", "Make function `missing_value_types` that returns the following DataFrame. Use the `State` column as the (row) index. The value types for the two other columns should be `float` and `object`, respectively. Replace the dashes with the appropriate missing value symbols.\n", "\n", "State | Year of independence | President\n", "------|----------------------|----------\n", "United Kingdom | - | -\n", "Finland | 1917 | Niinistö\n", "USA | 1776 | Trump\n", "Sweden | 1523 | -\n", "Germany | - | Steinmeier\n", "Russia | 1992 | Putin\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 14 (special missing values)
\n", "\n", "Write function `special_missing_values` that does the following.\n", "\n", "Read the data set of the top forty singles from the beginning of the year 1964 from the `src` folder. Return the rows whose singles' position dropped compared to last week's position (column LW=Last Week).\n", "\n", "To do this you first have to convert the special values \"New\" and \"Re\" (Re-entry) to missing values (`None`).\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 15 (last week)
\n", "\n", "This exercise can give two points at maximum!\n", "\n", "Write function `last_week` that reads the top40 data set mentioned in the above exercise. The function should then try to reconstruct the top40 list of the previous week based on that week's list. Try to do this as well as possible. You can fill the values that are impossible to reconstruct by missing value symbols. Your solution should work for a top40 list of any week. So don't rely on specific features of this top40 list. The column `WoC` means \"Weeks on Chart\", that is, on how many weeks this song has been on the top 40 list.\n", "\n", "Hint. First create the last week's top40 list of those songs that are also on this week's list. Then add those entries that were not on this week's list. Finally sort by position.\n", "\n", "Hint 2. The `where` method of Series and DataFrame can be useful. It can also be nested.\n", "\n", "Hint 3. Like in NumPy, you can use with Pandas the bitwise operators `&`, `|`, and `~`.\n", "Remember that he bitwise operators have higher precedence than the comparison operations, so you may\n", "have to use parentheses around comparisons, if you combined result of comparisons with bitwise operators.\n", "\n", "You get a second point, if you get the columns `LW` and `Peak Pos` correct.\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Converting columns from one type to another\n", "\n", "There are several ways of converting a column to another type. For converting single columns (a Series) one can use the `pd.to_numeric` function or the `map` method. For converting several columns in one go one can use the `astype` method. We will give a few examples of use of these methods/functions. For more details, look from the Pandas documentation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.Series([\"1\",\"2\"]).map(int) # str -> int" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.Series([1,2]).map(str) # int -> str" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.to_numeric(pd.Series([1,1.0]), downcast=\"integer\") # object -> int" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.to_numeric(pd.Series([1,\"a\"]), errors=\"coerce\") # conversion error produces Nan" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.Series([1,2]).astype(str) # works for a single series" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame({\"a\": [1,2,3], \"b\" : [4,5,6], \"c\" : [7,8,9]})\n", "print(df.dtypes)\n", "print(df)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.astype(float) # Convert all columns" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df2 = df.astype({\"b\" : float, \"c\" : str}) # different types for columns\n", "print(df2.dtypes)\n", "print(df2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## String processing\n", "\n", "If the elements in a column are strings, then the vectorized versions of Python's string processing methods are available. These are accessed through the `str` attribute of a Series or a DataFrame. For example, to capitalize all the strings of a Series, we can use the `str.capitalize` method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "names = pd.Series([\"donald\", \"theresa\", \"angela\", \"vladimir\"])\n", "names.str.capitalize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One can find all the available methods by pressing the tab key after the text `names.str.` in a Python prompt. Try it in below cell!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#names.str." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can split a column or Series into several columns using the `split` method. For example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "full_names = pd.Series([\"Donald Trump\", \"Theresa May\", \"Angela Merkel\", \"Vladimir Putin\"])\n", "full_names.str.split()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is not exactly what we wanted: now each element is a list. We need to use the `expand` parameter to split into columns:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "full_names.str.split(expand=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 16 (split date)
\n", "\n", "Read again the bicycle data set from `src` folder,\n", "and clean it as in the earlier exercise. Then split the `Päivämäärä` column into a DataFrame with five columns with column names `Weekday`, `Day`, `Month`, `Year`, and `Hour`. Note that you also need to to do some conversions. To get Hours, drop the colon and minutes. Convert field `Weekday` according the following rule:\n", "```\n", "ma -> Mon\n", "ti -> Tue\n", "ke -> Wed\n", "to -> Thu\n", "pe -> Fri\n", "la -> Sat\n", "su -> Sun\n", "```\n", "Convert the `Month` column according to the following mapping\n", "```\n", "tammi 1\n", "helmi 2\n", "maalis 3\n", "huhti 4\n", "touko 5\n", "kesä 6\n", "heinä 7\n", "elo 8\n", "syys 9\n", "loka 10\n", "marras 11\n", "joulu 12\n", "```\n", "\n", "Create function `split_date` that does the above and returns a DataFrame with five columns. You may want to use the `map` method of Series objects.\n", "\n", "So the first element in the `Päivämäärä` column of the original data set should be converted from\n", "`ke 1 tammi 2014 00:00`\n", "to\n", "`Wed 1 1 2014 0` . Test your solution from the `main` function.\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "####
Exercise 17 (cleaning data)
\n", "\n", "This exercise can give two points at maximum!\n", "\n", "The entries in the following table of US presidents are not uniformly formatted. Make function `cleaning_data` that reads the table from the tsv file `src/presidents.tsv` and returns the cleaned version of it. Note, you must do the edits programmatically using the string edit methods, not by creating a new DataFrame by hand. The columns should have `dtype`s `object`, `integer`, `float`, `integer`, `object`. The `where` method of DataFrames can be helpful, likewise the [string methods](http://pandas.pydata.org/pandas-docs/stable/api.html#string-handling) of Series objects. You get an additional point, if you manage to get the columns President and Vice-president right!\n", "\n", "President |\tStart |\tLast |\tSeasons | \tVice-president|\n", "----------|-------|------|----------|------------------|\n", "donald trump|\t2017 Jan|\t-|\t1|\tMike pence\n", "barack obama|\t2009|\t2017|\t2|\tjoe Biden\n", "bush, george|\t2001|\t2009|\t2|\tCheney, dick\n", "Clinton, Bill|\t1993|\t2001|\ttwo|\tgore, Al" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Additional information\n", "\n", "We covered subsetting of DataFrames with the indexers `[]`, `.loc[]`, and `.iloc[]` quite concisely.\n", "For a more verbose explanation, look at the [tutorials at Dunder Data](https://medium.com/dunder-data/pandas-tutorials/home). Especially, the problems with chained indexing operators (like `df[\"a\"][1]`) are explained well there (tutorial 4), which we did not cover at all. As a rule of thumb: one should avoid chained indexing combined with assignment! See [Pandas documentation](http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#why-does-assignment-fail-when-using-chained-indexing)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary (week 4)\n", "\n", "* You can create DataFrames in several ways:\n", " * By reading from a csv file\n", " * Out out two dimensional NumPy array\n", " * Out of rows\n", " * Out of columns\n", "* You know how to access rows, columns and individual elements of DataFrames\n", "* You can use the `describe` method to get a quick overview of a DataFrame\n", "* You know how missing values are represented in Series and DataFrames, and you know how to manipulate them\n", "* There are similarities between Python's string methods and the vectorized forms of string operations in Series and DataFrames\n", "* You can do complicated text processing with the `str.replace` method combined with regular expressions\n", "* The powerful `where` method is the vectorized form of Python's `if-else` construct\n", "* We remember that with NumPy arrays we preferred vectorized operations instead of, for instance, `for` loops. Same goes with Pandas. It may first feel that things are easier to achieve with loops, but after a while vectorized operations will feel natural." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "\"Open\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.1" } }, "nbformat": 4, "nbformat_minor": 2 }