|
| 1 | +{ |
| 2 | + "cells": [ |
| 3 | + { |
| 4 | + "cell_type": "markdown", |
| 5 | + "metadata": {}, |
| 6 | + "source": [ |
| 7 | + "# Dataframe basics\n", |
| 8 | + "### Dr. Tirthajyoti Sarkar, Fremont, CA 94536\n", |
| 9 | + "### Apache Spark\n", |
| 10 | + "Apache Spark is one of the hottest new trends in the technology domain. It is the framework with probably the highest potential to realize the fruit of the marriage between Big Data and Machine Learning. It runs fast (up to 100x faster than traditional Hadoop MapReduce) due to in-memory operation, offers robust, distributed, fault-tolerant data objects (called RDD), and inte-grates beautifully with the world of machine learning and graph analytics through supplementary\n", |
| 11 | + "packages like Mlib and GraphX.\n", |
| 12 | + "\n", |
| 13 | + "Spark is implemented on Hadoop/HDFS and written mostly in Scala, a functional programming language, similar to Java. In fact, Scala needs the latest Java installation on your system and runs on JVM. However, for most of the beginners, Scala is not a language that they learn first to venture into the world of data science. Fortunately, Spark provides a wonderful Python integration, called PySpark, which lets Python programmers to interface with the Spark framework and\n", |
| 14 | + "learn how to manipulate data at scale and work with objects and algorithms over a distributed file system.\n", |
| 15 | + "\n", |
| 16 | + "### Dataframe\n", |
| 17 | + "In Apache Spark, a DataFrame is a distributed collection of rows under named columns. It is conceptually equivalent to a table in a relational database, an Excel sheet with Column headers, or a data frame in R/Python, but with richer optimizations under the hood. DataFrames can be constructed from a wide array of sources such as: structured data files, tables in Hive, external databases, or existing RDDs. It also shares some common characteristics with RDD:\n", |
| 18 | + "\n", |
| 19 | + "* Immutable in nature : We can create DataFrame / RDD once but can’t change it. And we can transform a DataFrame / RDD after applying transformations.\n", |
| 20 | + "* Lazy Evaluations: Which means that a task is not executed until an action is performed.\n", |
| 21 | + "* Distributed: RDD and DataFrame both are distributed in nature.\n", |
| 22 | + "\n", |
| 23 | + "### Advantages of the DataFrame\n", |
| 24 | + "* DataFrames are designed for processing large collection of structured or semi-structured data.\n", |
| 25 | + "* Observations in Spark DataFrame are organised under named columns, which helps Apache Spark to understand the schema of a DataFrame. This helps Spark optimize execution plan on these queries.\n", |
| 26 | + "* DataFrame in Apache Spark has the ability to handle petabytes of data.\n", |
| 27 | + "* DataFrame has a support for wide range of data format and sources.\n", |
| 28 | + "* It has API support for different languages like Python, R, Scala, Java." |
| 29 | + ] |
| 30 | + }, |
| 31 | + { |
| 32 | + "cell_type": "code", |
| 33 | + "execution_count": 1, |
| 34 | + "metadata": {}, |
| 35 | + "outputs": [], |
| 36 | + "source": [ |
| 37 | + "import pyspark\n", |
| 38 | + "from pyspark import SparkContext as sc\n", |
| 39 | + "from pyspark.sql import Row" |
| 40 | + ] |
| 41 | + }, |
| 42 | + { |
| 43 | + "cell_type": "markdown", |
| 44 | + "metadata": {}, |
| 45 | + "source": [ |
| 46 | + "### Create a `SparkSession` app object" |
| 47 | + ] |
| 48 | + }, |
| 49 | + { |
| 50 | + "cell_type": "code", |
| 51 | + "execution_count": 2, |
| 52 | + "metadata": {}, |
| 53 | + "outputs": [], |
| 54 | + "source": [ |
| 55 | + "from pyspark.sql import SparkSession" |
| 56 | + ] |
| 57 | + }, |
| 58 | + { |
| 59 | + "cell_type": "code", |
| 60 | + "execution_count": 3, |
| 61 | + "metadata": {}, |
| 62 | + "outputs": [], |
| 63 | + "source": [ |
| 64 | + "spark1 = SparkSession.builder.appName('Basics').getOrCreate()" |
| 65 | + ] |
| 66 | + }, |
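|  | + { |
|  | + "cell_type": "markdown", |
|  | + "metadata": {}, |
|  | + "source": [ |
|  | + "#### `getOrCreate()` returns an already-running session if one exists, otherwise it builds a new one. As a quick sanity check (a small optional sketch), the session object exposes the Spark version:"
|  | + ] |
|  | + }, |
|  | + { |
|  | + "cell_type": "code", |
|  | + "execution_count": null, |
|  | + "metadata": {}, |
|  | + "outputs": [], |
|  | + "source": [ |
|  | + "# The version string of the underlying Spark installation\n", |
|  | + "spark1.version" |
|  | + ] |
|  | + }, |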
| 67 | + { |
| 68 | + "cell_type": "markdown", |
| 69 | + "metadata": {}, |
| 70 | + "source": [ |
| 71 | + "### Read in a JSON file" |
| 72 | + ] |
| 73 | + }, |
| 74 | + { |
| 75 | + "cell_type": "code", |
| 76 | + "execution_count": 4, |
| 77 | + "metadata": {}, |
| 78 | + "outputs": [], |
| 79 | + "source": [ |
| 80 | + "df = spark1.read.json('Data/people.json')" |
| 81 | + ] |
| 82 | + }, |
| 83 | + { |
| 84 | + "cell_type": "markdown", |
| 85 | + "metadata": {}, |
| 86 | + "source": [ |
| 87 | + "#### Unlike Pandas DataFrame, it does not show itself when called" |
| 88 | + ] |
| 89 | + }, |
| 90 | + { |
| 91 | + "cell_type": "code", |
| 92 | + "execution_count": 5, |
| 93 | + "metadata": {}, |
| 94 | + "outputs": [ |
| 95 | + { |
| 96 | + "data": { |
| 97 | + "text/plain": [ |
| 98 | + "DataFrame[age: bigint, name: string]" |
| 99 | + ] |
| 100 | + }, |
| 101 | + "execution_count": 5, |
| 102 | + "metadata": {}, |
| 103 | + "output_type": "execute_result" |
| 104 | + } |
| 105 | + ], |
| 106 | + "source": [ |
| 107 | + "df" |
| 108 | + ] |
| 109 | + }, |
| 110 | + { |
| 111 | + "cell_type": "markdown", |
| 112 | + "metadata": {}, |
| 113 | + "source": [ |
| 114 | + "#### You have to call show() method to evaluate it i.e. show it" |
| 115 | + ] |
| 116 | + }, |
| 117 | + { |
| 118 | + "cell_type": "code", |
| 119 | + "execution_count": 6, |
| 120 | + "metadata": {}, |
| 121 | + "outputs": [ |
| 122 | + { |
| 123 | + "name": "stdout", |
| 124 | + "output_type": "stream", |
| 125 | + "text": [ |
| 126 | + "+----+-------+\n", |
| 127 | + "| age| name|\n", |
| 128 | + "+----+-------+\n", |
| 129 | + "|null|Michael|\n", |
| 130 | + "| 30| Andy|\n", |
| 131 | + "| 19| Justin|\n", |
| 132 | + "+----+-------+\n", |
| 133 | + "\n" |
| 134 | + ] |
| 135 | + } |
| 136 | + ], |
| 137 | + "source": [ |
| 138 | + "df.show()" |
| 139 | + ] |
| 140 | + }, |
| 141 | + { |
| 142 | + "cell_type": "markdown", |
| 143 | + "metadata": {}, |
| 144 | + "source": [ |
| 145 | + "#### Use `printSchema()` to show he schema of the data. Note, how tightly it is integrated to the SQL-like framework. You can even see that the schema accepts `null` values because nullable property is set `True`." |
| 146 | + ] |
| 147 | + }, |
| 148 | + { |
| 149 | + "cell_type": "code", |
| 150 | + "execution_count": 7, |
| 151 | + "metadata": {}, |
| 152 | + "outputs": [ |
| 153 | + { |
| 154 | + "name": "stdout", |
| 155 | + "output_type": "stream", |
| 156 | + "text": [ |
| 157 | + "root\n", |
| 158 | + " |-- age: long (nullable = true)\n", |
| 159 | + " |-- name: string (nullable = true)\n", |
| 160 | + "\n" |
| 161 | + ] |
| 162 | + } |
| 163 | + ], |
| 164 | + "source": [ |
| 165 | + "df.printSchema()" |
| 166 | + ] |
| 167 | + }, |
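|  | + { |
|  | + "cell_type": "markdown", |
|  | + "metadata": {}, |
|  | + "source": [ |
|  | + "#### If you prefer to control the schema rather than let Spark infer it, you can pass an explicit `StructType` to the reader. The cell below is only a minimal sketch, assuming the same `Data/people.json` file; the resulting schema should match the inferred one above."
|  | + ] |
|  | + }, |
|  | + { |
|  | + "cell_type": "code", |
|  | + "execution_count": null, |
|  | + "metadata": {}, |
|  | + "outputs": [], |
|  | + "source": [ |
|  | + "from pyspark.sql.types import StructType, StructField, LongType, StringType\n", |
|  | + "\n", |
|  | + "# Define the schema explicitly instead of relying on inference\n", |
|  | + "schema = StructType([\n", |
|  | + "    StructField('age', LongType(), True),   # nullable = true\n", |
|  | + "    StructField('name', StringType(), True)\n", |
|  | + "])\n", |
|  | + "\n", |
|  | + "df_explicit = spark1.read.json('Data/people.json', schema=schema)\n", |
|  | + "df_explicit.printSchema()" |
|  | + ] |
|  | + }, |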
| 168 | + { |
| 169 | + "cell_type": "markdown", |
| 170 | + "metadata": {}, |
| 171 | + "source": [ |
| 172 | + "#### Fortunately a simple `columns` method exists to get column names back as a Python list" |
| 173 | + ] |
| 174 | + }, |
| 175 | + { |
| 176 | + "cell_type": "code", |
| 177 | + "execution_count": 9, |
| 178 | + "metadata": {}, |
| 179 | + "outputs": [ |
| 180 | + { |
| 181 | + "data": { |
| 182 | + "text/plain": [ |
| 183 | + "['age', 'name']" |
| 184 | + ] |
| 185 | + }, |
| 186 | + "execution_count": 9, |
| 187 | + "metadata": {}, |
| 188 | + "output_type": "execute_result" |
| 189 | + } |
| 190 | + ], |
| 191 | + "source": [ |
| 192 | + "df.columns" |
| 193 | + ] |
| 194 | + }, |
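|  | + { |
|  | + "cell_type": "markdown", |
|  | + "metadata": {}, |
|  | + "source": [ |
|  | + "#### Relatedly, the `dtypes` attribute gives back the column names together with their data types as a list of tuples"
|  | + ] |
|  | + }, |
|  | + { |
|  | + "cell_type": "code", |
|  | + "execution_count": null, |
|  | + "metadata": {}, |
|  | + "outputs": [], |
|  | + "source": [ |
|  | + "# Each element is a (column_name, data_type) tuple\n", |
|  | + "df.dtypes" |
|  | + ] |
|  | + }, |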
| 195 | + { |
| 196 | + "cell_type": "markdown", |
| 197 | + "metadata": {}, |
| 198 | + "source": [ |
| 199 | + "#### Similar to Pandas, the `describe` method is used for the statistical summary. But unlike Pandas, calling only `describe()` returns a DataFrame! This is due to lazy evaluation - the actual computation is delayed as much as possible." |
| 200 | + ] |
| 201 | + }, |
| 202 | + { |
| 203 | + "cell_type": "code", |
| 204 | + "execution_count": 10, |
| 205 | + "metadata": {}, |
| 206 | + "outputs": [ |
| 207 | + { |
| 208 | + "data": { |
| 209 | + "text/plain": [ |
| 210 | + "DataFrame[summary: string, age: string, name: string]" |
| 211 | + ] |
| 212 | + }, |
| 213 | + "execution_count": 10, |
| 214 | + "metadata": {}, |
| 215 | + "output_type": "execute_result" |
| 216 | + } |
| 217 | + ], |
| 218 | + "source": [ |
| 219 | + "df.describe()" |
| 220 | + ] |
| 221 | + }, |
| 222 | + { |
| 223 | + "cell_type": "markdown", |
| 224 | + "metadata": {}, |
| 225 | + "source": [ |
| 226 | + "#### We have to call `show` again" |
| 227 | + ] |
| 228 | + }, |
| 229 | + { |
| 230 | + "cell_type": "code", |
| 231 | + "execution_count": 11, |
| 232 | + "metadata": {}, |
| 233 | + "outputs": [ |
| 234 | + { |
| 235 | + "name": "stdout", |
| 236 | + "output_type": "stream", |
| 237 | + "text": [ |
| 238 | + "+-------+------------------+-------+\n", |
| 239 | + "|summary| age| name|\n", |
| 240 | + "+-------+------------------+-------+\n", |
| 241 | + "| count| 2| 3|\n", |
| 242 | + "| mean| 24.5| null|\n", |
| 243 | + "| stddev|7.7781745930520225| null|\n", |
| 244 | + "| min| 19| Andy|\n", |
| 245 | + "| max| 30|Michael|\n", |
| 246 | + "+-------+------------------+-------+\n", |
| 247 | + "\n" |
| 248 | + ] |
| 249 | + } |
| 250 | + ], |
| 251 | + "source": [ |
| 252 | + "df.describe().show()" |
| 253 | + ] |
| 254 | + }, |
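|  | + { |
|  | + "cell_type": "markdown", |
|  | + "metadata": {}, |
|  | + "source": [ |
|  | + "#### As a quick illustration of the immutability and lazy-evaluation points from the introduction: a transformation such as `withColumn` does not modify `df` but returns a brand-new DataFrame, and nothing is computed until an action like `show()` is called. This is only a small sketch; the column name `age_plus_ten` is arbitrary."
|  | + ] |
|  | + }, |
|  | + { |
|  | + "cell_type": "code", |
|  | + "execution_count": null, |
|  | + "metadata": {}, |
|  | + "outputs": [], |
|  | + "source": [ |
|  | + "# withColumn is a transformation: it returns a new DataFrame and leaves df untouched\n", |
|  | + "df2 = df.withColumn('age_plus_ten', df['age'] + 10)\n", |
|  | + "\n", |
|  | + "# Nothing has been computed yet; show() is the action that triggers evaluation\n", |
|  | + "df2.show()\n", |
|  | + "\n", |
|  | + "# The original DataFrame still has only its original columns\n", |
|  | + "df.show()" |
|  | + ] |
|  | + }, |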
| 255 | + { |
| 256 | + "cell_type": "code", |
| 257 | + "execution_count": null, |
| 258 | + "metadata": {}, |
| 259 | + "outputs": [], |
| 260 | + "source": [] |
| 261 | + } |
| 262 | + ], |
| 263 | + "metadata": { |
| 264 | + "kernelspec": { |
| 265 | + "display_name": "Python 3", |
| 266 | + "language": "python", |
| 267 | + "name": "python3" |
| 268 | + }, |
| 269 | + "language_info": { |
| 270 | + "codemirror_mode": { |
| 271 | + "name": "ipython", |
| 272 | + "version": 3 |
| 273 | + }, |
| 274 | + "file_extension": ".py", |
| 275 | + "mimetype": "text/x-python", |
| 276 | + "name": "python", |
| 277 | + "nbconvert_exporter": "python", |
| 278 | + "pygments_lexer": "ipython3", |
| 279 | + "version": "3.6.8" |
| 280 | + } |
| 281 | + }, |
| 282 | + "nbformat": 4, |
| 283 | + "nbformat_minor": 2 |
| 284 | +} |