Applied data science using Pyspark : learn the end-to-end predictive model-building cycle

Bibliographic Details
Main Author: Kakarla, Ramcharan.
Other Authors: Krishnan, Sundar., Alla, Sridhar.
Format: eBook
Language: English
Published: Berkeley, CA : Apress, 2021.
Subjects:
ISBN: 9781484265000
1484265009
9781484265017
1484265017
1484264991
9781484264997
Physical Description: 1 online resource (427 pages)


LEADER 06804cam a2200529 a 4500
001 kn-on1228037528
003 OCoLC
005 20240717213016.0
006 m o d
007 cr cn|||||||||
008 201226s2021 cau o 001 0 eng d
040 |a EBLCP  |b eng  |e pn  |c EBLCP  |d YDX  |d ERF  |d OCLCF  |d GW5XE  |d OCLCO  |d VT2  |d SFB  |d N$T  |d OCL  |d K6U  |d OCLCQ  |d OCLCO  |d OCLCQ  |d OCLCO  |d OCLCL 
020 |a 9781484265000  |q (electronic bk.) 
020 |a 1484265009  |q (electronic bk.) 
020 |a 9781484265017  |q (print) 
020 |a 1484265017 
020 |z 1484264991 
020 |z 9781484264997 
024 7 |a 10.1007/978-1-4842-6500-0  |2 doi 
035 |a (OCoLC)1228037528  |z (OCoLC)1227448842  |z (OCoLC)1232856870  |z (OCoLC)1235845480  |z (OCoLC)1240522822 
100 1 |a Kakarla, Ramcharan. 
245 1 0 |a Applied data science using Pyspark :  |b learn the end-to-end predictive model-building cycle /  |c Ramcharan Kakarla, Sundar Krishnan, Sridhar Alla. 
260 |a Berkeley, CA :  |b Apress,  |c 2021. 
300 |a 1 online resource (427 pages) 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
505 0 |a Intro -- Table of Contents -- About the Authors -- About the Technical Reviewer -- Acknowledgments -- Foreword 1 -- Foreword 2 -- Foreword 3 -- Introduction -- Chapter 1: Setting Up the PySpark Environment -- Local Installation using Anaconda -- Step 1: Install Anaconda -- Step 2: Conda Environment Creation -- Step 3: Download and Unpack Apache Spark -- Step 4: Install Java 8 or Later -- Step 5: Mac & Linux Users -- Step 6: Windows Users -- Step 7: Run PySpark -- Step 8: Jupyter Notebook Extension -- Docker-based Installation -- Why Do We Need to Use Docker? -- What Is Docker? 
505 8 |a Create a Simple Docker Image -- Download PySpark Docker -- Step-by-Step Approach to Understanding the Docker PySpark run Command -- Databricks Community Edition -- Create Databricks Account -- Create a New Cluster -- Create Notebooks -- How Do You Import Data Files into the Databricks Environment? -- Basic Operations -- Upload Data -- Access Data -- Calculate Pi -- Summary -- Chapter 2: PySpark Basics -- PySpark Background -- PySpark Resilient Distributed Datasets (RDDs) and DataFrames -- Data Manipulations -- Reading Data from a File -- Reading Data from Hive Table -- Reading Metadata 
505 8 |a Counting Records -- Subset Columns and View a Glimpse of the Data -- Missing Values -- One-Way Frequencies -- Sorting and Filtering One-Way Frequencies -- Casting Variables -- Descriptive Statistics -- Unique/Distinct Values and Counts -- Filtering -- Creating New Columns -- Deleting and Renaming Columns -- Summary -- Chapter 3: Utility Functions and Visualizations -- Additional Data Manipulations -- String Functions -- Registering DataFrames -- Window Functions -- Other Useful Functions -- Collect List -- Sampling -- Caching and Persisting -- Saving Data -- Pandas Support -- Joins 
505 8 |a Dropping Duplicates -- Data Visualizations -- Introduction to Machine Learning -- Summary -- Chapter 4: Variable Selection -- Exploratory Data Analysis -- Cardinality -- Missing Values -- Missing at Random (MAR) -- Missing Completely at Random (MCAR) -- Missing Not at Random (MNAR) -- Code 1: Cardinality Check -- Code 2: Missing Values Check -- Step 1: Identify Variable Types -- Step 2: Apply StringIndexer to Character Columns -- Step 3: Assemble Features -- Built-in Variable Selection Process: Without Target -- Principal Component Analysis -- Mechanics -- Singular Value Decomposition 
505 8 |a Built-in Variable Selection Process: With Target -- ChiSq Selector -- Model-based Feature Selection -- Custom-built Variable Selection Process -- Information Value Using Weight of Evidence -- Monotonic Binning Using Spearman Correlation -- How Do You Calculate the Spearman Correlation by Hand? -- How Is Spearman Correlation Used to Create Monotonic Bins for Continuous Variables? -- Custom Transformers -- Main Concepts in Pipelines -- Voting-based Selection -- Summary -- Chapter 5: Supervised Learning Algorithms -- Basics -- Regression -- Classification -- Loss Functions -- Optimizers 
500 |a Gradient Descent. 
500 |a Includes index. 
506 |a Full text is available only from IP addresses of Tomas Bata University in Zlín computers, or via remote access for university employees and students 
520 |a Discover the capabilities of PySpark and its application in the realm of data science. This comprehensive guide with hand-picked examples of daily use cases will walk you through the end-to-end predictive model-building cycle with the latest techniques and tricks of the trade. Applied Data Science Using PySpark is divided into six sections that walk you through the book. In section 1, you start with the basics of PySpark, focusing on data manipulation. We make you comfortable with the language and then build upon it to introduce you to the mathematical functions available off the shelf. In section 2, you will dive into the art of variable selection, where we demonstrate various selection techniques available in PySpark. In section 3, we take you on a journey through machine learning algorithms, implementations, and fine-tuning techniques. We will also talk about different validation metrics and how to use them for picking the best models. Sections 4 and 5 go through machine learning pipelines and various methods available to operationalize the model and serve it through Docker/an API. In the final section, you will cover reusable objects for easy experimentation and learn some tricks that can help you optimize your programs and machine learning pipelines. By the end of this book, you will have seen the flexibility and advantages of PySpark in data science applications. This book is recommended for those who want to unleash the power of parallel computing by simultaneously working with big datasets. You will: Build an end-to-end predictive model; Implement multiple variable selection techniques; Operationalize models; Master multiple algorithms and implementations. 
590 |a Knovel  |b Knovel (All titles) 
650 0 |a Big data. 
650 0 |a Machine learning. 
650 0 |a Python (Computer program language) 
650 0 |a Parallel processing (Electronic computers) 
655 7 |a elektronické knihy  |7 fd186907  |2 czenas 
655 9 |a electronic books  |2 eczenas 
700 1 |a Krishnan, Sundar. 
700 1 |a Alla, Sridhar. 
776 0 8 |i Print version:  |a Kakarla, Ramcharan.  |t Applied Data Science Using Pyspark : Learn the End-To-End Predictive Model-Building Cycle.  |d Berkeley, CA : Apress L.P., ©2021  |z 9781484264997 
856 4 0 |u https://proxy.k.utb.cz/login?url=https://app.knovel.com/hotlink/toc/id:kpADSUPSL1/applied-data-science?kpromoter=marc  |y Full text