Helper functions for data science using Python and Spark

Helper functions for the Python data science stack, as well as Spark, AWS, and Jupyter.

This library contains helpers and wrappers for common data science libraries in the Python stack.
There are also functions that simplify common manipulations for machine learning and data science
in general, as well as interfacing with tools such as Spark, AWS S3, and Jupyter.
```bash
# PyPI
pip install seipy
```
```python
from seipy import apply_uniq

df2 = apply_uniq(df, orig_col, new_col, _func)
```
This will return the same DataFrame as performing:
```python
df[new_col] = df[orig_col].apply(_func)
```
but is much more performant when there are many duplicate entries in `orig_col`.
It works by applying the function `_func` only to the unique entries and then merging with the original DataFrame.
Originally answered on Stack Overflow:
https://stackoverflow.com/questions/46798532/how-do-you-effectively-use-pd-dataframe-apply-on-rows-with-duplicate-values/
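For intuition, here is a minimal sketch of that unique-then-merge idea (the name `apply_on_unique` and the dict-based mapping are illustrative, not seipy's actual implementation):
```python
import pandas as pd

def apply_on_unique(df, orig_col, new_col, func):
    # Compute func once per unique value, then map the results
    # back onto every row instead of calling func row by row.
    mapping = {v: func(v) for v in df[orig_col].unique()}
    out = df.copy()
    out[new_col] = out[orig_col].map(mapping)
    return out
```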
```python
from seipy import filt

# example with keyword arguments
filt(df,
     season="summer",
     age=(">", 18),
     sport=("isin", ["Basketball", "Soccer"]),
     name=("contains", "Armstrong")
     )

# example with dict notation
a = {'season': "summer", 'age': (">", 18)}
filt(df, **a)
```
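For reference, the keyword-argument call above should behave like the following chained boolean masks in plain pandas (an assumption based on the operators shown, not seipy's internals):
```python
mask = (
    (df["season"] == "summer")
    & (df["age"] > 18)
    & df["sport"].isin(["Basketball", "Soccer"])
    & df["name"].str.contains("Armstrong")
)
df_filtered = df[mask]
```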
```python
from seipy import distmat

distmat()
```
This prints the available distance metrics, such as "euclidean", "chebyshev", and "hamming".
```python
distmat(fframe, metric)
```
This generates a distance matrix from `fframe` using `metric`.
Note: this function is a wrapper around scipy.spatial.distance.cdist.
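Since it wraps cdist, the underlying call looks roughly like this (a sketch; exactly how seipy passes `fframe` through is an assumption):
```python
import numpy as np
from scipy.spatial.distance import cdist

X = np.random.rand(5, 3)             # 5 samples, 3 features
D = cdist(X, X, metric="euclidean")  # 5x5 matrix of pairwise distances
```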
```python
from seipy import notebook_contains

notebook_contains(search_str,
                  on_docker=False,
                  git_dir='~/git/experiments/',
                  start_date='2015-01-01', end_date='2018-12-31')
```
Prints a list of notebooks that contain the string `search_str`.
Very useful for situations like: “Where’s that notebook where I was trying that one thing that one time?”
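If you want the gist of how such a search can work, here is a rough sketch (filtering on file modification time is an assumption about how the date window is applied):
```python
import datetime
import glob
import os

def grep_notebooks(search_str, git_dir="~/git/experiments/",
                   start_date="2015-01-01", end_date="2018-12-31"):
    # Scan .ipynb files under git_dir and print those modified inside
    # the date window whose raw JSON contains search_str.
    root = os.path.expanduser(git_dir)
    lo = datetime.date.fromisoformat(start_date)
    hi = datetime.date.fromisoformat(end_date)
    for path in glob.glob(os.path.join(root, "**", "*.ipynb"), recursive=True):
        mtime = datetime.date.fromtimestamp(os.path.getmtime(path))
        if lo <= mtime <= hi:
            with open(path, encoding="utf-8") as f:
                if search_str in f.read():
                    print(path)
```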
```python
from seipy import s3zip_func

s3zip_func(s3zip_path, _func, cred_fpath=cred_fpath, **kwargs)
```
This one’s kinda nice. It allows one to apply a function `_func` to each subfile in a zip file sitting on s3.
I use it to filter and enrich some csv files that periodically get zipped to s3, for example.
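A rough sketch of that pattern with boto3 and zipfile (the bucket/key handling and the return shape here are assumptions, not seipy's exact behavior):
```python
import io
import zipfile

import boto3

def apply_to_s3_zip(bucket, key, func):
    # Download the zip into memory, then call func on each member's bytes.
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    with zipfile.ZipFile(io.BytesIO(body)) as zf:
        return {name: func(zf.read(name)) for name in zf.namelist()}
```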
```python
from seipy import s3spark_init

spark = s3spark_init(cred_fpath)
```
Returns `spark`, a SparkSession that makes it possible to interact with s3 from jupyter notebooks.
`cred_fpath` is the file path to the aws credentials file containing your keys.
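A helper like this typically builds the session along these lines (a sketch; the credential parsing and hadoop-aws settings are assumptions, and the s3a packages must match your Spark/Hadoop versions):
```python
import configparser

from pyspark.sql import SparkSession

def spark_with_s3(cred_fpath):
    # Read keys from a standard AWS credentials file and wire them
    # into the s3a filesystem via the hadoop configuration.
    cfg = configparser.ConfigParser()
    cfg.read(cred_fpath)
    key = cfg["default"]["aws_access_key_id"]
    secret = cfg["default"]["aws_secret_access_key"]
    return (SparkSession.builder
            .appName("s3-notebook")
            .config("spark.hadoop.fs.s3a.access.key", key)
            .config("spark.hadoop.fs.s3a.secret.key", secret)
            .getOrCreate())
```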
```python
from seipy import merge_two_dicts

merge_two_dicts(dict_1, dict_2)
```
Returns the merged dict `{**dict_1, **dict_2}`.
An extension to multiple dicts is `reduce(lambda d1, d2: {**d1, **d2}, dict_args)`, where `dict_args` is a sequence of dicts.
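For example, with functools.reduce (the dict values are just illustrative):
```python
from functools import reduce

dict_args = [{"a": 1}, {"b": 2}, {"a": 3}]
merged = reduce(lambda d1, d2: {**d1, **d2}, dict_args)
print(merged)  # {'a': 3, 'b': 2} -- later dicts win on key collisions
```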
Please either post an issue on this GitHub repo, or email the author (seiji dot armstrong at gmail)
with feedback, feature requests, or to complain that something doesn’t work as expected.