pyspark mapreduce dataframe


Alan
24 Oct 2016
# Note: a comment after a trailing backslash is a Python syntax error,
# so the chain is wrapped in parentheses instead of using "\" continuations.
(df.rdd
    .filter(lambda x: x[1] == "france")                     # keep only French stations
    .map(lambda x: (x[0], x[2]))                            # select (station, temp)
    .mapValues(lambda x: (x, 1))                            # pair each temp with a count of 1
    .reduceByKey(lambda x, y: (x[0] + y[0], x[1] + y[1]))   # per-station sum of temps and counts
    .mapValues(lambda x: x[0] / x[1])                       # average = sum / count
    .sortBy(lambda x: x[1], ascending=False)                # sort by average temp, descending
    .take(100))                                             # return the top 100
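To make the sum-and-count averaging trick concrete, here is a plain-Python sketch of the same pipeline that runs without Spark. The station names, country values, and temperatures are made-up sample data for illustration; the logic mirrors each RDD step above.

```python
# Hypothetical rows mirroring the DataFrame: (station, country, temp)
rows = [
    ("st1", "france", 10.0),
    ("st1", "france", 20.0),
    ("st2", "france", 5.0),
    ("st3", "germany", 100.0),  # dropped by the filter step
]

# filter + map + mapValues: (station, (temp, 1)) pairs for French stations
pairs = [(x[0], (x[2], 1)) for x in rows if x[1] == "france"]

# reduceByKey: sum temps and counts per station
sums = {}
for station, (temp, count) in pairs:
    total, n = sums.get(station, (0.0, 0))
    sums[station] = (total + temp, n + count)

# mapValues + sortBy: average per station, sorted descending by value
averages = sorted(
    ((station, total / n) for station, (total, n) in sums.items()),
    key=lambda kv: kv[1],
    reverse=True,
)
# averages == [("st1", 15.0), ("st2", 5.0)]
```

Carrying `(sum, count)` pairs through the reduce step is what makes the average associative: partial sums from different partitions can be combined in any order, which is why the Spark version uses `mapValues` before `reduceByKey` rather than averaging directly.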