Searchable podcast transcripts.
Link to presentation.
Podcasting is a popular and growing medium, with over $1B in advertising revenue expected in 2021. However, podcast metadata is spread across various sources, which makes it difficult to perform analytics for brands interested in advertising in the space. Furthermore, one missing piece of data that could be used for analytics (or by interested consumers or bored data scientists…) is full transcriptions of the episodes. While some authors provide transcripts of their podcasts, it is not common practice, and there is currently no way to quickly search transcripts.
This project aims to index podcast transcripts for search, along with other podcast metadata. To make these text documents easy and fast to search, the main design choice was to use an Elasticsearch index for the podcast data; the other pieces of technology followed from that decision.
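To give a rough idea of the shape of the data, the sketch below creates an index with a handful of episode fields. The index name "podcasts" and the field names are illustrative assumptions, not the project's actual mapping (in the pipeline described below, Logstash creates and populates the real index).

```python
# A minimal sketch of a podcast index, assuming elasticsearch-py and an index
# named "podcasts"; the field names are illustrative, not the project's actual
# mapping (Logstash creates the real index when it ingests the RSS data).
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.indices.create(
    index="podcasts",
    body={
        "mappings": {
            "properties": {
                "title":      {"type": "text"},      # episode title, full-text searchable
                "author":     {"type": "keyword"},   # show name, useful for aggregations
                "published":  {"type": "date"},
                "link":       {"type": "keyword"},   # URL of the episode audio
                "transcript": {"type": "text"},      # filled in later by the Spark job
            }
        }
    },
)
```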
In order to run the scripts, the following tools must be set up and connected to each other: Elasticsearch, Logstash, and Kibana for ingesting and inspecting the metadata, plus Spark, CMU Sphinx, and an S3 bucket for downloading and transcribing the audio. In my case, everything was set up on AWS EC2 instances.
Once everything is installed, the Elasticsearch, Logstash, and Kibana services can be started. If Logstash was configured properly, it will ingest RSS data and populate an Elasticsearch index with podcast metadata, which can be quickly checked in Kibana.
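For reference, a Logstash pipeline along these lines should do the job; the feed URL and index name below are placeholders rather than this project's actual configuration, and the RSS input plugin may need to be installed separately (bin/logstash-plugin install logstash-input-rss).

```
# Hypothetical Logstash pipeline: poll a podcast RSS feed and index the
# episode metadata into Elasticsearch. The feed URL and index name are
# placeholders, not the project's actual configuration.
input {
  rss {
    url      => "https://example.com/podcast/feed.xml"
    interval => 3600            # re-poll the feed once an hour
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "podcasts"
  }
}
```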
In order to start making transcripts, one can use the Spark programs in the data-processing directory. The script download_mp3s.py gets all the episodes from Elasticsearch and, for each one whose audio is not yet in S3, downloads the MP3 into S3. Then, transcribe_S3_to_ES.py finds all the episodes which have an MP3 but no transcription, uses CMU Sphinx to transcribe each one, and puts the results back into Elasticsearch.
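To make the workflow concrete, here is a rough sketch of what the download step might look like. The Elasticsearch host, the "podcasts" index, the "link" field holding the audio URL, and the "podcast-mp3s" bucket are all assumed names for illustration, not necessarily what the repository uses.

```python
# Sketch of the download step (not the repository's actual code): the index,
# field, and bucket names are assumptions.
import boto3
import requests
from botocore.exceptions import ClientError
from elasticsearch import Elasticsearch
from pyspark.sql import SparkSession

ES_HOST = "http://localhost:9200"
INDEX = "podcasts"
BUCKET = "podcast-mp3s"

def fetch_episodes():
    """Pull episode ids and audio URLs out of Elasticsearch (first 1000 hits)."""
    es = Elasticsearch([ES_HOST])
    resp = es.search(index=INDEX, body={"query": {"match_all": {}}, "size": 1000})
    return [(hit["_id"], hit["_source"]["link"]) for hit in resp["hits"]["hits"]]

def download_if_missing(record):
    """Download one episode's MP3 into S3 unless the key is already there."""
    episode_id, url = record
    s3 = boto3.client("s3")
    key = f"{episode_id}.mp3"
    try:
        s3.head_object(Bucket=BUCKET, Key=key)      # already downloaded -> skip
        return key, "skipped"
    except ClientError:
        audio = requests.get(url, timeout=120).content
        s3.put_object(Bucket=BUCKET, Key=key, Body=audio)
        return key, "downloaded"

if __name__ == "__main__":
    spark = SparkSession.builder.appName("download_mp3s").getOrCreate()
    episodes = spark.sparkContext.parallelize(fetch_episodes())
    print(episodes.map(download_if_missing).collect())
```

The transcription step follows the same pattern. The sketch below reaches CMU Sphinx through the SpeechRecognition wrapper and uses pydub/ffmpeg to convert the MP3 to WAV; the actual transcribe_S3_to_ES.py may drive pocketsphinx directly, so treat the library choices, the "transcript" field name, and the temp paths as assumptions.

```python
# Sketch of the transcription step, reusing ES_HOST, INDEX, and BUCKET from
# the previous sketch. Library choices and field names are assumptions.
import boto3
import speech_recognition as sr
from elasticsearch import Elasticsearch
from pydub import AudioSegment          # needs ffmpeg installed for MP3 decoding

def untranscribed_episode_ids(es):
    """Find episodes without a transcript field yet (the real script also
    checks that the corresponding MP3 exists in S3)."""
    query = {"query": {"bool": {"must_not": [{"exists": {"field": "transcript"}}]}},
             "size": 1000}
    return [hit["_id"] for hit in es.search(index=INDEX, body=query)["hits"]["hits"]]

def transcribe_episode(episode_id):
    """Fetch one MP3 from S3, run CMU Sphinx on it, and write the text back to ES."""
    s3 = boto3.client("s3")
    s3.download_file(BUCKET, f"{episode_id}.mp3", "/tmp/episode.mp3")

    # Sphinx expects WAV input, so convert to 16 kHz mono first.
    AudioSegment.from_mp3("/tmp/episode.mp3") \
        .set_frame_rate(16000).set_channels(1) \
        .export("/tmp/episode.wav", format="wav")

    recognizer = sr.Recognizer()
    with sr.AudioFile("/tmp/episode.wav") as source:
        audio = recognizer.record(source)
    text = recognizer.recognize_sphinx(audio)   # CMU Sphinx via pocketsphinx

    es = Elasticsearch([ES_HOST])
    es.update(index=INDEX, id=episode_id, body={"doc": {"transcript": text}})
    return episode_id
```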