{"id":8146,"date":"2023-11-09T08:32:00","date_gmt":"2023-11-09T16:32:00","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=8146"},"modified":"2025-04-24T17:04:31","modified_gmt":"2025-04-24T17:04:31","slug":"nlp-topic-modeling-for-a-tv-series-episode-summary","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/nlp-topic-modeling-for-a-tv-series-episode-summary\/","title":{"rendered":"NLP \u2014 Topic Modeling For a TV Series Episode Summary"},"content":{"rendered":"\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/nlp-topic-modeling-for-a-tv-series-episode-summary\">\n\n\n\n<h1 class=\"wp-block-heading lv lw fr be lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr ms bj\" id=\"20d2\"><strong class=\"al\">Introduction<\/strong><\/h1>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk nl nm nn no np fk bj\" id=\"ccb4\">Many of us watch TV shows for leisure within our daily mundane routines via various online streaming platforms (such as Amazon Prime or Netflix), each having its share of show categories. Whenever a new show is launched, its other details (such as episode plot summary) become available online.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"d3b8\">Each episode present in the show contains a story with some difference in their nature. To gain an understanding of them in the form of topics\/keywords, we will carry out the Topic Modeling concept of Natural Language Processing (NLP) in this article.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"b98d\">The purpose here is to build a machine learning model that analyzes the data for each TV episode and summarizes them into shorter segments while identifying their underlying sentiments.<\/p>\n\n\n\n<h1 class=\"wp-block-heading lv lw fr be lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr ms bj\" id=\"2649\"><strong class=\"al\">Data<\/strong><\/h1>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk nl nm nn no np fk bj\" id=\"2a3f\">The data used here comes from the show \u201cAlice in Borderland\u201d currently streaming on Netflix, which includes the following details \u2014<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"5481\">\u00b7 Name of the Series<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"13fc\">\u00b7 Season No<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"fbdf\">\u00b7 Episode Name\/No<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"ccf3\">\u00b7 Episode Plot Summary<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"1532\">The reference from the above pointers can be taken from the below link \u2014<\/p>\n\n\n\n<figure class=\"wp-block-embed pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\"><div 
class=\"wp-block-embed__wrapper\">\nhttps:\/\/en.wikipedia.org\/wiki\/Alice_in_Borderland_(TV_series)\n<\/div><\/figure>\n\n\n\n<h1 class=\"wp-block-heading lv lw fr be lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr ms bj\" id=\"1ecf\"><strong class=\"al\">Text Preparation<\/strong><\/h1>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk nl nm nn no np fk bj\" id=\"9709\">Before building the algorithm, it is necessary to prepare the data for processing to perform better analysis to gain valuable insights. Some of the steps that are being followed in this section are as follows:<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"1fd1\"><strong class=\"be nw\">a)<\/strong> <strong class=\"be nw\">Punctuation Removal \u2014<\/strong><\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"5a21\">\u00b7 All the punctuations such as \u2018!\u201d#$%&amp;\u2019()*+,-.\/:;?@[\\]^_{|}~\u2019 contained in the text are removed.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"b564\">\u00b7 These are defined within the string library of Python.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"b599\" class=\"og lw fr od b bf oh oi l oj ok\" data-selectable-paragraph=\"\"><span class=\"hljs-keyword\">def<\/span> <span class=\"hljs-title.function\">rem_punct<\/span>(<span class=\"hljs-params\">txt<\/span>):\n    pfree = <span class=\"hljs-string\">\"\"<\/span>.join([i <span class=\"hljs-keyword\">for<\/span> i <span class=\"hljs-keyword\">in<\/span> txt <span class=\"hljs-keyword\">if<\/span> i <span class=\"hljs-keyword\">not<\/span> <span class=\"hljs-keyword\">in<\/span> string.punctuation])\n    <span class=\"hljs-keyword\">return<\/span> pfree\nread[<span class=\"hljs-string\">\"Episode Plot Summary\"<\/span>] = read[<span class=\"hljs-string\">\"Episode Plot Summary\"<\/span>].apply(<span class=\"hljs-keyword\">lambda<\/span> x:rem_punct(x))p<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"c163\"><strong class=\"be nw\">b)<\/strong> <strong class=\"be nw\">Lowercasing Text \u2014<\/strong><\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"0e8c\">\u00b7 One of the most common tasks in text processing is to convert the characters to lowercase to eliminate useless data or noise.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"107f\" class=\"og lw fr od b bf oh oi l oj ok\" data-selectable-paragraph=\"\">read[<span class=\"hljs-string\">\"Episode Plot Summary\"<\/span>] = read[<span class=\"hljs-string\">\"Episode Plot Summary\"<\/span>].<span class=\"hljs-built_in\">str<\/span>.lower()<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"dc8f\"><strong class=\"be nw\">c)<\/strong> <strong class=\"be nw\">Tokenization \u2014<\/strong><\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"9a62\">\u00b7 Part of the NLP pipeline that converts the text into small tokens (words\/sentences).<\/p>\n\n\n\n<p 
class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"1d6e\">\u00b7 Here, word tokenization is performed where each word is subjected to deeper analysis to gather their importance in the text.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"523b\" class=\"og lw fr od b bf oh oi l oj ok\" data-selectable-paragraph=\"\">read[<span class=\"hljs-string\">\"Episode Plot Summary\"<\/span>] = read[<span class=\"hljs-string\">\"Episode Plot Summary\"<\/span>].<span class=\"hljs-built_in\">str<\/span>.split()<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"2362\"><strong class=\"be nw\">d)<\/strong> <strong class=\"be nw\">Stopword Removal \u2014<\/strong><\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"bd2f\">\u00b7 Stopwords are commonly used words that are removed since they add no value to the analysis.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"c2d5\">\u00b7 The NLTK library used here helps to remove the stopwords from the text.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"6020\" class=\"og lw fr od b bf oh oi l oj ok\" data-selectable-paragraph=\"\">stp = nltk.corpus.stopwords.words(<span class=\"hljs-string\">'english'<\/span>)\n<span class=\"hljs-keyword\">def<\/span> <span class=\"hljs-title.function\">rem_stopwords<\/span>(<span class=\"hljs-params\">txt<\/span>):\n    st_free = [i <span class=\"hljs-keyword\">for<\/span> i <span class=\"hljs-keyword\">in<\/span> txt <span class=\"hljs-keyword\">if<\/span> i <span class=\"hljs-keyword\">not<\/span> <span class=\"hljs-keyword\">in<\/span> stp]\n    <span class=\"hljs-keyword\">return<\/span> st_free\nread[<span class=\"hljs-string\">\"Episode Plot Summary\"<\/span>] = read[<span class=\"hljs-string\">\"Episode Plot Summary\"<\/span>].apply(<span class=\"hljs-keyword\">lambda<\/span> x:rem_stopwords(x))<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"0a14\"><strong class=\"be nw\">e)<\/strong> <strong class=\"be nw\">Lemmatization \u2014<\/strong><\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mt mu fr be b mv nq mx my mz nr nb nc nd ns nf ng nh nt nj nk nl nu nn no np fk bj\" id=\"6323\">\u00b7 This is an algorithmic process followed to convert the word to its root form while also keeping its meaning intact.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"2964\" class=\"og lw fr od b bf oh oi l oj ok\" data-selectable-paragraph=\"\">word_lemma = WordNetLemmatizer()\n<span class=\"hljs-keyword\">def<\/span> <span class=\"hljs-title.function\">lemma<\/span>(<span class=\"hljs-params\">txt<\/span>):\n    l_txt = [word_lemma.lemmatize(w) <span class=\"hljs-keyword\">for<\/span> w <span class=\"hljs-keyword\">in<\/span> txt]\n    <span class=\"hljs-keyword\">return<\/span> l_txt\nread[<span class=\"hljs-string\">\"Episode Plot Summary\"<\/span>] = read[<span class=\"hljs-string\">\"Episode Plot Summary\"<\/span>].apply(<span class=\"hljs-keyword\">lambda<\/span> x:lemma(x))\n\n<\/span><\/pre>\n\n\n\n<h1 class=\"wp-block-heading lv lw fr be lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp mq mr ms bj\" id=\"36a8\"><strong class=\"al\">Model 
## Model Preparation

With the data prepared through the processing steps of the previous section, we can now build the topic model.

### TF-IDF Vectorization

· TF-IDF (Term Frequency–Inverse Document Frequency) builds on the Bag of Words model and indicates how relevant each word is to a document.
· Term Frequency (TF) measures how often a word occurs in a document.
· Inverse Document Frequency (IDF) measures how informative a word is, down-weighting words that appear in many documents.

In this sub-step, the text is converted into a vectorized (numeric) format for the next stage.

### Latent Dirichlet Allocation (LDA)

LDA is a model in which:

· Latent means the model uncovers hidden topics in a document.
· Dirichlet indicates the model assumes a Dirichlet distribution of topics in a document.

The parameters used when running LDA on the vectorized data are:

a) Number of topics
b) Learning method (the way the model assigns topics to the documents)
c) Number of iterations
d) Random state

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# SEASON 1
read_season1 = read[read["Season No"] == "Season 1"]

for i in range(0, len(read_season1["Episode Plot Summary"])):
    # Vectorize the tokens of a single episode summary
    vect1 = TfidfVectorizer(stop_words=stp, max_features=1000)
    vect_text1 = vect1.fit_transform(read_season1["Episode Plot Summary"][i])

    lda_model = LatentDirichletAllocation(n_components=5, learning_method="online",
                                          random_state=42, max_iter=1)
    lda_t = lda_model.fit_transform(vect_text1)

    vocab = vect1.get_feature_names_out()  # use get_feature_names() on scikit-learn < 1.0
    for k, comp in enumerate(lda_model.components_):
        # Pair each vocabulary term with its weight and keep the top 7 terms per topic
        vocab_comp = zip(vocab, comp)
        s_words = sorted(vocab_comp, key=lambda x: x[1], reverse=True)[:7]
        print("Topic " + str(k) + ": ")
        for t1 in s_words:
            print(t1[0], end=" ")
        print("\n")


# SEASON 2
read_season2 = read[read["Season No"] == "Season 2"].reset_index()

for i in range(0, len(read_season2["Episode Plot Summary"])):
    vect1 = TfidfVectorizer(stop_words=stp, max_features=1000)
    vect_text1 = vect1.fit_transform(read_season2["Episode Plot Summary"][i])

    lda_model = LatentDirichletAllocation(n_components=5, learning_method="online",
                                          random_state=42, max_iter=1)
    lda_t = lda_model.fit_transform(vect_text1)

    ## for j, topic in enumerate(lda_t[0]):
    ##     print(topic * 100)

    print(read_season2["Episode No"][i].upper())

    vocab = vect1.get_feature_names_out()  # use get_feature_names() on scikit-learn < 1.0
    for k, comp in enumerate(lda_model.components_):
        vocab_comp = zip(vocab, comp)
        s_words = sorted(vocab_comp, key=lambda x: x[1], reverse=True)[:7]
        for t1 in s_words:
            print(t1[0], end=" ")
        print("\n")
```

With the output generated by this model preparation process, we obtain the main topics and their importance for each episode in the TV series.

## Topic Sentiments

After obtaining the main keywords for the episodes in each season, we move on to analyzing the sentiment around them. The technique used here is VADER (Valence Aware Dictionary and sEntiment Reasoner), a lexicon- and rule-based sentiment analysis tool.

The sentiment scores calculated here are based on the polarity returned by this technique, which lies in the range [-1, 1], where:

· positive sentiment lies between 0 and 1
· negative sentiment lies between -1 and 0

After determining the scores for the episodes, the results were aggregated into an average sentiment for each season, as shown in the table below:

![Average sentiment score for each season (Source: Author)](https://miro.medium.com/v2/resize:fit:313/1*ZBcKgWRnbmH0zq_QcplZzg.png)
Based on these results, we observe a negative average sentiment for both seasons. This suggests the show was built around a dark theme and could fall into any of the following genre categories:

· Violence
· Thriller
· Suspense
· Action, etc.

## Closing Notes

In closing, we have seen a practical implementation of topic modeling in the entertainment domain and how the full summary of each TV show episode can be condensed into smaller segments.

The NLP procedure carried out here prepares the data for further machine learning analysis and identifies the likely genre of the show from the sentiment scores obtained for each season.