{"id":8256,"date":"2023-11-29T09:46:04","date_gmt":"2023-11-29T17:46:04","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=8256"},"modified":"2025-04-24T17:04:12","modified_gmt":"2025-04-24T17:04:12","slug":"a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet\/","title":{"rendered":"A Step-by-Step Guide: Efficiently Managing TensorFlow\/Keras Model Development with Comet"},"content":{"rendered":"\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"a8c5\">Welcome to the step-by-step guide on efficiently managing TensorFlow\/Keras model development with Comet. TensorFlow and Keras have emerged as powerful frameworks for building and training deep learning models. However, as your model development process becomes more complex and involves numerous experiments and iterations, keeping track of your progress, managing experiments, and collaborating effectively with team members becomes increasingly challenging.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"bbbd\">This is where Comet comes to the rescue. Comet is a comprehensive experiment tracking and collaboration platform for machine learning projects. It empowers data scientists and machine learning practitioners to streamline their model development workflow, maintain a structured record of experiments, and foster seamless collaboration among team members.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"ec74\">In this guide, we will walk you through the process of efficiently managing TensorFlow\/Keras model development using Comet. 
We will explore the essential features of Comet that enable you to track experiments, log hyperparameters and metrics, visualize model performance, optimize hyperparameter configurations, and facilitate collaboration within your team. Following our step-by-step instructions and incorporating Comet into your workflow can enhance productivity, maintain experiment reproducibility, and derive valuable insights from your model development process.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"acec\">Whether you are an experienced machine learning practitioner or just starting your journey in deep learning, this article will provide practical strategies and tips to leverage Comet effectively. Let&#8217;s dive in and discover how you can take control of your TensorFlow\/Keras model development with Comet.<\/p>\n\n\n\n<h2 class=\"wp-block-heading nj nk fr be nl nm nn no np nq nr ns nt mw nu nv nw na nx ny nz ne oa ob oc od bj\" id=\"7c67\">Introducing MLOps<\/h2>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp oe mr ms gs of mu mv mw og my mz na oh nc nd ne oi ng nh ni fk bj\" id=\"6ec4\">Machine learning (ML) is an essential tool for businesses of all sizes. However, deploying ML models in production can be complex and challenging. This is where MLOps comes in.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"adfd\">MLOps is a set of principles and practices that combine software engineering, data science, and DevOps to ensure that ML models are deployed and managed effectively in production. 
MLOps encompasses the entire ML lifecycle, from data preparation to model deployment and monitoring.<\/p>\n\n\n\n<h2 class=\"wp-block-heading nj nk fr be nl nm nn no np nq nr ns nt mw nu nv nw na nx ny nz ne oa ob oc od bj\" id=\"7925\">Why Is MLOps Important?<\/h2>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp oe mr ms gs of mu mv mw og my mz na oh nc nd ne oi ng nh ni fk bj\" id=\"adc4\">There are several reasons why MLOps is essential. First, ML models are becoming increasingly complex and require a lot of data to train. This means it is necessary to have a scalable and efficient way to deploy and manage ML models in production.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"436c\">Second, ML models are constantly evolving. This means that it is vital to have a way to monitor and update ML models as new data becomes available. MLOps provides a framework for doing this.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"4c00\">Finally, ML models need to be secure. They can make important decisions, such as approving loans or predicting customer behavior. MLOps provides a framework for securing ML models.<\/p>\n\n\n\n<h2 class=\"wp-block-heading nj nk fr be nl nm nn no np nq nr ns nt mw nu nv nw na nx ny nz ne oa ob oc od bj\" id=\"a7c9\">How Does MLOps Work?<\/h2>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp oe mr ms gs of mu mv mw og my mz na oh nc nd ne oi ng nh ni fk bj\" id=\"cfcf\">MLOps typically involves the following steps:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Data Preparation: The first step is preparing the data that will be used to train the ML model. This includes cleaning the data, removing outliers, and transforming the data into a format that the ML model can use.<\/li>\n\n\n\n<li>Model Training: The next step is training the ML model. 
This involves using the prepared data to train the model. The training process can be iterative, and trying different models and hyperparameters may be necessary to find the best model.<\/li>\n\n\n\n<li>Model Deployment: Once the ML model is trained, it must be deployed in production. This means making the model available to users so they can use it to make predictions.<\/li>\n\n\n\n<li>Model Monitoring: Once the ML model is deployed, it must be monitored to ensure it performs as expected. This involves tracking the model&#8217;s accuracy, latency, and other metrics.<\/li>\n\n\n\n<li>Model Maintenance: As new data becomes available, the ML model may need to be updated. This is known as model maintenance. Model maintenance involves retraining the model with the latest data and deploying the updated model in production.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading or nk fr be nl os ot gr np ou ov gu nt ow ox oy oz pa pb pc pd pe pf pg ph pi bj\" id=\"cf1d\">Keeping Track of Your ML Experiments<\/h2>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp oe mr ms gs of mu mv mw og my mz na oh nc nd ne oi ng nh ni fk bj\" id=\"fb1d\">Accurate experiment tracking simplifies comparing metrics and parameters across different data versions, evaluating experiment results, and identifying the best or worst predictions on test or validation sets. Additionally, it allows for in-depth analysis of hardware consumption during model training.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"7864\">The following explanations will guide you in efficiently tracking your experiments and generating insightful charts. 
By implementing these strategies, you can enhance your experiment management and visualization capabilities, allowing you to derive valuable insights from your data.<\/p>\n\n\n\n<h2 class=\"wp-block-heading nj nk fr be nl nm nn no np nq nr ns nt mw nu nv nw na nx ny nz ne oa ob oc od bj\" id=\"f1ff\">Project Requirements<\/h2>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp oe mr ms gs of mu mv mw og my mz na oh nc nd ne oi ng nh ni fk bj\" id=\"95d4\">To ensure adequate tracking and management of your TensorFlow model development, it is crucial to establish a performance metric as a project goal. For instance, you may set the F1-score as the metric to optimize your model&#8217;s performance.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"6c8d\">The initial deployment phase should focus on building a simple model while prioritizing the development of a robust machine-learning pipeline for prediction. This approach allows for the swift delivery of value and prevents excessive time spent pursuing the elusive perfect model.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"9a2d\">As your organization embarks on new machine learning projects, the number of experiment runs can quickly multiply, ranging from tens to hundreds or even thousands. Without proper tracking, your workflow can become convoluted and challenging to navigate.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"66fc\">That&#8217;s why tracking tools like Comet have become standard in machine learning projects. Comet enables you to log essential information such as data, model architecture, hyperparameters, confusion matrices, graphs, etc. 
Integrating a tool like Comet into your workflow or code is relatively simple compared to the complications that arise when you neglect proper tracking.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"60ef\">To illustrate the tracking approach, let&#8217;s consider an example where we train a text classification model using TensorFlow and Long Short-Term Memory (LSTM) networks. Following the steps in this guide will provide insights into effectively utilizing tracking tools and seamlessly managing your TensorFlow\/Keras model development process.<\/p>\n\n\n\n<h2 class=\"wp-block-heading nj nk fr be nl nm nn no np nq nr ns nt mw nu nv nw na nx ny nz ne oa ob oc od bj\" id=\"2559\">Achieve a Well-Organized Model Development Process with Comet<\/h2>\n\n\n\n<h2 class=\"wp-block-heading nj nk fr be nl nm nn no np nq nr ns nt mw nu nv nw na nx ny nz ne oa ob oc od bj\" id=\"0a9e\">Install Dependencies For This Project<\/h2>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp oe mr ms gs of mu mv mw og my mz na oh nc nd ne oi ng nh ni fk bj\" id=\"03c1\">We&#8217;ll be using Comet in Google Colab, so we need to install Comet on our machine. 
Follow the commands below to do this.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"a867\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">%pip install comet_ml tensorflow numpy<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"39c0\">Now that we&#8217;ve installed the necessary dependencies, let&#8217;s import them and set up logging for Comet.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"d20d\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\"><span class=\"hljs-keyword\">import<\/span> comet_ml\n<span class=\"hljs-keyword\">from<\/span> comet_ml <span class=\"hljs-keyword\">import<\/span> Experiment\n<span class=\"hljs-keyword\">import<\/span> logging\n<span class=\"hljs-keyword\">import<\/span> pandas <span class=\"hljs-keyword\">as<\/span> pd\n<span class=\"hljs-keyword\">import<\/span> tensorflow <span class=\"hljs-keyword\">as<\/span> tfl\n<span class=\"hljs-keyword\">import<\/span> numpy <span class=\"hljs-keyword\">as<\/span> np\n<span class=\"hljs-keyword\">import<\/span> csv\n<span class=\"hljs-keyword\">import<\/span> re\n<span class=\"hljs-keyword\">import<\/span> nltk\n<span class=\"hljs-keyword\">from<\/span> tensorflow.keras.preprocessing.text <span class=\"hljs-keyword\">import<\/span> Tokenizer\n<span class=\"hljs-keyword\">from<\/span> tensorflow.keras.preprocessing.sequence <span class=\"hljs-keyword\">import<\/span> pad_sequences\n<span class=\"hljs-keyword\">from<\/span> tensorflow <span class=\"hljs-keyword\">import<\/span> keras\n<span class=\"hljs-keyword\">from<\/span> tensorflow.keras <span class=\"hljs-keyword\">import<\/span> layers\n\nnltk.download(<span class=\"hljs-string\">'stopwords'<\/span>)\n<span class=\"hljs-keyword\">from<\/span> nltk.corpus <span class=\"hljs-keyword\">import<\/span> stopwords\n\nlogging.basicConfig(level=logging.INFO)\nLOGGER = logging.getLogger(<span class=\"hljs-string\">\"comet_ml\"<\/span>)<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"21c1\">Connect your project to the Comet platform. If you&#8217;re new to the platform, read the <a href=\"https:\/\/www.comet.com\/docs\/v2\/\">guide<\/a>.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"8e9b\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\"><span class=\"hljs-comment\"># Create an experiment<\/span>\nexperiment = comet_ml.Experiment(\n    project_name=<span class=\"hljs-string\">\"Tensorflow_Classification\"<\/span>,\n    workspace=<span class=\"hljs-string\">\"olujerry\"<\/span>,\n    api_key=<span class=\"hljs-string\">\"YOUR API-KEY\"<\/span>,\n    log_code=<span class=\"hljs-literal\">True<\/span>,\n    auto_metric_logging=<span class=\"hljs-literal\">True<\/span>,\n    auto_param_logging=<span class=\"hljs-literal\">True<\/span>,\n    auto_histogram_weight_logging=<span class=\"hljs-literal\">True<\/span>,\n    auto_histogram_gradient_logging=<span class=\"hljs-literal\">True<\/span>,\n    auto_histogram_activation_logging=<span class=\"hljs-literal\">True<\/span>,\n)<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"a125\">It&#8217;s important to connect your project to the Comet platform at the beginning of your project so that every parameter and metric can be logged.<\/p>\n\n\n\n<h2 class=\"wp-block-heading nj nk fr be nl nm nn no np nq nr ns nt mw nu nv nw na nx ny nz ne oa ob oc od bj\" id=\"001d\">Save the Hyperparameters (For Each Iteration)<\/h2>\n\n\n\n<pre 
class=\"wp-block-preformatted\"><span id=\"ffb0\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">params={\n                    <span class=\"hljs-string\">'embed_dims'<\/span>: <span class=\"hljs-number\">64<\/span>,\n                    <span class=\"hljs-string\">'vocab_size'<\/span>: <span class=\"hljs-number\">5200<\/span>,\n                    <span class=\"hljs-string\">'max_len'<\/span>: <span class=\"hljs-number\">200<\/span>,\n                    <span class=\"hljs-string\">'padding_type'<\/span>: <span class=\"hljs-string\">'post'<\/span>,\n                    <span class=\"hljs-string\">'trunc_type'<\/span>: <span class=\"hljs-string\">'post'<\/span>,\n                    <span class=\"hljs-string\">'oov_tok'<\/span>: <span class=\"hljs-string\">'&lt;OOV&gt;'<\/span>,\n                    <span class=\"hljs-string\">'training_portion'<\/span>: <span class=\"hljs-number\">0.75<\/span>\n    }\n\nexperiment.log_parameters(params)<\/span><\/pre>\n\n\n\n<h2 class=\"wp-block-heading or nk fr be nl os ot gr np ou ov gu nt ow ox oy oz pa pb pc pd pe pf pg ph pi bj\" id=\"4d3b\">About The Dataset<\/h2>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp oe mr ms gs of mu mv mw og my mz na oh nc nd ne oi ng nh ni fk bj\" id=\"d05f\">The dataset we&#8217;re using is BBC news article data for classification. 
It consists of 2225 documents from the <a href=\"http:\/\/news.bbc.co.uk\/\">BBC<\/a> News website corresponding to stories in five topical areas from 2004\u20132005.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Class Labels: 5 (business, entertainment, politics, sport, tech)<\/li>\n\n\n\n<li>Download the data <a href=\"http:\/\/mlg.ucd.ie\/datasets\/bbc.html\">here<\/a>.<\/li>\n<\/ul>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"4373\">In the section below, I&#8217;ve created two lists, labels and texts, which store the labels of the news articles and the text associated with each one. We&#8217;re also removing the stopwords using <a href=\"https:\/\/www.nltk.org\/\">nltk<\/a>.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"360c\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">stopwords_list = stopwords.words(<span class=\"hljs-string\">'english'<\/span>)\n\nlabels = []\ntexts = []\n\n<span class=\"hljs-keyword\">with<\/span> <span class=\"hljs-built_in\">open<\/span>(<span class=\"hljs-string\">'dataset.csv'<\/span>, <span class=\"hljs-string\">'r'<\/span>) <span class=\"hljs-keyword\">as<\/span> file:\n    data = csv.reader(file, delimiter=<span class=\"hljs-string\">','<\/span>)\n    <span class=\"hljs-built_in\">next<\/span>(data)\n    <span class=\"hljs-keyword\">for<\/span> row <span class=\"hljs-keyword\">in<\/span> data:\n        labels.append(row[<span class=\"hljs-number\">0<\/span>])\n        text = row[<span class=\"hljs-number\">1<\/span>]\n        <span class=\"hljs-keyword\">for<\/span> word <span class=\"hljs-keyword\">in<\/span> stopwords_list:  <span class=\"hljs-comment\"># Iterate over the stop words list<\/span>\n            token = <span class=\"hljs-string\">' '<\/span> + word + <span class=\"hljs-string\">' '<\/span>\n            text = text.replace(token, <span class=\"hljs-string\">' '<\/span>)\n            text = text.replace(<span class=\"hljs-string\">'  '<\/span>, <span 
class=\"hljs-string\">' '<\/span>)\n        texts.append(text)\n\n<span class=\"hljs-built_in\">print<\/span>(<span class=\"hljs-built_in\">len<\/span>(labels))\n<span class=\"hljs-built_in\">print<\/span>(<span class=\"hljs-built_in\">len<\/span>(texts))<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"ce0e\">Let&#8217;s split the data into training and validation sets. If you look at the above parameters, we&#8217;re using 80% for training and 20% for validating the model we&#8217;ve built for this use case.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"956c\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">`training_portion = 0.8  <span class=\"hljs-comment\"># Assigning a value of 0.8 for an 80% training portion<\/span>\ntrain_size = int(len(texts) * training_portion)\n\ntrain_text = texts[0:train_size]\ntrain_labels = labels[0:train_size]\n\nvalidation_text = texts[train_size:]\nvalidation_labels = labels[train_size:]<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"9fb4\">To tokenize the sentences into subword tokens, we will consider the top five thousand most common words. We will use the &#8220;oov_token&#8221; placeholder when encountering unseen special values. For words not found in the &#8220;word_index,&#8221; we will use &#8220;&lt;00V&gt;&#8221;. The &#8220;fit_on_texts&#8221; method will update the internal vocabulary utilizing a list of texts. 
This approach allows us to create a vocabulary index based on word frequency.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"a607\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">vocab_size = 10000  <span class=\"hljs-comment\"># Assigning a value for the vocabulary size<\/span>\noov_tok = '&lt;OOV&gt;'  <span class=\"hljs-comment\"># Assigning a value for the out-of-vocabulary token<\/span>\ntokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)\ntokenizer.fit_on_texts(train_text)\nword_index = tokenizer.word_index\ndict(list(word_index.items())[0:8])<\/span><\/pre>\n\n\n\n<figure class=\"wp-block-image pj pk pl pm pn qc pz qa paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:256\/1*2aig0e37Uadypu0R70eOcA.png\" alt=\"code screenshot\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"ee42\">Observing the provided output, we notice that &#8220;&lt;oov&gt;&#8221; is the most frequently occurring token in the corpus, followed by other words.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"dc06\">With the vocabulary index constructed based on frequency, our next step is converting these tokens into sequence lists. The &#8220;texts_to_sequences&#8221; method accomplishes this task by transforming the text into a sequence of integers. 
It maps the words in the text to their corresponding integer values according to the word_index dictionary.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"a85f\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">train_sequences = tokenizer.texts_to_sequences(train_text)\n<span class=\"hljs-built_in\">print<\/span>(train_sequences[16])\n\nmax_len = 200  <span class=\"hljs-comment\"># Assigning a value for the maximum sequence length<\/span>\ntrain_padded = pad_sequences(train_sequences, maxlen=max_len, truncating=<span class=\"hljs-string\">'post'<\/span>, padding=<span class=\"hljs-string\">'post'<\/span>)\n<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"9074\">When training neural networks for downstream natural language processing (NLP) tasks, ensuring that the input sequences are the same size is important. We can use the <code class=\"cw qe qf qg pp b\">max_len<\/code> parameter to add padding to the sequences to achieve this. In our case, we set <code class=\"cw qe qf qg pp b\">max_len<\/code> to 200, and we applied padding using <code class=\"cw qe qf qg pp b\">pad_sequences<\/code>.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"d9ce\">For sequences with lengths smaller or greater than <code class=\"cw qe qf qg pp b\">max_len<\/code>, we truncate or pad them to the specified length of 200. For example, if a sequence has a length of 186, we add 14 zeros at the end to pad it to 200. 
Typically, we fit the tokenizer once but perform sequence conversion multiple times, so we keep separate training and validation sets instead of combining them.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"5160\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">padding_type = 'post'  <span class=\"hljs-comment\"># Assigning a value for the padding type ('post' or 'pre')<\/span>\ntrunc_type = 'post'  <span class=\"hljs-comment\"># Assigning a value for the truncation type ('post' or 'pre')<\/span>\n\nvaldn_sequences = tokenizer.texts_to_sequences(validation_text)\nvaldn_padded = pad_sequences(valdn_sequences, maxlen=max_len, padding=padding_type, truncating=trunc_type)\n\nprint(len(valdn_sequences))\nprint(valdn_padded.shape)<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"3e3a\">Next, let&#8217;s examine the labels for our dataset. To work with the labels effectively, we need to tokenize them. Additionally, all training labels are expected to be in the form of a NumPy array. We can use the following code snippet to convert our labels into NumPy arrays.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"a74d\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">label_tokenizer = Tokenizer()\nlabel_tokenizer.fit_on_texts(labels)\n\ntraining_label_seq = np.array(label_tokenizer.texts_to_sequences(train_labels))\nvalidation_labels_seq = np.array(label_tokenizer.texts_to_sequences(validation_labels))<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"b152\">Before we proceed with the modeling task, let&#8217;s examine how the texts appear after padding and tokenization. 
It is important to note that some words may be represented as &#8220;&lt;oov&gt;&#8221; (out of vocabulary) because they are not included in the vocabulary size specified at the beginning of our code. This is a common occurrence when dealing with limited vocabulary sizes.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"89b9\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">word_index_reverse = {index: word <span class=\"hljs-keyword\">for<\/span> word, index <span class=\"hljs-keyword\">in<\/span> word_index.items()}\n\n<span class=\"hljs-keyword\">def<\/span> <span class=\"hljs-title.function\">decode_article<\/span>(<span class=\"hljs-params\">text<\/span>):\n    <span class=\"hljs-keyword\">return<\/span> <span class=\"hljs-string\">' '<\/span>.join([word_index_reverse.get(i, <span class=\"hljs-string\">'?'<\/span>) <span class=\"hljs-keyword\">for<\/span> i <span class=\"hljs-keyword\">in<\/span> text])\n\n<span class=\"hljs-built_in\">print<\/span>(decode_article(train_padded[<span class=\"hljs-number\">24<\/span>]))\n<span class=\"hljs-built_in\">print<\/span>(<span class=\"hljs-string\">'**********'<\/span>)\n<span class=\"hljs-built_in\">print<\/span>(train_text[<span class=\"hljs-number\">24<\/span>])<\/span><\/pre>\n\n\n\n<figure class=\"wp-block-image pj pk pl pm pn qc pz qa paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*xHGumjD_ySf2gTUjKdihdg.png\" alt=\"code screenshot\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"c8aa\">To train our TensorFlow model, we will use the <code class=\"cw qe qf qg pp b\">tfl.keras.Sequential<\/code> class that allows us to group a linear stack of layers into a TensorFlow Keras model. The first layer in our model is the embedding layer, which stores a vector representation for each word. 
It converts sequences of words into sequences of vectors. Word embeddings are commonly used in NLP to ensure that words with similar meanings have similar vector representations.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"1aaa\">We then use the <code class=\"cw qe qf qg pp b\">tfl.keras.layers.Bidirectional<\/code> wrapper to create a bidirectional LSTM layer. This layer propagates inputs both forward and backward through the LSTM, enabling the network to learn long-term dependencies more effectively. After that, we feed the output into dense layers for classification.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"caa0\">Our model uses the &#8216;relu&#8217; activation function, which returns the input value for positive values and 0 for negative values. The <code class=\"cw qe qf qg pp b\">embed_dims<\/code> variable represents the dimensionality of the embedding vectors and can be adjusted based on your specific needs.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"359e\">The final layer in our model is a dense layer with six units (five class indices plus the reserved index 0, since the label tokenizer numbers classes starting at 1), followed by the &#8216;softmax&#8217; activation function. 
The &#8216;softmax&#8217; function normalizes the network&#8217;s output, producing a probability distribution over the predicted output classes.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"ab90\">Here&#8217;s the code for the model:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"668a\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">embed_dims = 100  <span class=\"hljs-comment\"># Placeholder value, adjust it based on your needs<\/span>\n\nmodel = tfl.keras.Sequential([\n    tfl.keras.layers.Embedding(vocab_size, embed_dims),\n    tfl.keras.layers.Bidirectional(tfl.keras.layers.LSTM(embed_dims)),\n    tfl.keras.layers.Dense(embed_dims, activation=<span class=\"hljs-string\">'relu'<\/span>),\n    tfl.keras.layers.Dense(6, activation=<span class=\"hljs-string\">'softmax'<\/span>)\n])\nmodel.summary()<\/span><\/pre>\n\n\n\n<figure class=\"wp-block-image pj pk pl pm pn qc pz qa paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*B4NrIbto5Gce_vmddI7xYw.png\" alt=\"code screenshot\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"7a71\">From the model summary above, we can observe that our model consists of an embedding layer and a bidirectional LSTM layer. The output size from the bidirectional layer is twice the size we specified for the LSTM layer, as it considers both forward and backward information.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"85c7\">We used the &#8216;sparse_categorical_crossentropy&#8217; loss function for this multi-class classification task. 
This loss function is commonly used in tasks where we have multiple classes and want to quantify the difference between the predicted probability distribution and the true distribution. The &#8216;sparse&#8217; variant expects integer class labels rather than one-hot vectors.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"0764\">The optimizer we have chosen is &#8216;adam,&#8217; a variant of gradient descent. &#8216;Adam&#8217; is known for its adaptive learning rate and performs well in many scenarios.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"8526\">Our model is designed to learn word embeddings through the embedding layer, capture long-term dependencies with the bidirectional LSTM layer, and produce predictions using the softmax activation function in the final dense layer.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"5c3c\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">model.<span class=\"hljs-built_in\">compile<\/span>(loss=<span class=\"hljs-string\">'sparse_categorical_crossentropy'<\/span>, optimizer=<span class=\"hljs-string\">'adam'<\/span>, metrics=[<span class=\"hljs-string\">'accuracy'<\/span>])<\/span><\/pre>\n\n\n\n<h2 class=\"wp-block-heading nj nk fr be nl nm nn no np nq nr ns nt mw nu nv nw na nx ny nz ne oa ob oc od bj\" id=\"1903\">ML Model Development Organized Using Comet<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"69ba\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">epochs_count = 10\n\nhistory = model.fit(train_padded, training_label_seq,\n                    epochs=epochs_count,\n                    validation_data=(valdn_padded, validation_labels_seq),\n                    verbose=2)<\/span><\/pre>\n\n\n\n<figure class=\"wp-block-image pj pk pl pm pn qc pz qa paragraph-image\"><img decoding=\"async\" 
src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*RRCkIiTi-KZnjeswWWmhiA.png\" alt=\"6 line graphs\"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"736c\">The accuracy of the experiment was logged:<\/p>\n\n\n\n<figure class=\"wp-block-image pj pk pl pm pn qc pz qa paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*XhjBLkf330wEvj6YeLo_Eg.png\" alt=\"line graph \"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"6c5d\">We can also see the loss of the experiment:<\/p>\n\n\n\n<figure class=\"wp-block-image pj pk pl pm pn qc pz qa paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*4eFY__BFS0Ed3BKDZx9EZg.png\" alt=\"line graph \"\/><\/figure>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"dc33\">We can also monitor RAM and CPU usage as part of model training. 
The information can be found in the System Metrics section of the experiments.<\/p>\n\n\n\n<figure class=\"wp-block-image pj pk pl pm pn qc pz qa paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*7iVV0yRQgxyAvmJnJit9tA.png\" alt=\"memory usage graph and CPU utilization graph\"\/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading nj nk fr be nl nm nn no np nq nr ns nt mw nu nv nw na nx ny nz ne oa ob oc od bj\" id=\"483e\">Viewing Your Experiment On The Comet Platform<\/h2>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp oe mr ms gs of mu mv mw og my mz na oh nc nd ne oi ng nh ni fk bj\" id=\"4df5\">To view all your logged experiments, you need to end the experiment using the code below:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"bb44\" class=\"ps nk fr pp b bf pt pu l pv pw\" data-selectable-paragraph=\"\">experiment.<span class=\"hljs-keyword\">end<\/span>()<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"8203\">After running the code, you will get a link to the Comet platform and a summary of everything logged.<\/p>\n\n\n\n<figure class=\"wp-block-image pj pk pl pm pn qc pz qa paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*K3ERb-tQdXCyoUdMoGZMKg.png\" alt=\"screenshot of code\"\/><\/figure>\n\n\n\n<figure class=\"pj pk pl pm pn qc\">\n<div class=\"qs iu l ee\">\n<div class=\"qt qu l\"><iframe loading=\"lazy\" class=\"eo n ff dy bg\" title=\"Comet Text Classification\" src=\"https:\/\/cdn.embedly.com\/widgets\/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fe_JnMaGFfGQ%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3De_JnMaGFfGQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fe_JnMaGFfGQ%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube\" width=\"854\" 
height=\"480\" frameborder=\"0\" scrolling=\"no\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/div>\n<\/div>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading or nk fr be nl os ot gr np ou ov gu nt ow ox oy oz pa pb pc pd pe pf pg ph pi bj\" id=\"e964\">Conclusion<\/h2>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp oe mr ms gs of mu mv mw og my mz na oh nc nd ne oi ng nh ni fk bj\" id=\"b79c\">If the above model shows signs of overfitting after 6 epochs, it is recommended to adjust the number of epochs and retrain the model. By experimenting with different numbers of epochs, you can find the optimal point where the model achieves good performance without overfitting.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"1211\">Iteratively debugging and analyzing the model&#8217;s performance during development is crucial. Error analysis helps identify areas where the model may be failing and provides insights for improvement. Tracking how the model&#8217;s performance scales as training data increases is also essential. This can help determine if collecting more data will lead to better results.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"6401\">Model-specific optimization techniques can be applied when addressing underfitting, characterized by high bias and low variance. 
This includes performing error analysis, increasing model capacity, tuning hyperparameters, and adding new features to capture more patterns in the data.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"3064\">On the other hand, when dealing with overfitting, which is characterized by low bias and high variance, it is recommended to consider the following approaches:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Adding more training data: Increasing the training data can help the model generalize better and reduce overfitting.<\/li>\n\n\n\n<li>Regularization: Techniques like L1 or L2 regularization, dropout, or early stopping can prevent the model from over-relying on specific features and reduce overly complex interactions between neurons.<\/li>\n\n\n\n<li>Error Analysis: Analyzing the model&#8217;s errors in training and validation data can provide insights into specific patterns or classes that the model struggles with. This information can guide further improvements.<\/li>\n\n\n\n<li>Hyperparameter Tuning: Adjusting hyperparameters like learning rate, batch size, or optimizer settings can help find a better balance between underfitting and overfitting.<\/li>\n\n\n\n<li>Reducing Model Size: If the model is too complex, it may have a higher tendency to overfit. Consider reducing the model&#8217;s size by decreasing the number of layers or reducing the number of units in each layer.<\/li>\n<\/ol>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"ff2f\">It is also valuable to consult existing literature and seek guidance from domain experts or colleagues who have experience with similar problems. 
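To make the early-stopping idea in point 2 concrete, the core rule can be sketched in a few lines of plain Python: stop once validation loss has gone a chosen number of consecutive epochs without improving. This is a simplified stand-in for Keras&#8217;s built-in tf.keras.callbacks.EarlyStopping (no min_delta, no weight restoration):

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the epoch (index) at which training would stop, or None.

    Training stops once validation loss has failed to improve on the best
    value seen so far for `patience` consecutive epochs. Simplified stand-in
    for tf.keras.callbacks.EarlyStopping.
    """
    best = float("inf")
    waited = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            waited = 0
        else:
            waited += 1
            if waited >= patience:
                return epoch
    return None

# Validation loss stops improving after epoch 2, so with patience=2
# training halts at epoch 4.
print(early_stop_epoch([0.9, 0.6, 0.5, 0.55, 0.6, 0.7], patience=2))  # 4
```

Keras&#8217;s real callback adds options such as min_delta, baseline, and restore_best_weights, and can monitor any logged metric; combined with the regularization and data strategies above, it is one of the cheapest guards against overfitting.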
Their insights can provide helpful directions for addressing overfitting effectively.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"292a\">Remember that model development is an iterative process that may require several rounds of adjustment and experimentation to achieve the best performance for your specific problem.<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mo mp fr be b gp mq mr ms gs mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni fk bj\" id=\"f9f1\">Here is a link to <a href=\"https:\/\/colab.research.google.com\/drive\/1yTDP5NO5RNSDAAVEhZ_fCMgkuT_fYBIz\">my notebook on Google Colab<\/a>, as well as the <a href=\"https:\/\/app.neptune.ai\/aravindcr\/Tensorflow-Text-Classification\/n\/code-walk-through-942a1459-ea07-426a-9703-033614bb52cf\/4d3cdd39-eea5-441c-872e-23302882a95d\">original notebook by Aravind CR<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Welcome to the step-by-step guide on efficiently managing TensorFlow\/Keras model development with Comet. TensorFlow and Keras have emerged as powerful frameworks for building and training deep learning models. 
However, as your model development process becomes more complex and involves numerous experiments and iterations, keeping track of your progress, managing experiments, and collaborating effectively with team [&hellip;]<\/p>\n","protected":false},"author":99,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[9,7],"tags":[],"coauthors":[197],"class_list":["post-8256","post","type-post","status-publish","format-standard","hentry","category-product","category-tutorials"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Efficiently Managing TensorFlow\/Keras Model Development<\/title>\n<meta name=\"description\" content=\"In this article, get a step-by-step guide on efficiently managing TensorFlow\/Keras model development with Comet\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"A Step-by-Step Guide: Efficiently Managing TensorFlow\/Keras Model Development with Comet\" \/>\n<meta property=\"og:description\" content=\"In this article, get a step-by-step guide on efficiently managing TensorFlow\/Keras model development with Comet\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta 
property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-29T17:46:04+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:04:12+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/miro.medium.com\/v2\/resize:fit:256\/1*2aig0e37Uadypu0R70eOcA.png\" \/>\n<meta name=\"author\" content=\"Jeremiah Oluseye\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Jeremiah Oluseye\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"14 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Efficiently Managing TensorFlow\/Keras Model Development","description":"In this article, get a step-by-step guide on efficiently managing TensorFlow\/Keras model development with Comet","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet","og_locale":"en_US","og_type":"article","og_title":"A Step-by-Step Guide: Efficiently Managing TensorFlow\/Keras Model Development with Comet","og_description":"In this article, get a step-by-step guide on efficiently managing TensorFlow\/Keras model development with 
Comet","og_url":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2023-11-29T17:46:04+00:00","article_modified_time":"2025-04-24T17:04:12+00:00","og_image":[{"url":"https:\/\/miro.medium.com\/v2\/resize:fit:256\/1*2aig0e37Uadypu0R70eOcA.png","type":"","width":"","height":""}],"author":"Jeremiah Oluseye","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Jeremiah Oluseye","Est. reading time":"14 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet\/"},"author":{"name":"Jeremiah Oluseye","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/73f233b4e35ab400cb753ebd29e58621"},"headline":"A Step-by-Step Guide: Efficiently Managing TensorFlow\/Keras Model Development with 
Comet","datePublished":"2023-11-29T17:46:04+00:00","dateModified":"2025-04-24T17:04:12+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet\/"},"wordCount":2297,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:256\/1*2aig0e37Uadypu0R70eOcA.png","articleSection":["Product","Tutorials"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet\/","url":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet","name":"Efficiently Managing TensorFlow\/Keras Model Development","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:256\/1*2aig0e37Uadypu0R70eOcA.png","datePublished":"2023-11-29T17:46:04+00:00","dateModified":"2025-04-24T17:04:12+00:00","description":"In this article, get a step-by-step guide on efficiently managing TensorFlow\/Keras model development with 
Comet","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet#primaryimage","url":"https:\/\/miro.medium.com\/v2\/resize:fit:256\/1*2aig0e37Uadypu0R70eOcA.png","contentUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:256\/1*2aig0e37Uadypu0R70eOcA.png"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/a-step-by-step-guide-efficiently-managing-tensorflow-keras-model-development-with-comet#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"A Step-by-Step Guide: Efficiently Managing TensorFlow\/Keras Model Development with Comet"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, 
Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/73f233b4e35ab400cb753ebd29e58621","name":"Jeremiah Oluseye","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/5534a0faf067d6eb66a7f0c328629fbf","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/10\/cropped-mBj9qH7g_400x400-96x96.jpg","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/10\/cropped-mBj9qH7g_400x400-96x96.jpg","caption":"Jeremiah 
Oluseye"},"url":"https:\/\/www.comet.com\/site\/blog\/author\/oluseyejeremiahgmail-com\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8256","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/99"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=8256"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8256\/revisions"}],"predecessor-version":[{"id":15438,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8256\/revisions\/15438"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=8256"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=8256"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=8256"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=8256"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}