{"id":4946,"date":"2023-01-03T11:33:03","date_gmt":"2023-01-03T19:33:03","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=4946"},"modified":"2025-04-24T17:16:15","modified_gmt":"2025-04-24T17:16:15","slug":"pythae-comet","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/pythae-comet\/","title":{"rendered":"Pythae +\u00a0Comet"},"content":{"rendered":"\n<section class=\"section section--body\">\n<div class=\"section-divider\">\n<hr class=\"section-divider\">\n<\/div>\n<div class=\"section-content\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-4947 size-full\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak1.png\" alt=\"Two boxes with the number six and an arrow to indicate the original image goes through an autoencoder and is reconstructed\" width=\"1280\" height=\"720\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak1.png 1280w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak1-300x169.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak1-1024x576.png 1024w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak1-768x432.png 768w\" sizes=\"auto, (max-width: 1280px) 100vw, 1280px\" \/><\/figure><div class=\"section-inner sectionLayout--insetColumn\" style=\"text-align: left;\">\n<p class=\"graf graf--p\">The <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/github.com\/clementchadebec\/benchmark_VAE\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/github.com\/clementchadebec\/benchmark_VAE\"><strong class=\"markup--strong markup--p-strong\">Pythae<\/strong> library<\/a>, which brings together many Variational Autoencoder models and enables researchers to make comparisons and conduct reproducible research, is now integrated with <strong class=\"markup--strong markup--p-strong\">Comet ML<\/strong>!<\/p>\n<figure class=\"graf graf--figure\"><\/figure>\n<p 
class=\"graf graf--p\">The <a class=\"markup--anchor markup--p-anchor\" href=\"\/signup?utm_source=website&amp;utm_medium=referral&amp;utm_campaign=Heartbeat_Content\" target=\"_blank\" rel=\"noopener\" data-href=\"\/signup?utm_source=heartbeat&amp;utm_medium=referral&amp;utm_campaign=AMS_US_EN_SNUP_heartbeat_CTA\">Comet ML<\/a> experiment tracking tool helps researchers store their experiment configs, track their training runs, and compare results in an easy, understandable way through a visual interface.<\/p>\n<p class=\"graf graf--p\">Now let\u2019s see in practice how easy it is to monitor an experiment with Comet ML in Pythae!<\/p>\n<h3 class=\"graf graf--h3\">What is a (Variational) Autoencoder?<\/h3>\n<p class=\"graf graf--p\">Realistically generated images, texts, sounds, and more have come to the fore in recent years as the output of surprisingly capable deep neural network models.<\/p>\n<p class=\"graf graf--p\">Although these generative models, which require careful design and huge amounts of data, appear in the literature with many different architectures, the Generative Adversarial Network and Variational Autoencoder model families are at the forefront of this race!<\/p>\n<p class=\"graf graf--p\">Autoencoders are non-generative models that learn to convert data into a compact code; they consist of two basic parts, an Encoder and a Decoder. Their main purpose is to compress the data given as input and reproduce it with as little loss as possible. 
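As a quick illustration of the compress-then-reconstruct idea, here is a minimal sketch of an (untrained) linear autoencoder forward pass in plain NumPy; the names `encode` and `decode` are illustrative only, not Pythae's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "Encoder" and "Decoder": 784-dim input -> 16-dim code -> 784-dim reconstruction.
# Real autoencoders use deep nonlinear networks; the weights here are random and untrained.
W_enc = rng.normal(scale=0.01, size=(784, 16))
W_dec = rng.normal(scale=0.01, size=(16, 784))

def encode(x):
    return x @ W_enc          # compress the input to the latent code

def decode(z):
    return z @ W_dec          # reconstruct the input from the code

x = rng.random((1, 784))      # a flattened 28x28 "image"
z = encode(x)
x_hat = decode(z)

print(z.shape, x_hat.shape)   # (1, 16) (1, 784)
recon_loss = np.mean((x - x_hat) ** 2)  # training minimizes this reconstruction loss
```

The 16-dimensional code is the "bottleneck": everything the decoder knows about the input must pass through it, which is what forces the model to learn a compressed representation.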
The Variational Autoencoder family, on the other hand, consists of generative models that can sample random codes from a learned distribution and generate new data from those codes.<\/p>\n<figure class=\"graf graf--figure\">\n<\/figure><\/div><\/div><\/section>\n\n\n\n<figure class=\"wp-block-image aligncenter wp-image-4949 size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"152\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak2.png\" alt=\"Two rows of black and white numbers - the numbers are the same but the top row is blurred and the bottom row is clear\" class=\"wp-image-4949\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak2.png 600w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak2-300x76.png 300w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><figcaption class=\"wp-element-caption\">Denoising with Auto Encoders &#8211; Keras<\/figcaption><\/figure>\n\n\n\n<p class=\"graf graf--p\">The idea of the Variational Autoencoder was presented in 2013 by Diederik P. 
Kingma and Max Welling in the article <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/arxiv.org\/abs\/1312.6114\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/arxiv.org\/abs\/1312.6114\"><strong class=\"markup--strong markup--p-strong\">\u201cAuto-Encoding Variational Bayes.\u201d<\/strong><\/a><\/p>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter wp-image-4950 size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"590\" height=\"318\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak3.jpeg\" alt=\"A large number of AI generated faces and their evolution\" class=\"wp-image-4950\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak3.jpeg 590w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak3-300x162.jpeg 300w\" sizes=\"auto, (max-width: 590px) 100vw, 590px\" \/><figcaption class=\"wp-element-caption\">A continuous space of faces generated by Tom White using VAEs. Source: https:\/\/gaussian37.github.io\/deep-learning-chollet-8-4\/<\/figcaption><\/figure>\n\n\n\n<p class=\"graf graf--p\">What distinguishes the models in this family from standard Autoencoders is that the input passed through the Encoder is encoded as a probability distribution. If this probability distribution is, for example, a normal distribution, the Encoder output will be the mean and variance values. By sampling from this distribution, the code is obtained, and this code can then be decoded by the Decoder. 
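This sampling step is usually implemented with the reparameterization trick; here is a minimal NumPy sketch (the variable names are illustrative, not taken from any library):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Encoder outputs for one input: the parameters of a
# 16-dimensional normal distribution q(z|x).
mu = np.zeros(16)        # predicted mean
log_var = np.zeros(16)   # predicted log-variance

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I), so the
# random sampling step stays differentiable with respect to mu and log_var.
eps = rng.standard_normal(16)
z = mu + np.exp(0.5 * log_var) * eps

print(z.shape)  # (16,)
# The Decoder then maps z back to data space; brand-new data can be generated
# by decoding a z drawn directly from the prior N(0, I).
```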
Although the Decoder structure is the same in Standard and Variational Autoencoder models, there is a difference in the Encoder structure.<\/p>\n\n\n\n<figure class=\"graf graf--figure\"><\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter wp-image-4951 size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"770\" height=\"346\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak4.png\" alt=\"an illustration of the structure of an autoencoder vs. variational auto encoder\" class=\"wp-image-4951\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak4.png 770w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak4-300x135.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak4-768x345.png 768w\" sizes=\"auto, (max-width: 770px) 100vw, 770px\" \/><figcaption class=\"wp-element-caption\">Auto-Encoder vs Variational Auto-Encoder [Source]<\/figcaption><\/figure>\n\n\n\n<p class=\"graf graf--p\">Standard Autoencoders can be trained easily with a single loss function and no extra parameters. In Variational Autoencoders, on the other hand, finding the right balance between the reconstruction loss and the latent loss is challenging, so training is considerably harder. However, because standard Autoencoders are prone to overfitting, Variational Autoencoders are often preferred.<\/p>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">What is&nbsp;Pythae?<\/h3>\n\n\n\n<p class=\"graf graf--p\">Pythae is a Python library that gathers the most commonly used (Variational) Auto-Encoder models under one roof, providing easy benchmarking and comparison opportunities, especially for researchers.<\/p>\n\n\n\n<p class=\"graf graf--p\">It allows researchers to train any of its models on their own data, and supports sharing and uploading trained models to the HuggingFace Hub. 
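The reconstruction-versus-latent-loss balance mentioned earlier is usually written as a single weighted objective (the beta-VAE form used later in this post). A minimal illustrative NumPy sketch, not Pythae's actual implementation:

```python
import numpy as np

# Closed-form KL divergence between q(z|x) = N(mu, sigma^2) and the prior
# N(0, I), plus a mean-squared reconstruction term. `beta` weights the KL
# (latent) term, as in beta-VAE; beta = 1 recovers the plain VAE objective.
def vae_loss(x, x_hat, mu, log_var, beta=1.0):
    recon = np.mean((x - x_hat) ** 2)
    kl = -0.5 * np.mean(1 + log_var - mu**2 - np.exp(log_var))
    return recon + beta * kl

x = np.ones(4); x_hat = np.ones(4)        # perfect reconstruction
mu = np.zeros(4); log_var = np.zeros(4)   # q matches the prior exactly
print(vae_loss(x, x_hat, mu, log_var))    # 0.0
```

Raising `beta` pushes the latent codes toward the prior at the cost of reconstruction quality; tuning it is exactly the balance the paragraph above describes.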
And thanks to its integration with tools such as Comet, it lets you monitor your experiments.<\/p>\n\n\n\n<p class=\"graf graf--p\">\ud83d\udfe0 <strong class=\"markup--strong markup--p-strong\">You can install the latest stable version of the pythae library using pip:<\/strong> <code class=\"markup--code markup--p-code\">pip install pythae<\/code><\/p>\n\n\n\n<p class=\"graf graf--p\">\ud83d\udfe0 <strong class=\"markup--strong markup--p-strong\">To install the latest development version of the Pythae library from GitHub: <\/strong><code class=\"markup--code markup--p-code\">pip install git+https:\/\/github.com\/clementchadebec\/benchmark_VAE.git<\/code><\/p>\n\n\n\n<p class=\"graf graf--p\"><strong class=\"markup--strong markup--p-strong\">Some models in the Pythae library:<\/strong> Autoencoder (AE), Variational Autoencoder (VAE), Beta Variational Autoencoder (BetaVAE), VAE with Linear Normalizing Flows (VAE_LinNF), VAE with Inverse Autoregressive Flows (VAE_IAF), Disentangled Beta Variational Autoencoder (DisentangledBetaVAE), Disentangling by Factorising (FactorVAE), Beta-TC-VAE (BetaTCVAE).<\/p>\n\n\n\n<p class=\"graf graf--p\"><strong class=\"markup--strong markup--p-strong\">Some samplers available with the Pythae library: <\/strong>Normal prior (NormalSampler), Gaussian mixture (GaussianMixtureSampler), Two stage VAE sampler (TwoStageVAESampler), Unit sphere uniform sampler (HypersphereUniformSampler), Poincar\u00e9 Disk sampler (PoincareDiskSampler), VAMP prior sampler (VAMPSampler).<\/p>\n\n\n\n<p class=\"graf graf--p\">You can find the complete list of models and samplers and all sample source codes for the library <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/github.com\/clementchadebec\/benchmark_VAE#monitoring-your-experiments-with-comet_ml-\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/github.com\/clementchadebec\/benchmark_VAE#monitoring-your-experiments-with-comet_ml-\"><strong class=\"markup--strong 
markup--p-strong\">here.<\/strong><\/a><\/p>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">What is&nbsp;Comet?<\/h3>\n\n\n\n<p class=\"graf graf--p\">Comet is a platform that enables teams and individuals to manage key machine learning lifecycle steps such as monitoring, versioning, model registration, and comparing results, especially iterative model training processes performed by teams of data science and machine learning researchers. And now Comet is fully integrated with Pythae!<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter graf graf--figure\"><img loading=\"lazy\" decoding=\"async\" width=\"770\" height=\"328\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak5.png\" alt=\"an image of the machine learning lifecyle which is iterative \" class=\"wp-image-4952\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak5.png 770w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak5-300x128.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak5-768x327.png 768w\" sizes=\"auto, (max-width: 770px) 100vw, 770px\" \/><\/figure>\n\n\n\n<p>&nbsp;<\/p>\n\n\n\n<p class=\"graf graf--p\">The Comet platform supports every stage of the machine learning lifecycle, from monitoring training runs to monitoring models in production. In addition, due to its flexible nature, it can be run with dozens of installation options both for the company and on any infrastructure, including virtual private cloud (VPC) installations. On the other hand, you can launch Comet easily and quickly by adding just two lines of code to your script, notebook, or pipeline. 
This will ensure that everything you need to monitor and manage your code, metrics and hyperparameters will be transferred to the platform.<\/p>\n\n\n\n<p class=\"graf graf--p\">Comet is free for individuals and you can create your account<a class=\"markup--anchor markup--p-anchor\" href=\"\/signup?utm_source=website&amp;utm_medium=referral&amp;utm_campaign=Heartbeat_Content\" target=\"_blank\" rel=\"noopener\" data-href=\"\/signup?utm_source=heartbeat&amp;utm_medium=referral&amp;utm_campaign=AMS_US_EN_SNUP_heartbeat_CTA\"> here.<\/a><\/p>\n\n\n\n<p class=\"graf graf--p\"><strong class=\"markup--strong markup--p-strong\">The easiest way to install Comet on your system is to use <\/strong><code class=\"markup--code markup--p-code\"><strong class=\"markup--strong markup--p-strong\">pip<\/strong><\/code><strong class=\"markup--strong markup--p-strong\">: <\/strong><code class=\"markup--code markup--p-code\">pip install comet_ml<\/code><\/p>\n\n\n\n<section class=\"section section--body\">\n<div class=\"section-content\">\n<div class=\"section-inner sectionLayout--insetColumn\">\n<h3 class=\"graf graf--h3\">Pythae +&nbsp;Comet<\/h3>\n<p class=\"graf graf--p\">Now that we have an idea about Pythae and Comet, it\u2019s time to practice!<\/p>\n<p class=\"graf graf--p\">The Variational Autoencoder (VAE) model has been implemented in the <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/github.com\/clementchadebec\/benchmark_VAE\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/github.com\/clementchadebec\/benchmark_VAE\"><strong class=\"markup--strong markup--p-strong\">Pythae<\/strong><\/a> library. 
We will train a reconstruction example on the MNIST dataset using the Pythae library and see how we can follow the training logs via <a class=\"markup--anchor markup--p-anchor\" href=\"\/signup?utm_source=website&amp;utm_medium=referral&amp;utm_campaign=Heartbeat_Content\" target=\"_blank\" rel=\"noopener\" data-href=\"\/signup?utm_source=heartbeat&amp;utm_medium=referral&amp;utm_campaign=AMS_US_EN_SNUP_heartbeat_CTA\"><strong class=\"markup--strong markup--p-strong\">Comet ML<\/strong><\/a><strong class=\"markup--strong markup--p-strong\">.<\/strong><\/p>\n<h4 class=\"graf graf--h4\"><strong class=\"markup--strong markup--h4-strong\">Importing Libraries<\/strong><\/h4>\n<p class=\"graf graf--p\">We import the <strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">torchvision <\/em><\/strong>package for the vision datasets, the <strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">pythae<\/em><\/strong> package to build and train the reconstruction model, Pythae\u2019s CometCallback (which provides the integration between Pythae and Comet ML) to log the training, and the <strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">comet_ml<\/em><\/strong> package if you want to display the Comet UI directly in the Jupyter notebook.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">import torchvision.datasets as datasets\n\n# pythae\nfrom pythae.models import BetaVAE, BetaVAEConfig\nfrom pythae.trainers import BaseTrainerConfig\nfrom pythae.pipelines.training import TrainingPipeline\nfrom pythae.models.nn.benchmarks.mnist import Encoder_ResNet_VAE_MNIST, Decoder_ResNet_AE_MNIST\n\n# Create your callback\nfrom pythae.trainers.training_callbacks import CometCallback\n\n# Optionally, import comet_ml to view the Comet UI directly in the Jupyter 
notebook\nimport comet_ml<\/span><\/pre>\n<h4 class=\"graf graf--h4\">Downloading the Dataset<\/h4>\n<p class=\"graf graf--p\">Let\u2019s download the MNIST dataset with the <strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">torchvision.datasets<\/em><\/strong> package. Then, let\u2019s reshape it according to the input shape of the model by separating it as train and eval. Finally, let\u2019s normalize.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">mnist_trainset = datasets.MNIST(root='..\/data', train=True, download=True, transform=None)\n\ntrain_dataset = mnist_trainset.data[:-10000].reshape(-1, 1, 28, 28) \/ 255.\neval_dataset = mnist_trainset.data[-10000:].reshape(-1, 1, 28, 28) \/ 255.<\/span><\/pre>\n<h4 class=\"graf graf--h4\">Defining Model Parameters<\/h4>\n<ul class=\"postList\">\n<li class=\"graf graf--li\">Let\u2019s define Trainer arguments:<\/li>\n<\/ul>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">training_config = BaseTrainerConfig(\n    output_dir='my_model',\n    learning_rate=1e-4,\n    batch_size=100,\n    num_epochs=10, # Change this to train the model a bit more,\n    steps_predict=3\n)<\/span><\/pre>\n<ul class=\"postList\">\n<li class=\"graf graf--li\">Let\u2019s define the VAE model arguments:<\/li>\n<\/ul>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">model_config = BetaVAEConfig(\n    input_dim=(1, 28, 28),\n    latent_dim=16,\n    beta=2.\n\n)<\/span><\/pre>\n<ul class=\"postList\">\n<li class=\"graf graf--li\">To create the VAE model, let\u2019s define the above config and Autoencoder\u2019s encoder and decoder neural network model structure:<\/li>\n<\/ul>\n<pre class=\"graf 
graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">model = BetaVAE(\n    model_config=model_config,\n    encoder=Encoder_ResNet_VAE_MNIST(model_config),\n    decoder=Decoder_ResNet_AE_MNIST(model_config)\n)<\/span><\/pre>\n<h4 class=\"graf graf--h4\">Comet ML\u200a\u2014\u200aDefining Callback<\/h4>\n<p class=\"graf graf--p\">Before starting the training pipeline, we need to create the <strong class=\"markup--strong markup--p-strong\">CometCallback<\/strong>. To access this feature;<\/p>\n<ul class=\"postList\">\n<li class=\"graf graf--li\">You must create a <a class=\"markup--anchor markup--li-anchor\" href=\"\/signup?utm_source=website&amp;utm_medium=referral&amp;utm_campaign=Heartbeat_Content\" target=\"_blank\" rel=\"noopener\" data-href=\"\/signup?utm_source=heartbeat&amp;utm_medium=referral&amp;utm_campaign=AMS_US_EN_SNUP_heartbeat_CTA\"><strong class=\"markup--strong markup--li-strong\">comet_ml <\/strong><\/a>account.<\/li>\n<li class=\"graf graf--li\">You should create a new project under your account and note the project name.<\/li>\n<\/ul>\n<figure class=\"graf graf--figure\">\n<\/figure><\/div><\/div><\/section>\n\n\n\n<figure class=\"wp-block-image aligncenter wp-image-4953 size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"770\" height=\"279\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak6.png\" alt=\"A screenshot of the 'create a project' page on Comet ML\" class=\"wp-image-4953\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak6.png 770w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak6-300x109.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak6-768x278.png 768w\" sizes=\"auto, (max-width: 770px) 100vw, 770px\" \/><figcaption class=\"wp-element-caption\">CometML\u200a\u2014\u200aCREATE NEW\u00a0PROJECT<\/figcaption><\/figure>\n\n\n\n<figcaption 
class=\"imageCaption\"><\/figcaption>\n\n\n\n<ul class=\"wp-block-list postList\">\n<li>You should create and note the API KEY from your Comet ML account settings.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image aligncenter graf graf--figure\"><img loading=\"lazy\" decoding=\"async\" width=\"770\" height=\"525\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak7.png\" alt=\"a screenshot of the login API page on Comet ML\" class=\"wp-image-4954\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak7.png 770w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak7-300x205.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak7-768x524.png 768w\" sizes=\"auto, (max-width: 770px) 100vw, 770px\" \/><\/figure>\n\n\n\n<p>&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list postList\">\n<li>You can note down your Comet ML username as the default workspace name.<\/li>\n<\/ul>\n\n\n\n<p class=\"graf graf--p\">To enable monitoring on Comet ML, let\u2019s pass all the information we noted to the callback\u2019s setup arguments as follows, then add the callback to the callbacks list that is given to the TrainingPipeline.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">callbacks = [] # the TrainingPipeline expects a list of callbacks\n\ncomet_cb = CometCallback() # Build the callback\n\n# Set up the callback\ncomet_cb.setup(\n    training_config=training_config, # training config\n    model_config=model_config, # model config\n    api_key={{API KEY}}, # specify your comet api-key\n    project_name={{PROJECT NAME}}, # specify your comet project\n    workspace={{WORKSPACE NAME}}, # default workspace name = comet ml username\n    #offline_run=True, # run in offline mode\n    #offline_directory='my_offline_runs' # set the directory to store the offline runs\n)\n\ncallbacks.append(comet_cb) # Add it to the callbacks list<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">As a result of 
these definitions, you will get a link in the format <strong class=\"markup--strong markup--p-strong\">`https:\/\/www.comet.com\/{username}\/{project_name}\/{id}` <\/strong>where you can view your experiment results.<\/p>\n\n\n\n<h4 class=\"wp-block-heading graf graf--h4\">Training Pipeline Definition and VAE&nbsp;Training<\/h4>\n\n\n\n<p class=\"graf graf--p\">Let\u2019s define a training pipeline with Pythae\u2019s TrainingPipeline function, using the BaseTrainerConfig object containing the training arguments we defined earlier and the BetaVAE model containing the VAE Autoencoder structure.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">pipeline = TrainingPipeline(\n    training_config=training_config,\n    model=model\n)<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Let\u2019s start the training by passing the training and validation datasets, together with the callbacks list containing the <strong class=\"markup--strong markup--p-strong\">CometCallback<\/strong>, to the pipeline we defined:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">pipeline(\n    train_data=train_dataset,\n    eval_data=eval_dataset,\n    callbacks=callbacks # pass the callbacks to the TrainingPipeline and you are done!\n)\n# You can log in to https:\/\/comet.com\/your_comet_username\/your_comet_project to monitor your training<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">The training has started, and the results will immediately begin to appear in the project you created in Comet ML.<\/p>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter wp-image-4955 size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"770\" height=\"555\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak8.png\" alt=\"A screenshot of the Jupyter Notebook results showing training and eval at 100%\" class=\"wp-image-4955\" 
srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak8.png 770w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak8-300x216.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak8-768x554.png 768w\" sizes=\"auto, (max-width: 770px) 100vw, 770px\" \/><figcaption class=\"wp-element-caption\">Jupyter Notebook Pipeline Results<\/figcaption><\/figure>\n\n\n\n<p class=\"graf graf--p\">Train loss and eval loss graphs of the training can be viewed in real time via Comet ML.<\/p>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter wp-image-4956 size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"770\" height=\"316\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak9.png\" alt=\"a screenshot of training epochs on Comet ML\" class=\"wp-image-4956\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak9.png 770w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak9-300x123.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak9-768x315.png 768w\" sizes=\"auto, (max-width: 770px) 100vw, 770px\" \/><figcaption class=\"wp-element-caption\">Model training logs from https:\/\/comet.com\/your_comet_username\/your_comet_project<\/figcaption><\/figure>\n\n\n\n<p class=\"graf graf--p\">If you want to display the test results as Comet UI directly in a Jupyter Notebook, you can pull the test results with the <em class=\"markup--em markup--p-em\">`get_global_experiment()`<\/em> function in the <strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">comet_ml<\/em><\/strong> package and display them with the <em class=\"markup--em markup--p-em\">`display()`<\/em> function.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">experiment = comet_ml.get_global_experiment()\nexperiment.display()<\/span><\/pre>\n\n\n\n<h4 
class=\"wp-block-heading graf graf--h4\">Other Features of&nbsp;Comet<\/h4>\n\n\n\n<p class=\"graf graf--p\">You can view the model\u2019s step-by-step outputs on the Graphics page of the Experiment tab in Comet ML.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter graf graf--figure\"><img loading=\"lazy\" decoding=\"async\" width=\"770\" height=\"377\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak10.png\" alt=\"A screenshot of the Image Panel feature in Comet ML\" class=\"wp-image-4957\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak10.png 770w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak10-300x147.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak10-768x376.png 768w\" sizes=\"auto, (max-width: 770px) 100vw, 770px\" \/><\/figure>\n\n\n\n<p>&nbsp;<\/p>\n\n\n\n<p class=\"graf graf--p\">On the System Metrics page, you can see GPU memory usage, CPU usage, and memory usage values.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter graf graf--figure\"><img loading=\"lazy\" decoding=\"async\" width=\"770\" height=\"327\" src=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak11.png\" alt=\"A screenshot of GPU usage in Comet ML\" class=\"wp-image-4958\" srcset=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak11.png 770w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak11-300x127.png 300w, https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/01\/Basak11-768x326.png 768w\" sizes=\"auto, (max-width: 770px) 100vw, 770px\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">Conclusion<\/h3>\n\n\n\n<p class=\"graf graf--p\">In this article, we learned how to follow the training logs (and more) of the model we used to reconstruct the images in the MNIST dataset, using the Comet ML tool that is fully integrated with Pythae. 
Hope it was helpful!<\/p>\n\n\n\n<p class=\"graf graf--p\"><strong class=\"markup--strong markup--p-strong\">References:<\/strong><\/p>\n\n\n\n<ol class=\"wp-block-list postList\">\n<li>\u00d6ng\u00fcn, C. <a class=\"markup--anchor markup--li-anchor\" data-href=\"https:\/\/cihanongun.medium.com\/autoencoder-otokodlay%C4%B1c%C4%B1-nedir-ne-i%C3%A7in-kullan%C4%B1l%C4%B1r-e520a591746a\" href=\"https:\/\/cihanongun.medium.com\/autoencoder-otokodlay%C4%B1c%C4%B1-nedir-ne-i%C3%A7in-kullan%C4%B1l%C4%B1r-e520a591746a\" target=\"_blank\" rel=\"noopener\">Autoencoder (Otokodlay\u0131c\u0131) nedir? Ne i\u00e7in kullan\u0131l\u0131r?<\/a><\/li>\n\n\n\n<li>\u00d6ng\u00fcn, C. <a class=\"markup--anchor markup--li-anchor\" data-href=\"https:\/\/cihanongun.medium.com\/variational-autoencoder-vae-nedir-autoencoderdan-ne-fark%C4%B1-vard%C4%B1r-c3f44f6f25c8\" href=\"https:\/\/cihanongun.medium.com\/variational-autoencoder-vae-nedir-autoencoderdan-ne-fark%C4%B1-vard%C4%B1r-c3f44f6f25c8\" target=\"_blank\" rel=\"noopener\">Variational Autoencoder (VAE) nedir? Autoencoder\u2019dan ne fark\u0131 vard\u0131r?<\/a><\/li>\n\n\n\n<li><a class=\"markup--anchor markup--li-anchor\" data-href=\"https:\/\/github.com\/clementchadebec\/benchmark_VAE\" href=\"https:\/\/github.com\/clementchadebec\/benchmark_VAE\" target=\"_blank\" rel=\"noopener\">Pythae Github Repo<\/a><\/li>\n<\/ol>\n\n\n\n<section class=\"section section--body\">\n<div class=\"section-divider\">\n<hr class=\"section-divider\">\n<\/div>\n<div class=\"section-content\">\n<div class=\"section-inner sectionLayout--insetColumn\"><\/div>\n<\/div>\n<\/section>\n","protected":false},"excerpt":{"rendered":"<p>The Pythae library, which brings together many Variational Autoencoder models and enables researchers to make comparisons and conduct reproducible research, is now integrated with Comet ML! 
Written by Claire Pena and Başak Buluz Kömeçoğlu.