{"id":6873,"date":"2023-07-19T10:01:40","date_gmt":"2023-07-19T18:01:40","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=6873"},"modified":"2025-04-24T17:15:10","modified_gmt":"2025-04-24T17:15:10","slug":"running-tensorflow-lite-image-classification-models-in-python","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/running-tensorflow-lite-image-classification-models-in-python\/","title":{"rendered":"Running TensorFlow Lite Image Classification Models in Python"},"content":{"rendered":"\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/running-tensorflow-lite-image-classification-models-in-python\">\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"mg bg\">\n<figure class=\"mh mi mj mk ml mg bg paragraph-image\"><picture><img loading=\"lazy\" decoding=\"async\" class=\"bg mm mn c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:2500\/1*aEOXmVBaGdZvuoK6uFi7mQ.jpeg\" alt=\"\" width=\"2400\" height=\"1669\"><\/picture><figcaption class=\"mo mp mq mr ms mt mu be b bf z dv\" data-selectable-paragraph=\"\">Photo by&nbsp;<a class=\"af mv\" href=\"https:\/\/unsplash.com\/@guillaumedegermain?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText\" target=\"_blank\" rel=\"noopener ugc nofollow\">Guillaume de Germain<\/a>&nbsp;on&nbsp;<a class=\"af mv\" href=\"https:\/\/unsplash.com\/s\/photos\/small-and-fast?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText\" target=\"_blank\" rel=\"noopener ugc nofollow\">Unsplash<\/a><\/figcaption><\/figure>\n<\/div>\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<p id=\"44bd\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">Following up on my earlier blogs on running edge models in Python, this fifth blog in the series on training and running TensorFlow models will explore how to run a TensorFlow 
Lite image classification model in Python. If you haven\u2019t read my earlier post on model training for this task, you can read that <a href=\"https:\/\/heartbeat.comet.ml\/using-google-cloud-automl-edge-image-classification-models-in-python-92f2885c767\">here<\/a>.<\/p>\n<h1 id=\"2ea9\" class=\"ol om fo be on oo op go oq or os gr ot ou ov ow ox oy oz pa pb pc pd pe pf pg bj\" data-selectable-paragraph=\"\">Series Pit Stops<\/h1>\n<ul class=\"\">\n<li id=\"d3cb\" class=\"mw mx fo be b gm ph mz na gp pi nc nd pj pk ng nh pl pm nk nl pn po no np nq pp pq pr bj\" data-selectable-paragraph=\"\"><a class=\"af mv\" href=\"https:\/\/heartbeat.comet.ml\/training-a-tensorflow-lite-model-for-mobile-using-automl-vision-edge-1eb61e00be47\" target=\"_blank\" rel=\"noopener ugc nofollow\"><strong class=\"be ps\">Training a TensorFlow Lite Image Classification model using AutoML Vision Edge<\/strong><\/a><\/li>\n<li id=\"e293\" class=\"mw mx fo be b gm pt mz na gp pu nc nd pj pv ng nh pl pw nk nl pn px no np nq pp pq pr bj\" data-selectable-paragraph=\"\"><a class=\"af mv\" href=\"https:\/\/heartbeat.comet.ml\/creating-a-tensorflow-lite-object-detection-model-using-google-cloud-automl-d83f997c1848\" target=\"_blank\" rel=\"noopener ugc nofollow\"><strong class=\"be ps\">Creating a TensorFlow Lite Object Detection Model using Google Cloud AutoML<\/strong><\/a><\/li>\n<li id=\"5f30\" class=\"mw mx fo be b gm pt mz na gp pu nc nd pj pv ng nh pl pw nk nl pn px no np nq pp pq pr bj\" data-selectable-paragraph=\"\"><a class=\"af mv\" href=\"https:\/\/heartbeat.comet.ml\/using-google-cloud-automl-edge-image-classification-models-in-python-92f2885c767\" target=\"_blank\" rel=\"noopener ugc nofollow\"><strong class=\"be ps\">Using Google Cloud AutoML Edge Image Classification Models in Python<\/strong><\/a><\/li>\n<li id=\"5c41\" class=\"mw mx fo be b gm pt mz na gp pu nc nd pj pv ng nh pl pw nk nl pn px no np nq pp pq pr bj\" data-selectable-paragraph=\"\"><a class=\"af mv\" 
href=\"https:\/\/heartbeat.comet.ml\/using-google-cloud-automl-edge-object-detection-models-in-python-edae622adce1\" target=\"_blank\" rel=\"noopener ugc nofollow\"><strong class=\"be ps\">Using Google Cloud AutoML Edge Object Detection Models in Python<\/strong><\/a><\/li>\n<li id=\"b7be\" class=\"mw mx fo be b gm pt mz na gp pu nc nd pj pv ng nh pl pw nk nl pn px no np nq pp pq pr bj\" data-selectable-paragraph=\"\">Running TensorFlow Lite Image Classification Models in Python (You are here)<\/li>\n<li id=\"6a83\" class=\"mw mx fo be b gm pt mz na gp pu nc nd pj pv ng nh pl pw nk nl pn px no np nq pp pq pr bj\" data-selectable-paragraph=\"\"><a class=\"af mv\" href=\"https:\/\/heartbeat.comet.ml\/running-tensorflow-lite-object-detection-models-in-python-8a73b77e13f8\" target=\"_blank\" rel=\"noopener ugc nofollow\"><strong class=\"be ps\">Running TensorFlow Lite Object Detection Models in Python<\/strong><\/a><\/li>\n<li id=\"6094\" class=\"mw mx fo be b gm pt mz na gp pu nc nd pj pv ng nh pl pw nk nl pn px no np nq pp pq pr bj\" data-selectable-paragraph=\"\"><a class=\"af mv\" href=\"https:\/\/heartbeat.comet.ml\/optimizing-the-performance-of-tensorflow-models-for-the-edge-c2d67b53824\" target=\"_blank\" rel=\"noopener ugc nofollow\"><strong class=\"be ps\">Optimizing the performance of TensorFlow models for the edge<\/strong><\/a><\/li>\n<\/ul>\n<p id=\"6929\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">While the previous blog covered building and preparing this model, this blog will look at how to run this TensorFlow Lite model in Python.<\/p>\n<p id=\"f132\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">TensorFlow Lite models have certain benefits when compared to traditional TensorFlow models\u2014namely, they\u2019re typically smaller in size and have 
lower inference latency.<\/p>\n<p id=\"a7fe\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">However, this comes with a slight tradeoff as far as model accuracy goes; but if your use-case allows for this tradeoff, then you\u2019re golden!<\/p>\n<p id=\"d2a0\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">In this blog, I\u2019ll be using a classification model that I trained earlier.<\/p>\n<p id=\"6cbb\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">If you don\u2019t know what an image classification model is\/does, or if you want to train your own model, feel free to read <a href=\"https:\/\/heartbeat.comet.ml\/training-a-tensorflow-lite-model-for-mobile-using-automl-vision-edge-1eb61e00be47\">this blog<\/a> here in which I outline how to do so.<\/p>\n<h1 id=\"323b\" class=\"ol om fo be on oo op go oq or os gr ot ou ov ow ox oy oz pa pb pc pd pe pf pg bj\" data-selectable-paragraph=\"\">Step 1: Downloading the TensorFlow Lite model<\/h1>\n<p id=\"a9bb\" class=\"pw-post-body-paragraph mw mx fo be b gm ph mz na gp pi nc nd ne pk ng nh ni pm nk nl nm po no np nq fh bj\" data-selectable-paragraph=\"\">Assuming that you\u2019ve trained your TensorFlow model with Google Cloud, you can download the model from the Vision dashboard as shown in the screenshot here:<\/p>\n<\/div>\n<\/div>\n<div class=\"mg\">\n<div class=\"ab ca\">\n<div class=\"pz qa qb qc qd qe ce qf cf qg ch bg\">\n<figure class=\"mh mi mj mk ml mg qi qj paragraph-image\">\n<div class=\"qk ql eb qm bg qn\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mm mn c\" role=\"presentation\" 
src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1000\/1*FVZjRpJ6Wxdefl2Z_CMKfg.png\" alt=\"\" width=\"1000\" height=\"356\"><\/figure>\n<\/div>\n<\/figure>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<p id=\"983f\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">Once downloaded, we\u2019re ready to set up our environment and proceed with the next steps.<\/p>\n<h1 id=\"3c45\" class=\"ol om fo be on oo op go oq or os gr ot ou ov ow ox oy oz pa pb pc pd pe pf pg bj\" data-selectable-paragraph=\"\">Step 2: Installing the required dependencies<\/h1>\n<p id=\"1845\" class=\"pw-post-body-paragraph mw mx fo be b gm ph mz na gp pi nc nd ne pk ng nh ni pm nk nl nm po no np nq fh bj\" data-selectable-paragraph=\"\">Before we go ahead and write any code, it\u2019s important that we first have all the required dependencies installed on our development machine.<\/p>\n<p id=\"60df\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">For the current example, these are the dependencies we\u2019ll need:<\/p>\n<pre class=\"mh mi mj mk ml qo qp qq qr ax qs bj\"><span id=\"6456\" class=\"qt om fo qp b ia qu qv l iq qw\" data-selectable-paragraph=\"\"><strong class=\"qp fp\">tensorflow==1.13.1\npathlib\nopencv-python<\/strong><\/span><\/pre>\n<p id=\"d19b\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" 
data-selectable-paragraph=\"\">We can use pip to install these dependencies with the following command:<\/p>\n<pre class=\"mh mi mj mk ml qo qp qq qr ax qs bj\"><span id=\"992e\" class=\"qt om fo qp b ia qu qv l iq qw\" data-selectable-paragraph=\"\"><strong class=\"qp fp\">pip install dependency_name<\/strong><\/span><\/pre>\n<blockquote class=\"qx qy qz\"><p id=\"e10e\" class=\"mw mx ra be b gm my mz na gp nb nc nd pj nf ng nh pl nj nk nl pn nn no np nq fh bj\" data-selectable-paragraph=\"\">Note: While not mandatory, it\u2019s strongly suggested that you always use a virtual environment for testing out new projects. You can read more about how to set up and activate one in <a href=\"https:\/\/heartbeat.comet.ml\/creating-python-virtual-environments-with-conda-why-and-how-180ebd02d1db\">this link<\/a>.<\/p><\/blockquote>\n<h1 id=\"6a3f\" class=\"ol om fo be on oo op go oq or os gr ot ou ov ow ox oy oz pa pb pc pd pe pf pg bj\" data-selectable-paragraph=\"\">Step 3: Loading the model and studying its input and output<\/h1>\n<p id=\"6f01\" class=\"pw-post-body-paragraph mw mx fo be b gm ph mz na gp pi nc nd ne pk ng nh ni pm nk nl nm po no np nq fh bj\" data-selectable-paragraph=\"\">Now that we have the model and our development environment ready, the next step is to create a Python snippet that allows us to load this model and run inferencing with it.<\/p>\n<p id=\"2502\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">Here\u2019s what such a snippet might look like:<\/p>\n<pre>import numpy as np\nimport tensorflow as tf\n\n# Load TFLite model and allocate tensors.\ninterpreter = tf.contrib.lite.Interpreter(model_path=\"exposure.tflite\")\n\n# Get input and output tensors.\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\n\ninterpreter.allocate_tensors()\n\n# input details\nprint(input_details)\n# output 
details\nprint(output_details)<\/pre>\n<p id=\"a799\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">Here, we first load the downloaded model and then get the input and output tensors from it.<\/p>\n<p id=\"969f\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">Next, we print the input and output tensors we obtained earlier.<br>\nIf you run the code, this is what the output might look like:<\/p>\n<pre class=\"mh mi mj mk ml qo qp qq qr ax qs bj\"><span id=\"80c3\" class=\"qt om fo qp b ia qu qv l iq qw\" data-selectable-paragraph=\"\"><strong class=\"qp fp\">Input: [{'name': 'image', 'index': 0, 'shape': array([  1, 224, 224,   3], dtype=int32), 'dtype': &lt;class 'numpy.uint8'&gt;, 'quantization': (0.007874015718698502, 128)}]<\/strong><\/span><span id=\"774f\" class=\"qt om fo qp b ia rf qv l iq qw\" data-selectable-paragraph=\"\"><strong class=\"qp fp\">Output: [{'name': 'scores', 'index': 173, 'shape': array([1, 3], dtype=int32), 'dtype': &lt;class 'numpy.uint8'&gt;, 'quantization': (0.00390625, 0)}]<\/strong><\/span><\/pre>\n<p id=\"dc1a\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">Here, the model takes the input at index 0, and the type of input is an array of shape [1,224,224,3], which basically means that it takes a single image of size 224 x 224 in RGB format.<\/p>\n<p id=\"1e25\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">The model gives the output at index 173, and the shape of the output array is [1,3], which essentially means we\u2019ll be getting the scores for each of our labels. 
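As a side note, each `quantization` tuple printed above is a `(scale, zero_point)` pair: a real value is recovered as `scale * (quantized_value - zero_point)`. A minimal sketch of dequantizing a set of raw uint8 scores, reusing the output tuple `(0.00390625, 0)` shown above (the sample scores are illustrative):

```python
import numpy as np

# (scale, zero_point) taken from the output details printed above
scale, zero_point = 0.00390625, 0

quantized = np.array([75, 164, 17], dtype=np.uint8)  # raw uint8 scores
real = scale * (quantized.astype(np.float32) - zero_point)
print(real)        # dequantized score per label
print(real.sum())  # a quantized softmax output sums to roughly 1.0
```

Since `scale` here is exactly 1\/256, a total probability of 1.0 corresponds to raw scores summing to roughly 256.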
For the purposes of my project, I\u2019ve trained the model to classify an input image as underexposed, overexposed, or good\u2014so the output has the shape [1,3].<\/p>\n<p id=\"fc82\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">If I had trained my model to detect&nbsp;<strong class=\"be ps\">n<\/strong>&nbsp;labels, then the output shape would have been [1,n].<\/p>\n<p id=\"c75a\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">Now that we know the model\u2019s input and output types and shapes, let\u2019s load an image and run it through the TensorFlow Lite model.<\/p>\n<h1 id=\"c678\" class=\"ol om fo be on oo op go oq or os gr ot ou ov ow ox oy oz pa pb pc pd pe pf pg bj\" data-selectable-paragraph=\"\">Step 4: Reading an image and passing it to the TFLite model<\/h1>\n<p id=\"5ce2\" class=\"pw-post-body-paragraph mw mx fo be b gm ph mz na gp pi nc nd ne pk ng nh ni pm nk nl nm po no np nq fh bj\" data-selectable-paragraph=\"\">Up next, we\u2019ll use pathlib to iterate through a folder containing some images that we\u2019ll be running inference on. 
We\u2019ll then read each image with OpenCV, resize it to 224&#215;224, and pass it to our model.<\/p>\n<p id=\"10e5\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">Once done, we\u2019ll print the file name and the output for that file:<\/p>\n<pre>import numpy as np\nimport tensorflow as tf\nimport cv2\nimport pathlib\n\n# Load TFLite model and allocate tensors.\ninterpreter = tf.contrib.lite.Interpreter(model_path=\"exposure.tflite\")\n\n# Get input and output tensors.\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\n\ninterpreter.allocate_tensors()\n\n# input details\nprint(input_details)\n# output details\nprint(output_details)\n\nfor file in pathlib.Path(folder_path).iterdir():\n\n    # read and resize the image\n    img = cv2.imread(r\"{}\".format(file.resolve()))\n    new_img = cv2.resize(img, (224, 224))\n    # OpenCV reads images as BGR; convert to the RGB order the model expects\n    new_img = cv2.cvtColor(new_img, cv2.COLOR_BGR2RGB)\n\n    # input_details[0]['index'] = the index which accepts the input\n    interpreter.set_tensor(input_details[0]['index'], np.expand_dims(new_img, axis=0))\n\n    # run the inference\n    interpreter.invoke()\n\n    # output_details[0]['index'] = the index which provides the output\n    output_data = interpreter.get_tensor(output_details[0]['index'])\n\n    print(\"For file {}, the output is {}\".format(file.stem, output_data))<\/pre>\n<p id=\"64bc\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">Upon running this code, here\u2019s what the output might look like:<\/p>\n<pre class=\"mh mi mj mk ml qo qp qq qr ax qs bj\"><span id=\"e275\" class=\"qt om fo qp b ia qu qv l iq qw\" data-selectable-paragraph=\"\">For file DSC00117, the output is [[ 75 164  17]]\nFor file DSC00261, the output is [[252   2   2]]\nFor file DSC00248, the output is [[252   2   2]]\nFor file DSC00116, the output is [[ 21 210  25]]\nFor file DSC00128, the 
output is [[  9 112 136]]\nFor file DSC00114, the output is [[203  42  12]]<\/span><\/pre>\n<p id=\"2cc7\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">As you can see, the output contains 3 values, which add up to roughly 256, the quantized equivalent of a total probability of 1. These represent the scores for the image, which indicate whether it\u2019s underexposed, good, or overexposed, respectively.<\/p>\n<p id=\"9bba\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">To get your model\u2019s predicted output, simply fetch the index with the greatest score and map it to the corresponding label.<\/p>\n<p id=\"cfc1\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">For example, in the first image in the output above, the highest score is the second element in the array, which maps to the label \u201c<strong class=\"be ps\">good<\/strong>\u201d; the second image is \u201c<strong class=\"be ps\">underexposed<\/strong>\u201d and the second to last is \u201c<strong class=\"be ps\">overexposed<\/strong>\u201d.<\/p>\n<p id=\"e682\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">To further improve model performance, you can batch the requests as well. 
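Before moving on to batching, the argmax-to-label lookup described above can be sketched in a few lines (the label order is assumed from the worked example, and the sample output is the first one printed above):

```python
import numpy as np

# Label order assumed from the discussion above.
labels = ["underexposed", "good", "overexposed"]

output_data = np.array([[75, 164, 17]], dtype=np.uint8)  # first example output
prediction = labels[int(np.argmax(output_data[0]))]
print(prediction)  # -> good
```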
Let\u2019s see how to do this.<\/p>\n<h1 id=\"14e1\" class=\"ol om fo be on oo op go oq or os gr ot ou ov ow ox oy oz pa pb pc pd pe pf pg bj\" data-selectable-paragraph=\"\">Step 5: Batching requests for better performance<\/h1>\n<p id=\"2136\" class=\"pw-post-body-paragraph mw mx fo be b gm ph mz na gp pi nc nd ne pk ng nh ni pm nk nl nm po no np nq fh bj\" data-selectable-paragraph=\"\">Batching with TensorFlow essentially means running a bunch of inference requests at once instead of running them one by one.<\/p>\n<p id=\"57a6\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">This will result in reduced overall inference time for our model.<\/p>\n<p id=\"3c90\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">Before we go ahead and batch the requests, we need to decide on a batch size. This might vary from machine to machine, and setting a very large batch size might result in an out-of-memory error.<\/p>\n<p id=\"ce62\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">To keep things simple, we will set the batch size to 50. 
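As an aside, a generic way to form such batches, including a smaller final batch for any leftover images, is a small helper like this (a sketch, not taken from the snippets in this post, which only process full batches of 50):

```python
def batches(items, size=50):
    """Yield successive batches of `size` items, then a smaller final batch."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # leftover items that didn't fill a whole batch
        yield batch

# e.g. 120 files with a batch size of 50 -> batches of 50, 50 and 20
sizes = [len(b) for b in batches(range(120), size=50)]
print(sizes)  # -> [50, 50, 20]
```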
Here\u2019s what the above example with batching looks like:<\/p>\n<pre>import numpy as np\nimport tensorflow as tf\nimport cv2\nimport pathlib\n\n# Load TFLite model and allocate tensors.\ninterpreter = tf.contrib.lite.Interpreter(model_path=\"exposure.tflite\")\n\n# Get input and output tensors.\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\n\ninterpreter.allocate_tensors()\n\n# input details\nprint(input_details)\n# output details\nprint(output_details)\n\nimages = []\n\nfor file in pathlib.Path(folder_path).iterdir():\n\n    # read and resize the image\n    img = cv2.imread(r\"{}\".format(file.resolve()))\n    new_img = cv2.resize(img, (224, 224))\n    # OpenCV reads images as BGR; convert to the RGB order the model expects\n    new_img = cv2.cvtColor(new_img, cv2.COLOR_BGR2RGB)\n\n    images.append(new_img)\n\n    if len(images) == 50:\n        interpreter.set_tensor(input_details[0]['index'], np.array(images))\n        # run the inference\n        interpreter.invoke()\n        # output_details[0]['index'] = the index which provides the output\n        output_data = interpreter.get_tensor(output_details[0]['index'])\n        # clear the list\n        images.clear()<\/pre>\n<p id=\"8178\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">But upon running the code above, you might see an error, which looks as follows:<\/p>\n<pre class=\"mh mi mj mk ml qo qp qq qr ax qs bj\"><span id=\"734b\" class=\"qt om fo qp b ia qu qv l iq qw\" data-selectable-paragraph=\"\">Traceback (most recent call last):\n  File \"test.py\", line 48, in &lt;module&gt;\n    interpreter.set_tensor(input_details[0]['index'], input_data)\n  File \"\/Users\/harshitdwivedi\/Desktop\/tf_env\/lib\/python3.7\/site-packages\/tensorflow\/lite\/python\/interpreter.py\", line 175, in set_tensor\n    self._interpreter.SetTensor(tensor_index, value)\n  File 
\"\/Users\/harshitdwivedi\/Desktop\/tf_env\/lib\/python3.7\/site-packages\/tensorflow\/lite\/python\/interpreter_wrapper\/tensorflow_wrap_interpreter_wrapper.py\", line 136, in SetTensor\n    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_SetTensor(self, i, value)\n<strong class=\"qp fp\">ValueError: Cannot set tensor: Dimension mismatch<\/strong><\/span><\/pre>\n<p id=\"b508\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">The reason for this is a mismatch between the shape of the model\u2019s input tensor and the shape of the input that we\u2019re providing.<\/p>\n<p id=\"a880\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">The input that the model accepts is [1, 224, 224, 3], whereas the input we\u2019re providing is [50, 224, 224, 3]. To fix this, we can simply resize the model\u2019s input tensor before running inference.<\/p>\n<p id=\"ee63\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">Here\u2019s what the code for doing so looks like:<\/p>\n<pre>import numpy as np\nimport tensorflow as tf\nimport cv2\nimport pathlib\n\n# Load TFLite model and allocate tensors.\ninterpreter = tf.contrib.lite.Interpreter(model_path=\"exposure.tflite\")\n\n# Get input and output tensors.\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\n\n# input details\nprint(input_details)\n# output details\nprint(output_details)\n\nimages = []\n\nfor file in pathlib.Path(folder_path).iterdir():\n\n    # read and resize the image\n    img = cv2.imread(r\"{}\".format(file.resolve()))\n    new_img = cv2.resize(img, (224, 224))\n    # OpenCV reads images as BGR; convert to the RGB order the model expects\n    new_img = cv2.cvtColor(new_img, cv2.COLOR_BGR2RGB)\n\n    images.append(new_img)\n\n    if len(images) == 50:\n        # resize the input tensor\n        
interpreter.resize_tensor_input(input_details[0]['index'], [len(images), 224, 224, 3])\n        interpreter.allocate_tensors()\n        interpreter.set_tensor(input_details[0]['index'], np.array(images))\n        # run the inference\n        interpreter.invoke()\n        # output_details[0]['index'] = the index which provides the output\n        output_data = interpreter.get_tensor(output_details[0]['index'])\n        # clear the list\n        images.clear()<\/pre>\n<p id=\"e777\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">Before running inference, we resize the input tensor so that it accepts exactly as many images as there are in the current batch, and then allocate tensors again.<\/p>\n<p id=\"113c\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">On running the code above, this is what you might get as the output:<\/p>\n<pre class=\"mh mi mj mk ml qo qp qq qr ax qs bj\"><span id=\"64aa\" class=\"qt om fo qp b ia qu qv l iq qw\" data-selectable-paragraph=\"\">[[ 75 164  17], [252   2   2], [252   2   2], [ 21 210  25], ..., [203  42  12], [  9 112 136]]<\/span><\/pre>\n<p id=\"2f3b\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">The length of this array would be the same as your batch size (here 50); so if you maintain another list of file names, you can reference the score for each image easily!<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"ab ca rg rh ri rj\" role=\"separator\"><\/div>\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<p id=\"ecbb\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">And that\u2019s it! 
While not always the most effective solution, TFLite models are nonetheless an extremely viable alternative when it comes to running your models on edge hardware, or if the model\u2019s latency is a core concern for your app!<\/p>\n<p id=\"4ee0\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\">In the next part of this series, I\u2019ll be covering how we can do the same for object detection TF Lite models, in order to locate and track detected objects in an image. Stay tuned for that!<\/p>\n<p id=\"8cea\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\"><em class=\"ra\">Thanks for reading! If you enjoyed this story, please&nbsp;<\/em><strong class=\"be ps\"><em class=\"ra\">click the&nbsp;<\/em><\/strong>\ud83d\udc4f<strong class=\"be ps\"><em class=\"ra\">&nbsp;button<\/em><\/strong><em class=\"ra\">&nbsp;<\/em><strong class=\"be ps\"><em class=\"ra\">and share it&nbsp;<\/em><\/strong><em class=\"ra\">to help others find it! Feel free to leave a comment&nbsp;<\/em>\ud83d\udcac<em class=\"ra\">&nbsp;below.<\/em><\/p>\n<p id=\"1d42\" class=\"pw-post-body-paragraph mw mx fo be b gm my mz na gp nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq fh bj\" data-selectable-paragraph=\"\"><em class=\"ra\">Have feedback? 
Let\u2019s connect&nbsp;<\/em><a class=\"af mv\" href=\"https:\/\/twitter.com\/harshithdwivedi\" target=\"_blank\" rel=\"noopener ugc nofollow\"><strong class=\"be ps\"><em class=\"ra\">on Twitter<\/em><\/strong><\/a><em class=\"ra\">.<\/em><\/p>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Photo by&nbsp;Guillaume de Germain&nbsp;on&nbsp;Unsplash Following up on my earlier blogs on running edge models in Python, this fifth blog in the series of Training and running Tensorflow models will explore how to run a TensorFlow Lite image classification model in Python. If you haven\u2019t read my earlier post on model training for this task, you [&hellip;]<\/p>\n","protected":false},"author":56,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[7],"tags":[],"coauthors":[159],"class_list":["post-6873","post","type-post","status-publish","format-standard","hentry","category-tutorials"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Running TensorFlow Lite Image Classification Models in Python - Comet<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/running-tensorflow-lite-image-classification-models-in-python\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Running TensorFlow Lite Image Classification Models in Python\" \/>\n<meta property=\"og:description\" content=\"Photo by&nbsp;Guillaume de Germain&nbsp;on&nbsp;Unsplash Following up on my earlier blogs on running edge models 
in Python, this fifth blog in the series of Training and running Tensorflow models will explore how to run a TensorFlow Lite image classification model in Python. If you haven\u2019t read my earlier post on model training for this task, you [&hellip;]\" \/>\n<meta property=\"article:published_time\" content=\"2023-07-19T18:01:40+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:15:10+00:00\" \/>\n<meta name=\"author\" content=\"Harshit Dwivedi\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->"}