{"id":7621,"date":"2023-09-22T12:22:31","date_gmt":"2023-09-22T20:22:31","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=7621"},"modified":"2025-04-24T17:13:50","modified_gmt":"2025-04-24T17:13:50","slug":"ai-emotion-recognition-using-computer-vision","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/","title":{"rendered":"AI Emotion Recognition Using Computer Vision"},"content":{"rendered":"\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\">\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<figure class=\"lw lx ly lz ma mb lt lu paragraph-image\">\n<div class=\"mc md eb me bg mf\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mg mh c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*UghBE0Itc0NhXkbbMPKMRA.jpeg\" alt=\"Grid of images of a blonde young woman experiencing many different emotions, including: anger, surprise, sadness, fear, joy, disgust, embarrassment, anxiety, happiness, shame, and more.\" width=\"700\" height=\"700\"><\/figure><div class=\"lt lu lv\"><picture><\/picture><\/div>\n<\/div><figcaption class=\"mi mj mk lt lu ml mm be b bf z dv\" data-selectable-paragraph=\"\">Photo by Andrea Piacquadio: <a class=\"af mn\" href=\"https:\/\/www.pexels.com\/photo\/collage-photo-of-woman-3812743\/\" target=\"_blank\" rel=\"noopener ugc nofollow\">https:\/\/www.pexels.com\/photo\/collage-photo-of-woman-3812743\/<\/a><\/figcaption><\/figure>\n<p id=\"f989\" class=\"pw-post-body-paragraph mo mp fo be b mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk fh bj\" data-selectable-paragraph=\"\">Computer vision is one of the most widely used and evolving fields of AI. It gives the computer the ability to observe and learn from visual data just like humans. 
In this process, the computer derives meaningful information from digital images, videos, and other visual data, and applies this learning to solving problems.<\/p>\n<h1 id=\"9daa\" class=\"nl nm fo be nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc od oe of og oh oi bj\" data-selectable-paragraph=\"\">Visual AI Emotion Recognition<\/h1>\n<p id=\"38f6\" class=\"pw-post-body-paragraph mo mp fo be b mq oj ms mt mu ok mw mx my ol na nb nc om ne nf ng on ni nj nk fh bj\" data-selectable-paragraph=\"\">Emotion recognition is one such derivative of computer vision: it analyzes, interprets, and classifies a person\u2019s emotional state using multimodal cues such as visual data, body language, and gestures.<\/p>\n<h2 id=\"0b5b\" class=\"oo nm fo be nn op oq or nr os ot ou nv my ov ow ox nc oy oz pa ng pb pc pd pe bj\" data-selectable-paragraph=\"\">Computer\u2019s perception of facial expressions<\/h2>\n<figure class=\"pg ph pi pj pk mb lt lu paragraph-image\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mg mh c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:531\/1*OjebslKaDvbIiELKlyjAKw.png\" alt=\"\" width=\"531\" height=\"238\"><\/figure>\n<figcaption class=\"mi mj mk lt lu ml mm be b bf z dv\" data-selectable-paragraph=\"\">Steps of facial emotion recognition [<a class=\"af mn\" href=\"https:\/\/edps.europa.eu\/system\/files\/2021-05\/21-05-26_techdispatch-facial-emotion-recognition_ref_en.pdf\" target=\"_blank\" rel=\"noopener ugc nofollow\">Source<\/a>]<\/figcaption>\n<\/figure>\n<p id=\"1fbf\" class=\"pw-post-body-paragraph mo mp 
fo be b mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk fh bj\" data-selectable-paragraph=\"\">Facial expressions and human emotions are tightly coupled. So, for machines to understand the meaning of each emotion, they must first excel at face detection and, in particular, facial landmark detection.<\/p>\n<figure class=\"pg ph pi pj pk mb lt lu paragraph-image\">\n<div class=\"mc md eb me bg mf\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mg mh c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*kP1AX4uWgDbwUuh8F8WkvQ.png\" alt=\"\" width=\"700\" height=\"195\"><\/figure>\n<\/div>\n<figcaption class=\"mi mj mk lt lu ml mm be b bf z dv\" data-selectable-paragraph=\"\">Modeling representations of facial expressions by detecting facial landmarks [<a class=\"af mn\" href=\"https:\/\/www.researchgate.net\/profile\/Seth-Pollak\" target=\"_blank\" rel=\"noopener ugc nofollow\">Source<\/a>]<\/figcaption>\n<\/figure>\n<p id=\"b7da\" class=\"pw-post-body-paragraph mo mp fo be b mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk fh bj\" data-selectable-paragraph=\"\">A model is first trained to identify regions of interest in visual images. It is then re-trained to derive patterns of facial expressions from negative and positive examples. 
Genetic algorithms [<a class=\"af mn\" href=\"https:\/\/www.researchgate.net\/publication\/330780352_Facial_Emotion_Recognition_Using_Computer_Vision\" target=\"_blank\" rel=\"noopener ugc nofollow\">1<\/a>] are one way to detect faces in a digital image, followed by the Eigenface technique to verify the fitness of the region of interest. All the valley regions are detected from the gray-scale image to derive high-level facial landmarks like eyes, eyebrows, lips etc.<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<blockquote class=\"pu\"><p id=\"cc55\" class=\"pv pw fo be px py pz qa qb qc qd nk dv\" data-selectable-paragraph=\"\">Curious to see how Comet works? <a class=\"af mn\" href=\"https:\/\/www.comet.com\/site\/blog\/debugging-your-machine-learning-models-with-comet-artifacts\/?utm_source=heartbeat&amp;utm_medium=referral&amp;utm_campaign=AMS_US_EN_AWA_heartbeat_CTA\" target=\"_blank\" rel=\"noopener ugc nofollow\">Check out our PetCam scenario to see MLOps in action<\/a>.<\/p><\/blockquote>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<h2 id=\"2401\" class=\"oo nm fo be nn op oq or nr os ot ou nv my ov ow ox nc oy oz pa ng pb pc pd pe bj\" data-selectable-paragraph=\"\">Applications of an emotion recognition system<\/h2>\n<p id=\"94d9\" class=\"pw-post-body-paragraph mo mp fo be b mq oj ms mt mu ok mw mx my ol na nb nc om ne nf ng on ni nj nk fh bj\" data-selectable-paragraph=\"\">Emotion AI has applications in a variety of fields. 
Some of these are highlighted below:<\/p>\n<figure class=\"pg ph pi pj pk mb lt lu paragraph-image\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mg mh c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:662\/1*0Ssjx60T5BGH4Hu_iJPV1g.jpeg\" alt=\"\" width=\"662\" height=\"457\"><\/figure>\n<figcaption class=\"mi mj mk lt lu ml mm be b bf z dv\" data-selectable-paragraph=\"\">Facial emotion recognition applications in different scenarios [<a class=\"af mn\" href=\"https:\/\/www.researchgate.net\/figure\/Facial-expression-recognition-application-in-various-scenarios_fig1_351798878\" target=\"_blank\" rel=\"noopener ugc nofollow\">Source<\/a>]<\/figcaption>\n<\/figure>\n<ol class=\"\">\n<li id=\"cb96\" class=\"mo mp fo be b mq mr ms mt mu mv mw mx my qf na nb nc qg ne nf ng qh ni nj nk qi qj qk bj\" data-selectable-paragraph=\"\"><strong class=\"be ql\">Personalized content and services<\/strong>: Provide personalized content and movie\/music recommendations based on the current emotion of the user.<\/li>\n<li id=\"f928\" class=\"mo mp fo be b mq qm ms mt mu qn mw mx my qo na nb nc qp ne nf ng qq ni nj nk qi qj qk bj\" data-selectable-paragraph=\"\"><strong class=\"be ql\">Customer behavior analysis<\/strong>: Learn from the customer\u2019s emotions and expressions while looking at products\/services. 
Also, some intelligent robots are deployed for behavior-specific advertising.<\/li>\n<li id=\"2e31\" class=\"mo mp fo be b mq qm ms mt mu qn mw mx my qo na nb nc qp ne nf ng qq ni nj nk qi qj qk bj\" data-selectable-paragraph=\"\"><strong class=\"be ql\">Healthcare<\/strong>: Nurse bots that use Emotion AI observe the patient\u2019s condition and converse with them during treatment to support their overall wellbeing.<\/li>\n<li id=\"ec3c\" class=\"mo mp fo be b mq qm ms mt mu qn mw mx my qo na nb nc qp ne nf ng qq ni nj nk qi qj qk bj\" data-selectable-paragraph=\"\"><strong class=\"be ql\">Public safety and crime control<\/strong>: Detect the state of a driver or their level of drowsiness and trigger an alert to keep the passengers safe.<\/li>\n<li id=\"cbff\" class=\"mo mp fo be b mq qm ms mt mu qn mw mx my qo na nb nc qp ne nf ng qq ni nj nk qi qj qk bj\" data-selectable-paragraph=\"\"><strong class=\"be ql\">Education<\/strong>: Learning prototypes that adapt to a child\u2019s mood and assess their concentration have proved effective in learning environments.<\/li>\n<\/ol>\n<h1 id=\"ade2\" class=\"nl nm fo be nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc od oe of og oh oi bj\" data-selectable-paragraph=\"\">How AI-based emotion analysis works<\/h1>\n<p id=\"55ab\" class=\"pw-post-body-paragraph mo mp fo be b mq oj ms mt mu ok mw mx my ol na nb nc om ne nf ng on ni nj nk fh bj\" data-selectable-paragraph=\"\">On a broader level, the process of emotion recognition can be divided into the following steps:<\/p>\n<figure class=\"pg ph pi pj pk mb lt lu paragraph-image\">\n<div class=\"mc md eb me bg mf\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mg mh c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*i8bSV2IBVxfFtprX.png\" alt=\"\" width=\"700\" height=\"305\"><\/figure>\n<\/div>\n<figcaption class=\"mi mj mk lt lu ml mm be b bf z dv\" data-selectable-paragraph=\"\">Workflow of an emotion recognition system [<a class=\"af mn\" href=\"https:\/\/www.mdpi.com\/2078-2489\/13\/6\/268\" target=\"_blank\" rel=\"noopener ugc nofollow\">Source<\/a>]<\/figcaption>\n<\/figure>\n<h2 id=\"147e\" class=\"oo nm fo be nn op oq or nr os ot ou nv my ov ow ox nc oy oz pa ng pb pc pd pe bj\" data-selectable-paragraph=\"\">Data collection<\/h2>\n<p id=\"f2f4\" class=\"pw-post-body-paragraph mo mp fo be b mq oj ms mt mu ok mw mx my ol na nb nc om ne nf ng on ni nj nk fh bj\" data-selectable-paragraph=\"\">Data is collected by breaking video down into sequences of image frames. Getting a large enough dataset that covers all emotions is difficult, if not impossible, but publicly available open <a class=\"af mn\" href=\"https:\/\/analyticsindiamag.com\/top-8-datasets-available-for-emotion-detection\/\" target=\"_blank\" rel=\"noopener ugc nofollow\">datasets<\/a> like AffectNet, Emotic, K-Emocon etc. 
are a good place to start.<\/p>\n<h2 id=\"a113\" class=\"oo nm fo be nn op oq or nr os ot ou nv my ov ow ox nc oy oz pa ng pb pc pd pe bj\" data-selectable-paragraph=\"\">Data preprocessing<\/h2>\n<p id=\"4a9b\" class=\"pw-post-body-paragraph mo mp fo be b mq oj ms mt mu ok mw mx my ol na nb nc om ne nf ng on ni nj nk fh bj\" data-selectable-paragraph=\"\">The image frames are preprocessed with techniques like cropping, rotating, resizing, color correction, image smoothing and noise correction to improve the quality of the feature vector and, in turn, the accuracy of the model.<\/p>\n<h2 id=\"93e3\" class=\"oo nm fo be nn op oq or nr os ot ou nv my ov ow ox nc oy oz pa ng pb pc pd pe bj\" data-selectable-paragraph=\"\">Training and classification<\/h2>\n<figure class=\"pg ph pi pj pk mb lt lu paragraph-image\">\n<div class=\"mc md eb me bg mf\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mg mh c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*TuocZ_Ubd50f7WEgDeuUPg.png\" alt=\"\" width=\"700\" height=\"334\"><\/figure>\n<\/div>\n<figcaption class=\"mi mj mk lt lu ml mm be b bf z dv\" data-selectable-paragraph=\"\">Face detection from an image using Python [Source: Author]<\/figcaption>\n<\/figure>\n<p id=\"b108\" class=\"pw-post-body-paragraph mo mp fo be b mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk fh bj\" data-selectable-paragraph=\"\">After pre-processing, we first detect the location of the face (as seen above). 
Then we detect the facial landmarks (as seen below).<\/p>\n<figure class=\"pg ph pi pj pk mb lt lu paragraph-image\">\n<div class=\"mc md eb me bg mf\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mg mh c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*H7pt_jZACfmDWJSK31OLRA.png\" alt=\"\" width=\"700\" height=\"348\"><\/figure>\n<\/div>\n<figcaption class=\"mi mj mk lt lu ml mm be b bf z dv\" data-selectable-paragraph=\"\">Facial landmark extraction that contributes to emotion analysis [Source: Author]<\/figcaption>\n<\/figure>\n<p id=\"d36c\" class=\"pw-post-body-paragraph mo mp fo be b mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk fh bj\" data-selectable-paragraph=\"\">The extracted facial features can be measured as Action Units (AUs), as distances between facial landmarks (such as eyebrow-raise distance or mouth opening), as gradient features, and more. Regions of interest that show the greatest expression changes contain rich facial information. 
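As an illustration, the landmark-distance features just mentioned (eyebrow raise, mouth opening) reduce to simple coordinate geometry. The landmark positions below are made up for the sketch; a real pipeline would obtain them from a landmark detector such as dlib's 68-point model:

```python
import math

def euclidean(p, q):
    """Distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical (x, y) landmark positions for a single face.
landmarks = {
    'left_eyebrow': (120, 80),
    'left_eye':     (122, 100),
    'mouth_top':    (150, 160),
    'mouth_bottom': (150, 185),
}

# Geometric features of the kind fed to an expression classifier.
eyebrow_raise = euclidean(landmarks['left_eyebrow'], landmarks['left_eye'])
mouth_opening = euclidean(landmarks['mouth_top'], landmarks['mouth_bottom'])
feature_vector = [eyebrow_raise, mouth_opening]
```

In practice such distances are normalized (for example by inter-ocular distance) so the features are invariant to face size in the frame.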
To extract the most important expression features from the facial feature map, an attention module like ECA-Net (<a class=\"af mn\" href=\"https:\/\/www.frontiersin.org\/articles\/10.3389\/fpsyg.2021.759485\/full#B28\" target=\"_blank\" rel=\"noopener ugc nofollow\">Wang et al., 2020<\/a>) can be integrated to give greater weight to the core features.<\/p>\n<figure class=\"pg ph pi pj pk mb lt lu paragraph-image\">\n<div class=\"mc md eb me bg mf\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg mg mh c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/0*CAaQXSoJCHZVVbxO.jpg\" alt=\"\" width=\"700\" height=\"420\"><\/figure>\n<\/div>\n<figcaption class=\"mi mj mk lt lu ml mm be b bf z dv\" data-selectable-paragraph=\"\">Schematic diagram of the overall framework of an emotion recognition system [<a class=\"af mn\" href=\"https:\/\/www.frontiersin.org\/articles\/10.3389\/fpsyg.2021.759485\/full\" target=\"_blank\" rel=\"noopener ugc nofollow\">Source<\/a>]<\/figcaption>\n<\/figure>\n<p id=\"2047\" class=\"pw-post-body-paragraph mo mp fo be b mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk fh bj\" data-selectable-paragraph=\"\">The models used for AI emotion recognition range from classical classifiers like Support Vector Machines (SVMs) to deep models like Convolutional Neural Networks (CNNs).<\/p>\n<p id=\"3914\" class=\"pw-post-body-paragraph mo mp fo be b mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk fh bj\" data-selectable-paragraph=\"\">The face detection model is later fine-tuned to classify the detected expression into labels like sadness, happiness, anger, 
neutral, fear, surprise etc.<\/p>\n<h1 id=\"8d2c\" class=\"nl nm fo be nn no np nq nr ns nt nu nv nw nx ny nz oa ob oc od oe of og oh oi bj\" data-selectable-paragraph=\"\">Summary<\/h1>\n<p id=\"edb8\" class=\"pw-post-body-paragraph mo mp fo be b mq oj ms mt mu ok mw mx my ol na nb nc om ne nf ng on ni nj nk fh bj\" data-selectable-paragraph=\"\">Facial expression recognition is a crucial component of human-computer interaction. The main goal of such a system is to determine emotions in real time by analyzing facial features such as the eyebrows, eyes, and mouth, and mapping them to a set of emotions such as anger, fear, surprise, sadness, and happiness. Face detection, feature extraction, and classification are the three key stages that make up the facial expression recognition process. Emotion recognition is being used in many domains, from healthcare to intelligent marketing, to autonomous vehicles, to education.<\/p>\n<p id=\"d0d7\" class=\"pw-post-body-paragraph mo mp fo be b mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj nk fh bj\" data-selectable-paragraph=\"\">Thanks for reading!<\/p>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Photo by Andrea Piacquadio: https:\/\/www.pexels.com\/photo\/collage-photo-of-woman-3812743\/ Computer vision is one of the most widely used and evolving fields of AI. It gives the computer the ability to observe and learn from visual data just like humans. In this process, the computer derives meaningful information from digital images, videos etc. and applies this learning to solving problems. 
Visual [&hellip;]<\/p>\n","protected":false},"author":53,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[6,7],"tags":[],"coauthors":[155],"class_list":["post-7621","post","type-post","status-publish","format-standard","hentry","category-machine-learning","category-tutorials"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>AI Emotion Recognition Using Computer Vision - Comet<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI Emotion Recognition Using Computer Vision\" \/>\n<meta property=\"og:description\" content=\"Photo by Andrea Piacquadio: https:\/\/www.pexels.com\/photo\/collage-photo-of-woman-3812743\/ Computer vision is one of the most widely used and evolving fields of AI. It gives the computer the ability to observe and learn from visual data just like humans. In this process, the computer derives meaningful information from digital images, videos etc. and applies this learning tosolving problems. 
Visual [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2023-09-22T20:22:31+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:13:50+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*UghBE0Itc0NhXkbbMPKMRA.jpeg\" \/>\n<meta name=\"author\" content=\"Pragati Baheti\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Pragati Baheti\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"AI Emotion Recognition Using Computer Vision - Comet","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/","og_locale":"en_US","og_type":"article","og_title":"AI Emotion Recognition Using Computer Vision","og_description":"Photo by Andrea Piacquadio: https:\/\/www.pexels.com\/photo\/collage-photo-of-woman-3812743\/ Computer vision is one of the most widely used and evolving fields of AI. It gives the computer the ability to observe and learn from visual data just like humans. In this process, the computer derives meaningful information from digital images, videos etc. and applies this learning tosolving problems. 
Visual [&hellip;]","og_url":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2023-09-22T20:22:31+00:00","article_modified_time":"2025-04-24T17:13:50+00:00","og_image":[{"url":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*UghBE0Itc0NhXkbbMPKMRA.jpeg","type":"","width":"","height":""}],"author":"Pragati Baheti","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Pragati Baheti","Est. reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/#article","isPartOf":{"@id":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/"},"author":{"name":"Pragati Baheti","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/54958874fd9a373469e70e19b6597439"},"headline":"AI Emotion Recognition Using Computer Vision","datePublished":"2023-09-22T20:22:31+00:00","dateModified":"2025-04-24T17:13:50+00:00","mainEntityOfPage":{"@id":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/"},"wordCount":787,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*UghBE0Itc0NhXkbbMPKMRA.jpeg","articleSection":["Machine Learning","Tutorials"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/","url":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/","name":"AI Emotion Recognition Using Computer Vision - 
Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/#primaryimage"},"image":{"@id":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/#primaryimage"},"thumbnailUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*UghBE0Itc0NhXkbbMPKMRA.jpeg","datePublished":"2023-09-22T20:22:31+00:00","dateModified":"2025-04-24T17:13:50+00:00","breadcrumb":{"@id":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/#primaryimage","url":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*UghBE0Itc0NhXkbbMPKMRA.jpeg","contentUrl":"https:\/\/miro.medium.com\/v2\/resize:fit:700\/1*UghBE0Itc0NhXkbbMPKMRA.jpeg"},{"@type":"BreadcrumbList","@id":"http:\/\/www.comet.com\/site\/blog\/ai-emotion-recognition-using-computer-vision\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"AI Emotion Recognition Using Computer Vision"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models 
Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/54958874fd9a373469e70e19b6597439","name":"Pragati Baheti","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/851362323c20d10f17041155fc07cae2","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/08\/1535716570395-96x96.jpg","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/08\/1535716570395-96x96.jpg","caption":"Pragati 
Baheti"},"url":"https:\/\/www.comet.com\/site\/blog\/author\/pragatibaheti001gmail-com\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/7621","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/53"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=7621"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/7621\/revisions"}],"predecessor-version":[{"id":15528,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/7621\/revisions\/15528"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=7621"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=7621"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=7621"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=7621"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}