{"id":9117,"date":"2024-02-12T06:00:50","date_gmt":"2024-02-12T14:00:50","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=9117"},"modified":"2025-05-08T10:19:27","modified_gmt":"2025-05-08T10:19:27","slug":"learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists\/","title":{"rendered":"Learning Path to Building LLM-Based Solutions\u200a-\u200aFor Practitioner Data Scientists"},"content":{"rendered":"\n<p class=\"graf graf--p\">As everyone would agree, the advent of LLMs has transformed the technology industry, and interest in learning about them has surged.<\/p>\n\n\n\n<p class=\"graf graf--p\">Within a short time, technologies around LLMs have evolved considerably, and different learning streams now cater to learners&#8217; varying needs. This article primarily captures my understanding and the learning path I followed to build LLM-based solutions.<\/p>\n\n\n\n<figure class=\"wp-block-image graf graf--figure\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/800\/1*bcWretvLzZNTIQjBN0nXsQ.jpeg\" alt=\"graphic of two people looking at a path to building an LLM\"\/><figcaption class=\"wp-element-caption\">Image <a class=\"markup--anchor markup--figure-anchor\" href=\"https:\/\/www.flexiquiz.com\/blog\/create-a-custom-learning-path\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/www.flexiquiz.com\/blog\/create-a-custom-learning-path\">source<\/a><\/figcaption><\/figure>\n\n\n\n<p>&nbsp;<\/p>\n\n\n\n<p class=\"graf graf--p\">This article is most useful for practitioner data scientists who want to get into the LLM field and expand their scope of skills.<\/p>\n\n\n\n<p class=\"graf graf--p\">So, let&#8217;s get started\u2026<\/p>\n\n\n\n<h4 class=\"wp-block-heading graf graf--h4\">Proprietary vs Open-Source&nbsp;LLMs<\/h4>\n\n\n\n<p class=\"graf 
graf--p\">Though OpenAI&#8217;s ChatGPT is the leader among LLMs and has revolutionized the industry with its offerings, open-source LLM ecosystems are rapidly evolving and are approaching proprietary LLMs in terms of performance.<\/p>\n\n\n\n<p class=\"graf graf--p\">Open-source LLMs, especially for data scientists, give a broader scope to learn and apply new things. The best resource to monitor the current status of open-source LLMs is the Hugging Face Open LLM Leaderboard, where open-source LLMs are ranked based on various evaluations.<\/p>\n\n\n\n<p class=\"graf graf--p\"><a class=\"markup--anchor markup--mixtapeEmbed-anchor\" title=\"https:\/\/huggingface.co\/spaces\/HuggingFaceH4\/open_llm_leaderboard\" href=\"https:\/\/huggingface.co\/spaces\/HuggingFaceH4\/open_llm_leaderboard\" data-href=\"https:\/\/huggingface.co\/spaces\/HuggingFaceH4\/open_llm_leaderboard\"><strong class=\"markup--strong markup--mixtapeEmbed-strong\">Open LLM Leaderboard &#8211; a Hugging Face Space by HuggingFaceH4<\/strong><\/a><\/p>\n\n\n\n<p class=\"graf graf--p\">In this article, the learning path I explain is primarily associated with building large language model solutions based on open-source LLMs.<\/p>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">Broad Streams of LLM Applications<\/h3>\n\n\n\n<p class=\"graf graf--p\">Though LLM applications are vast, we can broadly categorize them into the following streams:<\/p>\n\n\n\n<h4 class=\"wp-block-heading graf graf--h4\"><strong class=\"markup--strong markup--h4-strong\"><em class=\"markup--em markup--h4-em\">Prompt Engineering:&nbsp;<\/em><\/strong><\/h4>\n\n\n\n<p class=\"graf graf--p\">This is the most basic and widely applicable one. Here, we primarily work with proprietary large language models such as ChatGPT. 
This is about learning the best way to compose prompt messages so that LLMs give you the most appropriate answer.<\/p>\n\n\n\n<p class=\"graf graf--p\">Data scientists may not have much scope here, as it is primarily about learning how to best use available chat-based LLM solutions.<\/p>\n\n\n\n<h4 class=\"wp-block-heading graf graf--h4\"><strong class=\"markup--strong markup--h4-strong\"><em class=\"markup--em markup--h4-em\">LangChain Integrations:&nbsp;<\/em><\/strong><\/h4>\n\n\n\n<p class=\"graf graf--p\">LangChain is a framework built to interface large language models with other technologies. Say you have a use case that requires you to feed input from your database to a large language model\u200a\u2014\u200athen you would need LangChain for integration. LangChain is very comprehensive, and its applications are evolving rapidly.<\/p>\n\n\n\n<figure class=\"wp-block-embed graf graf--p\"><div class=\"wp-block-embed__wrapper\">\nhttps:\/\/python.langchain.com\/docs\/get_started\/introduction.html\n<\/div><\/figure>\n\n\n\n<p class=\"graf graf--p\">Again, data scientists have limited scope here. LangChain is primarily used by engineers who build enterprise-scale solutions leveraging LLM outputs.<\/p>\n\n\n\n<h4 class=\"wp-block-heading graf graf--h4\">Fine-tuning LLMs:<\/h4>\n\n\n\n<p class=\"graf graf--p\">LLM fine-tuning is one of the most exciting areas, where we curate a dataset specific to our needs and tune the LLM models built by the providers. 
This has a broad scope of learning for data scientists, and it&#8217;s necessary to have a good grasp of the following concepts to excel here.<\/p>\n\n\n\n<p class=\"graf graf--p\"><strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">&nbsp;\u2192 Hugging Face Text Generation Pipeline: <\/em><\/strong>Hugging Face has become synonymous with large models, and they have built amazing libraries to aid the fine-tuning of pre-trained models.<\/p>\n\n\n\n<p class=\"graf graf--p\">Also, please note that large language models are causal language models: they generate responses by predicting, one token at a time, the most likely continuation of the text. So, it is essential to have a good grasp of training a causal language model from scratch, and the article below is beneficial:<\/p>\n\n\n\n<p class=\"graf graf--p\"><a class=\"markup--anchor markup--mixtapeEmbed-anchor\" title=\"https:\/\/huggingface.co\/learn\/nlp-course\/chapter7\/6\" href=\"https:\/\/huggingface.co\/learn\/nlp-course\/chapter7\/6\" data-href=\"https:\/\/huggingface.co\/learn\/nlp-course\/chapter7\/6\"><strong class=\"markup--strong markup--mixtapeEmbed-strong\">Training a causal language model from scratch &#8211; Hugging Face NLP Course<\/strong><\/a><\/p>\n\n\n\n<p class=\"graf graf--p\">&nbsp;\u2192 <strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">PEFT, LoRA, QLoRA concepts<\/em><\/strong>: Fine-tuning LLM models is not as straightforward as the &#8216;transfer learning&#8217; that we do with other models. 
Since we must deal with parameters at the billion scale, we must employ a more sophisticated process based on PEFT (Parameter-Efficient Fine-Tuning) concepts.<\/p>\n\n\n\n<p class=\"graf graf--p\">The following videos and articles were very useful for learning about the mentioned concepts:<\/p>\n\n\n\n<p><strong class=\"markup--strong markup--mixtapeEmbed-strong\"><a class=\"markup--anchor markup--mixtapeEmbed-anchor\" title=\"https:\/\/huggingface.co\/blog\/peft\" href=\"https:\/\/huggingface.co\/blog\/peft\" data-href=\"https:\/\/huggingface.co\/blog\/peft\">Parameter-Efficient Fine-Tuning using \ud83e\udd17 PEFT<\/a><\/strong><\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"QLoRA PEFT Walkthrough! Hyperparameters Explained, Dataset Requirements, and Comparing Repo&#039;s.\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/8vmWGX1nfNM?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>&nbsp;<\/p>\n\n\n\n<p class=\"graf graf--p\">\u2192 <strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">Quantization: <\/em><\/strong>Quantization helps fine-tune massive LLMs on a single GPU with minimal loss of performance. 
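<\/p>\n\n\n\n<p class=\"graf graf--p\">As a rough sketch of how these pieces fit together (the model name and hyperparameters below are illustrative, assuming the Hugging Face transformers, bitsandbytes, and peft libraries), a QLoRA-style setup loads the base model in 4-bit and attaches small trainable LoRA adapters:<\/p>\n\n\n\n<pre class=\"graf graf--pre\">import torch\nfrom transformers import AutoModelForCausalLM, BitsAndBytesConfig\nfrom peft import LoraConfig, get_peft_model\n\n# 4-bit NF4 quantization config (QLoRA-style)\nbnb_config = BitsAndBytesConfig(\n    load_in_4bit=True,\n    bnb_4bit_quant_type='nf4',\n    bnb_4bit_compute_dtype=torch.bfloat16,\n)\n\n# Base model name is illustrative; pick any causal LM you have access to\nmodel = AutoModelForCausalLM.from_pretrained(\n    'meta-llama\/Llama-2-7b-hf',\n    quantization_config=bnb_config,\n    device_map='auto',\n)\n\n# Attach LoRA adapters; the frozen 4-bit base model stays intact\nlora_config = LoraConfig(task_type='CAUSAL_LM', r=16, lora_alpha=32, lora_dropout=0.05)\nmodel = get_peft_model(model, lora_config)\nmodel.print_trainable_parameters()  # only a tiny fraction of weights are trainable<\/pre>\n\n\n\n<p class=\"graf graf--p\">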
Again, Hugging Face has published excellent articles to understand the nuances of quantization in detail, as listed below:<\/p>\n\n\n\n<p class=\"graf graf--p\"><a class=\"markup--anchor markup--mixtapeEmbed-anchor\" title=\"https:\/\/huggingface.co\/blog\/hf-bitsandbytes-integration\" href=\"https:\/\/huggingface.co\/blog\/hf-bitsandbytes-integration\" data-href=\"https:\/\/huggingface.co\/blog\/hf-bitsandbytes-integration\"><strong class=\"markup--strong markup--mixtapeEmbed-strong\">A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using transformers<\/strong><\/a><\/p>\n\n\n\n<div class=\"graf graf--mixtapeEmbed\">\n<p><a class=\"markup--anchor markup--mixtapeEmbed-anchor\" title=\"https:\/\/huggingface.co\/blog\/4bit-transformers-bitsandbytes\" href=\"https:\/\/huggingface.co\/blog\/4bit-transformers-bitsandbytes\" data-href=\"https:\/\/huggingface.co\/blog\/4bit-transformers-bitsandbytes\"><strong class=\"markup--strong markup--mixtapeEmbed-strong\">Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA<\/strong><\/a><\/p>\n<\/div>\n\n\n\n<p class=\"graf graf--p\">\u2192 <strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">Instruction Dataset<\/em><\/strong>: To fine-tune large language models properly, we require high-quality datasets in an instructional format.<\/p>\n\n\n\n<p class=\"graf graf--p\">If you observe open LLMs such as Llama, Falcon, etc.\u200a\u2014\u200athey are released in two versions: a base version and an instruct version.<\/p>\n\n\n\n<p class=\"graf graf--p\">The base version is trained on a massive open text corpus at internet scale, usually for multiple months on a GPU farm. 
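<\/p>\n\n\n\n<p class=\"graf graf--p\">Regarding the instructional format mentioned above: a single Alpaca-style record is just three fields. The content below is an invented example, not taken from the Alpaca dataset:<\/p>\n\n\n\n<pre class=\"graf graf--pre\">{\n  &quot;instruction&quot;: &quot;Summarize the following support ticket in one sentence.&quot;,\n  &quot;input&quot;: &quot;Customer reports that exported CSV files are missing the header row.&quot;,\n  &quot;output&quot;: &quot;A customer reports that CSV exports omit the header row.&quot;\n}<\/pre>\n\n\n\n<p class=\"graf graf--p\">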
These base version models are foundational to building LLMs toward specific needs.<\/p>\n\n\n\n<p class=\"graf graf--p\">Instruct versions are built from base versions using high-quality instructional datasets, where quality takes precedence over quantity. We can build high-performance instruct models with an instruction dataset of even ~20K records.<\/p>\n\n\n\n<p class=\"graf graf--p\">Alpaca is the de facto standard for this format and contains around ~52K records spanning domains.<\/p>\n\n\n\n<p class=\"graf graf--p\"><a class=\"markup--anchor markup--mixtapeEmbed-anchor\" title=\"https:\/\/huggingface.co\/datasets\/tatsu-lab\/alpaca\" href=\"https:\/\/huggingface.co\/datasets\/tatsu-lab\/alpaca\" data-href=\"https:\/\/huggingface.co\/datasets\/tatsu-lab\/alpaca\"><strong class=\"markup--strong markup--mixtapeEmbed-strong\">tatsu-lab\/alpaca \u00b7 Datasets at Hugging Face<\/strong><\/a><\/p>\n\n\n\n<div class=\"graf graf--mixtapeEmbed\">So, to fine-tune the model for our business use case\u200a\u2014\u200awe need to curate high-quality datasets in the Alpaca format.<\/div>\n\n\n\n<p class=\"graf graf--p\">Also, please note that base versions are recommended for fine-tuning.<\/p>\n\n\n\n<h4 class=\"wp-block-heading graf graf--h4\">Retrieval Augmented Generation (RAG):<\/h4>\n\n\n\n<p class=\"graf graf--p\">This is a more straightforward application of large language models, but it still offers a good scope of learning for data scientists.<\/p>\n\n\n\n<p class=\"graf graf--p\">Here, we leverage foundational models to build a RAG solution, where the LLM responds by summarizing the information associated with the query from your content.<\/p>\n\n\n\n<p class=\"graf graf--p\">Vector databases such as Chroma, Pinecone, etc., are widely employed here to identify the content specific to your query from your database, and the identified content is summarized by the LLM as a response to the query.<\/p>\n\n\n\n<p class=\"graf 
graf--p\"><a class=\"markup--anchor markup--mixtapeEmbed-anchor\" title=\"https:\/\/towardsdatascience.com\/build-industry-specific-llms-using-retrieval-augmented-generation-af9e98bb6f68\" href=\"https:\/\/towardsdatascience.com\/build-industry-specific-llms-using-retrieval-augmented-generation-af9e98bb6f68\" data-href=\"https:\/\/towardsdatascience.com\/build-industry-specific-llms-using-retrieval-augmented-generation-af9e98bb6f68\"><strong class=\"markup--strong markup--mixtapeEmbed-strong\">Build Industry-Specific LLMs Using Retrieval Augmented Generation<\/strong><\/a><\/p>\n\n\n\n<p class=\"graf graf--p\">RAG-based applications have broad real-world applicability, as they&#8217;re simple and straightforward to build.<\/p>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">Summary<\/h3>\n\n\n\n<p class=\"graf graf--p\">Thus, we have seen the various sources associated with mastering large language model concepts in detail. Again, this path may be most applicable for practicing data scientists\u200a\u2014\u200aas foundational DS knowledge is required to understand the ideas above. This is the summary of the learning path I followed, and I hope it will be helpful for my fellow practitioners.<\/p>\n\n\n\n<p class=\"graf graf--p\">Please follow my handle to learn about other insightful concepts associated with LLMs and DS.<\/p>\n\n\n\n<p class=\"graf graf--p\">Thanks! Happy Learning!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As everyone would agree, the advent of LLMs has transformed the technology industry, and interest in learning about them has surged. Within a short time, technologies around LLMs have evolved considerably, and different learning streams now cater to learners&#8217; varying needs. 
This article primarily captures my understanding and my [&hellip;]<\/p>\n","protected":false},"author":118,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[65],"tags":[],"coauthors":[215],"class_list":["post-9117","post","type-post","status-publish","format-standard","hentry","category-llmops"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Learning Path to Building LLM-Based Solutions<\/title>\n<meta name=\"description\" content=\"Learn how to build an LLM based solution, in this tutorial catered toward practitioning data scientists. Read more.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Learning Path to Building LLM-Based Solutions\u200a-\u200aFor Practitioner Data Scientists\" \/>\n<meta property=\"og:description\" content=\"Learn how to build an LLM based solution, in this tutorial catered toward practitioning data scientists. 
Read more.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2024-02-12T14:00:50+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-05-08T10:19:27+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/cdn-images-1.medium.com\/max\/800\/1*bcWretvLzZNTIQjBN0nXsQ.jpeg\" \/>\n<meta name=\"author\" content=\"Vasanthkumar Velayudham\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Vasanthkumar Velayudham\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Learning Path to Building LLM-Based Solutions","description":"Learn how to build an LLM based solution, in this tutorial catered toward practitioning data scientists. Read more.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists","og_locale":"en_US","og_type":"article","og_title":"Learning Path to Building LLM-Based Solutions\u200a-\u200aFor Practitioner Data Scientists","og_description":"Learn how to build an LLM based solution, in this tutorial catered toward practitioning data scientists. 
Read more.","og_url":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2024-02-12T14:00:50+00:00","article_modified_time":"2025-05-08T10:19:27+00:00","og_image":[{"url":"https:\/\/cdn-images-1.medium.com\/max\/800\/1*bcWretvLzZNTIQjBN0nXsQ.jpeg","type":"","width":"","height":""}],"author":"Vasanthkumar Velayudham","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Vasanthkumar Velayudham","Est. reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists\/"},"author":{"name":"Vasanthkumar Velayudham","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/eafd525d6dca0813f547b678598af186"},"headline":"Learning Path to Building LLM-Based Solutions\u200a-\u200aFor Practitioner Data 
Scientists","datePublished":"2024-02-12T14:00:50+00:00","dateModified":"2025-05-08T10:19:27+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists\/"},"wordCount":1019,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists#primaryimage"},"thumbnailUrl":"https:\/\/cdn-images-1.medium.com\/max\/800\/1*bcWretvLzZNTIQjBN0nXsQ.jpeg","articleSection":["LLMOps"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists\/","url":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists","name":"Learning Path to Building LLM-Based Solutions","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists#primaryimage"},"thumbnailUrl":"https:\/\/cdn-images-1.medium.com\/max\/800\/1*bcWretvLzZNTIQjBN0nXsQ.jpeg","datePublished":"2024-02-12T14:00:50+00:00","dateModified":"2025-05-08T10:19:27+00:00","description":"Learn how to build an LLM based solution, in this tutorial catered toward practitioning data scientists. 
Read more.","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists#primaryimage","url":"https:\/\/cdn-images-1.medium.com\/max\/800\/1*bcWretvLzZNTIQjBN0nXsQ.jpeg","contentUrl":"https:\/\/cdn-images-1.medium.com\/max\/800\/1*bcWretvLzZNTIQjBN0nXsQ.jpeg"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/learning-path-to-building-llm-based-solutions-for-practitioner-data-scientists#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Learning Path to Building LLM-Based Solutions\u200a-\u200aFor Practitioner Data Scientists"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, 
Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/eafd525d6dca0813f547b678598af186","name":"Vasanthkumar Velayudham","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/0785408c3140acca85d8c056c56593fd","url":"https:\/\/secure.gravatar.com\/avatar\/2fbaa1f3b0905bc155c13a020f05668f43a66aa55a401d4e335355c501bdccb1?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/2fbaa1f3b0905bc155c13a020f05668f43a66aa55a401d4e335355c501bdccb1?s=96&d=mm&r=g","caption":"Vasanthkumar 
Velayudham"},"url":"https:\/\/www.comet.com\/site\/blog\/author\/vvk-victorygmail-com\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/9117","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/118"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=9117"}],"version-history":[{"count":3,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/9117\/revisions"}],"predecessor-version":[{"id":15870,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/9117\/revisions\/15870"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=9117"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=9117"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=9117"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=9117"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}