{"id":8750,"date":"2024-01-17T06:00:07","date_gmt":"2024-01-17T14:00:07","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=8750"},"modified":"2025-04-24T17:03:31","modified_gmt":"2025-04-24T17:03:31","slug":"exploring-the-power-of-llama-2-using-streamlit","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/exploring-the-power-of-llama-2-using-streamlit\/","title":{"rendered":"Exploring the Power of Llama 2 Using Streamlit"},"content":{"rendered":"\n<figure class=\"wp-block-image graf graf--figure\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/800\/1*CQwd-w276OvM2TNMwdE3tw.jpeg\" alt=\"llama\"\/><figcaption class=\"wp-element-caption\">Photo by <a class=\"markup--anchor markup--figure-anchor\" href=\"https:\/\/www.pexels.com\/photo\/photo-of-an-animal-2499435\/\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/www.pexels.com\/photo\/photo-of-an-animal-2499435\/\">Trace&nbsp;Hudson<\/a><\/figcaption><\/figure>\n\n\n\n<h4 class=\"wp-block-heading graf graf--h4\">Introduction<\/h4>\n\n\n\n<p class=\"graf graf--p\">2023 has been significant for large language models. Many advancements have been made since ChatGPT, including open-source and licensed models. The recent release of Llama 2 by Meta and Microsoft has taken the AI world by storm. An open-source model that is just as good as GPT 3.5 or even GPT 4? We\u2019re yet to know.<\/p>\n\n\n\n<p class=\"graf graf--p\">In this article, we will explore the power of Llama 2 using <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/streamlit.io\/\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/streamlit.io\/\">Streamlit<\/a>.<\/p>\n\n\n\n<h4 class=\"wp-block-heading graf graf--h4\">Understanding Llama&nbsp;v2<\/h4>\n\n\n\n<p class=\"graf graf--p\">Llama 2, a natural language processing library from Meta, consists of pre-trained and fine-tuned large language models (LLMs) whose scale ranges from 7 billion to 70 billion parameters. The Llama 2-Chat is optimized to support human conversation use cases. It is a suitable alternative for closed models since it outperforms the open-source chat models on most tested benchmarks. 
We'll explore how Streamlit provides an intuitive user interface while Llama 2 supplies the sophisticated language understanding, together yielding a powerful conversational AI application.

#### Setting Up Streamlit

The first step in setting up a Streamlit app is to create a virtual environment. We will use the **Pipenv** tool as follows:

```bash
pipenv shell
```

Next, install the libraries needed to build our chatbot:

```bash
pipenv install streamlit replicate
```

**Streamlit** is an open-source Python framework for building and deploying data science apps.

**Replicate** is a cloud platform that hosts large machine learning models behind a simple API, so they can be used without managing any infrastructure.
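Before wiring Replicate into a UI, it can help to see the API call in isolation. Here is a minimal sketch, assuming `REPLICATE_API_TOKEN` is already exported in your shell (the next section shows how to obtain it); the model version string is the 13B chat endpoint used later in this tutorial:

```python
import replicate

# Stream a completion from a hosted Llama 2 chat model.
# The replicate client reads REPLICATE_API_TOKEN from the environment.
output = replicate.run(
    "a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    input={"prompt": "User: What is Streamlit?\n\nAssistant: ", "temperature": 0.75},
)
print("".join(output))  # the output arrives as an iterator of text chunks
```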
#### Accessing Llama 2 API Token

Before you can access Replicate's token key, you must register an account on [Replicate](https://replicate.com/signin?next=/docs).

> **Note**: You must have a [GitHub](https://github.com/) account to sign in to Replicate.

Search for Llama 2 chat on the Replicate dashboard.

![Llama 2 chat](https://cdn-images-1.medium.com/max/800/1*r80xT9GsNpZorbzpsrFgCg.png)

*Replicate Dashboard*

Click on the **llama-2-70b-chat** model to view the Llama 2 API endpoints, then click the API button in the model's navbar.

Next, on the right side of the page, click the **Python** button to see the API token for Python applications.

![screenshot of Llama 2](https://cdn-images-1.medium.com/max/800/1*fXr6GAQkcn-CBFkeBfteGA.png)

Copy the **REPLICATE_API_TOKEN** and keep it safe for later use.

#### Building A Chatbot

First, create a Python file called *chatbot.py* and a *.env* file. We will write our code in chatbot.py and store our secret keys and API tokens in the *.env* file.

In chatbot.py, import the libraries as follows:

```python
import streamlit as st
import os
import replicate
```

Next, set the global variables for the Llama 2 model endpoints:

```python
# Global variables
REPLICATE_API_TOKEN = os.environ.get('REPLICATE_API_TOKEN', '')
REPLICATE_MODEL_ENDPOINTS = {
    'LLaMA2-7B': os.environ.get('REPLICATE_MODEL_ENDPOINT7B', ''),
    'LLaMA2-13B': os.environ.get('REPLICATE_MODEL_ENDPOINT13B', ''),
    'LLaMA2-70B': os.environ.get('REPLICATE_MODEL_ENDPOINT70B', ''),
}
```

In the *.env* file, add the Replicate token and model endpoints in the following format:

```
# 📁 .env -----
REPLICATE_API_TOKEN='Your_Replicate_Token'
REPLICATE_MODEL_ENDPOINT7B='a16z-infra/llama7b-v2-chat:4f0a4744c7295c024a1de15e1a63c880d3da035fa1f49bfd344fe076074c8eea'
REPLICATE_MODEL_ENDPOINT13B='a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5'
REPLICATE_MODEL_ENDPOINT70B='replicate/llama70b-v2-chat:e951f18578850b652510200860fc4ea62b3b16fac280f83ff32282f87bbd2e48'
```
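One caveat: `os.environ.get` only sees these values if the *.env* file is actually loaded. Pipenv loads *.env* automatically when you launch the app through `pipenv shell` or `pipenv run`; if you run the script some other way, a small sketch using python-dotenv (an extra dependency, not part of the original setup) loads it explicitly:

```python
# Optional: load .env explicitly when not running under pipenv.
# Requires: pipenv install python-dotenv
from dotenv import load_dotenv

load_dotenv()  # copies KEY=value pairs from .env into os.environ
```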
Create a pre-prompt for the Llama 2 model. We will write a simple prompt that frames the model as an assistant:

```python
# Set pre-prompt
PRE_PROMPT = ("You are a helpful assistant. You do not respond as 'User' or "
              "pretend to be 'User'. You only respond once as Assistant.")
```

Set up the page configuration for our chatbot as follows:

```python
# Set initial page configuration
st.set_page_config(page_title="LLaMA2 Chatbot", page_icon=":left_speech_bubble:", layout="wide")
```

Write a function to handle the chatbot functionality. It renders the stored chat history, collects new user input, flattens the dialogue into a single prompt string, and streams the model's reply chunk by chunk. (The `debounce_replicate_run` helper is defined in utils.py later in this tutorial.)

```python
def render_app():
    # Set up containers
    response_container = st.container()
    container = st.container()

    # Set up session state variables
    st.session_state.setdefault('chat_dialogue', [])
    st.session_state.setdefault('llm', REPLICATE_MODEL_ENDPOINTS['LLaMA2-70B'])
    st.session_state.setdefault('temperature', 0.1)
    st.session_state.setdefault('top_p', 0.9)
    st.session_state.setdefault('max_seq_len', 512)
    st.session_state.setdefault('pre_prompt', PRE_PROMPT)
    st.session_state.setdefault('string_dialogue', '')

    # Display chat history
    for message in st.session_state.chat_dialogue:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

    # User input
    if prompt := st.chat_input("Ask a question here to LLaMA2"):
        st.session_state.chat_dialogue.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.markdown(prompt)

        # Assistant response
        with st.chat_message("assistant"):
            message_placeholder = st.empty()
            full_response = ""
            # Flatten the pre-prompt plus the chat history into one prompt string
            string_dialogue = st.session_state['pre_prompt']
            for dict_message in st.session_state.chat_dialogue:
                speaker = "User" if dict_message["role"] == "user" else "Assistant"
                string_dialogue += f"{speaker}: {dict_message['content']}\n\n"
            output = debounce_replicate_run(
                st.session_state['llm'],
                string_dialogue + "Assistant: ",
                st.session_state['max_seq_len'],
                st.session_state['temperature'],
                st.session_state['top_p'],
                REPLICATE_API_TOKEN
            )
            # Stream the response into the placeholder as chunks arrive
            for item in output:
                full_response += item
                message_placeholder.markdown(full_response + "▌")
            message_placeholder.markdown(full_response)
        st.session_state.chat_dialogue.append({"role": "assistant", "content": full_response})
```
<span class=\"hljs-keyword\">not<\/span> <span class=\"hljs-keyword\">in<\/span> st.session_state:\n        st.session_state[<span class=\"hljs-string\">'max_seq_len'<\/span>] = <span class=\"hljs-number\">512<\/span>\n    <span class=\"hljs-keyword\">if<\/span> <span class=\"hljs-string\">'pre_prompt'<\/span> <span class=\"hljs-keyword\">not<\/span> <span class=\"hljs-keyword\">in<\/span> st.session_state:\n        st.session_state[<span class=\"hljs-string\">'pre_prompt'<\/span>] = PRE_PROMPT\n    <span class=\"hljs-keyword\">if<\/span> <span class=\"hljs-string\">'string_dialogue'<\/span> <span class=\"hljs-keyword\">not<\/span> <span class=\"hljs-keyword\">in<\/span> st.session_state:\n        st.session_state[<span class=\"hljs-string\">'string_dialogue'<\/span>] = <span class=\"hljs-string\">''<\/span>\n\n    <span class=\"hljs-comment\">#Dropdown menu to select the model edpoint:<\/span>\n    selected_option = st.sidebar.selectbox(<span class=\"hljs-string\">'Choose a LLaMA2 model:'<\/span>, [<span class=\"hljs-string\">'LLaMA2-70B'<\/span>, <span class=\"hljs-string\">'LLaMA2-13B'<\/span>, <span class=\"hljs-string\">'LLaMA2-7B'<\/span>], key=<span class=\"hljs-string\">'model'<\/span>)\n    <span class=\"hljs-keyword\">if<\/span> selected_option == <span class=\"hljs-string\">'LLaMA2-7B'<\/span>:\n        st.session_state[<span class=\"hljs-string\">'llm'<\/span>] = <span class=\"hljs-string\">'LLaMA2-7B'<\/span>\n    <span class=\"hljs-keyword\">elif<\/span> selected_option == <span class=\"hljs-string\">'LLaMA2-13B'<\/span>:\n        st.session_state[<span class=\"hljs-string\">'llm'<\/span>] = <span class=\"hljs-string\">'LLaMA2-13B'<\/span>\n    <span class=\"hljs-keyword\">else<\/span>:\n        st.session_state[<span class=\"hljs-string\">'llm'<\/span>] = <span class=\"hljs-string\">'LLaMA2-70B'<\/span>\n    <span class=\"hljs-comment\">#Model hyper parameters:<\/span>\n    st.session_state[<span class=\"hljs-string\">'temperature'<\/span>] = st.sidebar.slider(<span class=\"hljs-string\">'Temperature:'<\/span>, min_value=<span class=\"hljs-number\">0.01<\/span>, max_value=<span class=\"hljs-number\">5.0<\/span>, value=<span class=\"hljs-number\">0.1<\/span>, step=<span class=\"hljs-number\">0.01<\/span>)\n    st.session_state[<span class=\"hljs-string\">'top_p'<\/span>] = st.sidebar.slider(<span class=\"hljs-string\">'Top P:'<\/span>, min_value=<span class=\"hljs-number\">0.01<\/span>, max_value=<span class=\"hljs-number\">1.0<\/span>, value=<span class=\"hljs-number\">0.9<\/span>, step=<span class=\"hljs-number\">0.01<\/span>)\n    st.session_state[<span class=\"hljs-string\">'max_seq_len'<\/span>] = st.sidebar.slider(<span class=\"hljs-string\">'Max Sequence Length:'<\/span>, min_value=<span class=\"hljs-number\">64<\/span>, max_value=<span class=\"hljs-number\">4096<\/span>, value=<span class=\"hljs-number\">2048<\/span>, step=<span class=\"hljs-number\">8<\/span>)\n\n    NEW_P = st.sidebar.text_area(<span class=\"hljs-string\">'Prompt before the chat starts. 
Next, call the *render_app()* function:

```python
def main():
    render_app()

if __name__ == "__main__":
    main()
```

Once done, add a *utils.py* file. In utils.py, create a debounce function that keeps rapid-fire requests from causing poor performance, double activations, or user frustration:

```python
import replicate
import time

# Initialize debounce variables
last_call_time = 0
debounce_interval = 2  # debounce interval in seconds

def debounce_replicate_run(llm, prompt, max_len, temperature, top_p, API_TOKEN):
    global last_call_time
    print("last call time: ", last_call_time)

    # Get the current time
    current_time = time.time()

    # Calculate the time elapsed since the last call
    elapsed_time = current_time - last_call_time

    # If the elapsed time is shorter than the debounce interval, refuse the call
    if elapsed_time < debounce_interval:
        print("Debouncing")
        return "Hello! You are sending requests too fast. Please wait a few seconds before sending another request."

    # Update the last call time to the current time
    last_call_time = time.time()

    # The caller already appends "Assistant: " to the prompt, so pass it through as-is
    output = replicate.run(
        llm,
        input={"prompt": prompt, "max_length": max_len, "temperature": temperature,
               "top_p": top_p, "repetition_penalty": 1},
        api_token=API_TOKEN,
    )
    return output
```

In this code, the *debounce_replicate_run* function implements a debounce mechanism that prevents frequent, excessive API queries from a user's input.

The function makes the API call through the **Replicate** library and tracks time intervals with the **time** module. It accepts the API call parameters: the model endpoint, prompt, maximum length, temperature, top_p, and API token. The debounce logic compares the time elapsed since the most recent API call against a predetermined interval.

If the elapsed time is shorter than the debounce interval, the function returns a message telling the user that requests are being sent too quickly and to wait a few seconds. Otherwise, it updates the last call time and forwards the request to the model.
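To see the debounce in action outside Streamlit, here is a quick illustrative sketch (prompt values are for demonstration only, and the endpoints are assumed to be loaded from *.env*):

```python
import os
from utils import debounce_replicate_run

TOKEN = os.environ.get('REPLICATE_API_TOKEN', '')
MODEL = os.environ.get('REPLICATE_MODEL_ENDPOINT70B', '')

# The first call goes through to Replicate and returns a streaming iterator.
out1 = debounce_replicate_run(MODEL, "User: Hi\n\nAssistant: ", 512, 0.1, 0.9, TOKEN)
# A second call within 2 seconds is refused and returns the warning string.
out2 = debounce_replicate_run(MODEL, "User: Hi again\n\nAssistant: ", 512, 0.1, 0.9, TOKEN)
print("".join(out2))
```

Both return shapes are iterable, which is why the streaming loop in *render_app()* can treat them uniformly.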
Back in the chatbot.py file, import the *debounce_replicate_run* function from utils as follows:

```python
from utils import debounce_replicate_run
```

After that, you can run your application using the command below:

```bash
streamlit run chatbot.py
```

Expected results:

![Llama 2 and Streamlit screenshot](https://cdn-images-1.medium.com/max/800/1*B6DHfkWQjtT-JEu81sOeUw.png)

*Llama 2 and Streamlit app screenshot*

#### Conclusion

In this tutorial, you learned how to explore the power of Llama 2 using Streamlit. Compared with GPT-3.5, the popular closed-source model trained on data up to 2021, Llama 2 is trained on more recent data, with a pretraining cutoff in September 2022. This makes the model a capable rival to some of the closed models on the market, advancing the development of AI applications at lower cost.