Use Prompts to Solve Problems
This notebook shows various examples of using OnPrem.LLM to solve different tasks.
Setup the LLM instance
In this notebook, we will use the Llama-3.1-8B model from Meta. In particular, we will use Meta-Llama-3.1-8B-Instruct-GGUF. There are different instances of this model on the Hugging Face model hub, and we will use the one from LM Studio. When selecting a model that is different from the default ones in OnPrem.LLM, it is important to inspect the model’s home page and identify the correct prompt format. The prompt format for this model is located here, and we will supply it directly to the LLM constructor along with the URL to the specific model file we want (i.e., Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf). We will offload layers to our GPU(s) to speed up inference using the n_gpu_layers parameter. (For more information on GPU acceleration, see here.) For the purposes of this notebook, we also supply temperature=0 so that there is no variability in outputs. You can increase this value for more creativity in the outputs. Note that you can change the system prompt (i.e., “You are a super-intelligent helpful assistant…”) to fit your needs.
from onprem import LLM
import os

prompt_template = """<|start_header_id|>system<|end_header_id|>

You are a super-intelligent helpful assistant that executes instructions.<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""
llm = LLM(model_url='https://huggingface.co/lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf',
          prompt_template=prompt_template,
          n_gpu_layers=-1,
          temperature=0,
          verbose=False)
llama_new_context_with_model: n_ctx_per_seq (3904) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
Note that, if you supply the convenience parameter default_model='llama' to the LLM constructor, model_url and prompt_template are set automatically and do not need to be supplied as we did above.
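For example, a minimal sketch of the shortened call (carrying over the other settings used above):
# Shortcut sketch: default_model='llama' fills in model_url and
# prompt_template automatically.
llm = LLM(default_model='llama', n_gpu_layers=-1, temperature=0, verbose=False)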
Information Extraction
This is an example of zero-shot prompting:
= """Extract the names of people in the supplied sentences. Separate the names with commas.
prompt [Sentence]: I like Cillian Murphy's acting. Florence Pugh is great, too.
[People]:"""
= llm.prompt(prompt, stop=[]) saved_output
Cillian Murphy, Florence Pugh
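Since the model was instructed to separate names with commas, the raw string is easy to post-process; a minimal sketch, assuming saved_output holds the string shown above:
# Split the comma-separated output into a Python list of names.
names = [name.strip() for name in saved_output.split(',')]
# names == ['Cillian Murphy', 'Florence Pugh']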
A more complicated example of Information Extraction using few-shot prompting:
= """ Extract the Name, Current Position, and Current Company from each piece of Text.
prompt
Text: Alan F. Estevez serves as the Under Secretary of Commerce for Industry and Security. As Under Secretary, Mr. Estevez leads
the Bureau of Industry and Security, which advances U.S. national security, foreign policy, and economic objectives by ensuring an
effective export control and treaty compliance system and promoting U.S. strategic technology leadership.
A: Name: Alan F. Estevez | Current Position: Under Secretary | Current Company: Bureau of Industry and Security
Text: Pichai Sundararajan (born June 10, 1972[3][4][5]), better known as Sundar Pichai (/ˈsʊndɑːr pɪˈtʃaɪ/), is an Indian-born American
business executive.[6][7] He is the chief executive officer (CEO) of Alphabet Inc. and its subsidiary Google.[8]
A: Name: Sundar Pichai | Current Position: CEO | Current Company: Google
Now, provide the answer (A) from this Text:
Text: Norton Allan Schwartz (born December 14, 1951)[1] is a retired United States Air Force general[2] who served as the 19th Chief of Staff of the
Air Force from August 12, 2008, until his retirement in 2012.[3] He previously served as commander, United States Transportation Command from
September 2005 to August 2008. He is currently the president of the Institute for Defense Analyses, serving since January 2, 2020.[4]
A:"""
saved_output = llm.prompt(prompt, stop=[])
Name: Norton Allan Schwartz | Current Position: President | Current Company: Institute for Defense Analyses
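Because the answer follows the pipe-delimited "Field: value" format established by the examples, it can be parsed into a dictionary; a minimal sketch, assuming the output matches that format exactly:
# Parse "Name: ... | Current Position: ... | Current Company: ..." into a dict.
record = {}
for field in saved_output.split('|'):
    key, _, value = field.partition(':')
    record[key.strip()] = value.strip()
# record['Current Company'] == 'Institute for Defense Analyses'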
Resume Parsing
Resume parsing is an even more complex example of information extraction.
!wget https://arun.maiya.net/asmcv.pdf -O /tmp/cv.pdf
--2024-11-13 12:52:50-- https://arun.maiya.net/asmcv.pdf
Resolving arun.maiya.net (arun.maiya.net)... 185.199.109.153, 185.199.108.153, 185.199.111.153, ...
Connecting to arun.maiya.net (arun.maiya.net)|185.199.109.153|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 62791 (61K) [application/pdf]
Saving to: ‘/tmp/cv.pdf’
/tmp/cv.pdf 100%[===================>] 61.32K --.-KB/s in 0.002s
2024-11-13 12:52:51 (36.3 MB/s) - ‘/tmp/cv.pdf’ saved [62791/62791]
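If wget is not available on your system, the same download can be done in pure Python with the standard library:
# Pure-Python alternative to the wget cell above.
import urllib.request
urllib.request.urlretrieve('https://arun.maiya.net/asmcv.pdf', '/tmp/cv.pdf')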
from onprem.ingest import load_single_document
docs = load_single_document('/tmp/cv.pdf')
resume_text = docs[0].page_content # we'll only consider the first page of the CV as the "resume"
= """
prompt Analyze the resume below and extract the relevant details. Format the response in JSON according to the specified structure below.
Only return the JSON response, with no additional text or explanations.
Ensure to:
- Format the full name in proper case.
- Remove any spaces and country code from the contact number.
- Format dates as "dd-mm-yyyy" if given in a more complex format, or retain the year if only the year is present.
- Do not make up a phone number.
- Extract only the first two jobs for Work Experience.
Use the following JSON structure:
```json
{
"Personal Information": {
"Name": " ",
"Contact Number": " ",
"Address": " ",
"Email": " ",
"Date of Birth": " "
},
"Education": [
{
"Degree": " ",
"Institution": " ",
"Year": " "
},
// Additional educational qualifications in a similar format
],
"Work Experience": [
{
"Position": " ",
"Organization": " ",
"Duration": " ",
"Responsibilities": " "
},
// Additional work experiences in a similar format
],
"Skills": [
{
"Skills": " ", // e.g., Python, R, Java, statistics, quantitative psychology, applied mathematics, machine learning, gel electrophoresis
},
// A list of skills or fields that the person has experience with
],
}
```
Here is the text of the resume:
---RESUMETXT---
"""
json_string = llm.prompt(prompt.replace('---RESUMETXT---', resume_text))
{
"Personal Information": {
"Name": "Arun S. Maiya",
"Contact Number": "",
"Address": "",
"Email": "arun@maiya.net",
"Date of Birth": ""
},
"Education": [
{
"Degree": "Ph.D.",
"Institution": "University of Illinois at Chicago",
"Year": " "
},
{
"Degree": "M.S.",
"Institution": "DePaul University",
"Year": " "
},
{
"Degree": "B.S.",
"Institution": "University of Illinois at Urbana-Champaign",
"Year": " "
}
],
"Work Experience": [
{
"Position": "Research Leader",
"Organization": "Institute for Defense Analyses – Alexandria, VA USA",
"Duration": "2011-Present",
"Responsibilities": ""
},
{
"Position": "Researcher",
"Organization": "University of Illinois at Chicago",
"Duration": "2007-2011",
"Responsibilities": ""
}
],
"Skills": [
{
"Skills": "applied machine learning, data science, natural language processing (NLP), network science, computer vision"
}
]
}
Let’s convert the output to a Python dictionary:
import json
d = json.loads(json_string)
d.keys()
dict_keys(['Personal Information', 'Education', 'Work Experience', 'Skills'])
d['Personal Information']['Name']
'Arun S. Maiya'
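Nested fields are accessible the same way, e.g., the first work-experience entry:
d['Work Experience'][0]['Position']
'Research Leader'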
Structured Outputs
In the example above, we prompted the model to output results as a JSON string with some prompt engineering. The LLM.pydantic_prompt
method lets you more easily describe your desired output structure by defining a Pydantic model. In the example below, we ask the LLM to generate a joke in such a way that the setup and punchline are stored as distinct variables.
from pydantic import BaseModel, Field
class Joke(BaseModel):
str = Field(description="question to set up a joke")
setup: str = Field(description="answer to resolve the joke") punchline:
structured_output = llm.pydantic_prompt('Tell me a joke.', pydantic_model=Joke)
{
"setup": "Why don't scientists trust atoms?",
"punchline": "Because they make up everything!"
}
structured_output
Joke(setup="Why don't scientists trust atoms?", punchline='Because they make up everything!')
structured_output.setup
"Why don't scientists trust atoms?"
structured_output.punchline
'Because they make up everything!'
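The validated object can also be converted back to a plain dictionary if needed (model_dump is the Pydantic v2 API; use .dict() on Pydantic v1):
structured_output.model_dump()
{'setup': "Why don't scientists trust atoms?", 'punchline': 'Because they make up everything!'}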
Tip: The attempt_fix parameter allows you to have the LLM attempt to fix any malformed or incomplete outputs. The fix_llm parameter allows you to specify a different LLM to make the fix (the current LLM is used if fix_llm=None):
from langchain_openai import ChatOpenAI
structured_output = llm.pydantic_prompt('Tell me a joke.', pydantic_model=Joke,
                                        attempt_fix=True)
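The ChatOpenAI import above is only needed if you route fixes to a separate model. A hedged sketch, assuming fix_llm accepts a LangChain chat model (the model name here is illustrative):
fixer = ChatOpenAI(model='gpt-4o-mini')  # hypothetical fixer model
structured_output = llm.pydantic_prompt('Tell me a joke.', pydantic_model=Joke,
                                        attempt_fix=True, fix_llm=fixer)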
Theme Discovery
= """Please provide thematic coding for the following 20 survey responses to the question: "What did you notice about nature today?"
prompt
1. I noticed a family of ducks waddling through a nearby park this morning.
2. As I walked by a tree, I saw a flurry of feathers and realized a bird had just landed in its branches.
3. While driving, I observed a herd of deer gracefully moving through a meadow.
4. The sun's rays filtered through the leaves of trees, casting intricate patterns on the ground below.
5. As I stepped outside, a gentle breeze carried with it the fragrance of blooming flowers.
6. A butterfly fluttered past me as I was sitting in my garden, reminding me to enjoy life's simple pleasures.
7. The sound of birdsong filled the air as I walked through a park this afternoon.
8. I saw a group of ants working together to move a large pebble across the sidewalk.
9. A squirrel darted up a tree, leaving a trail of nuts behind it.
10. The leaves on the trees rustled as if whispering secrets in the wind.
11. While hiking, I noticed the way sunlight filtered through the canopy of trees, creating patterns on the forest floor below.
12. A dragonfly landed on a nearby pond, dipping its long legs into the water to drink.
13. The chirping of crickets filled the air as I walked past a field this evening.
14. The sky transformed from shades of blue to orange and red as the sun began to set.
15. As the day came to a close, I watched as fireflies danced among the trees.
16. A group of geese honked in unison as they flew overhead this afternoon.
17. The way a butterfly's wings looked like delicate stained glass as it perched on a flower.
18. The way the sun's rays seemed to bathe everything around me in a warm, golden light.
19. As I walked by a field, I saw a group of rabbits darting through the tall grass.
20. The way the dew on spider webs sparkled like diamonds in the morning sunlight
"""
saved_output = llm.prompt(prompt, stop=[])
After analyzing the 20 survey responses, I have identified several thematic codes that capture the essence of what respondents noticed about nature. Here are the thematic codes:
**Code 1: Wildlife Observations (6 responses)**
* Examples:
+ "I saw a family of ducks waddling through a nearby park this morning."
+ "A squirrel darted up a tree, leaving a trail of nuts behind it."
**Code 2: Natural Beauty and Patterns (7 responses)**
* Examples:
+ "The sun's rays filtered through the leaves of trees, casting intricate patterns on the ground below."
+ "The dew on spider webs sparkled like diamonds in the morning sunlight"
**Code 3: Sounds and Music of Nature (4 responses)**
* Examples:
+ "The sound of birdsong filled the air as I walked through a park this afternoon."
+ "The chirping of crickets filled the air as I walked past a field this evening."
**Code 4: Movement and Activity in Nature (3 responses)**
* Examples:
+ "A group of geese honked in unison as they flew overhead this afternoon."
+ "As I watched, a group of ants worked together to move a large pebble across the sidewalk."
These thematic codes provide a framework for understanding the common themes and patterns that emerged from the survey responses.
Grammar Correction
= """Here are some examples.
prompt [Sentence]:
I love goin to the beach.
[Correction]: I love going to the beach.
[Sentence]:
Let me hav it!
[Correction]: Let me have it!
[Sentence]:
It have too many drawbacks.
[Correction]: It has too many drawbacks.
What is the correction for the following sentence?
[Sentence]:
I do not wan to go
[Correction]:"""
saved_output = llm.prompt(prompt, stop=[])
I do not want to go.
Classification
= """Classify each sentence as either positive, negative, or neutral. Here are some examples.
prompt [Sentence]: I love going to the beach.
[[Classification]: Positive
[Sentence]: It is 10am right now.
[Classification]: Neutral
[Sentence]: I just got fired from my job.
[Classification]: Negative
What is the classification for the following sentence? Answer with either Positive or Negative only.
[Sentence]: The reactivity of your team has been amazing, thanks!
[Classification]:"""
saved_output = llm.prompt(prompt, stop=['\n'])
Positive
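The same few-shot pattern can be wrapped in a reusable helper; a quick sketch (this wrapper is illustrative, not part of OnPrem.LLM) that reuses the prompt defined above:
# Hypothetical wrapper: substitute any sentence into the few-shot template
# above by swapping out the final example sentence.
def classify(sentence):
    template = prompt.replace(
        'The reactivity of your team has been amazing, thanks!', sentence)
    return llm.prompt(template, stop=['\n']).strip()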
Paraphrasing
= """Paraphrase the following text delimited by triple backticks using a single sentence.
prompt ```After a war lasting 20 years, following the decision taken first by President Trump and then by President Biden to withdraw American troops, Kabul, the capital of Afghanistan, fell within a few hours to the Taliban, without resistance.```
"""
saved_output = llm.prompt(prompt)
After a 20-year war, Kabul fell to the Taliban within hours after US troops withdrew under decisions made by Presidents Trump and Biden.
Few-Shot Answer Extraction
= """ Compelte the correct answer based on the Context. Answer should be a short word or phrase from Context.
prompt [Question]: When was NLP Cloud founded?
[Context]: NLP Cloud was founded in 2021 when the team realized there was no easy way to reliably leverage Natural Language Processing in production.
[Answer]: 2021
[Question]: What did NLP Cloud develop?
[Context]: NLP Cloud developed their API by mid-2020 and they added many pre-trained open-source models since then.
[Answer]: API
[Question]: When can plans be stopped?
[Context]: All plans can be stopped anytime. You only pay for the time you used the service. In case of a downgrade, you will get a discount on your next invoice.
[Answer]: Anytime
[Question]: Which plan is recommended for GPT-J?
[Context]: The main challenge with GPT-J is memory consumption. Using a GPU plan is recommended.
[Answer]:"""
saved_output = llm.prompt(prompt, stop=['\n\n'])
GPU plan
Generating Product Descriptions
= """Generate a short Sentence from the Keywords. Here are some examples.
prompt [Keywords]: shoes, women, $59
[Sentence]: Beautiful shoes for women at the price of $59.
[Keywords]: trousers, men, $69
[Sentence]: Modern trousers for men, for $69 only.
[Keywords]: gloves, winter, $19
[Sentence]: Amazingly hot gloves for cold winters, at $19.
Generate a sentence for the following Keywords and nothing else:
[Keywords]: t-shirt, men, $39
[Sentence]:"""
saved_output = llm.prompt(prompt, stop=[])
A comfortable t-shirt for men, available at $39.
Tweet Generation
= """Generate a tweet based on the supplied Keyword. Here are some examples.
prompt [Keyword]:
markets
[Tweet]:
Take feedback from nature and markets, not from people
###
[Keyword]:
children
[Tweet]:
Maybe we die so we can come back as children.
###
[Keyword]:
startups
[Tweet]:
Startups should not worry about how to put out fires, they should worry about how to start them.
Generate a Tweet for the following keyword and nothing else:
###
[Keyword]:
climate change
[Tweet]:"""
saved_output = llm.prompt(prompt)
The climate is not changing, it's us who are changing the climate.
Generating an Email Draft
= """Generate an email introducing Tesla to shareholders."""
prompt = llm.prompt(prompt) saved_output
Here is a draft email introducing Tesla to shareholders:
Subject: Welcome to Tesla, Inc.
Dear valued shareholder,
I am thrilled to introduce you to Tesla, Inc., the pioneering electric vehicle and clean energy company. As a shareholder, you are part of our mission to accelerate the world's transition to sustainable energy.
At Tesla, we are committed to pushing the boundaries of innovation and sustainability. Our products and services include:
* Electric vehicles: We design, manufacture, and sell electric vehicles that are not only environmentally friendly but also technologically advanced.
* Energy storage: Our energy storage products, such as the Powerwall and Powerpack, enable homeowners and businesses to store excess energy generated by their solar panels or other renewable sources.
* Solar energy: We design, manufacture, and install solar panel systems for residential and commercial customers.
As a shareholder, you are part of our journey towards a sustainable future. I invite you to explore our website and social media channels to learn more about our products and services.
Thank you for your support and trust in Tesla, Inc.
Sincerely,
[Your Name]
Tesla, Inc.
Note: This is just a draft email and may not be suitable for actual use.
Talk to Your Documents
"./tests/sample_data/") llm.ingest(
Appending to existing vectorstore at /home/amaiya/onprem_data/vectordb
Loading documents from ./sample_data/
Loading new documents: 100%|██████████████████████| 1/1 [00:16<00:00, 16.09s/it]
Loaded 1 new documents from ./sample_data/
Split into 12 chunks of text (max. 500 chars each)
Creating embeddings. May take some minutes...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6.05it/s]
Ingestion complete! You can now query your documents using the LLM.ask or LLM.chat methods
= llm.ask("What is ktrain?") result
Based on the provided context, ktrain is a tool that automates various aspects of the machine learning (ML) workflow. However, unlike traditional automation tools, ktrain also allows users to make choices and decisions that best fit their unique application requirements.
In essence, ktrain uses automation to augment and complement human engineers, rather than attempting to entirely replace them.
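The result returned by LLM.ask contains more than the answer text; a sketch for inspecting the retrieved passages, assuming the source documents are stored under the 'source_documents' key as described in the OnPrem.LLM documentation:
# List the source passages the answer above was grounded on.
for doc in result.get('source_documents', []):
    print(doc.metadata.get('source'))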
Pro-Tip: You can try different models or re-phrase the question/prompts, which may provide better performance for certain tasks. For instance, by supplying default_model='zephyr' to the LLM constructor and leaving model_url blank, the default Zephyr-7B-beta model will be used; it also performs well on the above tasks. If no arguments are supplied to LLM, the default Mistral-7B-v0.2 model is used, as shown in this example Google Colab notebook of OnPrem.LLM.
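A minimal sketch of that tip (GPU and temperature settings carried over from the setup above):
# Use the default Zephyr-7B-beta model instead of Llama-3.1-8B.
llm = LLM(default_model='zephyr', n_gpu_layers=-1, temperature=0)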