Deploy local transformers on a Windows 10 laptop

Setting up a large language model (LLM) on your Windows 10 laptop involves several steps. Below, I’ll guide you through the process of downloading and setting up a popular LLM using Python. For this example, we’ll use GPT-2, a well-known LLM by OpenAI. We will use the transformers library from Hugging Face, which provides easy access to various pre-trained models.

Step 1: Install Python

Ensure you have Python installed on your system. You can download the latest version of Python from the official Python website. Follow the installation instructions for Windows.
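Before going further, you can sanity-check the interpreter from a short script. The 3.8 floor below is an assumption based on typical transformers requirements; check the library's release notes for the exact minimum.

```python
import sys

def version_ok(version_info, floor=(3, 8)):
    """Return True if the running Python meets the assumed minimum version."""
    return tuple(version_info[:2]) >= floor

major, minor = sys.version_info[:2]
print(f"Python {major}.{minor} detected")
if not version_ok(sys.version_info):
    print("Consider upgrading Python before installing transformers.")
```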

Step 2: Install Git (optional)

Git is not strictly necessary, but it’s helpful for cloning repositories. You can download Git from the official Git website.

Step 3: Install Virtual Environment (optional but recommended)

It’s a good practice to create a virtual environment to manage dependencies. Open a Command Prompt and run:

python -m venv myenv

Activate the virtual environment:

  • For Command Prompt: myenv\Scripts\activate
  • For PowerShell: .\myenv\Scripts\Activate.ps1
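To confirm the environment is actually active, a quick check from Python itself works in either shell; this relies on the standard venv behaviour that sys.prefix points into the environment while sys.base_prefix still points at the system installation.

```python
import sys

def in_virtualenv() -> bool:
    """Return True when running inside an activated virtual environment."""
    return sys.prefix != sys.base_prefix

print("Virtual environment active:", in_virtualenv())
print("Interpreter prefix:", sys.prefix)
```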

Step 4: Install Necessary Libraries

Install the transformers and torch libraries if they are not already installed (pip itself ships with Python). Run the following commands in your Command Prompt or PowerShell:

pip install transformers
pip install torch
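As a quick sanity check before moving on, a short stdlib-only script can confirm both packages are importable; importlib.util.find_spec locates a module without actually importing it.

```python
import importlib.util

def is_installed(package: str) -> bool:
    """Return True if the package can be found in the current environment."""
    return importlib.util.find_spec(package) is not None

for pkg in ("transformers", "torch"):
    status = "OK" if is_installed(pkg) else "missing - run: pip install " + pkg
    print(f"{pkg}: {status}")
```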

Step 5: Download and Set Up the Model

Create a Python script to download and interact with the GPT-2 model. Open a text editor and create a file named gpt2_setup.py. Add the following code to the file:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load pre-trained model and tokenizer
model_name = "gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

# Save model and tokenizer locally
model.save_pretrained("./gpt2")
tokenizer.save_pretrained("./gpt2")

print("Model and tokenizer saved locally.")
Save the file, then run it:

python gpt2_setup.py

This will download the GPT-2 model and tokenizer and save them to the ./gpt2 directory.
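If you want to verify the download, a small listing helps. Note that the exact filenames vary with the transformers version (the weights may be pytorch_model.bin or model.safetensors), so config.json is used here as the marker file; treat this as a sketch.

```python
import os

def saved_files(model_dir: str):
    """List the files save_pretrained() wrote, or [] if the directory is absent."""
    if not os.path.isdir(model_dir):
        return []
    return sorted(os.listdir(model_dir))

files = saved_files("./gpt2")
if "config.json" in files:
    print("Model directory looks complete:", files)
else:
    print("Run gpt2_setup.py first; found:", files)
```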

You may run into an OpenSSL 1.1.1 compatibility issue, in which case pin urllib3 to a 1.x release:

pip install urllib3==1.26.6

Step 6: Using the Model

Create another script to interact with the model. Name it gpt2_run.py and add the following code:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the model and tokenizer from the local directory
model_name = "./gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

# Generate text
def generate_text(prompt, max_length=50):
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs, max_length=max_length, num_return_sequences=1)
    text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return text

# Example usage
prompt = "Once upon a time"
generated_text = generate_text(prompt)
print("Generated Text: ", generated_text)
Save the file, then run it:

python gpt2_run.py

You should see the model generating text based on the provided prompt.

Step 7: Additional Configuration (optional)

For more advanced usage, you may want to tweak the model parameters or explore different models provided by Hugging Face. Refer to the Hugging Face Transformers documentation for more details.
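For instance, switching from greedy decoding to sampling often improves open-ended text. Below is a sketch of building keyword arguments for model.generate(); the parameter names (do_sample, temperature, top_k, top_p, max_length) are standard generate() options in transformers, while the values are only illustrative starting points, not recommendations.

```python
def sampling_config(creative: bool = True, max_length: int = 100) -> dict:
    """Build keyword arguments for model.generate()."""
    if creative:
        # Sampling: more varied output; temperature below 1.0 tones randomness down.
        return {
            "do_sample": True,
            "temperature": 0.9,
            "top_k": 50,
            "top_p": 0.95,
            "max_length": max_length,
        }
    # Greedy decoding: deterministic, but often repetitive on open-ended prompts.
    return {"do_sample": False, "max_length": max_length}

# Usage with the model loaded in gpt2_run.py:
# outputs = model.generate(inputs, **sampling_config(creative=True))
```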

By following these steps, you will have successfully set up and run a large language model on your Windows 10 laptop. If you encounter any issues, make sure to check the documentation or seek help from relevant online communities.

Parasa Kiran – This writeup is intended as a reference for using a local LLM instead of relying on cloud infrastructure. There is always a benefit in running the model locally and moulding it to your own use case.
