November 8, 2024 · 1 min read

A Local LLM as a ChatGPT Alternative for an Enhanced Workflow

No generative AI product delivers perfect results every time. There is always room for improvement, especially when responses need to be tailored to a specific client.

So we deployed Llama-3 locally for our AI sentiment analysis tool, both to boost output quality and to eliminate data privacy concerns.

MAIN CHALLENGES

  • Data privacy concerns
  • High costs for cloud LLM
  • Inconsistent response quality
  • Preparing the right fine-tuning dataset

WHAT WE DID

  • Deployed a private Llama-3 LLM using Python and PyTorch
  • Ensured data privacy by keeping all processing local
  • Enhanced the AI sentiment analysis tool for better output quality
  • Prepared and integrated datasets of good and bad responses for fine-tuning
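The dataset step above can be sketched as follows. This is a minimal illustration of pairing good and bad responses into preference-style fine-tuning records; the field names, prompt template, and JSONL format are assumptions for illustration, not the exact pipeline we used:

```python
import json

def build_finetune_records(examples):
    """Turn labeled (text, good_response, bad_response) examples into
    JSONL-ready records: the good response is the target to imitate,
    the bad response is kept as a rejected output for preference tuning."""
    records = []
    for ex in examples:
        records.append({
            "prompt": f"Classify the sentiment of this review:\n{ex['text']}",
            "chosen": ex["good_response"],   # response the model should produce
            "rejected": ex["bad_response"],  # response the model should avoid
        })
    return records

# Hypothetical sample data:
examples = [
    {"text": "The support team resolved my issue in minutes.",
     "good_response": "positive",
     "bad_response": "neutral"},
]

for rec in build_finetune_records(examples):
    print(json.dumps(rec))
```

Records in this chosen/rejected shape can feed directly into common preference-tuning trainers, which is one reason to keep both the good and the bad responses rather than discarding the bad ones.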

RESULTS

  • Improved Response Quality: the fine-tuned Llama-3 8B model achieved a quality score of 382
  • Cost Efficiency: Reduced processing cost to $0.005 per 1,000 tokens
  • Enhanced Performance: used the 8-billion-parameter model to handle complex responses
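The cost figure above translates into workload estimates directly. A small sketch, assuming only the $0.005 per 1,000 tokens rate reported here (the example workload size is hypothetical):

```python
COST_PER_1K_TOKENS = 0.005  # USD per 1,000 tokens, as reported above

def processing_cost(total_tokens: int) -> float:
    """Estimated processing cost in USD at the local deployment's rate."""
    return total_tokens / 1000 * COST_PER_1K_TOKENS

# e.g. a 2-million-token monthly workload:
print(f"${processing_cost(2_000_000):.2f}")  # → $10.00
```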

 

Visit the GreenM website to learn more →

 
