November 8, 2024 · 1 min read

Local LLM aka ChatGPT for the enhanced workflow

by GreenM

Out of the box, no generative AI product delivers perfect results every time. There is always room for improvement and for more client-specific responses.

So we deployed the Llama-3 LLM locally for our AI sentiment analysis tool to boost output quality and eliminate data privacy concerns.
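As a rough illustration of what local inference looks like, here is a minimal sketch using Hugging Face `transformers`. The model ID, prompt wording, and helper names are assumptions for illustration, not GreenM's actual implementation.

```python
# A minimal sketch of local Llama-3 inference for sentiment analysis.
# MODEL_ID, the prompt text, and function names are assumptions.

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated model on Hugging Face

def build_sentiment_prompt(text: str) -> str:
    """Wrap a client message in a sentiment-classification instruction."""
    return (
        "Classify the sentiment of the following message as "
        "positive, negative, or neutral.\n\n"
        f"Message: {text}\nSentiment:"
    )

def classify_locally(text: str, max_new_tokens: int = 5) -> str:
    """Generate a label with a locally loaded model, so no data leaves the host."""
    # Imported lazily so the prompt helper works without transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(build_sentiment_prompt(text), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
```

Because both the weights and the client data stay on the host, nothing is sent to a third-party API.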

MAIN CHALLENGES

  • Data privacy concerns
  • High costs of cloud-hosted LLMs
  • Inconsistent response quality
  • Preparing the right fine-tuning dataset

WHAT WE DID

  • Deployed a private Llama-3 LLM using Python and PyTorch
  • Ensured data privacy by keeping all processing local
  • Enhanced our AI sentiment analysis tool for better output quality
  • Prepared and integrated datasets of good and bad responses for fine-tuning

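The good/bad response pairs mentioned above can be organized into preference records, the shape used by preference-based fine-tuning methods such as DPO. The field names and file format here are assumptions, not GreenM's actual schema.

```python
# Sketch: turn (prompt, good response, bad response) triples into
# preference records for fine-tuning. Field names are assumptions.
import json

def build_preference_records(rows):
    """rows: iterable of (prompt, good_response, bad_response) tuples."""
    return [
        {"prompt": prompt, "chosen": good, "rejected": bad}
        for prompt, good, bad in rows
    ]

def dump_jsonl(records, path):
    """Write one JSON record per line, the format most trainers accept."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

A JSONL file of such records can then be fed to a preference-tuning trainer to teach the model which response style to prefer.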
RESULTS

  • Improved Response Quality: Achieved a quality score of 382 with the fine-tuned Llama-3 8B model
  • Cost Efficiency: Reduced processing cost to $0.005 per 1,000 tokens
  • Enhanced Performance: Ran the 8-billion-parameter model to handle complex responses
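At the reported rate of $0.005 per 1,000 tokens, processing cost scales linearly with volume. A trivial sketch of the arithmetic (the function name is illustrative):

```python
def processing_cost(n_tokens: int, rate_per_1k: float = 0.005) -> float:
    """Cost in USD at the reported $0.005 per 1,000 tokens."""
    return n_tokens / 1000 * rate_per_1k
```

For example, a million tokens works out to about $5 at this rate.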

 

Visit the GreenM website to learn more →

 
