This project uses Groq, a platform that provides fast inference for many openly available LLM models. Groq runs models on its LPU (Language Processing Unit), a combined hardware and software stack built for fast, efficient inference; its high memory bandwidth lets it outperform GPUs on inference workloads. Through Groq Cloud you can create an API key and call the open-source models it hosts.
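As a minimal sketch of how a Groq Cloud call is put together (the model id, key handling, and helper name here are illustrative assumptions, not taken from this project):

```python
import os

def build_request(prompt, history=None, model="llama-3.1-8b-instant"):
    """Assemble a chat payload for Groq's OpenAI-compatible chat API.

    `model` is an example Groq model id; any hosted model works.
    """
    messages = list(history or []) + [{"role": "user", "content": prompt}]
    return {"model": model, "messages": messages}

# With the `groq` package installed and an API key created on Groq Cloud,
# the actual call is roughly (not executed here):
#   from groq import Groq
#   client = Groq(api_key=os.environ["GROQ_API_KEY"])
#   reply = client.chat.completions.create(**build_request("Hello"))
```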
We use Streamlit session state variables to store conversation memory across script reruns.
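A small sketch of that pattern (the `messages` key and helper name are assumptions): Streamlit reruns the whole script on every user interaction, and `st.session_state` is the dict-like store whose values survive those reruns.

```python
def append_turn(state, role, content):
    """Append one chat turn to a dict-like store such as st.session_state."""
    state.setdefault("messages", []).append({"role": role, "content": content})
    return state["messages"]

# Inside app.py (run via `streamlit run`), the same helper works on the
# real session state, which persists between reruns:
#   import streamlit as st
#   append_turn(st.session_state, "user", prompt)
```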
Steps to run
- Clone the project
- pip install -r requirements.txt
- streamlit run app.py (replace app.py with your app's filename)
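The steps above as one setup fragment, with the Groq API key exported before launch (repo URL and key are placeholders; the env var name `GROQ_API_KEY` is what the official Groq client reads by default):

```shell
git clone <your-repo-url>
cd <repo-dir>
pip install -r requirements.txt
# Key created on Groq Cloud (console.groq.com)
export GROQ_API_KEY="<your-key>"
streamlit run app.py
```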