Monday, December 25, 2023

Things to Note When Selecting GenAI and LLM Models

Availability of Proprietary LLM Models: If the platform provider offers state-of-the-art models with high accuracy, then half the battle is won.

Availability of Larger and Smaller Models from the Same Vendor: Larger models are awesome, but they are expensive and slower to respond. So any platform we choose needs to offer a portfolio of models, both smaller ones (for quick and simple use cases) and large ones (state of the art, as mentioned above).
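To illustrate why that tiered portfolio matters, here is a minimal sketch of routing between a small and a large model based on prompt complexity. The two call_* functions and the routing heuristic are hypothetical placeholders, not any particular platform's API.

```python
# Minimal sketch: route simple prompts to a cheaper/faster model and complex
# ones to a larger model. The two call_* functions are hypothetical
# placeholders for whatever client SDK the platform exposes.

def call_small_model(prompt: str) -> str:
    raise NotImplementedError("wire this to the platform's small/fast model")

def call_large_model(prompt: str) -> str:
    raise NotImplementedError("wire this to the platform's large/SOTA model")

def route(prompt: str) -> str:
    # Naive heuristic: short, single-question prompts go to the small model;
    # long or multi-part prompts go to the large one.
    is_simple = len(prompt) < 300 and prompt.count("?") <= 1
    return call_small_model(prompt) if is_simple else call_large_model(prompt)
```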

Availability of Embedding Models: Be it document generation or a RAG-based solution, everything needs embeddings. So any platform that provides highly accurate and performant embedding models becomes a key contender for the choice of platform.
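As a minimal sketch of why embeddings sit at the heart of RAG, the snippet below embeds a few documents and a query with the open source sentence-transformers library (chosen only as an illustration; a platform's managed embedding API would play the same role) and ranks documents by cosine similarity.

```python
# Minimal RAG-style retrieval sketch using an open source embedding model.
# A managed embedding service from the platform would serve the same purpose.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, widely used embedding model

docs = [
    "Employees get 20 days of paid leave per year.",
    "The data center is located in region East US 2.",
    "Expense reports must be filed within 30 days.",
]
query = "How many vacation days do I get?"

doc_vecs = model.encode(docs, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

# Rank documents by cosine similarity to the query.
scores = util.cos_sim(query_vec, doc_vecs)[0]
best = scores.argmax().item()
print(docs[best])  # expected: the paid-leave document
```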

Availability of a Playground: The choice of algorithms, search strategies, prompting techniques, etc. all need trial and error. Hence, the platform should provide a playground area where users can quickly test and analyze the results of various trials.
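When a built-in playground is missing, that trial and error ends up being scripted by hand, roughly like the sketch below, which compares a zero-shot and a few-shot prompt variant side by side. The OpenAI SDK is used only as an example; the model name and prompts are illustrative assumptions.

```python
# Sketch: comparing two prompt variants programmatically, the kind of loop a
# good playground UI saves you from writing. Model name and prompts are
# illustrative; any hosted LLM API could stand in here.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompts = {
    "zero_shot": "Classify the sentiment of: 'The onboarding flow was confusing.'",
    "few_shot": (
        "Classify sentiment.\n"
        "Review: 'Loved the dashboard.' -> positive\n"
        "Review: 'Support never replied.' -> negative\n"
        "Review: 'The onboarding flow was confusing.' ->"
    ),
}

for name, prompt in prompts.items():
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(name, "->", resp.choices[0].message.content)
```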

RAI and Content Moderation Services: Responsible AI and content moderation are not afterthoughts any more; with large language models they have become ever more important. So any platform that offers out-of-the-box services and capabilities for RAI is winning the platform battle for sure.
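As one concrete example of an out-of-the-box moderation service, the sketch below screens user input with the OpenAI moderation endpoint before it reaches the model; Azure AI Content Safety and similar services expose comparable APIs.

```python
# Sketch: screening user input with a hosted moderation endpoint before it
# ever reaches the LLM. The OpenAI moderation API is used here only as an
# example of an out-of-the-box RAI service.
from openai import OpenAI

client = OpenAI()

def is_safe(text: str) -> bool:
    result = client.moderations.create(input=text).results[0]
    return not result.flagged

user_input = "Tell me about your refund policy."
if is_safe(user_input):
    print("Safe to forward to the model.")
else:
    print("Blocked by content moderation.")
```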

General Availability of Other Cognitive Services: The biggest hurdle in GenAI application design and implementation is still the data engineering. The ability to extract and process unstructured content through a variety of services like OCR, text-to-speech, audio/video, etc. is critical to the choice of platform.
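To make the data engineering point concrete, here is a minimal OCR sketch using the open source Tesseract engine via pytesseract; managed services such as Azure AI Document Intelligence or Google Document AI play the same role at scale. The file path is a placeholder.

```python
# Sketch: pulling text out of a scanned page so it can be chunked and embedded.
# Uses open source Tesseract (via pytesseract) as a stand-in for a platform's
# managed OCR/document-intelligence service. "scan.png" is a placeholder path.
from PIL import Image
import pytesseract

page = Image.open("scan.png")
text = pytesseract.image_to_string(page)
print(text[:500])  # first 500 characters of the extracted text
```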


Access to Open Source Models: Should we use pre-trained models or fine-tuned ones? The answer is "both". So a platform that makes it easy to access and consume open source models as well as pre-trained ones will have better traction.
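A quick sketch of how easy open source model consumption can be: the Hugging Face transformers pipeline loads a small pre-trained model in a few lines. GPT-2 is used here only because it is small and ungated; a platform catalog would expose Llama-class models in the same way.

```python
# Sketch: consuming an open source pre-trained model directly from the
# Hugging Face Hub. GPT-2 is used only because it is small and ungated;
# gated models such as Llama 2 follow the same pattern once access is granted.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("Selecting an LLM platform means", max_new_tokens=30)
print(out[0]["generated_text"])
```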


OOB Application Implementation Services: This is more about 3rd-party services (like copilots, etc.). Any platform that provides out-of-the-box services and software for application build-out will win the day.

A/B Testing Framework: We all need to make decisions. Which model is better in quality of response, which one has a better SLA, which one is most cost effective, and so forth. Platforms that provide a framework/service to help make that decision for enterprise applications have a huge advantage over others.
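In the absence of a built-in A/B testing service, that comparison tends to get scripted by hand, roughly like the sketch below. The model callables, the quality scorer, and the cost-per-call figures are all hypothetical placeholders.

```python
# Sketch: a hand-rolled A/B comparison of two model endpoints on quality,
# latency, and cost. The callables, scorer, and cost numbers are hypothetical
# placeholders for whatever the platform exposes.
import time
from statistics import mean

def score_quality(answer: str) -> float:
    # Placeholder: in practice this might be an LLM-as-judge or human rating.
    return float(len(answer) > 0)

def ab_test(models: dict, prompts: list) -> None:
    for name, (call, cost_per_call) in models.items():
        latencies, scores = [], []
        for p in prompts:
            start = time.perf_counter()
            answer = call(p)
            latencies.append(time.perf_counter() - start)
            scores.append(score_quality(answer))
        print(f"{name}: quality={mean(scores):.2f} "
              f"latency={mean(latencies) * 1000:.0f}ms "
              f"cost=${cost_per_call * len(prompts):.4f}")

# Usage (with fake callables standing in for real endpoints):
models = {
    "model_a": (lambda p: "answer from A", 0.002),
    "model_b": (lambda p: "answer from B", 0.0005),
}
ab_test(models, ["What is our leave policy?", "Summarize this contract."])
```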

Model Catalog: Which models are available to me? How do I access them? What model metadata can I learn from, and so forth? Having a model catalog is a must, and any platform that provides one as a service has a huge advantage over others.
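As an example of what a programmatic catalog looks like, the sketch below lists the most downloaded text-generation models on the Hugging Face Hub; platform-native catalogs (Azure ML, Bedrock, Vertex, etc.) expose similar listing and metadata APIs.

```python
# Sketch: querying a model catalog programmatically. The Hugging Face Hub is
# used as an example; platform-native catalogs expose similar listing APIs.
from huggingface_hub import list_models

for m in list_models(task="text-generation", sort="downloads", direction=-1, limit=5):
    print(m.id)
```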

Services/IDEs to Help Fine-Tune Models: Notebooks are the lifeline of data scientists. Anyone who wants to build and train an LLM (fine-tune, pre-train, etc.) needs to have all the tools in their repertoire. So when we evaluate a platform's maturity, we need to ensure it provides a developer-friendly ecosystem.

Fine-Tuning Frameworks: Now we are talking about some cool advanced stuff! Model fine-tuning can happen in many ways: in a model-parallel way, a data-parallel way, and so forth. So platforms that also provide frameworks (like DeepSpeed, etc.) that can help distribute and fine-tune models in parallel win the battle on this one.
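To give a flavor of what such a framework involves, below is a minimal DeepSpeed-style sketch: a ZeRO stage 2 config passed to deepspeed.initialize, which wraps the model for distributed, memory-efficient fine-tuning. The config values and the tiny stand-in model are illustrative assumptions, not a recipe.

```python
# Sketch: wrapping a model with DeepSpeed for distributed fine-tuning.
# The config values and the tiny model are illustrative only; real runs are
# launched with the `deepspeed` launcher across multiple GPUs.
import torch
import deepspeed

model = torch.nn.Linear(512, 512)  # stand-in for an actual LLM

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # shard optimizer state and gradients
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},
}

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
# Training then uses model_engine(...), model_engine.backward(loss), and
# model_engine.step() in place of the plain PyTorch loop.
```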

Fine-Tuned Model Registry and Inference: Does the platform provide capabilities for inference (GPU/CPU, etc.), and does it allow for model endpoint management in terms of sharing, collaboration, etc.? Any platform that provides these capabilities scores high, naturally!
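On the inference side, if the platform does not manage endpoints for you, you end up hosting them yourself, roughly like the FastAPI sketch below. GPT-2 is a small stand-in for a registered fine-tuned model; a managed registry and endpoint service replaces all of this plumbing.

```python
# Sketch: self-hosting an inference endpoint when the platform does not manage
# one for you. GPT-2 stands in for a registered fine-tuned model.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 50

@app.post("/generate")
def generate(req: Prompt) -> dict:
    out = generator(req.text, max_new_tokens=req.max_new_tokens)
    return {"completion": out[0]["generated_text"]}

# Run with: uvicorn app:app --port 8000   (assuming this file is saved as app.py)
```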

General Availability of Multi-Modal Models: Multi-modal models (a tongue twister for sure!) are the next big thing, and any platform that has a roadmap to include them has a leg up in the race.

1st-Party Partnerships with Vendors: We cannot ignore the new open source models (like Llama 2, etc.). We can surely access these models from a common open repository (like Hugging Face), but the best platforms are the ones that have 1st-party relationships with the model providers. For example, if Meta has a partnership with Azure and these models are offered out of the box in the model catalog, that is a huge boost.

On-Demand Access to GPUs: GPUs are worth gold now. So any platform that can provide managed GPU clusters will surely have a lot of projects lining up.



