Saturday, January 27, 2024

LangChain with Gemini

Assuming the langchain-google-genai package is installed and a Google AI Studio API key is available in the GOOGLE_API_KEY environment variable, it is as simple as this:

from langchain_google_genai import ChatGoogleGenerativeAI

# Picks up the API key from the GOOGLE_API_KEY environment variable
# (or pass google_api_key=... explicitly)
llm = ChatGoogleGenerativeAI(model="gemini-pro")

# Single prompt
response = llm.invoke("Explain Quantum Computing in 50 words?")
print(response.content)

# Several prompts in one call
batch_responses = llm.batch(
    [
        "Who is the Prime Minister of India?",
        "What is the capital of India?",
    ]
)
for response in batch_responses:
    print(response.content)
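
Because the Gemini wrapper is a standard LangChain chat model, it also composes with the usual building blocks. Below is a minimal sketch of an LCEL chain, assuming langchain-core is available (it is pulled in as a dependency of langchain-google-genai):

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Prompt template -> Gemini -> plain string output
prompt = ChatPromptTemplate.from_template("Explain {topic} in 50 words")
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"topic": "Quantum Computing"}))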


Analysing images works the same way, using the vision model:


from langchain_core.messages import HumanMessage

llm = ChatGoogleGenerativeAI(model="gemini-pro-vision")

# A multimodal message: one text part plus one image part
message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": "Describe the image",
        },
        {
            "type": "image_url",
            "image_url": "https://picsum.photos/id/237/200/300",
        },
    ]
)

response = llm.invoke([message])
print(response.content)
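
The image does not have to be hosted online; the integration also accepts base64 data URLs, so a local file can be sent as well. A sketch, assuming a local JPEG called dog.jpg (the filename is just a placeholder):

import base64
from langchain_core.messages import HumanMessage

# Read a local image and encode it as a data URL
with open("dog.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe the image"},
        {"type": "image_url", "image_url": f"data:image/jpeg;base64,{image_b64}"},
    ]
)

print(llm.invoke([message]).content)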

Now, if we want to find the differences between two images, we can pass multiple image parts in the same message:

from langchain_core.messages import HumanMessage

llm = ChatGoogleGenerativeAI(model="gemini-pro-vision")

# One text part followed by the two images to compare
message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": "Find the differences between the given images",
        },
        {
            "type": "image_url",
            "image_url": "https://picsum.photos/id/237/200/300",
        },
        {
            "type": "image_url",
            "image_url": "https://picsum.photos/id/219/5000/3333",
        },
    ]
)

response = llm.invoke([message])
print(response.content)
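
The same pattern extends to any number of images. A small convenience wrapper (compare_images is a hypothetical helper, not part of LangChain) could look like this:

from langchain_core.messages import HumanMessage

def compare_images(llm, question, image_urls):
    # Build one multimodal message: the question followed by every image
    content = [{"type": "text", "text": question}]
    content += [{"type": "image_url", "image_url": url} for url in image_urls]
    return llm.invoke([HumanMessage(content=content)]).content

print(compare_images(
    llm,
    "Find the differences between the given images",
    [
        "https://picsum.photos/id/237/200/300",
        "https://picsum.photos/id/219/5000/3333",
    ],
))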


References:

https://codemaker2016.medium.com/build-your-own-chatgpt-using-google-gemini-api-1b079f6a8415#5f9f
