[LLM] Langchain

[์›๋ณธ ๋งํฌ]

Langchain์€ ์—์ด์ „ํŠธ ๊ฐœ๋ฐœ์„ ์œ„ํ•œ ์ฃผ์š” ์˜คํ”ˆ์†Œ์Šค ํ”„๋ ˆ์ž„์›Œํฌ ์ค‘ ํ•˜๋‚˜๋‹ค.
LLM ๊ธฐ๋ฐ˜์˜ ์—์ด์ „ํŠธ ์‹œ์Šคํ…œ์„ ์ข€ ๋” ํŽธ๋ฆฌํ•˜๊ฒŒ ๋งŒ๋“ค ์ˆ˜ ์žˆ๊ฒŒ ๋„์™€์ค€๋‹ค.

At the moment, the only officially supported languages are Python and TypeScript.
As for models, you can attach pretty much any LLM: the major providers such as Gemini, OpenAI, and Anthropic all ship integration drivers.




LangChain vs LangGraph

Langchain์„ ์–ธ๊ธ‰ํ•˜๋ฉด ํ•ญ์ƒ ์Œ์œผ๋กœ ๋‚˜์˜ค๋Š” ๊ฒƒ์ด Langgraph๋‹ค.

๋น„๊ตํ•˜์ž๋ฉด Langchain์€ ๋น„๊ต์  ๋‹จ์ˆœํ•œ ์‚ฌ์šฉ์‚ฌ๋ก€๋ฅผ ์œ„ํ•ด ์ œ๊ณต๋˜๋Š” ๊ฐ„๋‹จํ•œ ๊ตฌ์กฐ์˜ ํ”„๋ ˆ์ž„์›Œํฌ๊ณ , Langgraph๋Š” ๊ทธ๋ž˜ํ”„ ํ˜•ํƒœ๋กœ ์—์ด์ „ํŠธ๋ฅผ ์—ฎ์–ด์„œ ๋†’์€ ์ˆ˜์ค€์˜ ์ƒํ˜ธ์ž‘์šฉ์„ ๊ตฌํ˜„ํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ด์ฃผ๋Š” ๋ณต์žกํ•œ ํ”„๋ ˆ์ž„์›Œํฌ๋‹ค.

๋ณธ์ธ์ด ๋А๋ผ๊ธฐ์— ๊ทธ๋ ‡๊ฒŒ ๋ณต์žกํ•œ ์‹œ์Šคํ…œ์ด ์•„๋‹ˆ๋ผ๋ฉด Langchain์œผ๋กœ ๋จผ์ € ์‹œ์ž‘ํ•ด๋ณด๊ณ , ๋ถ€์กฑํ•จ์„ ๋А๋ผ๋ฉด ์˜ฎ๊ฒจ๋„ ๋  ๊ฒƒ ๊ฐ™๋‹ค.

๊ทธ๋ฆฌ๊ณ  ์ด๊ฑฐ ๋‘˜ ๋‹ค ๊ฐ™์€ ์˜คํ”ˆ์†Œ์Šค ๊ทธ๋ฃน์—์„œ ๋งŒ๋“  ๊ฒƒ์ด๋‹ค.




Installation (with Gemini)

Assuming we're using Gemini, let's give it a spin.
Install langchain together with the dedicated driver for the LLM you plan to use:

uv add langchain langchain-google-genai

์ข…์†์„ฑ ๊ตฌ์กฐ๊ฐ€ ๋ณต์žกํ•˜์ง„ ์•Š๋‹ค.




Basic usage

์—์ด์ „ํŠธ ํ”„๋ ˆ์ž„์›Œํฌ๋‹ˆ ๋ญ๋‹ˆ ํ•˜์ง€๋งŒ, ๊ธฐ์ €์— ๊น”๋ ค์žˆ๋Š” ๊ธฐ๋ฐ˜๊ตฌ์กฐ ์ž์ฒด๋Š” ๋งค์šฐ ๋‹จ์ˆœํ•˜๋‹ค.
๊ฒฐ๊ตญ์€ ํ…œํ”Œ๋ฆฟ ๋ฌธ์ž์—ด ์ƒ์„ฑ๊ธฐ์— ๋ถˆ๊ณผํ•˜๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค.

๋‹ค์Œ ์ฝ”๋“œ๋Š” LLM ํ˜ธ์ถœ ์—†์ด ํ…œํ”Œ๋ฆฟ ๊ธฐ๋ฐ˜์œผ๋กœ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์ œ๋‹ค.

from dotenv import load_dotenv
from langchain_core.prompts import PromptTemplate

# Load environment variables
load_dotenv()

def basic_prompt_example():
    """Basic prompt template example"""
    print("=== Basic prompt template example ===")

    # Create a simple prompt template
    template = """You are a helpful AI assistant.
    Please answer the following question kindly and accurately.

    Question: {question}
    Answer:"""

    prompt = PromptTemplate.from_template(template)

    # Format the prompt
    formatted_prompt = prompt.format(
        question="What is the difference between a list and a tuple in Python?"
    )
    print(formatted_prompt)
    print()

def main():
    """Entry point"""
    basic_prompt_example()

if __name__ == "__main__":
    main()

๊ทธ๋ƒฅ ์ด๋Ÿฐ ์‹์œผ๋กœ ๊ตฌ๋ฉ ๋šซ์–ด๋†“์€ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ๋งŒ๋“ค์–ด๋’€๋‹ค๊ฐ€, ์‚ฌ์šฉํ• ๋•Œ ์ฃผ์ž…ํ•ด์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด๋‹ค.

์กฐ๊ธˆ ๋” ๊ฐ€๋ณด์ž.
์ฑ„ํŒ…์„ ๊ณ ๋ คํ•ด์„œ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ž‘์„ฑํ•˜๊ฒŒ ๋˜๋ฉด, ์šฐ๋ฆฌ๊ฐ€ ์›ํ•˜๋Š” ์ง€์‹œ๋ฌธ๊ณผ ์‚ฌ์šฉ์ž์˜ ํ…์ŠคํŠธ๋ฅผ ๋‚˜์—ดํ•ด์„œ ์ „๋‹ฌํ•ด์•ผ ํ•  ๊ฒฝ์šฐ๊ฐ€ ์ œ๋ฒ• ๋งŽ๋‹ค.

In that case you can use ChatPromptTemplate and pass the messages as a list of tuples.

from dotenv import load_dotenv
from langchain_core.prompts import ChatPromptTemplate

# Load environment variables
load_dotenv()

def basic_prompt_example():
    """Basic chat prompt template example"""
    print("=== Basic chat prompt template example ===")

    # Create a chat prompt template
    prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system",
                "You are a {expertise} expert. Please give professional, helpful answers.",
            ),
            ("human", "{question}"),
        ]
    )

    # Format the prompt
    formatted_prompt = prompt.format(
        expertise="programming", question="Briefly explain what LangChain is."
    )
    print(formatted_prompt)
    print()

def main():
    """Entry point"""
    basic_prompt_example()

if __name__ == "__main__":
    main()

๊ทธ๋Ÿผ ์ด๋Ÿฐ ์‹์œผ๋กœ ์ ๋‹นํžˆ, ํ–‰ ๋‹จ์œ„๋กœ ๊ตฌ๋ถ„ํ•˜๊ณ  ์ฝœ๋ก ์œผ๋กœ ํ—ค๋”:๊ฐ’ ์Œ์„ ์ง‘์–ด๋„ฃ์–ด์„œ ์ƒ์„ฑํ•ด์ค€๋‹ค.

์ด๋ฒˆ์—๋Š” ์ง„์งœ๋กœ LLM๊นŒ์ง€ ๋ถ™์—ฌ์„œ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋‚ ๋ ค๋ณด์ž.
ํ™˜๊ฒฝ๋ณ€์ˆ˜์— Gemini Key๋ฅผ ๋„ฃ์–ด๋‘ฌ์•ผ ํ•œ๋‹ค.
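For example, a .env file in the project root (the value shown is a placeholder, not a real key):

```shell
# .env — picked up by load_dotenv(); never commit this file
GOOGLE_API_KEY=your_gemini_api_key_here
```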

from dotenv import load_dotenv
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_google_genai import ChatGoogleGenerativeAI
import os

# Load environment variables
load_dotenv()

def basic_prompt_example():
    print("=== Gemini LLM chain example ===")

    # Check the API key
    if (
        not os.getenv("GOOGLE_API_KEY")
        or os.getenv("GOOGLE_API_KEY") == "your_gemini_api_key_here"
    ):
        print("⚠️  GOOGLE_API_KEY is not set.")
        print("To actually call Gemini, set GOOGLE_API_KEY in your .env file.")
        print("Get a key from Google AI Studio: https://aistudio.google.com/app/apikey")
        print()
        return

    try:
        # Initialize the Gemini LLM
        llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash", temperature=0.7)

        # Prompt template
        prompt = ChatPromptTemplate.from_messages(
            [("system", "You are a friendly AI assistant."), ("human", "{question}")]
        )

        # Output parser
        output_parser = StrOutputParser()

        # Build the chain
        chain = prompt | llm | output_parser

        # Run the chain
        response = chain.invoke(
            {"question": "Briefly describe the main features of LangChain 1.0 and Gemini."}
        )

        print("✅ Gemini response:")
        print(response)
        print()

    except Exception as e:
        print(f"❌ Error while calling Gemini: {e}")
        print()

def main():
    """Entry point"""
    basic_prompt_example()

if __name__ == "__main__":
    main()

The interesting part here is that the call flow is defined with chain syntax.

Sending the prompt to the llm and then parsing its response is expressed through their own custom operator syntax.
A single invoke then runs that pipeline in order and spits out the result.
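As a rough intuition, the `|` trick is just operator overloading. Here is a toy sketch (hypothetical; not LangChain's actual Runnable classes) of how composition plus a single invoke can work:

```python
# Toy pipeline sketch: each stage wraps a function; __or__ composes two stages,
# and invoke() pushes the input through the composed pipeline in order.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Build a new step that runs self first, then feeds the result to other.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Step(lambda d: f"Question: {d['question']}")
fake_llm = Step(lambda p: {"content": f"(answer to: {p})"})
parser = Step(lambda r: r["content"])

chain = prompt | fake_llm | parser
print(chain.invoke({"question": "What is LangChain?"}))
# (answer to: Question: What is LangChain?)
```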


์ž˜ ๋™์ž‘ํ–ˆ๋‹ค.




๋ฉ€ํ‹ฐ ์—์ด์ „ํŠธ ๊ตฌํ˜„

langchain ์ž์ฒด๋Š” ๋ฉ€ํ‹ฐ์—์ด์ „ํŠธ ๊ตฌ์กฐ๋ฅผ ์ƒ์ •ํ•˜๊ณ  ๋งŒ๋“ค์–ด์ง€์ง€๋Š” ์•Š์•˜๋‹ค.
์ง์ ‘ ๊ทธ ๋ฉ€ํ‹ฐ์—์ด์ „ํŠธ ์›Œํฌํ”Œ๋กœ๋ฅผ ๊ตฌํ˜„ํ•ด์„œ ๋ถ™์ด๋ฉด ๋‹น์—ฐํžˆ ๋งŒ๋“ค ์ˆ˜๋Š” ์žˆ๋Š”๋ฐ, ๊ทธ๊ฑธ ์ง์ ‘ ์ง€์›ํ•ด์ฃผ์ง„ ์•Š๋Š”๋‹ค.

To set up a multi-agent structure with LangChain alone, you have to lay out the whole structure yourself, like this:

initialize the agents,


then hand-write every branch and follow-up action based on each agent's output.
Packaging all of this into a structure is exactly what LangGraph provides.


์ž˜ ๋™์ž‘ํ•˜๊ธด ํ•œ๋‹ค.

Full example code

"""
LangChain ๊ธฐ๋ณธ ๊ธฐ๋Šฅ๋งŒ์œผ๋กœ ๊ตฌํ˜„ํ•œ ์˜๋„ ๋ถ„๋ฅ˜ ๊ธฐ๋ฐ˜ ๋ฉ€ํ‹ฐ ์—์ด์ „ํŠธ ์˜ˆ์ œ

์ด ์˜ˆ์ œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ตฌ์กฐ๋กœ ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค:
1. Intro ์—์ด์ „ํŠธ: ์‚ฌ์šฉ์ž ์ž…๋ ฅ์˜ ์˜๋„๋ฅผ ๋ถ„๋ฅ˜ (HELP vs SMALLTALK)
2. Help ์—์ด์ „ํŠธ: ๋„์›€/์งˆ๋ฌธ/๋ฌธ์ œํ•ด๊ฒฐ์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์ „๋ฌธ์ ์ธ ๋‹ต๋ณ€ ์ œ๊ณต
3. Smalltalk ์—์ด์ „ํŠธ: ์ผ์ƒ๋Œ€ํ™”/์ธ์‚ฌ/์žก๋‹ด์— ์นœ๊ทผํ•˜๊ฒŒ ์‘๋‹ต

๊ฐ ์—์ด์ „ํŠธ๋Š” ๊ณ ์œ ํ•œ ์—ญํ• ๊ณผ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๊ฐ€์ง€๊ณ , 
Intro ์—์ด์ „ํŠธ์˜ ๋ถ„๋ฅ˜ ๊ฒฐ๊ณผ์— ๋”ฐ๋ผ ์ ์ ˆํ•œ ์ „๋ฌธ ์—์ด์ „ํŠธ๊ฐ€ ์„ ํƒ๋ฉ๋‹ˆ๋‹ค.
"""

import os
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Load environment variables
load_dotenv()

class Agent:
    """Simple agent class"""

    def __init__(self, name, role, prompt_template, llm):
        self.name = name
        self.role = role
        self.prompt = ChatPromptTemplate.from_template(prompt_template)
        self.llm = llm
        self.output_parser = StrOutputParser()
        self.chain = self.prompt | self.llm | self.output_parser

    def run(self, input_data):
        """Run the agent"""
        print(f"\n🤖 Agent {self.name} is working...")
        print(f"Role: {self.role}")
        print("-" * 50)

        try:
            result = self.chain.invoke(input_data)
            print(f"✅ {self.name} done!")
            return result
        except Exception as e:
            print(f"❌ {self.name} error: {str(e)}")
            return None

class MultiAgentSystem:
    """Intent-classification-based multi-agent system"""

    def __init__(self, llm):
        self.llm = llm
        self.agents = {}
        self.setup_agents()

    def setup_agents(self):
        """Set up the agents"""

        # 1. Intro agent - intent classification
        intro_prompt = """
You are an expert at analyzing user intent.
Analyze the user's input and classify it as one of the following:

Options:
- HELP: help or problem solving is needed (questions, guide requests, tutorials, troubleshooting, etc.)
- SMALLTALK: everyday conversation or chit-chat (greetings, weather, feelings, personal stories, etc.)

User input: {user_input}

Response format:
Classification: [HELP or SMALLTALK]
Reason: [one-line explanation of why]

Keep the classification clear and unambiguous.
"""

        # 2. Help agent - assistance and problem solving
        help_prompt = """
You are a kind and knowledgeable helper.
Help the user solve their question or problem.

User question: {user_input}

Provide help as follows:
- Give a clear, concrete answer
- Provide step-by-step guidance when needed
- Include extra notes or tips
- Use easy-to-understand explanations

Be professional yet friendly.
"""

        # 3. Smalltalk agent - everyday conversation
        smalltalk_prompt = """
You are a friendly, empathetic conversation partner.
Have a natural, everyday conversation with the user.

User message: {user_input}

Converse as follows:
- Use a warm, friendly tone
- Express and acknowledge emotions appropriately
- Keep the conversation flowing naturally
- Ask related questions or expand the topic when appropriate

Make the conversation comfortable and enjoyable.
"""

        # ์—์ด์ „ํŠธ ์ƒ์„ฑ
        self.agents['intro'] = Agent(
            "Intro", 
            "์‚ฌ์šฉ์ž ์˜๋„ ๋ถ„๋ฅ˜",
            intro_prompt, 
            self.llm
        )

        self.agents['help'] = Agent(
            "Help",
            "๋„์›€ ๋ฐ ๋ฌธ์ œ ํ•ด๊ฒฐ", 
            help_prompt,
            self.llm
        )

        self.agents['smalltalk'] = Agent(
            "Smalltalk",
            "์ผ์ƒ ๋Œ€ํ™” ๋ฐ ์žก๋‹ด",
            smalltalk_prompt,
            self.llm
        )

    def classify_intent(self, user_input):
        """Classify the user's intent"""
        print("🔍 Analyzing user intent...")

        result = self.agents['intro'].run({"user_input": user_input})
        if not result:
            return "HELP"  # default

        # Parse the classification result
        if "HELP" in result.upper():
            return "HELP"
        elif "SMALLTALK" in result.upper():
            return "SMALLTALK"
        else:
            return "HELP"  # default

    def run_conversation(self, user_input):
        """Run the conversation system"""
        print("🚀 Multi-agent conversation system starting!")
        print(f"User: {user_input}")
        print("=" * 60)

        # Step 1: classify intent
        intent = self.classify_intent(user_input)
        print(f"\n🎯 Classification result: {intent}")
        print("=" * 60)

        # Step 2: run the matching agent
        if intent == "HELP":
            response = self.agents['help'].run({"user_input": user_input})
        else:  # SMALLTALK
            response = self.agents['smalltalk'].run({"user_input": user_input})

        if response:
            print("\n💬 Final response:")
            print(response)

        print("\n" + "=" * 60)
        return response

def main():
    print("🔗 LangChain intent-classification-based multi-agent system")
    print("=" * 50)

    # Check the API key
    api_key = os.getenv("GOOGLE_API_KEY")
    if not api_key:
        print("❌ GOOGLE_API_KEY is not set.")
        print("💡 Please set the API key in your .env file.")
        return

    try:
        # Initialize the Gemini model
        llm = ChatGoogleGenerativeAI(
            model="gemini-2.0-flash",
            temperature=0.7,
            google_api_key=api_key
        )

        # Create the multi-agent system
        multi_agent = MultiAgentSystem(llm)

        print("The system is ready!")
        print("\n💡 Usage:")
        print("- Ask a question or request help → the Help agent responds")
        print("- Greet or make small talk → the Smalltalk agent responds")
        print("- Type 'quit' or 'exit' to stop")
        print("\n" + "=" * 50)

        # Interactive loop
        while True:
            try:
                user_input = input("\n👤 You: ").strip()

                if user_input.lower() in ['quit', 'exit']:
                    print("👋 Goodbye!")
                    break

                if not user_input:
                    continue

                # Run one conversation turn
                multi_agent.run_conversation(user_input)

            except KeyboardInterrupt:
                print("\n👋 Goodbye!")
                break

    except Exception as e:
        print(f"❌ System initialization failed: {e}")

if __name__ == "__main__":
    main()


References
https://docs.langchain.com/oss/python/langchain/overview
https://www.samsungsds.com/kr/insights/what-is-langchain.html