Introduce Plugins (#13836)

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Signed-off-by: -LAN- <laipz8200@outlook.com>
Signed-off-by: xhe <xw897002528@gmail.com>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: takatost <takatost@gmail.com>
Co-authored-by: kurokobo <kuro664@gmail.com>
Co-authored-by: Novice Lee <novicelee@NoviPro.local>
Co-authored-by: zxhlyh <jasonapring2015@outlook.com>
Co-authored-by: AkaraChen <akarachen@outlook.com>
Co-authored-by: Yi <yxiaoisme@gmail.com>
Co-authored-by: Joel <iamjoel007@gmail.com>
Co-authored-by: JzoNg <jzongcode@gmail.com>
Co-authored-by: twwu <twwu@dify.ai>
Co-authored-by: Hiroshi Fujita <fujita-h@users.noreply.github.com>
Co-authored-by: AkaraChen <85140972+AkaraChen@users.noreply.github.com>
Co-authored-by: NFish <douxc512@gmail.com>
Co-authored-by: Wu Tianwei <30284043+WTW0313@users.noreply.github.com>
Co-authored-by: 非法操作 <hjlarry@163.com>
Co-authored-by: Novice <857526207@qq.com>
Co-authored-by: Hiroki Nagai <82458324+nagaihiroki-git@users.noreply.github.com>
Co-authored-by: Gen Sato <52241300+halogen22@users.noreply.github.com>
Co-authored-by: eux <euxuuu@gmail.com>
Co-authored-by: huangzhuo1949 <167434202+huangzhuo1949@users.noreply.github.com>
Co-authored-by: huangzhuo <huangzhuo1@xiaomi.com>
Co-authored-by: lotsik <lotsik@mail.ru>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: nite-knite <nkCoding@gmail.com>
Co-authored-by: Jyong <76649700+JohnJyong@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: gakkiyomi <gakkiyomi@aliyun.com>
Co-authored-by: CN-P5 <heibai2006@gmail.com>
Co-authored-by: CN-P5 <heibai2006@qq.com>
Co-authored-by: Chuehnone <1897025+chuehnone@users.noreply.github.com>
Co-authored-by: yihong <zouzou0208@gmail.com>
Co-authored-by: Kevin9703 <51311316+Kevin9703@users.noreply.github.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: Boris Feld <lothiraldan@gmail.com>
Co-authored-by: mbo <himabo@gmail.com>
Co-authored-by: mabo <mabo@aeyes.ai>
Co-authored-by: Warren Chen <warren.chen830@gmail.com>
Co-authored-by: JzoNgKVO <27049666+JzoNgKVO@users.noreply.github.com>
Co-authored-by: jiandanfeng <chenjh3@wangsu.com>
Co-authored-by: zhu-an <70234959+xhdd123321@users.noreply.github.com>
Co-authored-by: zhaoqingyu.1075 <zhaoqingyu.1075@bytedance.com>
Co-authored-by: 海狸大師 <86974027+yenslife@users.noreply.github.com>
Co-authored-by: Xu Song <xusong.vip@gmail.com>
Co-authored-by: rayshaw001 <396301947@163.com>
Co-authored-by: Ding Jiatong <dingjiatong@gmail.com>
Co-authored-by: Bowen Liang <liangbowen@gf.com.cn>
Co-authored-by: JasonVV <jasonwangiii@outlook.com>
Co-authored-by: le0zh <newlight@qq.com>
Co-authored-by: zhuxinliang <zhuxinliang@didiglobal.com>
Co-authored-by: k-zaku <zaku99@outlook.jp>
Co-authored-by: luckylhb90 <luckylhb90@gmail.com>
Co-authored-by: hobo.l <hobo.l@binance.com>
Co-authored-by: jiangbo721 <365065261@qq.com>
Co-authored-by: 刘江波 <jiangbo721@163.com>
Co-authored-by: Shun Miyazawa <34241526+miya@users.noreply.github.com>
Co-authored-by: EricPan <30651140+Egfly@users.noreply.github.com>
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: sino <sino2322@gmail.com>
Co-authored-by: Jhvcc <37662342+Jhvcc@users.noreply.github.com>
Co-authored-by: lowell <lowell.hu@zkteco.in>
Co-authored-by: Boris Polonsky <BorisPolonsky@users.noreply.github.com>
Co-authored-by: Ademílson Tonato <ademilsonft@outlook.com>
Co-authored-by: Ademílson Tonato <ademilson.tonato@refurbed.com>
Co-authored-by: IWAI, Masaharu <iwaim.sub@gmail.com>
Co-authored-by: Yueh-Po Peng (Yabi) <94939112+y10ab1@users.noreply.github.com>
Co-authored-by: Jason <ggbbddjm@gmail.com>
Co-authored-by: Xin Zhang <sjhpzx@gmail.com>
Co-authored-by: yjc980121 <3898524+yjc980121@users.noreply.github.com>
Co-authored-by: heyszt <36215648+hieheihei@users.noreply.github.com>
Co-authored-by: Abdullah AlOsaimi <osaimiacc@gmail.com>
Co-authored-by: Abdullah AlOsaimi <189027247+osaimi@users.noreply.github.com>
Co-authored-by: Yingchun Lai <laiyingchun@apache.org>
Co-authored-by: Hash Brown <hi@xzd.me>
Co-authored-by: zuodongxu <192560071+zuodongxu@users.noreply.github.com>
Co-authored-by: Masashi Tomooka <tmokmss@users.noreply.github.com>
Co-authored-by: aplio <ryo.091219@gmail.com>
Co-authored-by: Obada Khalili <54270856+obadakhalili@users.noreply.github.com>
Co-authored-by: Nam Vu <zuzoovn@gmail.com>
Co-authored-by: Kei YAMAZAKI <1715090+kei-yamazaki@users.noreply.github.com>
Co-authored-by: TechnoHouse <13776377+deephbz@users.noreply.github.com>
Co-authored-by: Riddhimaan-Senapati <114703025+Riddhimaan-Senapati@users.noreply.github.com>
Co-authored-by: MaFee921 <31881301+2284730142@users.noreply.github.com>
Co-authored-by: te-chan <t-nakanome@sakura-is.co.jp>
Co-authored-by: HQidea <HQidea@users.noreply.github.com>
Co-authored-by: Joshbly <36315710+Joshbly@users.noreply.github.com>
Co-authored-by: xhe <xw897002528@gmail.com>
Co-authored-by: weiwenyan-dev <154779315+weiwenyan-dev@users.noreply.github.com>
Co-authored-by: ex_wenyan.wei <ex_wenyan.wei@tcl.com>
Co-authored-by: engchina <12236799+engchina@users.noreply.github.com>
Co-authored-by: engchina <atjapan2015@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: 呆萌闷油瓶 <253605712@qq.com>
Co-authored-by: Kemal <kemalmeler@outlook.com>
Co-authored-by: Lazy_Frog <4590648+lazyFrogLOL@users.noreply.github.com>
Co-authored-by: Yi Xiao <54782454+YIXIAO0@users.noreply.github.com>
Co-authored-by: Steven sun <98230804+Tuyohai@users.noreply.github.com>
Co-authored-by: steven <sunzwj@digitalchina.com>
Co-authored-by: Kalo Chin <91766386+fdb02983rhy@users.noreply.github.com>
Co-authored-by: Katy Tao <34019945+KatyTao@users.noreply.github.com>
Co-authored-by: depy <42985524+h4ckdepy@users.noreply.github.com>
Co-authored-by: 胡春东 <gycm520@gmail.com>
Co-authored-by: Junjie.M <118170653@qq.com>
Co-authored-by: MuYu <mr.muzea@gmail.com>
Co-authored-by: Naoki Takashima <39912547+takatea@users.noreply.github.com>
Co-authored-by: Summer-Gu <37869445+gubinjie@users.noreply.github.com>
Co-authored-by: Fei He <droxer.he@gmail.com>
Co-authored-by: ybalbert001 <120714773+ybalbert001@users.noreply.github.com>
Co-authored-by: Yuanbo Li <ybalbert@amazon.com>
Co-authored-by: douxc <7553076+douxc@users.noreply.github.com>
Co-authored-by: liuzhenghua <1090179900@qq.com>
Co-authored-by: Wu Jiayang <62842862+Wu-Jiayang@users.noreply.github.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: kimjion <45935338+kimjion@users.noreply.github.com>
Co-authored-by: AugNSo <song.tiankai@icloud.com>
Co-authored-by: llinvokerl <38915183+llinvokerl@users.noreply.github.com>
Co-authored-by: liusurong.lsr <liusurong.lsr@alibaba-inc.com>
Co-authored-by: Vasu Negi <vasu-negi@users.noreply.github.com>
Co-authored-by: Hundredwz <1808096180@qq.com>
Co-authored-by: Xiyuan Chen <52963600+GareArc@users.noreply.github.com>
Yeuoly
2025-02-17 17:05:13 +08:00
committed by GitHub
parent 222df44d21
commit 403e2d58b9
3272 changed files with 66339 additions and 281594 deletions

View File

@@ -0,0 +1,189 @@
from collections.abc import Generator, Mapping
from typing import Optional, Union
from controllers.service_api.wraps import create_or_update_end_user_for_user_id
from core.app.apps.advanced_chat.app_generator import AdvancedChatAppGenerator
from core.app.apps.agent_chat.app_generator import AgentChatAppGenerator
from core.app.apps.chat.app_generator import ChatAppGenerator
from core.app.apps.completion.app_generator import CompletionAppGenerator
from core.app.apps.workflow.app_generator import WorkflowAppGenerator
from core.app.entities.app_invoke_entities import InvokeFrom
from core.plugin.backwards_invocation.base import BaseBackwardsInvocation
from extensions.ext_database import db
from models.account import Account
from models.model import App, AppMode, EndUser
class PluginAppBackwardsInvocation(BaseBackwardsInvocation):
@classmethod
def invoke_app(
cls,
app_id: str,
user_id: str,
tenant_id: str,
conversation_id: Optional[str],
query: Optional[str],
stream: bool,
inputs: Mapping,
files: list[dict],
) -> Generator[Mapping | str, None, None] | Mapping:
"""
invoke app
"""
app = cls._get_app(app_id, tenant_id)
if not user_id:
user = create_or_update_end_user_for_user_id(app)
else:
user = cls._get_user(user_id)
conversation_id = conversation_id or ""
if app.mode in {AppMode.ADVANCED_CHAT.value, AppMode.AGENT_CHAT.value, AppMode.CHAT.value}:
if not query:
raise ValueError("missing query")
return cls.invoke_chat_app(app, user, conversation_id, query, stream, inputs, files)
elif app.mode == AppMode.WORKFLOW.value:
return cls.invoke_workflow_app(app, user, stream, inputs, files)
        elif app.mode == AppMode.COMPLETION.value:
return cls.invoke_completion_app(app, user, stream, inputs, files)
raise ValueError("unexpected app type")
@classmethod
def invoke_chat_app(
cls,
app: App,
user: Account | EndUser,
conversation_id: str,
query: str,
stream: bool,
inputs: Mapping,
files: list[dict],
) -> Generator[Mapping | str, None, None] | Mapping:
"""
invoke chat app
"""
if app.mode == AppMode.ADVANCED_CHAT.value:
workflow = app.workflow
if not workflow:
raise ValueError("unexpected app type")
return AdvancedChatAppGenerator().generate(
app_model=app,
workflow=workflow,
user=user,
args={
"inputs": inputs,
"query": query,
"files": files,
"conversation_id": conversation_id,
},
invoke_from=InvokeFrom.SERVICE_API,
streaming=stream,
)
elif app.mode == AppMode.AGENT_CHAT.value:
return AgentChatAppGenerator().generate(
app_model=app,
user=user,
args={
"inputs": inputs,
"query": query,
"files": files,
"conversation_id": conversation_id,
},
invoke_from=InvokeFrom.SERVICE_API,
streaming=stream,
)
elif app.mode == AppMode.CHAT.value:
return ChatAppGenerator().generate(
app_model=app,
user=user,
args={
"inputs": inputs,
"query": query,
"files": files,
"conversation_id": conversation_id,
},
invoke_from=InvokeFrom.SERVICE_API,
streaming=stream,
)
else:
raise ValueError("unexpected app type")
@classmethod
def invoke_workflow_app(
cls,
app: App,
user: EndUser | Account,
stream: bool,
inputs: Mapping,
files: list[dict],
) -> Generator[Mapping | str, None, None] | Mapping:
"""
invoke workflow app
"""
workflow = app.workflow
if not workflow:
raise ValueError("")
return WorkflowAppGenerator().generate(
app_model=app,
workflow=workflow,
user=user,
args={"inputs": inputs, "files": files},
invoke_from=InvokeFrom.SERVICE_API,
streaming=stream,
call_depth=1,
workflow_thread_pool_id=None,
)
@classmethod
def invoke_completion_app(
cls,
app: App,
user: EndUser | Account,
stream: bool,
inputs: Mapping,
files: list[dict],
) -> Generator[Mapping | str, None, None] | Mapping:
"""
invoke completion app
"""
return CompletionAppGenerator().generate(
app_model=app,
user=user,
args={"inputs": inputs, "files": files},
invoke_from=InvokeFrom.SERVICE_API,
streaming=stream,
)
@classmethod
def _get_user(cls, user_id: str) -> Union[EndUser, Account]:
"""
get the user by user id
"""
user = db.session.query(EndUser).filter(EndUser.id == user_id).first()
if not user:
user = db.session.query(Account).filter(Account.id == user_id).first()
if not user:
raise ValueError("user not found")
return user
@classmethod
def _get_app(cls, app_id: str, tenant_id: str) -> App:
"""
get app
"""
        app = db.session.query(App).filter(App.id == app_id).filter(App.tenant_id == tenant_id).first()
if not app:
raise ValueError("app not found")
return app
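
A minimal usage sketch of the class above, assuming the file lives at core.plugin.backwards_invocation.app (this diff hides file paths; the path mirrors the core.plugin.backwards_invocation.base import) and that all IDs are placeholders. The call needs a Flask app context with a configured database session, since it goes through db.session.

from collections.abc import Mapping
from core.plugin.backwards_invocation.app import PluginAppBackwardsInvocation  # assumed path

response = PluginAppBackwardsInvocation.invoke_app(
    app_id="00000000-0000-0000-0000-000000000000",  # placeholder app id
    user_id="00000000-0000-0000-0000-000000000001",  # placeholder EndUser/Account id
    tenant_id="00000000-0000-0000-0000-000000000002",  # placeholder workspace id
    conversation_id=None,  # None/empty starts a new conversation
    query="Hello",  # required for chat-style apps
    stream=True,
    inputs={},
    files=[],
)
# chat apps with stream=True return a generator; other modes may return a mapping
if isinstance(response, Mapping):
    print(response)
else:
    for chunk in response:
        print(chunk)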

View File

@@ -0,0 +1,29 @@
from collections.abc import Generator, Mapping
from typing import Generic, Optional, TypeVar
from pydantic import BaseModel
class BaseBackwardsInvocation:
@classmethod
def convert_to_event_stream(cls, response: Generator[BaseModel | Mapping | str, None, None] | BaseModel | Mapping):
if isinstance(response, Generator):
try:
for chunk in response:
if isinstance(chunk, BaseModel | dict):
yield BaseBackwardsInvocationResponse(data=chunk).model_dump_json().encode() + b"\n\n"
elif isinstance(chunk, str):
yield f"event: {chunk}\n\n".encode()
except Exception as e:
error_message = BaseBackwardsInvocationResponse(error=str(e)).model_dump_json()
yield f"{error_message}\n\n".encode()
else:
yield BaseBackwardsInvocationResponse(data=response).model_dump_json().encode() + b"\n\n"
T = TypeVar("T", bound=dict | Mapping | str | bool | int | BaseModel)
class BaseBackwardsInvocationResponse(BaseModel, Generic[T]):
data: Optional[T] = None
error: str = ""
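
A standalone sketch of the envelope format convert_to_event_stream produces; the byte strings in the comments are approximate and assume the class above is importable.

def fake_stream():
    yield {"answer": "hello"}  # mappings are wrapped in a response envelope
    yield "ping"  # bare strings become SSE-style event lines

for raw in BaseBackwardsInvocation.convert_to_event_stream(fake_stream()):
    print(raw)
# approximately:
# b'{"data":{"answer":"hello"},"error":""}\n\n'
# b'event: ping\n\n'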

View File

@@ -0,0 +1,30 @@
from core.plugin.entities.request import RequestInvokeEncrypt
from core.tools.utils.configuration import ProviderConfigEncrypter
from models.account import Tenant
class PluginEncrypter:
@classmethod
def invoke_encrypt(cls, tenant: Tenant, payload: RequestInvokeEncrypt) -> dict:
encrypter = ProviderConfigEncrypter(
tenant_id=tenant.id,
config=payload.config,
provider_type=payload.namespace,
provider_identity=payload.identity,
)
if payload.opt == "encrypt":
return {
"data": encrypter.encrypt(payload.data),
}
elif payload.opt == "decrypt":
return {
"data": encrypter.decrypt(payload.data),
}
elif payload.opt == "clear":
encrypter.delete_tool_credentials_cache()
return {
"data": {},
}
else:
raise ValueError(f"Invalid opt: {payload.opt}")

View File

@@ -0,0 +1,321 @@
import tempfile
from binascii import hexlify, unhexlify
from collections.abc import Generator
from core.model_manager import ModelManager
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk
from core.model_runtime.entities.message_entities import (
PromptMessage,
SystemPromptMessage,
UserPromptMessage,
)
from core.plugin.backwards_invocation.base import BaseBackwardsInvocation
from core.plugin.entities.request import (
RequestInvokeLLM,
RequestInvokeModeration,
RequestInvokeRerank,
RequestInvokeSpeech2Text,
RequestInvokeSummary,
RequestInvokeTextEmbedding,
RequestInvokeTTS,
)
from core.tools.entities.tool_entities import ToolProviderType
from core.tools.utils.model_invocation_utils import ModelInvocationUtils
from core.workflow.nodes.llm.node import LLMNode
from models.account import Tenant
class PluginModelBackwardsInvocation(BaseBackwardsInvocation):
@classmethod
def invoke_llm(
cls, user_id: str, tenant: Tenant, payload: RequestInvokeLLM
) -> Generator[LLMResultChunk, None, None] | LLMResult:
"""
invoke llm
"""
model_instance = ModelManager().get_model_instance(
tenant_id=tenant.id,
provider=payload.provider,
model_type=payload.model_type,
model=payload.model,
)
# invoke model
response = model_instance.invoke_llm(
prompt_messages=payload.prompt_messages,
model_parameters=payload.completion_params,
tools=payload.tools,
stop=payload.stop,
            stream=payload.stream if payload.stream is not None else True,  # default to streaming only when unspecified
user=user_id,
)
if isinstance(response, Generator):
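            # streaming path: deduct the tenant's LLM quota incrementally as usage arrives with each chunk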
def handle() -> Generator[LLMResultChunk, None, None]:
for chunk in response:
if chunk.delta.usage:
LLMNode.deduct_llm_quota(
tenant_id=tenant.id, model_instance=model_instance, usage=chunk.delta.usage
)
yield chunk
return handle()
else:
if response.usage:
LLMNode.deduct_llm_quota(tenant_id=tenant.id, model_instance=model_instance, usage=response.usage)
return response
@classmethod
def invoke_text_embedding(cls, user_id: str, tenant: Tenant, payload: RequestInvokeTextEmbedding):
"""
invoke text embedding
"""
model_instance = ModelManager().get_model_instance(
tenant_id=tenant.id,
provider=payload.provider,
model_type=payload.model_type,
model=payload.model,
)
# invoke model
response = model_instance.invoke_text_embedding(
texts=payload.texts,
user=user_id,
)
return response
@classmethod
def invoke_rerank(cls, user_id: str, tenant: Tenant, payload: RequestInvokeRerank):
"""
invoke rerank
"""
model_instance = ModelManager().get_model_instance(
tenant_id=tenant.id,
provider=payload.provider,
model_type=payload.model_type,
model=payload.model,
)
# invoke model
response = model_instance.invoke_rerank(
query=payload.query,
docs=payload.docs,
score_threshold=payload.score_threshold,
top_n=payload.top_n,
user=user_id,
)
return response
@classmethod
def invoke_tts(cls, user_id: str, tenant: Tenant, payload: RequestInvokeTTS):
"""
invoke tts
"""
model_instance = ModelManager().get_model_instance(
tenant_id=tenant.id,
provider=payload.provider,
model_type=payload.model_type,
model=payload.model,
)
# invoke model
response = model_instance.invoke_tts(
content_text=payload.content_text,
tenant_id=tenant.id,
voice=payload.voice,
user=user_id,
)
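        # hex-encode audio chunks so binary data survives JSON serialization back to the plugin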
def handle() -> Generator[dict, None, None]:
for chunk in response:
yield {"result": hexlify(chunk).decode("utf-8")}
return handle()
@classmethod
def invoke_speech2text(cls, user_id: str, tenant: Tenant, payload: RequestInvokeSpeech2Text):
"""
invoke speech2text
"""
model_instance = ModelManager().get_model_instance(
tenant_id=tenant.id,
provider=payload.provider,
model_type=payload.model_type,
model=payload.model,
)
# invoke model
with tempfile.NamedTemporaryFile(suffix=".mp3", mode="wb", delete=True) as temp:
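            # the audio arrives hex-encoded; decode it and spool to a temp file for the model API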
temp.write(unhexlify(payload.file))
temp.flush()
temp.seek(0)
response = model_instance.invoke_speech2text(
file=temp,
user=user_id,
)
return {
"result": response,
}
@classmethod
def invoke_moderation(cls, user_id: str, tenant: Tenant, payload: RequestInvokeModeration):
"""
invoke moderation
"""
model_instance = ModelManager().get_model_instance(
tenant_id=tenant.id,
provider=payload.provider,
model_type=payload.model_type,
model=payload.model,
)
# invoke model
response = model_instance.invoke_moderation(
text=payload.text,
user=user_id,
)
return {
"result": response,
}
@classmethod
def get_system_model_max_tokens(cls, tenant_id: str) -> int:
"""
get system model max tokens
"""
return ModelInvocationUtils.get_max_llm_context_tokens(tenant_id=tenant_id)
@classmethod
def get_prompt_tokens(cls, tenant_id: str, prompt_messages: list[PromptMessage]) -> int:
"""
get prompt tokens
"""
return ModelInvocationUtils.calculate_tokens(tenant_id=tenant_id, prompt_messages=prompt_messages)
@classmethod
def invoke_system_model(
cls,
user_id: str,
tenant: Tenant,
prompt_messages: list[PromptMessage],
) -> LLMResult:
"""
invoke system model
"""
return ModelInvocationUtils.invoke(
user_id=user_id,
tenant_id=tenant.id,
tool_type=ToolProviderType.PLUGIN,
tool_name="plugin",
prompt_messages=prompt_messages,
)
@classmethod
def invoke_summary(cls, user_id: str, tenant: Tenant, payload: RequestInvokeSummary):
"""
invoke summary
"""
max_tokens = cls.get_system_model_max_tokens(tenant_id=tenant.id)
content = payload.text
SUMMARY_PROMPT = """You are a professional language researcher, you are interested in the language
and you can quickly aimed at the main point of an webpage and reproduce it in your own words but
retain the original meaning and keep the key points.
however, the text you got is too long, what you got is possible a part of the text.
Please summarize the text you got.
Here is the extra instruction you need to follow:
<extra_instruction>
{payload.instruction}
</extra_instruction>
"""
if (
cls.get_prompt_tokens(
tenant_id=tenant.id,
prompt_messages=[UserPromptMessage(content=content)],
)
< max_tokens * 0.6
):
return content
def get_prompt_tokens(content: str) -> int:
return cls.get_prompt_tokens(
tenant_id=tenant.id,
prompt_messages=[
SystemPromptMessage(content=SUMMARY_PROMPT.replace("{payload.instruction}", payload.instruction)),
UserPromptMessage(content=content),
],
)
def summarize(content: str) -> str:
summary = cls.invoke_system_model(
user_id=user_id,
tenant=tenant,
prompt_messages=[
SystemPromptMessage(content=SUMMARY_PROMPT.replace("{payload.instruction}", payload.instruction)),
UserPromptMessage(content=content),
],
)
assert isinstance(summary.message.content, str)
return summary.message.content
lines = content.split("\n")
new_lines: list[str] = []
# split long line into multiple lines
        for line in lines:
            if not line.strip():
                continue
            # cheap heuristic: compare character length against the token budget before counting tokens
            if len(line) < max_tokens * 0.5:
                new_lines.append(line)
            elif get_prompt_tokens(line) > max_tokens * 0.7:
                while get_prompt_tokens(line) > max_tokens * 0.7:
                    new_lines.append(line[: int(max_tokens * 0.5)])
                    line = line[int(max_tokens * 0.5) :]
                new_lines.append(line)
            else:
                new_lines.append(line)
        # merge lines into messages within the token budget
        messages: list[str] = []
        for line in new_lines:
            if not messages:
                messages.append(line)
            elif len(messages[-1]) + len(line) < max_tokens * 0.5:
                messages[-1] += line
            elif get_prompt_tokens(messages[-1] + line) > max_tokens * 0.7:
                messages.append(line)
            else:
                messages[-1] += line
        summaries = [summarize(message) for message in messages]
result = "\n".join(summaries)
if (
cls.get_prompt_tokens(
tenant_id=tenant.id,
prompt_messages=[UserPromptMessage(content=result)],
)
> max_tokens * 0.7
):
return cls.invoke_summary(
user_id=user_id,
tenant=tenant,
payload=RequestInvokeSummary(text=result, instruction=payload.instruction),
)
return result
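
A hypothetical sketch of the LLM path above. The RequestInvokeLLM fields are inferred from how the payload is read in invoke_llm; the module path, provider, and model names are placeholders, and `tenant` is a loaded Tenant row.

from core.plugin.backwards_invocation.model import PluginModelBackwardsInvocation  # assumed path
from core.plugin.entities.request import RequestInvokeLLM
from core.model_runtime.entities.message_entities import UserPromptMessage

payload = RequestInvokeLLM(
    provider="openai",  # placeholder provider
    model_type="llm",
    model="gpt-4o-mini",  # placeholder model
    prompt_messages=[UserPromptMessage(content="Say hi")],
    completion_params={},
    tools=None,
    stop=None,
    stream=True,
)
result = PluginModelBackwardsInvocation.invoke_llm("user-id", tenant, payload)
for chunk in result:  # streaming requests yield LLMResultChunk objects
    print(chunk.delta.message.content, end="")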

View File

@@ -0,0 +1,117 @@
from core.plugin.backwards_invocation.base import BaseBackwardsInvocation
from core.workflow.nodes.enums import NodeType
from core.workflow.nodes.parameter_extractor.entities import (
ModelConfig as ParameterExtractorModelConfig,
)
from core.workflow.nodes.parameter_extractor.entities import (
ParameterConfig,
ParameterExtractorNodeData,
)
from core.workflow.nodes.question_classifier.entities import (
ClassConfig,
QuestionClassifierNodeData,
)
from core.workflow.nodes.question_classifier.entities import (
ModelConfig as QuestionClassifierModelConfig,
)
from services.workflow_service import WorkflowService
class PluginNodeBackwardsInvocation(BaseBackwardsInvocation):
@classmethod
def invoke_parameter_extractor(
cls,
tenant_id: str,
user_id: str,
parameters: list[ParameterConfig],
model_config: ParameterExtractorModelConfig,
instruction: str,
query: str,
) -> dict:
"""
Invoke parameter extractor node.
:param tenant_id: str
:param user_id: str
:param parameters: list[ParameterConfig]
:param model_config: ModelConfig
:param instruction: str
:param query: str
:return: dict
"""
workflow_service = WorkflowService()
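        # arbitrary placeholder id: the node runs free-standing, outside any saved workflow graph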
node_id = "1919810"
node_data = ParameterExtractorNodeData(
title="parameter_extractor",
desc="parameter_extractor",
parameters=parameters,
reasoning_mode="function_call",
query=[node_id, "query"],
model=model_config,
instruction=instruction, # instruct with variables are not supported
)
node_data_dict = node_data.model_dump()
node_data_dict["type"] = NodeType.PARAMETER_EXTRACTOR.value
execution = workflow_service.run_free_workflow_node(
node_data_dict,
tenant_id=tenant_id,
user_id=user_id,
node_id=node_id,
user_inputs={
f"{node_id}.query": query,
},
)
return {
"inputs": execution.inputs_dict,
"outputs": execution.outputs_dict,
"process_data": execution.process_data_dict,
}
@classmethod
def invoke_question_classifier(
cls,
tenant_id: str,
user_id: str,
model_config: QuestionClassifierModelConfig,
classes: list[ClassConfig],
instruction: str,
query: str,
) -> dict:
"""
Invoke question classifier node.
:param tenant_id: str
:param user_id: str
:param model_config: ModelConfig
:param classes: list[ClassConfig]
:param instruction: str
:param query: str
:return: dict
"""
workflow_service = WorkflowService()
node_id = "1919810"
node_data = QuestionClassifierNodeData(
title="question_classifier",
desc="question_classifier",
query_variable_selector=[node_id, "query"],
model=model_config,
classes=classes,
instruction=instruction, # instruct with variables are not supported
)
        node_data_dict = node_data.model_dump()
        node_data_dict["type"] = NodeType.QUESTION_CLASSIFIER.value
execution = workflow_service.run_free_workflow_node(
node_data_dict,
tenant_id=tenant_id,
user_id=user_id,
node_id=node_id,
user_inputs={
f"{node_id}.query": query,
},
)
return {
"inputs": execution.inputs_dict,
"outputs": execution.outputs_dict,
"process_data": execution.process_data_dict,
}
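
A sketch of a parameter-extractor call. The ParameterConfig/ModelConfig field names and the module path are assumptions based on the entity imports above; all IDs and values are placeholders.

from core.plugin.backwards_invocation.node import PluginNodeBackwardsInvocation  # assumed path
from core.workflow.nodes.parameter_extractor.entities import ModelConfig, ParameterConfig

result = PluginNodeBackwardsInvocation.invoke_parameter_extractor(
    tenant_id="tenant-id",  # placeholder
    user_id="user-id",  # placeholder
    parameters=[
        ParameterConfig(name="city", type="string", description="city name", required=True),
    ],
    model_config=ModelConfig(provider="openai", name="gpt-4o-mini", mode="chat", completion_params={}),
    instruction="Extract the city mentioned in the query.",
    query="What's the weather like in Berlin tomorrow?",
)
print(result["outputs"])  # e.g. {"city": "Berlin"}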

View File

@@ -0,0 +1,45 @@
from collections.abc import Generator
from typing import Any
from core.callback_handler.workflow_tool_callback_handler import DifyWorkflowCallbackHandler
from core.plugin.backwards_invocation.base import BaseBackwardsInvocation
from core.tools.entities.tool_entities import ToolInvokeMessage, ToolProviderType
from core.tools.tool_engine import ToolEngine
from core.tools.tool_manager import ToolManager
from core.tools.utils.message_transformer import ToolFileMessageTransformer
class PluginToolBackwardsInvocation(BaseBackwardsInvocation):
"""
Backwards invocation for plugin tools.
"""
@classmethod
def invoke_tool(
cls,
tenant_id: str,
user_id: str,
tool_type: ToolProviderType,
provider: str,
tool_name: str,
tool_parameters: dict[str, Any],
) -> Generator[ToolInvokeMessage, None, None]:
"""
invoke tool
"""
        # get the tool runtime and invoke it through the standard engine
        tool_runtime = ToolManager.get_tool_runtime_from_plugin(
            tool_type, tenant_id, provider, tool_name, tool_parameters
        )
        response = ToolEngine.generic_invoke(
            tool_runtime, tool_parameters, user_id, DifyWorkflowCallbackHandler(), workflow_call_depth=1
        )
        # transform file messages into plugin-consumable form before returning
        return ToolFileMessageTransformer.transform_tool_invoke_messages(
            response, user_id=user_id, tenant_id=tenant_id
        )
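
A sketch of a tool invocation. The module path, provider, and tool names are placeholders; ToolProviderType.BUILT_IN is one of the enum's members, used here for illustration.

from core.plugin.backwards_invocation.tool import PluginToolBackwardsInvocation  # assumed path
from core.tools.entities.tool_entities import ToolProviderType

for message in PluginToolBackwardsInvocation.invoke_tool(
    tenant_id="tenant-id",  # placeholder
    user_id="user-id",  # placeholder
    tool_type=ToolProviderType.BUILT_IN,
    provider="time",  # placeholder provider
    tool_name="current_time",  # placeholder tool
    tool_parameters={},
):
    print(message)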