Step-by-Step Guide to Mastering Dify
Introduction
In this blog post, we will demonstrate how to use Dify’s workflow to build a news-pushing application.
Our goal is to fetch the latest articles from Hacker News, organize the information, and push it to a Feishu group. Through this process, we will experience the powerful features and convenience of automated processing with workflows.
What is a Workflow?
Before we start, let’s briefly understand what a workflow is.
A workflow is an ordered series of tasks, activities, or steps designed to complete a specific business process. It describes the sequence of tasks, their conditions, responsible parties, and other related information, ensuring that work proceeds according to established processes and rules. Workflows often involve coordination and interaction between multiple participants and systems.
Workflows break complex tasks down into smaller sub-tasks (nodes). This reduces system complexity, lessens reliance on prompt engineering and model inference capability, improves the performance of LLM applications on complex tasks, and makes the system more interpretable, stable, and fault-tolerant.
Node Types Used
In the implementation process, we will use the following node types:
- Start Node: Configure initial parameters for program startup.
- HTTP Request Node: Send HTTP requests to fetch data.
- Iteration Node: Iterate over arrays to execute multiple steps.
- Parameter Extractor Node: Process and extract parameters.
- Template Node: Convert arrays to text.
- LLM Node: Call large language models to process natural language.
- Send Feishu Message Node: Push organized information to Feishu.
Steps
Step 1: Start
Using the start node, we can configure the program's initial parameters, such as an API key, the categories to fetch, etc. This is our first step. Next, we start fetching the data we need.
Step 2: HTTP Request
Using Hacker News as an example, first find the API that returns the list data: https://hacker-news.firebaseio.com/v0/beststories.json?print=pretty.
Create an HTTP request node, which can be configured with the URL, request headers, query parameters, request body content, and authentication information.
The HTTP request’s return values include the response body, status code, response headers, and files. These variables can be directly used in subsequent nodes, which is very convenient.
After configuring the node, we can try running it.
The request is successful, and we get a list of article IDs. Everything is going smoothly. Next, we need to iterate over the IDs to get the details of each article.
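Outside Dify, the request this HTTP Request node makes can be sketched in a few lines of Python (a minimal sketch; the endpoint is the official Hacker News Firebase API, and the function name is ours):

```python
import json
import urllib.request

# Official Hacker News endpoint returning a JSON array of best-story IDs.
BEST_STORIES_URL = "https://hacker-news.firebaseio.com/v0/beststories.json"

def fetch_best_story_ids() -> list[int]:
    """Fetch the list of best-story IDs, e.g. [38501234, 38499876, ...]."""
    with urllib.request.urlopen(BEST_STORIES_URL, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The node's response body is exactly this JSON array, serialized as a string, which matters in Step 3.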
Step 3: Iteration
The purpose of iteration is to execute multiple steps on an array until all results are output, making it a powerful tool for repetitive tasks. Application scenarios include long article generators, traversal requests, etc.
Input the IDs obtained from the article list into the iterator.
However, after connecting to the iteration node, we find no available variables. The documentation explains why:
The input to an iteration node must be formatted as a list object.
In the list request above, the returned body is a string, so we need to process the result first by introducing a Parameter Extractor node before the iteration node.
Use the body of the list response as the input parameter, and set the extracted parameter to a numeric array of IDs. The instruction can be declared simply as:
example:
body: [1,2,3,4,5...500]
Return Array[Number], and keep only 10
This not only extracts the parameters but also preprocesses them, for example by limiting the number of results. This ensures that the iteration node receives a well-formatted, compliant array input parameter ids.
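What the Parameter Extractor does here can be expressed in plain Python: parse the string body into a number array and keep only the first entries (a sketch; the function name and the default limit of 10 mirror the instruction above):

```python
import json

def extract_ids(body: str, limit: int = 10) -> list[int]:
    """Parse a JSON array body like "[1,2,3]" and keep at most `limit` IDs."""
    ids = json.loads(body)  # string body -> Array[Number]
    return ids[:limit]
```

In Dify the LLM performs this extraction from the instruction; the sketch only illustrates the transformation being asked for.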
Inside the iteration node, we can access each element of the iteration, i.e., each article's id. We can then process each id further, for example by sending a new HTTP request to fetch that article's details. This ensures that every article is correctly fetched and processed.
Then we add the HTTP node to request details, and running it will give us the detailed results.
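Each inner HTTP request targets the item endpoint of the same Hacker News API. Per iteration, the call can be sketched as (the helper names are ours; the endpoint is the official one):

```python
import json
import urllib.request

def item_url(story_id: int) -> str:
    """Official Hacker News endpoint for one item's details."""
    return f"https://hacker-news.firebaseio.com/v0/item/{story_id}.json"

def fetch_item(story_id: int) -> dict:
    """Fetch one article's details: title, url, score, by, time, etc."""
    with urllib.request.urlopen(item_url(story_id), timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```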
Learning from the earlier experience, we note that the iterator's output is Array[String].
Next, we need to use the LLM node to organize and summarize the returned results. Let’s look at the LLM node.
Step 4: LLM Node
The LLM node is used to call large language models to answer questions or process natural language.
If you are using Dify for the first time, before selecting a model in the LLM node, you need to complete the model configuration in System Settings—Model Providers.
We create a new LLM node linked to the iteration and configure our prompt. When filling in the User content, we find that we cannot reference the article details: the LLM node does not accept array-format input parameters, so we first need to convert the array to text.
By checking the documentation (How to Convert Arrays to Text), we learn that we need to use either the Code Node or the Template Node for the conversion.
Here we use the Template Node for conversion:
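The Template node renders Jinja2 templates. Assuming the iterator's output is bound to a variable named articles (the variable name here is our assumption; use whatever name you gave it in the node), a template like the following joins the array into one block of text the LLM node can accept:

```jinja2
{% for article in articles %}
{{ article }}
{% endfor %}
```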
Now our LLM node can run normally. Click run to debug.
The run succeeds.
Step 5: Send Feishu Message
Next, we push the organized information to Feishu. Add a node: Tool - Send Feishu Message.
After creating a bot in the Feishu group and obtaining the WEBHOOK KEY, fill it in:
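Under the hood, the tool posts a JSON message to the bot's webhook. A minimal sketch of that call (the payload shape follows Feishu's custom-bot text message format; the helper names and the exact webhook URL pattern are assumptions worth verifying against Feishu's docs):

```python
import json
import urllib.request

def build_payload(text: str) -> dict:
    """Feishu custom-bot text message body."""
    return {"msg_type": "text", "content": {"text": text}}

def send_to_feishu(webhook_key: str, text: str) -> None:
    # Assumed custom-bot webhook URL pattern; the key comes from the bot settings.
    url = f"https://open.feishu.cn/open-apis/bot/v2/hook/{webhook_key}"
    data = json.dumps(build_payload(text)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=10)
```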
Implementation Effect
Run again, and the result is displayed:
At this point, we have completed a Dify application that fetches news, processes data, and pushes it to IM. Through this process, we demonstrated how to use Dify’s workflow to achieve automated news pushing and experienced the powerful features and convenience of workflows.
Conclusion
Today we learned how to use Dify’s workflow to build a news-pushing application. Starting from configuring initial parameters, we gradually fetched data through HTTP requests, processed data using iteration nodes, organized data using LLM nodes, and finally pushed the organized information to Feishu. The entire process not only demonstrated the powerful features of workflows but also allowed us to experience the convenience of automated processing.
Of course, the powerful features of Dify workflows are not limited to this. It also provides more nodes and functions, waiting for us to explore and apply. We will continue to publish related articles in the future, leading everyone to further learn and explore more possibilities of Dify workflows.