* Your Gaia node could be supplemented by a knowledge base that is specific to your proprietary code repository, programming language choices, and coding guidelines / styles.

* Your Gaia node could ensure that your code stays private within your organization.

## Prerequisites

You will need a Gaia node to provide LLM API services. You can

* [Run your own node](../../getting-started/quick-start)

* [Use a public node](../../nodes)

In this tutorial, we will use public [Qwen 2.5 Coder](https://github.com/QwenLM/Qwen2.5-Coder) nodes to power Cursor.

| Model type | API base URL | Model name |
|-----|--------|-----|
| General coding assistant | `https://coder.gaia.domains/v1` | coder |
| Coding assistant with Rust knowledge | `https://rustcoder.gaia.domains/v1` | rustcoder |
| Rust expert (slower but more accurate) | `https://rustexpert.gaia.domains/v1` | rustexpert |
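Before configuring Cursor, you can sanity-check any of these endpoints with an OpenAI-compatible client. Below is a minimal sketch using the `openai` Python package (an assumption; any OpenAI-compatible client works). The base URL and model name come from the table above, and the API key is an arbitrary placeholder, just as in the Cursor setup below.

```python
# Minimal sanity check of a Gaia node's OpenAI-compatible API.
# Assumes `pip install openai`. The base URL and model name are
# taken from the table above; the API key is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="https://coder.gaia.domains/v1",
    api_key="GAIA",  # any random characters work, as in the Cursor setup below
)

response = client.chat.completions.create(
    model="coder",
    messages=[{"role": "user", "content": "Write a hello world program in Rust."}],
)
print(response.choices[0].message.content)
```

If this prints a code snippet, the node is reachable and ready to serve Cursor.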
> A limitation of Cursor is that it does not support local LLM services. A Gaia node comes with a default networking tunnel that turns your local LLM service into an HTTPS service accessible from the Internet. That allows Cursor to use your own private LLM for coding. Start your own [Qwen Coder](https://github.com/GaiaNet-AI/node-configs/tree/main/qwen-2.5-coder-7b-instruct) or [Qwen Coder with Rust](https://github.com/GaiaNet-AI/node-configs/tree/main/qwen-2.5-coder-7b-instruct_rustlang) node today!
## Configure Cursor
First, download and install [Cursor](https://www.cursor.com/). Click the **Settings** button at the top right, then click **Models** to configure the backend LLM service.

Second, add a model named `coder` and turn off all the other models, such as `gpt-4o`.

Third, go to the OpenAI API Key section:

* Click **Override OpenAI Base URL** and enter `https://coder.gaia.domains/v1`.

* For the OpenAI API key, you can use any random characters, such as `GAIA`. Click **Verify** to test that the connection works.

## Use Cursor
You can use

* **command + K** to edit the highlighted code.

* **command + L** to open the chat room and ask questions about the code.

## Video Guide
---
# Dify + Gaia
You can configure the Dify framework to use any Gaia node as the backend LLM API. That allows you to use your own or community Gaia nodes in any application built on Dify. It supports

* The hosted [Dify.ai](https://dify.ai/) service.

* Products and services that embed the Dify framework, such as the [Terminus](https://www.jointerminus.com/) project.

* Any product built on the open-source [Dify framework](https://github.com/langgenius/dify).
## Steps
First, log into Dify's web portal and select `Settings | Model Provider`. From the list, you can add an OpenAI-API-compatible provider.

Add an LLM model with the model name and API endpoint listed on your Gaia node's web dashboard. Or, you can just add [a popular Gaia node](../../nodes/nodes.md).

Leave the API Key field empty.

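If you are unsure which model names your node serves, you can query them programmatically. Below is a minimal sketch, assuming the node exposes the standard OpenAI-compatible `/v1/models` endpoint; the node URL is a placeholder.

```python
# List the models a Gaia node serves, assuming it exposes the
# standard OpenAI-compatible /v1/models endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR-NODE.gaia.domains/v1",  # placeholder node URL
    api_key="EMPTY",  # placeholder; Dify's API Key field stays empty
)

for model in client.models.list():
    print(model.id)  # use these names in Dify's model fields
```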
Most Dify applications also require an embedding model to search text in the vector space.

Add an embedding model with the model name and API endpoint listed on your Gaia node's web dashboard. Or, you can just add [a popular Gaia node](../../nodes).

Leave the API Key field empty.

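To see what such an embedding request looks like, here is a minimal sketch using the `openai` Python package; the node URL and embedding model name are placeholders for the values shown on your Gaia node's web dashboard.

```python
# Sketch of an OpenAI-compatible embedding request to a Gaia node.
# "YOUR-NODE" and "YOUR-EMBEDDING-MODEL" are placeholders; copy the
# real values from your Gaia node's web dashboard.
from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR-NODE.gaia.domains/v1",
    api_key="EMPTY",  # placeholder; Dify's API Key field stays empty
)

result = client.embeddings.create(
    model="YOUR-EMBEDDING-MODEL",
    input=["Gaia nodes serve an OpenAI-compatible API."],
)
print(len(result.data[0].embedding))  # dimension of the returned vector
```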
That's it. You can now see that the new models are available in the top panel of Dify for every chatbot or agent. Just select your Gaia models for chat or embedding, and the Dify app will use them automatically!


---
# FlowiseAI tool call
FlowiseAI is a low-code tool for developers to build customized LLM orchestration flows & AI agents.

You can configure the FlowiseAI tool to use a Gaia node that supports [LLM tool calling](https://github.com/LlamaEdge/LlamaEdge/blob/main/llama-api-server/doc/ToolUse.md).
## Prerequisites
You will need a Gaia node ready to provide LLM services through a public URL.

In this tutorial, you will need to [set up a public node with tool call support](https://github.com/GaiaNet-AI/node-configs/blob/main/mistral-0.3-7b-instruct-tool-call/README.md).
## Start a FlowiseAI server
Follow [the FlowiseAI guide](https://docs.flowiseai.com/getting-started) to install Flowise locally:
```
npm install -g flowise
npx flowise start
```
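Once the server is up, the Flowise UI is typically available at `http://localhost:3000` (see the FlowiseAI guide linked above for details).

Before wiring the Gaia node into a Flowise flow, you may want to confirm that it answers tool-call requests. Below is a minimal sketch using the `openai` Python package; the node URL, model name, and the `get_weather` function are all placeholders for illustration.

```python
# Sketch of an OpenAI-style tool-call request against a Gaia node.
# The base URL, model name, and the get_weather tool are placeholders;
# substitute the values shown on your own node's dashboard.
from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR-NODE.gaia.domains/v1",
    api_key="GAIA",  # placeholder key
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="YOUR-MODEL",
    messages=[{"role": "user", "content": "What is the weather in Tokyo?"}],
    tools=tools,
)
# A tool-call-capable node should populate tool_calls in its reply.
print(response.choices[0].message.tool_calls)
```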