
Commit e20582d

Merge pull request #33 from jkawamoto/lmstudio
Update README with details on LM Studio configuration and response splitting
2 parents: cefb91d + fcc80c7 · commit e20582d

File tree: 1 file changed (+11 −2 lines)


README.md

Lines changed: 11 additions & 2 deletions
````diff
@@ -51,6 +51,12 @@ After editing, restart the application.
 For more information,
 see: [For Claude Desktop Users - Model Context Protocol](https://modelcontextprotocol.io/quickstart/user).
 
+### For LM Studio
+To configure this server for LM Studio, click the button below.
+
+[![Add MCP Server youtube-transcript to LM Studio](https://files.lmstudio.ai/deeplink/mcp-install-light.svg)](https://lmstudio.ai/install-mcp?name=youtube-transcript&config=eyJjb21tYW5kIjoidXZ4IiwiYXJncyI6WyItLWZyb20iLCJnaXQraHR0cHM6Ly9naXRodWIuY29tL2prYXdhbW90by9tY3AteW91dHViZS10cmFuc2NyaXB0IiwibWNwLXlvdXR1YmUtdHJhbnNjcmlwdCJdfQ%3D%3D)
+
+
 ### Installing via Smithery
 > [!NOTE]
 > When using this method, you will be utilizing servers hosted by Smithery.
````
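The `config` query parameter in that LM Studio deeplink is base64-encoded JSON. Decoded, it is the launch configuration LM Studio will register for the server (verify against the link itself if in doubt):

```json
{
  "command": "uvx",
  "args": [
    "--from",
    "git+https://github.com/jkawamoto/mcp-youtube-transcript",
    "mcp-youtube-transcript"
  ]
}
```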
````diff
@@ -78,9 +84,12 @@ Refer to the [Smithery CLI documentation](https://github.com/smithery-ai/cli) fo
 When retrieving transcripts for longer videos, the content may exceed the token size limits of the LLM.
 To avoid this issue, this server splits transcripts that exceed 50,000 characters.
 If a transcript is split, the response will include a `next_cursor`.
-To retrieve the next part, include this `next_cursor` value in your request.
+To retrieve the next part, include this `next_cursor` value in your request.
 
-The token size limits vary depending on the LLM and language you are using. If you need to split responses into smaller chunks, you can adjust this using the `--response-limit` command line argument. For example, the configuration below splits responses to contain no more than 15,000 characters each:
+The token size limits vary depending on the LLM and language you are using.
+If you need to split responses into smaller chunks,
+you can adjust this using the `--response-limit` command line argument.
+For example, the configuration below splits responses to contain no more than 15,000 characters each:
 
 ```json
 {
````
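The example configuration is cut off at the end of this hunk. A plausible completion, assuming the standard `mcpServers` layout used by Claude Desktop and the launch command decoded from the LM Studio deeplink above (the `15000` value comes from the prose; passing it as a trailing argument pair is an assumption about the CLI):

```json
{
  "mcpServers": {
    "youtube-transcript": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/jkawamoto/mcp-youtube-transcript",
        "mcp-youtube-transcript",
        "--response-limit",
        "15000"
      ]
    }
  }
}
```

To illustrate the `next_cursor` flow described in the hunk: a follow-up tool call repeats the original arguments and adds the cursor returned by the previous response. The parameter names below are illustrative assumptions, not confirmed by this diff:

```json
{
  "url": "https://www.youtube.com/watch?v=VIDEO_ID",
  "next_cursor": "<next_cursor value from the previous response>"
}
```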
