**README.md** (+25 lines changed: 25 additions & 0 deletions)
```diff
@@ -19,6 +19,7 @@ Fetches the transcript of a specified YouTube video.
 #### Parameters
 - **url** *(string)*: The full URL of the YouTube video. This field is required.
 - **lang** *(string, optional)*: The desired language for the transcript. Defaults to `en` if not specified.
+- **next_cursor** *(string, optional)*: Cursor to retrieve the next page of the transcript.
 
 ## Installation
 > [!NOTE]
```
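For reference, these three parameters combine into the arguments of a single request. A minimal sketch with placeholder values (only `url` is required):

```typescript
// Sketch of the arguments for one transcript request, based on the
// parameters documented above; the values are placeholders.
const transcriptArgs = {
  url: "https://www.youtube.com/watch?v=XXXXXXXXXXX", // required
  lang: "en",                                         // optional, defaults to "en"
  // next_cursor: "<cursor from a previous response>", // optional, only for follow-up pages
};
```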
```diff
@@ -67,6 +68,30 @@ npx -y @smithery/cli list clients
 
 Refer to the [Smithery CLI documentation](https://github.com/smithery-ai/cli) for additional details.
 
+## Response Pagination
+When retrieving transcripts for longer videos, the content may exceed the token size limits of the LLM.
+To avoid this issue, this server splits transcripts that exceed 50,000 characters.
+If a transcript is split, the response will include a `next_cursor`.
+To retrieve the next part, include this `next_cursor` value in your request.
+
+The token size limits vary depending on the LLM and language you are using. If you need to split responses into smaller chunks, you can adjust this using the `--response-limit` command line argument. For example, the configuration below splits responses to contain no more than 15,000 characters each:
```
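A minimal end-to-end sketch of both points, assuming the server is launched over stdio with the MCP TypeScript SDK; the tool name `get_transcript` and the `<package-name>` placeholder are assumptions, not taken from this change:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Start the server over stdio, passing --response-limit so each page of a
// split transcript stays under 15,000 characters. "<package-name>" is a
// placeholder for however the server is actually installed and started.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "<package-name>", "--response-limit", "15000"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// First request: no cursor. The tool name "get_transcript" is assumed.
const firstPage = await client.callTool({
  name: "get_transcript",
  arguments: { url: "https://www.youtube.com/watch?v=XXXXXXXXXXX", lang: "en" },
});

// If the transcript was split, the response includes a next_cursor value.
// Pass it back unchanged (here together with the same url) to fetch the
// following page; repeat until no next_cursor is returned.
const nextCursor = "<value taken from the previous response>";
const secondPage = await client.callTool({
  name: "get_transcript",
  arguments: {
    url: "https://www.youtube.com/watch?v=XXXXXXXXXXX",
    lang: "en",
    next_cursor: nextCursor,
  },
});
```

How `next_cursor` is surfaced in the response depends on the server's response format; the value should be passed back unchanged in the follow-up request.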