DocusaurusLoader#

class langchain_community.document_loaders.docusaurus.DocusaurusLoader(
url: str,
custom_html_tags: List[str] | None = None,
**kwargs: Any,
)[source]#

Load from Docusaurus Documentation.

It leverages the SitemapLoader to loop through the generated pages of a Docusaurus documentation website and extract the content by looking for specific HTML tags. By default, the parser searches for the main content of the Docusaurus page, which is normally within the <article> tag. You can also define your own custom HTML tags by providing them as a list, for example: ["div", ".main", "a"].

Initialize DocusaurusLoader

Parameters:
  • url (str) – The base URL of the Docusaurus website.

  • custom_html_tags (List[str] | None) – Optional custom HTML tags to extract content from pages.

  • kwargs (Any) – Additional args to extend the underlying SitemapLoader, for example: filter_urls, blocksize, meta_function, is_local, continue_on_failure (see the sketch below).
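
A minimal instantiation sketch; the site URL, filter pattern, and tag selectors below are illustrative assumptions, not values taken from this reference. The underlying SitemapLoader typically needs beautifulsoup4 and lxml installed.

from langchain_community.document_loaders import DocusaurusLoader

# Assumed example site; any Docusaurus site that publishes /sitemap.xml should work.
loader = DocusaurusLoader(
    "https://python.langchain.com",
    # Selectors to extract instead of the default <article> content (illustrative values).
    custom_html_tags=["div", ".main", "a"],
    # SitemapLoader kwarg: only crawl URLs matching these regex patterns (assumed pattern).
    filter_urls=["https://python.langchain.com/docs/"],
)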

Attributes

web_path

Methods

__init__(url[, custom_html_tags])

Initialize DocusaurusLoader

alazy_load()

Async lazy load text from the url(s) in web_path.

aload()

ascrape_all(urls[, parser])

Async fetch all urls, then return soups for all results.

fetch_all(urls)

Fetch all urls concurrently with rate limiting.

lazy_load()

Load sitemap.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

parse_sitemap(soup, *[, depth])

Parse sitemap xml and load into a list of dicts.

scrape([parser])

Scrape data from webpage and return it in BeautifulSoup format.

scrape_all(urls[, parser])

Fetch all urls, then return soups for all results.

__init__(
url: str,
custom_html_tags: List[str] | None = None,
**kwargs: Any,
)[source]#

Initialize DocusaurusLoader

Parameters:
  • url (str) – The base URL of the Docusaurus website.

  • custom_html_tags (List[str] | None) – Optional custom HTML tags to extract content from pages.

  • kwargs (Any) – Additional args to extend the underlying SitemapLoader, for example: filter_urls, blocksize, meta_function, is_local, continue_on_failure

async alazy_load() โ†’ AsyncIterator[Document]#

Async lazy load text from the url(s) in web_path.

Return type:

AsyncIterator[Document]
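
A short sketch of consuming the async iterator, assuming an example site URL:

import asyncio

from langchain_community.document_loaders import DocusaurusLoader


async def main() -> None:
    loader = DocusaurusLoader("https://docusaurus.io")  # assumed example site
    async for doc in loader.alazy_load():
        # Documents are streamed one page at a time instead of materializing the full list.
        print(doc.metadata.get("source"))


asyncio.run(main())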

aload() โ†’ List[Document]#

Deprecated since version 0.3.14: See the API reference for updated usage (https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html). It will not be removed until langchain-community==1.0.

Asynchronously load text from the urls in web_path into Documents.

Return type:

List[Document]

async ascrape_all(
urls: List[str],
parser: str | None = None,
) โ†’ List[Any]#

Async fetch all urls, then return soups for all results.

Parameters:
  • urls (List[str])

  • parser (str | None)

Return type:

List[Any]

async fetch_all(
urls: List[str],
) โ†’ Any#

Fetch all urls concurrently with rate limiting.

Parameters:

urls (List[str])

Return type:

Any

lazy_load() โ†’ Iterator[Document]#

Load sitemap.

Return type:

Iterator[Document]
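
A sketch of lazy, page-by-page iteration, assuming an example site URL:

from langchain_community.document_loaders import DocusaurusLoader

loader = DocusaurusLoader("https://docusaurus.io")  # assumed example site
for doc in loader.lazy_load():
    # Each Document corresponds to one page listed in the sitemap.
    print(doc.metadata.get("source"), len(doc.page_content))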

load() โ†’ list[Document]#

Load data into Document objects.

Returns:

the documents.

Return type:

list[Document]

load_and_split(
text_splitter: TextSplitter | None = None,
) โ†’ list[Document]#

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method. It should be considered deprecated!

Parameters:

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Raises:

ImportError – If langchain-text-splitters is not installed and no text_splitter is provided.

Returns:

List of Documents.

Return type:

list[Document]
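
A hedged sketch pairing the loader with RecursiveCharacterTextSplitter; the URL and chunk sizes are arbitrary assumptions:

from langchain_community.document_loaders import DocusaurusLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = DocusaurusLoader("https://docusaurus.io")  # assumed example site
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
# Loads every sitemap page, then splits each Document into ~1000-character chunks.
chunks = loader.load_and_split(text_splitter=splitter)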

parse_sitemap(
soup: Any,
*,
depth: int = 0,
) โ†’ List[dict]#

Parse sitemap xml and load into a list of dicts.

Parameters:
  • soup (Any) – BeautifulSoup object.

  • depth (int) – Current depth of the sitemap. Default: 0

Returns:

List of dicts.

Return type:

List[dict]

scrape(
parser: str | None = None,
) โ†’ Any#

Scrape data from webpage and return it in BeautifulSoup format.

Parameters:

parser (str | None)

Return type:

Any

scrape_all(
urls: List[str],
parser: str | None = None,
) โ†’ List[Any]#

Fetch all urls, then return soups for all results.

Parameters:
  • urls (List[str])

  • parser (str | None)

Return type:

List[Any]

Examples using DocusaurusLoader
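
A minimal end-to-end sketch, assuming a reachable Docusaurus site that serves /sitemap.xml (the URL is illustrative):

from langchain_community.document_loaders import DocusaurusLoader

loader = DocusaurusLoader("https://python.langchain.com")  # assumed example site
docs = loader.load()
print(len(docs))
print(docs[0].metadata)           # e.g. source URL and other sitemap metadata
print(docs[0].page_content[:200])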