This project implements an asynchronous document parsing service using Ray Serve. Users can submit documents, poll for parsing status, and retrieve results. It's designed for both standalone local development and cluster deployment.
- Asynchronous API: Submit, status, and result endpoints.
- Scalable: Built on Ray Serve, allowing for scaling from a single machine to a cluster.
- Containerized: Dockerfile provided for easy building and deployment.
- Modern Tooling: Uses `uv` for package management.
- Python 3.10+
- uv (Python package manager)
- Docker (for containerized deployment)
- Ray (implicitly managed by `uv` and the Docker setup, for the most part)
- Clone the repository:

  ```bash
  git clone https://github.com/apecloud/doc-ray
  cd doc-ray
  ```

- Create and activate a virtual environment using `uv`:

  ```bash
  uv venv
  source .venv/bin/activate
  ```

- Install dependencies using `uv`:

  ```bash
  uv sync --all-extras
  ```

- Prepare MinerU prerequisites: Run the script to download the models required by MinerU and generate the `mineru.json` file.

  ```bash
  mineru-models-download -m pipeline
  cp ~/mineru.json .
  ```
- Run the service locally: The `run.py` script initializes a local Ray instance and deploys the Ray Serve application.

  ```bash
  python run.py
  ```

  The service will be available at `http://localhost:8639` by default; a quick way to check it from Python is shown after this list.
  - API documentation (Swagger UI) is available at `http://localhost:8639/docs`.
  - Ray Dashboard: `http://localhost:8265`.
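To confirm the service came up, you can probe the Swagger UI endpoint. A minimal sketch using the `requests` library, assuming the default port above:

```python
import requests

# Liveness check against the local deployment (default port assumed).
resp = requests.get("http://localhost:8639/docs", timeout=5)
print(resp.status_code)  # 200 means the Ray Serve HTTP endpoint is reachable
```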
- POST `/submit`: Submits a document for parsing.
  - Request Body: `{"document_data": "content of the document"}`
  - Response (Status 202): `{"job_id": "unique_job_id", "message": "Document submitted..."}`
- GET `/status/{job_id}`: Checks the parsing status.
  - Response: `{"job_id": "unique_job_id", "status": "processing|completed|failed", "error": "error message if failed"}`
- GET `/result/{job_id}`: Retrieves the parsing result.
  - Response (if completed): `{"job_id": "unique_job_id", "status": "completed", "result": {"markdown": "parsed markdown content"}}`
  - Response (if pending/failed): `{"job_id": "unique_job_id", "status": "processing|failed", "message": "...", "error": "..."}`
- DELETE `/result/{job_id}`: Deletes a job and its result to free up resources.
  - Response (Status 200, if successful): `{"job_id": "unique_job_id", "message": "Job and result deleted successfully."}`
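As a quick illustration of the full job lifecycle, here is a minimal sketch using the `requests` library. It assumes the service is reachable at the default `http://localhost:8639` and that the request and response shapes match the endpoint descriptions above:

```python
import time
import requests

BASE_URL = "http://localhost:8639"  # default address; adjust if needed

# Submit a document for parsing (request body shape as documented above).
resp = requests.post(f"{BASE_URL}/submit",
                     json={"document_data": "content of the document"})
resp.raise_for_status()
job_id = resp.json()["job_id"]

# Poll the status endpoint until the job completes or fails.
while True:
    status = requests.get(f"{BASE_URL}/status/{job_id}").json()
    if status["status"] in ("completed", "failed"):
        break
    time.sleep(2)

# Retrieve the parsed markdown on success, then delete the job to free resources.
if status["status"] == "completed":
    result = requests.get(f"{BASE_URL}/result/{job_id}").json()
    print(result["result"]["markdown"])
requests.delete(f"{BASE_URL}/result/{job_id}")
```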
Once the doc-ray service is running, you can use the provided `client.py` script to submit a document for parsing and test the service. The script accepts both local file paths and URLs as input.

- Ensure `client.py` is executable or run it with `python`: The script is located in the `scripts` directory.
- Basic Usage: Navigate to the root directory of the project and run:
  - For a local file:

    ```bash
    python scripts/client.py path/to/your/document.pdf
    ```

    Replace `path/to/your/document.pdf` with the actual path to the local document you want to test.
  - For a URL:

    ```bash
    python scripts/client.py https://raw.githubusercontent.com/microsoft/markitdown/da7bcea527ed04cf6027cc8ece1e1aad9e08a9a1/packages/markitdown/tests/test_files/test.pdf
    ```

    Replace the URL with the document URL you want to test; the script downloads the content from the URL before submitting it.
- Specifying `DOCRAY_HOST` (if not default): If your doc-ray service is not running at the default `http://localhost:8639` (for example, it is on a different host or port), set the `DOCRAY_HOST` environment variable:

  ```bash
  DOCRAY_HOST="http://your-doc-ray-service-address:port" python scripts/client.py path/to/your/document.pdf
  ```
- (Optional) Build the Docker image locally:

  ```bash
  make build
  ```

- Run the Docker container:

  ```bash
  make run-standalone
  # Or run:
  # docker run -d -p 8639:8639 -p 8265:8265 --gpus=all --name doc-ray apecloud/doc-ray:latest
  ```

  - `-d`: Run in detached mode.
  - `-p 8639:8639`: Maps the container's port 8639 (Ray Serve HTTP) to the host's port 8639.
  - `-p 8265:8265`: Maps the container's port 8265 (Ray Dashboard) to the host's port 8265.
  - `--gpus=all`: Enables GPU support. If your Docker environment does not provide GPU access (e.g., Docker Desktop for macOS), omit this flag. Using GPUs is strongly recommended for optimal performance.
  - `--name doc-ray`: Assigns a name to the container for easier management.

  The service will be accessible at `http://localhost:8639` on your host machine.
- Set up a Ray Cluster: Follow the official Ray documentation to set up a multi-node Ray cluster. (See: Ray Cluster Setup)
- Deploy the application to the cluster: Once your Ray cluster is running and your local environment is configured to connect to it (e.g., via `ray.init(address="ray://<head_node_ip>:10001")` or by setting `RAY_ADDRESS`), you can deploy the application using the Ray Serve CLI with the configuration file.

  Ensure your application code and `serve_config.yaml` are accessible to the machine from which you run the deploy command, or are part of your runtime environment.

  ```bash
  # Example: If connecting to a running Ray cluster
  # Ensure your context points to the cluster head.
  # Then, from the doc_parser_service directory:
  serve run serve_config.yaml
  ```

  This command submits the application defined in `serve_config.yaml` to the connected Ray cluster. The `JobStateManager` actor will ensure that state is shared across the cluster, and Ray Serve will handle routing requests to appropriate replicas.

  For robust cluster deployment, consider:
  - Packaging your application code and dependencies into a runtime environment (`working_dir` or `py_modules` with a requirements file) specified in your Serve config or when connecting to Ray. The Docker image itself can also be used as a basis for nodes in a Kubernetes-based Ray cluster (e.g., using KubeRay).
  - Configuring `num_replicas`, CPU/GPU resources, and other deployment options in `serve_config.yaml` or directly in the `app.main.py` deployment definition for production needs.
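To verify the deployment from Python, a minimal sketch is shown below. It assumes Ray 2.x (where `serve.status()` reports the state of deployed Serve applications) and that the cluster head's Ray Client port (10001) is reachable; replace `<head_node_ip>` with your head node's address:

```python
import ray
from ray import serve

# Connect to the running cluster via the Ray Client port
# (alternatively, set the RAY_ADDRESS environment variable and call ray.init()).
ray.init(address="ray://<head_node_ip>:10001")

# Summarize the deployed Serve applications and their deployments/replicas.
print(serve.status())

ray.shutdown()
```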