hf CLI

Hugging Face Hub CLI (`hf`) for downloading, uploading, and managing repositories, models, datasets, and Spaces on the Hugging Face Hub. Replaces the deprecated `huggingface-cli` command.

Category: development Source: huggingface/skills

What Is This?

Overview

The Hugging Face Hub CLI, invoked as `hf`, is a command-line tool for interacting directly with the Hugging Face Hub from your terminal. It provides a streamlined way to download models, upload datasets, manage repositories, and work with Spaces without leaving your development environment. The tool replaces the now-deprecated `huggingface-cli` command and serves as the modern standard for Hub interactions via the command line.

Built for speed and simplicity, the hf CLI abstracts away the complexity of direct API calls and manual file management. Whether you are pulling a large language model for local inference or pushing a fine-tuned checkpoint to share with your team, the tool handles authentication, chunked transfers, and repository management in a consistent and reliable way.

The CLI is installed with a single command and integrates naturally into shell scripts, CI/CD pipelines, and automated workflows. It supports all major Hub repository types, including models, datasets, and Spaces, making it a versatile tool across the full machine learning development lifecycle.

Who Should Use This

  • Machine learning engineers who regularly download pretrained models and need a reliable, scriptable alternative to manual browser downloads.
  • Data scientists who upload processed datasets to the Hub for versioning, sharing, or use in training pipelines.
  • MLOps practitioners who integrate Hub interactions into automated deployment and fine-tuning workflows.
  • Researchers who manage multiple model checkpoints and need efficient tools for organizing and versioning their work on the Hub.
  • Open-source contributors who maintain public repositories on the Hub and need to push updates, manage branches, or inspect repository contents.
  • Platform engineers building infrastructure that provisions models or datasets from the Hub as part of a larger system.

Why Use It?

Problems It Solves

  • Manual file downloads from the Hub browser interface are slow, error-prone, and impossible to automate in scripts or pipelines.
  • The deprecated huggingface-cli command is no longer maintained, leaving teams relying on it exposed to compatibility issues and missing features.
  • Managing large model files without a dedicated tool often results in incomplete downloads, missing shards, or incorrect directory structures.
  • Uploading datasets or model checkpoints through the web UI does not scale for large files or frequent update cycles.
  • Authenticating and managing tokens across multiple environments requires a consistent, secure mechanism that ad-hoc scripts rarely provide.

Core Highlights

  • Single-command installation via a shell script, requiring no package manager setup.
  • Replaces the deprecated huggingface-cli with a modern, actively maintained interface.
  • Supports downloading entire repositories or individual files from models, datasets, and Spaces.
  • Handles large file uploads with chunked transfer and resume support.
  • Provides repository creation, deletion, and metadata management from the terminal.
  • Integrates with Hub authentication tokens for secure, credential-managed access.
  • Works seamlessly in shell scripts, Docker containers, and CI/CD environments.

How to Use It?

Basic Usage

Install the CLI with the following command:

curl -LsSf https://hf.co/cli/install.sh | bash -s

After installation, view all available commands:

hf --help

Download a model repository to your local machine:

hf download meta-llama/Llama-3.2-1B
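Downloads can be narrowed rather than fetching an entire repository. A hedged sketch using the `--revision`, `--include`, and `--local-dir` options (the revision name, file pattern, and target directory here are illustrative):

```shell
# Fetch only the safetensors weight files from a given revision
# into a predictable local directory.
hf download meta-llama/Llama-3.2-1B \
  --revision main \
  --include "*.safetensors" \
  --local-dir ./llama-3.2-1b
```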

Upload a local file to an existing repository:

hf upload my-username/my-model ./checkpoint.bin
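The upload command also accepts an optional destination path inside the repository, and a commit message can be attached to the resulting commit. A sketch (the repo, file names, and `weights/` path are illustrative):

```shell
# Upload a local checkpoint to a specific path in the repo,
# recording a descriptive commit message.
hf upload my-username/my-model ./checkpoint.bin weights/checkpoint.bin \
  --commit-message "Add fine-tuned checkpoint"
```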

Specific Scenarios

Scenario 1: Downloading a specific file from a dataset

When you only need one file from a large dataset repository, target it directly to avoid downloading the entire repository:

hf download my-org/my-dataset train.parquet --repo-type dataset --local-dir ./data

Scenario 2: Creating and populating a new model repository

hf repo create my-fine-tuned-model --type model
hf upload my-username/my-fine-tuned-model ./output_dir

Real-World Examples

A training pipeline script pulls the base model at job start, runs fine-tuning, then pushes the resulting checkpoint back to the Hub, all using hf commands chained in a shell script.
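A minimal sketch of such a pipeline script, assuming a hypothetical `train.py` and illustrative repository names:

```shell
#!/usr/bin/env bash
# Sketch of a download -> fine-tune -> upload pipeline.
# BASE_MODEL, RESULT_REPO, and train.py are illustrative assumptions.
set -euo pipefail

BASE_MODEL="meta-llama/Llama-3.2-1B"
RESULT_REPO="my-username/llama-3.2-1b-finetuned"

hf download "$BASE_MODEL" --local-dir ./base-model      # pull base weights
python train.py --model-dir ./base-model --output-dir ./output
hf upload "$RESULT_REPO" ./output                       # push the checkpoint
```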

A data engineering team uses hf download inside a Dockerfile to bake a specific dataset version into a container image for reproducible training runs.

Important Notes

Requirements

  • A Unix-based shell environment (Linux or macOS) or a compatible shell on Windows such as WSL.
  • A Hugging Face account and access token for operations that require authentication, such as uploading or accessing private repositories.
  • Sufficient local disk space when downloading large model repositories, as some models exceed tens of gigabytes.
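Authentication is typically set up once per environment with `hf auth login`. A sketch for interactive and non-interactive use (the `HF_TOKEN` environment variable is assumed to hold a valid access token):

```shell
# Interactive login: prompts for an access token.
hf auth login

# Non-interactive login, e.g. in CI, reading the token from the environment.
hf auth login --token "$HF_TOKEN"
```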