Pipes and Redirection
Summary: in this tutorial, you will learn about standard streams, output redirection, input redirection, and how to control where data flows in Bash.
Pipes and redirection represent the soul of the Unix philosophy: write programs that do one thing well, and design them to work together by handling text streams. This architectural principle, established in the 1970s, remains the foundation of modern command-line power.
Why pipes and redirection matter:
- Composability: Chain simple commands into complex solutions
- Flexibility: Mix and match tools without programming
- Efficiency: Process data without temporary files
- Automation: Build powerful scripts from small, tested components
- Data flow control: Direct output, errors, and input precisely where needed
This tutorial covers standard streams, redirection operators, pipes, and advanced stream manipulation—the techniques that transform individual commands into a cohesive data processing environment.
Standard Streams: The Three Channels
Every program running in Linux has three standard communication channels (streams) established at startup:
                  ┌──────────────┐
stdin (0) ──────> │              │ ──────> stdout (1)
(keyboard)        │   Command    │         (screen)
                  │              │
                  └──────────────┘ ──────> stderr (2)
                                           (screen)
| Stream | File Descriptor | Default | Purpose | Typical Content |
|---|---|---|---|---|
| stdin (Standard Input) | 0 | Keyboard | Data flowing INTO a program | User input, piped data, file contents |
| stdout (Standard Output) | 1 | Terminal | Normal output FROM a program | Results, reports, processed data |
| stderr (Standard Error) | 2 | Terminal | Error messages FROM a program | Warnings, errors, debug messages |
ℹ️ Why separate stdout and stderr?
Separating normal output (stdout) from errors (stderr) enables critical capabilities:
- Silent success: Capture results (command > file.txt) while still seeing errors
- Error logging: Log errors (command 2> errors.log) while displaying results
- Clean pipes: Chain commands (cmd1 | cmd2) without error messages corrupting data
- Monitoring: Check error logs separately from output logs
- Debugging: Filter error messages without losing data
This separation is fundamental to Unix design—errors don't contaminate data, and data doesn't hide errors.
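The separation is easy to demonstrate with a small function that writes one line to each stream (the names demo, out.txt, and err.txt below are arbitrary):

```shell
# A function that writes one line to each stream
demo() {
    echo "normal result"             # stdout (FD 1)
    echo "something went wrong" >&2  # stderr (FD 2)
}

# Capture each stream in its own file
demo > out.txt 2> err.txt

cat out.txt   # normal result
cat err.txt   # something went wrong
```

Running demo > out.txt alone would still print the error to the terminal; the two streams only mix if you explicitly merge them.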
File Descriptors Explained
File descriptors are integers that represent open files or streams:
# Each program starts with three file descriptors open:
# 0 → stdin (reads from keyboard by default)
# 1 → stdout (writes to terminal by default)
# 2 → stderr (writes to terminal by default)
# When you redirect, you're changing where these file descriptors point:
echo "Hello" > file.txt # Redirect FD 1 (stdout) to file.txt
command 2> errors.txt # Redirect FD 2 (stderr) to errors.txt
command < input.txt # Redirect FD 0 (stdin) from input.txt
Output Redirection
> — Write to File (Overwrite)
The > operator redirects stdout to a file, creating or completely overwriting the destination:
# Basic redirection (stdout to file)
echo "Hello, World!" > greeting.txt
date > timestamp.txt
ls -la /etc > directory_listing.txt
# Multiple commands can write to the same file (overwrites each time)
echo "First line" > file.txt
cat file.txt # Shows: First line
echo "Second line" > file.txt # OVERWRITES file.txt!
cat file.txt # Shows: Second line only (first line is GONE)
# Redirect command output
df -h > disk_usage.txt
ps aux > process_list.txt
history > command_history.txt
# Create empty file (truncate if exists)
> empty_file.txt
# Similar to: touch empty_file.txt, but > also truncates an existing file (touch only updates timestamps)
# Combine with other commands
sort names.txt > sorted_names.txt
grep "error" logfile.txt > errors_only.txt
⚠️ > overwrites without warning!
The > operator is destructive—it immediately truncates (empties) the file before writing. There is no "are you sure?" prompt.
Dangerous example:
# Accidentally overwrite important file
sort data.txt > data.txt # WRONG: Empties data.txt before reading it!
# Result: Empty file
# Correct approach:
sort data.txt > data_sorted.txt
mv data_sorted.txt data.txt
# Or:
sort -o data.txt data.txt # sort's -o can safely overwrite input
Protect yourself:
# Add to ~/.bashrc to enable noclobber (prevent accidental overwrites)
set -o noclobber
# Now > won't overwrite existing files
echo "test" > existing_file.txt # Error: cannot overwrite existing file
# Force overwrite even with noclobber
echo "test" >| existing_file.txt # The >| operator overrides noclobber
# Disable noclobber
set +o noclobber
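A safe way to try noclobber is inside a subshell, so the option does not linger in your interactive session (nc_demo.txt is a throwaway name):

```shell
(
    set -o noclobber
    echo "first" > nc_demo.txt                 # creates the file
    echo "second" > nc_demo.txt 2>/dev/null \
        && echo "overwrote" || echo "blocked"  # prints: blocked
    echo "third" >| nc_demo.txt                # >| overrides noclobber
    cat nc_demo.txt                            # third
)
```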
>> — Append to File
The >> operator adds content to the end of a file without disturbing existing content:
# Append to a file (creates if doesn't exist)
echo "Line 1" > log.txt # Creates file with "Line 1"
echo "Line 2" >> log.txt # Appends "Line 2"
echo "Line 3" >> log.txt # Appends "Line 3"
cat log.txt
# Line 1
# Line 2
# Line 3
# Perfect for logging
echo "$(date): Application started" >> /var/log/myapp.log
echo "$(date): Processing 1000 records" >> /var/log/myapp.log
echo "$(date): Completed successfully" >> /var/log/myapp.log
# Build a report incrementally
echo "=== System Report ===" > report.txt
echo "Generated: $(date)" >> report.txt
echo "" >> report.txt
echo "Disk Usage:" >> report.txt
df -h >> report.txt
echo "" >> report.txt
echo "Memory Usage:" >> report.txt
free -h >> report.txt
# Append command outputs
{ date; uptime; df -h; } >> system_status.txt
Practical logging pattern:
# Create a logging function
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" >> app.log
}
# Use it throughout your script
log "Script started"
log "Processing file: data.csv"
log "Found 1523 records"
log "Processing completed"
# app.log contains:
# [2024-01-15 10:30:45] Script started
# [2024-01-15 10:30:45] Processing file: data.csv
# [2024-01-15 10:30:47] Found 1523 records
# [2024-01-15 10:30:52] Processing completed
2> — Redirect Standard Error
Redirect only error messages (stderr) to a file or destination:
# Redirect errors to a file (normal output goes to screen)
find / -name "secret.txt" 2> errors.txt
# Results display on screen, "Permission denied" errors go to errors.txt
# Most common use: Discard errors (send to /dev/null)
find / -name "*.conf" 2>/dev/null
# Only shows successful results, suppresses all error messages
# Redirect errors to a log
./my_script.sh 2> error_log.txt
# Script runs, normal output to screen, errors to error_log.txt
# Append errors to existing log
./my_script.sh 2>> error_log.txt
# Separate normal output and errors
command > output.txt 2> errors.txt
# Success messages → output.txt
# Error messages → errors.txt
Example: Find all readable config files:
# Without error suppression (messy):
find /etc -name "*.conf"
# Output mixed with:
# find: '/etc/ssl/private': Permission denied
# find: '/etc/polkit-1/localauthority': Permission denied
# ... lots of errors ...
# With error suppression (clean):
find /etc -name "*.conf" 2>/dev/null
# Only shows files you can actually read
/dev/null — The Bit Bucket
/dev/null is a special file that discards all data written to it. Reading from it produces EOF (end-of-file) immediately.
# Discard all output
command > /dev/null
# Discard only errors
command 2> /dev/null
# Discard everything (stdout and stderr)
command > /dev/null 2>&1
command &> /dev/null # Shorthand (Bash-specific, not POSIX)
# Test if a command exists (discard all output)
if command -v docker &> /dev/null; then
    echo "Docker is installed"
else
    echo "Docker not found"
fi
# Silent background process
long_running_command &> /dev/null &
# Check if file exists (suppress any errors)
cat /path/to/file 2>/dev/null || echo "File not found"
ℹ️ What is /dev/null?
/dev/null is a null device—a special file that:
- Discards all writes: Data written to it disappears (like a black hole)
- Returns EOF on read: Reading from it immediately signals end-of-file
- Always succeeds: Operations on /dev/null never fail
Common names:
- "The bit bucket"
- "The black hole"
- "/dev/null: the data roach motel (data checks in but never checks out)"
Use cases:
- Suppress unwanted output
- Test if a command exists without seeing output
- Provide empty input:
command < /dev/null - Background processes that shouldn't produce output
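Both behaviors, discard-on-write and instant EOF on read, can be checked in one quick sketch:

```shell
# Writes vanish without error
echo "this disappears" > /dev/null

# Reads deliver zero bytes: wc counts nothing, and read fails immediately
wc -c < /dev/null                              # 0
read -r line < /dev/null || echo "EOF reached" # EOF reached
```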
Combining stdout and stderr
Redirect both streams together or separately:
# Redirect both to the SAME file
command > output.txt 2>&1
# Explanation:
# 1. > output.txt → redirect stdout (FD 1) to output.txt
# 2. 2>&1 → redirect stderr (FD 2) to wherever stdout (FD 1) is going
# Shorthand (Bash-specific, not POSIX)
command &> output.txt
# Append both to the same file
command >> output.txt 2>&1
command &>> output.txt # Shorthand
# Redirect to DIFFERENT files
command > normal_output.txt 2> errors.txt
# stdout → normal_output.txt
# stderr → errors.txt
# Send stdout to file, stderr to screen (default)
command > output.txt
# Send stderr to file, stdout to screen (default)
command 2> errors.txt
# Swap stdout and stderr (advanced!)
command 3>&1 1>&2 2>&3
# Temporarily use FD 3 to swap FD 1 and FD 2
# Discard stdout, keep stderr
command > /dev/null
command 2>&1 > /dev/null # Discards stdout, stderr goes to screen
# Discard stderr, keep stdout
command 2> /dev/null
⚠️ Order matters with 2>&1
Correct:
command > file.txt 2>&1
# 1. stdout → file.txt
# 2. stderr → wherever stdout is (file.txt)
# Result: Both go to file.txt
Incorrect:
command 2>&1 > file.txt
# 1. stderr → wherever stdout is (screen at this moment)
# 2. stdout → file.txt
# Result: stdout to file.txt, stderr to screen (not what you want!)
Remember: Redirect stdout first, THEN redirect stderr to stdout.
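You can verify the ordering rule with a function that writes to both streams (noisy, both.txt, and wrong.txt are arbitrary demo names):

```shell
noisy() {
    echo "data"      # stdout
    echo "oops" >&2  # stderr
}

# Correct order: both lines land in the file
noisy > both.txt 2>&1
wc -l < both.txt     # 2

# Wrong order: stderr escapes to the terminal, only stdout is captured
noisy 2>&1 > wrong.txt
cat wrong.txt        # data   ("oops" appeared on screen instead)
```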
Practical example: Comprehensive logging
# Log everything (success and errors) with timestamps
{
    echo "=== Script started at $(date) ==="
    ./process_data.sh
    echo "=== Script finished at $(date) ==="
} >> full_log.txt 2>&1
# Separate logs for different purposes
./deploy.sh > deployment.log 2> deployment_errors.log
# Silent execution but capture errors
./script.sh > /dev/null 2> errors.log
if [ -s errors.log ]; then
    echo "Errors occurred:"
    cat errors.log
fi
Input Redirection
< — Read from File
The < operator redirects stdin, making a command read from a file instead of the keyboard:
# Read input from a file
wc -l < /etc/passwd # Count lines (file content as stdin)
sort < unsorted_list.txt # Sort file contents
grep "error" < logfile.txt # Search within file
# Equivalent to (but stdin redirection is more explicit):
wc -l /etc/passwd # Nearly the same (wc also prints the filename)
sort unsorted_list.txt
grep "error" logfile.txt
# Useful when command expects stdin
cat < file.txt # Reads from file instead of keyboard
tr '[:upper:]' '[:lower:]' < input.txt # Convert to lowercase
# stdin redirection with output redirection
tr '[:upper:]' '[:lower:]' < input.txt > output.txt
# Commands that require stdin redirection
mysql -u root -p < setup.sql # Execute SQL from file
python3 < script.py # Run Python script via stdin
psql -U postgres < backup.sql # Restore database
# Feed data to a loop
while read -r line; do
    echo "Processing: $line"
done < data.txt
When to use < vs filename argument:
Most commands accept filenames directly, so < isn't always necessary:
# These are equivalent for most commands:
grep "pattern" file.txt
grep "pattern" < file.txt
# But < is required when:
# 1. Command only reads from stdin
tr 'a-z' 'A-Z' < input.txt # tr doesn't accept filenames
# 2. You want to hide the filename from command
wc -l < file.txt # Shows: "42" (just the count)
wc -l file.txt # Shows: "42 file.txt" (includes filename)
# 3. Combining redirections
command < input.txt > output.txt 2> errors.txt
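Point 2 above is easy to see side by side (demo.txt is a throwaway file):

```shell
printf 'alpha\nbeta\ngamma\n' > demo.txt

wc -l < demo.txt   # 3            (wc never sees a filename)
wc -l demo.txt     # 3 demo.txt   (filename appears in the output)
```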
<< — Here Document (Heredoc)
A heredoc provides multi-line input directly in your script without creating temporary files:
# Basic heredoc syntax
command << DELIMITER
content line 1
content line 2
...
DELIMITER
# Practical example: Create a file with multiple lines
cat << EOF > config.txt
server=localhost
port=8080
debug=false
timeout=30
EOF
# Variables are expanded inside heredoc
name="Alice"
cat << EOF
Hello, $name!
Today is $(date).
Your home is $HOME.
EOF
# Prevent variable expansion with quoted delimiter
cat << 'EOF'
This is literal: $HOME
No expansion: $(date)
Variables: $name
EOF
# Indented heredoc (<<- strips leading TABS, not spaces)
if true; then
	cat <<-EOF
	This line has a leading tab (will be stripped).
	So does this one.
	EOF
fi
# Heredoc to a command
mail -s "System Report" admin@example.com << EOF
System: $(hostname)
Uptime: $(uptime)
Disk: $(df -h / | awk 'NR==2{print $5}')
EOF
# SQL queries
mysql -u root -p"$password" << EOF
CREATE DATABASE myapp;
USE myapp;
CREATE TABLE users (
id INT PRIMARY KEY AUTO_INCREMENT,
username VARCHAR(50) UNIQUE NOT NULL,
email VARCHAR(100) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO users (username, email) VALUES
('alice', 'alice@example.com'),
('bob', 'bob@example.com');
EOF
# Python script execution
python3 << 'EOF'
import sys
import os
print(f"Python version: {sys.version}")
print(f"Current directory: {os.getcwd()}")
EOF
Practical heredoc patterns:
# Generate configuration files
cat > /etc/nginx/sites-available/myapp << EOF
server {
    listen 80;
    server_name example.com;
    root /var/www/myapp;
    index index.html;
    location / {
        try_files \$uri \$uri/ =404;
    }
}
EOF
# Create scripts on the fly
cat > backup.sh << 'EOF'
#!/bin/bash
DATE=$(date +%Y%m%d)
tar -czf backup_$DATE.tar.gz /home/$USER/documents
echo "Backup completed: backup_$DATE.tar.gz"
EOF
chmod +x backup.sh
# Multi-line echo alternative
cat << EOF
╔════════════════════════════════╗
║ Welcome to MyApp v1.0 ║
║ Please select an option: ║
║ 1) Start ║
║ 2) Configure ║
║ 3) Exit ║
╚════════════════════════════════╝
EOF
# Documentation in scripts
: << 'DOCUMENTATION'
Script: backup_system.sh
Purpose: Create full system backup to remote server
Author: Alice
Date: 2024-01-15
Notes: Requires ssh keys to be configured
DOCUMENTATION
<<< — Here String
A here string passes a single string as stdin (simpler than heredoc for one line):
# Basic here string
grep "error" <<< "This line has an error in it"
# Output: This line has an error in it
# Using with variables
message="Hello World from Bash"
wc -w <<< "$message" # Count words: 4
# Read words into an array
line="apple banana cherry date"
read -ra fruits <<< "$line"
echo "${fruits[0]}" # apple
echo "${fruits[2]}" # cherry
# Parse delimited strings
IFS=',' read -ra items <<< "first,second,third,fourth"
for item in "${items[@]}"; do
    echo "Item: $item"
done
# Process command output as string
uptime_output=$(uptime)
read -r load1 load5 load15 <<< "$(echo "$uptime_output" | awk -F'load average:' '{print $2}' | tr -d ',')"
# Calculate with bc
result=$(bc <<< "scale=2; 22 / 7")
echo "$result" # 3.14
# Multiple lines (use actual newlines)
tr '[:upper:]' '[:lower:]' <<< "HELLO
WORLD
BASH"
Here string vs heredoc vs echo:
# These are equivalent:
# Here string (cleanest for one line)
grep "pattern" <<< "text to search"
# Heredoc (verbose for one line)
grep "pattern" << EOF
text to search
EOF
# Echo with pipe (common but spawns extra process)
echo "text to search" | grep "pattern"
# Here string with variable
grep "pattern" <<< "$my_variable"
# Echo with pipe (equivalent but slower)
echo "$my_variable" | grep "pattern"
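One subtlety worth knowing: a here string appends a trailing newline to the data, just as echo does. Byte counts make this visible:

```shell
# "abc" is 3 characters, but the here string delivers 4 bytes (abc + newline)
wc -c <<< "abc"        # 4
printf 'abc' | wc -c   # 3 (printf adds no newline unless you write \n)
```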
Written by the ShellRAG Team
The ShellRAG editorial team writes practical, beginner-friendly Bash Shell tutorials with tested code examples and real-world use cases. Every article is technically reviewed for accuracy and updated regularly.