Pipes in Linux Explained

This tutorial explains what pipes are in Linux and how to use them. Learn how pipes work in Linux through examples.

What are pipes in Linux?

Pipes are syntactical glue that allows the STDOUT (standard output) of one command to work as the STDIN (standard input) of the next command. In simple words, pipes connect two or more commands, scripts, utilities, or programs together. Pipes are represented by the vertical bar character (|).

How do pipes work in Linux?

Normally, commands, scripts, utilities, programs, and processes work in three stages. These stages are taking an input, processing the input, and returning the output (processed input). A pipe connects the third stage of a command to the first stage of the next command.
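As a tiny illustration of these stages (the sample text here is made up for the example), wc -l receives input, counts the lines, and returns the count:

```shell
# printf produces output (stage three of the first command);
# the pipe hands that output to wc -l as input (stage one);
# wc -l processes it and returns the line count
printf 'one\ntwo\nthree\n' | wc -l
```

This prints 3, one for each line of input.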

Let's understand this process through an example. Suppose, you want to process some data through three commands in such a way that the output of the first command works as the input of the second command and the output of the second command works as the input of the third command.

In such a situation, the output of the first command would have to be saved in a temporary file. The second command would then read its input data from that temporary file, perform its operation on the data, and save its own output in a second temporary file. The third command would take its input data from the second temporary file, perform its own manipulation, and send the resulting output to the specified output device. At each step, you would have to save the output in a temporary file and specify that file as the input of the next command.
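Sketched in commands, the temporary-file approach looks something like this (sort, uniq, and wc stand in for any three commands, and the file names are arbitrary):

```shell
# Sample input data for the sketch
printf 'carol\nalice\nbob\nalice\n' > names.txt

# First command: save its output in a temporary file
sort names.txt > temp1

# Second command: read temp1, save its output in a second temporary file
uniq temp1 > temp2

# Third command: read temp2 and send the result to the screen
wc -l < temp2

# The temporary files have to be removed by hand
rm names.txt temp1 temp2
```

Every step leaves a file behind that the next command has to name explicitly.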

The following image shows the above example.

commands redirection without pipes

Pipes make this process easier. Pipes connect the output from one command to the input of another command. In other words, instead of sending the output of a command to a destination file or device, pipes send that output to another command as input. This lets you have one command work on some data and then have the next command deal with the results.
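With pipes, a three-command chain like the one described above becomes a single line; here sort, uniq, and wc again stand in for any three commands:

```shell
# Each command's output flows straight into the next command's input;
# no temporary files are created at any step
printf 'carol\nalice\nbob\nalice\n' | sort | uniq | wc -l
```

This prints 3: the names are sorted, duplicates are removed, and the remaining lines are counted.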

The following image shows how pipes work in the above example.

redirecting commands with pipes

Examples of pipes

Let's take some examples to understand how pipes are used. Suppose we want to sort the contents of the current directory alphabetically. The ls command lists the contents of the specified directory. The sort command sorts the lines of its input alphabetically.

We can use both commands separately or connect them with a pipe. To use them separately, first execute the ls command and save its output in a temporary file, then execute the sort command with that temporary file as its input.

$ls > tempfile
$sort tempfile

The following image shows the above commands with the output.

commands without pipes

Now let's do the same task by using pipes. Using pipes is extremely simple. Just place the pipe sign (|) between the two commands you want to connect, so that the output of the first command works as the input of the second command.

In our example, we want to use the output of the ls command as the input of the sort command. To do this, place a pipe sign (vertical bar character) between both commands to form a connection between them.

$ls | sort

The pipe operator receives output from the ls command placed before the pipe and sends this data as input to the sort command placed after the pipe. The following image shows the output of the above command.

using commands to connect pipes

Let's take the next example. Suppose you want to count the number of files in a directory. For this, you can send the output of the ls command to the wc -l command as the input.

$ls | wc -l

The following image shows the output of the above command.

counting files of a directory using pipes
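One detail worth keeping in mind (not covered by the tutorial itself): wc -l counts the lines that ls prints, so directories are counted too and hidden entries are skipped. A quick check with a scratch directory:

```shell
# Create a scratch directory with a known number of entries
mkdir -p demo_dir
touch demo_dir/a demo_dir/b demo_dir/c

# ls prints one entry per line when its output goes into a pipe,
# so wc -l reports the number of entries: 3
ls demo_dir | wc -l

# Clean up
rm -r demo_dir
```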

Pipeline

Pipes are not limited to two commands. You can add as many commands as you want to a pipeline. A pipeline is a group of commands connected in a chain by pipes, producing a single output.

When building a complex pipeline, it is best practice to write or add one command at a time to the pipeline and check its output before adding the next command to the pipeline. This approach helps you to debug the pipeline in case of an error.

Let's understand how it works through an example. The following pipeline displays an alphabetically sorted list of users whose entries contain the word 'user' and whose default shell is set to /bin/bash.

$cat /etc/passwd | grep user | grep /bin/bash | sort

To build this pipeline, we add one command at a time in the following manner.

$cat /etc/passwd
$cat /etc/passwd | grep user
$cat /etc/passwd | grep user | grep /bin/bash
$cat /etc/passwd | grep user | grep /bin/bash | sort

The /etc/passwd file stores the local user database. We used the cat command to read all data of this file and, instead of letting that data appear on the screen, the pipe sends it to the grep command as input.

We used the grep command to search all entries/lines that contain the word 'user' in the output of the first command. Again, instead of displaying the output on the screen, we redirected the output to the next grep command.

We used the next grep command to search for all lines that contain /bin/bash in the output of the second command. Finally, we redirected the filtered output to the sort command. The sort command sorts the input data alphabetically and displays it on the screen.
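As a side note, grep can read /etc/passwd directly, so the same pipeline is often written without cat; this is an equivalent form, not part of the original tutorial:

```shell
# grep reads /etc/passwd itself, removing the need for cat
grep user /etc/passwd | grep /bin/bash | sort
```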

The following image shows the output of the above exercise.

example of pipeline

That's all for this tutorial. If you like this tutorial, please don’t forget to share it with friends through your favorite social network.

