Wget is a command-line tool used for downloading files from the web in Linux. It is a free utility that can retrieve files from HTTP, HTTPS, and FTP servers. Wget can be used to download single files or entire websites with ease. In this tutorial, we will cover the basics of using the Wget command to download files from the web, as well as several advanced options for power users. Whether you’re a beginner or an experienced Linux user, this guide will help you get up and running with Wget in no time!
Installing Wget on Linux
Wget is a command-line tool that is pre-installed on most Linux distributions. However, if it is not installed on your system, you can easily install it using the package manager of your Linux distribution.
Installing Wget on Debian/Ubuntu
To install Wget on Debian or Ubuntu, simply run the following command in your terminal:
sudo apt-get install wget
This will download and install Wget along with its dependencies.
Installing Wget on CentOS/RHEL
If you are using CentOS or RHEL, you can install Wget by running the following command:
sudo yum install wget
Installing Wget on Fedora
To install Wget on Fedora, run the following command:
sudo dnf install wget
Installing Wget on Arch Linux
On Arch Linux, you can install Wget by running the following command:
sudo pacman -S wget
Once you have installed Wget, you can start using it to download files from the web.
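After installation, it is worth verifying that Wget is available on your PATH and checking which version you have; a quick sanity check looks like this:

```shell
# Print the first line of wget's version banner, e.g. "GNU Wget 1.21 ..."
# Falls back to a short message if wget is not on the PATH.
version="$(wget --version 2>/dev/null | head -n 1)"
echo "${version:-wget not found}"
```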
Basic Syntax of Wget Command
The basic syntax of the Wget command in Linux is quite simple. To download a file from the web using Wget, the following command can be used:
wget [options] [URL]
Here, `options` are the different flags that can be used with the `wget` command to modify its behavior, and `URL` is the web address or URL of the file that needs to be downloaded.
For example, if you want to download a file named “example.txt” from a website with URL “http://www.example.com”, you can use the following command:
wget http://www.example.com/example.txt
This will download the file “example.txt” and save it in your current directory.
In addition to downloading single files, Wget can also be used to download entire websites and their contents. We’ll cover these options in more detail later in this tutorial.
Download Files with Wget Command
To download files from the web using Wget command, you can simply enter the URL of the file that you want to download after the command ‘wget’. The file will be downloaded in your current working directory.
For example, if you want to download a file called ‘example.zip’ from a website, you can use the following command:
wget http://www.example.com/example.zip
This will start downloading the file ‘example.zip’ to your current working directory. You can also specify a different directory where you want to save the downloaded file by using the -P option:
wget -P /home/user/downloads http://www.example.com/example.zip
This will save the downloaded file ‘example.zip’ to `/home/user/downloads`.
Wget also supports resuming interrupted downloads with the `-c` option. If a download is interrupted or stopped due to any reason, you can resume it from where it left off by using this option.
In addition, Wget allows users to download only specific types of files, such as images, audio, or video, with the --accept option. This saves time and disk space by downloading only the required files instead of everything present on a website.
Limiting download speed
The --limit-rate option in the Wget command allows you to limit the download speed of files from the web. This can be helpful when you have limited bandwidth or want to ensure that other network activities are not affected while downloading files.
To use this option, specify the maximum download speed after the option, using the suffix ‘k’ for kilobytes per second or ‘m’ for megabytes per second. For example, to limit the download speed to 100 KB/s, you would use a command like:
wget --limit-rate=100k http://www.example.com/example.zip
Wget will then keep the transfer rate at or below the limit you specified.
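As a further illustration, the command below caps a download at 1 MB/s using the ‘m’ suffix; the URL is only a placeholder, so substitute a real file:

```shell
# Cap the transfer rate at 1 megabyte per second.
# The URL below is a placeholder, not a real file.
rate="1m"
url="http://www.example.com/example.zip"
wget --limit-rate="$rate" "$url"
```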
Resuming interrupted downloads
The -c option in the Wget command is a useful feature that allows you to continue downloading a file from where it left off in case the connection was lost or the download was interrupted for any reason. It works by checking whether a partial file already exists and, if so, resuming the download from the point where it stopped.
To use the -c option, simply add it to your Wget command followed by the URL of the file you want to download. For example:
wget -c https://example.com/myfile.zip
If you had previously started downloading this file but it was interrupted, running this command will resume downloading from where it left off instead of starting from scratch.
This feature can be particularly useful when downloading large files or files over an unstable internet connection. Instead of having to start over each time there’s a problem, you can simply use the -c option to pick up where you left off and save time and bandwidth.
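On an unreliable connection, -c also pairs well with wget’s --tries option, which retries a failed transfer automatically; a sketch with a placeholder URL:

```shell
# Resume a partial download and retry up to 10 times if the
# connection drops. The URL is a placeholder.
url="https://example.com/myfile.zip"
wget -c --tries=10 "$url"
```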
Downloading only specific types of files
The --accept option is a powerful feature of the wget command that allows you to download only specific types of files from a website. By specifying a file extension or pattern, you can tell wget to download only the files that match your criteria, while ignoring all others.
For instance, if you want to download only PDF files from a website, you can use the command:
wget --accept=pdf -r -l1 http://www.example.com/
Here, -r enables recursive downloading and -l1 limits the recursion to one level deep. This will download all PDF files linked from the page and save them under the current directory. You can replace “pdf” with any other file extension or pattern you want to download.
Similarly, if you only want to download images from a website, you can use:
wget --accept=jpg,png,gif,jpeg -r -l1 http://www.example.com/
This will download all images with the specified extensions and save them under the current directory.
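By default, a recursive download recreates the site’s directory tree on disk. If you would rather collect all matching files into one flat folder, wget’s -nd (no directories) flag can be combined with -P; a sketch with a placeholder URL:

```shell
# Collect matching images into a single directory instead of
# recreating the site's folder structure. Placeholder URL.
dest="./images"
wget --accept=jpg,png,gif -r -l1 -nd -P "$dest" http://www.example.com/
```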
Mirroring a website
The --mirror option in the Wget command allows you to create a complete copy of a website on your local machine. This means that you will be able to browse the website offline or host it on another server.
Using the --mirror option downloads all files from the website, including HTML files, images, videos, and other assets required for full functionality. It will also create a folder structure similar to that of the actual website, making it easy to navigate and find specific files.
The --mirror option can also be used to update an existing local copy of the website by downloading only new or updated files. This is particularly useful if you need to keep an offline copy of a frequently updated website.
To use the --mirror option, simply type the following command in your terminal:
wget --mirror http://www.example.com/
This will start downloading all files from the specified URL and save them in a folder named after the domain name of the website. You can further customize this command with additional options such as limiting download speed or specifying which file types to download.
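In practice, --mirror is often combined with a few companion flags to make the offline copy fully browsable; all of the flags below are standard wget options, though the URL is a placeholder:

```shell
# Mirror a site for offline browsing:
#   --mirror           recursion + timestamping for incremental updates
#   --convert-links    rewrite links so they work when browsed locally
#   --page-requisites  also fetch CSS, images, and other page assets
#   --no-parent        never ascend above the starting directory
url="http://www.example.com/"
wget --mirror --convert-links --page-requisites --no-parent "$url"
```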
Downloading recursively from a website
The -r option in the Wget command allows you to download files recursively from a website. This means that it will not only download the specified page, but also all related files that are linked to it. This can be helpful if you need to download all the files from a website, including images, videos, and other media.
For example, if you want to download all the files from a website called “example.com”, you can use the following command:
wget -r example.com
This will download all the files on the website recursively and save them in a directory named after the domain name.
Note that when using this option, it is important to be careful not to overload the server by downloading too much data at once. You may want to limit your download speed using the --limit-rate option or set other restrictions on your download bandwidth usage.
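A considerate recursive download might limit the recursion depth, pause between requests, and cap the transfer rate; the flags below are standard wget options, with illustrative values and a placeholder URL:

```shell
# Recurse two levels deep, wait 1 second between requests,
# and cap the transfer rate at 200 KB/s. Placeholder URL.
url="http://www.example.com/"
wget -r -l2 --wait=1 --limit-rate=200k "$url"
```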
Restricting download bandwidth usage
The --limit-rate option in the Wget command can be used to restrict the download bandwidth usage. This is particularly useful when you want to limit the download speed for a specific file or when you want to avoid hogging the entire bandwidth on a shared network.
To use this option, simply add it to the Wget command and specify the desired download rate, using the ‘k’ or ‘m’ suffix for kilobytes or megabytes per second. For example, to restrict the download speed to 100 KB/s, you would use the following command:
wget --limit-rate=100k http://example.com/file.zip
This would limit the download speed for `file.zip` to 100kB/s. You can adjust the value based on your requirements.
Conclusion
In conclusion, the Wget command is a powerful tool that allows you to download files from the web in Linux.
Once you have installed it, you can use its basic syntax to download files or explore its many options to customize your downloads.
The different options available with the Wget command include limiting download speed, resuming interrupted downloads, downloading only specific types of files, mirroring a website, and downloading recursively from a website. With these options at your disposal, you can easily manage your downloads and make the most of your bandwidth.