When beginning to tackle a new website for a client, it's often helpful to back up their old website. As part of this process, I often need to crawl the old website to generate a complete list of valid URLs. This list is later useful for building out a sitemap of pages that need to be designed and coded and, just as importantly, for mapping the old links to their corresponding pages on the new website. Enter this simple shell script.
Download the script and save to the desired location on your machine.
Ensure you have wget installed on your machine. To check if it is already installed, try running the wget command by itself.
If you are on a Mac or running Linux, chances are you already have wget installed; however, if the wget command is not working, it may not be properly added to your PATH variable.
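A quick way to verify the installation (this check is independent of the script; run it in Git Bash, Terminal, etc.):

# Prints the installed version if wget is available
wget --version

# Prints the path to the wget binary if it is on your PATH, otherwise nothing
command -v wget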
If you are running Windows:
Download the latest wget binary for Windows from https://eternallybored.org/misc/wget/
The download is available as a zip with documentation, or just an exe. I'd recommend just the exe.
If you downloaded the zip, extract all (if the Windows built-in zip utility gives an error, use 7-Zip). In addition, if you downloaded the 64-bit version, rename the wget64.exe file to wget.exe.
Ensure the version of grep on your computer supports -E, --extended-regexp. To check for support, run grep --help and look for the flag. To check the installed version, run grep --version.
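If you want a quick sanity check of extended-regexp support, something like the following works (the test string is arbitrary):

# Prints the installed grep version
grep --version

# Prints "abc123" if the -E (extended regexp) flag is supported
echo "abc123" | grep -E '[a-z]+[0-9]+'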
Open Git Bash, Terminal, etc. and set execute permissions for the fetchurls.sh script:
chmod +x /path/to/script/fetchurls.sh
- Enter the following to run the script:

./fetchurls.sh [OPTIONS]...
Alternatively, you may execute with either of the following:
sh ./fetchurls.sh [OPTIONS]...

# -- OR --

bash ./fetchurls.sh [OPTIONS]...
If you do not pass any options, the script will run in interactive mode.
If the domain URL requires authentication, you must pass the username and password as flags; you are not prompted for these values in interactive mode.
You may pass options (as flags) directly to the script, or pass nothing to run the script in interactive mode; an example invocation using flags is shown after the list of options below.
The fully qualified domain URL (with protocol) you would like to crawl.
Ensure that you enter the correct protocol (e.g. https) and subdomain for the URL or the generated file may be empty or incomplete. The script will automatically attempt to follow the first HTTP redirect, if found. For example, if you enter the incorrect protocol (http instead of https) for www.adamdehaven.com, the script will automatically follow the redirect and fetch all URLs for the correct https protocol.
The domain's URLs will be successfully spidered as long as the target URL (or the first redirect) returns a status of HTTP 200 OK.
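If you're unsure which protocol or URL a site ultimately resolves to, you can check the response headers before running the script (this check is separate from the script itself; the URL below is only an example):

# Prints each HTTP response status and any redirect Location header, without downloading pages
wget --spider http://www.adamdehaven.com 2>&1 | grep -E 'HTTP request|Location:'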
The location (directory) where you would like to save the generated results.
If the directory does not exist at the specified location, it will be created automatically, as long as the rest of the path is valid.
The desired name of the generated file, without spaces or file extension.
- Default: See the default list of excluded file extensions
Pipe-delimited list of file extensions to exclude from results.
To prevent excluding files matching the default list of file extensions, simply pass an empty string to the --exclude flag:

./fetchurls.sh --exclude ""
The list of file extensions must be passed inside quotes, as shown above.
The number of seconds to wait between retrievals.
If the domain URL requires authentication, the username to pass to the wget command.
If the username contains space characters, you must pass it inside quotes. This value may only be set with a flag; there is no prompt for it in interactive mode.
If the domain URL requires authentication, the password to pass to the wget command.
If the password contains space characters, you must pass it inside quotes. This value may only be set with a flag; there is no prompt for it in interactive mode.
Allows the script to run successfully in a non-interactive shell.
Ignore robots.txt for the domain.
Show wget install instructions.
The installation process may vary depending on your computer's configuration.
Show version information.
Outputs the option flags received at runtime, along with their associated values, for troubleshooting.
Show the help content.
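As a rough example, a run driven entirely by flags might look something like the following. Only the --domain and --exclude flag names are referenced elsewhere in this article; check the script's help output for the exact names of the remaining flags:

# Crawl https://example.com with a custom list of excluded extensions;
# options not passed as flags fall back to their defaults or to interactive prompts
bash ./fetchurls.sh --domain https://example.com --exclude "pdf|doc|docx"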
If you do not pass the --domain flag, the script will run in interactive mode and you will be prompted for the unset options.
First, you will be prompted to enter the full URL (including HTTPS/HTTP protocol) of the site you would like to crawl:
Fetch a list of unique URLs for a domain.

Enter the full domain URL ( http://example.com )
Domain URL:
You will then be prompted to enter the location (directory) where you would like the generated results to be saved (defaults to Desktop on Windows):
Save file to directory
Directory: /c/Users/username/Desktop
Next, you will be prompted to change/accept the name of the generated file (simply press enter to accept the default filename):
Save file as
Filename (no file extension, and no spaces): example-com
Finally, you will be prompted to change/accept the default list of excluded file extensions (press enter to accept the default list):
Exclude files with matching extensions
Excluded extensions: bmp|css|doc|docx|gif|jpeg|jpg|JPG|js|map|pdf|PDF|png|ppt|pptx|svg|ts|txt|xls|xlsx|xml
The script will crawl the site and compile a list of valid URLs into a new text file. When complete, the script will show a message and the location of the generated file:
Fetching URLs for example.com

Finished with 1 result!
File Location: /c/Users/username/Desktop/example-com.txt
If a file of the same name already exists at the location (e.g. if you previously ran the script for the same URL), the original file will be overwritten.
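Under the hood, the list is produced with the two tools the requirements call for: wget spiders the site and grep filters the output down to a unique list of URLs. A simplified sketch of that general technique (not the script's actual code, and with placeholder file names) looks like this:

# Crawl the site without saving pages, logging every request wget makes
wget --spider --recursive --no-verbose --output-file=wget-log.txt https://example.com

# Pull the URLs out of the log, de-duplicate them, and write the final list
grep -E -o 'https?://[^ ]+' wget-log.txt | sort -u > example-com.txt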
Excluded Files and Directories
The script, by default, filters out many file extensions that are commonly not needed.
The list of file extensions can be passed via the --exclude flag, or provided via the interactive mode.
In addition, specific site files and directories (including common WordPress paths) are also ignored.
The script should filter out most unwanted file types and directories; however, you can edit the regular expressions that filter out certain pages, directories, and file types by editing the fetchUrlsForDomain() function within the fetchurls.sh script.
If you're not familiar with grep or regular expressions, you can easily break the script.
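To give a feel for what those filters look like, an extension exclusion written as an extended regular expression could look roughly like this (an illustration of the technique only, not the script's actual pattern; the extension list and input file are placeholders):

# Drop any URL ending in one of the listed extensions (optionally followed by a query string)
grep -E -v '\.(bmp|css|gif|jpe?g|js|pdf|png|svg|txt|xml)(\?.*)?$' example-com.txt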