The following command attempts to download an entire website, including every page it can reach through public links.
wget --wait=2 --limit-rate=200K --recursive --no-clobber --page-requisites --convert-links --domains example.com http://example.com/
--wait: Wait the specified number of seconds between retrievals. We use this option to lighten the server load by spacing out the requests.
--limit-rate: Limit the download speed to the given number of bytes per second (suffixes such as K and M are accepted). We use this option to lighten the server load and to reduce the bandwidth we consume on our own network.
--recursive: Turn on recursive retrieval, so Wget follows links and downloads the pages they point to.
--no-clobber: If a file would be downloaded more than once into the same directory, keep the existing copy instead of saving multiple versions.
--page-requisites: Download all the files (such as images, stylesheets, and scripts) that are necessary to properly display a given HTML page.
--convert-links: After the download is complete, convert the links in the documents so they are suitable for local viewing.
--domains: Restrict which domains are followed; accepts a comma-separated list of domains.
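A more compact variant is also possible. As a sketch (assuming GNU Wget, with example.com again standing in for the real site), the --mirror shorthand turns on recursion with infinite depth and timestamping, and --adjust-extension and --no-parent are common companions:

wget --mirror \
     --wait=2 \
     --limit-rate=200K \
     --page-requisites \
     --convert-links \
     --adjust-extension \
     --no-parent \
     --domains example.com \
     http://example.com/

Note that --mirror implies timestamping (-N), which conflicts with --no-clobber, so the latter is omitted here. --adjust-extension saves pages with an .html suffix so they open cleanly in a local browser, and --no-parent keeps Wget from climbing above the starting directory.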