Table Of Contents:
- Quick Start
- System Requirements
- About SuperBot
- Installing and Uninstalling SuperBot
- Configuring SuperBot
- Pausing, Stopping, and Restarting SuperBot
- Using the Clipboard Monitoring Feature
- Troubleshooting
Quick Start

With SuperBot, you can copy a website in three easy steps!
- Connect to the Internet.
- Run SuperBot, and click the Start... button.
- Type the URL, or address, of the site you want to copy in the Starting URL box, and click OK. The copying procedure will begin immediately.
For more information, please read the rest of this manual.
Text in GREEN applies only to registered copies of SuperBot.
System Requirements

To run SuperBot, you will need:
- An IBM-PC or compatible computer, running Windows 95/98/NT4;
- A direct Ethernet connection to the Internet, or a dialup SLIP or PPP account from an Internet service provider;
- Internet Explorer 4.0 SP1 or above;
- A TCP/IP stack.
About SuperBot

SuperBot downloads entire websites automatically, and saves them on your computer. The copied sites look and feel like the online versions, except they are much faster to browse. Unlike other offline browsers, SuperBot is small, efficient, and very easy to use.
Installing and Uninstalling SuperBot

To install SuperBot on your computer, follow these steps:
- Connect to the Internet and download the SuperBot ZIP archive from the EliteSys website.
- Create a directory (folder) on your computer called C:\Program Files\SuperBot.
- Use WinZip, or any other ZIP utility, to decompress the SuperBot archive into the directory you just created.
- Run (double-click) SuperBot, and the program will transparently install itself. It will not place any hidden files on your computer or modify your system registry.
To uninstall SuperBot, just delete the C:\Program Files\SuperBot directory from your computer.
Configuring SuperBot

SuperBot will automatically set values for most of the following options and restrictions. With the exception of the Starting URL, they may be ignored by novice users.

- Starting URL
SuperBot will initiate the copying procedure at this address. The URL must be in the standard format:
http://[username:password@]server[:port]/path
A username and password are necessary only if the material you are copying requires authentication (e.g. members-only web pages). Here are some examples of good URLs:
- http://members.myserver.com/secure/document.html
- http://cia.com:81/ghetto/crack/plans.doc
- http://topsecret.co.jp
- http://EliteSys:IsGreat@207.136.80.38/~elitesys/secure/
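The standard URL format above can be decomposed with Python's built-in urllib; a minimal sketch for illustration only (this is not part of SuperBot):

```python
from urllib.parse import urlsplit

def parse_starting_url(url: str) -> dict:
    """Split a URL of the form http://[username:password@]server[:port]/path
    into its components."""
    parts = urlsplit(url)
    if parts.scheme != "http":
        raise ValueError("expected an http:// URL")
    return {
        "username": parts.username,   # None if no credentials were given
        "password": parts.password,
        "server": parts.hostname,
        "port": parts.port or 80,     # default HTTP port when none is given
        "path": parts.path or "/",
    }

# The last example URL from the list above:
info = parse_starting_url("http://EliteSys:IsGreat@207.136.80.38/~elitesys/secure/")
```

Note that the username, password, and port are all optional; only the server is required.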
- Save Directory
SuperBot will save all retrieved files under this directory. For example, if you choose C:\Program Files\SuperBot as the directory, and http://www.website.com as the starting URL, the retrieved files will be saved in the C:\Program Files\SuperBot\www.website.com directory.
- Location Restrictions
- Stay in or below this directory will prevent SuperBot from retrieving any files from
above the starting URL. For example, if your starting URL is
http://www.website.com/members/index.html, this option will prevent SuperBot from following a link to http://www.website.com/main.html, because main.html is "above" the members directory.
- Stay at this server will prevent SuperBot from following links that reference other servers. For example, if your starting URL is
http://www.website.com/members/index.html, this option will prevent SuperBot from following a link to http://www.othersite.com/top.htm, because top.htm is not hosted at www.website.com.
- If you choose no location restrictions, SuperBot will follow all links, regardless of their location.
Location restrictions apply only to HTML files, not their embedded graphics and sounds. This ensures that copied pages look and act like their online counterparts.
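For readers curious how the two restrictions might be checked, here is a small Python sketch (an illustration of the rules as described, not SuperBot's actual logic):

```python
import posixpath
from urllib.parse import urlsplit

def allowed(start_url: str, link: str, mode: str) -> bool:
    """Check a link against a location restriction.
    mode: "below"  - stay in or below the starting directory
          "server" - stay at this server
          "none"   - no location restrictions"""
    if mode == "none":
        return True
    s, l = urlsplit(start_url), urlsplit(link)
    if l.hostname != s.hostname:
        return False                      # both modes require the same server
    if mode == "server":
        return True
    base = posixpath.dirname(s.path)      # directory of the starting URL
    return l.path == base or l.path.startswith(base + "/")

start = "http://www.website.com/members/index.html"
```

With this starting URL, the "below" mode rejects http://www.website.com/main.html (it is above the members directory), and the "server" mode rejects anything on www.othersite.com, matching the examples above.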
- Restrict copying depth
This restriction provides an alternate way to limit SuperBot's copying procedure. If this box is checked, the length of the link trails SuperBot can follow will be limited to the number you specify.
Examples:
- To copy a single page and the graphics embedded within it, set the maximum depth to 1.
- To copy a single page, its embedded graphics, and any files that can be reached with a single click, set the maximum depth to 2.
- To copy all pages within six clicks of the Starting URL, set the maximum depth to 7.
If this box is left unchecked, the maximum copying depth will automatically be set to 30.
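The depth numbering in the examples above corresponds to a breadth-first walk of the link graph. A sketch, with a caller-supplied get_links function standing in for the real page parser:

```python
from collections import deque

def crawl(start_url, get_links, max_depth=30):
    """Breadth-first walk of the link graph, stopping at max_depth.
    Depth 1 is the starting page itself; each click adds one level.
    get_links(url) -> list of linked URLs (supplied by the caller)."""
    seen = {start_url}
    queue = deque([(start_url, 1)])
    order = []
    while queue:
        url, depth = queue.popleft()
        order.append(url)                 # "download" this page
        if depth >= max_depth:
            continue                      # at the limit: do not follow links
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return order

# A toy link map: page "a" links to "b" and "c"; "b" links to "d".
links = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
visited = crawl("a", lambda u: links.get(u, []), max_depth=2)
```

With max_depth=2, the walk fetches "a" plus everything one click away, matching the second example above.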
- Restrict number of files
- If this restriction is enabled, SuperBot will stop when the specified number of files have
been copied, even if some links have not been followed.
- Only download files with these extensions
- Do not download files with these extensions
For even more control, you can have SuperBot ignore certain types of files. SuperBot determines a file's type by examining the filename extension. For example, if you want to skip downloading any sound files, you might type mid wav ra au in this box. If you want to ignore links to videos, you could type avi mov mpg rm in this box.
Up to eight extensions may be entered in this box, each up to eight characters long. Each extension should be separated by a single space, and periods, commas, semicolons, etc. should NOT be entered.
Under some circumstances, extension restrictions will be overridden:
- To avoid misuse of server resources, SuperBot will not follow any links to files with these
extensions: exe com cgi pl asp. Any URLs containing embedded arguments
(indicated by a ?, ; or =) will also be ignored.
- SuperBot ignores this restriction when downloading web pages (htm html phtml shtml).
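Taken together, the extension rules above amount to a small filter. The following Python sketch illustrates those rules as described in this section; it is not SuperBot's code:

```python
import posixpath
from urllib.parse import urlsplit

ALWAYS_SKIP = {"exe", "com", "cgi", "pl", "asp"}   # server resources
ALWAYS_FETCH = {"htm", "html", "phtml", "shtml"}   # web pages

def should_download(url: str, skip_exts: set) -> bool:
    """Apply the extension rules: URLs with embedded arguments and
    server-side extensions are always skipped; web pages are always
    fetched; otherwise the user's skip list decides."""
    if any(c in url for c in "?;="):
        return False                       # embedded arguments
    ext = posixpath.splitext(urlsplit(url).path)[1].lstrip(".").lower()
    if ext in ALWAYS_SKIP:
        return False
    if ext in ALWAYS_FETCH:
        return True
    return ext not in skip_exts
```

For example, a .html page is fetched even if "html" appears in the skip list, while a .wav file listed in the skip box is ignored.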
- Download inline pictures and sounds
- If this option is enabled, SuperBot will download page background graphics, and other embedded pictures, sounds, and videos. Otherwise, those links will be rewritten to point to the online copies of those files.
- Update older files
- SuperBot can update any files that have been modified since the last time they were downloaded. If this option is disabled, SuperBot will always keep the local copy of a file, even if the online version has changed.
- Allow authentication
- If you are copying a passworded site, this option must be enabled. (You must also enter your username and password in the Starting URL).
This option will allow SuperBot to follow links with embedded usernames and passwords; you may want to enable the Stay at this server restriction to keep SuperBot out of restricted areas.
- Take naps between downloads
- To avoid monopolizing the resources of a web server, SuperBot can pause for 3 seconds between each download. If this option is disabled, SuperBot will download files as fast as your network resources allow.
- Ignore robot META tags
- SuperBot will ignore the NOINDEX and NOFOLLOW ROBOT META request tags of web page authors if this option is enabled. For more information on these special HTML tags, see the HTML Authors' Guide to the Robots META Tag.
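A page's robots META directives can be collected with Python's standard html.parser; a sketch of the kind of check this option disables (not SuperBot's implementation):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect directives from <meta name="robots"> tags,
    e.g. NOINDEX and NOFOLLOW."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            content = a.get("content", "")
            self.directives |= {d.strip().upper() for d in content.split(",")}

page = '<html><head><meta name="ROBOTS" content="noindex, nofollow"></head></html>'
p = RobotsMetaParser()
p.feed(page)
```

A crawler that honors these tags would skip saving a NOINDEX page and would not follow links from a NOFOLLOW page.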
- Agent Identification Name
This is the name that SuperBot uses to identify itself when downloading files from Internet servers.
Pausing, Stopping, and Restarting SuperBot

- If you need to temporarily disconnect from the Internet...
- Press the Pause button. When all of SuperBot's clones have fallen asleep, it's safe to disconnect from the Net.
To continue the copying procedure, reconnect to the Internet and release the Pause button.
- If you need to turn off your computer, but want to resume copying some other time...
- Press Pause. Once the clones are asleep, press Stop. This will ensure that SuperBot does not save any partially downloaded or corrupted files.
To resume the copying procedure, reconnect to the Internet and restart SuperBot, using the same Starting URL and Save Directory.
- If you want to abort immediately...
- Press the Stop button, close the program window, or disconnect your computer from the Internet. SuperBot will stop all file transfers, even if some files have not been completely downloaded. A website copy cannot be resumed if it is terminated in this manner; if you want to stop SuperBot gracefully, use one of the previous two methods.
Using the Clipboard Monitoring Feature
Registered copies of SuperBot can automatically scan the Windows clipboard for URLs, and enable the download of sites with a single click. To coordinate SuperBot's monitoring feature with your browsing session:
- Start SuperBot.
- Start your web browser.
- Using your browser, surf to the desired site or link.
- Internet Explorer users: Right-click the Address and choose Copy, or right-click a link and select Copy Shortcut.
Netscape Navigator users: Right-click the Location and choose Copy, or right-click a link and select Copy Link Location.
SuperBot's Copy web pages... box will appear. Press OK to copy the site, or Cancel to ignore it.
To disable clipboard monitoring in registered copies of SuperBot, use the /nomonitor command-line option.
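The URL-detection half of this feature can be sketched with a regular expression. The pattern below is an assumption for illustration; reading the Windows clipboard itself requires platform APIs not shown here:

```python
import re

# Assumed pattern: an http:// or https:// URL up to the next whitespace.
URL_RE = re.compile(r"https?://\S+")

def find_url(clipboard_text: str):
    """Return the first URL found in a piece of clipboard text, or None."""
    m = URL_RE.search(clipboard_text)
    return m.group(0) if m else None
```

A monitoring loop would poll the clipboard, pass its text through a check like this, and offer to copy any site it finds.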
Troubleshooting

- Every time I try to copy a site, SuperBot says "I was unable to download or update...". Why?
- What is the meaning of "error code 87"?
SuperBot uses Internet components that were not distributed with the original retail versions of Windows 95 or Windows NT. These components may be added to your system by installing Internet Explorer 4.0 SP1 or above (the IE 5.01 Active Setup download is 509 Kbytes).
- Why don't the Java applets / Javascripts / VBScripts / ActiveX controls work on my site copy?
Applets, controls, and scripts are too complex to remap in a consistent and secure manner, so SuperBot usually just leaves them alone. Some applets, controls, and scripts will continue to work in your copies; some will not.
- Why isn't SuperBot using my proxy server?
SuperBot uses your default proxy server settings, accessible from the Internet...Connection section of the Windows Control Panel. Make sure these settings are correct.
- Why didn't SuperBot download all the files at the site(s)?
If there is a downloading error on any particular file, SuperBot will modify the corresponding link to point to the online version of that file. Try again with the same settings, and SuperBot will attempt to download any missing files and update your site copy.
If you have a question that isn't answered by this manual, please contact EliteSys.
Visit EliteSys, home of SuperBot
Copyright ©1999 EliteSys. All rights reserved.