Every time I run it, it successfully downloads 20-30 pages and then can no longer connect to the same website. When I then try to use the browser built into OE to connect to the internet, it also does not connect, although sometimes it connects for a few seconds afterwards and then dies.
I have tried:
1) Reducing the speed
2) Reducing the number of connections
3) Different websites
4) Doing it with my firewall on and off
5) Uninstalling and re-installing
6) Turning off the built-in server
7) Various other tweaks.
Additionally, and obviously, I am able to browse outside of OE throughout the process.
Any idea why this is happening?
That seemed to fix it!
No - after checking it again, it started working.
I have another question - is there something I am missing regarding the filename filters?
For example, if there is a URL like this:
How do I prevent the software from downloading these pages?
I have tried adding:
to the filename exclude list, but it still downloads it.
Do I have to stop and restart for the filter to take effect, even though I click Apply? I've tried pausing and restarting, but it still doesn't exclude these pages.
Please add it to the URL Filters - Filename - Excluded list.
Please also check File Filters - Text: its Location field should be set to "Load using URL Filters settings".
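For what it's worth, a filename exclusion list like this typically behaves as a set of wildcard masks matched against the filename part of each URL. A minimal sketch of the idea in Python (the masks and URLs here are hypothetical examples, not OE's actual implementation):

```python
from fnmatch import fnmatch

def is_excluded(url, exclude_masks):
    """Return True if the URL's filename part matches any exclusion mask."""
    filename = url.rsplit("/", 1)[-1]  # part after the last slash
    return any(fnmatch(filename, mask) for mask in exclude_masks)

# Hypothetical masks and URLs, for illustration only
masks = ["print*", "*session=*"]
print(is_excluded("http://example.com/news/print_article.html", masks))  # True
print(is_excluded("http://example.com/news/article.html", masks))        # False
```

If a mask like this still doesn't exclude the page, the usual suspects are a mask that doesn't actually match the filename part, or files that were already queued before the filter was applied.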
If it is already in the queue, does the filter not apply? I've done what you said, but it still loads.
If you want to get rid of these URLs in the Queue, press F9 to pause, go to the Queue tab, right-click and choose Select By Mask. Then abort the selected files.
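Select By Mask is essentially a wildcard match over the queued URLs: everything matching the mask gets selected, and aborting removes it from the queue. A toy sketch of that behavior (the queue contents and mask are hypothetical):

```python
from fnmatch import fnmatch

def select_by_mask(queue, mask):
    """Return the queued URLs matching a wildcard mask (to be aborted)."""
    return [url for url in queue if fnmatch(url, mask)]

# Hypothetical queue, for illustration only
queue = [
    "http://example.com/index.html",
    "http://example.com/print/page1.html",
    "http://example.com/print/page2.html",
]
to_abort = select_by_mask(queue, "*/print/*")
remaining = [url for url in queue if url not in to_abort]
print(to_abort)    # the two /print/ URLs
print(remaining)   # ['http://example.com/index.html']
```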
Say I want to just download images from a site.
Under File Filters, if I uncheck everything except Images, it spiders the whole site and only downloads images.
If I check Text and enable "Load using URL Filters settings", it downloads the actual HTML pages too. If I uncheck the document extension list in the File Filters - Text tab, it doesn't even spider those pages for links to find images.
So how do I use URL Filters to prevent the program from spidering certain pages, yet still spider the rest of the pages for images, without downloading any HTML documents?
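The distinction being asked about here is between pages the crawler *parses* for links and files it actually *saves*. A toy sketch of that three-way split (hypothetical logic for illustration, not OE's actual code):

```python
from fnmatch import fnmatch

IMAGE_EXTS = (".jpg", ".jpeg", ".png", ".gif")

def handle(url, skip_masks):
    """Decide what to do with a URL: skip it, save it, or parse it for links."""
    if any(fnmatch(url, mask) for mask in skip_masks):
        return "skip"        # excluded pages: neither spidered nor saved
    if url.lower().endswith(IMAGE_EXTS):
        return "save"        # images are downloaded and kept
    return "parse-only"      # HTML is spidered for links but not kept

# Hypothetical URLs and exclusion mask
skip = ["*/admin/*"]
print(handle("http://example.com/photos/cat.png", skip))   # save
print(handle("http://example.com/index.html", skip))       # parse-only
print(handle("http://example.com/admin/users.html", skip)) # skip
```

The point is that the exclusion masks and the "save" rules are independent: a page can be fetched and spidered for image links without ever being written to disk.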
It should work.