A beginner's problem
|imbeginner||12/15/2006 12:22 pm|
|Hello, I have been using Offline Explorer Pro for the last three days. I have spent hours searching the help files and some time on this website's forum, but I haven't found a solution to a simple, basic problem. Please guide me, because it seems I am completely lost with this software. Here's the problem:
I wished to download ALL links beginning with
and no links at all that do not start with that URL.
In fact, when I tried to download, by simply entering the above URL in the New Project option with the default settings, not only did I get web pages not starting with this URL, but I also did not get all of the web pages that do start with it. I had to check the 100 pages manually to find the fifteen pages that were not downloaded.
Also, I am not good at understanding filters and parsing, so I would be very grateful if you could specify the exact configuration I need to set before starting the project.
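For readers wondering what such a "this URL and nothing else" restriction amounts to, here is a minimal sketch of a prefix filter. The starting URL below is a made-up placeholder (the poster's real address was not preserved in the thread), and the function name is ours, not an Offline Explorer setting:

```python
# Sketch of the "only URLs starting with the project URL" rule.
# START is a hypothetical placeholder, not the poster's real address.
START = "http://example.com/docs/gdebgb"

def should_download(url: str, prefix: str = START) -> bool:
    """Keep a link only if it begins with the project's starting URL."""
    return url.startswith(prefix)

links = [
    "http://example.com/docs/gdebgb1",
    "http://example.com/docs/gdebgb94",
    "http://example.com/other/page",   # outside the prefix: skipped
]
wanted = [u for u in links if should_download(u)]
```

In Offline Explorer itself this corresponds roughly to the URL Filters settings; the sketch only illustrates the rule being asked for.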
|Oleg Chernavin||12/17/2006 12:38 pm|
|The best way is to use the Wizard: it allows you to limit loading to the starting URL. Also, don't forget not to allow images to be loaded from anywhere.
|imbeginner||12/17/2006 01:53 pm|
|But that is exactly what I had done before. If possible, please give it a try yourself: links not beginning with the exact address mentioned above also get downloaded, and you will find plenty of missing files too. It won't take you much time to try this.|
|Oleg Chernavin||12/18/2006 07:18 am|
|I made a Project for you that loads what you want. Please find it using the Tools - Published Projects - Information Technology section.
|imbeginner||12/18/2006 01:46 pm|
|Thank you for the help, Mr. Oleg.
Though the Project looked perfect, unfortunately it failed to work.
I could only download the following :
The site has all the files from /gdebgb1 to /gdebgb100 at least. In fact, the above list shows the URLs that were in the queue; URLs other than these never even entered the queue. Also, whenever I click Update or Restart on the project, I get this in the download progress window:
Download Complete. Status 302 Object Moved. I even went to the website to check whether all the pages were still there, and I found that they were.
I would be glad if you could try to download all the pages once, because in all there are 100 pages of about 20 KB each, which means about 2 MB in total.
|Oleg Chernavin||12/18/2006 02:52 pm|
|Can you increase the Level of the Project to 5 and start the download? It should get all links in the linked files.
|imbeginner||12/18/2006 03:26 pm|
|I can see it working exactly the way I wanted it to work. Many, many thanks to you.
However, I can't understand why only certain URLs were downloaded, like the list I sent you before. After all, the rest (which were not downloaded) also did not seem to be linked from inside another link.
(I mean, why should /gdebgb94 require a different setting than /gdebgb2?) How did you determine that the level limit should be increased? I ask because the answer may help me with future projects.
|Oleg Chernavin||12/19/2006 03:13 am|
|If you set Level=1, Offline Explorer loads only the links listed on the starting page. Increasing the level forces it to also collect the links found on those other pages.
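Oleg's point about levels can be pictured as a depth-limited crawl: a page that is linked only from another downloaded page, not from the starting page, is invisible at Level 1. The sketch below is an illustrative model with a toy link graph, not Offline Explorer's actual code or the real site's structure:

```python
from collections import deque

# Toy link graph: some pages are reachable only through other pages
# (purely illustrative; not the real site's layout).
LINKS = {
    "/start":    ["/gdebgb1", "/gdebgb2"],
    "/gdebgb1":  ["/gdebgb50"],
    "/gdebgb2":  [],
    "/gdebgb50": ["/gdebgb94"],
    "/gdebgb94": [],
}

def crawl(start: str, max_level: int) -> set:
    """Breadth-first crawl that stops following links past max_level."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        page, level = queue.popleft()
        if level >= max_level:
            continue  # this page is downloaded, but its links are not followed
        for link in LINKS.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append((link, level + 1))
    return seen
```

With `max_level=1` only the start page's own links are collected; raising the level to 5 also pulls in pages like /gdebgb94 that are only reachable through intermediate pages, which matches the behavior described in the thread.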
|imbeginner||12/19/2006 12:08 pm|
|As I said earlier, all the links that I wanted were downloaded. However, almost 50 of the 200 links were not downloaded completely. Each file should have been about 20 KB, but 50 of the links were only 6 KB or 8 KB. I had to find all 50 of them manually and put them into a project to redownload them. This time, 7 links were not completely downloaded, so again I started a fresh project for those seven. I had set only 5 connections, and the timeout was 120 seconds.
Why is there no error report about partially downloaded links?
Is there a way to find these partially downloaded links and have them redownloaded automatically?
|Oleg Chernavin||12/19/2006 12:38 pm|
|This is because the server doesn't report the file size, so Offline Explorer has to guess whether a file is complete or not. There is a setting - Check Files Integrity (Properties - Advanced) - that helps in most cases, but not always.
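When the server sends no size information, completeness can only be guessed after the fact. A common heuristic for HTML is to flag files that are suspiciously small or missing the closing </html> tag. Here is a hedged sketch of such a scan; the folder layout, size threshold, and function name are our assumptions, not Offline Explorer settings:

```python
from pathlib import Path

def find_suspect_pages(folder: str, min_bytes: int = 15_000) -> list:
    """Flag downloaded HTML files that look truncated: smaller than the
    size we expect, or missing the closing </html> tag."""
    suspects = []
    for path in Path(folder).glob("*.html"):
        data = path.read_bytes()
        if len(data) < min_bytes or b"</html>" not in data.lower():
            suspects.append(path.name)
    return sorted(suspects)
```

For the situation in the thread (pages known to be about 20 KB), a threshold like 15 KB would catch the 6 KB and 8 KB fragments; the heuristic is crude, which mirrors why the built-in integrity check cannot be perfect either.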
|imbeginner||12/19/2006 01:24 pm|
|Is this a limitation? Imagine the trouble if someone downloads a website containing 1000 pages. Is he supposed to collect all the partial links manually and download them again? And an even bigger question: is he supposed to know the approximate file size in advance, just to be spared the trouble of opening every single link to see whether it was fully downloaded? Is the situation the same if I use Mass Downloader or any other product? If there is no good answer to these questions, then it seems that offline browsing is not a reality yet.|
|Oleg Chernavin||12/19/2006 04:59 pm|
|In most cases there are no problems with broken files. If a server is slow or unreliable, it is possible to set one or two connections and a delay between downloads to make sure that the files definitely download completely.
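The "fewer connections plus a delay" advice amounts to throttled, sequential fetching with retries. A generic sketch using Python's standard library follows; the retry count and delay are arbitrary illustrative choices, not Offline Explorer defaults:

```python
import time
import urllib.request

def fetch_with_retries(url: str, tries: int = 3, delay: float = 2.0) -> bytes:
    """Download one URL sequentially, pausing between attempts so a
    slow or unreliable server is not overwhelmed."""
    last_error = None
    for _attempt in range(tries):
        try:
            with urllib.request.urlopen(url, timeout=120) as resp:
                return resp.read()
        except OSError as err:   # URLError is a subclass of OSError
            last_error = err
            time.sleep(delay)    # back off before retrying
    raise last_error
```

Fetching pages one at a time with a pause trades speed for reliability, which is exactly the trade Oleg is suggesting for servers that drop connections mid-transfer.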
|imbeginner||12/20/2006 01:04 pm|
|Thanks. I will try that. However, if you ever come across such a solution (especially one that detects incomplete links, let alone redownloads them), please email me at email@example.com
Thanks for all the prompt replies. It was almost like being in an online chat with you. I have never seen such fast responses anywhere. Thanks a lot.
|Oleg Chernavin||12/21/2006 05:01 am|
|Yes, we are always looking for ways to improve. Thank you for your kind words! We like to provide almost "live" support.