| Mailing-List: | contact cygwin-help AT cygwin DOT com; run by ezmlm |
| List-Subscribe: | <mailto:cygwin-subscribe AT cygwin DOT com> |
| List-Archive: | <http://sources.redhat.com/ml/cygwin/> |
| List-Post: | <mailto:cygwin AT cygwin DOT com> |
| List-Help: | <mailto:cygwin-help AT cygwin DOT com>, <http://sources.redhat.com/ml/#faqs> |
| Sender: | cygwin-owner AT cygwin DOT com |
| Mail-Followup-To: | cygwin AT cygwin DOT com |
| Delivered-To: | mailing list cygwin AT cygwin DOT com |
| From: | "Joaquin" <winminion AT realmspace DOT com> |
| To: | <cygwin AT cygwin DOT com> |
| Subject: | RE: wget, continued download |
| Date: | Fri, 16 Jan 2004 07:37:17 -0800 |
| Message-ID: | <009d01c3dc46$9fd0d270$c901a8c0@macross> |
| In-Reply-To: | <3FF47733.9040807@hack.kampbjorn.com> |
| X-IsSubscribed: | yes |
> > I would use perl and Net::HTTP for this. But then I'm
> > familiar with both.
>
Really. You spider the site with that?

One problem I always had with wget is that it only gathers URLs from HTML. I wanted it to support at least CSS, and maybe even JavaScript/VBScript URLs.
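For reference, here is a minimal sketch of what such a Net::HTTP-based approach might look like: fetch a stylesheet and scrape the url(...) and @import references out of it, which is exactly the kind of URL wget misses. The host and path (example.com, /style.css) are placeholders, and the regexes are only a rough illustration, not a real CSS parser.

    #!/usr/bin/perl
    # Sketch: fetch a stylesheet with Net::HTTP and list the URLs it references.
    # Host/path are placeholders; the regexes are not a full CSS parser.
    use strict;
    use warnings;
    use Net::HTTP;

    my $host = "example.com";     # placeholder host
    my $path = "/style.css";      # placeholder stylesheet path

    my $s = Net::HTTP->new(Host => $host) or die $@;
    $s->write_request(GET => $path, 'User-Agent' => "mini-spider/0.1");
    my ($status, $mess, %headers) = $s->read_response_headers;
    die "GET $path failed: $status $mess\n" unless $status == 200;

    # Slurp the response body.
    my $css = "";
    while (1) {
        my $buf;
        my $n = $s->read_entity_body($buf, 4096);
        die "read failed: $!" unless defined $n;
        last unless $n;
        $css .= $buf;
    }

    # Collect url(...) and @import "..." references -- the URLs wget would miss.
    my %seen;
    while ($css =~ /url\(\s*['"]?([^'")\s]+)['"]?\s*\)/gi) {
        $seen{$1} = 1;
    }
    while ($css =~ /\@import\s+['"]([^'"]+)['"]/gi) {
        $seen{$1} = 1;
    }

    print "$_\n" for sort keys %seen;

A full spider would resolve these against the stylesheet's base URL and feed them back into the fetch queue; this only shows the extraction step.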