Mail Archives: cygwin/2009/02/05/10:20:35
Sorry. Take 2.
On Thu, Feb 5, 2009 at 15:10, Julio Emanuel wrote:
> On Thu, Feb 5, 2009 at 14:55, TAJTHY Tamás wrote:
>> Dear All,
>>
>> I have a problem. I'd like to get HTML pages, but not their plain
>> sources. If a page has embedded JavaScript that generates HTML code, I
>> need the resulting HTML code.
NOW I read that :)
>> Now I just run a perl script which
>> launches Firefox and I copy the resulting page to the clipboard. But
>> this is not a nice solution, as I cannot detect when Firefox has
>> finished downloading and processing the page.
>>
>> Is there a library which can do this? Can anyone give some help on how
>> I can solve this?
>>
>
> wget should do the trick, if I understood correctly what you are trying to do.
>
Nah. Forget it. Don't think it will help you.
> If you want to get fancy :), do your browsing in the console window with w3m.
> It can also get your sources non-interactively (see -dump_source option).
>
Maybe this helps. If the page displays correctly, then all you have to do is
run w3m again with -dump_source after viewing it.
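For example, a non-interactive fetch with w3m might look like this (the URL is just a placeholder; note that w3m does not execute JavaScript, so this dumps the page as served):

```shell
# Dump the raw page source non-interactively with w3m.
# http://example.com is a placeholder URL.
w3m -dump_source http://example.com > page.html
```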
Another reference I can quote is curl, but I don't have any experience with that.
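A minimal curl invocation for the same job might be (again a placeholder URL; like w3m, curl only fetches the raw source and does not run any JavaScript):

```shell
# Save the raw page source to a file; -s silences the progress meter.
curl -s -o page.html http://example.com
```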
Google a little with these references, and maybe you'll get lucky.
Or, wait a little longer, and maybe a TRUE guru shows up :)
Sorry for the noise.
> Have fun!
> ___________
> Julio Costa
>
--
Unsubscribe info: http://cygwin.com/ml/#unsubscribe-simple
Problem reports: http://cygwin.com/problems.html
Documentation: http://cygwin.com/docs.html
FAQ: http://cygwin.com/faq/