I want somebody to program a proxy crawler.
The crawler should run on Linux machines.
The crawler should search for different search terms that come from a database, for example "socks proxy", on Google, Yahoo, and so on.
Then it should take the results of the search query and save them to a database.
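To illustrate the flow so far, here is a minimal sketch of how the search terms and result URLs could be stored. The table and column names are my assumptions, not a fixed spec, and the result URLs are placeholders standing in for a real Google/Yahoo query:

```python
import sqlite3

# Hypothetical schema: search terms live in one table,
# search-result URLs are saved to another.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE search_terms (id INTEGER PRIMARY KEY, term TEXT)")
conn.execute(
    "CREATE TABLE search_results (id INTEGER PRIMARY KEY, term_id INTEGER, url TEXT)"
)
conn.execute("INSERT INTO search_terms (term) VALUES (?)", ("socks proxy",))

def save_results(term_id, urls):
    """Save the URLs returned by a search engine for one term."""
    conn.executemany(
        "INSERT INTO search_results (term_id, url) VALUES (?, ?)",
        [(term_id, u) for u in urls],
    )

# Placeholder results standing in for a real search-engine response.
save_results(1, ["http://example.com/proxy-list", "http://example.org/free-proxies"])

for term_id, term in conn.execute("SELECT id, term FROM search_terms"):
    count = conn.execute(
        "SELECT COUNT(*) FROM search_results WHERE term_id = ?", (term_id,)
    ).fetchone()[0]
    print(term, count)
```

A real implementation would replace the placeholder list with the parsed result links from each search engine.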
Then the crawler should open the search results from the database and fetch the content of each website (only the HTML code).
Now the program should search for proxies on this website, for example [url removed, login to view] or [url removed, login to view] 1234.
These proxies should be saved to the database.
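The extraction step above could be sketched with a regular expression. Since the exact proxy formats were removed from the posting, the two patterns handled here (colon-separated `ip:port` and space-separated `ip port`) are assumptions based on the two examples given:

```python
import re

# Matches both "1.2.3.4:8080" and "1.2.3.4 8080" style listings;
# the accepted formats are assumptions, not a confirmed spec.
PROXY_RE = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})[\s:](\d{2,5})\b")

def extract_proxies(html):
    """Return (ip, port) pairs found in raw HTML text."""
    found = []
    for ip, port in PROXY_RE.findall(html):
        # Discard octets above 255 and ports outside the valid range.
        if all(int(o) <= 255 for o in ip.split(".")) and 0 < int(port) <= 65535:
            found.append((ip, int(port)))
    return found

html = "<td>10.0.0.1:3128</td> <td>192.168.1.5 8080</td> <td>999.1.1.1:80</td>"
print(extract_proxies(html))
# → [('10.0.0.1', 3128), ('192.168.1.5', 8080)]
```

The pairs returned here would then be inserted into the proxies table, deduplicated against what is already stored.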
The crawler should be able to follow links on the website.
It would be great if you could specify how deep the crawler should go and whether it should follow links to other websites.
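The depth limit and the same-site restriction could both be parameters of the crawl loop. A minimal sketch, with a stubbed page graph in place of real HTTP fetching and HTML parsing (those parts are assumptions to keep the example self-contained):

```python
from collections import deque
from urllib.parse import urlparse

# Stub standing in for fetching a page and extracting its links.
PAGES = {
    "http://a.example/": ["http://a.example/list", "http://b.example/"],
    "http://a.example/list": ["http://a.example/deep"],
    "http://a.example/deep": [],
    "http://b.example/": [],
}

def crawl(start, max_depth, follow_external=False):
    """Breadth-first crawl up to max_depth link hops from `start`."""
    start_host = urlparse(start).netloc
    seen, queue, visited = {start}, deque([(start, 0)]), []
    while queue:
        url, depth = queue.popleft()
        visited.append(url)
        if depth == max_depth:
            continue  # depth limit reached, do not expand further
        for link in PAGES.get(url, []):
            if not follow_external and urlparse(link).netloc != start_host:
                continue  # configurable: skip links to other websites
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return visited

print(crawl("http://a.example/", max_depth=1))
# → ['http://a.example/', 'http://a.example/list']
```

With `follow_external=True` the crawler would also visit `http://b.example/`; both knobs match the configuration you asked to have specified.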
There are still some details, but this is the basic task.