We need to create an automated crawler that will log into the [url removed, login to view] website and download resumes from saved searches.
Walk-through of the process, step by step:
Step 1: Login
Log in using the login name/password stored in a config file.
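For illustration, a minimal Python sketch of this step. The login URL and form field names are hypothetical placeholders, since the real site URL was removed from this posting; the config file layout (a crawler.ini with a [credentials] section) is also an assumption.

    # Step 1 sketch: read credentials from a config file and log in.
    import configparser
    import requests

    config = configparser.ConfigParser()
    config.read("crawler.ini")  # assumed file name and [credentials] section

    session = requests.Session()
    resp = session.post(
        "https://example.com/login",  # placeholder; real URL removed above
        data={
            "username": config["credentials"]["login"],
            "password": config["credentials"]["password"],
        },
    )
    resp.raise_for_status()  # session now carries the authentication cookies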
Step 2: Once successfully logged in, run the saved search
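Continuing the sketch above, the authenticated session is reused to open a saved search. The URL and query parameter are again placeholders, to be replaced after inspecting the real site.

    # Step 2 sketch: request a saved-search results page with the same session.
    search_resp = session.get(
        "https://example.com/saved-searches/run",  # placeholder URL
        params={"id": 1},  # hypothetical saved-search identifier
    )
    search_resp.raise_for_status()
    html = search_resp.text  # result-list page, parsed and stored in Step 3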
Step 3: Crawl the result pages and store the result list and each individual resume in a MySQL table
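A minimal sketch of the MySQL side, assuming the mysql-connector-python driver and a simple two-table schema (result list plus resumes). The posting does not specify a schema, so table and column names here are assumptions.

    # Step 3 sketch: store the result list and each resume in MySQL.
    import mysql.connector

    db = mysql.connector.connect(
        host="localhost", user="crawler", password="secret", database="resumes"
    )
    cur = db.cursor()
    cur.execute("""
        CREATE TABLE IF NOT EXISTS search_results (
            id INT AUTO_INCREMENT PRIMARY KEY,
            search_name VARCHAR(255),
            result_url  VARCHAR(1024),
            crawled_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )
    """)
    cur.execute("""
        CREATE TABLE IF NOT EXISTS resumes (
            id INT AUTO_INCREMENT PRIMARY KEY,
            result_id INT,
            raw_html  MEDIUMTEXT,
            FOREIGN KEY (result_id) REFERENCES search_results(id)
        )
    """)

    def store_resume(search_name, result_url, raw_html):
        """Insert one result-list entry and the resume page fetched from it."""
        cur.execute(
            "INSERT INTO search_results (search_name, result_url) VALUES (%s, %s)",
            (search_name, result_url),
        )
        cur.execute(
            "INSERT INTO resumes (result_id, raw_html) VALUES (%s, %s)",
            (cur.lastrowid, raw_html),
        )
        db.commit()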
* Define a step/crawl interval (the frequency between page navigations) to avoid getting banned from the site; see the throttling sketch below.
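One way to implement the crawl interval is to sleep a randomized delay after every page fetch, so successive navigations look human-paced. The delay bounds below are illustrative defaults, not values from this posting.

    # Throttling sketch: pause a random interval between page navigations.
    import random
    import time

    def polite_get(session, url, min_delay=5.0, max_delay=15.0):
        """Fetch a page, then sleep for the configured crawl interval."""
        resp = session.get(url)
        resp.raise_for_status()
        time.sleep(random.uniform(min_delay, max_delay))  # crawl interval
        return resp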