Selenium webdriver job
...Goals – Capture the full product name, current price, original price (if any), key attributes, shop name, units sold, likes, rating score, and image URLs. – The crawler must walk every page in the category: nothing skipped, no duplicate records. – Results exported to CSV / Excel; each field in its own column, with Vietnamese text free of encoding issues. Technical requirements – Python preferred, using Scrapy, Requests + BeautifulSoup, or Selenium if needed to get past Shopee's anti-bot measures. – Clean code with short, readable comments; I need the full source plus instructions for running it on Windows (a README file is fine). &nd...
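One detail worth settling early for the "Vietnamese text free of encoding issues" requirement: Excel on Windows misreads plain UTF-8 CSVs, but writing with the `utf-8-sig` codec (UTF-8 with a BOM) keeps diacritics intact. A minimal sketch, where the field names and sample row are assumptions for illustration:

```python
import csv

# Hypothetical rows as the crawler might collect them; the real fields
# would come from the Shopee category pages.
rows = [
    {"ten_san_pham": "Áo thun nam", "gia": 99000, "shop": "Shop ABC"},
]

# utf-8-sig prepends a BOM so Excel on Windows renders Vietnamese correctly.
with open("products.csv", "w", newline="", encoding="utf-8-sig") as f:
    writer = csv.DictWriter(f, fieldnames=["ten_san_pham", "gia", "shop"])
    writer.writeheader()
    writer.writerows(rows)
```

The same `encoding="utf-8-sig"` argument applies regardless of whether the rows come from Scrapy, BeautifulSoup, or Selenium.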
I need a tool built with Selenium and Python to automate the workflow for these 4 tasks: creating video from images (image to video), creating video from prompts (text to video), creating video with a fixed character (subject reference), and creating images from a prompt (image generation). Detailed logic to be discussed further.
Write a program that tests an e-commerce website using Selenium WebDriver with Python, covering a few functions such as login, add to cart, and deleting/editing products. Each function should have 3-4 test cases, and the pass/fail result of every test case must be exportable to Excel.
Looking to hire someone to write AutoIT code that uses WebDriver to drive Chrome automatically. ==== Input data: - A TXT file of target links to click. - A TXT file of preferred random links to use once the target page is reached. - A TXT file of intermediary websites. - An API endpoint connecting to the website that holds the IP data. - Hard-coded fingerprint data for each configuration, a different fingerprint per configuration: canvas, audio, WebRTC, client rects, WebGL, timezone, resolution, location, video output, audio input, RAM, CPU, etc. (to fake the visiting profile's information). Bot behaviour: - Use WebDriver to launch a new Chrome instance with NEW fingerprint parameters...
I'm looking for someone to tutor me in using Python for web scraping. I'd prefer candidates with experience using the Selenium library. ($10/hour)
Write a tool for a stock-exchange trading platform to my requirements. The tool is simple, nothing complex; the important part is that once the tool is finished you teach me how the code was written.
Automatically register Facebook accounts using Chrome and Selenium. Must support multiple concurrent tasks and arrange the Chrome windows neatly on screen.
Gear Inc. is looking for a QA Automation Engineer who can work on-site at our Hanoi office for 2 months, during office hours. Requirements: - Able to work office hours at the Hanoi office - At least 3 years of experience in automation testing - Knowledge of HTML, CSS, JavaScript, etc. - Experience with automation testing of web-based and mobile applications, using automation tools such as Selenium, Appium, Cucumber, etc. Benefits: - Competitive negotiable salary, up to $2000/month for freelance work - Opportunity for long-term cooperation and becoming a full-time employee of the company (Remo...
A Singapore start-up is hiring: Ruby on Rails developer: Develop the back-end system and support new features. 3+ years of backend development experience with Ruby on Rails (RoR). Expertise in RoR and web technologies. Able to solve complex problems. TDD testing experience (RSpec, Capybara, Selenium). Develop and upgrade the app's backend based on the overall architecture, adding new features to the app. Fast and eager learner. Able to communicate in English. Front-end developer: 2+ years of experience with JavaScript and front-end software development. Skills: JavaScript, HTML5, CSS3, web development languages and ...
I need a straightforward Python script that signs in to a Hotmail / account with Selenium (or another reliable browser-automation library)
I need end-to-end testing for my web application. The project code is already written, so you will be focusing solely on testing. Requirements: - Thoroughly test on Chrome - Ensure all workflows function as intended - Identify and document any bugs or issues Ideal Skills: - Experience with end-to-end testing tools (e.g., Selenium, Cypress) - Strong attention to detail - Familiarity with web applications and browser testing Please provide a brief overview of your testing experience and any relevant tools you plan to use.
I have a public-facing website that I need scraped end-to-end. The site is open (no login), but the content is split across multiple pages, so your script will have to detect and follow pagination automatically. Here is exactly what I expect: • A clean, well-commented Python script (requests/BeautifulSoup, Scrapy, or Selenium—your choice) that visits every page, captures the required fields, and writes them to a neatly structured CSV. • The final CSV containing all rows pulled from the site. • A short README that tells me how to run the script and change the target URL or output path if needed. Code quality matters to me: no hard-coded absolute paths, clear variable names, and graceful error handling so the run doesn’t stop if a single page fa...
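The "detect and follow pagination automatically" requirement above reduces to one loop, whatever parser is chosen. A minimal sketch with the page-fetching and HTML-parsing pieces injected as functions (so the loop works the same whether they are backed by requests/BeautifulSoup or Selenium; the callables here are assumptions, not a specific site's markup):

```python
from urllib.parse import urljoin

def follow_pagination(start_url, fetch, parse_rows, find_next):
    """Generic crawl loop: fetch each page, collect its rows, then follow
    the 'next' link until there isn't one. fetch/parse_rows/find_next are
    injected so the loop itself stays testable without a network."""
    rows, url, seen = [], start_url, set()
    while url and url not in seen:
        seen.add(url)          # guard against pagination loops
        html = fetch(url)
        rows.extend(parse_rows(html))
        nxt = find_next(html)  # e.g. the href of a rel="next" anchor
        url = urljoin(url, nxt) if nxt else None
    return rows
```

With BeautifulSoup, `find_next` might return `soup.select_one("a.next")["href"]` when present; `urljoin` then resolves relative hrefs against the current page.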
I...framework into a CI pipeline. • Walk-throughs of flaky-test triage, mocking external dependencies, and debugging failures that only show up in complex environments. • Short take-home exercises or sample repositories that reinforce each lesson, plus code reviews so I know I’m applying the patterns correctly. Although my main interest is integration testing, I’m flexible on specific tooling: Selenium, NUnit, SpecFlow or whichever stack you feel showcases best practices in C# automation. The important part is understanding why a tool is chosen and how to extend or swap it later. Please outline your preferred tools, how you normally structure a learning path, and roughly how many hours you expect we’ll need to reach a self-sufficient framework and...
...support release cycles. • Maintain test documentation and contribute to continuous improvement of QA processes. Required Skills & Qualifications Core QA Skills • 3+ years of experience in software testing, preferably in cybersecurity or networking domains. • Strong understanding of QA methodologies: black-box, white-box, and grey-box testing. • Experience with test automation frameworks (e.g., Selenium, PyTest, Postman). • Familiarity with reverse proxy tools (NGINX, HAProxy, Envoy) and network traffic analysis. • Hands-on experience with Linux environments and shell scripting. • Graduate from a Top 50 engineering college in India as per NIRF 2025 ranking. Cloud & Security Awareness • Testing in cloud platforms (AWS, Azure, GCP) ...
...implement a robust test automation architecture with Maven/Gradle and CI/CD pipelines; Page Object Model (POM) or a similar design pattern; reporting (Allure/Extent Reports or similar); reusable and maintainable test scripts; documentation for setup and usage. Required Skills: experience in Playwright with Java; understanding of Selenium/automation concepts; TestNG/JUnit; CI/CD integration (Jenkins/GitHub Actions, etc.); Git version control; framework design and best practices. Nice to Have: prior automation experience, testing knowledge/exposure. Duration: Short-term (with possible extension). Share your previous Playwright (Java) project
...CSV should include original variables like organization name, state and zip even though that data was not used in the scraper. The script must perform the following steps for each URL in the input list: 1. Input: Read a list of URLs from a provided CSV file (single column of URLs). 2. Navigation/Rendering: Visit the URL (handling redirects is essential). The use of a headless browser (like Selenium/Puppeteer) or an advanced HTTP library is preferred, as some websites may load the footer content dynamically via JavaScript. 3. Targeted Scanning: Scan the HTML source code of all pages found in the sitemap, specifically looking for the presence of a specific link. 4. Output Logic: - If the link is found, record the identified vendor. - If no vendor is explicitly identified, ...
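Step 3's "targeted scanning" for a specific link can be done on the raw HTML regardless of how the page was rendered (requests or a headless browser). A sketch of that check, where the vendor names and URL patterns are hypothetical placeholders:

```python
import re

def find_vendor_link(html, vendor_patterns):
    """Scan raw HTML for the first href matching a known vendor.
    vendor_patterns maps vendor name -> regex tested against each href."""
    hrefs = re.findall(r'href=["\']([^"\']+)["\']', html)
    for href in hrefs:
        for vendor, pattern in vendor_patterns.items():
            if re.search(pattern, href):
                return vendor, href
    return None, None
```

The output-logic step then records the vendor when the first element is not `None`, and falls through to the "no vendor identified" branch otherwise.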
I need the entire history of a specific Facebook Group captured—every post along with all associated comments. I’m ...with working links to the images and videos placed in clearly named folders. I don't want folders or links. Just one huge continuous page that has everything. This is for a court case and I have to give this to the other side. I want them to have to scroll through however many hundred pages there are. Just as if they were actually on FB. Please outline: • your scraping approach (Python + Selenium, Go, node-puppeteer, etc.), • how you’ll handle media downloads and folder structure, • estimated turnaround time. I’ll review a short sample export before we proceed with the full run to confirm the layout meets my ...
...need a Selenium-based solution that runs reliably on Windows and opens Google Chrome to simulate human visits to LinkedIn (and occasionally other) profile URLs listed in a Google Sheet. For each URL the program should: • Pull the next unused link from the sheet • Load the page in Chrome, wait a random time between 20 seconds and 3 minutes • Apply truly randomized scrolling patterns while the profile is open so behaviour looks organic • Fire a webhook the moment the visit completes, passing back any ID or payload I define so our CRM reflects the touch instantly Configuration items such as Google Sheet ID, webhook endpoint, minimum/maximum dwell time, and daily visit caps should live in a simple file I can edit without touching code. A short README on ...
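The randomized dwell time and scrolling described above can be planned up front as a list of (scroll, pause) steps, which keeps the randomness testable separately from the browser. A sketch, with the pause and scroll ranges chosen as assumptions:

```python
import random

def make_visit_plan(min_dwell=20, max_dwell=180, rng=random):
    """Build a randomized dwell/scroll plan for one profile visit.
    Returns the total dwell in seconds and a list of (scroll_px, pause_s)
    steps whose pauses sum exactly to the dwell time."""
    dwell = rng.uniform(min_dwell, max_dwell)
    steps, remaining = [], dwell
    while remaining > 0:
        pause = min(remaining, rng.uniform(1.5, 6.0))  # short human-like pauses
        scroll = rng.randint(-200, 600)                # mostly down, sometimes back up
        steps.append((scroll, pause))
        remaining -= pause
    return dwell, steps
```

In the Selenium run, each step would become `driver.execute_script("window.scrollBy(0, arguments[0])", scroll_px)` followed by `time.sleep(pause_s)`, with the webhook fired once the plan is exhausted.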
...the comment text, number of comments, likes, reposts/shares, the post date and any other readily available metadata (author handle, follower count, post URL, media links, etc.). Accuracy is critical because the data will feed a trend-analysis dashboard later. Please build the workflow in a way that respects rate limits and login requirements: if you intend to use official APIs, private APIs, Selenium, Scrapy, Playwright, or headless browsers, spell that out so I know how sustainable the solution will be. The final hand-off should include: • A clean, well-commented reusable script (Python preferred) • A short README explaining environment setup, keyword input format and how to extend to new regions • The full export in CSV so I can validate before sign-off I...
... 1. Runs on Windows 10/11 without additional setup beyond the usual runtime (e.g., Python + packaged dependencies, .NET, or a single-file executable). 2. Completes the full login-scrape-submit-click cycle unattended. 3. Automatically resolves CAPTCHAs within a reasonable timeout (configurable). 4. Produces the requested data file and a concise activity log per run. Preferred tooling: Selenium, Playwright, or Puppeteer, though I am open to any framework that meets the above goals....
I need a r...I can decide the exact days and times each role goes out • one-click “auto post” that instantly publishes a job when I hit the button The posting frequency isn’t fixed; some days I may blast several openings, other weeks none at all, so the scheduler has to respect whatever plan I set. Use whichever method makes the process most stable—headless browser automation (Puppeteer, Playwright, Selenium) or direct API calls if Placementindia offers them. The final package must run on a standard VPS, expose clear logs/errors, and allow easy editing of job templates. Please share links or short videos of similar bots you’ve built so I can gauge robustness. Once I can log in, queue a job, and watch it appear live without intervention, I&rs...
...Primitiva: €1.00 El Gordo: €1.50 Dynamic AI suggestions: recommend bet quantity based on backtesting, current jackpot and balance, using Monte Carlo for EV (prioritize positive EV scenarios). 2.5. Login and Bet Execution Automation Never store login/password (not even encrypted). Prompt user for credentials (email/NIF/NIE + password) every login, using secure masked fields. Login process: Selenium (headless Chrome/Firefox), access , fill form, submit, verify success (post-login elements), handle sessions/cookies. Bet execution: navigate to lottery page, generate AI numbers, add exact user-requested bets to cart (batches for high volumes), support rules of each lottery. Finalization: show full summary (lottery, bets, cost, numbers); require
I need a small, Windows-friendly Python script that will open a real browser with Selenium and wipe large batches of content from my X (Twitter), Facebook, and Instagram accounts. Because my X account sits on the free API tier I keep running into 403 errors, so this project must rely solely on browser automation—no official APIs or paid third-party tools. Here’s what I’m after: the script launches from the command prompt, asks for (or reads from a .env) my login credentials, signs in, and then iterates through all visible posts, tweets, and reels, deleting each one until none remain or until it hits an optional stop condition such as a date or a post count I can set. A simple console printout like “Deleted tweet #42” is enough for logging; I don’...
I'm trying to run the attached Jupyter notebook (.ipynb) script to get info from a website, but I can't understand why it doesn't work. I need this script fixed, plus pagination added, to fetch around 2,400 records from YellowPages. I only use Jupyter.
...market research. The job centers on extracting selected data points from public web pages, transforming them into a clean, structured format, and making them available for analysis every 24 hours. Here’s what I need you to handle from end to end: • Source acquisition – fetch HTML from the URLs I provide, even when content is hidden behind JavaScript (a headless browser such as Playwright or Selenium is fine). • Parsing & cleansing – pull the specific fields I’ll list (product name, price, SKU, availability, and a time-stamp), remove duplicates, and standardize values. • Storage & delivery – load the daily output into my PostgreSQL instance; if you prefer Parquet or plain CSV that’s acceptable as long as it’s a...
...information should be organised into a clean CSV file—one row per page—with columns for page URL, full body text, image file names, and link destinations. Please download the images themselves as well and bundle them in a separate folder (a simple ZIP is fine); the CSV should reference the exact filenames so everything lines up. I’m happy for you to use Python with BeautifulSoup, Scrapy, Selenium or whichever stack you prefer, as long as the final output meets these acceptance criteria: • Complete CSV containing text, image names, and link URLs for each page • All images successfully downloaded and accessible via the filenames listed in the CSV • No duplicates or missing pages from the target site * Images need to be sorted for each l...
I am looking for a Python developer to create a simple and focused scraper script for Facebook Mar...• A file containing all product URLs for that seller • File format: TXT or CSV • Handle infinite scrolling to load all products Technical Requirements: • Python • Selenium or Playwright • Experience with dynamic websites • Clean, runnable, and well-structured code Important Notes: • No filters required (no country, city, or keywords) • No data is needed other than product links only • Manual login can be used if required Budget: Open — to be discussed based on experience and quality When Applying, Please Include: • Any previous experience with Facebook Marketplace scraping • The tool you plan to use (...
I have a data-analysis pipeline that relies on a steady flow o... • Payload: high-resolution image files plus a CSV/JSON map linking each file to product ID, title, price, and category text that you extract during the same run. • Scale: thousands of products per crawl; a resumable approach is essential so partial failures don’t force a full restart. • Frequency: I’ll trigger the crawl weekly, so reusable code is a must. I’m happy with Python—Scrapy, Selenium, Playwright, or a headless solution of your choice—as long as it respects the site’s anti-bot measures and keeps requests polite. Please include a brief outline of how you’ll handle pagination, lazy-loaded images, and rate limiting. Let me know your proposed stac...
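The "resumable approach" asked for above usually comes down to a small checkpoint file of completed IDs that survives restarts. A sketch of one way to do it (JSON on disk; the `product_id` keys are whatever unique identifier the crawl already has):

```python
import json
import os

class Checkpoint:
    """Tiny resume file: remembers which product IDs are already done
    so a weekly crawl can restart after a partial failure."""

    def __init__(self, path):
        self.path = path
        self.done = set()
        if os.path.exists(path):
            with open(path, encoding="utf-8") as f:
                self.done = set(json.load(f))

    def mark(self, product_id):
        """Record one finished item and persist immediately."""
        self.done.add(product_id)
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(sorted(self.done), f)

    def pending(self, all_ids):
        """IDs still to crawl on this run."""
        return [i for i in all_ids if i not in self.done]
```

Writing after every item is the simplest crash-safe choice; at thousands of items per crawl, batching writes or moving to SQLite would be a reasonable refinement.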
...mandatory declarations and any digital signature steps must all be handled by the system before it attempts the final submission. A short, clear dashboard that shows “ready”, “errors found” or “submitted” status for each tender would be ideal so I can intervene only when something is missing. Deliverables I must see to accept the project: • Source code and install guide (preferably Python with Selenium / Playwright or similar RPA layer, but I’m open if you can justify another stack) • A configuration file where I can add new government portals without touching core code • Automated validation rules that stop a submission if any mandatory field or attachment is missing • A post-submission PDF/CSV log summarisin...
I'm trying to run the attached Jupyter notebook (.ipynb) script to get info from a website, but I can't understand why it doesn't work.
...accuracy and time-stamping are essential. Store everything in a structured database of your choice (PostgreSQL or MySQL are fine). The tables must let me query: • first-pull values • second-pull values • calculated deltas between the two Please build in a simple scheduler or CLI flag so I can trigger each scrape automatically via cron. Bet365 is heavily scripted, so headless-browser handling (Selenium, Playwright, or Puppeteer) plus proxy/anti-bot measures may be required; use whichever stack you’re comfortable with, provided it is well documented. Deliverables 1. Clean, runnable source code with setup instructions. 2. SQL schema and migration script. 3. README showing sample queries that compare initial vs. pre-game lines. 4. Brief note on ho...
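The first-pull / second-pull / delta queries map naturally onto one table keyed by (event, market, pull number). A sketch of the schema and the delta query, shown here in SQLite purely so it is self-contained; the posting asks for PostgreSQL or MySQL, and the SQL transfers with minor type tweaks:

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS odds_pull (
    event_id   TEXT NOT NULL,
    market     TEXT NOT NULL,
    pull_no    INTEGER NOT NULL,   -- 1 = first pull, 2 = pre-game pull
    price      REAL NOT NULL,
    pulled_at  TEXT NOT NULL,      -- ISO-8601 timestamp
    PRIMARY KEY (event_id, market, pull_no)
);
"""

# Self-join pairs each market's first pull with its second pull
# and computes the line movement between them.
DELTA_QUERY = """
SELECT a.event_id, a.market,
       a.price AS first_price,
       b.price AS second_price,
       b.price - a.price AS delta
FROM odds_pull a
JOIN odds_pull b
  ON a.event_id = b.event_id AND a.market = b.market
WHERE a.pull_no = 1 AND b.pull_no = 2;
"""
```

The cron-triggered scraper would simply INSERT with `pull_no=1` or `pull_no=2` depending on which scheduled run it is; the primary key prevents accidental double-pulls.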
I need a small automation script that periodically checks item availability on the Bigbasket website and pings me on Telegram the moment any of the tracked products come back in stock. You are free to choose the underlying tech stack (Python + Requests/BeautifulSoup, Selenium, Playwright, or a headless browser of your choice) as long as it works reliably with Bigbasket’s current site layout and protects my account from rate-limit blocks or captchas. The flow I have in mind is straightforward: I feed the bot a list of product URLs (or SKUs). It runs on a schedule I can change—every few minutes during peak shortages, maybe every hour otherwise—grabs the stock status, and fires a concise Telegram message whenever the status flips from “Out of Stock” to &l...
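The core of the alerting logic is detecting the status flip between two polls, which is independent of how the page is scraped. A sketch, assuming the scraper reduces each product page to a plain status string (the exact labels are site-dependent assumptions):

```python
def detect_restocks(previous, current):
    """Return product URLs whose status flipped from out-of-stock to
    in-stock between two polls. Both args map url -> status string."""
    return [
        url for url, status in current.items()
        if status == "In Stock" and previous.get(url) == "Out of Stock"
    ]
```

Each restock would then be sent with a single call to the Telegram Bot API `sendMessage` method (an HTTP GET/POST to `https://api.telegram.org/bot<token>/sendMessage` with `chat_id` and `text` parameters), and the `current` map saved as the next poll's `previous`.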
...workbook. Please crawl the entire site, not just a few sections, and return each number alongside the key profile details that make the data usable at a glance—name, profile URL, and any other easily captured identifiers shown next to the number. A clean .xlsx with one row per profile, no duplicates, and clearly labelled columns is the only deliverable I’m expecting. If you prefer Python, Scrapy, Selenium, Beautiful Soup or a comparable stack, go ahead; I’m interested in results, not the specific toolset, as long as the script can be rerun later should the site content change. Before delivery, double-check that: • every row contains a valid phone number and url • no pages on the site were skipped • the sheet opens flawlessly in the latest des...
I want an .xlsx file containing all 12,728 rows of the public table shown on the INDECOPI (Peru) website. The site only displays 10 reco...standard format, with no filters, pivot tables, or other added functionality. I will provide the exact URL and the navigation steps so you can locate the paginated view. Once finished, I will verify that the total row count matches the official counter and that there are no empty cells in the three requested fields. If you have scraping experience with Python (requests, BeautifulSoup, Selenium) or similar tools and can produce the .xlsx without altering the original structure, that will be enough. Expected delivery: Excel file ...
...service must issue and validate JWT tokens for every request beyond the public health-check route. Token refresh, revocation, and a simple role model (“user” vs. “admin”) should be built in from the start. Flight data extraction I do not have official Iberia developer access, so we will need to pull the data ourselves. I’m open to whichever tooling you are most comfortable with — BeautifulSoup, Selenium, Scrapy, or a hybrid approach — as long as the final solution is headless, resilient to minor layout changes, and respectful of Iberia’s rate limits. Only flights that are bookable with Avios need to be captured; no hotel or car-rental data is required. Deliverables • Clean, modular Python code (FastAPI or Flask preferred,...
I need a senior-level specialist to harvest product data from several e-commerce sites and deliver it in a single, well-structured CSV file. The task demands production-ready techniques—think Scrapy spiders hardened with rotating proxies, Selenium or Playwright for dynamic content, and solid anti-bot countermeasures. The information I’m after is very specific: product names, prices, pictures, and SKU. Nothing less, nothing more. Your solution must run reliably at scale, cope with frequent layout changes, and leave no trace that could trigger blocks. Python is the preferred stack, but if you have a proven alternative that meets the same bar, I’m open to hearing it. To be considered, include in your proposal: • At least one example of a comparable e-commerce...
...as an appointment is secured (or fails), the system should push an email and, if possible, a Telegram or SMS alert to our team. Access & roles Only Admins—our internal staff—will use the interface. A straightforward dashboard that lists upcoming bookings, status messages and basic logs is enough; no public client portal is required at this stage. Tech preferences I am open to Python (Selenium, Playwright), Node, or another proven stack you recommend, as long as it can handle VFS’s anti-bot measures and is easy for us to maintain on our own server in Turkey. Deliverables 1. Source code and deployment guide for the web automation. 2. Admin dashboard with real-time booking log and resend-notification option. 3. One live demo showing the tool gr...
...visit the target site (I’ll share the URL once we start) and pull Product Details exactly as they appear online. That means every time I point the script at a category or search page it should work through all pagination, capture the data, and save it to CSV or Excel so I can sort and analyse it later. Key points to cover • Use reliable, open-source libraries such as requests, BeautifulSoup, or Selenium—whichever gives the most stable results for the site once you see it. • Build in simple settings (URL, output file name, optional delay between requests) near the top of the file so I can tweak them without touching the core logic. • Handle common edge cases: missing fields, changing layouts, or temporary time-outs, and log any skipped items for r...
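The "settings near the top of the file" and "handle missing fields, log skipped items" points can be sketched independently of the target site (all names and field lists below are placeholders until the real site is shared):

```python
# --- settings: edit these, nothing below needs touching -------------
START_URL = "https://example.com/category"   # placeholder target
OUTPUT_FILE = "products.csv"
REQUEST_DELAY_S = 2.0                        # polite delay between requests
# --------------------------------------------------------------------

REQUIRED = ("name", "price")                 # assumed mandatory fields
DEFAULTS = {"old_price": "", "rating": ""}   # optional fields get blanks

def clean_record(raw, skipped):
    """Normalise one scraped record; append its URL to `skipped` and
    return None when a required field is missing, instead of crashing."""
    if any(not raw.get(k) for k in REQUIRED):
        skipped.append(raw.get("url", "<unknown>"))
        return None
    record = {**DEFAULTS, **raw}
    record["price"] = record["price"].strip()
    return record
```

The main loop would call `clean_record` per item, sleep `REQUEST_DELAY_S` between pages, and print the `skipped` list at the end for review.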
...supply in a CSV/JSON. • Scrape: title, price, promos, specs, images, ratings, full review texts, review dates, and reviewer scores. • Output: clean CSV or JSON dropped into a dated folder after each run. Make the script easy to tweak if Lazada changes its markup. Acceptance criteria 1. Script written in Python 3, primary parser BS4, with clear README on setup and dependencies (requests, selenium for dynamic pages if needed, etc.). 2. One-click (or single command) launch that completes without errors and produces a sample file from a test URL I provide. 3. Simple logging that flags failed pages or blocked requests so I can retry. 4. Code remains within Lazada’s permissible request rate to minimize captchas or bans. Deliver the .py file(s), , , and a sh...
...developer to build a robust scraper that collects the required data and writes it straight to JSON—no additional cleaning or processing necessary. Once we begin I’ll provide the target URL(s) and any access details; for now, assume a standard public site with pagination and occasional anti-bot checks. Core expectations • Written in Python 3 using requests/BeautifulSoup or Scrapy; resort to Selenium only if there’s no lighter workaround. • Handles pagination, retries, and polite delays gracefully so the run can complete unattended. • Config file or clear constants for headers, cookies, and start URLs, letting me tweak targets without editing core logic. • Produces a single JSON file (or one file per page if that’s cleaner) refle...
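"Pagination, retries, and polite delays ... so the run can complete unattended" is mostly a retry wrapper around the fetch call. A sketch with the sleep function injectable (an assumption made so the backoff is testable without waiting; attempt counts and delays are illustrative defaults):

```python
import time

def fetch_with_retry(fetch, url, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fetch(url), retrying with linear backoff between failures.
    Re-raises the last error only after all attempts are exhausted."""
    last_err = None
    for attempt in range(1, attempts + 1):
        try:
            return fetch(url)
        except Exception as err:   # narrow to network errors in real use
            last_err = err
            if attempt < attempts:
                sleep(base_delay * attempt)
    raise last_err
```

In the real script `fetch` would be a thin wrapper over `requests.get` (or a Scrapy/Selenium page load), and a fixed polite delay would also be slept between successful requests.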
...well-structured lead list and I already know exactly what it should contain. The task is to extract contact information—email addresses, phone numbers and full mailing addresses—from three sources: company and organisation websites, their public social-media profiles, and well-known online directories. I expect the data to be gathered with a solid scraping workflow (Python, Scrapy, BeautifulSoup, Selenium or an equivalent stack is fine) and then verified so that bounced emails and dead numbers are kept to an absolute minimum. Deliverables • One CSV or Excel file with separate columns for name, company, job title, email, phone, street address, city, state, ZIP/postcode, country, source URL and date collected. • No duplicates; every entry must be uni...
...working tool or script that successfully delivers at least 500 text DMs across two separate test accounts without triggering action blocks. 2. Step-by-step instructions (video or PDF) covering installation, authentication, list import, sending, and troubleshooting. 3. Source code or build files supplied so I can tweak or update in the future. If you have experience with Instagram private API, Selenium, Puppeteer, or similar automation frameworks and can demonstrate prior bulk-messaging work, this should be straightforward. Please outline your proposed approach, the tech stack you’ll use, and an estimated turnaround time....
I need a standalone desktop program that lets me analyse horse races by pulling fresh horse-performance data directly from www.racenet.com.au. The application’s ...• One-click scrape pulls the latest horse performance data without captchas or manual intervention. • All key fields visible on racenet for each horse populate correctly in the local database. • Basic analytical views refresh in under two seconds on a typical laptop. • No paid API keys required—everything comes from the public site. I’m flexible on the tech stack: Python (BeautifulSoup/Selenium), C# (.NET), or even Electron if it stays lightweight. What matters most is reliable scraping, clean code, and a UI I can rely on race morning. Let me know your preferred approach and an...
...the results to CSV or Google Sheets. I mainly care about item title, price, description, photos (image URLs are fine), posting date, item location and the seller’s profile link so I can trace each record back to its source. If you can collect additional fields that Facebook exposes, even better—just keep everything neatly labelled. No hard requirement on the stack: Python with BeautifulSoup / Selenium, Node with Puppeteer, Playwright, or a headless browser solution all work for me as long as it runs on Windows or a small Linux VPS and doesn’t violate Facebook’s ToS. Please build in reasonable throttling, login handling (cookie-based or mobile API, whichever is more stable) and a simple config file where I can tune delay settings or add new accounts. ...
... • Build a clean and user-friendly dashboard to: • Manage monitoring settings • Control alerts and configurations • Implement structured and scalable automation logic. • Ensure the solution is maintainable and adaptable to future website updates. • Provide clear documentation for setup and usage. Technical Requirements • Strong experience with Python • Web automation tools such as: • Selenium / Playwright • Requests / BeautifulSoup • Backend development experience • Familiarity with notification systems (Email, Telegram, Webhooks, etc.) • Clean, well-documented, and modular code Additional Notes • This is a long-term project. • Ongoing collaboration may be required for future updates, optim...
...need a fully-automated workflow that gathers and enriches data from well over 500 LinkedIn profiles. The automation should locate the profiles that match criteria I will provide, pull the key public details, then append reliable off-platform contact information so I can reach those professionals directly. Please design the script or low-code sequence with any reliable stack you prefer—Python, Selenium, PhantomBuster, Sales Navigator API, or comparable tools are fine as long as the method is repeatable and respects rate limits. Deliverables • CSV/Excel file containing one row per person with: – Current job title – Company name – Verified email (and phone, when available) • Source code or workflow file with brief run instructions ...
...coordinates directly from Google Maps. The second will crawl a set of websites I will supply and pull out product information, on-page contact details, and any user-generated content that appears alongside those products. Please structure every field into one tidy CSV per source so I can plug the results straight into my BI dashboards. I am comfortable if you lean on Python, Scrapy, BeautifulSoup, Selenium, or similar tools, provided the script is well-commented and can run headless behind rotating proxies without tripping rate limits. Deliverables: • 4 working scripts (Maps + websites) with clear setup instructions • Sample output files proving all requested fields are captured correctly • Output data must have City Name > (Excel file with list of d...