Web scraping is the process of extracting data or information from an online source such as a website, database, or application. Web Scraping Specialists have the skills to help people collect valuable digital data and quickly find the information they need from websites, mobile apps, and APIs. These experts typically use scraping tools and advanced techniques to collect large amounts of targeted data without any manual work on the client's part.
With web scraping, tasks that otherwise may require a lot of time can be automated and done faster. Our experienced Web Scraping Specialists use their expertise to develop scripts that continuously target structured and unstructured data sources.
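In practice, the core of such a script is fetching a page and pulling targeted fields out of its HTML. A minimal sketch using only Python's standard library (the `product-title` class name and the sample markup are hypothetical placeholders):

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text of every <h2 class="product-title"> element."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) tuples
        if tag == "h2" and ("class", "product-title") in attrs:
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.titles.append(data.strip())

# Usage: feed it HTML fetched with urllib or requests.
sample = '<h2 class="product-title">Blue Widget</h2><p>...</p><h2 class="product-title">Red Widget</h2>'
parser = TitleExtractor()
parser.feed(sample)
print(parser.titles)  # ['Blue Widget', 'Red Widget']
```

Real projects usually swap in BeautifulSoup or lxml for convenience, but the fetch-parse-extract shape stays the same.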
Here are some projects our expert Web Scraping Specialists have made real:
Web Scraping Specialists are skilled professionals who know how to help businesses optimize processes while collecting the rich, structured data they need for their specific purposes. Our experts speed up the process and return accurate results in less time, so that the customer can make better decisions more quickly without any manual labour. If you are looking for a talented professional to take on a web scraping project for you, you have come to the right place. Here on Freelancer.com you can find talented professionals who will get the job done with top-quality results! Post your project now and see what our Web Scraping professionals can do for you!
From 361,482 reviews, clients rate our Web Scraping Specialists 4.9 out of 5 stars.
I need a single WebExtension that runs in both Chrome and Firefox and turns our current manual workflow into a one-click process. Its core job is data collection—capturing information from pages we specify—while also handling the little chores my team repeats every day: filling forms, scraping targeted fields, and kicking off routine browser actions such as page refreshes or button clicks once certain conditions are met. The add-on must connect cleanly to three parts of our internal stack: • our CRM system (REST APIs already documented) • the project-management tool we use (webhook support available) • a central database for long-term storage (PostgreSQL) Please build with the standard WebExtension/Manifest V3 approach so we can maintain a single code...
I need a web scraping expert to scrape data from Indiegogo and export it to Excel. Details I need for the projects are: Title: Project title. Category: The category of the project based on Indiegogo's categorization system. Sub-category: The sub-category of the project based on Indiegogo's categorization system. Close Date: Close date of the campaign. Open Date: Open date of the campaign. Currency: Currency used for collected funds. Funds Raised: The amount of funds raised. Funds Raised Percent: The percent of funds raised relative to the funding target. Funding Target: The amount of funds the campaign initiator aims to collect. Country: Country in which the project is based. Publisher: The name of the campaign initiator. Backers: The number of people who decided to fund the campaign. Updates: ...
I’m looking for a well-structured Python solution, built around BeautifulSoup (BS4) and any supportive libraries you deem essential, that reliably pulls both product details and customer reviews from Lazada on a daily schedule. The data will fuel ongoing competitor research, so consistency and clarity of the output are critical. I'm looking specifically to get the data using BS4 while bypassing the CAPTCHA. Here’s how I picture the flow: • Input: category URL(s) or product list I supply in a CSV/JSON. • Scrape: title, price, promos, specs, images, ratings, full review texts, review dates, and reviewer scores. • Output: clean CSV or JSON dropped into a dated folder after each run. Make the script easy to tweak if Lazada changes its markup. Acceptance criteria 1. S...
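The output side of a flow like this might be sketched as follows (standard library only; the field names are placeholders, and the Lazada-specific fetching and CAPTCHA handling are deliberately omitted):

```python
import csv
import json
from datetime import date
from pathlib import Path

def dated_output_dir(base="output"):
    """Create (if needed) and return a folder named after today's run,
    e.g. output/2024-05-01, so each daily run lands in its own folder."""
    d = Path(base) / date.today().isoformat()
    d.mkdir(parents=True, exist_ok=True)
    return d

def write_products(rows, out_dir):
    """Write one CSV and one JSON snapshot of the scraped rows."""
    fields = ["title", "price", "rating", "review_count"]  # hypothetical schema
    with open(out_dir / "products.csv", "w", newline="", encoding="utf-8") as f:
        w = csv.DictWriter(f, fieldnames=fields)
        w.writeheader()
        w.writerows(rows)
    (out_dir / "products.json").write_text(
        json.dumps(rows, ensure_ascii=False, indent=2), encoding="utf-8"
    )
```

Keeping the writer separate from the scraper makes it trivial to adjust when the site's markup (and therefore the extracted fields) changes.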
I need a seasoned Python developer to build a robust scraper that collects the required data and writes it straight to JSON—no additional cleaning or processing necessary. Once we begin I’ll provide the target URL(s) and any access details; for now, assume a standard public site with pagination and occasional anti-bot checks. Core expectations • Written in Python 3 using requests/BeautifulSoup or Scrapy; resort to Selenium only if there’s no lighter workaround. • Handles pagination, retries, and polite delays gracefully so the run can complete unattended. • Config file or clear constants for headers, cookies, and start URLs, letting me tweak targets without editing core logic. • Produces a single JSON file (or one file per page if that’s...
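The retry-with-polite-delays requirement could be sketched like this (standard library only; `fetch` stands in for whatever requests/Scrapy wrapper is actually used, so the names here are hypothetical):

```python
import random
import time

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with jitter: roughly 1s, 2s, 4s, ... capped at `cap`."""
    return min(cap, base * (2 ** attempt)) * (0.5 + random.random() / 2)

def fetch_with_retries(fetch, url, max_attempts=4):
    """Call `fetch(url)` until it succeeds or attempts run out.
    `fetch` is any callable that raises on failure (e.g. a requests.get
    wrapper that raises for HTTP errors)."""
    for attempt in range(max_attempts):
        try:
            return fetch(url)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(backoff_delay(attempt))
```

Between pages of a pagination loop, a fixed `time.sleep` of a second or two keeps the run polite enough to complete unattended.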
I need to build a reliable, well-structured lead list and I already know exactly what it should contain. The task is to extract contact information—email addresses, phone numbers and full mailing addresses—from three sources: company and organisation websites, their public social-media profiles, and well-known online directories. I expect the data to be gathered with a solid scraping workflow (Python, Scrapy, BeautifulSoup, Selenium or an equivalent stack is fine) and then verified so that bounced emails and dead numbers are kept to an absolute minimum. Deliverables • One CSV or Excel file with separate columns for name, company, job title, email, phone, street address, city, state, ZIP/postcode, country, source URL and date collected. • No duplicates; every...
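The dedupe-and-validate step of such a workflow could start from something like this (a sketch; the regex is a purely syntactic check and does not replace real bounce verification):

```python
import re

# Simple syntactic email pattern; catches malformed entries, not dead mailboxes.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def clean_leads(rows):
    """Drop rows with syntactically invalid emails and de-duplicate
    case-insensitively on the email column, keeping first occurrence."""
    seen, out = set(), []
    for row in rows:
        email = row.get("email", "").strip().lower()
        if not EMAIL_RE.match(email) or email in seen:
            continue
        seen.add(email)
        out.append({**row, "email": email})
    return out
```

An SMTP-level or third-party verification pass would then run only on the rows that survive this filter, which keeps verification costs down.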
PDF to Excel Data Scraper Needed Job Title: Data Scraper Needed: Convert 24 PDF Factsheets to Clean Excel (Mutual Fund Portfolios) Project Overview: I need a freelancer to extract detailed stock portfolio data from ~24 Mutual Fund Monthly Factsheets (PDFs). I will provide the URLs/Files. Your job is to extract the full stock holdings table for specific funds and deliver a consolidated, clean Excel/CSV file. The Goal: I need the complete list of stocks (100% of the portfolio), NOT just the Top 10. The data is used for financial backtesting, so accuracy is critical. Even top 85-90% data works. Scope of Work: Input: ~24 PDF Files (Monthly Factsheets). Target Funds: For each month, extract data for the Top 10 Equity Funds (e.g., Bluechip, Midcap, Smallcap, Value Discovery, etc. - list wi...
I have a data-analysis pipeline that relies on a steady flow of fresh product images from a well-known e-commerce site. What I need is a robust scraper that can navigate the catalog, collect every product’s main and variant images, and deliver them to me neatly organized. Key points you should know: • Target: a single e-commerce platform (URL supplied after award). • Payload: high-resolution image files plus a CSV/JSON map linking each file to product ID, title, price, and category text that you extract during the same run. • Scale: thousands of products per crawl; a resumable approach is essential so partial failures don’t force a full restart. • Frequency: I’ll trigger the crawl weekly, so reusable code is a must. I’m happy with Pytho...
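The resumable-crawl requirement is usually met with a small checkpoint file. One possible sketch (the file name and ID scheme are hypothetical):

```python
import json
from pathlib import Path

class Checkpoint:
    """Persists the set of product IDs already downloaded, so a crashed
    crawl can resume where it left off instead of restarting from scratch."""
    def __init__(self, path="checkpoint.json"):
        self.path = Path(path)
        self.done = (
            set(json.loads(self.path.read_text())) if self.path.exists() else set()
        )

    def mark(self, product_id):
        """Record a finished product and flush the state to disk."""
        self.done.add(product_id)
        self.path.write_text(json.dumps(sorted(self.done)))

    def pending(self, all_ids):
        """Return only the IDs that still need crawling."""
        return [i for i in all_ids if i not in self.done]
```

Writing after every item is the simplest crash-safe choice; batching the writes is an easy optimization once the crawl scale demands it.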
So I want a script that downloads all public files (the ones in green) for a given case number from the website: It has PerimeterX as bot protection. The steps to get the documents are just inserting the case number into "Search by Case Number", going to the results, and getting all public files (the ones in green). These are some examples you can try: - 26-CC-003970 - 26-CC-003965 - 26-CC-004007 Important: - I don't want the files themselves but the script to download them without manual intervention - Solution could be either in Golang (preferred) or Python - Solution could be either raw HTTP requests (preferred) or browser automation tools like Playwright - I'm open to using any third-party service but prefer local solutions - I'm open to trying a proxy provider, I'm cu...
Help wanted: daily/multi-daily comparison of supplier prices and stock levels (B2B webshop) Text: We operate a B2B webshop where business customers can place orders or commission items on request. Most of the goods are sourced directly from manufacturers. For most suppliers we have access to their stock levels and current prices; for some, no login is required, while others require login credentials. We are looking for a solution or a skilled professional who can help us retrieve supplier prices and stock levels daily — ideally multiple times per day — and compare them with our internal purchase prices so we stay up to date. No automatic syncing with our system or automatic price changes are required. It is sufficient if discrepancies between supplier prices and our system pu...
We are looking to hire an experienced freelancer for B2B contact data scraping using Apollo.io. Project Requirements Scrape contact data using Apollo filters provided by us Data must be extracted only after confirming filters are correct We will start with one state, and if the data quality is good, we will assign more states Data Fields Required Each contact must include: Full Name Job Title (Decision Makers only) Company Name Business Email (Verified) Phone Number / Mobile (where available) Company Revenue Location (City, State, Country) Company Website / LinkedIn Quality Expectations No dummy or generic emails No duplicate records Clean, structured, and fresh data Apollo-sourced data only Process We provide filters Freelancer applies filters and shares sample data ...
I already run a live Apify actor that handles several internal automation tasks. It is stable and well-structured, but I now need to branch into brand-new automation flows focused on data collection and processing. Here is what I need from you: • Analyse the current codebase (Node.js + Apify SDK) so your additions slot in without breaking the existing automation tasks. • Design and build one or more new actors, or extend the current actor, to fetch data from targeted sources, normalise it, and store the output in a dataset or key-value store of my choice. • Keep the solution configurable through ENV variables or an input schema, so non-developers on my team can tweak URLs, pagination, scheduling, and output format. • Provide clear, inline comments and a short...
I need OpenClaw on my dedicated Mac with three core capabilities: Chrome automation: open websites, click elements, fill forms, extract structured snippets, and return results in WhatsApp. Coding/app workflows: generate code locally and optionally interact with web dev platforms when commanded. Deep research workflows: run multi-step web research, compare sources, and return concise findings with references. Security and reliability are mandatory: least privilege, approved-user-only WhatsApp commands, startup on boot, restart on crash, logs, and health check.
I need a scraper that starts from , walks through every brand, opens each handset page, and captures the complete specification table exactly as shown. The end product I expect is: • A clean JSON file where every phone is an object containing every available field (model name, release date, dimensions, display, chipset, camera, battery—everything published on the spec sheet). Please make sure the scraper respects polite crawling rules, handles pagination and brand/model edge cases gracefully, and returns UTF-8 encoded text. If anything on the site blocks your way, add minor waits or retries as needed. I will test the JSON data, and if it validates properly, the job is done.
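Turning a spec table into one JSON object per phone might look like this (a standard-library sketch; real pages will need selectors matched to their actual markup, which is why the sample table here is hypothetical):

```python
import json
from html.parser import HTMLParser

class SpecTableParser(HTMLParser):
    """Turns a two-column spec table (<td>label</td><td>value</td>) into a dict.
    A real spec page will need the selectors tuned to its actual markup."""
    def __init__(self):
        super().__init__()
        self._cells, self._in_td = [], False

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td and data.strip():
            self._cells.append(data.strip())

    def as_dict(self):
        # Pair up alternating label/value cells.
        return dict(zip(self._cells[::2], self._cells[1::2]))

p = SpecTableParser()
p.feed("<table><tr><td>Battery</td><td>5000 mAh</td></tr>"
       "<tr><td>Display</td><td>6.5 in</td></tr></table>")
print(json.dumps(p.as_dict(), ensure_ascii=False))  # {"Battery": "5000 mAh", "Display": "6.5 in"}
```

Dumping with `ensure_ascii=False` and writing the file as UTF-8 satisfies the encoding requirement directly.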
I have a list of titles (number depends on the search results, and the last time I checked it was 250) currently tagged “In Production” on IMDbPro and I need every line item turned into a clean, ready-to-filter spreadsheet. Because IMDbPro expressly forbids scraping, each record must be gathered by hand. Here is what I expect to see, each point in its own column: • Movie Title • Director(s) • Composer(s) – if any are listed • Music Supervisor(s) • Producer(s) • Producer contact details (email and/or phone whenever they appear) • Direct URL of the movie page • Cast The workflow is straightforward: open the title, copy the details, paste them into the sheet, move on to the next film. Where information is missing on IMD...
There are around 20k reviews publicly available, so I can't scroll endlessly; I need you to scrape them for me and put them in a spreadsheet with filters from 1 star to 5 stars. The job is simple for a professional, so please be realistic with prices. Should you do this correctly and quickly, I will give you more leads to scrape. Thanks
I'm looking for a qualified freelancer to develop a bot that can navigate the Almaviva Egypt website just like a human would. The bot must be capable of completing three key tasks: - Filling out all necessary appointment-related information - Selecting the date and time of the appointment - Submitting the request for the appointment Considering the constraints of the website, I require a bot that can still function proficiently with a limited number of appointment slots. Moreover, it must be programmed to input login credentials. A crucial requirement is that it can bypass or solve captcha verifications, ensuring a smooth booking process. The essential skillset for this project comprises expertise in Python, as the bot should be developed in this language. Familiarity with web scra...
Hello, I am looking for a professional translator who can accurately and naturally translate Japanese content into English. The ideal candidate will have experience in translating business, technical, or creative content and can maintain the original tone and meaning while producing fluent, high-quality English text. Project Requirements: Translate Japanese text into clear, accurate, and natural English Maintain the original tone, style, and nuance of the Japanese content Ensure proper grammar, punctuation, and formatting Deliver translations on time and communicate proactively if there are any questions Qualifications: Native or near-native English proficiency Proven translation experience with samples or portfolio preferred Attention to detail and commitment to high-quality work Addi...
I’m building a small C# utility that will crawl a target site, pull out every piece of on-page information I need (both the visible text and the images), and also save any PDF that the page links to. The PDFs aren’t generated on the fly—each one is simply a normal hyperlink sitting in the HTML—so the job comes down to fetching the page, parsing for the data, spotting the *.pdf anchors, and downloading those files to disk. You are free to approach this with HtmlAgilityPack, AngleSharp, HttpClient, Selenium, or any other .NET-friendly library you’re comfortable with, as long as the final code is clean, asynchronous where it makes sense, and easy to extend. I will pass in the root URL (or a list of starting URLs) plus an output folder path; the tool should handl...
I need a clean, up-to-date mailing list focused exclusively on schools, daycares, camps, and churches located in my immediate area. After I award the project I will give you the exact city limits and surrounding ZIP codes to keep the search tight. For every entry I want the business or institution name, their direct email, a working phone number, and the mailing address. Accuracy matters more than volume—please verify that each record is current and remove any duplicates you find along the way. The finished file should arrive as an Excel or Google Sheet that I can sort and filter easily and from which I can easily create mailing labels. If you already use tools such as LinkedIn Sales Navigator, Apollo, Hunter, or a similar scraper/validation service, let me know; anything that help...
I need a reliable way to pull data from Facebook Marketplace seller pages at scale. The target platform is Facebook; other marketplaces such as eBay, Amazon or Etsy are irrelevant for this job. Here’s what I’m after: when I paste one or many seller profile URLs into your script or small desktop app, it should crawl every public listing on those pages and export the results to CSV or Google Sheets. I mainly care about item title, price, description, photos (image URLs are fine), posting date, item location and the seller’s profile link so I can trace each record back to its source. If you can collect additional fields that Facebook exposes, even better—just keep everything neatly labelled. No hard requirement on the stack: Python with BeautifulSoup / Selenium, ...
I am looking for an experienced developer with strong expertise in Python and web automation to build a smart system for monitoring ticket availability and event updates on the Webook platform. The system should focus on automation, notifications, and usability while following best technical and compliance practices. Scope of Work • Develop a Python-based automation system to monitor events and ticket availability. • Send real-time notifications when: • New events are published • New ticket batches become available • Build a clean and user-friendly dashboard to: • Manage monitoring settings • Control alerts and configurations • Implement structured and scalable automation logic. • Ensure the solution is maintainable and adaptable to f...
For an upcoming market research study, I need a fully-automated workflow that gathers and enriches data from well over 500 LinkedIn profiles. The automation should locate the profiles that match criteria I will provide, pull the key public details, then append reliable off-platform contact information so I can reach those professionals directly. Please design the script or low-code sequence with any reliable stack you prefer—Python, Selenium, PhantomBuster, Sales Navigator API, or comparable tools are fine as long as the method is repeatable and respects rate limits. Deliverables • CSV/Excel file containing one row per person with: – Current job title – Company name – Verified email (and phone, when available) • Source code or workflow fi...
I need you to take more than 200 products that currently appear on my suppliers' websites (all content is in text and image format) and publish them correctly in my Shopify store, and also remove the ones that are discontinued. Scope of work • Copy the name, description, price, variants, and key attributes of each product. • Download and upload the images in high quality, associating them with the corresponding product. • Create/adjust collections, tags, and metadata to ease navigation and Shopify's internal SEO. • Verify that each product page ends up with inventory, SKU, and shipping options configured. • Maintain visual and formatting consistency across...
I have a growing list of company names, and I need a small, reliable Python script that can: Automatically find each company’s career/jobs page where open positions are posted (pages may be built using HTML, JavaScript, or modern front-end frameworks) Navigate through all job listings, including: Pagination (page numbers, next/previous, etc.) “Load more” buttons Infinite scrolling Ability to fetch data from multiple pages (e.g., page 3, 4, or beyond) Apply job filters, especially location-based filtering, so that only job links for specific locations are collected Extract only individual job posting links after filters are applied Visit each job link and scrape complete job details, including: Job title Job description Location Employment type (if available) Department / ...
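For plain numbered pagination, a generator like the following covers the simple case ("load more" buttons and infinite scroll need a browser driver such as Selenium or Playwright instead; `page` as the query-parameter name is an assumption that varies per site):

```python
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse

def page_urls(base_url, page_param="page", start=1, last=None):
    """Yield URLs for numbered pagination (?page=1, ?page=2, ...),
    preserving any query parameters already on the base URL, such as
    a location filter. Unbounded when `last` is None."""
    page = start
    while last is None or page <= last:
        parts = urlparse(base_url)
        qs = parse_qs(parts.query)
        qs[page_param] = [str(page)]
        yield urlunparse(parts._replace(query=urlencode(qs, doseq=True)))
        page += 1
```

In an unbounded run, the loop typically breaks when a page returns zero job links, which doubles as the "last page" detector.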
I need help building my catalogue of automotive spare parts by pairing every OEM number I supply with a clean, high-resolution product photo and basic part information. The scope covers the full range of engine, suspension and brake system components, so you’ll be digging through manufacturer websites and trustworthy e-commerce listings until you find an image that is crisp, watermark-free and matches the exact OEM reference. Once you locate a match, capture the part name exactly as it appears on the source page, copy the product-page link, download the image at its highest available resolution, and note everything in a structured Google Sheet. File naming should mirror the OEM numbers so that images and rows line up perfectly. Deliverables • A Google Sheet containing OEM num...
Senior Automation Engineer for Traffic Simulation & Referrer Spoofing I am looking for a specialized Automation/Growth Engineer to build a custom Traffic Orchestration System. The goal is to simulate "Viral" traffic spikes to specific URLs to test search engine ranking signals. The Technical Challenge: This is not a simple headless browser task. You must solve the problem of taking high-volume, raw human traffic (via Pop-Under/PPV APIs) and "cleaning" it through a Bridge/Redirect layer to spoof specific social referrers while maintaining session integrity. Core Deliverables: - Referrer Bridge: Build a script/server that receives raw hits and uses a "Double Meta Refresh" or similar logic to spoof (e.g., masking traffic to appear as if it's coming f...
I need a clean, freshly-sourced list of 5,000–10,000 tech-startup contacts for a one-to-one outreach campaign promoting GrowthAI’s free trial. Every record has to come from information that is already public—think company websites, press pages, blog author bios, event directories, Crunchbase-style listings—never scraped LinkedIn data, leaked dumps, or anything that could be considered private. What the sheet must contain • Company name • Website URL • Contact name (when it’s on the site) • Public business email only (no personal Gmail/Yahoo unless the firm itself lists it as its main contact) • Industry tag • Country Target profile • Primary industry: Tech Startups • Regions: North America, Europe, and ...
Please Read Carefully Before Applying It does not matter whether you consider yourself a “vibe coder” or a traditional software engineer; we accept both here. What matters is whether you can make this system work reliably at scale. We operate a production scraper that processes 500+ leaderboard sites per hour. All sites we scrape are leaderboards, but no two sites are the same. This is not a basic scraper. What Makes This Scraper Different The leaderboards we scrape vary heavily in structure and behavior: Dynamic buttons, tabs, and switchers JavaScript-rendered content Hybrid navigation (UI interaction + background API calls) Tables, card layouts, podium layouts, or combinations of all three Masked usernames and inconsistent rank formats Different ordering of wager / prize data ...
I need a small proof-of-concept scraper written in Python that pulls user information from a set of static website pages and exports it into a clean CSV file. The pages load without JavaScript, so a lightweight stack such as requests + BeautifulSoup (or lxml) should be all that’s required; no browser automation is necessary unless you can justify a clear advantage. I will supply the page URLs and highlight the exact fields to capture (name, profile link, location, and any other visible user meta). Your code should handle pagination where applicable, respect polite crawl rates, and be easy for me to adjust if the HTML structure shifts. Deliverables • Well-commented Python script (.py) • Sample CSV containing the extracted records • README with setup steps and a qu...
I have an existing Flutter mobile app (Firebase backend + RevenueCat + web scraping). Most functionality is already implemented. I need an experienced Flutter developer to update and refine several features. I believe this should not take more than a few days for an experienced developer. Scope of Work: 1. Sync Local Storage with Firestore (Offline Support) - Keep using local storage for offline mode - Sync shift data with Firebase Firestore - Handle offline → online auto sync - Prevent duplicates (unique shift ID) - Secure Firestore rules (user-only access) - Ensure cross-device sync works properly 2. Fix Email Verification (Spam Issue) - Configure Firebase Auth to use custom domain () - Set up SPF, DKIM, DMARC - Improve email template - Ensure emails land in inbox (Gmail/Outlook) ...
I need a clean, well-structured extract of permit holder information from the WA State Labor & Industry online permit lookup (sometimes called the Permit Center). Whether you can do a fully automated scrape or need to do a manual pull is up to you—the key is accuracy and complete coverage. Scope • Visit the WA State L&I electrical permit lookup site and capture every record that appears in the public search results that: - Is for a generator or automatic transfer switch installation. - For the license numbers that will be given to you - for the timeframe given (5-6 years back). • Extract only the permit holder–related fields (name, address, and any other holder-specific details that the site exposes). • Return the ...
The contractor is commissioned to download DRM-protected videos from an online portal to which the client has legitimate access and usage rights. The videos must be processed as follows: - Download approximately 240 videos (about 18 hours of video material) from the portal - The videos have an average length of approximately 5 minutes - Original video titles must be preserved - The videos must be organized into folders according to the portal's order/structure - All files must be uploaded and stored on Google Drive - The final folder structure on Google Drive must match the one on the portal
I need a reliable specialist who can log into our dealership’s backend every weekday, pull fresh customer information, and feed it straight into our call-tracking platform the same day. The only data I’m after are contact details and service records—nothing else—so the extraction script or manual process can stay laser-focused on those two fields for speed and accuracy. Turnaround is critical. If you can set this up and have the first full export/import cycle running smoothly right away, I’m happy to add a rush bonus on top of the agreed rate. Accuracy must be spot-on and the data has to land in the tracking system without duplicates or formatting hiccups. Deliverables each weekday: • Clean export of new customer contact details and service record...
SENIOR AI ENGINEER: ON-PREMISE MULTIMODAL RAG SYSTEM WITH CONTINUOUS LEARNING 1. CONTEXT AND THE REAL CHALLENGE A project in the wire-drawing and galvanizing sector with more than 40 active production lines. The challenge is not a lack of information, but that critical knowledge is volatile: it lives in the experience of veteran supervisors and operators and is passed on verbally. When a technical solution emerges on the shop floor, it is not documented and is lost to the next shift. We want to develop an AI ecosystem that not only answers questions, but captures and democratizes the technical knowledge that emerges day to day. 2. THE SOLUTION: "THE KNOWLEDGE LOOP" B...
I have an Excel template ready and a list of items I need populated with reliable, up-to-date product details. For every product on the list, please pull information only from official brand websites, leading eCommerce platforms, and the customer-review sections of those sites. What I expect captured for each item: • Current price and stock status • Key features or technical specifications exactly as stated by the manufacturer or retailer • Average customer rating plus any standout review insights (e.g., “4.5/5 from 230 reviews”) Accuracy matters more than speed, so cross-check conflicting figures before entering them. Add the source URL next to every data point so I can verify quickly. Once the sheet is complete, send it back in the same format...
I have a spreadsheet with 200 US-based websites and I need the direct phone number of each owner. The numbers are not published on the sites themselves, so please pull them through your own account. Alongside every number, include the owner’s LinkedIn profile URL; no other fields are required. What I expect from you • A clean CSV or Google Sheet with three columns: Website, Owner Phone Number, LinkedIn Profile • Accuracy checked against Apollo’s latest data • Completion within 24 hours of project acceptance This is a quick job for an experienced Apollo user. I will review the sheet immediately and release payment within 24 hours once the data is verified.
Every week I compile a fresh list of Danish houses and apartments that may have changed hands in the previous seven days. Your job is to open the specific web link I supply for each property and confirm whether the listing now shows as “Solgt / Sold.” No phone calls to agents, no site visits—everything happens inside the browser, one URL at a time. I need someone who can commit to roughly 50 hours of this work each week on an ongoing basis and who is comfortable updating a shared Google Sheet (or Excel file, if you prefer) as you go. For each address you will: • mark the sale status (Sold / no data) • attach or link a screenshot of the listing as proof That’s it. The task is straightforward but must be done manually—no bots or scraping tools. Wh...
I'm looking for a comprehensive list of home decor small businesses in Florida. The list should be organized by city and delivered in an Excel spreadsheet format. Requirements: - Contact details - Product offerings - Customer reviews - Categorized by city Ideal skills and experience: - Attention to detail - Experience with data collection - Proficient in spreadsheet software
I want to replace several manual reporting routines with an end-to-end AI workflow that ingests data from our internal finance databases and live web sources, then produces clear, timely analytics for management. Reporting and analytics are the sole focus—no transaction execution—so the system must excel at pulling, cleaning, and interpreting numbers rather than booking them. We also want to compare legal documents against term sheets and Excel spreadsheets. Data sources • Company databases (SQL, flat files, Excel exports) - Dropbox: all our files are in Dropbox • Extensive web scraping for competitor benchmarks and investment-market signals If you have ideas for safely adding external financial APIs later, let me know, but the two feeds above are mandatory. - Th...
I need an experienced engineer to analyze and improve a high-demand online booking workflow so bookings can succeed reliably even under extreme traffic. I already have a working Playwright-based browser automation, but during peak demand all sessions currently land on a “high demand / unavailable” state. The goal is to improve the success rate through deeper system understanding, better timing, and smarter flow control. The work involves analyzing the booking flow and state transitions, understanding how availability actually appears during high demand (including delayed or staggered releases), improving timing, retries, waiting strategy, and navigation logic, eliminating race conditions and aborted navigations, and designing the automation to be long-running, reactive, and resilient ...
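The timing and retry strategy this post describes is often structured as an exponential-backoff loop with jitter, so that many concurrent sessions don’t retry in lockstep. A minimal sketch, not tied to any particular booking site: `attempt_fn` and `is_success` here are hypothetical placeholders for the Playwright navigation step and its result check.

```python
import random
import time

def retry_with_backoff(attempt_fn, is_success, max_attempts=8,
                       base_delay=1.0, max_delay=30.0, sleep=time.sleep):
    """Call attempt_fn until is_success(result) is true, waiting an
    exponentially growing, jittered delay between attempts.
    Returns the successful result, or None if all attempts fail."""
    for attempt in range(max_attempts):
        result = attempt_fn()
        if is_success(result):
            return result
        # Exponential backoff capped at max_delay; random jitter spreads
        # retries out so sessions don't hammer the site simultaneously.
        delay = min(base_delay * (2 ** attempt), max_delay)
        sleep(delay * random.uniform(0.5, 1.5))
    return None
```

In a real Playwright flow, `attempt_fn` would navigate to the availability page and `is_success` would inspect the resulting state; the `sleep` parameter is injectable mainly so the loop can be tested without real waits.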
Hi. I have an Excel list of around 170,000 local businesses in Spain with their email addresses ("File 1"). I have another Excel list of around 65,000 local businesses with their websites but without email addresses ("File 2").

Tasks I need from your side:
1. With the help of AI or another tool, check the website of every business in File 2 and try to obtain their email addresses. I will add the emails obtained in task 1 to File 1 to create a complete file, File 3.
2. With the help of some tool, verify whether the email address of every business in File 3 is active.
3. With the help of some tool, verify whether the website of every business in File 3 is active.
4. With the help of AI or another tool, check every activ...
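The first task, pulling email addresses out of each business’s site, usually comes down to a pattern match over the fetched HTML. A minimal sketch, with the caveat that the regex and the `extract_emails` helper are illustrative assumptions; a production run would also need fetching, rate limiting, and a separate mailbox-verification step for task 2.

```python
import re

# Simple, pragmatic email pattern; intentionally loose rather than
# RFC-complete, which is typical for scraping use cases.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(html: str) -> set[str]:
    """Return the unique, lower-cased email addresses found in a page's
    HTML, including addresses inside mailto: links."""
    return {match.lower() for match in EMAIL_RE.findall(html)}
```

Lower-casing before deduplication matters here, since the same address often appears on a page in several capitalisations.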
I am looking for a full-stack developer to build a Web Compliance Audit platform. The goal is to create a solution that scans websites for GDPR, privacy, and cookie compliance using AI (the OpenAI API) and generates professional audit reports. The front-end will be WordPress (for user management, payments, and UI), while the "brain" will be a Python-based engine running on a Linux VPS.

Key Features & Requirements:
1. WordPress Frontend:
- Landing Page: professional UI where users can enter a URL for a "Quick Scan."
- User Dashboard: where clients can see their scan history and download PDF reports.
- Payment Integration: WooCommerce or Paid Member Subscriptions (Stripe/PayPal) for one-time reports or monthly monitoring plans.
- API Integration: a custom function to send the U...
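Scanners like this typically run cheap heuristic checks on the fetched page before involving the AI layer. A hedged sketch of what a "Quick Scan" pre-check might look for; the keyword lists and the `quick_cookie_check` function are illustrative assumptions, not the platform’s actual logic.

```python
def quick_cookie_check(html: str) -> dict:
    """Heuristic pre-check for a 'Quick Scan': flag obvious compliance
    signals in raw HTML before any AI analysis. Keywords illustrative."""
    lowered = html.lower()
    return {
        # Does the page mention cookies at all?
        "mentions_cookies": "cookie" in lowered,
        # Hints of a consent banner / consent management platform.
        "consent_banner_hint": any(
            key in lowered for key in ("consent", "gdpr", "accept all")),
        # Is there any reference to a privacy policy?
        "privacy_policy_hint": "privacy" in lowered,
    }
```

Results like these could then be passed to the Python engine’s AI step as structured context, keeping the expensive OpenAI calls for the judgement-heavy parts of the report.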
I need a developer to collect data from multiple public websites and deliver it in a clean, structured format. This is for legitimate data extraction from publicly available pages. I will share the target URLs and exact data fields with shortlisted candidates.

Scope of work
- Scrape data from multiple public websites (details shared after shortlisting)
- Extract specific fields consistently and handle pagination/filtering where needed
- Normalize/clean the data (remove duplicates, consistent formatting)
- Export results to CSV/Excel/JSON (format to be confirmed)
- Provide a repeatable solution (script or small app) that I can run on demand
- Basic documentation: how to run it, how to adjust settings, where outputs go

Quality requirements
- Reliable scraping with error handling and retries
- Resp...
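The normalize-and-export steps in the scope above can be sketched in a few lines. This is a generic illustration under stated assumptions: the `normalize_rows` and `export_csv` helpers, and treating a full field-for-field match as a duplicate, are placeholders for whatever the final spec defines.

```python
import csv

def normalize_rows(rows):
    """Trim whitespace on every field and drop exact duplicate records
    (keyed on all field values after trimming)."""
    seen, clean = set(), []
    for row in rows:
        norm = {key: (value or "").strip() for key, value in row.items()}
        key = tuple(sorted(norm.items()))
        if key not in seen:
            seen.add(key)
            clean.append(norm)
    return clean

def export_csv(rows, path):
    """Write the cleaned rows to CSV, one column per field."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
```

The same `rows` structure could just as easily be dumped to JSON or Excel, which is why confirming the output format up front (as the post requests) costs nothing in the pipeline design.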
Hi. I need someone based only in Australia or the United Kingdom for my Betfair API development. I’m expanding an existing trading suite and now need clean, reliable access to Betfair’s Market Data API. Because of licensing considerations, I can only work with developers physically based in Australia or the United Kingdom.

Scope of work
– Connect to the Betfair Exchange API (non-interactive login is already in place on my side)
– Retrieve real-time odds, price ladders and market status updates for Horse Racing, Football, Tennis and Cricket markets only
– Structure the responses so they can be consumed by my Python back-end (JSON is fine)
– Handle throttling limits and session renewal gracefully

Deliverables
1. Well-commented source co...
I need a reliable legal researcher who can provide legal commentary on our old company filings from the U.S. SEC’s EDGAR database for publicly traded companies. The immediate task is to locate, download, and organise the relevant documents in a structured way that lets me review them quickly: filing date, form type, company name, CIK, and a direct link back to the source must all be captured. Acceptance criterion: the commentary must be capable of proving whether to re-establish this old application. Turnaround is flexible, but please outline how long you need per 100 filings so I can plan the next milestones.
I’m looking for a large data dump of UK-focused domains in the property, home improvement/construction, landlord, and property-services niches. The aim is to find unloved, good-quality SEO websites with affiliate or lead-gen potential that I can purchase and grow. See attached for the types of websites/niches that would be suitable. This is a quantity-first task; no manual website review or qualitative judgement is required, as I will handle outreach myself. Use Ahrefs or Semrush only to export domains that rank organically for broad UK property topics such as: landlord guides/compliance, property finance (BTL, bridging), property investment & deal sourcing, refurb/renovation planning, EPC & energy efficiency, planning permission & permitted development, HMO licensing, serviced accommodat...
I have an urgent need for a clean, well-structured dataset containing the listing agent’s first name, last name, mailing address, and phone number for well over 500 active Zillow listings. Speed is critical, but accuracy matters just as much; the final file should be ready for immediate import into my CRM. You are free to use whichever stack you prefer (Python with BeautifulSoup or Scrapy, Selenium, residential proxies, even the unofficial Zillow API) as long as rate limits are respected and the data is complete. I don’t need property details or price history; the focus is strictly on the agent contact fields.

Deliverables
• CSV or XLSX with a separate column for each required field
• A short read-me explaining the script or method so I can rerun it la...
I have a set of websites whose data I need to capture automatically, and I want the whole process built as a reusable Apify actor. I will share the exact URLs, the fields to be collected, and the desired output format once we agree to proceed, but the common theme is structured extraction (think product specs, profile info, or similar).

Here’s the outcome I’m expecting:
• A clean Node.js actor that runs on the Apify platform, uses the latest Apify SDK, and follows best practices for request queuing, proxy rotation, and error handling.
• Configurable input schema so I can plug in new target URLs or tweak search parameters without touching the code.
• Output saved to an Apify dataset (JSON/CSV) and pushed to my Google Drive via webhook on each successful r...