FAQ
The basic formula is very simple: 1 credit per page scraped. If your project is not urgent and you don't need any of the advanced features, that is the price for each page.
If your project is a bit bigger and you want to run it on multiple servers, each extra server beyond the default one costs an extra 0.02 credits. For example, if you run your scraper on 10 servers simultaneously, 1.18 credits will be deducted per page scraped, but you will get superb speed.
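As a quick illustration, the pricing arithmetic above can be expressed as a short formula. The sketch below is ours, not part of Stalkit; the function name and rounding are purely illustrative.

```python
# Sketch of the per-page credit cost described above: 1 credit per page,
# plus 0.02 credits for each server beyond the default one.
# The function name and rounding are illustrative, not part of Stalkit.

def credits_per_page(servers: int = 1) -> float:
    """Credit cost of scraping one page when running on `servers` servers."""
    base_cost = 1.0           # 1 credit for 1 scraped page
    extra_server_cost = 0.02  # per extra server beyond the first
    return round(base_cost + extra_server_cost * (servers - 1), 2)

print(credits_per_page(1))   # 1.0  -> default single server
print(credits_per_page(10))  # 1.18 -> matches the 10-server example above
```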
We offer 10,000 free credits to try the powerful features of Stalkit. The "Try us" package has all the features of any other subscription or credits package. Contact us if you need a larger test to make sure your scraper works correctly.
Yes. Our clients can draw on our experience in building large and complicated scrapers: for a small extra fee, we will do all the setup for you, letting you focus on using the scraped data.
Stalkit supports scraping data from many source types: simple static websites, interactive websites, login and other search forms, APIs returning JSON/XML data, and more.
You can use the export API to access the scraped data from code. We also offer custom solutions that insert the data straight into your database. Contact us and we will find the best solution for you.
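As an example, fetching scraped data over an export API might look like the sketch below. Note that the endpoint URL, the scraper_id and format parameters, and the bearer-token header are all assumptions made for illustration, not Stalkit's documented API; check your account documentation for the real values.

```python
# Illustrative sketch only: the endpoint URL, auth header and parameters
# below are assumptions, not Stalkit's documented export API.
import json
import urllib.request

API_TOKEN = "YOUR_API_TOKEN"                          # hypothetical auth token
EXPORT_URL = "https://api.stalkit.example/v1/export"  # hypothetical endpoint

request = urllib.request.Request(
    f"{EXPORT_URL}?scraper_id=12345&format=json",     # hypothetical parameters
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
with urllib.request.urlopen(request) as response:
    rows = json.load(response)                        # scraped rows as JSON

for row in rows:
    print(row)
```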
No. We offer both subscription packages and one-time credit purchases. Buying credits is a perfect solution if you have a one-time project or don't want to commit to a longer term.
Stalkit is a fully cloud-based solution. No download is needed and everything runs on our servers. Our goal is to give you the best service and save you time and money on servers, proxies, and coding time.
Tutorials
General screens
Introducing the Stalkit main panel: viewing the scrapers list, the log screen, and other actions that can be performed on each scraper.
Building your first scraper (part 1)
Building your first scraper: general settings, adding data filters, and setting up email notifications.
Building your first scraper (part 2)
The scraper builder screen: the list of actions that can be performed, and using the item selector panel to select fields to extract.
Creating and running your first scraper
Building a scraper and running it. Basic text modifications to clean the data.
Scraping paginated websites
Setting up pagination rules for different sources: how to handle page pagination, load-more buttons, and infinite scrolls.
APIs and importing from Excel
Scraping data from non-HTML sources: importing data from Excel, scraping JSON and XML data types.
Text modifications - part 1
Using the text modifications screen: cleaning data, duplicating columns, and selecting which fields to export.
Text modifications - part 2
Joining different tables without needing Excel's VLOOKUP. Removing duplicates.
Downloading html files and working on local copy
Using the download files option for big projects. Rerunning and fixing issues on a local copy.
Images and files downloading
Downloading images and media files from a list of scraped URLs: splitting the data into folders and setting up naming rules.
Account and credits usage
Account subscriptions compared to buying credits. How credits are calculated and used.
Use cases - pagination types
Full use cases on common websites. Setting up the scrapers from scratch.
Scraping data behind logins
A full use case on a common website: setting up a scraper behind a login from scratch.
Input selections
A full use case on a common website: setting up a scraper with input selections from scratch.
Loading scrapers from templates
A full use case on a common website: building a scraper from a ready-made template.
Scraping complicated site layouts
A full use case on a common website with a complicated layout, setting up the scraper from scratch.
Scraping emails from domains
A full use case: scraping email addresses from domains, setting up the scraper from scratch.
Recurring scraping
A full use case: setting up a recurring scraper on a common website from scratch.
Secret methods of scraping
Bonus video: secret methods to make your scrapers run faster. Finding sitemaps and hidden APIs, and scraping live refreshing data.