The Scraping Robot Update: Try Me!

Scraping Robot
July 7, 2020
Community

In this ongoing blog series, we’ll be sharing all the exciting stuff going on behind the scenes of your favorite web scraper. Whether we’re launching new modules, adding to our current functionality, or improving our backend, there’s always lots of work going on at Scraping Robot!

Hiya, folks! We’re back! In our last update, we shared that our team was doing as well as could be expected while working from home. We’re happy to say that this is still the case, and we hope that you’re getting the hang of things, too. These unpredictable days can feel a little overwhelming, but Scraping Robot is putting in work to improve our scraping tools and make sure they’re opening doors that were closed before.

Speaking of improvements (the whole point of this blog series, right?), we’re so excited to share our new products, documentation and stuff to try. The list may seem short, but trust us, quality far exceeds quantity in this case.

What Time Is It? Real Time

We implemented our very own API just two months ago, and big things are happening already! Our Google and Amazon scraping modules have helped our customers find great success, but we knew it was time to make their scraping experiences even better. Now, you can use our Google and Amazon API documentation to program your scraper to get real-time data from these sites in less than 60 seconds. This is big! (If you’ve read our article on the importance of using an API, you’ll see why!)
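To give you a feel for what a real-time request might look like in code, here’s a minimal Python sketch. The endpoint URL, token, module name, and payload fields below are placeholders we made up for illustration; the real names and parameters live in our Google and Amazon API documentation.

```python
import requests

# Placeholder values -- swap in the real endpoint, token, and module name
# from the API documentation. These are illustrative assumptions only.
API_URL = "https://api.example-scraper.com/v1/scrape"
API_TOKEN = "YOUR_API_TOKEN"


def scrape_amazon_product(asin: str) -> dict:
    """Request real-time product data for a single Amazon ASIN."""
    payload = {
        "module": "amazon_product",  # hypothetical module name
        "asin": asin,
    }
    response = requests.post(
        API_URL,
        params={"token": API_TOKEN},
        json=payload,
        timeout=60,  # real-time results are expected in under a minute
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(scrape_amazon_product("B08N5WRWNW"))
```

The same pattern applies to a Google lookup once you swap the payload for the parameters in the documentation.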

Not only can you use these APIs to pull the data that matters to you on demand, but you can also build them into your own software to catch changes as soon as they happen. Maybe you want to scrape Google and Amazon hundreds (even thousands) of times a day. Or maybe you only want to gather pricing data from Amazon the moment it changes. Whatever your goals may be, our API will help you reach them!
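For example, if you only care about price movements, a simple polling loop can compare each fresh result against the last one and react only when something actually changes. As above, the endpoint, payload, and response fields here are assumptions for the sake of the sketch, not our documented API.

```python
import time

import requests

API_URL = "https://api.example-scraper.com/v1/scrape"  # hypothetical endpoint
API_TOKEN = "YOUR_API_TOKEN"


def fetch_price(asin: str) -> float:
    """Fetch the current price for an ASIN via the (hypothetical) API above."""
    response = requests.post(
        API_URL,
        params={"token": API_TOKEN},
        json={"module": "amazon_product", "asin": asin},  # placeholder payload
        timeout=60,
    )
    response.raise_for_status()
    # Assumes the JSON response exposes a "price" field.
    return float(response.json()["price"])


def watch_price(asin: str, interval_seconds: int = 300) -> None:
    """Poll an ASIN and report only when the price actually changes."""
    last_price = None
    while True:
        price = fetch_price(asin)
        if last_price is not None and price != last_price:
            print(f"Price changed for {asin}: {last_price} -> {price}")
        last_price = price
        time.sleep(interval_seconds)


if __name__ == "__main__":
    watch_price("B08N5WRWNW")
```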

It’s on the Tip of My Tongue!

Allow us to give you a taste of our newest product: the Scraping Robot Desktop app. We believe in giving our customers something to work with as soon as possible, so this working version will help us address your needs as we go (instead of developing an entire product that nobody asked for). Take a look at our Beta/MVP expectations to understand more about the features and limitations of this MVP, or minimum viable product. Eventually, the desktop app will let you run bulk scraping projects, scrape faster, and get clearer insight into your scraping successes and errors.

If you’re interested in trying out the Scraping Robot Desktop app and becoming a beta customer, contact CEO Neil Emeigh right away. When you join our testing team, you’ll get access to software updates, free of charge. All we ask in return is that you provide us with regular feedback, which shouldn’t take longer than 5 (ish) minutes every couple of weeks. You can expect that we’ll take your concerns and ideas seriously. When you talk, we listen! It’s a super simple process that allows you to have a real voice in the type of product we develop.

What’s Next for Scraping Robot?

Our roadmap page is your go-to source for Scraping Robot Desktop app updates and our responses to customer feedback. There, you can see our top priorities and even make your own suggestions. In fact, we hope you do! Like we said: we’d rather listen to our customers than assume we know what you want, and we can only do that if you tell us what could be better. Thanks to the customers who voted on new features, we’re prioritizing bulk scraping, individual project statistics, and retries for failed scrapes.

As always, let us know if you have custom scraping needs or more ideas about our new products. We’re excited to hear what you think!

The information contained within this article, including information posted by official staff, guest-submitted material, message board postings, or other third-party material is presented solely for the purposes of education and furtherance of the knowledge of the reader. All trademarks used in this publication are hereby acknowledged as the property of their respective owners.