Posted: 13 Aug 2018 07:30 AM PDT
I love our industry for the variety of tools we create for ourselves.
I think we are one of the most self-innovating industries out there, and I am always happy to come across new tools to play with.
This time I am reviewing a cool new SEO crawler: JetOctopus.
Disclaimer: I was given free credits to play with the tool for this review.
JetOctopus is probably one of the most efficient crawlers on the market. It’s fast and incredibly easy to use, even for a non-SEO.
Its most convincing selling point is that it has no crawl limits, no simultaneous-crawl limits, and no project limits, giving you more data for less money. If you are working on a huge database-driven website, you’ll definitely find it a money- and time-saver.
The best part of the tool is that it’s web-based which makes it perfect for collaboration: Your team doesn’t need any new software installed. All they need is a (universal) login.
Web-based tools keep teams on the same page: when they log in, everyone sees the same thing. Whenever I can, I use online tools for exactly this reason: cross-team (and cross-device) co-working.
When it comes to SEO crawlers, the usual problem with web-based solutions is that they are not fast enough. You’ll be happy to find JetOctopus to be even faster than its desktop alternatives.
Your content team will appreciate its “Content” section, which can generate all kinds of analyses thanks to its flexible filters.
Naturally, there are a lot of features targeting a more technically-minded user. JetOctopus helps dev teams diagnose all kinds of errors that hinder a smooth user experience or prevent search crawlers from accessing your site.
For example, you can easily find:
Internal Linking Analysis
We are all pretty sure (and anyone working with at least one site has seen actual experimental evidence) that internal links help a page rank better in search. So how come we have so few tools analysing internal links for each particular page?
We have a few powerful platforms analyzing incoming links from other domains, but, to the best of my knowledge, there’s no good solution for seeing how many internal in-links a web page has.
JetOctopus has just introduced a great feature our industry is missing: “Linking Explorer” lets you see how many pages within your site link to a particular page (or pages) and, more importantly, which anchor text those internal links have:
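JetOctopus doesn’t publish how Linking Explorer works under the hood, but the core idea, counting how many internal pages link to each URL and with which anchor text, can be sketched in a few lines of Python using only the standard library. Everything here (function names, the sample pages) is illustrative, not JetOctopus’s actual implementation:

```python
from collections import Counter, defaultdict
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class LinkParser(HTMLParser):
    """Collects (href, anchor_text) pairs from <a> tags in one page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def internal_inlinks(pages, site):
    """pages: {url: html}. Returns {target_url: Counter of anchor texts},
    counting only links that stay on the given site (internal links)."""
    inlinks = defaultdict(Counter)
    for url, html in pages.items():
        parser = LinkParser()
        parser.feed(html)
        for href, anchor in parser.links:
            target = urljoin(url, href)  # resolve relative hrefs
            if urlparse(target).netloc == site:
                inlinks[target][anchor] += 1
    return inlinks


pages = {
    "https://example.com/": '<a href="/about">About us</a>',
    "https://example.com/blog": '<a href="/about">our team</a>',
}
print(internal_inlinks(pages, "example.com"))
```

In a real crawl, `pages` would be filled by fetching the site; the anchor-text counts per target URL are the same kind of data Linking Explorer surfaces.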
Takeaway: Dig as Deep as You Need / Can
The beauty of SEO crawlers is that everyone uses them differently. An SEO crawler isn’t supposed to show you the way: instead, you can play with the data in your own way to identify what matters to you based on your focus and specialty.
JetOctopus accomplishes this task in an almost perfect way: Its Data Table view gives you all the filters and options to find whatever it is you are looking for, be it canonical tags, redirects, load time metrics or almost anything else under the sun.
I’d probably argue with some of the things JetOctopus identifies as issues (e.g. too-short or too-long title tags), and sometimes I’ve seen it label pages as having “multiple title tags” even though I could clearly see only one in the code. But I don’t expect to always agree with an SEO tool, as in many cases we don’t have clearly set industry standards.
I am an Excel fan, and JetOctopus perfectly matches my love for filters, segments, and tables I can play with.
To check out more of my tool reviews, proceed here:
To get your tool reviewed by me (which usually results in more (guest) post mentions), read on how to pitch your SEO tool to me here.
If you give JetOctopus a try (they do have a free trial), let me know your thoughts in the comments!
You are subscribed to email updates from Seo Smarty.