
How To Find High DA Expired Web 2.0 Blogs




Scraping expired domains is fine and dandy, but they cost money to register, and you really need multiple hosting accounts so each one gets a different C-class IP. That kind of defeats the marketing-on-a-budget part. So instead we want to find Web 2.0’s that are expired, deleted, or free to register. Finding expired Web 2.0’s is made easy with ScrapeBox’s Vanity Name Checker addon. Unfortunately it only supports a handful of Web 2.0’s: Tumblr, WordPress, Blogspot, LiveJournal, Wikidot, OverBlog, and some others that are not useful for finding expired domains. WordPress does not let you register previously used names, and LiveJournal will charge you $15 to rename your account to an expired name. This makes WordPress totally useless, and LiveJournal can be a pain, since you have to register multiple accounts and then pay to change each name. That being said, I have found it easier to find high PR LiveJournal sites than any other Web 2.0, and I currently have a couple of LiveJournal sites with very high PR.

The last thing to note before we get into the tutorial is that the PR of Web 2.0’s doesn’t always stick. I have noticed this mainly with Tumblr, but occasionally on other sites too. When researching Tumblr sites you need to look at the backlinks: if you only see reblogged links or likes, you will probably lose the authority. At least, that’s what I have noticed. You want to find links that are not reblogs or likes.
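If you want to automate that reblog check, here is a rough Python sketch under a few assumptions: it takes a blog name plus a plain-text file of backlink source URLs (for example, exported from a backlink tool like Open Site Explorer), fetches each page, and flags links that only appear next to a “reblogged this from” or “liked this” credit. Tumblr themes vary in how they print those credits, so treat the output as a hint, not proof.

    # Hedged sketch: flag backlinks that look like Tumblr reblog/like credits.
    # Usage: python reblog_check.py someblog.tumblr.com backlinks.txt
    # (the blog name and file name are placeholders, not from the tutorial)
    import sys
    import requests

    REBLOG_MARKERS = ("reblogged this from", "liked this")

    def looks_like_reblog(source_url, target):
        """True if `target` appears near a reblog/like credit on the source page."""
        try:
            html = requests.get(source_url, timeout=15).text.lower()
        except requests.RequestException:
            return False  # unreachable page: can't tell, so don't flag it
        idx = html.find(target.lower())
        if idx == -1:
            return False
        # Inspect a small window of markup around the first mention of the target.
        window = html[max(0, idx - 300):idx + 300]
        return any(marker in window for marker in REBLOG_MARKERS)

    if __name__ == "__main__":
        target, backlink_file = sys.argv[1], sys.argv[2]
        for line in open(backlink_file):
            url = line.strip()
            if url:
                print("REBLOG" if looks_like_reblog(url, target) else "KEEP  ", url)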

Scrape Those Expired Web 2.0’s


Scraping Web 2.0’s is stupid easy. You are going to need to create a footprint file for ScrapeBox. These are a couple of footprints that I use:
site:.tumblr.com
site:.overblog.com
site:.blogspot.com
I use the keyword scraper to create a long list of relevant keywords in the niche I am working in. I append A–Z, set the level to 2, and select all the sources. This usually produces a big list of semi-relevant keywords. Send your keywords back to the harvester and click the M button to select the footprint file you just created; this merges every keyword with every footprint. Let ScrapeBox scrape until you hit a million results or until it finishes all the keywords. Once it’s finished harvesting, remove duplicate domains and then trim to root. Hopefully you will end up with a list of 150,000 to 250,000 domains.
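If you want to see the mechanics outside of ScrapeBox, here is a minimal Python sketch of the merge and the post-harvest cleanup under a couple of assumptions: keywords.txt, footprints.txt, and harvested.txt are placeholder file names, and ScrapeBox’s own merge may format the queries slightly differently.

    # Minimal sketch of the "M" merge plus the de-dupe/trim-to-root cleanup.
    from itertools import product
    from urllib.parse import urlparse

    # 1. Merge every footprint with every keyword into one query list.
    footprints = [l.strip() for l in open("footprints.txt") if l.strip()]
    keywords = [l.strip() for l in open("keywords.txt") if l.strip()]
    with open("queries.txt", "w") as out:
        for fp, kw in product(footprints, keywords):
            out.write(f"{fp} {kw}\n")

    # 2. After harvesting: remove duplicate domains and trim each URL to root.
    #    Expects full URLs (with scheme), which is what a harvest produces.
    roots = set()
    for line in open("harvested.txt"):
        parsed = urlparse(line.strip())
        if parsed.netloc:
            roots.add(f"{parsed.scheme or 'http'}://{parsed.netloc}/")
    with open("trimmed.txt", "w") as out:
        out.write("\n".join(sorted(roots)) + "\n")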

Two Methods To Find Expired Blogspot and Tumblr Blogs:

I said I wasn’t going to give my footprints away, but here you go!

When scraping for expired sites, we want to find pages that contain lists of Blogger or Tumblr blogs.

How To Find Expired Blogger Domains

For Blogger, use this footprint:
site:.blogger.com/profile/ “Blogs I follow”
This finds profile pages of users who follow other blogs, and those “Blogs I follow” lists will often contain a ton of links. As before, scrape a list of keywords and place the above footprint in ScrapeBox’s footprint section. The harvest shouldn’t return any URLs that aren’t profile URLs, but I filter out any URL that doesn’t contain the word “profile” anyway. Now you should have a big list of profiles that all contain the “Blogs I follow” section. Next, use the ScrapeBox Link Extractor addon to extract all the external URLs; depending on how big your list is, you may want to split it to make extraction easier (I usually split my extracted links list into chunks of 100,000). Once the links are extracted, all that’s left is to run them through the Vanity Name Checker.
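In case you want to replicate that pipeline without the addons, here is a hedged Python sketch. It assumes a harvested.txt file of scraped profile URLs and needs the requests and beautifulsoup4 packages; Blogger’s profile markup can change, so the link extraction is best-effort.

    # Sketch of the Blogger-profile pipeline: filter profile URLs, extract
    # external links from each profile page, split into 100,000-line chunks.
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urlparse

    def external_links(profile_url):
        """Collect every link on the page that points off the current host."""
        html = requests.get(profile_url, timeout=15).text
        soup = BeautifulSoup(html, "html.parser")
        host = urlparse(profile_url).netloc
        return {a["href"] for a in soup.find_all("a", href=True)
                if urlparse(a["href"]).netloc not in ("", host)}

    harvested = [u.strip() for u in open("harvested.txt") if u.strip()]
    profiles = [u for u in harvested if "profile" in u]  # drop non-profile URLs

    links = set()
    for url in profiles:
        try:
            links.update(external_links(url))
        except requests.RequestException:
            pass  # skip dead or blocked profiles

    # Split into 100,000-line chunks, like splitting the list in ScrapeBox.
    ordered = sorted(links)
    for i in range(0, len(ordered), 100_000):
        with open(f"extracted_{i // 100_000}.txt", "w") as out:
            out.write("\n".join(ordered[i:i + 100_000]) + "\n")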
You can also use these footprints:
“Sorry, the blog you were looking for does not exist. However,” site:.blogspot.com
“Blog not found” site:.blogspot.com
Blog not found site:.blogspot.com
Sorry, the blog you were looking for does not exist. However, site:.blogspot.com
I believe these footprints are used quite often, and I very rarely find available sites with them.
How To Find Expired Tumblr Domains
For Tumblr, use this footprint:
“reblogged this from”
You can also add site:.tumblr.com to the footprint, but I get a lot fewer results that way. The downside of this method is that you will harvest a lot of sites besides Tumblr. You will need to follow the same steps as the Blogger method above: use the Link Extractor to extract all the external URLs, then filter the list to keep only the tumblr.com domains.
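That last filter might look like this in Python; extracted_links.txt is a placeholder name for the Link Extractor’s output.

    # Keep only *.tumblr.com subdomains from the extracted links, trimmed to root.
    from urllib.parse import urlparse

    tumblrs = set()
    for line in open("extracted_links.txt"):
        host = urlparse(line.strip()).netloc.lower()
        if host.endswith(".tumblr.com") and host != "www.tumblr.com":
            tumblrs.add(f"http://{host}/")

    with open("tumblr_roots.txt", "w") as out:
        out.write("\n".join(sorted(tumblrs)) + "\n")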

Vanity Check the Web 2.0’s

Open up the Vanity Name Checker addon in ScrapeBox and import your harvested URLs. I use 20 semi-dedicated private proxies for checking the names and set the connections to 40 or 50, depending on how much load my proxies are under. The Vanity Name Checker tends to crash for me in ScrapeBox version 1 when I import more than 250,000 URLs; ScrapeBox 2 (the 64-bit version) seems to work fine with large lists. After the first run is completed, I remove all the taken names and rerun the name checker in case any of the failed ones can be rechecked. You may need to do this several times to check as many of the names as possible. Hopefully you will get around 2,000 or 3,000 available names, but at minimum you should get 200 or 300. Save the available names to a text file so you can process them.
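If you want a rough stand-in for the Tumblr side of that check, one assumption-heavy approach is to request each subdomain and treat a 404 as “possibly available”, with the same rerun-the-failures loop described above. A 404 does not guarantee the name is registrable (Tumblr reserves and bans some names), so use it only as a first pass; names.txt is a placeholder file of names to check.

    # Rough availability check: a 404 from name.tumblr.com *may* mean the
    # name is free. Network errors are queued and rechecked, as above.
    import requests

    def tumblr_maybe_available(name):
        """True = looks free, False = taken, None = recheck later."""
        try:
            r = requests.get(f"https://{name}.tumblr.com/", timeout=15)
        except requests.RequestException:
            return None  # proxy/network error: queue for a rerun
        return r.status_code == 404

    retry = [n.strip() for n in open("names.txt") if n.strip()]
    while retry:
        failed = []
        for name in retry:
            verdict = tumblr_maybe_available(name)
            if verdict is None:
                failed.append(name)
            elif verdict:
                print(name)  # possibly available: save for the PR/PA checks
        if len(failed) == len(retry):
            break  # no progress on reruns; give up on the leftovers
        retry = failed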

Find The Gold

Import the file of available names into ScrapeBox to check the PR. I take all sites that are PR 2 and above and analyze them through SEOmoz (the Page Authority addon). At this point, any site that meets my basic criteria I go ahead and register before someone else does. You may want to be more thorough and inspect the backlinks or whatever else, but since it takes very little effort, I just register them ASAP. You would be surprised how many times a name will be taken right after you research it.

So now we have grabbed the sites with PR, but what about all the rest? Don’t toss them; a lot of them will still have authority, they just might not show PR at the moment for various reasons. Run all of those sites through the Page Authority addon. I keep anything with a PA above 30 and save those sites for further review; a minimum PA of 30 is simply the filter I use, and you may prefer different metrics. Don’t get excited when you see the domain has a DA above 90: these sites are subdomains, so they inherit the high DA of their parent domains. You can manually check the links in Open Site Explorer and look for high PR backlinks. That is about all I do to check whether a site is worth registering. Because we do not need to pay for a domain name or hosting for these sites, there is not much to lose by registering one. However, if you buy or write a lot of content for the site and then find out it has been spammed to death, you could lose some time and money.
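If you export the Page Authority addon’s results to a CSV, a filter like the one below keeps only the PA 30+ sites. The “URL” and “PA” column names are assumptions; adjust them to match whatever your export actually contains.

    # Keep sites whose Page Authority is 30 or higher from an exported CSV.
    import csv

    with open("metrics.csv") as f, open("keepers.txt", "w") as out:
        for row in csv.DictReader(f):
            try:
                if float(row["PA"]) >= 30:
                    out.write(row["URL"] + "\n")
            except (KeyError, ValueError):
                continue  # skip rows without usable URL/PA fields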

What Do I Do Now?

So, you have some high PR Web 2.0’s, but what should you do with them now? There are a few options at this point:

  1. Create or buy original content. This will always be the best solution.
  2. You could try to pull the site’s old content off archive.org. This can lead to copyright violations or other headaches, and a copyright violation will more than likely get the site taken away from you. It’s also a big pain to collect all the old content and get it up on the new site. I almost never use this method.
  3. You can place some PLR articles or spun content on the site. I manually spin articles for other projects and will reuse those manually spun articles after checking them for grammar mistakes. I always make sure they are perfectly readable.
  4. Use Wicked Article Creator (WAC) or WordAI to generate readable articles for the Web 2.0’s. This is mainly what I do, and it works quite well. You will need to spend a few minutes correcting grammar errors in the final article.


Final Thoughts

You should now be able to find high PR Web 2.0’s easily, but you should keep adapting the process to your own needs. I found several methods for finding expired Web 2.0’s and modified them to get better results; you should always be trying to one-up any method you find.

Comments

Topseoservice
Wow, such a nice and detailed guide on how to find high DA expired Web 2.0 blogs. Thanks for this.




SEOGuru
Wow, such a nice and detailed guide on how to find high DA expired Web 2.0 blogs. Thanks for this.

Thank you very much.




SEOGuru
How many people here have ScrapeBox?


