SEO techniques change over time as the demands of boosting a website’s ranking change. The latest Google updates, however, required more than the usual optimization process.
Since the Panda and Penguin updates, many webmasters and search engine optimizers have been challenged to create a strategy that will reliably boost their website’s performance in search engines. The sudden drop of websites from their positions in search results, especially in Google, has resulted in online businesses panicking over their performance and their profits.
While some websites fell from their rankings, others rose from their previous positions. This shows that although certain techniques do need to change, some optimizers have been doing things right all along.
Actually, what is now required of search engine optimizers is pretty simple: concentrate on white-hat techniques and do not spam. By “white hat” we mean focusing on quality SEO rather than bombarding your articles, and even your blog comments and forum posts, with links and keywords. While sheer quantity of links could once boost your visibility, it has since made those sites unstable in the rankings.
Most posts on search engine optimization today concentrate on what to do to get back into the rankings. While these are definitely helpful, it is still worth understanding exactly what made other websites fall from their rankings after the Google updates. Here are some of the mistakes that were made and that optimizers should avoid.
Google seems to devalue content that has been produced with low quality in mind, such as articles mass-produced by hired writers with no knowledge of the topic and later submitted to a large number of article directories. Using automated article-submission software was always considered a black-hat SEO technique, and it is one that Google has now dealt with effectively.
Major article directories such as EzineArticles and HubPages have been affected. Although the articles on these sites are often unique to begin with, they are later copied and republished on other sites free of charge, or submitted to hundreds of other article directories. The sites that copy an article from a directory are obliged to provide a link back to it. This link-building technique will have to be revised in light of the algorithm change.
The good news is that Matt Cutts said that searchers “are more likely to see the websites that are the owners of the original content rather than a site that scraped or copied the original site’s content”.
The sites affected most are “scraper” sites that publish no original content themselves but instead copy content from other sources via RSS feeds, aggregate small amounts of content, or simply “scrape” content from other sites using automated methods.
Data mining isn’t screen-scraping. I know that some people in the room may disagree with that statement, but they’re actually two almost completely different concepts.
In a nutshell, you might state it this way: screen-scraping allows you to get information, where data mining allows you to analyze information. That’s a pretty big simplification, so I’ll elaborate a bit.
The term “screen-scraping” comes from the old mainframe terminal days where people worked on computers with green and black screens containing only text. Screen-scraping was used to extract characters from the screens so that they could be analyzed. Fast-forwarding to the web world of today, screen-scraping now most commonly refers to extracting information from web sites. That is, computer programs can “crawl” or “spider” through web sites, pulling out data. People often do this to build things like comparison shopping engines, archive web pages, or simply download text to a spreadsheet so that it can be filtered and analyzed.
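To make the idea concrete, here is a minimal sketch of the kind of extraction a comparison-shopping engine might do, using only Python’s standard-library HTML parser. The page markup, the `price` class, and the `PriceScraper` name are all invented for illustration; a real scraper would fetch live pages and handle far messier HTML.

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Pull the text of every <span class="price"> element out of raw HTML."""

    def __init__(self):
        super().__init__()
        self.capture_next = False  # True right after a price span opens
        self.items = []            # collected price strings

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        if tag == "span" and ("class", "price") in attrs:
            self.capture_next = True

    def handle_data(self, data):
        if self.capture_next:
            self.items.append(data.strip())
            self.capture_next = False

# A made-up product listing standing in for a fetched web page.
html = """
<ul>
  <li>Widget A <span class="price">$9.99</span></li>
  <li>Widget B <span class="price">$4.50</span></li>
</ul>
"""

scraper = PriceScraper()
scraper.feed(html)
print(scraper.items)  # ['$9.99', '$4.50']
```

The same pattern scales up: crawl a list of URLs, feed each response body to a parser like this, and write the extracted fields to a spreadsheet or database for later analysis.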
Data mining, on the other hand, is defined by Wikipedia as the “practice of automatically searching large stores of data for patterns.” In other words, you already have the data, and you’re now analyzing it to learn useful things about it. Data mining often involves lots of complex algorithms based on statistical methods. It has nothing to do with how you got the data in the first place. In data mining you only care about analyzing what’s already there.
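By contrast, here is a toy data-mining sketch: the data (shopping baskets, invented for illustration) is already in hand, and the program searches it for patterns, in this case item pairs that frequently occur together, a miniature version of frequent-pattern mining. Note that nothing here cares where the baskets came from.

```python
from collections import Counter
from itertools import combinations

# Data we already have: each basket is one customer's purchase (made up).
baskets = [
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "eggs"},
    {"bread", "milk", "butter"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Keep only pairs seen in at least half of the baskets.
threshold = len(baskets) / 2
frequent = {pair: n for pair, n in pair_counts.items() if n >= threshold}
print(frequent)  # {('bread', 'milk'): 3, ('eggs', 'milk'): 2}
```

Real data-mining systems apply far more sophisticated statistical algorithms to far larger stores of data, but the shape is the same: the input is existing data, and the output is patterns learned from it.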
The difficulty is that people who don’t know the term “screen-scraping” will try Googling for anything that resembles it. We include a number of these terms on our web site to help such folks; for example, we created pages entitled Text Data Mining, Automated Data Collection, Web Site Data Extraction, and even Web Site Ripper (I suppose “scraping” is sort of like “ripping”). So it presents a bit of a problem: we don’t necessarily want to perpetuate a misconception (i.e., screen-scraping = data mining), but we also have to use terminology that people will actually use.