Writing Blog

February 3, 2010

What is the Google Affiliate Network?

Filed under: Blogs,Internet,Software — Rafael Minuesa @ 1:46 AM
This is one of a series of articles I posted for the 1000 Webs’ Blog.
You can view the original version at:

When Google acquired DoubleClick in March 2008, it also acquired DoubleClick's affiliate ad network program, Performics, the first full-service affiliate network, founded in 1998, which DoubleClick had in turn acquired in 2004. Google has now further developed and rebranded Performics as the Google Affiliate Network.

The Google Affiliate Network works like any ordinary affiliate ad network, enabling advertising relationships between publishers and advertisers, whereby publishers get paid for every successful sale transaction that their site brings to advertisers.

As a Google Affiliate Network publisher, you can add an advertiser’s banner or text link on your site. When a transaction, such as a sign-up or purchase, occurs through one of these affiliate links, Google Affiliate Network will track the sale and pay you a commission or bounty.

Someone clicks the ad on your site…
…buys the advertised product…
…and you receive a commission on the sale
 

The Google Affiliate Network has been integrated into Google AdSense. All Google Affiliate Network publishers must accept AdSense terms. Additionally, all earnings are distributed through AdSense.
But being a Google AdSense publisher does not automatically make you a publisher in the Google Affiliate Network. You must complete a separate application for Google Affiliate Network.
In order to join the program, you need to apply to their network in two steps:
Step 1: Link to or apply for a Google AdSense account.
Step 2: Tell Google about your site and promotional methods.

Each application is reviewed by the Google Affiliate Network quality team, which checks requirements such as whether your site attracts a desirable audience for the products offered, whether you are able to test advertising offers and nurture the most productive relationships, whether you are an expert at driving and converting visitor traffic, and whether you adhere strictly to Google Affiliate Network quality standards and advertiser policies.
In addition, Google states that:

we’ve found that Google Affiliate Network tends to yield greater benefits to publishers who create niche content, manage loyalty and rewards programs, aggregate coupons and promotions, or manage social media.

Payments are processed on a cost-per-action (CPA) basis, typically as a revenue share or fixed bounty for a lead or other action. Google Affiliate Network earnings will be posted to your Google AdSense account approximately 30 days after the end of every month.
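To make the CPA payout models above concrete, here is a minimal sketch, in Python, of how a revenue share versus a fixed bounty might be calculated for a single tracked transaction. The rates, amounts and field names are illustrative assumptions, not actual Google Affiliate Network terms.

    # Hypothetical CPA payout maths (made-up terms, not real network rates).
    def cpa_payout(transaction, terms):
        """Return the publisher commission for one tracked transaction."""
        if terms["type"] == "revenue_share":
            return transaction["order_value"] * terms["rate"]
        if terms["type"] == "fixed_bounty":
            return terms["bounty"]  # flat fee per lead or sign-up
        raise ValueError("unknown CPA terms: %r" % terms["type"])

    revenue_share = {"type": "revenue_share", "rate": 0.10}  # 10% of the sale
    fixed_bounty = {"type": "fixed_bounty", "bounty": 2.50}  # $2.50 per lead

    print(cpa_payout({"order_value": 80.00}, revenue_share))  # 8.0
    print(cpa_payout({"order_value": 0.00}, fixed_bounty))    # 2.5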


More Info:

January 17, 2010

Supplemental results: An experiment

Filed under: Blogs,Internet — Rafael Minuesa @ 1:34 AM
This is one of a series of articles I posted for my SEO Blog.
You can view the original version at:
* http://rafael-minuesa-seo.blogspot.com/2010/01/supplemental-results.html

What are supplemental results?
Supplemental results usually only show up in the search index after the normal results. They are a way for Google to extend their search database while also preventing questionable pages from getting massive exposure.

How does a page go supplemental?
In my experience, pages have typically gone supplemental when they became isolated doorway-type pages (lost their inbound link popularity) or when they are deemed to be duplicate content. For example, if Google indexes both the www version and the non-www version of your site, then most of one of those versions will likely end up in the supplemental results.
If you put a ton of DMOZ content and Wikipedia content on your site, that sort of stuff may go supplemental as well. If too much of your site is considered to be useless or duplicate junk, then Google may start trusting other portions of your site less.

Negative side effects of supplemental:
Since supplemental results are not trusted much and rarely rank, they are not crawled often either. And because they are generally not trusted and rarely crawled, links from supplemental pages likely do not pull much – if any – weight in Google.

How to get out of Google Supplemental results?
If you were recently thrown into them, the problem may be on Google's end. You may just want to wait it out, but also check that you are not making errors like serving both www and non-www versions, content management errors that deliver the same content at multiple URLs (such as rotating product URLs), or too much duplicate content for other reasons (you may also want to check that nobody outside your domain shows up when you search Google for site:mysite.com, and you can also look for duplicate content with Copyscape).
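As a quick way to check the www versus non-www issue mentioned above, the short Python sketch below asks both hostnames where they end up after redirects. It uses only the standard library; example.com is a placeholder for your own domain, and it assumes the site answers over plain HTTP.

    # Check whether the www and non-www hostnames collapse to one canonical
    # address, or whether both answer separately (possible duplicate content).
    import urllib.request

    def final_status_and_url(url):
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request) as response:  # follows redirects
            return response.status, response.geturl()

    for host in ("http://example.com/", "http://www.example.com/"):
        status, final_url = final_status_and_url(host)
        print(host, "->", status, final_url)

    # If both hosts end up at the same final URL, one canonical version is
    # being enforced. If each returns 200 at its own address, Google may
    # index both and file one of them as duplicate content.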
If you have pages that have been orphaned, or if your site's authority has gone down, Google may not be crawling as deep through your site. If you have a section that needs more link popularity to get indexed, don't be afraid to point link popularity at that section instead of trying to point more at the home page. If you add thousands and thousands of pages, you may need more link popularity to get them all indexed.
After you solve the problem it still may take a while for many of the supplementals to go away. As long as the number of supplementals is not growing, your content is unique, and Google is ranking your site well across a broad set of keywords then supplementals are probably nothing big to worry about.

Note:
All of the text above has been copy&pasted from:
http://www.seobook.com/archives/001545.shtml
so, if those points are correct this post should go Supplemental in no time, right?
Wrong. Wait and see …

Matt Cutts, a well known Google engineer, asked for feedback on the widespread supplemental indexing issue in this thread. As noted by Barry, in comment 195 Matt said:

Based on the specifics everyone has sent (thank you, by the way), I’m pretty sure what the issue is. I’ll check with the crawl/indexing team to be sure though. Folks don’t need to send any more emails unless they really want to. It may take a week or so to sort this out and be sure, but I do expect these pages to come back to the main index.

In the video below Matt Cutts answers questions about Supplemental Results:

http://video.google.com/videoplay?docid=-3494613828170903728&hl=en#

In the video, Matt Cutts is asked whether you should worry about result estimates for:

  1. supplemental results
  2. using the site: operator
  3. with negated terms and
  4. special syntax such as intitle: ?

And the answer was: no, that's pretty far off the beaten path.

Getting Out of Google Supplemental Results

Getting out of the Google Supplemental Results may be possible by improving your website navigation. To get more pages fully indexed by Google, the prominence of important pages can often be boosted by linking to them from the pages within your domain that have the highest Page Rank, such as your homepage. The reason is that Page Rank is passed from one page to another by links, and the most common cause of supplemental results is a lack of Page Rank.

Start by determining which of your most important web pages have been made supplemental – for example those promoting lucrative products and services – and then improve your website's internal linking by adding links to these pages from more prominent, fully indexed pages of your site, including your homepage. At the same time, ensure that your website navigation system is search-engine friendly, using a website link analyzer.
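As a rough illustration of why links from your highest Page Rank pages help, here is a toy Python sketch of the textbook PageRank iteration over a made-up four-page site. It is a simplified model, not Google's actual algorithm, and the page names are invented; the point is simply that adding a homepage link to a problem page raises that page's score.

    # Toy PageRank illustration (simplified; not Google's real algorithm).
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {page: 1.0 / len(pages) for page in pages}
        for _ in range(iterations):
            new_rank = {}
            for page in pages:
                inbound = sum(rank[q] / len(links[q]) for q in pages if page in links[q])
                new_rank[page] = (1 - damping) / len(pages) + damping * inbound
            rank = new_rank
        return rank

    # Before: the problem page is only linked from one category page.
    site = {
        "home": ["products", "articles"],
        "products": ["problem-page"],
        "articles": ["home"],
        "problem-page": ["home"],
    }
    print(pagerank(site)["problem-page"])

    # After: link to it directly from the homepage as well.
    site["home"].append("problem-page")
    print(pagerank(site)["problem-page"])  # noticeably higher score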

Site Link Analyzer Tool © SEO Chat™

By improving your website navigation and getting more inbound links from other websites, you may be able to get more of your pages fully indexed by Google, solving the problem of partial Google indexing and Supplemental pages.

Where the Google Page Rank of your website homepage is PR4 or PR3, improving your website navigation system and in particular the prominence of internal pages may help to get out of supplemental results. This can be done by including static hyperlinks from the homepage to your ‘problem supplemental result pages’.

However, where your homepage is PR3 or lower and you have a large website, internal navigation improvements alone may still not be enough when it comes to getting out of the Google Supplemental Results. At PR3 or lower, your homepage Page Rank is probably too low to pass on enough Page Rank to your internal pages to completely get out of Supplemental Results.

To fully solve the problem of partial Google indexing, get more one way links to your site from quality web directories and sites of a similar theme and wait patiently to become fully Google indexed. In addition, getting more quality one way links pointing to internal pages of your website (rather than just targeting your homepage) is another powerful way of boosting the ranking of those pages against specific keyword terms, and it will also assist in getting them out of supplemental results. This is often referred to as “deep linking”.

Note:
Google states that they have now removed the label “Supplemental Result” from their search result pages:

“Supplemental Results once enabled users to find results for queries beyond our main index. Because they were “supplemental,” however, these URLs were not crawled and updated as frequently as URLs in our main index.

Google’s technology has improved over time, and now we’re able to crawl and index sites with greater frequency. With our entire web index fresher and more up to date, the “Supplemental Results” label outlived its usefulness.”

Right, that is correct. It is now called "Omitted Results". Same thing really, and the same side-effects, at least from this non-Google point of view.

More Info

March 3, 2008

SPAM on Usenet

This is one of a series of articles I posted for magiKomputer.
You can view the original version at:
* http://magikomputer.blogspot.com/2008/03/spam-on-usenet.html

From Wikipedia, the free encyclopedia:

“Spamming is the abuse of electronic messaging systems to indiscriminately send unsolicited bulk messages. While the most widely recognized form of spam is e-mail spam, the term is applied to similar abuses in other media: instant messaging spam, Usenet newsgroup spam, Web search engine spam, spam in blogs, wiki spam, mobile phone messaging spam, Internet forum spam and junk fax transmissions.

Spamming is economically viable because advertisers have no operating costs beyond the management of their mailing lists, and it is difficult to hold senders accountable for their mass mailings. Because the barrier to entry is so low, spammers are numerous, and the volume of unsolicited mail has become very high. The costs, such as lost productivity and fraud, are borne by the public and by Internet service providers, which have been forced to add extra capacity to cope with the deluge. Spamming is widely reviled, and has been the subject of legislation in many jurisdictions.”

Spam affects just about everybody that uses the Internet in one form or another. And in spite of what Bill Gates forecast in 2004, when he said that "spam will soon be a thing of the past", it is getting worse by the day. While the European Union's Internal Market Commission estimated in 2001 that "junk e-mail" cost Internet users €10 billion per year worldwide, the California legislature found that spam cost United States organizations alone more than $13 billion in 2007, including lost productivity and the additional equipment, software, and manpower needed to combat the problem.

Where does all that Spam come from? Experts from SophosLabs (a developer and vendor of security software and hardware) have analyzed spam messages caught by companies involved in the Sophos global spam monitoring network and come up with a list of the top 12 countries that spread spam around the globe:

  • USA – 28.4%;
  • South Korea – 5.2%;
  • China (including Hong Kong) – 4.9%;
  • Russia – 4.4%;
  • Brazil – 3.7%;
  • France – 3.6%;
  • Germany – 3.4%;
  • Turkey – 3.%;
  • Poland – 2.7%;
  • Great Britain – 2.4%;
  • Romania – 2.3%;
  • Mexico – 1.9%;
  • Other countries – 33.9%

There are many types of electronic spam, including e-mail spam (unsolicited e-mail), mobile phone spam (unsolicited text messages), messaging spam ("SPIM", the use of instant messenger services for advertisement or even extortion), spam in blogs ("BLAM", posting random comments or promoting commercial services on blogs, wikis and guestbooks), forum spam (posting advertisements or useless posts on a forum), spamdexing (manipulating a search engine to create the illusion of popularity for web pages), newsgroup spam (advertisement and forgery on newsgroups), etc.

For the purpose of this post we shall focus on Newsgroups spam, the type of spam where the targets are Usenet newsgroups.
Usenet convention defines spamming as excessive multiple posting, that is, the repeated posting of a message (or substantially similar messages). During the early 1990s there was substantial controversy among Usenet system administrators (news admins) over the use of cancel messages to control spam. A cancel message is a directive to news servers to delete a posting, causing it to be inaccessible to those who might read it.
Some regarded this as a bad precedent, leaning towards censorship, while others considered it a proper use of the available tools to control the growing spam problem.
A culture of neutrality towards content precluded defining spam on the basis of advertisement or commercial solicitations. The word “spam” was usually taken to mean excessive multiple posting (EMP), and other neologisms were coined for other abuses — such as “velveeta” (from the processed cheese product) for excessive cross-posting.
A subset of spam was deemed cancellable spam, for which it is considered justified to issue third-party cancel messages.
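For readers who have never seen one, here is a rough sketch, assembled as a Python string, of what a Usenet cancel control article looks like. The addresses and message-IDs are invented, and real news servers apply their own policies on who may issue cancels; the essential part is the Control: cancel header naming the original article's Message-ID.

    # Hypothetical Usenet cancel control article (all names and IDs invented).
    cancel_article = "\r\n".join([
        "From: newsadmin@example.net",
        "Newsgroups: news.admin.net-abuse.usenet",
        "Subject: cmsg cancel <original-post@example.com>",
        "Control: cancel <original-post@example.com>",
        "Message-ID: <cancel.original-post@example.com>",
        "",
        "This article was cancelled as spam (excessive multiple posting).",
    ])
    print(cancel_article)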

The Breidbart Index (BI), developed by Seth Breidbart, provides a measure of severity of newsgroup spam by calculating the breadth of any multi-posting, cross-posting, or combination of the two. BI is defined as the sum of the square roots of how many newsgroups each article was posted to. If that number approaches 20, then the posts will probably be canceled by somebody.
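To make that definition concrete, here is a minimal Python sketch of the Breidbart Index for a batch of substantially identical articles. The numbers are invented; the only point is the sum-of-square-roots rule.

    # Breidbart Index: sum of the square roots of the number of newsgroups
    # that each copy of the article was posted to.
    from math import sqrt

    def breidbart_index(newsgroups_per_copy):
        return sum(sqrt(n) for n in newsgroups_per_copy)

    # Example: 9 copies of the same article, each cross-posted to 9 groups.
    print(breidbart_index([9] * 9))  # 9 * sqrt(9) = 27.0, well past the ~20 threshold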


The use of the BI and spam-detection software has led to Usenet being policed by anti-spam volunteers, who purge newsgroups of spam by sending cancels and filtering it out on the way into servers.

A related form of Newsgroups spam is forum spam. It usually consists of links, with the dual goals of increasing search engine visibility in highly competitive areas such as sexual invigoration, weight loss, pharmaceuticals, gambling, pornography, real estate or loans, and generating more traffic for these commercial websites.
Spam posts may contain anything from a single link, to dozens of links. Text content is minimal, usually innocuous and unrelated to the forum’s topic. Full banner advertisements have also been reported.
Alternatively, the spam links are posted in the user's signature, where they are more likely to be approved by forum administrators and moderators.
Spam can also be described as posts that have no relevance to the thread's topic, or no purpose in general (e.g. a user typing "CABBAGES!" or other such useless posts in an important news thread).

When Google bought the Usenet archives in 2001, it provided a web interface to text groups (thus turning them into some kind of web forums) through Google Groups, from which more than 800 million messages dating back to 1981 can be accessed.
There are some especially memorable articles and threads in these archives, such as Tim Berners-Lee’s announcement of what became the World Wide Web:
http://groups.google.com/groups?selm=6487%40cernvax.cern.ch
or Linus Torvalds’ post about his “pet project”:
http://groups.google.com/groups?selm=1991Oct5.054106.4647%40klaava.Helsinki.FI
You can view a pick of the most relevant posts here:
http://www.google.com/googlegroups/archive_announce_20.html

But Google Groups is responsible for a large proportion of the spam that floods Usenet nowadays. Google Groups isn't the only source, but it is the one that makes it easiest for spammers to carry out their irritating activities.
It's so easy to spam Usenet through Google Groups that some infamous spammers have been doing so for years. Perhaps the best known of all is the MI-5 Persecution spammer, who works his way across just about every newsgroup with rambling postings that often appear as clusters of 20 or more messages, all related to Mike Corley's perceived persecution by MI5, the British intelligence agency. This UK-based spammer readily admits in several of his postings that he suffers from mental illness. He annoys the rest of the users in such an exasperating way that some of them have even offered themselves to MI5 to personally finish off the job.

The solution, IMHO, is to implement the Breidbart Index in Google Groups. It would be an easy task for a company that excels at implementing all kinds of algorithms in its search engine, and I just can't understand what they are waiting for.

http://www.youtube.com/watch?v=XZ6N5m8FpVg

More Info:
Newsgroup Spam

February 23, 2008

And the Winner is … Google

Filed under: Internet,Software — Rafael Minuesa @ 12:55 AM
This is one of a series of articles I posted for the 1001webs’ blog.
You can view the original version at:
* http://1001webs.blogspot.com/2008/02/and-winner-is-google.html

On Feb 1 Microsoft offered to buy Yahoo! for $31 per share, a deal that was valued at $44.6 billion, in an attempt to acquire assets that would allow MSN to become a real competitor to Google’s supremacy on the Internet.

Microsoft justified its interest in acquiring Yahoo! explaining that:

“The industry will be well served by having more than one strong player, offering more value and real choice to advertisers, publishers and consumers.”

Yahoo! would certainly add some very valuable assets to Microsoft's Internet Division, such as an audience of more than 500 million people per month across sites devoted to news, finance and sports, Yahoo! Mail (the most widely used consumer e-mail service on the Internet), and the web banner ads used by corporate brand advertisers.
Although the price was a 62% premium above the closing price of Yahoo! common stock of $19.18 on January 31, 2008, it was only about a quarter of what Yahoo was worth in 2000, and the company’s board finally rejected the offer two weeks ago because they felt they were being undervalued at $31 a share. Or at least that’s what they said.

At a conference at the Interactive Advertising Bureau on Monday, Yahoo chief executive Jerry Yang had the chance to provide their own version of the story.
Yang broke the ice with a “Before you start, let me guess what your first question is. Does it start with an M and end with a T?”
However he did not elaborate much further:

“Everyone has read what we are doing, so there is not much to report. We’re taking the proposal that Microsoft has given to us seriously. It’s been a galvanizing event for everyone at Yahoo. Our board is spending a lot of time thinking about all the alternatives. It’s something that we need to think through carefully.”

But Microsoft is not to be put off so easily and has recently hired a proxy firm to try to oust Yahoo's board.
Last Friday, Microsoft released an internal memo from Kevin Johnson, President of Microsoft's Platforms & Services Division, in which he clearly expects the deal to go through:

“While Yahoo! has issued a press release rejecting our proposal, we continue to believe we have a full and fair proposal on the table. We look forward to a constructive dialogue with Yahoo!’s Board, management, shareholders, and employees on the value of this combination and its strategic and financial merits.
If and when Yahoo! agrees to proceed with the proposed transaction, we will go through the process to receive regulatory approval, and expect that this transaction will close in the 2nd half of calendar year 2008. Until this proposal is accepted and receives regulatory approval, we must continue to operate our business as we do today and compete in this rapidly changing online services and advertising marketplace.
It is important to note that once Yahoo! and Microsoft agree on a transaction, we can begin the integration planning process in parallel with the regulatory review. We can create the integration plan but we cannot begin to implement it until we have formal regulatory approval and have closed the transaction. Because the integration process will be critical to our success as a combined company, we are taking this very seriously. “

On the other hand, Google is not standing idle, among other obvious reasons because it owes Microsoft one from when the latter interfered with Google's purchase of DoubleClick last year.
Google’s chief legal officer, David Drummond, wrote in the The Official Google Blog:

“Microsoft’s hostile bid for Yahoo! raises troubling questions. This is about more than simply a financial transaction, one company taking over another. It’s about preserving the underlying principles of the Internet: openness and innovation.
Could Microsoft now attempt to exert the same sort of inappropriate and illegal influence over the Internet that it did with the PC? While the Internet rewards competitive innovation, Microsoft has frequently sought to establish proprietary monopolies — and then leverage its dominance into new, adjacent markets.
Could the acquisition of Yahoo! allow Microsoft — despite its legacy of serious legal and regulatory offenses — to extend unfair practices from browsers and operating systems to the Internet? In addition, Microsoft plus Yahoo! equals an overwhelming share of instant messaging and web email accounts. And between them, the two companies operate the two most heavily trafficked portals on the Internet. Could a combination of the two take advantage of a PC software monopoly to unfairly limit the ability of consumers to freely access competitors’ email, IM, and web-based services? Policymakers around the world need to ask these questions — and consumers deserve satisfying answers.”

In any case, it remains unclear how the situation will develop and where it will lead to. Google probably can’t stop the deal, but it can delay it considerably, and the delay will certainly act in Google’s interests.

“In the interim, we foresee disarray at Microsoft and Yahoo. We believe the deal has distracted the engineers and should benefit Google over the next 18 to 24 months, providing it with a major opportunity to advance in branded advertising.”

as foreseen by analyst Marianne Wolk of Susquehanna Financial Group.
According to Wolk,

“If instead Microsoft is forced to acquire Yahoo via a proxy fight, it would mean a more protracted closing process, then the transaction will not close until early 2009, when it would begin the complex integration of Yahoo’s 14,300 employees, multiple advertising platforms, technology infrastructures, content sites, culture, etc.
Google may not face a more competitive Microsoft-Yahoo until 2010.”

By then, she said, Google could “extend its lead in search monetization” and grab a “major lead in emerging growth areas, such as video advertising, mobile and local advertising.”
Wolk also pointed out that Google would likely find it easier to hire top engineers from Microsoft and Yahoo “as they fear for their jobs in a consolidation.”

My personal bet is that even if the deal goes ahead and Microsoft pours in huge loads of money and resources, it won’t work.
And it won’t because Microsoft will try to apply the same tactics that it has applied to gain dominance over the PC market, i.e. trying to force every user to use their software.

The Internet is totally different. You can't force people to use your stuff. You have to convince them to use it. And in order to do that you have to provide a superior product. Neither MSN nor Yahoo! comes even close to what Google delivers in terms of search results and applications designed for the web.

Much needs to be improved in both MSN and Yahoo! in order to be able to compete with Google.
In the case of Yahoo! it is a technical issue. I recently switched from Google to Yahoo's search engine just to see how accurate the results were, and I had to switch back because the difference with Google's is abysmal, both in accuracy and in quality of results.
I kind of feel sorry for Yahoo! because I've been a long-time user of their services and I can see it going down the gutter, no matter what the final result of the acquisition will be. They have some top-quality services, such as Yahoo! Mail or Yahoo! Finance, and in many countries in Asia Yahoo! is a real competitor to Google, but they need to innovate so much that I doubt they will ever reverse the downward trend.
They are moving in the right direction now with Web 2.0, but I’m afraid that it might be too late.
They have recently announced that they are opening up their Search to third parties so that everybody can collaborate in building their search results:

“This open search platform enables 3rd parties to build and present the next generation of search results. There are a number of layers and capabilities that we have built into the platform, but our intent is clear — present users with richer, more useful search results so that they can complete their tasks more efficiently and get from “to do” to “done.”

Because the platform is open it gives all Web site owners — big or small — an opportunity to present more useful information on the Yahoo! Search page as compared to what is presented on other search engines. Site owners will be able to provide all types of additional information about their site directly to Yahoo! Search. So instead of a simple title, abstract and URL, for the first time users will see rich results that incorporate the massive amount of data buried in websites — ratings and reviews, images, deep links, and all kinds of other useful data — directly on the Yahoo! Search results page.

We believe that combining a free, open platform with structured, semantic content from across the Web is a clear win for all parties involved — site owners, Yahoo! and most importantly, our users.”

Let’s wait and see.
You can see the details at the following links:

And MSN simply doesn’t get it. They’re trying to apply the same centralized tactics that made them so successful in the PC market, but it is evident that they won’t work on the Internet.

By combining both companies you’ll only get a much more cumbersome monster and the Internet is about just the opposite, decentralization and agility.

Many people attribute the initial success of Google to the quality of the search results. That is true today but it wasn’t so in the beginning, when they started to draw users from other search engines. The main reason why most of the people made Google their home page is because it was simple. No advertising or fancy graphics, just a search box and a menu where the rest of services are listed as text links on a page that loads very fast.

By trying to push users into using your services and bloating your front page with advertising, you are actually driving them away.
To be fair Yahoo! does have a version of their home page that is designed that way:
http://search.yahoo.com/
Had they made it their front page many years ago, they'd still be in the game.

Screenshot of Yahoo front page
Another feature that convinced me to switch to Google many years ago was the opportunity it gives you to try your search terms on different search engines with just one click, via the:
“Try your search on Yahoo, Ask, AllTheWeb, Live, Lycos, Technorati, Feedster, Wikipedia, Bloglines, Altavista, A9″
that used to appear on every search results page.
The same functionality can be achieved nowadays by installing CustomizeGoogle, a highly recommended Firefox extension that enhances Google search results by adding extra information (like the above-mentioned links to Yahoo, Ask.com, MSN, etc.) with the added plus of enabling you to remove unwanted information (like ads and spam).

CustomizeGoogle 2 min introduction movie

By creating an extremely simple entrance to an environment open to everybody, including their most direct competitors, they have succeeded in being the most popular home page on the Internet.
The KISS approach (“Keep It Simple, Stupid”) is what they used.
Keep It Simple. And Open. Stupid.


  • Occam’s razor: “entities must not be multiplied beyond necessity”
  • Albert Einstein: “everything should be made as simple as possible, but no simpler”
  • Leonardo Da Vinci: “Simplicity is the ultimate sophistication”
  • Antoine de Saint Exupéry: “It seems that perfection is reached not when there is nothing left to add, but when there is nothing left to take away”
