Writing Blog

May 28, 2016

The Antikythera Laptop

Filed under: Computers,Gadgets,History,Internet,Science,Software,Technology — Rafael Minuesa @ 1:43 PM
Greek salesgirl showing the latest model of the Antikythera Laptop to Sosipatra (notice her smile, anticipating the sale).

The Antikythera Laptop was an ancient computer powered by an analog mechanism that consisted of a box with dials on the outside and a very complex assembly of bronze gear wheels mounted inside.

A prototype for an Antikythera smart wristwatch that didn’t make it to the manufacturing line.

The computer didn’t do much apart from accurately computing the time it takes for planetary bodies to complete their orbits, but that was quite an unprecedented feat at the time, one that could not be replicated until the development of mechanical astronomical clocks in the fourteenth century. Besides, Ancient Greeks favored Theater over Video any day of the week, so there was no need for a graphics card either; they were happy just watching the dials go round and round, which provided them with an infinite source of inspiration to come up with all kinds of theorems.

In its latest models the Antikythera Laptop featured just 2 USB-G ports, a decision that was highly criticized by users, who could not understand why they had to part with a substantial amount of extra Drachmas to buy an adapter just to connect their machines to the other standard devices of the time.

The laptops were manufactured from very sturdy materials, which made them extremely hard and durable. They were also waterproof, as long as the machinery was not underwater for more than a year or so. They came with a lifetime warranty that included accidental damage protection. They were definitely different times, with a different code of ethics.

This one had its warranty voided after more than 2,000 years under sea water.

March 7, 2010

Flat-file based PHP CMSs

This is one of a series of articles I posted at my Web Development Blog.
You can view the original version at:

After several unsuccessful attempts to access the MySQL databases that provide the data for my main client’s web pages, I have decided to look into options that do not require a separate MySQL database: one that returns errors far too often, with a separate password that nobody knows who holds or how to retrieve, and a separate set of skills outside the realm of most web designers.

To my surprise, since the last time I checked on flat-file based PHP Content Management Systems, quite a few good options built around this approach have emerged, which I’m listing below:

CMSimple is simple, small and fast. It writes page data directly to an HTML file on the web server. Configuration files and language files are also saved in .txt format.
CMSimple also offers a wide variety of plug-ins, including many made by third parties.

DokuWiki is wiki software that works on plain text files and thus needs no database. Its syntax is similar to the one used by MediaWiki and its data files remain readable outside the wiki.
It has a generic plugin interface that simplifies the development and maintenance of add-ons.
DokuWiki is included in the Linux distributions of Debian and Gentoo Linux.

Pluck allows for easy web page creation for users with little or no programming experience, and does not use a database to store its data.
Pluck also includes a flexible module system, which allows developers to integrate custom functionality into the system. Pluck includes 3 default modules: Albums, Blog and Contact form.

razorCMS is designed to be as small as possible (around 300 KB including a WYSIWYG editor), just enough to be useful on a base install. Extra functionality can then be added as needed via the Blade Pack management system.

Skeleton CMS isn’t really a CMS as much as a very simple framework for rapid prototyping. If nothing more, it’s a well-structured site model to start building a website with. There is no database and no fancy admin area, but if you’re building a site for a client and you don’t need the power of WordPress, this might be exactly what you’re after.

As pointed out, the main advantage of using flat text files as the data store for a PHP-driven CMS is that you no longer depend on an external piece of software, without which the system simply will not run, to edit and maintain part of the CMS.

But you must keep in mind that although flat files are an acceptable solution for small databases, they become sluggish as the database grows, because access is sequential.
Another disadvantage is their inability to support transactions, and probably the biggest concern is security: a database protects the data from outside intrusion better than a flat file because it provides a security layer of its own.
Having said that, NOTHING hosted on a server connected to the Internet is secure, and if there are hackers equipped with enough resources who are intent on breaking into your system, they eventually will.
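The sequential-access point is easy to sketch (Python here for brevity; the flat-file CMSs above do the equivalent in PHP, and the page data is made up): every lookup reads the file from the top, so query time grows linearly with the number of rows.

```python
import csv
import io

# A flat-file "table": no index, so every lookup scans from the top
# and query time grows linearly with the number of rows.
PAGES = """slug,title
home,Home
about,About Us
contact,Contact
"""

def find_page(slug):
    # Sequential access: read rows one by one until a match appears.
    for row in csv.DictReader(io.StringIO(PAGES)):
        if row["slug"] == slug:
            return row["title"]
    return None

print(find_page("about"))  # -> About Us
```

Three rows don’t matter, of course; the pain starts when the same scan runs over tens of thousands of rows on every page view.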

An intermediate solution would be the use of SQLite, an open-source embedded relational database management system contained in a small C library that, when linked in, becomes an integral part of the program. The entire database, including definitions, tables, indices, and the data itself, is stored as a single file on the host machine.
SQLite is embedded into a growing number of popular applications, such as Mozilla Firefox or Google’s Android OS.
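To give an idea of how little ceremony an embedded database needs, here is a minimal sketch using Python’s built-in sqlite3 module (PHP’s PDO SQLite driver is the analogous route; the in-memory database and the table are just for illustration):

```python
import sqlite3

# No server, no credentials: the whole database is one ordinary file
# on the host (":memory:" here so the sketch leaves nothing behind).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (slug TEXT PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO pages VALUES ('home', 'Home')")

row = conn.execute(
    "SELECT title FROM pages WHERE slug = 'home'"
).fetchone()
print(row[0])  # -> Home
```

Unlike a plain flat file, the PRIMARY KEY gives indexed lookups, and SQLite supports real transactions.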

Sounds good to me.
Below are some of the CMSs that are able to use SQLite to store their data:

eoCMS (everyone’s Content Management System) uses MySQL or SQLite and PHP to deliver content in a user-friendly interface.
It features a forum, moderation tools, custom 404 pages, personal messaging, plug-ins, RSS output, ratings, etc.

Frog CMS is an extendable open source content management system designed to use PHP5 along with a MySQL database backend, although it also has support for SQLite. It is actually a port of Radiant, the Ruby on Rails CMS, although Frog has begun to take its own development direction.

FUDforum supports all the standard features you have come to expect from a modern forum, with a robust MySQL, PostgreSQL or SQLite back-end that allows for a virtually unlimited number of messages (up to 2^32).

Habari is a modular, object-oriented blogging platform that supports multiple users, multiple sites, plugins, importers for Serendipity and WordPress, etc.
Habari prides itself on being standards compliant and more secure than other blogging platforms by making use of PDO and enabling prepared statements for all interactions with the database.
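The prepared-statement idea is worth a quick sketch (shown with Python’s sqlite3 module for brevity; PDO’s prepare/execute in PHP works the same way, and the table and input are made-up examples). The driver treats user input strictly as data, never as SQL, so a classic injection attempt matches nothing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Input that would break a naively concatenated query string:
evil = "alice' OR '1'='1"

# Parameterized (prepared) statement: the placeholder is bound as a
# plain value, so the injection text is just an unmatched name.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (evil,)
).fetchall()
print(rows)  # -> []
```

Had the query been built by string concatenation, the `OR '1'='1'` clause would have returned every row in the table.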

Jaws is a CMS and modular framework that focuses on user and developer “friendliness” by providing a simple and powerful framework to hack your own modules.

Lanius CMS comes with two flat-file database choices (Gladius DB and SQLite) that work out of the box with both PHP4 and PHP5.

phpSQLiteCMS is based on PHP and SQLite and runs “out of the box”. phpSQLiteCMS uses PDO as database interface, which makes it also possible to run with MySQL.

Serendipity is a PHP-based blog and web content management system that supports PostgreSQL, MySQL, and SQLite database backends and the Smarty template engine, and whose plugin architecture is kept up to date by automatically checking the online plugin repository.

The only drawback to this approach is that many of the systems above use the PHP Data Objects (PDO) extension, a lightweight, consistent interface for accessing databases in PHP. Although PDO greatly reduces the system’s vulnerability to SQL injection attacks, it requires the OO features in the core of PHP 5, and so will not run with earlier versions of PHP.

February 3, 2010

What is the Google Affiliate Network?

Filed under: Blogs,Internet,Software — Rafael Minuesa @ 1:46 AM
This is one of a series of articles I posted for the 1000 Webs’ Blog.
You can view the original version at:

When Google acquired DoubleClick in March 2008, it also acquired DoubleClick’s affiliate ad network, Performics, the first full-service affiliate network, founded in 1998 and itself acquired by DoubleClick in 2004. Google has now further developed and rebranded Performics as the Google Affiliate Network.

The Google Affiliate Network works like any ordinary affiliate ad network, by enabling advertising relationships between publishers and advertisers, whereby publishers get paid for every successful sale transaction that their site brings to advertisers.

As a Google Affiliate Network publisher, you can add an advertiser’s banner or text link on your site. When a transaction, such as a sign-up or purchase, occurs through one of these affiliate links, Google Affiliate Network will track the sale and pay you a commission or bounty.

Someone clicks the ad on your site…
…buys the advertised product…
…and you receive a commission on the sale

The Google Affiliate Network has been integrated into Google AdSense. All Google Affiliate Network publishers must accept the AdSense terms, and all earnings are distributed through AdSense.
But being a Google AdSense publisher does not automatically make you a publisher in the Google Affiliate Network; you must complete a separate application.
In order to join the program, you need to apply to their network in two steps:
Step 1: Link to or apply for a Google AdSense account.
Step 2: Tell Google about your site and promotional methods.

Each application is reviewed by the Google Affiliate Network quality team, which checks requirements such as having a site that attracts a desirable audience for the products offered, being able to test advertising offers and nurture the most productive relationships, being an expert in driving and converting visitor traffic, and adhering strictly to Google Affiliate Network quality standards and advertiser policies.
In addition Google states that,

we’ve found that Google Affiliate Network tends to yield greater benefits to publishers who create niche content, manage loyalty and rewards programs, aggregate coupons and promotions, or manage social media.

Payments are processed on a cost-per-action (CPA) basis, typically as a revenue share or fixed bounty for a lead or other action. Google Affiliate Network earnings will be posted to your Google AdSense account approximately 30 days after the end of every month.

More Info:

January 17, 2010

Supplemental results: An experiment

Filed under: Blogs,Internet — Rafael Minuesa @ 1:34 AM
This is one of a series of articles I posted for my SEO Blog.
You can view the original version at:
* http://rafael-minuesa-seo.blogspot.com/2010/01/supplemental-results.html

What are supplemental results?
Supplemental results usually only show up in the search index after the normal results. They are a way for Google to extend their search database while also preventing questionable pages from getting massive exposure.

How does a page go supplemental?
From my experience, pages have typically gone supplemental when they became isolated doorway-type pages (lost their inbound link popularity) or when they were deemed duplicate content. For example, if Google indexes both the www version and the non-www version of your site, then most of the pages of one of those versions will likely end up in the supplemental results.
If you put a ton of DMOZ content and Wikipedia content on your site, that sort of stuff may go supplemental as well. If too much of your site is considered useless or duplicate junk, Google may start trusting other portions of your site less.

Negative side effects of supplemental:
Since supplemental results are not trusted much and rarely rank, they are not crawled often either. And since they are generally not trusted and rarely crawled, odds are pretty good that links from supplemental pages do not pull much – if any – weight in Google.

How to get out of Google Supplemental results?
If you were recently thrown into them, the problem may be Google. You may just want to wait it out, but also check that you are not making errors like serving both www and non-www versions, content management errors delivering the same content at multiple URLs (doing things like rotating product URLs), or too much duplicate content for other reasons (you may also want to check that nobody outside your domain shows up in Google when you search for site:mysite.com, and you can also look for duplicate content with Copyscape).
If you have pages that have been orphaned, or if your site’s authority has gone down, Google may not be crawling as deep through your site. If you have a section that needs more link popularity to get indexed, don’t be afraid to point link popularity at that section instead of trying to point more at the home page. If you add thousands and thousands of pages, you may need more link popularity to get them all indexed.
After you solve the problem it still may take a while for many of the supplementals to go away. As long as the number of supplementals is not growing, your content is unique, and Google is ranking your site well across a broad set of keywords then supplementals are probably nothing big to worry about.
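The www vs non-www duplication mentioned above is usually fixed with a permanent redirect, so that only one version of each URL ever gets indexed. A minimal sketch for an Apache .htaccess, assuming mod_rewrite is enabled and with example.com standing in for your own domain:

```apache
# Canonicalize non-www to www with a 301 (permanent) redirect,
# so Google only ever sees one version of each URL.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The 301 status matters: a temporary (302) redirect would not consolidate the two versions in the index.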

All of the text above has been copy&pasted from:
so, if those points are correct this post should go Supplemental in no time, right?
Wrong. Wait and see …

Matt Cutts, a well known Google engineer, asked for feedback on the widespread supplemental indexing issue in this thread. As noted by Barry, in comment 195 Matt said:

Based on the specifics everyone has sent (thank you, by the way), I’m pretty sure what the issue is. I’ll check with the crawl/indexing team to be sure though. Folks don’t need to send any more emails unless they really want to. It may take a week or so to sort this out and be sure, but I do expect these pages to come back to the main index.

In the video below Matt Cutts answers questions about Supplemental Results:


In the video, the question was whether one should worry about results estimates for:

  1. supplemental results
  2. using the site: operator
  3. with negated terms and
  4. special syntax such as intitle: ?

And the answer was: No, that’s pretty far off the beaten path.

Getting Out of Google Supplemental Results

Getting out of the Google Supplemental Results may be possible by improving your website navigation system. To get more pages fully indexed by Google, the prominence of important website pages can often be boosted by linking to them from the pages within your domain that have the highest Page Rank, such as your homepage. The reason for this is that Page Rank is passed from one page to another by links, and the most common cause of Supplemental results is lack of Page Rank.

Start by determining your most important web pages which have been made supplemental – for example those promoting lucrative products and services, and then improve your website internal linking by adding links to these pages from more prominent fully Google indexed pages of your site including your homepage. At the same time, ensure that your website navigation system is search engine friendly using a website link analyzer.


By improving website navigation and getting more inbound links from other websites, you may be able to get more website pages fully indexed by Google, solving the problem of partial Google indexing and Supplemental pages.

Where the Google Page Rank of your website homepage is PR4 or PR3, improving your website navigation system and in particular the prominence of internal pages may help to get out of supplemental results. This can be done by including static hyperlinks from the homepage to your ‘problem supplemental result pages’.

However, where your homepage is PR3 or lower and you have a large website, internal navigation improvements alone may still not be enough when it comes to getting out of the Google Supplemental Results. At PR3 or lower, your homepage Page Rank is probably too low to pass on enough Page Rank to your internal pages to completely get out of Supplemental Results.

To fully solve the problem of partial Google indexing, get more one way links to your site from quality web directories and sites of a similar theme and wait patiently to become fully Google indexed. In addition, getting more quality one way links pointing to internal pages of your website (rather than just targeting your homepage) is another powerful way of boosting the ranking of those pages against specific keyword terms, and it will also assist in getting them out of supplemental results. This is often referred to as “deep linking”.

Google states that they have now removed the label “Supplemental Result” from their search result pages:

“Supplemental Results once enabled users to find results for queries beyond our main index. Because they were “supplemental,” however, these URLs were not crawled and updated as frequently as URLs in our main index.

Google’s technology has improved over time, and now we’re able to crawl and index sites with greater frequency. With our entire web index fresher and more up to date, the “Supplemental Results” label outlived its usefulness.”

Right, that is correct. It is now called “Omitted Results”. Same thing really, and the same side-effects, at least from this non-Google point of view.

More Info

January 7, 2010

Thematic frameworks in WordPress

Filed under: Blogs,Internet,Programming — Rafael Minuesa @ 12:53 AM
This is one of a series of articles I posted for 1001 Templates.
You can view the original version at:
* http://1001templates.blogspot.com/2010/01/thematic-wordpress-theme-framework.html

A theme framework in WordPress consists of a highly customized theme foundation designed to create a flexible platform that can serve as a parent theme for building child themes.

The use of WordPress theme frameworks does ease theme development considerably by reducing the volume of work that is needed in creating a backbone for a WordPress theme, something that is traditionally done by using PHP and WordPress Template Tags.

Another advantage of theme frameworks is that they make theme development more accessible, reducing the need for programming knowledge.

Below you can see a list of Available Frameworks in WordPress:

After some research we have decided to focus on the Thematic Framework, a free, open-source, highly extensible, search-engine-optimized framework that comes out of the box with some very convenient features, such as 13 widget-ready areas, grid-based layout samples, styling for popular plugins, etc. Not to mention the support of a whole community behind it, which makes it perfect for both beginner bloggers and WordPress development professionals.

Additional features are:

  • Includes a sample WordPress Child Theme for rapid development
  • A wiki-editable guide to Thematic Customization
  • Ready for WordPress plugins like Subscribe to Comments, WP-PageNavi, and Comment-license
  • Fully compatible with All-In-One SEO and Platinum SEO plugins
  • Multiple, easy-to-implement, bulletproof layout options for 2- or 3-column designs
  • Modular CSS with pre-packaged resets and basic typography
  • Dynamic post and body classes make it a hyper-canvas for CSS artists
  • Editable footer text—remove the theme credit without harming the theme
  • Options for multi-author blogs

March 3, 2008

SPAM on Usenet

This is one of a series of articles I posted for magiKomputer.
You can view the original version at:
* http://magikomputer.blogspot.com/2008/03/spam-on-usenet.html

From Wikipedia, the free encyclopedia:

“Spamming is the abuse of electronic messaging systems to indiscriminately send unsolicited bulk messages. While the most widely recognized form of spam is e-mail spam, the term is applied to similar abuses in other media: instant messaging spam, Usenet newsgroup spam, Web search engine spam, spam in blogs, wiki spam, mobile phone messaging spam, Internet forum spam and junk fax transmissions.

Spamming is economically viable because advertisers have no operating costs beyond the management of their mailing lists, and it is difficult to hold senders accountable for their mass mailings. Because the barrier to entry is so low, spammers are numerous, and the volume of unsolicited mail has become very high. The costs, such as lost productivity and fraud, are borne by the public and by Internet service providers, which have been forced to add extra capacity to cope with the deluge. Spamming is widely reviled, and has been the subject of legislation in many jurisdictions.”

Spam affects just about everybody who uses the Internet in one form or another. And in spite of what Bill Gates forecast in 2004, when he said that “spam will soon be a thing of the past”, it is getting worse by the day. While the European Union’s Internal Market Commission estimated in 2001 that “junk e-mail” cost Internet users €10 billion per year worldwide, the California legislature found that spam cost United States organizations alone more than $13 billion in 2007, including lost productivity and the additional equipment, software, and manpower needed to combat the problem.

Where does all that spam come from? Experts from SophosLabs (a developer and vendor of security software and hardware) analyzed spam messages caught by companies involved in the Sophos global spam monitoring network and came up with a list of the top 12 countries that spread spam around the globe:

  • USA – 28.4%;
  • South Korea – 5.2%;
  • China (including Hong Kong) – 4.9%;
  • Russia – 4.4%;
  • Brazil – 3.7%;
  • France – 3.6%;
  • Germany – 3.4%;
  • Turkey – 3.%;
  • Poland – 2.7%;
  • Great Britain – 2.4%;
  • Romania – 2.3%;
  • Mexico – 1.9%;
  • Other countries – 33.9%

There are many types of electronic spam, including:

  • E-mail spam (unsolicited e-mail)
  • Mobile phone spam (unsolicited text messages)
  • Messaging spam (“SPIM”), the use of instant messenger services for advertisement or even extortion
  • Spam in blogs (“BLAM”), posting random comments or promoting commercial services on blogs, wikis and guestbooks
  • Forum spam (posting advertisements or useless posts on a forum)
  • Spamdexing, manipulating a search engine to create the illusion of popularity for web pages
  • Newsgroup spam, advertisement and forgery on newsgroups

For the purpose of this post we shall focus on Newsgroups spam, the type of spam where the targets are Usenet newsgroups.
Usenet convention defines spamming as excessive multiple posting, that is, the repeated posting of a message (or substantially similar messages). During the early 1990s there was substantial controversy among Usenet system administrators (news admins) over the use of cancel messages to control spam. A cancel message is a directive to news servers to delete a posting, causing it to be inaccessible to those who might read it.
Some regarded this as a bad precedent, leaning towards censorship, while others considered it a proper use of the available tools to control the growing spam problem.
A culture of neutrality towards content precluded defining spam on the basis of advertisement or commercial solicitations. The word “spam” was usually taken to mean excessive multiple posting (EMP), and other neologisms were coined for other abuses — such as “velveeta” (from the processed cheese product) for excessive cross-posting.
A subset of spam was deemed cancellable spam, for which it is considered justified to issue third-party cancel messages.

The Breidbart Index (BI), developed by Seth Breidbart, provides a measure of severity of newsgroup spam by calculating the breadth of any multi-posting, cross-posting, or combination of the two. BI is defined as the sum of the square roots of how many newsgroups each article was posted to. If that number approaches 20, then the posts will probably be canceled by somebody.
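The calculation itself is simple enough to sketch (Python; the group counts below are made-up examples, not real incidents):

```python
from math import sqrt

def breidbart_index(group_counts):
    """Breidbart Index: the sum, over all copies of an article,
    of the square root of the number of newsgroups each copy
    was posted to (a cross-post counts once, with its full total)."""
    return sum(sqrt(n) for n in group_counts)

# One article cross-posted to 9 groups: BI = sqrt(9) = 3.0
print(breidbart_index([9]))           # -> 3.0
# The same message posted separately four times, 4 groups each:
# BI = 4 * sqrt(4) = 8.0
print(breidbart_index([4, 4, 4, 4]))  # -> 8.0
```

Note how the square root penalizes separate postings more than a single cross-post to the same 16 groups (which would score only sqrt(16) = 4.0), reflecting that multi-posting wastes more resources.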

The use of the BI and spam-detection software has led to Usenet being policed by anti-spam volunteers, who purge newsgroups of spam by sending cancels and filtering it out on the way into servers.

A related form of Newsgroups spam is forum spam. It usually consists of links, with the dual goals of increasing search engine visibility in highly competitive areas such as sexual invigoration, weight loss, pharmaceuticals, gambling, pornography, real estate or loans, and generating more traffic for these commercial websites.
Spam posts may contain anything from a single link, to dozens of links. Text content is minimal, usually innocuous and unrelated to the forum’s topic. Full banner advertisements have also been reported.
Alternatively, the spam links are posted in the user’s signature, where they are more likely to be approved by forum administrators and moderators.
Spam can also describe posts that have no relevance to the thread’s topic, or no purpose in general (e.g. a user typing “CABBAGES!” or other such useless posts in an important news thread).

When Google bought the Usenet archives in 2001, it provided a web interface to text groups (thus turning them into some kind of web forums) through Google Groups, from which more than 800 million messages dating back to 1981 can be accessed.
There are some especially memorable articles and threads in these archives, such as Tim Berners-Lee’s announcement of what became the World Wide Web:
or Linus Torvalds’ post about his “pet project”:
You can view a pick of the most relevant posts here:

But Google Groups is responsible for the highest proportion of the spam that floods Usenet nowadays. It isn’t the only source, but it is the one that makes it easiest for spammers to carry out their irritating activities.
It’s so easy to spam Usenet through Google Groups that some infamous spammers have been doing so for years. Perhaps the best known of all is the MI5 Persecution spammer, who makes his way across just about every newsgroup with rambling postings that often appear as clusters of 20 or more messages, all related to Mike Corley’s perceived persecution by MI5, the British intelligence agency. This UK-based spammer readily admits in several of his postings that he suffers from mental illness. He annoys the rest of the users in such an exasperating way that some of them have even offered themselves to MI5 to personally finish off the job.

The solution, IMHO, is to implement the Breidbart Index in Google Groups. It would be an easy task for a company that excels at implementing all kinds of algorithms in its search engine, so I just can’t understand what they are waiting for.


More Info:
Newsgroup Spam

February 23, 2008

And the Winner is … Google

Filed under: Internet,Software — Rafael Minuesa @ 12:55 AM
This is one of a series of articles I posted for the 1001webs’ blog.
You can view the original version at:
* http://1001webs.blogspot.com/2008/02/and-winner-is-google.html

On Feb 1 Microsoft offered to buy Yahoo! for $31 per share, a deal that was valued at $44.6 billion, in an attempt to acquire assets that would allow MSN to become a real competitor to Google’s supremacy on the Internet.

Microsoft justified its interest in acquiring Yahoo! explaining that:

“The industry will be well served by having more than one strong player, offering more value and real choice to advertisers, publishers and consumers.”

Yahoo! would certainly add some very valuable assets to Microsoft’s Internet Division, such as an audience of more than 500 million people per month across sites devoted to news, finance and sports, Yahoo! Mail (the most widely used consumer e-mail service on the Internet), and the web banner ads used by corporate brand advertisers.
Although the price was a 62% premium over the closing price of Yahoo! common stock ($19.18 on January 31, 2008), it was only about a quarter of what Yahoo! was worth in 2000, and the company’s board finally rejected the offer two weeks ago because they felt the company was being undervalued at $31 a share. Or at least that’s what they said.

At a conference at the Interactive Advertising Bureau on Monday, Yahoo chief executive Jerry Yang had the chance to provide their own version of the story.
Yang broke the ice with a “Before you start, let me guess what your first question is. Does it start with an M and end with a T?”
However he did not elaborate much further:

“Everyone has read what we are doing, so there is not much to report. We’re taking the proposal that Microsoft has given to us seriously. It’s been a galvanizing event for everyone at Yahoo. Our board is spending a lot of time thinking about all the alternatives. It’s something that we need to think through carefully.”

But Microsoft is not to be put off so easily and has recently hired a proxy firm to try to oust Yahoo!’s board.
Last Friday, Microsoft released an internal memo from Kevin Johnson, President of Microsoft’s Platforms & Services Division, where he actually sees the deal going through:

“While Yahoo! has issued a press release rejecting our proposal, we continue to believe we have a full and fair proposal on the table. We look forward to a constructive dialogue with Yahoo!’s Board, management, shareholders, and employees on the value of this combination and its strategic and financial merits.
If and when Yahoo! agrees to proceed with the proposed transaction, we will go through the process to receive regulatory approval, and expect that this transaction will close in the 2nd half of calendar year 2008. Until this proposal is accepted and receives regulatory approval, we must continue to operate our business as we do today and compete in this rapidly changing online services and advertising marketplace.
It is important to note that once Yahoo! and Microsoft agree on a transaction, we can begin the integration planning process in parallel with the regulatory review. We can create the integration plan but we cannot begin to implement it until we have formal regulatory approval and have closed the transaction. Because the integration process will be critical to our success as a combined company, we are taking this very seriously. “

On the other hand, Google is not standing idle, among other obvious reasons because it owes Microsoft one from when the latter interfered with Google’s purchase of DoubleClick last year.
Google’s chief legal officer, David Drummond, wrote in the The Official Google Blog:

“Microsoft’s hostile bid for Yahoo! raises troubling questions. This is about more than simply a financial transaction, one company taking over another. It’s about preserving the underlying principles of the Internet: openness and innovation.
Could Microsoft now attempt to exert the same sort of inappropriate and illegal influence over the Internet that it did with the PC? While the Internet rewards competitive innovation, Microsoft has frequently sought to establish proprietary monopolies — and then leverage its dominance into new, adjacent markets.
Could the acquisition of Yahoo! allow Microsoft — despite its legacy of serious legal and regulatory offenses — to extend unfair practices from browsers and operating systems to the Internet? In addition, Microsoft plus Yahoo! equals an overwhelming share of instant messaging and web email accounts. And between them, the two companies operate the two most heavily trafficked portals on the Internet. Could a combination of the two take advantage of a PC software monopoly to unfairly limit the ability of consumers to freely access competitors’ email, IM, and web-based services? Policymakers around the world need to ask these questions — and consumers deserve satisfying answers.”

In any case, it remains unclear how the situation will develop and where it will lead. Google probably can’t stop the deal, but it can delay it considerably, and the delay will certainly act in Google’s interest.

“In the interim, we foresee disarray at Microsoft and Yahoo. We believe the deal has distracted the engineers and should benefit Google over the next 18 to 24 months, providing it with a major opportunity to advance in branded advertising.”

as foreseen by analyst Marianne Wolk of Susquehanna Financial Group.
According to Wolk,

“If instead Microsoft is forced to acquire Yahoo via a proxy fight, it would mean a more protracted closing process; the transaction would not close until early 2009, when it would begin the complex integration of Yahoo’s 14,300 employees, multiple advertising platforms, technology infrastructures, content sites, culture, etc.
Google may not face a more competitive Microsoft-Yahoo until 2010.”

By then, she said, Google could “extend its lead in search monetization” and grab a “major lead in emerging growth areas, such as video advertising, mobile and local advertising.”
Wolk also pointed out that Google would likely find it easier to hire top engineers from Microsoft and Yahoo “as they fear for their jobs in a consolidation.”

My personal bet is that even if the deal goes ahead and Microsoft pours in huge loads of money and resources, it won’t work.
And it won’t because Microsoft will try to apply the same tactics it used to gain dominance over the PC market, i.e. trying to force every user to use its software.

The Internet is totally different. You can’t force people to use your stuff. You have to convince them to use it. And in order to do that you have to provide a superior product. Neither MSN nor Yahoo! comes even close to what Google delivers in terms of search results and applications designed for the web.

Much needs to be improved in both MSN and Yahoo! for either to be able to compete with Google.
In Yahoo!’s case it is a technical issue. I recently switched from Google to Yahoo!’s search engine just to see how accurate the results were, and I had to switch back because the gap with Google is abysmal, in both accuracy and quality of results.
I kind of feel sorry for Yahoo! because I’ve been a long-time user of their services and I can see it going down the gutter, no matter what the final result of the acquisition is. They have some top-quality services such as Yahoo! Mail and Yahoo! Finance, and in many countries in Asia Yahoo! is a real competitor to Google, but they need to innovate so much that I doubt they will ever reverse the downward trend.
They are moving in the right direction now with Web 2.0, but I’m afraid that it might be too late.
They have recently announced that they are opening up their Search to third parties so that everybody can collaborate in building their search results:

“This open search platform enables 3rd parties to build and present the next generation of search results. There are a number of layers and capabilities that we have built into the platform, but our intent is clear — present users with richer, more useful search results so that they can complete their tasks more efficiently and get from “to do” to “done.”

Because the platform is open it gives all Web site owners — big or small — an opportunity to present more useful information on the Yahoo! Search page as compared to what is presented on other search engines. Site owners will be able to provide all types of additional information about their site directly to Yahoo! Search. So instead of a simple title, abstract and URL, for the first time users will see rich results that incorporate the massive amount of data buried in websites — ratings and reviews, images, deep links, and all kinds of other useful data — directly on the Yahoo! Search results page.

We believe that combining a free, open platform with structured, semantic content from across the Web is a clear win for all parties involved — site owners, Yahoo! and most importantly, our users.”
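To make the quoted idea concrete, here is a rough Python sketch of the difference between a classic title/abstract/URL result and the kind of rich result the announcement describes. The field names and sample data are my own illustration, not Yahoo!’s actual platform API:

```python
# Classic search result: title, abstract and URL only.
classic_result = {
    "title": "Trattoria Roma",
    "abstract": "Family-run Italian restaurant in the old town...",
    "url": "http://example.com/trattoria-roma",
}

# Rich result: the same entry plus structured data the site owner
# supplies directly (ratings, reviews, images, deep links).
rich_result = dict(
    classic_result,
    rating=4.5,
    review_count=128,
    image="http://example.com/photos/front.jpg",
    deep_links=[
        "http://example.com/trattoria-roma/menu",
        "http://example.com/trattoria-roma/reservations",
    ],
)
```

The point of the platform is that the extra fields come from the site owner’s own structured data rather than from crawling alone.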

Let’s wait and see.
You can see the details at the following links:

And MSN simply doesn’t get it. They’re trying to apply the same centralized tactics that made them so successful in the PC market, but it is evident that those won’t work on the Internet.

Combining the two companies will only produce a much more cumbersome monster, and the Internet is about just the opposite: decentralization and agility.

Many people attribute Google’s initial success to the quality of its search results. That is true today, but it wasn’t so in the beginning, when they started to draw users from other search engines. The main reason most people made Google their home page is that it was simple: no advertising or fancy graphics, just a search box and a menu where the rest of the services are listed as text links, on a page that loads very fast.

By trying to push users into using your services and bloating your front page with advertising, you are actually driving them away.
To be fair, Yahoo! does have a version of their home page that is designed that way:
Had they made it their front page many years ago, they’d still be in the game.

Screenshot of Yahoo! front page

Another feature that convinced me to switch to Google many years ago was the opportunity to try your search terms on different search engines with a single click, via the
“Try your search on Yahoo, Ask, AllTheWeb, Live, Lycos, Technorati, Feedster, Wikipedia, Bloglines, Altavista, A9”
links that used to appear on every search results page.
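The idea behind those links is trivial to reproduce: each engine exposes the query as a URL parameter, so one search box can fan out everywhere. A minimal Python sketch, where the URL templates are illustrative rather than the engines’ documented endpoints:

```python
from urllib.parse import quote_plus

# Illustrative query-URL templates; the real endpoints may differ.
ENGINES = {
    "Google": "http://www.google.com/search?q={}",
    "Yahoo": "http://search.yahoo.com/search?p={}",
    "Wikipedia": "http://en.wikipedia.org/wiki/Special:Search?search={}",
}

def cross_engine_links(query):
    """Build a 'try your search on ...' link for every engine."""
    q = quote_plus(query)  # URL-encode spaces and special characters
    return {name: template.format(q) for name, template in ENGINES.items()}

links = cross_engine_links("antikythera mechanism")
```

Adding another engine is a one-line change to the dictionary, which is essentially what extensions like CustomizeGoogle do for you.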
The same functionality can be achieved nowadays by installing CustomizeGoogle, a highly recommended Firefox extension that enhances Google search results by adding extra information (like the above-mentioned links to Yahoo, Ask.com, MSN, etc.), with the added plus of letting you remove unwanted information (like ads and spam).

CustomizeGoogle 2 min introduction movie

By creating an extremely simple entrance to an environment open to everybody, including their most direct competitors, they have succeeded in being the most popular home page on the Internet.
The KISS approach (“Keep It Simple, Stupid”) is what they used.
Keep It Simple. And Open. Stupid.

  • Occam’s razor: “entities must not be multiplied beyond necessity”
  • Albert Einstein: “everything should be made as simple as possible, but no simpler”
  • Leonardo Da Vinci: “Simplicity is the ultimate sophistication”
  • Antoine de Saint Exupéry: “It seems that perfection is reached not when there is nothing left to add, but when there is nothing left to take away”

February 5, 2008

Keyloggers protection

This is one of a series of articles I posted for magiKomputer.
You can view the original version at:
* http://magikomputer.blogspot.com/2008/02/keyloggers-protection.html

Keylogging works by recording the keystrokes you type on the keyboard to a log file that can be transmitted to a third party. Keyloggers can capture user names, passwords, account numbers, social security numbers or any other confidential information that you type using your keyboard.

There are two types of Keystroke loggers:

  • Hardware key loggers are devices that are attached to the keyboard cable or installed inside the keyboard. There are commercially available products of this kind, even dedicated keyboards with key logging functionality.
  • Software key loggers are usually simple programs that can capture the keystrokes the user is typing. They can also record mouse clicks, files opened and closed, sites visited on the Internet, etc. More advanced key loggers can also capture text from windows and make screenshots of what is displayed on the screen.

While writing a keylogging program is simple, installing it on the victim’s computer without getting caught, and downloading the logged data without being traced, is a different matter.
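To see just how simple the core of a software key logger is, consider this deliberately harmless Python sketch. It only “logs” characters handed to it as a string, but appending input to a log file is essentially all the malicious versions do (a real one would hook the OS keyboard driver instead):

```python
def log_keystrokes(keystrokes, logfile):
    """Append each 'keystroke' to a log file.

    Toy illustration only: the keystrokes here are just characters of a
    string, not real keyboard events captured from the OS.
    """
    with open(logfile, "a", encoding="utf-8") as f:
        for key in keystrokes:
            f.write(key)

# Everything the "user types" ends up in the log, ready for exfiltration.
log_keystrokes("hunter2\n", "keys.log")
```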

The best protection against keyloggers is to avoid them in the first place.
A few golden rules:

  • Use a Firewall
  • Use an Anti-virus program
  • Use an Anti-spyware program
  • Never click on links sent by unknown people, and be very careful even with known senders, since their address might be faked. If in doubt, check the e-mail headers.
  • Never execute e-mail attachments that are executable files (EXE, COM, SCR, etc.). No exceptions here.
  • Never execute programs from the Internet that lack a security certificate. Apart from Microsoft Update and very few others, there should be no reason to execute any programs from the web.
  • Run a virus and spyware check on ALL files that come from external sources (USB pen, DVDs, etc)
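The header-checking rule above can be partly automated with the Python standard library’s e-mail parser. The message and addresses below are made up for illustration; the point is that the display name in From: can claim anything, so what matters is the actual address and any mismatching Reply-To::

```python
import email
from email import policy

# A made-up phishing message: friendly display name, suspicious addresses.
raw = (b'From: "PayPal Support" <support@paypa1-secure.example>\r\n'
       b'Reply-To: collector@evil.example\r\n'
       b'Subject: Verify your account\r\n'
       b'\r\n'
       b'Click here to verify.\r\n')

msg = email.message_from_bytes(raw, policy=policy.default)

# policy.default parses address headers into structured objects.
from_addr = msg["From"].addresses[0].addr_spec
reply_to = msg["Reply-To"].addresses[0].addr_spec

# Red flags: the sender domain is not the real one, and replies are
# silently redirected to a completely different domain.
suspicious = (not from_addr.endswith("@paypal.com")
              or reply_to.split("@")[1] != from_addr.split("@")[1])
```

A check like this is no substitute for common sense, but it shows why the headers, not the display name, are worth reading.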

Additional measures that can be taken are:

  • Monitor what programs are running on your computer.
  • Monitor your network so you notice whenever an application attempts to make a network connection.
  • Use an automatic form-filler program; form fillers prevent keylogging because they do not use the keyboard.

There are commercially available anti-keyloggers, but if you’re looking for a free alternative try Spybot Search & Destroy, a freeware tool that does a pretty decent job of detecting all kinds of spyware:

Another free option is Windows Defender, a program that helps protect your computer against pop-ups, slow performance, and security threats caused by spyware: http://www.microsoft.com/athome/security/spyware/software/default.mspx

The Sysinternals web site hosts several utilities to help you manage, troubleshoot and diagnose Windows systems and applications.

  • File and Disk Utilities: utilities for viewing and monitoring file and disk access and usage.
  • Networking Utilities: networking tools that range from connection monitors to resource security analyzers.
  • Process Utilities: utilities for looking under the hood to see what processes are doing and the resources they are consuming.
  • Security Utilities: security configuration and management utilities, including rootkit and spyware hunting programs.
  • System Information: utilities for looking at system resource usage and configuration.
  • Miscellaneous Utilities: a collection of diverse utilities that includes a screen saver, a presentation aid, and a debugging tool.

In this article:
Alex provides some free and valuable advice on keylogging protection, such as using the on-screen keyboard available in Windows 2000 and XP (which can be launched by executing “osk”), or the technique of mouse highlighting and overwriting.

Or you can also download Click-N-Type virtual keyboard free from:


Also worth reading is Wikipedia’s article on Keystroke logging:

And a simple trick to fool keyloggers:

November 7, 2007

Get a Second Life

This is one of a series of articles I posted for magiKomputer.
You can view the original version at:
* http://magikomputer.blogspot.com/2007/11/get-second-life.html

Second Life is a virtual online world with a growing population of subscribers (or “residents”). Currently, the community has well over 10,000,000 residents from all over the world.
Its robust building and scripting tools let residents create a vast array of in-world objects, installations and programs in the fields of Animation, Audio, Music, Building, Architecture, Clothing, Fashion, Communications, Maps, Scripting, Textures, Prims, etc.

Although Second Life’s interface and display are similar to those of most popular massively multiplayer online role-playing games (MMORPGs), there are two key differences.
First of all, Second Life provides near unlimited freedom to its Residents. This world really is whatever you make it, and your experience is what you want out of it. If you want to hang out with your friends in a garden or nightclub, you can. If you want to go shopping or fight dragons, you can. If you want to start a business, create a game or build a skyscraper you can. It’s up to you.
And you are the legal proprietor of anything you create. Since its early stages, Linden Lab (the producer of Second Life) has allowed its residents to retain full IP rights over their own creations, thereby ensuring that their contributions to the community remain truly their own.

Second Life is the size of a small city, with thousands of servers (called simulators) and a Resident population of over 10,742,897 (and growing). Residents come to the world from over 100 countries with concentrations in North America and the UK.

Demographically, 60% are men, 40% are women, and they span in age from 18 to 85. They are gamers, housewives, artists, musicians, programmers, lawyers, firemen, political activists, college students, business owners, active-duty military overseas, architects, and medical doctors, to name just a few.

Even if you don’t know how to do 3D modeling, Second Life makes building an easy task, using the built-in tools. And there are lots of daily Resident-run classes and tutorials to help you learn.


The Second Life client comes with an updated-daily list of public Events, including games, parties, and contests; the Search window is a veritable traveler’s guide to Second Life—the places to see, the people to meet, and much more.

There are dozens of first-person shooters, strategy games, and adventure games, and even board and puzzle games.
Several regions of the world have been devoted to role playing, and resemble medieval towns, or futuristic cities. The building and scripting system even enables Residents to create their own version of a MMORPG, including hit points, character stats, and all the other classic elements.
Since gamers are a big part of the Second Life community, friendly games of combat are a regular event.


You can get your own virtual land at Second Life.
Having land in Second Life lets you have an on-going presence in the world, for your home, your business, or whatever other special place you’ve created. Even when you’re not online, your friends or customers can stop by to leave you a message or shop for your latest creation.
To get land you must sign up for the Premium membership. You’ll be able to purchase a 512 square meter plot of land before any land maintenance fees kick in.
However, you can have as much land as you choose. Change the amount of land you have and your monthly fee will adjust accordingly.
You can also consider purchasing more land through the Second Life auctions or from other Residents. Alternatively, you can join with others who are interested in the project to form a group and pool your land holdings. Groups can collectively acquire and use land.
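As a rough sketch of how such a fee schedule works, the monthly fee is simply determined by the smallest tier that covers your total holding. The tier limits and dollar amounts below are hypothetical; Linden Lab sets (and changes) the real ones. The only figure taken from the text is that the first 512 square meters come fee-free with the Premium membership:

```python
# Hypothetical tier schedule: (max square meters, monthly fee in US$).
# The first 512 m2 are covered by the Premium membership itself.
TIERS = [(512, 0.0), (1024, 5.0), (2048, 8.0), (4096, 15.0), (8192, 25.0)]

def monthly_land_fee(sq_meters):
    """Return the fee for the smallest tier covering the holding."""
    for limit, fee in TIERS:
        if sq_meters <= limit:
            return fee
    raise ValueError("holding exceeds the largest tier in this sketch")

monthly_land_fee(512)   # 0.0 -- covered by the Premium membership
monthly_land_fee(3000)  # 15.0 -- falls into the 4096 m2 tier
```

Pooling land in a group works the same way, except the group’s combined holding is what gets measured against the schedule.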


Another option is to get an island in Second Life.
Special island regions are available as a separate purchase. You can choose from several different topologies, control access from the mainland, or even decide to start your own separate community.

You are given a small weekly stipend of L$ (Second Life’s official unit of trade) when you sign up for a Premium account. Plus you can earn L$ by making and selling goods and services, holding events, and playing games.

Residents can buy and sell L$ in-world through the Linden Dollar Exchange, or on third-party websites. Some of these operators offer convenient in-world “ATM” machines to facilitate transactions.


You can even start your own business in Second Life.
Shopping is a big part of the Second Life experience for many Residents. You can buy and sell anything that can be made in-world, from clothes, skins, wigs, jewelry, and custom animations for avatars, to furniture, buildings, weapons, vehicles, games, and more. Once you’re ready to bring your products to market, it’s simply a matter of buying or sub-renting property and opening up a shop. There are also Resident-owned malls which charge rental fees or take a cut of your proceeds. As in the real world, the challenge is to build up a reputation that earns a steady stream of customers.
And as in the real world there’s money to be made if you are a successful business person. Real money, I mean.

My overall impression is that this is quite awesome stuff. It looks like it is going to become the next big thing in our lives, superseding the Internet itself as we know it.
I love the concept, but I have to say that I have this uneasy feeling that somehow there’s something evil in this invention, something that one day will get out of our hands.
Not sure why but it kind of reminds me of the first Terminator movie.
Because the next logical step would be to physically build many of those 3-D human models in the real world. Combine that with the latest advances in artificial intelligence and with the increasing isolation of human beings in today’s societies and you’ll soon get androids living our lives for us.

I don’t know if it happens to anyone else, but I’m able to semi-consciously “choose” my dreams; I mean, I sort of create my dreams to my taste and discard what I don’t like. Not always, but many times I can do it. I can even resume some dreams that I had left half-way through. One of my favorites is flying. I don’t actually fly, but rather glide for long distances, as if I were in a place with very low gravity, just as you can do in Second Life.

And I’m now having lots of dreams in which I continue to be in that SL world, flying around, teleporting to strange places, meeting lots of people, making friends, dancing, meeting beautiful girls by the dozens and having sex with a large proportion of them. Virtual sex, that is. So far.

I am not addicted yet, but all my virtual friends tell me that I will soon be.

The other day I came across a questionnaire on how Second Life users are affected by this virtual world in their real lives. The results are kind of scary: for example, about 30% of users say that Second Life is the only thing they find interesting in their lives, another 30% say that “The first thing I think about when I wake up is SecondLife”, and 20% say that “In order to be in SecondLife I eat, sleep and/or bathe less.”
Have a look:

I wonder if I should stop now before it’s too late …
