Saturday, November 29, 2008

HTC's Real iPhone Rival Stands Up: the Touch HD (PC World)

High Tech Computer (HTC), the world's largest maker of smartphones that use Microsoft's Windows Mobile software, launched the Touch HD handset in Taipei on Wednesday, a 3.8-inch touchscreen mobile phone that more closely matches up to the iPhone 3G.

Earlier this year, HTC launched the Touch Diamond, a sleek 3G (third generation mobile telecommunications) handset meant to rival Apple's hit handset, but its screen is much smaller at 2.8 inches. The iPhone 3G sports a 3.5-inch screen.

HTC teamed up with network operator Taiwan Mobile to launch the Touch HD. Taiwan Mobile, one of the largest mobile network operators on the island by subscribers, plans to launch the device next month.

Taiwan Mobile has not yet decided on a sales plan for the Touch HD. The retail price suggested by HTC is NT$25,900 (US$776).

HTC representatives at the launch also could not give a time frame on when the Touch HD will launch in other parts of the world. Reports say the U.K. and Singapore will see the handset shortly.

The touchscreen on the Touch HD is the most responsive yet in the Touch series, but HTC representatives were unable to say why. The processor on board, a Qualcomm 7201A, is the same as that on the Touch Diamond, and both handsets use the Microsoft Windows Mobile 6.1 operating system.

The Touch HD was quicker and more responsive than the Touch Diamond, and more closely matched the touchscreen on HTC's latest development, the T-Mobile G1 (also known as the Google phone).

The Touch HD also sports a better onboard digital camera than the Diamond, with 5-megapixel resolution.

The HD has on board GPS (global positioning system) and works with Google Maps. The 3G handset allows users to video chat, download information over mobile networks or via Wi-Fi 802.11b/g. The smartphone works on WCDMA 900/2100MHz signals and supports quad-band GSM/GPRS/EDGE.

The Touch HD weighs 147 grams, and measures 115 millimeters by 62.8mm by 12mm.

Taiwan Mobile threw out a strong sales pitch during the Taipei news conference. The company faces stiff competition from the upcoming launch of the iPhone 3G by market leader Chunghwa Telecom.

Cliff Lai, chief operating officer at Taiwan Mobile, called on Taiwanese patriots to buy the locally made Touch HD amid the global financial crisis.

"If we went with the iPhone, our money would go to America," he said. "But we're not interested in boosting the American economy. We're interested in boosting Taiwan's economy."



from : http://tech.yahoo.com/news/pcworld/20081129/tc_pcworld/htcsrealiphonerivalstandsupthetouchhd

Wednesday, November 26, 2008

Microsoft files new cybersquatting charges

Microsoft lists 23 Web addresses in the suit that it says are registered to Domain Investments and contain Microsoft trademarks or intentional misspellings of such names. The addresses include windoesmobile.com, wwwhotmajl.com, microsoft-games.com and zunedrivers.com. Many of the sites include advertising for various products and services.

"Defendants' registration and use of the infringing domain names is to primarily capitalize on the goodwill associated with Microsoft Marks," reads the lawsuit, which also charges 23 unnamed defendants.

For the cybersquatting charges, Microsoft asks to be awarded the defendants' profits from the sites plus damages to be determined at trial, or up to US$100,000 per domain name. The suit, filed in the U.S. District Court for the Southern District of Florida in Miami, also includes charges of trademark infringement and false advertising.

Microsoft is also asking for an injunction against Domain Investments from infringing on Microsoft's trademarks and from registering any additional URLs that contain Microsoft trademarks or misspellings of them.

An e-mail sent to an address listed on the Domain Investments Web site bounced back, and a phone message asking for comment was not immediately returned. Domain Investments' Web site describes the company as a domain development and monetization business.

from : http://www.nytimes.com/external/idg/2008/11/26/26idg-Microsoft-files.html

No Keyboard? And You Call This a BlackBerry?

Research in Motion (R.I.M.), the company that brought us the BlackBerry, has been on a roll lately. For a couple of years now, it’s delivered a series of gorgeous, functional, supremely reliable smartphones that, to this day, outsell even the much-adored iPhone.

Here’s a great example of the intelligence that drives R.I.M.: The phones all have simple, memorable, logical names instead of incomprehensible model numbers. There’s the BlackBerry Pearl (with a translucent trackball). The BlackBerry Flip (with a folding design). The BlackBerry Bold (with a stunning design and faux-leather back).

Well, there’s a new one, just out ($200 after rebate, with two-year Verizon contract), officially called the BlackBerry Storm.

But I’ve got a better name for it: the BlackBerry Dud.

The first sign of trouble was the concept: a touch-screen BlackBerry. That’s right — in its zeal to cash in on some of that iPhone touch-screen mania, R.I.M. has created a BlackBerry without a physical keyboard.

Hello? Isn’t the thumb keyboard the defining feature of a BlackBerry? A BlackBerry without a keyboard is like an iPod without a scroll wheel. A Prius with terrible mileage. Cracker Jack without a prize inside.

R.I.M. hoped to soften the blow by endowing its touch screen with something extra: clickiness. The entire screen acts like a mouse button. Press hard enough, and it actually responds with a little plastic click.

As a result, the Storm offers two degrees of touchiness. You can tap the screen lightly, or you can press firmly to register the palpable click.

It’s not a bad idea. In fact, it ought to make the on-screen keyboard feel more like actual keys. In principle, you could design a brilliant operating system where the two kinds of taps do two different things. Tap lightly to type a letter — click fully to get a pop-up menu of accented characters (é, è, ë and so on). Tap lightly to open something, click fully to open a shortcut menu of options. And so on.
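The two-level scheme described above can be captured in a small dispatcher. This is purely a hypothetical sketch, not R.I.M.'s actual software; the event labels, pressure levels, and handler names are all invented for the illustration.

```python
# Hypothetical sketch of a two-level touch dispatcher: a light tap and a
# full click on the same on-screen key trigger different actions.

LIGHT, FULL = "light", "full"

def make_key_handler(letter, accents):
    """Return a handler that types the letter on a light tap and
    pops up its accented variants on a full click."""
    def handle(press_kind):
        if press_kind == LIGHT:
            return ("type", letter)      # light tap: type the letter
        elif press_kind == FULL:
            return ("menu", accents)     # full click: accent menu
        raise ValueError(press_kind)
    return handle

e_key = make_key_handler("e", ["é", "è", "ë"])
print(e_key(LIGHT))   # a light tap types the plain letter
print(e_key(FULL))    # a full click offers the accented variants
```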

Unfortunately, R.I.M.’s execution is inconsistent and confusing.

Where to begin? Maybe with e-mail, the most important function of a BlackBerry. On the Storm, a light touch highlights the key but doesn’t type anything. It accomplishes nothing — a wasted software-design opportunity. Only by clicking fully do you produce a typed letter.

It’s too much work, like using a manual typewriter. (“I couldn’t send two e-mails on this thing,” said one disappointed veteran.)

It’s no help that the Storm shows you two different keyboards, depending on how you’re holding it (it has a tilt sensor like the iPhone’s).

When you hold it horizontally, you get the full, familiar Qwerty keyboard layout. But when you turn it upright, you get the less accurate SureType keyboard, where two letters appear on each “key,” and the software tries to figure out which word you’re typing.

For example, to type “get,” you press the GH, ER and TY keys. Unfortunately, that’s also “hey.” You can see the problem. And trying to enter Web addresses or unusual last names is utterly hopeless.
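The ambiguity is easy to demonstrate. The sketch below is a simplified model of a two-letters-per-key layout, with a tiny made-up word list standing in for SureType's real dictionary: the key sequence for "get" and "hey" comes out identical.

```python
from itertools import product

# Two letters per key, as on the upright SureType layout (simplified).
KEYS = ["qw", "er", "ty", "ui", "op", "as", "df", "gh", "jk", "l",
        "zx", "cv", "bn", "m"]
KEY_OF = {letter: key for key in KEYS for letter in key}

# Tiny stand-in dictionary; the real software consults a full word list.
DICTIONARY = {"get", "hey", "the", "and"}

def key_sequence(word):
    """The sequence of ambiguous keys a word is typed on."""
    return tuple(KEY_OF[c] for c in word)

def candidates(keys):
    """All dictionary words that match a given key sequence."""
    return {w for w in (''.join(p) for p in product(*keys))
            if w in DICTIONARY}

seq = key_sequence("get")   # ('gh', 'er', 'ty') -- same keys as "hey"
print(candidates(seq))      # both 'get' and 'hey' match
```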

Furthermore, despite having had more than a year to study the iPhone, R.I.M. has failed to exploit the virtues of an on-screen keyboard. A virtual keyboard’s keys can change, permitting you to switch languages or even alphabet systems within a single sentence. A virtual keyboard can offer canned blobs of text like “.com” and “.org” when it senses that you’re entering a Web address, or offer an @ key when addressing e-mail.

But not on the Storm.

Incredibly, the Storm even muffs simple navigation tasks. When you open a menu, the commands are too close together; even if your finger seems to be squarely on the proper item, your click often winds up activating something else in the list.

To scroll a list, you’re supposed to flick your finger across the screen, just as on the iPhone. But even this simple act is head-bangingly frustrating; the phone takes far too long to figure out that you’re swiping and not just tapping. It inevitably highlights some random list item when you begin to swipe, and then there’s a disorienting delay before the scrolling begins.

There’s no momentum to the scrolling, either, as on the iPhone or a Google phone; you can’t flick faster to scroll farther. Scrolling through a long list of phone numbers or messages, therefore, is exhausting.

Nor is that the Storm’s only delayed reaction. It can take two full seconds for the screen image to change when you turn it 90 degrees, three seconds for a program to appear, five seconds for a button-tap to register. (Remember: To convert seconds into BlackBerry time, multiply by seven.)

In short, trying to navigate this thing isn’t just an exercise in frustration — it’s a marathon of frustration.

I haven’t found a soul who tried this machine who wasn’t appalled, baffled or both.

from : http://www.nytimes.com/2008/11/27/technology/personaltech/27pogue.html?_r=1&em

Blu-ray Disc

Blu-ray, also known as Blu-ray Disc (BD), is the name of a next-generation optical disc format jointly developed by the Blu-ray Disc Association (BDA), a group of the world's leading consumer electronics, personal computer and media manufacturers (including Apple, Dell, Hitachi, HP, JVC, LG, Mitsubishi, Panasonic, Pioneer, Philips, Samsung, Sharp, Sony, TDK and Thomson). The format was developed to enable recording, rewriting and playback of high-definition video (HD), as well as storing large amounts of data. The format offers more than five times the storage capacity of traditional DVDs and can hold up to 25GB on a single-layer disc and 50GB on a dual-layer disc. This extra capacity combined with the use of advanced video and audio codecs will offer consumers an unprecedented HD experience.

While current optical disc technologies such as DVD, DVD±R, DVD±RW, and DVD-RAM rely on a red laser to read and write data, the new format uses a blue-violet laser instead, hence the name Blu-ray. Despite the different type of lasers used, Blu-ray products can easily be made backwards compatible with CDs and DVDs through the use of a BD/DVD/CD compatible optical pickup unit. The benefit of using a blue-violet laser (405nm) is that it has a shorter wavelength than a red laser (650nm), which makes it possible to focus the laser spot with even greater precision. This allows data to be packed more tightly and stored in less space, so it's possible to fit more data on the disc even though it's the same size as a CD/DVD. This together with the change of numerical aperture to 0.85 is what enables Blu-ray Discs to hold 25GB/50GB. Recent development by Pioneer has pushed the storage capacity to 500GB on a single disc by using 20 layers.
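The capacity claim follows directly from the optics: the minimum laser-spot area scales roughly with (λ/NA)², so areal data density scales with (NA/λ)². A quick back-of-the-envelope check, using the wavelengths quoted above plus DVD's numerical aperture of 0.60 (the one figure assumed beyond those in the text):

```python
# Rough areal-density comparison between DVD and Blu-ray optics.
# Density scales with (NA / wavelength)^2.

def relative_density(wavelength_nm, numerical_aperture):
    """Areal data density, up to a constant factor."""
    return (numerical_aperture / wavelength_nm) ** 2

dvd = relative_density(650, 0.60)   # red laser, NA 0.60 (assumed)
bd  = relative_density(405, 0.85)   # blue-violet laser, NA 0.85

print(f"Blu-ray packs roughly {bd / dvd:.1f}x the data per layer")
```

The ratio comes out around five, which squares with the "more than five times" figure above (25GB per layer versus DVD's 4.7GB).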

Blu-ray is currently supported by about 200 of the world's leading consumer electronics, personal computer, recording media, video game and music companies. The format also has support from all Hollywood studios and countless smaller studios as a successor to today's DVD format. Many studios have also announced that they will begin releasing new feature films on Blu-ray Disc day-and-date with DVD, as well as a continuous slate of catalog titles every month. For more information about Blu-ray movies, check out our Blu-ray movies and Blu-ray reviews section which offers information about new and upcoming Blu-ray releases, as well as what movies are currently available in the Blu-ray format.

from : http://www.blu-ray.com/info/

Google History

Early history
[Photo: Larry Page and Sergey Brin in 2003.]

Google began in January 1996 as a research project by Larry Page, a Ph.D. student at Stanford.[1] In search of a dissertation theme, Page considered—among other things—exploring the mathematical properties of the World Wide Web, understanding its link structure as a huge graph.[2] His supervisor Terry Winograd encouraged him to pick this idea (which Page later recalled as "the best advice I ever got"[3]) and Page focused on the problem of finding out which web pages link to a given page, considering the number and nature of such backlinks to be valuable information about that page (with the role of citations in academic publishing in mind).[2] In his research project, nicknamed "BackRub", he was soon joined by Sergey Brin, a fellow Stanford Ph.D. student and close friend, whom he had first met in the summer of 1995 in a group of potential new students which Brin had volunteered to show around the campus.[2] Page's web crawler began exploring the web in March 1996, setting out from Page's own Stanford home page as its only starting point.[2] To convert the backlink data that it gathered into a measure of importance for a given web page, Brin and Page developed the PageRank algorithm.[2] Analyzing BackRub's output—which, for a given URL, consisted of a list of backlinks ranked by importance—it occurred to them that a search engine based on PageRank would produce better results than existing techniques (existing search engines at the time essentially ranked results according to how many times the search term appeared on a page).[2][4] A small search engine called RankDex was already exploring a similar strategy.[5]
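The idea behind PageRank can be captured in a few lines: each page's score is redistributed along its outgoing links, with a damping factor modeling a surfer who occasionally jumps to a random page. The graph, damping value, and iteration count below are illustrative only; this is a textbook power-iteration sketch, not Google's production algorithm.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank.  `links` maps each page to the
    pages it links out to (every page must have at least one outlink)."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}
    for _ in range(iterations):
        # Everyone gets the random-jump share ...
        new = {p: (1 - damping) / n for p in pages}
        # ... plus a damped share of each page that links to them.
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += damping * share
        rank = new
    return rank

# A tiny web: B and C both link to A, so A comes out on top.
web = {"A": ["B"], "B": ["A"], "C": ["A"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))   # "A" -- most backlinks, highest rank
```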

Convinced that the pages with the most links to them from other highly relevant Web pages must be the most relevant pages associated with the search, Page and Brin tested their thesis as part of their studies, and laid the foundation for their search engine. Originally the search engine used the Stanford website with the domain google.stanford.edu. The domain google.com was registered on September 15, 1997. They formally incorporated their company, Google Inc., on September 4, 1998 at a friend's garage in Menlo Park, California.

The name "Google" originated from a misspelling of "googol",[6][7] which refers to the number represented by a 1 followed by one hundred zeros. Having found its way increasingly into everyday language, the verb "google" was added to the Merriam-Webster Collegiate Dictionary and the Oxford English Dictionary in 2006, meaning "to use the Google search engine to obtain information on the Internet."[8][9]

By the end of 1998, Google had an index of about 60 million pages.[10] The home page was still marked "BETA", but an article in Salon.com already argued that Google's search results were better than those of competitors like Hotbot or Excite.com, and praised it for being more technologically innovative than the overloaded portal sites (like Yahoo!, Excite.com, Lycos, Netscape's Netcenter, AOL.com, Go.com and MSN.com) which at that time, during the growing dot-com bubble, were seen as "the future of the Web", especially by stock market investors.[10]

In March 1999, the company moved into offices at 165 University Avenue in Palo Alto, home to several other noted Silicon Valley technology startups.[11] After quickly outgrowing two other sites, the company leased a complex of buildings in Mountain View at 1600 Amphitheatre Parkway from Silicon Graphics (SGI) in 1999.[12] The company has remained at this location ever since, and the complex has since become known as the Googleplex (a play on the word googolplex, a 1 followed by a googol of zeros). In 2006, Google bought the property from SGI for $319 million.[13]

The Google search engine attracted a loyal following among the growing number of Internet users, who liked its simple design.[14] In 2000, Google began selling advertisements associated with search keywords.[1] The ads were text-based to maintain an uncluttered page design and to maximize page loading speed.[1] Keywords were sold based on a combination of price bid and click-throughs, with bidding starting at $.05 per click.[1] This model of selling keyword advertising was pioneered by Goto.com (later renamed Overture Services, before being acquired by Yahoo! and rebranded as Yahoo! Search Marketing).[15][16][17] While many of its dot-com rivals failed in the new Internet marketplace, Google quietly rose in stature while generating revenue.[1]
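The "price bid and click-throughs" combination described above can be illustrated with a toy ranking. The exact formula Google used is not given in the text; ranking by bid × click-through rate, with the $0.05 floor applied as a minimum bid, is an assumption made purely for this sketch.

```python
MIN_BID = 0.05  # dollars per click, the floor mentioned above

def rank_ads(ads):
    """Order ads by bid x click-through rate (an assumed formula),
    dropping any bid below the minimum."""
    eligible = [a for a in ads if a["bid"] >= MIN_BID]
    return sorted(eligible, key=lambda a: a["bid"] * a["ctr"],
                  reverse=True)

ads = [
    {"name": "A", "bid": 0.40, "ctr": 0.01},   # score 0.004
    {"name": "B", "bid": 0.10, "ctr": 0.08},   # score 0.008 -- wins on CTR
    {"name": "C", "bid": 0.04, "ctr": 0.20},   # below the bid floor
]
print([a["name"] for a in rank_ads(ads)])   # ['B', 'A']
```

Note how a lower bid with a strong click-through rate can outrank a higher bid, which is the point of combining the two signals.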


Google's declared code of conduct is "Don't be evil", a phrase which they went so far as to include in their prospectus (aka "red herring" or "S-1") for their IPO, noting, "We believe strongly that in the long term, we will be better served — as shareholders and in all other ways — by a company that does good things for the world even if we forgo some short term gains."

The Google site often includes humorous features such as cartoon modifications of the Google logo to recognize special occasions and anniversaries.[18] Known as "Google Doodles", most have been drawn by Google's international webmaster, Dennis Hwang.[19] Not only may decorative drawings be attached to the logo, but the font design may also mimic a fictional or humorous language such as Star Trek Klingon and Leet.[20] The logo is also notorious among web users for April Fool's Day tie-ins and jokes about the company.

Financing and initial public offering

The first funding for Google as a company was secured in August 1998 in the form of a US$100,000 contribution from Andy Bechtolsheim, co-founder of Sun Microsystems, given to a corporation which did not yet exist.[21]

On June 7, 1999, a $25 million round of funding was announced,[22] with the major investors being rival venture capital firms Kleiner Perkins Caufield & Byers and Sequoia Capital.[21]

In October 2003, while discussing a possible initial public offering of shares (IPO), Microsoft approached the company about a possible partnership or merger.[citation needed] However, no such deal ever materialized. In January 2004, Google announced the hiring of Morgan Stanley and Goldman Sachs Group to arrange an IPO. The IPO was projected to raise as much as $4 billion.

On April 29, 2004, Google made an S-1 form SEC filing for an IPO to raise as much as $2,718,281,828. This alludes to Google's corporate culture with a touch of mathematical humor, as e ≈ 2.718281828. April 29 was also the 120th day of 2004, and according to section 12(g) of the Securities Exchange Act of 1934, "a company must file financial and other information with the SEC 120 days after the close of the year in which the company reaches $10 million in assets and/or 500 shareholders, including people with stock options."[23] Google stated in its annual filing for 2004 that every one of its 3,021 employees, "except temporary employees and contractors, are also equity holders, with significant collective employee ownership", so Google would have needed to make its financial information public by filing it with the SEC regardless of whether or not it intended to make a public offering. As Google stated in the filing, its "growth has reduced some of the advantages of private ownership. By law, certain private companies must report as if they were public companies. The deadline imposed by this requirement accelerated our decision." The SEC filing revealed that Google had turned a profit every year since 2001 and earned a profit of $105.6 million on revenues of $961.8 million during 2003.

In May 2004, Google officially cut Goldman Sachs from the IPO, leaving Morgan Stanley and Credit Suisse First Boston as the joint underwriters. They chose the unconventional way of allocating the initial offering through an auction (specifically, a "Dutch auction"), so that "anyone" would be able to participate in the offering. The smallest required account balances at most authorized online brokers that are allowed to participate in an IPO, however, are around $100,000. In the run-up to the IPO the company was forced to slash the price and size of the offering, but the process did not run into any technical difficulties or result in any significant legal challenges. The initial offering of shares was sold for $85 apiece. The stock closed at $100.34 on the first day of trading, which saw 22,351,900 shares change hands.
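In a uniform-price "Dutch" auction of this kind, bidders state a price and quantity; the underwriters work down from the highest bid until all offered shares are spoken for, and every winner pays that single clearing price. A minimal sketch with made-up bids (the real allocation rules were more involved):

```python
def clearing_price(bids, shares_offered):
    """bids: list of (price, quantity).  Returns the highest price at
    which cumulative demand covers the offering; all winners pay it."""
    demand = 0
    for price, quantity in sorted(bids, reverse=True):
        demand += quantity
        if demand >= shares_offered:
            return price
    return None   # offering undersubscribed

# Illustrative bids only -- not actual Google IPO order data.
bids = [(110, 3_000_000), (100, 6_000_000), (90, 8_000_000),
        (85, 10_000_000), (80, 5_000_000)]
print(clearing_price(bids, 20_000_000))   # 85 -- every winner pays $85
```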

Google's initial public offering took place on August 19, 2004. A total of 19,605,052 shares were offered at a price of $85 per share.[24] Of that, 14,142,135 (another mathematical reference as √2 ≈ 1.4142135) were floated by Google and 5,462,917 by selling stockholders. The sale raised US$1.67 billion, and gave Google a market capitalization of more than $23 billion.[25] The vast majority of Google's 271 million shares remained under Google's control. Many of Google's employees became instant paper millionaires. Yahoo!, a competitor of Google, also benefited from the IPO because it owns 2.7 million shares of Google.[26]

The company is listed on the NASDAQ stock exchange under the ticker symbol GOOG.


Growth

In February 2003, Google acquired Pyra Labs, owner of Blogger, a pioneering and leading web log hosting website. Some analysts considered the acquisition inconsistent with Google's business model. However, the acquisition secured the company's competitive ability to use information gleaned from blog postings to improve the speed and relevance of articles contained in a companion product to the search engine, Google News.

At its peak in early 2004, Google handled upwards of 84.7 percent of all search requests on the World Wide Web through its website and through its partnerships with other Internet clients like Yahoo!, AOL, and CNN. In February 2004, Yahoo! dropped its partnership with Google, providing an independent search engine of its own. This cost Google some market share, yet Yahoo!'s move highlighted Google's own distinctiveness, and today the verb "to google" has entered a number of languages (first as a slang verb and now as a standard word), meaning, "to perform a web search" (a possible indication of "Google" becoming a genericized trademark).

Analysts speculate that Google's response to its separation from Yahoo! will be to continue to make technical and visual enhancements to personalized searches, using the personal data that it is gathering from orkut, Gmail, and Google Product Search to produce unique results based on the user. Frequently, new Google enhancements or products appear in its inventory. Google Labs, the experimental section of Google.com, helps Google maximize its relationships with its users by including them in the beta development, design and testing stages of new products and enhancements of already existing ones.[27]

After the IPO, Google's stock market capitalization rose greatly and the stock price more than quadrupled. On August 19, 2004 the number of shares outstanding was 172.85 million while the "free float" was 19.60 million (which makes 89% held by insiders). In January 2005 the number of shares outstanding was up 100 million to 273.42 million, 53% of that was held by insiders, which made the float 127.70 million (up 110 million shares from the first trading day). The two founders are said to hold almost 30% of the outstanding shares. The actual voting power of the insiders is much higher, however, as Google has a dual class stock structure in which each Class B share gets ten votes compared to each Class A share getting one. Page says in the prospectus that Google has, "a dual class structure that is biased toward stability and independence and that requires investors to bet on the team, especially Sergey and me." The company has not reported any treasury stock holdings as of the Q3 2004 report.
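The voting arithmetic of a dual-class structure is simple to work through. The share split below is illustrative, not Google's actual cap table: even a bare majority of the equity, held as ten-vote Class B stock, carries an overwhelming share of the votes.

```python
def vote_share(class_b_shares, class_a_shares, b_votes=10, a_votes=1):
    """Fraction of total votes held by Class B holders under a
    dual-class structure (10 votes per B share vs 1 per A share)."""
    b = class_b_shares * b_votes
    a = class_a_shares * a_votes
    return b / (b + a)

# Illustrative split: 53% of 273 million shares held as Class B.
insider_b = 0.53 * 273_000_000
public_a  = 0.47 * 273_000_000
print(f"{vote_share(insider_b, public_a):.0%} of the votes")
```

Under these assumed numbers, roughly 92% of the voting power sits with the Class B holders, which is why the prospectus frames the structure as "a bet on the team."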

On June 1, 2005, Google shares gained nearly four percent after Credit Suisse First Boston raised its price target on the stock to $350. On that same day, rumors circulated in the financial community that Google would soon be included in the S&P 500.[28] When companies are first listed on the S&P 500 they typically experience a bump in share price due to the rapid accumulation of the stock within index funds that track the S&P 500. The rumors, however, were premature and Google was not added to the S&P 500 until 2006. Nevertheless, on June 7, 2005, Google was valued at nearly $52 billion, making it one of the world's biggest media companies by stock market value.

On August 18, 2005 (one year after the initial IPO), Google announced that it would sell 14,159,265 (another mathematical reference as π ≈ 3.14159265) more shares of its stock to raise money. The move would double Google's cash stockpile to $7 billion. Google said it would use the money for "acquisitions of complementary businesses, technologies or other assets".[29]

On September 28, 2005, Google announced a long-term research partnership with NASA which would involve Google building a 1-million square foot R&D center at NASA's Ames Research Center, and on December 31, 2005 Time Warner's AOL unit and Google unveiled an expanded partnership—see Partnerships below.

Additionally, Google has also recently formed a partnership with Sun Microsystems to help share and distribute each other's technologies. As part of the partnership Google will hire employees to help in the open source office program OpenOffice.org.[30]

With Google's increased size comes more competition from large mainstream technology companies. One such example is the rivalry between Microsoft and Google.[31] Microsoft has been touting its MSN Search engine to counter Google's competitive position. Furthermore, the two companies are increasingly offering overlapping services, such as webmail (Gmail vs. Hotmail), search (both online and local desktop searching), and other applications (for example, Microsoft's Windows Live Local competes with Google Earth). Some have even suggested that in addition to an Internet Explorer replacement Google is designing its own Linux-based operating system called Google OS to directly compete with Microsoft Windows. There were also rumors of a Google web browser, fueled largely by the fact that Google is the owner of the domain name "gbrowser.com". These rumors were later proven true when Google released Google Chrome. This corporate feud is most directly expressed in hiring offers and defections. Many Microsoft employees who worked on Internet Explorer have left to work for Google. This feud boiled over into the courts when Kai-Fu Lee, a former vice-president of Microsoft, quit Microsoft to work for Google. Microsoft sued to stop his move by citing Lee's non-compete contract (he had access to much sensitive information regarding Microsoft's plans in China).

Google and Microsoft reached a settlement out of court on 22 December 2005, the terms of which are confidential.[32]

Click fraud has also become a growing problem for Google's business strategy. Google's CFO George Reyes said in a December 2004 investor conference that "something has to be done about this really, really quickly, because I think, potentially, it threatens our business model."[33] Some have suggested that Google is not doing enough to combat click fraud. Jessie Stricchiola, president of Alchemist Media, called Google, "the most stubborn and the least willing to cooperate with advertisers", when it comes to click fraud.

While the company's primary market is in the web content arena, Google has also recently begun to experiment with other markets, such as radio and print publications. On January 17, 2006, Google announced that it had purchased the radio advertising company dMarc, which provides an automated system that allows companies to advertise on the radio.[34] This will allow Google to combine two advertising media—the Internet and radio—with Google's ability to laser-focus on the tastes of consumers. Google has also begun an experiment in selling advertisements from its advertisers in offline newspapers and magazines, with select advertisements in the Chicago Sun-Times.[35] They have been filling unsold space in the newspaper that would have normally been used for in-house advertisements.

During the third quarter 2005 Google Conference Call, Eric Schmidt said, "We don't do the same thing as everyone else does. And so if you try to predict our product strategy by simply saying well so and so has this and Google will do the same thing, it's almost always the wrong answer. We look at markets as they exist and we assume they are pretty well served by their existing players. We try to see new problems and new markets using the technology that others use and we build."

After months of speculation, Google was added to the Standard & Poor's 500 index (S&P 500) on March 31, 2006.[36] Google replaced Burlington Resources, a major oil producer based in Houston that had been acquired by ConocoPhillips.[37] The day after the announcement, Google's share price rose by 7%.[38]

Over the course of the past decade, Google has become quite well known for its corporate culture and innovative, clean products, and has had a major impact on online culture. In July 2006, the verb "to google" was officially added to both the Merriam-Webster Collegiate Dictionary and the Oxford English Dictionary, meaning "to use the Google search engine to obtain information on the Internet."

Philanthropy

In 2004, Google formed a non-profit philanthropic wing, Google.org, giving it a starting fund of $1 billion.[41] The express mission of the organization is to help with the issues of climate change (see also global warming), global public health, and global poverty. Among its first projects is to develop a viable plug-in hybrid electric vehicle that can attain 100 mpg. The current director is Dr. Larry Brilliant.[42]

Acquisitions

Since 2001, Google has acquired several small start-up companies, often consisting of innovative teams and products. One of the earlier companies that Google bought was Pyra Labs. They were the creators of Blogger, a weblog publishing platform, first launched in 1999. This acquisition led to many premium features becoming free. Pyra Labs was originally formed by Evan Williams, who left Google in 2004. In early 2006, Google acquired Upstartle, a company responsible for the online collaborative word processor, Writely. The technology in this product was combined with Google Spreadsheets to become Google Docs & Spreadsheets.

On October 9, 2006, Google announced that it would buy the popular online video site YouTube for $1.65 billion.[43] The brand, YouTube, will continue to exist, and will not merge with Google Video. Meanwhile, Google Video signed an agreement with Sony BMG Music Entertainment and the Warner Music Group, for both companies to deliver music videos to the site.[44] The deal was finalized by November 13.[45]

On October 31, 2006, Google announced that it had purchased JotSpot, a company that helped pioneer the market for collaborative, web-based business software, to bolster its position in the online document arena.[46]

On March 17, 2007, Google announced its acquisition of two more companies. The first was Trendalyzer, data-visualization software from Gapminder, a company that specializes in developing information technology for the provision of free statistics in new visual and animated ways.[47] On the same day, Google also announced its acquisition of Adscape Media, a small in-game advertising company based in San Francisco, California.[48]

Google also acquired PeakStream Technologies.

Partnerships

Google has worked with several corporations in order to improve production and services.

On September 28, 2005, Google announced a long-term research partnership with NASA which would involve Google building a 1-million square foot R&D center at NASA's Ames Research Center. NASA and Google are planning to work together on a variety of areas, including large-scale data management, massively distributed computing, bio-info-nano convergence, and encouragement of the entrepreneurial space industry. The new building would also include labs, offices, and housing for Google engineers.[49] In October 2006, Google formed a partnership with Sun Microsystems to help share and distribute each other's technologies. As part of the partnership Google will hire employees to help the open source office program OpenOffice.org.[50]

Time Warner's AOL unit and Google unveiled an expanded partnership on December 21, 2005, including an enhanced global advertising partnership and a $1 billion investment by Google for a 5% stake in AOL.[51] As part of the collaboration, Google plans to work with AOL on video search and offer AOL's premium-video service within Google Video. This did not allow users of Google Video to search for AOL's premium-video services. Display advertising throughout the Google network will also increase.

In August 2006, Google signed a $900 million agreement with News Corp.'s Fox Interactive Media unit to provide search and advertising on MySpace and other News Corp. websites including IGN, AmericanIdol.com, Fox.com, and Rotten Tomatoes, although Fox Sports was not included, as a deal already existed between News Corp. and MSN.[52][53]

On 6 December 2006, British Sky Broadcasting released details of a Sky and Google alliance.[54] This included an arrangement whereby Gmail would host a mail service for Sky, incorporating the email domain "@sky.com".

New mobile top-level domain

In coordination with several other major corporations, including Microsoft, Nokia, Samsung, and Ericsson, Google provided financial support in the launch of the .mobi top-level domain, created specifically for the mobile Internet, stating that it supported the new domain extension to help set the standards that will define the future of mobile content and improve the experience of Google users.[55] In early 2006, Google launched Google.mobi, a mobile search portal offering several Google mobile products, including stripped-down versions of its applications and services for mobile users.[56] On September 17, 2007, Google launched "AdSense for Mobile", a service giving its publishing partners the ability to monetize their mobile websites through the targeted placement of mobile text ads.[57] Also in September, Google acquired the mobile social networking site Zingku.mobi, to "provide people worldwide with direct access to Google applications, and ultimately the information they want and need, right from their mobile devices."

from : http://en.wikipedia.org/wiki/History_of_Google

Technology

Technology is a broad concept that deals with a species' usage and knowledge of tools and crafts, and how it affects a species' ability to control and adapt to its environment. Technology is a term with origins in the Greek "technologia", "τεχνολογία" — "techne", "τέχνη" ("craft") and "logia", "λογία" ("saying").[1] However, a strict definition is elusive; "technology" can refer to material objects of use to humanity, such as machines, hardware or utensils, but can also encompass broader themes, including systems, methods of organization, and techniques. The term can either be applied generally or to specific areas: examples include "construction technology", "medical technology", or "state-of-the-art technology".

The human race's use of technology began with the conversion of natural resources into simple tools. The prehistorical discovery of the ability to control fire increased the available sources of food and the invention of the wheel helped humans in travelling in and controlling their environment. Recent technological developments, including the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact on a global scale. However, not all technology has been used for peaceful purposes; the development of weapons of ever-increasing destructive power has progressed throughout history, from clubs to nuclear weapons.

Technology has affected society and its surroundings in a number of ways. In many societies, technology has helped develop more advanced economies (including today's global economy) and has allowed the rise of a leisure class. Many technological processes produce unwanted by-products, known as pollution, and deplete natural resources, to the detriment of the Earth and its environment. Various implementations of technology influence the values of a society and new technology often raises new ethical questions. Examples include the rise of the notion of efficiency in terms of human productivity, a term originally applied only to machines, and the challenge of traditional norms.

Philosophical debates have arisen over the present and future use of technology in society, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar movements criticise the pervasiveness of technology in the modern world, claiming that it harms the environment and alienates people; proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition. Indeed, until recently, it was believed that the development of technology was restricted only to human beings, but recent scientific studies indicate that other primates and certain dolphin communities have developed simple tools and learned to pass their knowledge to other generations.


from: http://en.wikipedia.org

Information technology

Information technology (IT), as defined by the Information Technology Association of America (ITAA), is "the study, design, development, implementation, support or management of computer-based information systems, particularly software applications and computer hardware." IT deals with the use of electronic computers and computer software to convert, store, protect, process, transmit, and securely retrieve information.

Today, the term information technology has ballooned to encompass many aspects of computing and technology, and the term has become very recognizable. The information technology umbrella can be quite large, covering many fields. IT professionals perform a variety of duties that range from installing applications to designing complex computer networks and information databases. A few of the duties that IT professionals perform may include data management, networking, engineering computer hardware, database and software design, as well as the management and administration of entire systems.

When computer and communications technologies are combined, the result is information technology, or "infotech". Information technology (IT) is a general term that describes any technology that helps to produce, manipulate, store, communicate, and/or disseminate information. When speaking of information technology as a whole, the term generally implies the combined use of computers and information.


from : http://en.wikipedia.org/

Digital library

A digital library is a library in which collections are stored in digital formats (as opposed to print, microform, or other media) and accessible by computers.[1] The digital content may be stored locally, or accessed remotely via computer networks. A digital library is a type of information retrieval system.

The first use of the term digital library in print may have been in a 1988 report to the Corporation for National Research Initiatives.[2] The term digital libraries was first popularized by the NSF/DARPA/NASA Digital Libraries Initiative in 1994. The older names electronic library or virtual library are also occasionally used, though electronic library nowadays more often refers to portals, often provided by government agencies, as in the case of the Florida Electronic Library. The DELOS Digital Library Reference Model defines a digital library as:

An organization, which might be virtual, that comprehensively collects, manages and preserves for the long term rich digital content, and offers to its user communities specialized functionality on that content, of measurable quality and according to codified policies.

Types of digital libraries


The term digital library is diffuse enough to be applied to a wide range of collections and organizations, but, to be considered a digital library, an online collection of information must be managed by and made accessible to a community of users. Thus, some web sites can be considered digital libraries, but far from all. Many of the best known digital libraries are older than the web, including Project Perseus, Project Gutenberg, and ibiblio. Nevertheless, as a result of the development of the Internet and its search potential, digital libraries such as the European Library and the Library of Congress are now developing in a Web-based environment. Public, school and college libraries are also able to develop digital download websites, featuring eBooks, audiobooks, music and video, through companies like OverDrive, Inc.

A distinction is often made between content that was created in a digital format, known as born-digital, and information that has been converted from a physical medium, e.g., paper, by digitizing. The term hybrid library is sometimes used for libraries that have both physical collections and digital collections. They consist of a combination of traditional preservation efforts such as microfilming and new technologies involving digital projects. For example, American Memory is a digital library within the Library of Congress. Some important digital libraries also serve as long term archives, for example, the ePrint arXiv, and the Internet Archive.

Academic repositories

Many academic libraries are actively involved in building institutional repositories of the institution's books, papers, theses, and other works which can be digitized or were 'born digital'. Many of these repositories are made available to the general public with few restrictions, in accordance with the goals of open access. Institutional, truly free, and corporate repositories are often referred to as digital libraries.

Digital archives

Archives differ from libraries in several ways. Traditionally, archives were defined as:

  1. Containing primary sources of information (typically letters and papers directly produced by an individual or organization) rather than the secondary sources found in a library (books, etc.);
  2. Having their contents organized in groups rather than individual items. Whereas books in a library are cataloged individually, items in an archive are typically grouped by provenance (the individual or organization who created them) and original order (the order in which the materials were kept by the creator);
  3. Having unique contents. Whereas a book may be found at many different libraries, depending on its rarity, the records in an archive are usually one-of-a-kind, and cannot be found or consulted at any other location except at the archive that holds them.

The technology used to create digital libraries has been even more revolutionary for archives since it breaks down the second and third of these general rules. The use of search engines, Optical Character Recognition and metadata allow digital copies of individual items (i.e. letters) to be cataloged, and the ability to remotely access digital copies has removed the necessity of physically going to a particular archive to find a particular set of records. The Oxford Text Archive is generally considered to be the oldest digital archive of academic primary source materials.

Project Gutenberg, Google Book Search, Windows Live Search Books, Internet Archive, Cornell University, The Library of Congress World Digital Library, The Digital Library at the University of Michigan, and CMU's Universal Library are considered leaders in the field of digital archive creation and management. There are also hundreds of regional digital archives, such as that of the Wisconsin Historical Society. The Vatican maintains an extensive digital library inventory and associated technology, and the Packard Foundation maintains digitization facilities near the Acropolis in Athens, Greece.

The future

Large scale digitization projects are underway at Google, the Million Book Project, MSN, and Yahoo!. With continued improvements in book handling and presentation technologies such as optical character recognition and ebooks, and development of alternative depositories and business models, digital libraries are rapidly growing in popularity as demonstrated by Google, Yahoo!, and MSN's efforts. Just as libraries have ventured into audio and video collections, so have digital libraries such as the Internet Archive.

Searching

Most digital libraries provide a search interface which allows resources to be found. These resources are typically deep web (or invisible web) resources since they frequently cannot be located by search engine crawlers. Some digital libraries create special pages or sitemaps to allow search engines to find all their resources. Digital libraries frequently use the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to expose their metadata to other digital libraries, and search engines like Google Scholar, Google, Yahoo! and Scirus can also use OAI-PMH to find these deep web resources.[5]
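As a concrete illustration of how OAI-PMH exposes metadata, the sketch below parses a ListRecords response and pulls out Dublin Core titles. This is a minimal example, not tied to any real repository: the SAMPLE response is hand-written, and a real harvester would fetch the XML over HTTP from a repository's OAI-PMH endpoint.

```python
import xml.etree.ElementTree as ET

# Namespaces used by OAI-PMH responses and unqualified Dublin Core.
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def extract_titles(oai_xml):
    """Pull dc:title values out of an OAI-PMH ListRecords response."""
    root = ET.fromstring(oai_xml)
    titles = []
    for record in root.iter(OAI + "record"):
        for title in record.iter(DC + "title"):
            titles.append(title.text)
    return titles

# A minimal, hand-written response of the kind a repository might return.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>On the Electrodynamics of Moving Bodies</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

print(extract_titles(SAMPLE))
```

A real client would also follow the protocol's resumptionToken elements to page through large result sets.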

There are two general strategies for searching a federation of digital libraries:

  1. distributed searching, and
  2. searching previously harvested metadata.

Distributed searching typically involves a client sending multiple search requests in parallel to a number of servers in the federation. The results are gathered, duplicates are eliminated or clustered, and the remaining items are sorted and presented back to the client. Protocols like Z39.50 are frequently used in distributed searching. A benefit to this approach is that the resource-intensive tasks of indexing and storage are left to the respective servers in the federation. A drawback to this approach is that the search mechanism is limited by the different indexing and ranking capabilities of each database, making it difficult to assemble a combined result consisting of the most relevant found items.
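The distributed-search flow just described (fan the query out, eliminate duplicates, re-rank) can be sketched in a few lines. The server names, result lists, and relevance scores below are hypothetical stand-ins for real remote targets; an actual federation would issue network requests, e.g. via Z39.50:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy per-server indexes standing in for remote search targets (hypothetical data).
SERVERS = {
    "library-a": [("Digital Libraries", 0.9), ("Metadata Basics", 0.4)],
    "library-b": [("Digital Libraries", 0.7), ("Archive Science", 0.6)],
}

def query_server(name, term):
    # In a real federation this would be a network call; here we filter a local list.
    return [(title, score) for title, score in SERVERS[name] if term.lower() in title.lower()]

def federated_search(term):
    # Fan the query out to every server in parallel.
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(lambda n: query_server(n, term), SERVERS))
    # Merge: keep the best score per title (duplicate elimination), then rank.
    best = {}
    for results in result_lists:
        for title, score in results:
            best[title] = max(best.get(title, 0.0), score)
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)

print(federated_search("digital"))
```

Note how the merge step has to reconcile scores produced by different servers; in practice those scores come from different ranking algorithms, which is exactly the drawback described above.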

Searching over previously harvested metadata involves searching a locally stored index of information that has previously been collected from the libraries in the federation. When a search is performed, the search mechanism does not need to make connections with the digital libraries it is searching - it already has a local representation of the information. This approach requires the creation of an indexing and harvesting mechanism which operates regularly, connecting to all the digital libraries and querying the whole collection in order to discover new and updated resources. OAI-PMH is frequently used by digital libraries for allowing metadata to be harvested. A benefit to this approach is that the search mechanism has full control over indexing and ranking algorithms, possibly allowing more consistent results. A drawback is that harvesting and indexing systems are more resource-intensive and therefore expensive.
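A harvested-metadata search, by contrast, runs entirely against a local index. The sketch below builds a toy inverted index from hypothetical harvested records and answers a query without contacting any source library:

```python
from collections import defaultdict

# Records as they might arrive from a periodic metadata harvest (hypothetical data).
HARVESTED = [
    {"id": "lib-a:1", "title": "Digital Libraries"},
    {"id": "lib-b:7", "title": "Preserving Digital Objects"},
    {"id": "lib-b:9", "title": "Paper Conservation"},
]

def build_index(records):
    """Map each lowercase title word to the set of record ids containing it."""
    index = defaultdict(set)
    for rec in records:
        for word in rec["title"].lower().split():
            index[word].add(rec["id"])
    return index

index = build_index(HARVESTED)
# Searching touches only the local index; no connection to the source libraries.
print(sorted(index["digital"]))
```

The cost of this approach is visible in build_index: the harvester must run regularly to keep the local index synchronized with every library in the federation.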

Problems


With the ever-expanding digital collections in today's libraries and archives, we face new preservation challenges that seem to have no concrete solutions or universal standards to guide us. For centuries we have watched the evolution of paper-based materials and have been able to successfully meet many of the challenges they present to the realm of preservation. Our digital world, however, is far too young and mercurial for us to have any long-term sense of how this new media can be preserved for future access.

On one hand multiple copies of a physical volume can exist in different libraries, but can only be viewed by visiting the library or repository directly. On the other hand, a digital object can be viewed from multiple locations but more than likely exists only as a single copy in a single location on one server.[12] Access to digital libraries and their collections is dependent upon a stable information technology infrastructure (power, computers, communications links etc.). Hence, despite the egalitarian potential of the digital library, many of those who could most benefit from its global reach (for instance in the Third World) are not able to do so. Smaller libraries and repositories in developed countries may also have limited resources in dealing with long term digitization projects. There are complex technological steps involved with capturing images, and librarians must evaluate the ability to commit to long term projects.[13]

Technological standards change over time, and forward migration must be a constant consideration of every library. Migration is a means of transferring an unstable digital object to another, more stable format, operating system, or programming language.[14] Migration makes it possible to retrieve and display digital objects that are in danger of becoming extinct. It is a rather successful short-term solution to the problem of aging and obsolete digital formats, but with the ever-changing nature of computer technologies, migration becomes a never-ending race to transfer digital objects to new and more stable formats. Migration is also flawed in the sense that when digital files are transferred, the new platform may not be able to capture the full integrity of the original object.[15] There are countless artifacts sitting in libraries all over the world that are essentially useless because the technology required to access them is obsolete. In addition to obsolescence, there are rising costs that result from continually replacing older technologies. This issue can dominate preservation policy and may shift the focus toward instant user access at the expense of physical preservation.[16]

Some critics argue that digital libraries are hampered by copyright law, because works cannot be shared over different periods of time in the manner of a traditional library. There is a dilution of responsibility that occurs as a result of the distributed nature of digital resources. Complex intellectual property matters may become involved, since digital material is not always owned by a library.[17] The content is, in many cases, public domain or self-generated content only. Some digital libraries, such as Project Gutenberg, work to digitize out-of-copyright works and make them freely available to the public. An estimate has been made of the number of distinct books still extant in library catalogues from 2000 BC to 1960.[18][19]

Other digital libraries accommodate copyright concerns by licensing content and distributing it on a commercial basis, which allows for better management of the content's reproduction and the payment (if required) of royalties. The fair use provisions (17 USC § 107) of copyright law specify the circumstances under which libraries are allowed to copy digital resources. The four factors that determine fair use are the purpose of the use, the nature of the work, the market impact, and the amount or substantiality used.[20]


from: http://en.wikipedia.org/


History of Microsoft Windows

In 1983 Microsoft announced the development of Windows, a graphical user interface (GUI) for its own operating system (MS-DOS) that had shipped for IBM PC and compatible computers since 1981.


Early history: an expansion of MS-DOS

The first independent version of Microsoft Windows, version 1.0, released on 20 November 1985, lacked a degree of functionality and achieved little popularity. It was originally going to be called Interface Manager, but Rowland Hanson, the head of marketing at Microsoft, convinced the company that the name Windows would be more appealing to consumers. Windows 1.0 was not a complete operating system, but rather extended MS-DOS, and shared the latter's inherent flaws and problems. The first version of Microsoft Windows included a simple graphics painting program called Windows Paint, Windows Write, a simple word processor, an appointment calendar, a cardfiler, a notepad, a clock, a control panel, a terminal, Clipboard, and RAM driver. It also included the MS-DOS Executive and a game called Reversi.

Furthermore, legal challenges by Apple limited its functionality. For example, windows could only appear "tiled" on the screen; that is, they could not overlap or overlie one another. Also, there was no trash can (place to store files prior to deletion), since Apple claimed ownership of the rights to that paradigm. Microsoft later removed both of these limitations by signing a licensing agreement.

Microsoft Windows version 2 came out on 9 December 1987, and proved slightly more popular than its predecessor. Much of the popularity for Windows 2.0 came by way of its inclusion as a "run-time version" with Microsoft's new graphical applications, Excel and Word for Windows. They could be run from MS-DOS, executing Windows for the duration of their activity, and closing down Windows upon exit.

Microsoft Windows received a major boost around this time when Aldus PageMaker appeared in a Windows version, having previously run only on Macintosh. Some computer historians date this, the first appearance of a significant and non-Microsoft application for Windows, as the beginning of the success of Windows.

Versions 2.0x used the real-mode memory model, which confined it to a maximum of 1 megabyte of memory. In such a configuration, it could run under another multitasker like DESQview, which used the 286 Protected Mode.

Later, two new versions were released: Windows/286 2.1 and Windows/386 2.1. Like previous versions of Windows, Windows/286 2.1 used the real-mode memory model, but was the first version to support the HMA. Windows/386 2.1 had a protected mode kernel with LIM-standard EMS emulation, the predecessor to XMS which would finally change the topology of IBM PC computing. All Windows and DOS-based applications at the time were real mode, running over the protected mode kernel by using the virtual 8086 mode, which was new with the 80386 processor.

Version 2.03, and later 3.0, faced challenges from Apple over its overlapping windows and other features Apple charged mimicked the "look and feel" of its operating system and "embodie[d] and generate[d] a copy of the Macintosh" in its OS. Judge William Schwarzer dropped all but 10 of the 189 charges that Apple had sued Microsoft with on 5 January 1989.

Success with Windows 3.0


Microsoft Windows scored a significant success with Windows 3.0, released in 1990. In addition to improved capabilities given to native applications, Windows also allows a user to better multitask older MS-DOS based software compared to Windows/386, thanks to the introduction of virtual memory. It made PC compatibles serious competitors to the Apple Macintosh. This benefited from the improved graphics available on PCs by this time (by means of VGA video cards), and the Protected/Enhanced mode which allowed Windows applications to use more memory in a more painless manner than their DOS counterparts could. Windows 3.0 can run in any of Real, Standard, or 386 Enhanced modes, and is compatible with any Intel processor from the 8086/8088 up to 80286 and 80386. Windows 3.0 tries to auto detect which mode to run in, although it can be forced to run in a specific mode using the switches: /r (real mode), /s ("standard" 286 protected mode) and /3 (386 enhanced protected mode) respectively. This was the first version to run Windows programs in protected mode, although the 386 enhanced mode kernel was an enhanced version of the protected mode kernel in Windows/386.

Due to this backward compatibility, Windows 3.0 applications also had to be compiled in a 16-bit environment, without ever using the full 32-bit capabilities of the 386 CPU.

A "multimedia" version, Windows 3.0 with Multimedia Extensions 1.0, was released several months later. It was bundled with "multimedia upgrade kits" comprising a CD-ROM drive and a sound card, such as the Creative Labs Sound Blaster Pro. This version was the precursor to the multimedia features available in Windows 3.1 and later, and was part of Microsoft's specification for the Multimedia PC.

The features listed above and growing market support from application software developers made Windows 3.0 wildly successful, selling around 10 million copies in the two years before the release of version 3.1. Windows 3.0 became a major source of income for Microsoft, and led the company to revise some of its earlier plans.

A step sideways: OS/2


During the mid to late 1980s, Microsoft and IBM had cooperatively been developing OS/2 as a successor to DOS. OS/2 would take full advantage of the aforementioned Protected Mode of the Intel 80286 processor and up to 16MB of memory. OS/2 1.0, released in 1987, supported swapping and multitasking and allowed running of DOS executables.

A GUI, called the Presentation Manager (PM), was not available with OS/2 until version 1.1, released in 1988. Its API was incompatible with Windows. (Among other things, Presentation Manager placed X,Y coordinate 0,0 at the bottom left of the screen like Cartesian coordinates, while Windows put 0,0 at the top left of the screen like most other computer window systems.) Version 1.2, released in 1989, introduced a new file system, HPFS, to replace the FAT file system.

By the early 1990s, conflicts developed in the Microsoft/IBM relationship. The two companies had cooperated in developing their PC operating systems and had access to each other's code. Microsoft wanted to further develop Windows, while IBM wanted future work to be based on OS/2. In an attempt to resolve this tension, IBM and Microsoft agreed that IBM would develop OS/2 2.0, to replace OS/2 1.3 and Windows 3.0, while Microsoft would develop a new operating system, OS/2 3.0, to later succeed OS/2 2.0.

However, this agreement soon fell apart, and the Microsoft/IBM relationship was terminated. IBM continued to develop OS/2, while Microsoft changed the name of its (as yet unreleased) OS/2 3.0 to Windows NT. Both companies retained the rights to use OS/2 and Windows technology developed up to the termination of the agreement; Windows NT, however, was to be written anew, mostly independently (see below).

After an interim 1.3 version to fix many remaining problems with the 1.x series, IBM released OS/2 version 2.0 in 1992. This was a major improvement: it featured a new, object-oriented GUI, the Workplace Shell (WPS), that included a desktop and was considered by many to be OS/2's best feature. Microsoft would later imitate much of it in Windows 95. Version 2.0 also provided a full 32-bit API, offered smooth multitasking and could take advantage of the 4 gigabytes of address space provided by the Intel 80386. Nevertheless, much of the system still consisted of 16-bit code internally, which required, among other things, device drivers to be 16-bit code as well. This was one of the reasons for the chronic shortage of OS/2 drivers for the latest devices. Version 2.0 could also run DOS and Windows 3.0 programs, since IBM had retained the right to use the DOS and Windows code as a result of the breakup.

At the time, it was unclear who would win the so-called "Desktop wars". But in the end, OS/2 did not manage to gain enough market share, even though IBM released several improved versions subsequently (see below).

Windows 3.1 and NT


In response to the impending release of OS/2 2.0, Microsoft developed Windows 3.1, which includes several minor improvements to Windows 3.0 (such as display of TrueType scalable fonts, developed jointly with Apple), but primarily consists of bugfixes and multimedia support. It also excludes support for Real mode, and only runs on an 80286 or better processor. Later Microsoft also released Windows 3.11, a touch-up to Windows 3.1 which includes all of the patches and updates that followed the release of Windows 3.1 in 1992. Around the same time, Microsoft released Windows for Workgroups (WfW), available both as an add-on for existing Windows 3.1 installations and in a version that included the base Windows environment and the networking extensions all in one package. Windows for Workgroups includes improved network drivers and protocol stacks, and support for peer-to-peer networking. One optional download for WfW was the "Wolverine" TCP/IP protocol stack, which allowed for easy access to the Internet through corporate networks. There are two versions of Windows for Workgroups, WfW 3.1 and WfW 3.11. Unlike the previous versions, Windows for Workgroups 3.11 only runs in 386 Enhanced mode, and requires at least an 80386SX processor.

All these versions continued version 3.0's impressive sales pace. Even though the 3.1x series still lacked most of the important features of OS/2, such as long file names, a desktop, or protection of the system against misbehaving applications, Microsoft quickly took over the OS and GUI markets for the IBM PC. The Windows API became the de facto standard for consumer software.

Meanwhile Microsoft continued to develop Windows NT. The main architect of the system was Dave Cutler, one of the chief architects of VMS at Digital Equipment Corporation (later purchased by Compaq, now part of Hewlett-Packard). Microsoft hired him in 1988 to create a portable version of OS/2, but Cutler created a completely new system instead. Cutler had been developing a follow-on to VMS at DEC called Mica, and when DEC dropped the project he brought the expertise and some engineers with him to Microsoft. DEC also believed he brought Mica's code to Microsoft and sued. Microsoft eventually paid $150 million U.S. and agreed to support DEC's Alpha CPU chip in NT.

Windows NT 3.1 (Microsoft marketing wanted Windows NT to appear to be a continuation of Windows 3.1) arrived in beta form to developers at the July 1992 Professional Developers Conference in San Francisco. Microsoft announced at the conference its intention to develop a successor to both Windows NT and Windows 3.1's replacement (code-named Chicago), which would unify the two into one operating system. This successor was code-named Cairo. In hindsight, Cairo was a much more difficult project than Microsoft had anticipated, and as a result NT and Chicago would not be unified until Windows XP; even then, parts of Cairo have still not made it into Windows. One example is WinFS, the much-touted object file system of Cairo: after putting it on hold for a while, Microsoft announced that WinFS had been discontinued and that the technologies developed for it would gradually be incorporated into other products, notably Microsoft SQL Server.

Driver support was lacking due to the increased programming difficulty in dealing with NT's superior hardware abstraction model. This problem plagued the NT line all the way through Windows 2000. Programmers complained that it was too hard to write drivers for NT, and hardware developers were not going to go through the trouble of developing drivers for a small segment of the market. Additionally, although allowing for good performance and fuller exploitation of system resources, it was also resource-intensive on limited hardware, and thus was only suitable for larger, more expensive machines. Windows NT would not work for private users because of its resource demands; moreover, its GUI was simply a copy of Windows 3.1's, which was inferior to the OS/2 Workplace Shell, so there was not a good reason to propose it as a replacement to Windows 3.1.

However, the same features made Windows NT perfect for the LAN server market (which in 1993 was experiencing a rapid boom, as office networking was becoming a commodity), as it offered advanced network connectivity options and the efficient NTFS file system. Windows NT version 3.51 was Microsoft's stake in this market, a large part of which would be won over from Novell in the following years.

One of Microsoft's biggest advances initially developed for Windows NT was a new 32-bit API, to replace the legacy 16-bit Windows API. This API was called Win32, and from then on Microsoft referred to the older 16-bit API as Win16. The Win32 API had three main implementations: one for Windows NT, one for Win32s (a subset of Win32 that could be used on Windows 3.1 systems), and one for Chicago. Thus Microsoft sought to ensure some degree of compatibility between the Chicago design and Windows NT, even though the two systems had radically different internal architectures. Windows NT was the first Windows operating system based on a hybrid kernel.

Windows 95


After Windows 3.11, Microsoft began to develop a new consumer-oriented version of the operating system, code-named Chicago. Chicago was designed to support 32-bit preemptive multitasking like OS/2 and Windows NT, although a 16-bit kernel would remain for the sake of backward compatibility. The Win32 API first introduced with Windows NT was adopted as the standard 32-bit programming interface, with Win16 compatibility preserved through a technique known as "thunking". A new GUI was not originally planned as part of the release, although elements of the Cairo user interface were borrowed and added as other aspects of the release (notably Plug and Play) slipped.
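The "thunking" mentioned above can be pictured as a small translation layer sitting between 32-bit callers and 16-bit code. The real mechanism involves segment registers and stack conversion; the following is only a conceptual Python sketch of the idea (the function names and the narrowing-a-pointer-to-16-bits behavior are illustrative assumptions, not the actual Windows implementation):

```python
# Conceptual illustration of a thunk: a 32-bit caller reaches a legacy
# 16-bit routine through a wrapper that narrows each argument to 16 bits.

def win16_add(a, b):
    # Stand-in for legacy 16-bit code: arithmetic wraps at 2**16.
    return (a + b) & 0xFFFF

def thunk_to_16(func):
    """Wrap a '16-bit' routine so 32-bit values are narrowed at the boundary."""
    def wrapper(*args):
        return func(*(arg & 0xFFFF for arg in args))
    return wrapper

add32 = thunk_to_16(win16_add)
print(add32(0x1_0005, 3))  # 8 -- the high bits are discarded at the boundary
print(add32(0xFFFF, 1))    # 0 -- 16-bit wraparound inside the legacy routine
```

The real thunks also converted calling conventions and stack layouts in both directions, which this sketch omits.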

Microsoft did not convert all of the Windows code to 32-bit; parts of it remained 16-bit (albeit not directly using real mode) for reasons of compatibility, performance and development time. This, together with numerous design flaws carried over from earlier Windows versions, eventually began to affect the operating system's efficiency and stability.

Microsoft marketing adopted Windows 95 as the product name for Chicago when it was released on 24 August 1995. Microsoft gained doubly from the release: first, it became impossible for consumers to run Windows 95 on a cheaper, non-Microsoft DOS; second, although traces of DOS were never completely removed from the system, and a version of DOS was loaded briefly as part of the boot process, Windows 95 applications ran solely in 386 Enhanced Mode, with a flat 32-bit address space and virtual memory. These features made it possible for Win32 applications to address up to 2 gigabytes of virtual RAM (with another 2 GB reserved for the operating system) and, in theory, prevented them from inadvertently corrupting the memory space of other Win32 applications. In this respect the functionality of Windows 95 moved closer to Windows NT, although Windows 95/98/Me does not support more than 512 megabytes of physical RAM without obscure system tweaks.
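The 2 GB / 2 GB split described above follows directly from the width of a 32-bit pointer. A quick back-of-the-envelope check (a sketch of the arithmetic only, not of any Windows API):

```python
# A 32-bit pointer can address 2**32 bytes (4 GiB) in total.
GIB = 2 ** 30
total_address_space = 2 ** 32

user_space = total_address_space // 2            # lower 2 GiB: the application's flat address space
kernel_space = total_address_space - user_space  # upper 2 GiB: reserved for the operating system

print(total_address_space // GIB)  # 4
print(user_space // GIB)           # 2
print(kernel_space // GIB)         # 2
```

Note that these are limits on *virtual* address space per process; they are independent of the 512 MB practical ceiling on physical RAM mentioned above.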

IBM continued to market OS/2, producing later versions in OS/2 3.0 and 4.0 (also called Warp). Responding to complaints about OS/2 2.0's high demands on computer hardware, version 3.0 was significantly optimized both for speed and size. Before Windows 95 was released, OS/2 Warp 3.0 was even shipped preinstalled with several large German hardware vendor chains. However, with the release of Windows 95, OS/2 began to lose market share.

It is probably impossible to pin down a single reason why OS/2 failed to gain much market share. While OS/2 continued to run Windows 3.1 applications, it supported nothing beyond the Win32s subset of the Win32 API (see above). Unlike with Windows 3.1, IBM did not have access to the source code for Windows 95 and was unwilling to commit the time and resources needed to emulate the moving target of the Win32 API. IBM also raised OS/2 in the United States v. Microsoft case, blaming unfair marketing tactics on Microsoft's part, but many would argue that IBM's own marketing problems and lack of developer support contributed at least as much to the failure.

Microsoft released five different versions of Windows 95:

  • Windows 95 - original release
  • Windows 95 A - included Windows 95 OSR1 slipstreamed into the installation.
  • Windows 95 B - (OSR2) included several major enhancements, Internet Explorer (IE) 3.0 and full FAT32 file system support.
  • Windows 95 B USB - (OSR2.1) included basic USB support.
  • Windows 95 C - (OSR2.5) included all the above features, plus IE 4.0. This was the last 95 version produced.

OSR2, OSR2.1, and OSR2.5 were not released to the general public, rather, they were available only to OEMs that would preload the OS onto computers. Some companies sold new hard drives with OSR2 preinstalled (officially justifying this as needed due to the hard drive's capacity). This product was sold under the name Windows 97 in some countries in Europe.

The first Microsoft Plus! add-on pack was sold for Windows 95.


Microsoft released Windows NT 4.0, which featured the new Windows 95 interface on top of the Windows NT kernel (a patch was available for developers to make NT 3.51 use the new UI, but it was quite buggy).

Windows NT 4.0 came in four versions:

  • Windows NT 4.0 Workstation
  • Windows NT 4.0 Server
  • Windows NT 4.0 Server, Enterprise Edition (includes support for 8-way SMP and clustering)
  • Windows NT 4.0 Terminal Server

Windows 98


On 25 June 1998, Microsoft released Windows 98, widely regarded as a minor revision of Windows 95 but generally found to be more stable and reliable than its 1995 predecessor. It included new hardware drivers and better support for the FAT32 file system, allowing disk partitions larger than the 2 GB maximum accepted by Windows 95. USB support in Windows 98 was far superior to the token, sketchy support provided by the OEM editions of Windows 95. Windows 98 also controversially integrated the Internet Explorer browser into the Windows GUI and the Windows Explorer file manager, prompting the opening of the United States v. Microsoft case, which dealt with the question of whether Microsoft was abusing its hold on the PC operating system market to compete unfairly with companies such as Netscape.
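The 2 GB partition ceiling that FAT32 removed comes straight from FAT16's on-disk format: cluster numbers are 16 bits wide, and the largest cluster size is 32 KiB. A short illustrative calculation (the exact usable cluster count is slightly below 2**16 because a few values are reserved, which this sketch rounds over):

```python
# FAT16 identifies clusters with 16-bit numbers, so a volume holds at most
# roughly 2**16 clusters; multiplied by the largest 32 KiB cluster size,
# that yields the 2 GB partition ceiling mentioned above.
max_clusters = 2 ** 16
cluster_size = 32 * 1024                  # 32 KiB, FAT16's largest cluster
fat16_limit = max_clusters * cluster_size

print(fat16_limit // 2 ** 30)             # 2  (GiB)

# FAT32 widens cluster numbers to 32 bits (28 usable), lifting the ceiling
# far beyond any drive of the era.
fat32_clusters = 2 ** 28
print(fat32_clusters > max_clusters)      # True
```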

In 1999, Microsoft released Windows 98 Second Edition, an interim release whose most notable feature was the addition of Internet Connection Sharing, a form of network address translation that allowed several machines on a LAN (Local Area Network) to share a single Internet connection. Hardware support through device drivers was also increased. Many minor problems present in the original Windows 98 were found and fixed, making it, according to many, the most stable release of the Windows 9x line.
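The network address translation underlying Internet Connection Sharing works by rewriting outbound packets so they appear to come from the gateway's single public address, keeping a table so replies can be routed back. The following toy Python sketch shows the table-keeping idea only; the class, addresses, and port range are invented for illustration and bear no relation to the actual ICS implementation:

```python
# A toy NAT table: maps (private_ip, private_port) -> public_port so that
# several LAN machines can share one public IP address.
class NatTable:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000      # arbitrary starting port for translations
        self.outbound = {}          # (private_ip, private_port) -> public_port
        self.inbound = {}           # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip, private_port):
        """Rewrite an outgoing connection's source to the shared public address."""
        key = (private_ip, private_port)
        if key not in self.outbound:
            self.outbound[key] = self.next_port
            self.inbound[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.outbound[key])

    def translate_in(self, public_port):
        """Route a reply arriving on a public port back to the LAN machine."""
        return self.inbound[public_port]

nat = NatTable("203.0.113.5")
print(nat.translate_out("192.168.0.2", 1025))  # ('203.0.113.5', 40000)
print(nat.translate_out("192.168.0.3", 1025))  # ('203.0.113.5', 40001)
print(nat.translate_in(40001))                 # ('192.168.0.3', 1025)
```

Real NAT also tracks protocols and connection timeouts, but the shared-address bookkeeping above is the core of why one dial-up line could serve a whole LAN.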

Windows 2000

Microsoft released Windows 2000, known during its development cycle as Windows NT 5.0, in February 2000. It was successfully deployed both on the server and the workstation markets. Amongst Windows 2000's most significant new features was Active Directory, a near-complete replacement of the NT 4.0 Windows Server domain model, which built on industry-standard technologies like DNS, LDAP, and Kerberos to connect machines to one another. Terminal Services, previously only available as a separate edition of NT 4, was expanded to all server versions. A number of features from Windows 98 were incorporated as well, such as an improved Device Manager, Windows Media Player, and a revised DirectX that made it possible for the first time for many modern games to work on the NT kernel. Windows 2000 is also the last NT-kernel Windows operating system to lack Product Activation.

While Windows 2000 upgrades were available for Windows 95 and Windows 98, it was not intended for home users.[1] It lacked device drivers for many common consumer devices such as scanners and printers. The original release of Windows 2000 had a buggy and counterintuitive installation procedure; this was not fully rectified until Service Pack 4 in June 2003, after XP had been released.

Windows 2000 was available in six editions:

  • Windows 2000 Professional
  • Windows 2000 Server
  • Windows 2000 Advanced Server
  • Windows 2000 Datacenter Server
  • Windows 2000 Advanced Server Limited Edition
  • Windows 2000 Datacenter Server Limited Edition

Windows Millennium Edition (Me)


In September 2000, Microsoft introduced Windows Me (Millennium Edition), which upgraded Windows 98 with enhanced multimedia and Internet features. It also introduced the first version of System Restore, which allowed users to revert their system state to a previous "known-good" point in the case of system failure. System Restore was a notable feature that made its way into Windows XP. The first version of Windows Movie Maker was introduced as well.

Windows Me was conceived as a quick one-year project to serve as a stopgap release between Windows 98 and Windows XP. Many of its new features were available from the Windows Update site as updates for older Windows versions (System Restore and Windows Movie Maker were exceptions). As a result, Windows Me was not acknowledged as a distinct operating system along the lines of 95 or 98. It was widely criticised for serious stability issues, and for lacking real mode DOS support, to the point of being referred to as the "Mistake Edition". Windows Me was the last operating system based on the Windows 9x (monolithic) kernel and MS-DOS, and the last 32-bit release of Microsoft Windows that did not include Product Activation.

Windows XP: merging the product lines

In 2001, Microsoft introduced Windows XP (code-named "Whistler"). The merging of the Windows NT/2000 and Windows 95/98/Me lines was finally achieved with Windows XP, which uses the Windows NT 5.1 kernel, marking the entrance of the Windows NT core into the consumer market as a replacement for the aging 16/32-bit branch. The initial release met with considerable criticism, particularly in the area of security, leading to the release of three major Service Packs: SP1 in September 2002, SP2 in August 2004 and SP3 in April 2008. Service Pack 2 provided significant improvements and encouraged widespread adoption of XP among both home and business users. Windows XP remained the current edition longer than any other version of Windows, from 2001 until it was succeeded by Windows Vista, which was released to consumers on 30 January 2007.

Windows XP is available in a number of versions:

  • Windows XP Home Edition, for home desktops and laptops (notebooks)
  • Windows XP Professional, for business and power users
    • Windows XP Professional N, as above, but without a default installation of Windows Media Player, as mandated by a European Union ruling
  • Windows XP Media Center Edition (MCE), released in November 2002 for desktops and notebooks with an emphasis on home entertainment
    • Windows XP Media Center Edition 2003
    • Windows XP Media Center Edition 2004
    • Windows XP Media Center Edition 2005, released on 12 October 2004.
  • Windows XP Tablet PC Edition, for tablet PCs (PCs with touch screens)
    • Windows XP Tablet PC Edition 2005
  • Windows XP Embedded, for embedded systems
  • Windows XP Starter Edition, for new computer users in developing countries
  • Windows XP Professional x64 Edition, released on 25 April 2005 for home and workstation systems using 64-bit processors based on the x86-64 instruction set (developed by AMD as AMD64; Intel calls its version Intel 64)
  • Windows XP 64-bit Edition, a version for Intel's Itanium line of processors, maintaining 32-bit compatibility solely through a software emulator. It is roughly analogous to Windows XP Professional in features. It was discontinued in September 2005 when the last vendor of Itanium workstations stopped shipping systems marketed as "workstations".
    • Windows XP 64-bit Edition 2003, based on the Windows NT 5.2 codebase.

Windows Server 2003


On 24 April 2003, Microsoft launched Windows Server 2003, a notable update to Windows 2000 Server encompassing many new security features, a new "Manage Your Server" wizard that simplifies configuring a machine for specific roles, and improved performance. It carries the version number NT 5.2. A few services not essential for server environments are disabled by default for stability reasons, most noticeably the "Windows Audio" and "Themes" services; users have to enable them manually to get sound or the "Luna" look of Windows XP. Hardware acceleration for display is also turned off by default; users have to raise the acceleration level themselves if they trust the display card driver.

In December 2005, Microsoft released Windows Server 2003 R2, which is essentially Windows Server 2003 with SP1 (Service Pack 1) plus an add-on package. Among the new features are a number of management features for branch offices, file serving, printing and company-wide identity integration.

Windows Server 2003 is available in six editions:

  • Web Edition (32-bit)
  • Standard Edition (32 and 64-bit)
  • Enterprise Edition (32 and 64-bit)
  • Datacenter Edition (64-bit)
  • Small Business Server (32-bit)
  • Storage Server (OEM channel only)

Thin client: Windows Fundamentals for Legacy PCs

In July 2006, Microsoft released a thin-client version of Windows XP Service Pack 2, called Windows Fundamentals for Legacy PCs (WinFLP). It is available only to Software Assurance customers. The aim of WinFLP is to give companies a viable upgrade option for older PCs running Windows 95, 98, and Me that will be supported with patches and updates for the next several years. Most user applications are typically run on a remote machine using Terminal Services or Citrix.

Windows Home Server


Windows Home Server (code-named Q, or Quattro) is a server product based on Windows Server 2003 and designed for consumer use. The system was announced on 7 January 2007 by Bill Gates. Windows Home Server can be configured and monitored using a console program installed on a client PC, and offers features such as media sharing, local and remote drive backup, and file duplication.

http://www.microsoft.com/windows/products/winfamily/windowshomeserver/default.mspx

Windows Vista


Main article: Windows Vista
See also: Features new to Windows Vista and Development of Windows Vista

The current client version of Windows, Windows Vista (code-named Longhorn), was released on 30 November 2006[1] to business customers, with consumer versions following on 30 January 2007. Windows Vista aims to enhance security by introducing a new restricted user mode called User Account Control, replacing the "administrator-by-default" philosophy of Windows XP. Vista also features new graphics capabilities in the Windows Aero GUI, new applications (such as Windows Calendar, Windows DVD Maker and some new games including Chess Titans, Mahjong Titans, and Purble Place), a revised and more secure version of Internet Explorer, a new version of Windows Media Player, and a large number of underlying architectural changes.

Windows Vista ships in several editions:

  • Starter (only available in developing countries)
  • Home Basic
  • Home Premium
  • Business
  • Enterprise (only available to large businesses and enterprises)
  • Ultimate (combines both Home Premium and Enterprise)

All editions except Starter are available in both 32-bit and 64-bit versions. The biggest advantage of the 64-bit versions is the ability to use more than 4 gigabytes of RAM, a limit that 32-bit systems cannot exceed. In the first year after Vista's release, most installations were still 32-bit, owing to poor driver support for the 64-bit version.

Windows Server 2008

Windows Server 2008, released on 27 February 2008, was originally known as Windows Server Codename "Longhorn". Windows Server 2008 builds on the technological and security advances first introduced with Windows Vista, and is significantly more modular than its predecessor, Windows Server 2003.

Windows 7


Windows 7 (formerly codenamed Blackcomb and Vienna) is the next release of Microsoft Windows, an operating system produced by Microsoft for use on personal computers, including home and business desktops, laptops, Tablet PCs, and media center PCs.[1]

Microsoft stated in 2007 that it was planning Windows 7 development on a three-year time frame starting after the release of its predecessor, Windows Vista, but that the final release date would be determined by product quality.[2]

Unlike its predecessor, Windows 7 is intended to be an incremental upgrade with the goal of being fully compatible with existing device drivers, applications and hardware.[3] Presentations given by the company in 2008 have focused on multi-touch support, a redesigned Windows Shell with a new taskbar, a home networking system called HomeGroup,[4] and performance improvements. Some applications that have been included with prior releases of Microsoft Windows, most notably Windows Mail, Windows Movie Maker and Windows Photo Gallery, are no longer included with the operating system; they are instead offered separately as part of the Windows Live Essentials suite.


source: wikipedia.org