Tag Archive: intel


Intel

Intel is asking about $500 million for OnCue, the online pay-TV service that the world’s largest chipmaker developed before dialing back its ambitions, according to people with knowledge of the process. Intel is seeking to secure a sale by year-end, said the people, who asked not to be identified because the talks are private. One suitor, Verizon, has begun talking with owners of broadcast and cable channels about terms for a streaming TV service, the people said. A sale that meets Intel’s asking price would let the company recoup its costs as it retreats on a plan to enter the pay-TV business, while still supplying chips to the new owner. Samsung and Liberty Global also met with Intel, people said earlier. Intel’s TV efforts slowed under Chief Executive Officer Brian M. Krzanich, who took the reins in May and has focused on getting chips into mobile devices.

Read the full story at Bloomberg.


The recent management shake-up at chip giant Intel has claimed its first high-profile casualty. Dadi “David” Perlmutter, a well-respected executive VP who had been a leading but unsuccessful contender in the race to succeed former CEO Paul Otellini, will be leaving the company early next year, after 34 years.

Intel confirmed the move in a regulatory filing made public today.

Perlmutter, who heads up Intel’s Architecture Group, the business unit that designs and manufactures the chips that go into personal computers, servers and other devices, will leave the company in February.

After Otellini announced last year that he would retire earlier than expected, Perlmutter (pictured below) was among the candidates who pitched Intel’s board of directors on the CEO job, but he lost out to the unusual joint bid of then-COO Brian Krzanich and President Renée James.

[Photo: David Perlmutter]

Perlmutter’s departure isn’t entirely surprising. Within days of Krzanich taking over as CEO, there were reports that he had moved to take direct control over Intel’s Architecture Group and had relegated Perlmutter to a vaguely described transitional role, though Intel never made that role official.

Regulatory filings show that Perlmutter made about $15.7 million in total compensation from Intel last year, and, as of February of this year, he owned 1,968,599 shares of Intel worth nearly $47 million at today’s share price.

Perlmutter, a native of Israel, had long been seen as the company’s “Mr. Inside,” possessing a skill for getting things done. He was less effective at the public-facing portions of the job. A keynote he gave in Sept. 2012 – intended to reignite some excitement around Intel’s strategic plans to improve personal computers amid flagging sales – fell pretty flat.

His first big success at Intel came with the Centrino line of mobile processors that launched in 2003 and soon dominated the notebook market. Later, he was responsible for the Core line of chips that effectively replaced Intel’s longtime Pentium brand of PC processors.

Intel shares fell 41 cents, or 1.75 percent, to $23.66. The shares are up by nearly 15 percent this year.

Here’s how Intel described the move in the 8-K filing made public this morning:

“On October 18, 2013, David Perlmutter, Executive Vice President and General Manager, Intel Architecture Group, notified Intel Corporation (“Intel”) of his intention to leave Intel effective February 20, 2014, the 34th anniversary of his start of employment at Intel, to pursue other opportunities in his life and professional career. Throughout his career at Intel, Mr. Perlmutter led many of the product, technology and business transformations at Intel.

Until his departure in February 2014, Mr. Perlmutter will provide transition assistance to Intel’s Platform Engineering Group and on other matters as requested by management and will continue to participate in all applicable Intel compensation and benefit plans and arrangements. Mr. Perlmutter will receive post-employment benefits as described in the “Executive Compensation” section of Intel’s proxy statement filed with the Securities and Exchange Commission on April 3, 2013, including acceleration of the vesting of certain equity awards pursuant to company policy for employees age 60 or over and relocation assistance under the terms of Mr. Perlmutter’s relocation agreement.”

Dell Inspiron i17r-2877MRB With Intel Core™ i5-460M 2.53GHz Processor, 17.3" Display, 6GB DDR3 Memory, 500GB Hard Drive, Blu-ray Combo – Mars Black

  • Intel Core™ i5-460M (2.53GHz, Turbo Boost up to 2.8GHz, 4 Threads, 3MB cache)
  • 6GB DDR3, 500GB Hard Drive
  • Blu-ray Disc (BD) Combo (reads BD and writes to DVD/CD)
  • Intel 6250 Wireless-N w/ WiMax
  • 17.3" HD+ WLED (1366 x 768), Built-in 1.3MP webcam

Processor and Memory:
Intel Core™ i5-460M (2.53GHz, Turbo Boost up to 2.8GHz, 4 Threads, 3MB cache)
6GB Shared Dual Channel DDR3 SDRAM at 1333MHz (2 DIMMs)

Hard Drive and Multimedia Drives:
500GB 5400RPM Hard Drive
Blu-ray Disc (BD) Combo (reads BD and writes to DVD/CD)

Audio, Video and Graphics:
SRS Audio Enhancement with 1x3 Watt Subwoofer
Intel HD Graphics

Connectivity:
Intel 6250 Wireless-N w/ WiMax


Punch card. Keyboard. Mouse. Touchscreen. Voice. Gesture.

This abbreviated history of human-computer interaction follows a clear trajectory of improvement, where each mode of communication with technology is demonstrably easier to use than the last. We are now entering an era of natural computing, where our interaction with technology becomes more like a conversation, effortless and ordinary, and less like a chore, clunky and difficult. Those of us working in the field are focused on teaching computers to understand and adapt to the most natural human actions, instead of forcing people to learn to understand and adapt to technology.

Three years ago, the industry’s only point of reference to explain this technology was science fiction, like the movie “Minority Report.” Then in November 2010, Microsoft’s Kinect for Xbox 360 sensor was released, and broad adoption of voice and gesture technology found its way into millions of living rooms. A year later, Microsoft launched Kinect for Windows, which gives researchers and businesses the ability to take the Kinect natural computing technology to market in a variety of industries.

Since then, major investments in the field have been made by established companies like Intel and Samsung, maturing natural user interface (NUI) players like PrimeSense and SoftKinetic, and new entrants like Leap Motion and Tobii. Natural computing is moving from the realm of researchers to the minds of marketers, and a true commercial category is starting to emerge.

But even just a year ago, there was no definition, no language and no data for the commercial category. Clearly a richer, more informed language was needed. To this end, my colleagues and I have developed a category framework: Kinect and other voice and gesture technologies are part of the Natural Computing category, defined as input devices that enable users to trigger computing events in the easiest, most efficient way possible. Understanding that the term Natural Computing has a variety of different meanings in academia, we found it was a helpful term to describe the business side of human-computer interaction technologies.

In some respects, there is evidence of natural computing all around us, and there has been for many years. Think of automatic doorways, which open up for you with no effort required on your part beyond walking toward them. Think of automatic faucets, soap dispensers and hand driers — all you have to do is offer them your hand.

These systems are the most rudimentary forms of natural computing. They each recognize a single set of data (your hand placement), automatically interpret your intent (to wash or dry your hands) and immediately respond to it (by dispensing water or soap or air). Now imagine if more complicated forms of technology could understand your intent in all its complexity, and respond to it simply, immediately and perfectly. No learning required. This is how those of us working in this field see the future.
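That three-step loop of these rudimentary devices — recognize a single piece of data, interpret the user’s intent, respond immediately — can be sketched in a few lines of code. This is a minimal illustrative sketch of the automatic-faucet example, not anything from the article; all names here are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class SensorReading:
    """The single piece of data the device recognizes: is a hand present?"""
    hand_detected: bool


def interpret(reading: SensorReading) -> str:
    """Map the raw reading to the user's inferred intent."""
    return "wash_hands" if reading.hand_detected else "idle"


def respond(intent: str) -> str:
    """Trigger the action that matches the inferred intent."""
    actions = {"wash_hands": "dispense water", "idle": "do nothing"}
    return actions[intent]


# The whole interaction: sense -> interpret -> respond, no learning required.
print(respond(interpret(SensorReading(hand_detected=True))))   # dispense water
print(respond(interpret(SensorReading(hand_detected=False))))  # do nothing
```

A richer natural computing system differs only in scale: many more sensor channels, and far more nuanced intent interpretation, but the same loop.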

There are currently a limited set of ways that users can interact with computing devices, although there will certainly be more in the future. Today, these include everything from manipulating a mouse and keyboard, to touching, speaking and gesturing. The illustration below breaks down these methods according to how close the user is to the screen (“far” vs. “near”), and how hard or easy it is to learn the technology (“learned” vs. “natural”).

First, each input method is designed to solve for different distances. For example, you need to be right next to a screen to be able to touch it, yet you can be several feet or more away from it when using gesture technologies. Second, consider how much time it takes someone to learn how to use the technology. Older technologies tend to take longer to learn (think typing lessons or early command-line interfaces), while newer ones tend to take less time (think touchscreens). The combination of these two ideas, proximity and ease of use, makes up the Natural Computing Category Map, which enables us to better envision where certain natural computing technologies play a role now and where they could grow in the future.

natcomp

Figure 1. Natural Computing Category Map (Illustrative)
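The map’s two axes lend themselves to a simple data-structure sketch. The placements below are my own illustrative reading of the framework described above, not data from the figure:

```python
# Each input method placed on the two axes of the category map:
# proximity to the screen ("near"/"far") and learning curve ("learned"/"natural").
input_methods = {
    "keyboard":    ("near", "learned"),
    "mouse":       ("near", "learned"),
    "touchscreen": ("near", "natural"),
    "voice":       ("far",  "natural"),
    "gesture":     ("far",  "natural"),
}


def methods_in_quadrant(proximity: str, curve: str) -> list[str]:
    """Return the input methods falling in one quadrant of the map."""
    return [m for m, (p, c) in input_methods.items()
            if p == proximity and c == curve]


print(methods_in_quadrant("far", "natural"))    # ['voice', 'gesture']
print(methods_in_quadrant("near", "learned"))   # ['keyboard', 'mouse']
```

The far/natural quadrant is where Kinect-style sensors sit, which is why that corner of the map is where the essay sees the category growing.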

Within this new, rising category, the technology receives new information with every single gesture, move or sound, and can adapt to what it learns. After one year in market, my colleagues and I continue to see Kinect for Windows as a fundamentally human technology — one that sees and recognizes each user as a whole person — with thousands of examples of human-centered applications beyond gaming in industries like healthcare, retail, training and automotive. Competitive activity has also accelerated, with new sensor and SDK releases, updates to more established open-source offerings, and significant partnership and investment activity by major players and new entrants alike.

These other gesture-based technology companies have evolved to form partnerships with major computer hardware manufacturers or are exploring the possibilities of integrating the technology into smartphones. The category is growing and evolving rapidly. All this activity accrues to businesses and consumers, who benefit from the quickly evolving natural computing experiences.

The future of the natural computing category is to reach end-users directly, fundamentally changing everyday interactions with technology. Imagine walking by a storefront window and having an avatar mirror your every move, talking to your next-gen TV with the same tone and sentence structure you would use with a friend, or improving your tennis swing with an immersive simulation tool. If you are reading this and wondering what the future of natural computing holds in store for you, that future is here already, albeit unevenly distributed. And natural computing is quickly beginning to demonstrate what a computer can do if you give it eyes, ears and the capacity to use them.

Leslie Feinzaig is the Senior Product Manager for Kinect for Windows. Leslie plays an important role in Microsoft’s Kinect for Windows business and has researched and developed great insights into the industry and competitive landscapes around natural computing.