

The National Security Agency ended a program used to spy on German Chancellor Angela Merkel and a number of other world leaders after an internal Obama administration review started this summer revealed to the White House the existence of the operations, U.S. officials said.

Officials said the internal review turned up NSA monitoring of some 35 world leaders, in the U.S. government’s first public acknowledgment that it tapped the phones of world leaders.

Read the rest of this post on the original site

In a strategy shift, Hon Hai Precision Industry Co. is making a push into software development and telecom services, its latest efforts to seek new avenues of growth as revenue from contract manufacturing slows.

Chairman Terry Gou said Thursday the company plans to hire more than 2,000 software engineers to beef up its content and software development, and to build a data center in Taiwan.

Read the rest of this post on the original site

Long before I started work as the CEO of Apple, I became aware of a fundamental truth: People are much more willing to give of themselves when they feel that their selves are being fully recognized and embraced.

At Apple, we try to make sure people understand that they don’t have to check their identity at the door. We’re committed to creating a safe and welcoming workplace for all employees, regardless of their race, gender, nationality or sexual orientation.

As we see it, embracing people’s individuality is a matter of basic human dignity and civil rights. It also turns out to be great for the creativity that drives our business. We’ve found that when people feel valued for who they are, they have the comfort and confidence to do the best work of their lives.

Read the rest of this post on the original site

In 10 years I think we’ll look back and say we can’t believe we lived in a world with enormous transaction fees and all these security risks. That we couldn’t digitally send money to anyone anywhere in the world.

– Brightcove co-founder and former CEO Jeremy Allaire, on Circle, his bitcoin payment platform for merchants

In a deal that would be one of the biggest-ever foreign takeovers of a Japanese firm, Applied Materials Inc. agreed to acquire Tokyo Electron Ltd. to create a powerhouse provider of chip manufacturing equipment.

The all-stock deal announced by the two companies on Tuesday is effectively a takeover by Applied and values Tokyo Electron at $9.3 billion, a modest premium to its market value of ¥872.3 billion ($8.8 billion). Shareholders of Applied, valued at $19.7 billion under the deal, will own 68 percent of the new company. Both the CEO and CFO of the new company will come from Applied.
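The figures quoted above are internally consistent, which a quick back-of-the-envelope check confirms. The sketch below uses only the approximate dollar values from the article, not exact market data:

```python
# Illustrative arithmetic for the Applied Materials / Tokyo Electron deal
# figures quoted above (approximations from the article, not market data).
applied_value = 19.7e9        # Applied's valuation under the deal, USD
tokyo_electron_value = 9.3e9  # deal value of Tokyo Electron, USD
market_value = 8.8e9          # Tokyo Electron's pre-deal market value, USD

# The takeover premium: deal value relative to pre-deal market value.
premium = tokyo_electron_value / market_value - 1

# Applied shareholders' stake in the combined company.
applied_share = applied_value / (applied_value + tokyo_electron_value)

print(f"Premium: {premium:.1%}")            # roughly 5.7% ("modest")
print(f"Applied's share: {applied_share:.0%}")  # roughly 68%, as reported
```

The roughly 6 percent premium is indeed modest by takeover standards, and the 68 percent ownership split falls straight out of the two valuations.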

Read the rest of this post on the original site


Congress and the Federal Communications Commission have taken on an important mission. These policymakers are trying to make more public airwaves available for mobile broadband while simultaneously preserving some free, over-the-air broadcast television signals.

It’s a big task, made all the more complicated by the number of public interest considerations the FCC has to balance, and by the fact that the agency can’t compel broadcasters to give up their licenses. All it can do, under the statute that Congress passed, is create incentives for TV stations to surrender those licenses voluntarily.

Volunteering is more lucrative than it used to be, at least in this case. In return for pitching in, TV stations that participate will receive a portion of the money that companies like AT&T, Verizon, Sprint and T-Mobile pay in an auction for the right to use that spectrum.

But will all of those companies be able to seek those rights? There’s the first issue the FCC faces, if this “reverse” auction works and incentivizes TV stations to give up some inventory: Will multiple mobile carriers be able to bid for this spectrum, which is the lifeblood of any wireless service? Or will it be gobbled up by the two dominant players, AT&T and Verizon? Those two enjoy a powerful duopoly, controlling nearly 80 percent of the frequencies most valuable for mobile broadband, and more than 80 percent of the profits for the entire U.S. wireless industry.

Some have suggested that this type of imbalance is no cause for concern, and argued that improving the chances for meaningful competition among wireless providers won’t matter much for wireless users. While it’s a favorite trick of powerful incumbents to attack smaller companies’ arguments about competition as “self-serving,” there’s no mistaking the benefits to the public and the broader economy of having lower prices and better service available from someone other than the two biggest carriers.

The ultimate aim of the FCC’s efforts to design this auction is to increase the chance for a wide assortment of wireless carriers to bid. That would give individuals and businesses that rely on wireless connectivity more providers that would work hard to get and keep our business, not just rest on their laurels and their market power.

That hasn’t stopped AT&T and Verizon from spreading all sorts of disinformation about the process, however, in an attempt to make it look as though the government is out to get them. (Life’s tough for $100 billion companies, isn’t it?)

To start with, there’s this notion that the FCC is dragging its feet or asking too many questions about this auction process – all at the urging of those competitive carriers. While it’s not a good thing that FCC proceedings stretch on so long, it’s hardly due to unprecedented demands by any potential bidders. The last time the FCC cleared TV channels and auctioned those frequencies for mobile use, the whole process took about eleven years, running from 1997 until 2008. As cooler heads have noted this time around, it’s still more important to get this done right than to get it done right now.

The Department of Justice has added valuable insights to the process, in an important filing outlining the motives that dominant players like AT&T and Verizon have to bid on spectrum just to keep these valuable resources out of rivals’ hands. That’s the same concern the DOJ expressed in 2010 when it explained how auction design “can go wrong in the presence of strong wireline or wireless incumbents, since the private value for incumbents in a given locale includes … ‘foreclosure value'” derived from keeping competition at a minimum. AT&T and Verizon, the dominant incumbents on both the wireless and wireline side, can and do leverage their positions to keep competitors out.

So the FCC can and should take special care to avoid such results in the upcoming auction of the reclaimed TV band, because these frequencies are especially well suited for mobile broadband service. Low-band spectrum is more valuable because it lets wireless carriers cover more territory with fewer cell sites, and it provides better coverage indoors because it travels through walls and other obstacles better than signals at higher frequencies do.
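The physics behind low-band spectrum's advantage can be made concrete with the standard free-space path loss formula, which shows how much more signal strength is lost at higher frequencies over the same distance. The 700 MHz and 1900 MHz values below are illustrative band choices, not figures from the article:

```python
from math import log10

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard textbook formula:
    20*log10(d_km) + 20*log10(f_MHz) + 32.44)."""
    return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.44

# Compare low-band (700 MHz) against higher-band (1900 MHz) spectrum
# over the same 5 km link.
low = fspl_db(5, 700)
high = fspl_db(5, 1900)
print(f"700 MHz loss:  {low:.1f} dB")
print(f"1900 MHz loss: {high:.1f} dB")
print(f"Low-band advantage: {high - low:.1f} dB")  # about 8.7 dB less loss
```

An 8.7 dB advantage, before even accounting for better wall penetration, is why a low-band carrier can cover the same territory with far fewer cell sites.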

All carriers acknowledge its special characteristics and tremendous advantages. AT&T CEO Randall Stephenson said in an interview just last year that low-band spectrum “propagates like a bandit.” And Verizon CTO Tony Melone reported that using such frequencies gives Verizon “tremendous propagation advantages” for 4G services because “there will be fewer sites required and we’ll have better building penetration.”

Yet somehow in the current debate, these two carriers have tried to reverse course and sell the flatly incorrect argument that the airwaves in the upcoming auction are the same as all others. Their hypocritical claim that low-band spectrum is of no special significance any more is a lot easier to understand once you realize that they already have plenty of it, and that they just want to pull up the drawbridge.

Trade groups with innocuous-sounding names like “Mobile Future” purport to bolster the incumbents’ case about the evils of “tampering” with auction design. But take a look at Mobile Future’s membership – and what do you know, it’s AT&T and Verizon again, in a different guise. (No offense to the Arkansas Grocer and Retail Merchants Association, but I doubt that it or the Worcester Regional Chamber of Commerce is calling the shots there.)

Nobody is talking about tampering with the auction, or keeping AT&T and Verizon out of it. The statute simply says the FCC can’t altogether prevent anyone from taking part, and it specifically preserves the agency’s authority for general rules that “promote competition” and prevent excessive concentration of spectrum licenses in any one company’s hands.

The FCC is on course to consider the right questions in this proceeding. It’s far from completing that course, and has any number of decisions to make along the way. The suggestion that competitive considerations are off track is a self-serving statement made by the big carriers in the best position today. Of course they see no need for more competition or faster innovation. But the FCC, the Department of Justice and the rest of us know better.

Matt Wood is the Policy Director for Free Press, a nationwide, non-partisan, non-profit public interest group that fights for people’s right to connect and communicate on the air and online.

Aww, Canada

Between Nortel and RIM, we’ve seen $500 billion in market value vanish from Canada’s tech scene. We’d get a complex but we already have one.

– Mathew Ingram, tweeting about BlackBerry’s $4.7 billion buyout offer

Google Inc.’s Android unit has been negotiating with music companies to start a paid subscription music-streaming service akin to Spotify AB, according to people familiar with the matter.

Separately, Google’s YouTube video website is trying to obtain licenses from music labels to start a paid subscription service for music videos and potentially also for audio-only songs, these people said.

Read the rest of this post on the original site

Facebook Inc., facing multiple shareholder lawsuits related to its botched initial public offering, scored an initial legal victory when a federal judge in New York Wednesday dismissed a group of cases against the social networking company.

Last year, several Facebook investors sued the company, arguing that Facebook — which had shared internal financial forecasts with certain analysts before the IPO — was also obligated to disclose those projections in regulatory filings.

Read the rest of this post on the original site


Punch card. Keyboard. Mouse. Touchscreen. Voice. Gesture.

This abbreviated history of human-computer interaction follows a clear trajectory of improvement, where each mode of communication with technology is demonstrably easier to use than the last. We are now entering an era of natural computing, where our interaction with technology becomes more like a conversation, effortless and ordinary, and less like a chore, clunky and difficult. Those of us working in the field are focused on teaching computers to understand and adapt to the most natural human actions, instead of forcing people to learn to understand and adapt to technology.

Three years ago, the industry’s only point of reference to explain this technology was science fiction, like the movie “Minority Report.” Then in November 2010, Microsoft’s Kinect for Xbox 360 sensor was released, and broad adoption of voice and gesture technology found its way into millions of living rooms. A year later, Microsoft launched Kinect for Windows, which gives researchers and businesses the ability to take the Kinect natural computing technology to market in a variety of industries.

Since then, major investments in the field have been made by established companies like Intel and Samsung, maturing natural user interface (NUI) players like PrimeSense and SoftKinetic, and new entrants like Leap Motion and Tobii. Natural computing is moving from the realms of researchers to the minds of marketers, and a true commercial category is starting to emerge.

But even just a year ago, there was no definition, no language and no data for the commercial category. Clearly a richer, more informed language was needed. To this end, my colleagues and I have developed a category framework: Kinect and other voice and gesture technologies are part of the Natural Computing category, defined as input devices that enable users to trigger computing events in the easiest, most efficient way possible. Understanding that the term Natural Computing has a variety of different meanings in academia, we found it was a helpful term to describe the business side of human-computer interaction technologies.

In some respects, there is evidence of natural computing all around us, and there has been for many years. Think of automatic doorways, which open for you with no effort required on your part beyond walking toward them. Think of automatic faucets, soap dispensers and hand dryers — all you have to do is offer them your hand.

These systems are the most rudimentary forms of natural computing. They each recognize a single set of data (your hand placement), automatically interpret your intent (to wash or dry your hands) and immediately respond to it (by dispensing water or soap or air). Now imagine if more complicated forms of technology could understand your intent in all its complexity, and respond to it simply, immediately and perfectly. No learning required. This is how those of us working in this field see the future.
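The recognize-interpret-respond loop described above can be sketched in a few lines of code. This is a toy illustration of the pattern, using the automatic faucet as the example; all class and method names here are invented, and no real sensor API is assumed:

```python
# Toy sketch of the sense -> interpret -> respond loop described above.
# Names are invented for illustration; no real sensor API is assumed.

class AutomaticFaucet:
    def sense(self, hand_present: bool) -> bool:
        # Recognize a single set of data: is a hand under the spout?
        return hand_present

    def interpret(self, hand_present: bool) -> str:
        # Automatically interpret intent: a hand means the user wants water.
        return "dispense" if hand_present else "idle"

    def respond(self, intent: str) -> str:
        # Immediately respond; no learning required from the user.
        return "water on" if intent == "dispense" else "water off"

faucet = AutomaticFaucet()
intent = faucet.interpret(faucet.sense(hand_present=True))
print(faucet.respond(intent))  # water on
```

Richer natural computing systems follow the same three-step shape; what changes is the breadth of inputs sensed and the sophistication of the intent model.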

There are currently a limited set of ways that users can interact with computing devices, although there will certainly be more in the future. Today, these include everything from manipulating a mouse and keyboard, to touching, speaking and gesturing. The illustration below breaks down these methods according to how close the user is to the screen (“far” vs. “near”), and how hard or easy it is to learn the technology (“learned” vs. “natural”).

First, each input method is designed to solve for different distances. For example, you need to be right next to a screen to be able to touch it, yet you can be several feet or more away from it when using gesture technologies. Similarly, take into consideration how much time it takes someone to learn how to use the technology. Older technologies tend to take longer to learn (think typing lessons or early command line interfaces) while newer ones tend to take less time (think touchscreens). The combination of these two ideas, proximity and ease of use, makes up the Natural Computing Category Map, which enables us to better envision where certain natural computing technologies play a role now and where they could grow in the future.


Figure 1. Natural Computing Category Map (Illustrative)
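The two axes of the category map can be encoded as a small lookup structure. The placements below are rough approximations of the illustrative figure, not precise coordinates from it:

```python
# Rough encoding of the Natural Computing Category Map described above.
# Placements approximate the article's illustrative figure.

category_map = {
    # input method: (proximity to screen, learning curve)
    "keyboard":    ("near", "learned"),
    "mouse":       ("near", "learned"),
    "touchscreen": ("near", "natural"),
    "voice":       ("far",  "natural"),
    "gesture":     ("far",  "natural"),
}

# Example query: which input methods work from across the room
# with little to no learning required?
natural_at_distance = [method
                       for method, (prox, learn) in category_map.items()
                       if prox == "far" and learn == "natural"]
print(natural_at_distance)  # ['voice', 'gesture']
```

The query picks out exactly the far-and-natural quadrant where the article argues the new commercial category is emerging.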

Within this new, rising category, the technology receives new information with every single gesture, move or sound, and can adapt to what it learns. After one year in market, my colleagues and I continue to see Kinect for Windows as a fundamentally human technology — one that sees and recognizes each user as a whole person, with thousands of examples of human-centered applications beyond gaming in industries like healthcare, retail, training and automotive. Competitive activity has also accelerated, with new sensor and SDK releases, updates to more established open source offerings and significant partnership and investment activity by major players and new entrants alike.

These other gesture-based technology companies have evolved to form partnerships with major computer hardware manufacturers or are exploring the possibilities of integrating the technology in smartphones. The category is growing and evolving rapidly. All this activity accrues to the benefit of businesses and consumers, who enjoy the quickly evolving natural computing experiences.

The future of the natural computing category is to reach end users directly, fundamentally changing everyday interactions with technology. Imagine walking by a storefront window and having an avatar mirror your every move, talking to your next-gen TV with the same tone and sentence structure you would use with a friend, or improving your tennis swing with an immersive simulation tool. If you are wondering what natural computing holds in store for you, the future is here already, albeit unevenly distributed. And natural computing is quickly beginning to demonstrate what a computer can do if you give it eyes, ears and the capacity to use them.

Leslie Feinzaig is the Senior Product Manager for Kinect for Windows. Leslie plays an important role in Microsoft’s Kinect for Windows business and has researched the industry and competitive landscape around natural computing.
