Why Data Is The New Oil

Artificial intelligence is only as good as the data it crunches.

The field of artificial intelligence is red hot thanks in part to big companies like Google, Facebook, and Microsoft using AI-related techniques to train computers to recognize objects in photos and understand human language.

But these companies were only able to train their computers to perform these difficult feats because they have the enormous quantities of data required. Companies that want to use artificial intelligence techniques like machine learning algorithms to improve their businesses will need to either have or acquire copious amounts of data, which will only become more valuable as the field of AI advances.

That’s one of the takeaways from a panel on artificial intelligence Monday in Aspen, Colo., during Fortune’s annual Brainstorm Tech conference.

“Data is the new oil,” Shivon Zilis, a partner with the venture capital firm Bloomberg Beta, said about data’s increasing value.

Although companies like Google, Facebook, and Amazon have open-sourced their own artificial intelligence software so any programmer can access the code and use it to build their own apps, they are not making the necessary data available, Zilis explained. These companies are releasing these AI software toolkits for free so they can recruit more engineers to their companies, she said.

When it comes to competition, it’s the data these companies possess that’s more important than the actual AI software tools they use and release to the public.

David Kenny, the general manager of IBM’s Watson data-crunching service, agreed with Zilis and said “the value of data goes up every day AI advances.”

“Data will become a currency,” Kenny said. He also explained that only 20% of the world’s information is stored on the Internet, with the other 80% being privately held within companies and organizations.

Additionally, Zilis discussed the history of so-called “AI winters,” periods during the mid-1970s and from the late 1980s to the early 1990s when AI hype failed to meet the general public’s expectations and research into the field declined for years at a time.

Today, however, the rise of the cloud and the ability to access tremendous amounts of computing power to analyze huge amounts of data have the potential to strengthen existing AI-related algorithms and lead to new, more powerful ones. Because of these technological developments, Zilis does not believe companies will lose interest in AI.

“Are we going to hit a winter?” asked Zilis. “Absolutely not.”

(Source: Fortune Magazine, written by Jonathan Vanian)

The future of computer vision

Within 20 years, computer vision will be a commodity component within the fabric of the worldwide analytics infrastructure, similar to the telecommunications infrastructure of today, containing distributed analytics and database services. Application-specific analytics and intelligence will be added to all devices by default within the Internet of All Things (IoAT), including visual, audio, textual, numerical and sensor analytics. A few new Neural Computing (NC) architectures will be standardized in silicon, applicable to all forms of data.

Major government and corporate initiatives are currently underway, similar to the space race, to create artificial brains that will contribute to the NC of the future. Future systems will contain application-specific mixtures of NCs, CPUs, GPUs, sensor processors, and I/O. The underlying technology will be a near zero-cost commodity, and the revenue will come from services, similar to phone or cable services.

Imaging devices will be more accurate, with more on-chip processing power for image processing and analytics. Image processing algorithms will be similar to those used today, with no major innovations expected. The computer vision community will standardize on a few feature descriptors and feature-learning architectures, enabling a generic NC platform for application-specific innovation and market growth.

Computer vision and analytics systems will be far superior to the primitive deep learning models in use today, combining deep learning and multivariate wide learning, with improved feature descriptor models and comprehensive training protocols enabled by ubiquitous databases containing labeled samples of any type of image or data, such as audio, text, financial records, and information about a person, place or thing. Personal privacy will virtually disappear.

Within 20 years, most mobile and hand-held devices will contain NCs connected to remote analytics services, enabling personal, business, commercial, governmental, military, law enforcement and legal organizations to perform combined audio, visual, historical, and textual evaluations for shopping, tourism, employment interviews, banking, commerce, law enforcement or housing applications.

Neural computers will evaluate facial expressions, body language and clothing style for emotions and intentions, and will evaluate the tone and rhythm of spoken words for latent intentions and assumptions, including analysis of the words in email, texts, blogs, and historical records from local governments, academic institutions, purchasing records, and other financial transactions.

The analytics will provide scenarios, what-if analysis, and predictions of future behavior within a set of circumstances, for example allowing a commercial enterprise to design situations or opportunities to suit its preferences and influence purchasing behavior, or allowing governments to develop policies and propaganda to test a population’s reactions, preferences, intentions and personal beliefs.

Computer vision will be a central component of the future of the analytics infrastructure. Imagine government policy and business plans being designed around the predictions generated by an NC to form future programs and evaluation of each program by another NC to form recommendations, with the best recommendation being chosen by another NC to send to the final decision authority – a human…or an NC?

(Source: www.KrigResearch.com)

Energy-friendly chip can perform powerful artificial-intelligence tasks

Advance could enable mobile devices to implement “neural networks” modeled on the human brain.

In recent years, some of the most exciting advances in artificial intelligence have come courtesy of convolutional neural networks, large virtual networks of simple information-processing units, which are loosely modeled on the anatomy of the human brain.

Neural networks are typically implemented using graphics processing units (GPUs), special-purpose graphics chips found in all computing devices with screens. A mobile GPU, of the type found in a cell phone, might have almost 200 cores, or processing units, making it well suited to simulating a network of distributed processors.

At the International Solid State Circuits Conference in San Francisco this week, MIT researchers presented a new chip designed specifically to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, rather than uploading data to the Internet for processing.

Neural nets were widely studied in the early days of artificial-intelligence research, but by the 1970s, they’d fallen out of favor. In the past decade, however, they’ve enjoyed a revival, under the name “deep learning.”

“Deep learning is useful for many applications, such as object recognition, speech, face detection,” says Vivienne Sze, the Emanuel E. Landsman Career Development Assistant Professor in MIT’s Department of Electrical Engineering and Computer Science whose group developed the new chip. “Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”

The new chip, which the researchers dubbed “Eyeriss,” could also help usher in the “Internet of things” — the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have sensors that report information directly to networked servers, aiding with maintenance and task coordination. With powerful artificial-intelligence algorithms on board, networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet. And, of course, onboard neural networks would be useful to battery-powered autonomous robots.

Division of labor

A neural network is typically organized into layers, and each layer contains a large number of processing nodes. Data come in and are divided up among the nodes in the bottom layer. Each node manipulates the data it receives and passes the results on to nodes in the next layer, which manipulate the data they receive and pass on the results, and so on. The output of the final layer yields the solution to some computational problem.
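
To make that layer-by-layer flow concrete, here is a minimal sketch in Python; the layer sizes and the ReLU activation are illustrative assumptions rather than details from the article. Each layer transforms the data it receives and hands the result to the next, and the last layer's output is the answer.

```python
# Minimal sketch of a layered feed-forward network (illustrative sizes).
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a list of (weights, biases) layers."""
    for weights, biases in layers:
        x = relu(weights @ x + biases)   # each layer transforms and passes on
    return x

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]                   # input, two hidden layers, output
layers = [(rng.standard_normal((n_out, n_in)) * 0.1, np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

output = forward(rng.standard_normal(8), layers)
print(output)                            # the final layer yields the "solution"
```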

In a convolutional neural net, many nodes in each layer process the same data in different ways. The networks can thus swell to enormous proportions. Although they outperform more conventional algorithms on many visual-processing tasks, they require much greater computational resources.
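
The computational cost hinted at here can be illustrated with a toy convolution; this is my own sketch, not anything from the Eyeriss paper. A handful of small filters sweep the same image, so the same pixels are processed many times in different ways, and the multiply-accumulate count grows with both image size and filter count.

```python
# Illustrative sketch: several small filters scan the same image.
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(1)
image = rng.standard_normal((32, 32))
filters = [rng.standard_normal((3, 3)) for _ in range(16)]  # 16 filters, 9 weights each

feature_maps = [conv2d(image, f) for f in filters]
macs = len(filters) * feature_maps[0].size * 9   # multiply-accumulates needed
print(f"{len(filters)} feature maps of shape {feature_maps[0].shape}, ~{macs} MACs")
```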

The particular manipulations performed by each node in a neural net are the result of a training process, in which the network tries to find correlations between raw data and labels applied to it by human annotators. With a chip like the one developed by the MIT researchers, a trained network could simply be exported to a mobile device.
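
As a rough illustration of that train-then-export idea (a toy model of my own, not the researchers' workflow), the loop below adjusts weights until they match labeled data, then saves the fixed weights, which is all a device running inference would need.

```python
# Toy training loop: fit weights to labeled data, then "export" them.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 4))            # "raw data"
true_w = np.array([1.5, -2.0, 0.5, 3.0])
y = (X @ true_w > 0).astype(float)           # "labels from human annotators"

w = np.zeros(4)
for _ in range(500):                          # simple gradient-descent training
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (pred - y) / len(y)

np.save("trained_weights.npy", w)             # export the trained network
print("trained weights:", np.round(w, 2))
```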

This application imposes design constraints on the researchers. On one hand, the way to lower the chip’s power consumption and increase its efficiency is to make each processing unit as simple as possible; on the other hand, the chip has to be flexible enough to implement different types of networks tailored to different tasks.

Sze and her colleagues — Yu-Hsin Chen, a graduate student in electrical engineering and computer science and first author on the conference paper; Joel Emer, a professor of the practice in MIT’s Department of Electrical Engineering and Computer Science, and a senior distinguished research scientist at the chip manufacturer NVidia, and, with Sze, one of the project’s two principal investigators; and Tushar Krishna, who was a postdoc with the Singapore-MIT Alliance for Research and Technology when the work was done and is now an assistant professor of computer and electrical engineering at Georgia Tech — settled on a chip with 168 cores, roughly as many as a mobile GPU has.

Act locally

The key to Eyeriss’s efficiency is to minimize the frequency with which cores need to exchange data with distant memory banks, an operation that consumes a good deal of time and energy. Whereas many of the cores in a GPU share a single, large memory bank, each of the Eyeriss cores has its own memory. Moreover, the chip has a circuit that compresses data before sending it to individual cores.

Each core is also able to communicate directly with its immediate neighbors, so that if they need to share data, they don’t have to route it through main memory. This is essential in a convolutional neural network, in which so many nodes are processing the same data.

The final key to the chip’s efficiency is special-purpose circuitry that allocates tasks across cores. In its local memory, a core needs to store not only the data manipulated by the nodes it’s simulating but data describing the nodes themselves. The allocation circuit can be reconfigured for different types of networks, automatically distributing both types of data across cores in a way that maximizes the amount of work that each of them can do before fetching more data from main memory.
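
The payoff of this kind of locality can be sketched with a back-of-the-envelope count of main-memory accesses; the image size, filter size and reuse scheme below are illustrative assumptions, not the actual Eyeriss dataflow.

```python
# Rough illustration of why local reuse cuts main-memory traffic.
IMAGE = 224          # assumed row width in pixels
FILTER = 3           # assumed filter width
OUTPUT = IMAGE - FILTER + 1

# Naive: every multiply re-fetches its pixel and its weight from main memory.
naive_fetches = OUTPUT * FILTER * 2

# With local reuse: the weights stay resident in the core's memory, and each
# pixel is fetched once and reused for the outputs it contributes to.
reuse_fetches = FILTER + IMAGE

print(f"naive: {naive_fetches} fetches, with local reuse: {reuse_fetches}")
```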

At the conference, the MIT researchers used Eyeriss to implement a neural network that performs an image-recognition task, the first time that a state-of-the-art neural network has been demonstrated on a custom chip.

“This work is very important, showing how embedded processors for deep learning can provide power and performance optimizations that will bring these complex computations from the cloud to mobile devices,” says Mike Polley, a senior vice president at Samsung’s Mobile Processor Innovations Lab. “In addition to hardware considerations, the MIT paper also carefully considers how to make the embedded core useful to application developers by supporting industry-standard [network architectures] AlexNet and Caffe.”

The MIT researchers’ work was funded in part by DARPA.

(Source: By Larry Hardesty, MIT News Office)

Will LiFi Take Big Data And The Internet Of Things To A New Level?

The enormous demand for WiFi and transmissions of mass quantities of data is putting a strain on the current technologies. With the predicted exponential growth of mobile devices, by 2019 more than ten billion mobile devices will exchange 35 quintillion (10^18) bytes of information each month, and that’s just mobile devices. Factor in traditional computers, big data servers, and Internet of Things devices and you start to see the magnitude of the problem.

But scientists have discovered a method of data transmission more than 100 times faster than traditional WiFi, and it only requires that you turn on a light.

LiFi is a category of Visible Light Communication; an LED light flickers at speeds undetectable to the naked eye to transmit data, a bit like high-tech Morse code. In fact, scientists have demonstrated in a lab that they can transmit information at as much as 224 gigabits per second, the equivalent of 18 movies of 1.5 GB each being downloaded every single second. In an office setting, they were able to achieve speeds up to 100 times faster than average WiFi speeds.
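
A quick sanity check of that arithmetic, using only the figures quoted above:

```python
# 224 gigabits per second expressed as 1.5 GB movie downloads per second.
bits_per_second = 224e9
bytes_per_second = bits_per_second / 8       # 28 GB per second
movie_bytes = 1.5e9                          # one 1.5 GB movie
print(f"{bytes_per_second / movie_bytes:.1f} movies per second")   # ~18.7
```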

The LED lights require so little energy that they can be powered by a standard Ethernet cord. Inventor Harald Haas has also suggested that the smart lights could be powered by solar cells charging batteries. In addition, LiFi does not create electromagnetic interference the way WiFi does, meaning it could have important applications in sensitive locations like healthcare facilities.

There are, of course, drawbacks. In very bright daylight, the receivers wouldn’t be able to distinguish the signal, and unlike WiFi, a LiFi signal cannot pass through walls. Of course, these limitations could be overcome with technologies like smart architecture, where the light follows the user around the space. Algorithms will determine our lighting and access to data more and more.

And actually, the fact that LiFi cannot pass through walls makes the data stream instantly more secure; users must be physically in the space in order to access the data.

In addition, LiFi could be installed anywhere users might like light and data services: bus shelters, train stations, street lights, tourist information kiosks could all provide data transmission as well as light.

(Source: Forbes Bernard Marr)

Event: Techno-Vernacular Creativity and STEAM

When:   Tuesday, December 15, 2015, at 12:00 pm

Where:  Harvard Law School campus, Wasserstein Hall, Milstein East C (Room 2036)

Host:     Harvard Berkman Center

Speaker: Dr. Nettrice Gaskins, STEAM Lab Director at Boston Arts Academy

This event will be webcast live at 12:00 pm

Dr. Gaskins discusses her model for ‘techno-vernacular’ creative production as an area of practice that investigates the characteristics of this production and its application in STEAM (Science, Technology, Engineering, Art, Mathematics) learning. Her research consists of a study involving workshops conducted between 2013 and 2014 that sought to examine the impact of the following combined methods:

a) culturally situated design, which connects vernacular art and crafts with standards-based STEM principles and allows users to simulate and develop their own creations;

b) art-based learning, which is effective in stimulating the development of 21st century skills such as creativity, learning, and innovation; and

c) educational applications of new technologies on underrepresented ethnic groups’ learning in STEAM. She has applied this research in her work as STEAM Lab Director at Boston Arts Academy.

For more details about this event and the speaker, please visit the Harvard Berkman Center.

Bluetooth Internet of Things Functionality is Coming in 2016

Nov 17, 2015 by Michael Guta

The Internet of Things (IoT) will connect billions of devices, and this connectivity is going to be driven by technologies such as Bluetooth to bring them together seamlessly.

The Bluetooth Special Interest Group (SIG) has just announced the development of key features that will be essential for the IoT, available in 2016. These include longer range, higher speeds and mesh networking.

Mark Powell, executive director, Bluetooth SIG, said, “Bluetooth has been adopted by countless developers and manufacturers as their connectivity solution of choice for the IoT. The new functionality we will soon be adding will further solidify Bluetooth as the backbone of IoT technology.”

The improvements Bluetooth has announced will yield benefits for better IoT applications across the board.

The first improvement is range, which will be up to four times that of existing technology. This means fewer modules need to be deployed to cover the same amount of space, whether indoors or outdoors.
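
Under an idealized free-space assumption (my illustration, not a SIG figure), quadrupling the range multiplies the circular area a single node can cover by roughly sixteen, which is why far fewer modules are needed for the same floor space.

```python
# Idealized coverage comparison: range 4x -> area ~16x per node.
import math

old_range_m = 10.0                # assumed range of an existing node
new_range_m = 4 * old_range_m     # "up to 4x" range
area_ratio = (math.pi * new_range_m**2) / (math.pi * old_range_m**2)
print(f"coverage area per node grows ~{area_ratio:.0f}x")   # ~16x
```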

The second enhancement is a 100 percent increase in speed. The speed at which data is transmitted in IoT deployments can open up new possibilities; for real-time services in healthcare, critical infrastructure or security, speed is extremely important.

The third development is mesh networking, a key architecture for the IoT. Mesh is a network topology in which devices are interconnected through network nodes; each node can accept and forward data to other nodes. This architecture provides more cost-effective scalability and easier deployment.
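
A minimal sketch of that relay behaviour, using a hypothetical five-device topology rather than anything from the Bluetooth specification, shows how a message hops from node to node until every reachable device has seen it.

```python
# Toy mesh relay: each node accepts a message once and forwards it onward.
from collections import deque

# Hypothetical topology: node -> set of directly reachable neighbours.
mesh = {
    "lamp": {"switch", "sensor"},
    "switch": {"lamp", "thermostat"},
    "sensor": {"lamp"},
    "thermostat": {"switch", "lock"},
    "lock": {"thermostat"},
}

def flood(source, message):
    """Relay a message hop by hop; each node forwards it exactly once."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        print(f"{node} relays {message!r}")
        for neighbour in mesh[node] - seen:
            seen.add(neighbour)
            queue.append(neighbour)

flood("sensor", "temperature=21C")
```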

Combined, these improvements will give developers, service providers and hardware manufacturers new possibilities in the application of Bluetooth for IoT deployments.

Until very recently, the Internet of Things, and now the Internet of Everything, were niche terms used by people and organizations in the know. But today, the incredible forecasts and potential of the technology have brought it to the public at large. One of the more bullish forecasts comes from Cisco, which has the IoT market reaching up to $19 trillion with 50 billion connected devices by 2020.

Connecting everything may conjure images of Orwell’s Big Brother, but, that aside, the world will be a more efficient place. As it stands, the rise in population and the decline of resources demand technology that can keep track of how we manage everything around us.

The way we live, work and play is going to change, and new opportunities and challenges will present themselves, with the IoT playing a critical role.

The organization said support for the new IoT and other features, as well as profile updates contained in the 2016 roadmap will be previewed in the coming months.

(Source: Small Business Trends)

Virtual Reality

How Virtual Reality Works

If Facebook, Sony and HTC have their way, next year’s most coveted consumer technology product will not be a smartphone or a giant, paper-thin flatscreen TV. It will be a virtual-reality (VR) headset: computerised goggles that transport users to an immersive, three-dimensional universe. Here they can watch panoramic films, take virtual tours, or experience whatever other alluring distractions a growing group of VR programmers might dream up. How does the technology behind the vision work?

Brendan Iribe, the CEO of Oculus, a VR startup that was bought for $2 billion by Facebook in 2014, describes VR as a “hack on the human sensory system”. It makes sense, then, for VR companies to focus their hacking efforts on the sense that humans rely on most: vision. Humans have stereoscopic vision, which means that they perceive depth by noting the subtle differences between the images received by each of their eyes. VR headsets have two tiny screens, one for each eye, which exploit that. By carefully altering the images fed to each eye, the user’s brain is persuaded that it is looking at an entire three-dimensional world instead of a pair of flat images.
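
A simplified pinhole-camera sketch can make the principle concrete; the interpupillary distance and focal length below are assumed values, not any headset's actual renderer. The same 3D point is projected once per eye, and the difference between the two projections (the disparity) shrinks with distance, which is the depth cue the brain picks up.

```python
# Toy stereoscopic projection: one image per eye, offset by half the IPD.
IPD = 0.064            # assumed interpupillary distance in metres
FOCAL = 1.0            # assumed focal length of the virtual camera

def project(point, eye_offset_x):
    """Pinhole projection of (x, y, z) for a camera shifted along x."""
    x, y, z = point
    return (FOCAL * (x - eye_offset_x) / z, FOCAL * y / z)

for depth in (0.5, 2.0, 10.0):
    point = (0.0, 0.0, depth)
    left = project(point, -IPD / 2)
    right = project(point, +IPD / 2)
    disparity = left[0] - right[0]     # larger disparity = nearer object
    print(f"depth {depth:>4} m -> disparity {disparity:.4f}")
```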

The next trick is to make it seem as if that world surrounds the user. Modern VR headsets are fitted with tiny sensors similar to those used in smartphones—accelerometers, gyroscopes and the like—which let them keep track of the movements of the wearer’s head. When the user looks around, the computer can update the view on the screens. But those sensors must update themselves dozens of times a second, and errors accumulate quickly. So headsets are also equipped with LEDs. That allows a camera, mounted elsewhere in the room, to keep track of the headset and to correct errors in the embedded sensors as they accumulate. It also allows the computer to keep track of the user’s body. That allows hand-held controls to give users a pair of virtual arms and hands, and means that walking forward in the real world results in movement in the virtual one.
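
The drift-and-correction loop can be illustrated with a toy simulation; the bias, update rate and blending factor are made-up values, not real sensor figures. Integrating a slightly biased gyroscope lets error build up, and an occasional camera fix pulls the estimate back toward the truth.

```python
# Toy head-tracking drift: gyro integration accumulates error,
# periodic camera (LED) fixes correct it.
import random

random.seed(0)
true_yaw, estimate = 0.0, 0.0
GYRO_BIAS = 0.02          # degrees of systematic error per update (assumed)
CAMERA_EVERY = 30         # a camera correction arrives every 30 updates
BLEND = 0.5               # how strongly to trust the camera fix

for step in range(1, 121):
    turn = random.uniform(-1.0, 1.0)          # actual head motion, degrees
    true_yaw += turn
    estimate += turn + GYRO_BIAS              # gyro reading drifts slowly
    if step % CAMERA_EVERY == 0:              # external LED/camera fix
        estimate += BLEND * (true_yaw - estimate)
        print(f"step {step}: error after correction "
              f"{abs(true_yaw - estimate):.2f} deg")
```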

That all sounds fairly simple in theory. But building a usable headset stretches modern computing technology to its limits (some readers may remember the last wave of VR hype, in the 1990s, before it became clear that technology was not up to the task). For VR to work, the illusion must be extraordinarily slick. Humans are extremely sensitive to visual inconsistencies; even small snags can cause “VR sickness”, an affliction like motion-sickness. So images must update very quickly. That requires beefy computing hardware capable of generating 90 or more frames of animation a second (standard TV, and most video games, target only 30 updates per second). And the sensors that track the user’s head must be able to talk to the computer at least that fast: any delay can cause an unpleasant dragging sensation. Despite the difficulties, engineers are convinced that such problems have, at last, been banished. They will learn whether they are right when the headsets go on sale in the next few months.
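
The timing constraint is easy to put in plain numbers: the entire render-and-display loop has to fit inside the per-frame budget.

```python
# Per-frame time budget at the frame rates mentioned above.
for fps in (30, 90):
    print(f"{fps} fps -> {1000 / fps:.1f} ms per frame")
# 30 fps leaves ~33.3 ms per frame; 90 fps leaves only ~11.1 ms.
```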

(Source: The Economist)