Why is AI (Finally) Happening Now?

AI research efforts have been underway for many decades, although university and technology industry research funding was largely curtailed when the technology's early promises failed to deliver or were thought to be disproven. Early AI algorithm development was also greatly constrained by a lack of adequate computing power and limited access to the large data sets necessary to train and test AI applications.

In recent years, the combination of several important factors has helped to rejuvenate interest in AI research and attract capital investment in AI venture start-ups and internal enterprise R&D projects. Key factors for AI’s resurgence include:

1. Significant advances in machine learning approaches and new techniques for more rapidly constructing and training complex algorithms;

2. The development of very powerful and inexpensive cloud computing resources and the adoption of powerful graphics processing units (“GPUs”), which are now widely recognized as very well suited to running AI programs. Today, several companies are developing processors designed specifically for running AI algorithms, and, longer term, quantum computing will likely also be used to build AI models. In addition, cluster computing technologies, such as Apache Spark and MapReduce, enable the development of complex machine learning models on very large data sets (see the sketch following this list);

3. The availability of very large structured and unstructured data sets. These data sets are the product of enterprise “big data” projects built on technologies such as Hadoop, the widespread adoption of enterprise SaaS platforms, the growth of on-line retailers, massive social media networks, IoT sensor data, and the proliferation of mobile technologies.
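To make the cluster computing point concrete, here is a minimal sketch of model training at scale with Apache Spark's MLlib; the input path, column names, and model choice are hypothetical stand-ins, not a reference to any specific deployment.

```python
# Minimal sketch: training a logistic regression model on a large, distributed
# data set with Apache Spark's MLlib. The input path, column names, and model
# choice are hypothetical stand-ins.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ml-at-scale-sketch").getOrCreate()

# Load a (potentially very large) table of labelled training examples.
df = spark.read.parquet("hdfs:///data/training_examples.parquet")

# MLlib expects the raw numeric columns combined into one feature vector.
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
model = LogisticRegression(labelCol="label").fit(assembler.transform(df))

print("training accuracy:", model.summary.accuracy)
spark.stop()
```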

In addition to the above key factors, an increasingly open approach to enterprise and cloud-based APIs (application programming interfaces) and competition in cloud computing services have also helped spur the development of AI.

We note that IBM Watson famously competed on Jeopardy! in February 2011 (the IBM Watson group was later formed in January 2014), an event that helped to significantly raise the public profile of recent advances in AI technology. Watson is IBM’s analytical computing system (see Exhibit 9) that is now being used in many different applications across multiple vertical industries.

Finally, in the last two years, universities, incubators, leading software companies, and venture capitalists have all significantly increased their funding of advanced data science research and new AI-related ventures, as evidenced by the proliferation of North American AI technology start-ups shown in Exhibits 10 and 11. In addition to rapid growth in traditional venture capital investment, US corporate venture capital investment in AI is up 7x since 2013, reaching $1.77B in 2016 (126 deals) and $1.8B in 1H17 (88 deals), according to CB Insights. As in previous technology cycles, we anticipate that many of these start-ups will fail or be absorbed by the mainstream (positive-cash-flow) enterprise software industry and other industry leaders.

Should AI be Considered a General Purpose Technology?

As discussed earlier, the field of AI comprises several different sub-disciplines, all of which are continuing to develop and rapidly evolve. We believe that AI should be considered a “general purpose” technology. That is, AI is a very broadly applicable set of technologies that can be used to automate, augment, and enhance countless organizational workflows, processes, decisions, and software applications. In some respects, we believe that AI can also be viewed as a foundational software tool as versatile as a spreadsheet or engineering Computer-Aided Design (“CAD”) tool.

AI solutions can be deployed to outright replace repetitive human tasks, for both skilled and unskilled workers, or to augment the skills of workers involved in complex operations. In general, AI currently works well when automating repetitive, high-volume, low-value-added tasks, such as straightforward consumer product and service inquiries to call centres or internal IT help desks and HR department requests.

To be effective, AI solutions in use today typically require significant amounts of input or training data, such as historical business transactions, images (e.g., medical), or consumer retail purchasing data. With large amounts of data, AI algorithms can be rapidly trained to perform analysis, predict outcomes, provide recommendations, or make decisions. Importantly, data sources (aided by advances in distributed database technologies) are rapidly increasing as connected mobile devices, new IoT solutions, and the adoption of cloud-based applications (social, SaaS, etc.) continue to proliferate. In the future, as AI tools and methods improve, AI will likely require less data to be effective, and several AI technologies will be combined to solve higher-level, more complex process problems.
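As a concrete illustration of this train-then-predict pattern, the following sketch uses scikit-learn; the data is synthetic, standing in for the historical records described above, and the features and model choice are purely hypothetical.

```python
# A minimal sketch of the train-then-predict pattern, using scikit-learn.
# The data here is synthetic; a real deployment would train on historical
# records such as past transactions or purchases.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))         # 10,000 historical records, 8 features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # the outcome we want to predict

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```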

How is AI Being Applied Today?

AI application development is currently taking place in a variety of settings, including AI start-ups (building AI-enabled solutions for vertical and horizontal markets), university-sponsored AI incubators, custom projects within corporate IT departments, and throughout both the traditional enterprise software and consumer-focused software industries. New applications of AI technology are touted in the media almost daily; we note just a few examples here (see also Exhibit 13):

• Improved accuracy in credit card fraud detection (see the sketch following this list);
• Investment portfolio management recommendations;
• Disease diagnosis based on medical imaging;
• Self-driving vehicles;
• Facial recognition for security uses;
• Automated image labelling and classification;
• Call centre automation and support;
• Voice- and text-based language translation;
• Product recommendations for on-line shoppers;
• Supply chain and manufacturing planning optimization;
• Predicting industrial equipment failure; and
• Monitoring large-volume data flows from IoT networks.
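As a sketch of the first example above, fraud detection can be framed as anomaly detection: a model learns the shape of normal transactions and flags outliers. The features (amount, hour of day, distance from home) and the synthetic data below are purely illustrative.

```python
# Hypothetical sketch of fraud detection as anomaly detection: an isolation
# forest learns the shape of normal transactions and flags outliers. The
# features (amount, hour of day, distance from home) and data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(loc=[50, 14, 5], scale=[20, 4, 3], size=(5_000, 3))
fraud = rng.normal(loc=[900, 3, 400], scale=[200, 2, 100], size=(10, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(fraud)  # -1 marks a suspected outlier
print("flagged as suspicious:", int((flags == -1).sum()), "of", len(fraud))
```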

(Source: Cantech Letter)

How is AI Disrupting the Financial Industry?

NEW YORK – Artificial intelligence (AI), along with other financial technology (fintech) innovations, is significantly changing the way financial businesses are run, especially in fields like trading, insurance and risk management, leading the traditional financial industry into a new era.

Robots Replacing Humans

Back in 2000, Goldman Sachs’ New York headquarters employed 600 traders, buying and selling stock on the orders of the investment bank’s clients. Today there are just two equity traders left, as automated trading programs have taken over the rest of the work.

Meanwhile, BlackRock, the world’s biggest money manager, also cut more than 40 jobs earlier this year, replacing some of its human portfolio managers with artificially intelligent, computerized stock-trading algorithms.

Those two big companies are not the only financial institutions replacing human jobs with robots.

By 2025, AI technologies will cut 230,000 jobs in capital markets worldwide, according to a report by the financial services consultancy Opimas.

“Asset managers, analysts, traders, compliance administrators, and back-office data collection and analysis staff are most likely to lose their jobs, because their jobs are easier to replace with automation and AI,” Henry Huang, an associate professor at Yeshiva University’s Sy Syms School of Business, told Xinhua.

“The net effect of this kind of automation will be more about increasing the productivity of the workforce than of robots simply replacing people,” said Richard Lumb, group chief executive of Accenture’s Financial Services operating group.

The best automated firms will outperform their competitors by making existing workforces more productive through AI, he added.

While humans are losing jobs in the financial industry, companies are enjoying the benefits brought by AI technologies.

“Initially AI will add the most value and have the largest impacts in compliance (especially anti-money laundering and know-your-customer functions), cybersecurity and robo-advice,” Lumb told Xinhua.

Wall Street Embraces Fintech

Facing rising pressure from fintech innovations such as AI, Wall Street financial institutions are choosing to embrace the new trend.

“In general, we see the outlook for fintech as strong. Demand for fintech by banks is growing because of regulatory and capital pressures, competition from large technology players like Google and Amazon and the abundance of new security threats,” Lumb said.

The FinTech Innovation Lab, an annual program launched in 2010 by Accenture and the Partnership Fund for New York City to foster fintech growth, has helped New York participants raise more than $440 million.

“The FinTech lab has proven to be a significant program for engagement between entrepreneurial technology companies and New York’s financial industry,” said James D. Robinson III, General Partner and Co-founder of RRE Ventures.

In New York City alone, overall fintech investment has increased from $216 million in 2010 to $2.4 billion in 2016.

“Big new frontiers are only just beginning to open up in fintech – from AI, blockchain and robotics to biometrics, augmented reality and cybersecurity,” Lumb said.

Among all the fintech innovations, blockchain carries the highest expectations.

“Blockchain will change the way people store information: it is authentic, spreads fast and crosses borders, and its decentralized design will let everyone know what other people are doing. The application of blockchain in finance will once again have a revolutionary impact on the industry, just as AI has,” said Huang.

Fintech in China

Although it is hard to tell which country is leading fintech innovation, many experts agree that China has outperformed other countries in fintech services adoption.

“The work in China has been dramatically ahead of anywhere else in the world,” said Jim Bruene, founder of Finovate conferences, which showcase cutting-edge banking and financial technology.

With more intelligent, in-context financial services, especially commerce activities built around social media applications, “China is likely five or six years ahead of the United States,” Bruene told Xinhua.

The latest report by Ernst & Young showed that China’s fintech adoption rate came in at 69 percent, the highest among 20 major markets globally, on an index that measures users’ activity in areas including money transfers, payments, investments, borrowing and insurance.

WeChat Pay, the e-payment platform built inside the 900-million-user Chinese social media application WeChat, is seen by many experts as the future of fintech services.

“Messaging is the next web browser; fintech and all other applications are going to live in a mobile messaging application like WeChat, just as they lived in web browsers,” said Greg Ratner, co-founder and chief technology officer of Troops, a U.S. artificial intelligence startup.

“It is going to be the future and is already happening in China. And I think it will come to the United States in the next five years,” Ratner told Xinhua.

According to Huang’s observation, there is a major difference between China and the United States in their fintech development models.

“In the US, banks are the main driver of fintech innovations, while in China, the BAT companies (Baidu, Alibaba, Tencent) contribute most to fintech development,” Huang said.

“Considering the scale of banks in China, they should play a more important role in fintech innovations,” he suggested.

(Xinhua | Updated: 2017-09-18 10:12)

The Top Companies Working on Education in Virtual Reality and Augmented Reality

Virtual Reality and Augmented Reality are new technologies that have been making all kinds of waves recently, and VR and AR are not only for gaming: several companies are exploring applications for VR and AR in education. An estimated 90% of what we see and do is retained, something that has always been missing from education; students do not retain as much information from simply reading a textbook as they do from more hands-on applications of what they’re learning. That is why VR and AR in education are so beneficial: these technologies engage students in a completely new way that is more fun for them, and it increases retention. Several companies are currently working on innovations in this space, and the work we’ve seen so far is surely a good sign of things to come.

Immersive VR Education

Lecture VR is a VR app by Immersive VR Education which simulates a lecture hall in virtual reality, while adding special effects which can’t be utilized in a traditional classroom setting. Lectures are accompanied by images, videos, and immersive experiences which enhance the lesson. Imagine learning about Apollo 11: while the instructor is lecturing, they can transform the classroom into the interior of the spacecraft they’re lecturing about, adding much more to the lecture than would traditionally be possible. Another major asset of this type of learning is that students and professors can remote in from anywhere in the world, which makes education more accessible on a global level.

Unimersiv

Unimersiv is a VR learning platform which releases educational content on a monthly basis. The content in Unimersiv’s app is more individualized and immersive, and at the moment the three educational experiences available on the app are: Explore the International Space Station, Anatomy VR, and Learn about Stonehenge in Virtual Reality. The diversity in this content shows the true potential in the range of things we can learn by using VR in education; and it also shows that Unimersiv is dedicated to creating educational content on a wide range of topics, which makes their potential truly unlimited.

Google Expeditions Pioneer Program

Google is also making waves in the space of VR education with their exciting Expeditions Pioneer Program. The purpose of the program is for Expeditions teams from Google to visit schools around the world and provide everything teachers need to take their students on a journey anywhere; the team will also assist the teachers in setting up and utilizing this technology. The VR experiences are meant to be like a really cool field trip which teachers would ordinarily never be able to take their students on; whether it’s to an underwater coral reef, or into a city like Barcelona, the potential is truly limitless here. The way the app works is that the students and the teacher will see the same things and be in the same session, but the teacher will be able to lecture and highlight certain things which are relevant to the lesson.

Alchemy VR

Alchemy VR is creating immersive educational experiences on an impressive scale. The experiences on Alchemy VR are like a narrative being told to the user, where they get to see and experience a myriad of different things; one such example is exploring the Great Barrier Reef. What makes Alchemy VR stand out in this space is their partnerships, which contribute to the level of content they produce. Alchemy VR is partnered with Samsung, Google Expeditions, Sony, HTC, the Natural History Museum in London, and the Australian Museum in Sydney. Alchemy VR has actually made several projects for Google Expeditions and is set to release experiences on pyramids and the human body soon.

Discovery VR

Discovery has been telling stories like no one else for over 30 years, and now they are entering the modern era with their Discovery VR app. Some of your favorite content is available on the Discovery VR app, so you can experience some of your favorite Discovery shows in a whole new way. Discovery bringing content to virtual reality is great for education in VR: Discovery has been creating educational content for years, they’re a household name, and when people hear that you can watch Deadliest Catch or Mythbusters in virtual reality, it might just get more people interested in the potential applications of virtual reality for education.

zSpace

zSpace is unique in the space of education in VR because of the technology they’re using. zSpace feels that VR technology should not necessarily be as antisocial and one-sided as everyone putting on their own headset. zSpace has monitors which work similarly to 3D movies: a group of people wear glasses which are similar in feel to 3D glasses, and these glasses make the content come off the screen. Additionally, zSpace users have a pen which they use to manipulate the on-screen content. zSpace currently has content available for STEM education, medical training, and more general math and science experiences.

Curiscope

Curiscope is a startup company focusing on education in VR, and their Virtual Tee is already turning heads. The way it works is that one person wears a t-shirt while another person with a smartphone launches the app and is able to learn about the human body in a unique way. This is a unique way to utilize AR in education and it is surely a sign of good things to come from Curiscope.

WoofbertVR

WoofbertVR is focusing on bringing art to VR technology; they are looking to reshape the landscape of storytelling and revolutionize the way we look at art and culture. WoofbertVR offers a unique way to visit an art gallery, either individually or in a classroom setting, giving students the opportunity to take a field trip they would never normally get to experience. Exploring a gallery using WoofbertVR’s app is nearly identical to exploring it in person: you can follow any path you choose, zoom in on paintings you are interested in, and listen to audio segments on certain exhibits. The only difference is that you do not need to take an expensive trip to visit a gallery that has some of your favorite paintings or pieces; now it can be done from the comfort of your own home.

Nearpod

Nearpod is an organization that combines VR and AR technology with traditional lesson plans for a more immersive, technology-driven approach to learning. Nearpod utilizes 360-degree photos and videos in lesson plans, and also has something akin to a PowerPoint for students to use alongside the 360-degree photos and videos. There are also options to have students answer questions by typing into their laptop or tablet. Nearpod VR is giving us an idea of what the classroom of the future will look like, with lesson plans enhanced by VR and AR technology and students more engaged by this new technology.

EON Reality

EON Reality is looking to change how teachers utilize technology in the classroom. Students and teachers can create blended learning environments with the EON Creator, which is an interactive tool that allows users to combine 3D content with videos, sound effects, notes, PowerPoint, and more. EON also has an EON Experience Portal where teachers and students can upload their work and also explore the creations of other teachers and students. There is also the EON Coliseum, where students and teachers can meet virtually, which makes collaboration possible at any time and from nearly any location.

Schell Games

Schell Games is one of the largest independent game development companies in the United States, with their forte being in what they refer to as transformation games, or games that change people for the better. Among the many games they have developed is Water Bears VR, a VR puzzle game aimed at kids that promotes systems thinking and spatial recognition. There has always been a challenge in education with engaging students, especially kids, and making learning fun has always been a practical solution to that challenge. With companies like Schell Games producing fun VR games for kids, which have the added bonus of being educational, we find a fun solution to the challenge of getting kids interested in learning.

Gamar

Gamar is a company that is attempting to make museum visits more modern through the use of Augmented Reality. Gamar’s content consists of interactive experiences and supplemental information which can be paired with certain locations to enhance museum visits. One of the chief rules of a museum is not to touch anything, but with Gamar’s app museum attendees will be able to point their smartphone or tablet at an exhibit at one of their paired locations and get additional information on that exhibit. Museum visits can be tedious for kids when all they want to do is pull out their smartphones and play with them. Gamar has provided a solution to this problem by finding a way to engage kids in museums and doing it in a way that’s interesting to them and on their level. AR and VR technology are great tools for education, but if we can’t get kids to use them then it’s all for naught, which is why we need organizations like Gamar who find ways to get kids to utilize this technology in educational ways.

ThingLink

A unique take on 360-degree images has been developed for classroom use by ThingLink. Their technology allows teachers, or students, to annotate 360-degree images with various interactive tags, or links to different videos, applications or third-party tools. Imagine being able to look at a 360-degree picture of London and click on various things throughout the picture: one tag might be a video enhancing something you are looking at, while another tag might be a writing assignment or a place to record notes. This kind of all-in-one learning experience will help prevent the breaks in immersion which typically occur in learning that utilizes VR or AR technology. If students do not have to look away from a 360-degree image in order to take notes or do an assignment, then the immersion will never truly be broken. ThingLink’s technology encourages students and teachers to utilize AR in the classroom for a unique one-stop experience, and it has the potential to shape adoption of VR and AR technology in the classroom on a broad scale due to its appeal to students and teachers alike, not just one or the other.

Virtual Reality and Augmented Reality are the next big step forward in education, and these technologies are good for educators and students alike. Educators will be able to capture the attention of their students like never before and get them more actively involved in the classroom. Students are always trying to do what’s cool and what’s new; utilizing VR and AR technology in classrooms will appeal to students because it’s the latest and greatest thing and it’s very cool to use. Using VR and AR technology will not only make learning more interesting and exciting to students but it will also increase retention because it engages them in a way which textbooks simply cannot compete with. These technologies give us a window into the future of education, and it probably won’t be long until it is standard for classrooms to come equipped with some form of VR or AR technology.

(Source: Touchstone Research)

Why Data Is The New Oil

Artificial intelligence is only as good as the data it crunches.

The field of artificial intelligence is red hot thanks in part to big companies like Google, Facebook, and Microsoft using AI-related techniques to train computers to recognize objects in photos and understand human language.

But these companies were only able to train their computers to perform these difficult feats because they have the enormous quantities of data that’s required. Companies that want to use artificial intelligence techniques like machine learning algorithms to improve their businesses will need to either have or acquire copious amounts of data, which will only become more valuable as the field of AI advances.

That’s one of the takeaways from a panel on artificial intelligence Monday in Aspen, Colo. during Fortune’s annual Brainstorm Tech conference.

“Data is the new oil,” Shivon Zilis, a partner with the venture capital firm Bloomberg Beta, said about data’s increasing value.

Although companies like Google, Facebook, and Amazon have open sourced their own artificial intelligence software so any programmer can access the code and use it to build their own apps, they are not making the necessary data available, Zilis explained. These companies are releasing these AI software toolkits for free so they can recruit more engineers to their companies, she said.

When it comes to competition, it’s the data these companies possess that’s more important than the actual AI software tools they use and release to the public.

David Kenny, the general manager of IBM’s Watson data crunching service, agreed with Zilis and said “the value of data goes up every day AI advances.”

“Data will become a currency,” Kenny said. He also explained that only 20% of the world’s information is stored on the Internet, with the other 80% being privately held within companies and organizations.

Additionally, Zilis discussed the history of so-called “AI winters,” in which the hype around AI failed to meet the general public’s expectations, leading to multi-year declines in AI research during the mid-1970s and from the late 1980s to the early 1990s.

Today, however, the rise of the cloud and the ability to access tremendous amounts of computing power to analyze huge amounts of data has the potential to strengthen existing AI-related algorithms and lead to new, more powerful ones. Because of these technological developments, Zilis does not believe companies will lose interest in AI.

“Are we going to hit a winter?” asked Zilis. “Absolutely not.”

(Source: Fortune Magazine, written by Jonathan Vanian)

Using deep learning to analyze genetic mutations

The full article, written by David Beyer, can be found here: Deep learning meets genome biology

  • The application of deep learning to genomic medicine is off to a promising start; it could impact diagnostics, intensive care, pharmaceuticals and insurance.
  • The “genotype-phenotype divide”—our inability to connect genetics to disease phenotypes—is preventing genomics from advancing medicine to its potential.
  • Deep learning can bridge the genotype-phenotype divide, by incorporating an exponentially growing amount of data, and accounting for the multiple layers of complex biological processes that relate the genotype to the phenotype.
  • Deep learning has been successful in applications where humans are naturally adept, such as image, text, and speech understanding. The human mind, however, isn’t intrinsically designed to understand the genome. This gap necessitates the application of “super-human intelligence” to the problem.
  • Efforts in this space must account for underlying biological mechanisms; overly simplistic, “black box” approaches will drive only limited value.
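As a loose illustration of the approach these points describe (and not Deep Genomics' actual model), the sketch below wires a one-hot-encoded DNA sequence into a small 1D convolutional network that outputs a phenotype probability; all shapes, data, and labels are synthetic assumptions.

```python
# Illustrative sketch only (not Deep Genomics' actual model): a small 1D
# convolutional network mapping a one-hot-encoded DNA sequence to a
# phenotype probability. All shapes, data, and labels are synthetic.
import torch
import torch.nn as nn

def one_hot(seq: str) -> torch.Tensor:
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    t = torch.zeros(4, len(seq))
    for i, base in enumerate(seq):
        t[idx[base], i] = 1.0
    return t

model = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=8),  # scan for short sequence motifs
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),          # keep each filter's strongest match
    nn.Flatten(),
    nn.Linear(16, 1),
    nn.Sigmoid(),                     # probability of the phenotype
)

x = one_hot("ACGT" * 25).unsqueeze(0)  # batch of one 100-base sequence
print(model(x))  # untrained output; training would use labelled genomes
```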

(Source: Deep Genomics)

Liberating Machine Vision From the Machines

Until recently, computer vision — used most widely in manufacturing — and mainstream computing technology have existed in parallel worlds. Along with other factory floor technologies, computer vision tends to be machine-specific, hardware driven, and makes little if any use of the Internet. Many of the advances we take for granted in modern computing — ubiquitous connectivity, unlimited data storage in the cloud, insights drawn from massive unstructured data sets — have yet to be applied systematically to the factory floor in general and to computer vision specifically.

It’s no surprise when you consider that until recently most computer vision software was written by computer vision hardware makers, built on embedded systems without open APIs. What comes to mind when you think of the software that came bundled with your scanner, your Wi-Fi router, your car’s navigation system? Balky, inflexible and unintuitive. The software isn’t much more than a utility to run the hardware.

But this closed world is being broken open by a convergence of emerging technologies:

  • The proliferation of cheap, high pixel-density camera sensors
  • Open implementations of vision algorithms, machine learning, and statistical tools
  • Large amounts of cheap computing power, becoming virtually limitless in the cloud

These technologies offer all the raw materials needed for a massive shift in how computer vision is practiced. It’s a shift from focusing on the raw material of visual data — the pixels and bitmaps generated by specific cameras — to extracting data from images and using statistical and data science techniques to draw insights.

This new approach to computer vision has a powerful application amid an American manufacturing renaissance emphasizing rapid product cycles and mass customization. Whereas the archetypal American factory was built around systematic, repeatable function, modern manufacturing is about flexibility, adaptability and high efficiency. We’ve gone from Henry Ford’s “any colour he wants so long as it is black” to Google’s Moto X phone — customer-configured, manufactured in the U.S. and delivered within four days.

Unrelenting Quality Demands

But that need for flexibility on the manufacturing line is in tension with unrelenting quality demands that manufacturers face across industries and down supply chains. Despite huge investments in quality control, automakers recalled nearly as many cars as they sold in the U.S. in 2012. Ford and GM made warranty payments of $5.7 billion in 2012, more than half of the $10.5 billion they reported in net income. Automakers are now paying suppliers prices based on benchmarks like defects per million, terminating those who fall below thresholds, and pushing liability for warranty claims down to their suppliers.

While automation has transformed much of manufacturing, a surprising amount of quality control is still done by hand or otherwise relies on human judgement. Many types of inspection require visual evaluation, but manufacturers’ experience with computer vision in quality control has been a frustrating one. Walk into a factory and ask the manager about computer vision, and you are likely to hear a variant of, “Oh yeah, we tried that, it didn’t work very well, we had to throw it out.”

Existing machine vision uses a 30-year-old architecture that’s capital-intensive and severely constrained in its abilities. Today’s computer vision systems operate as stand-alone islands, rarely connected to the Internet. Every time needs change, each installation has to be manually reprogrammed, unit by unit.

Worse still, little data is kept, making it difficult to spot trends or find correlations among multiple variables. Most manufacturing quality inspection by machine vision today is pass/fail. If the initial inspections of a production run pass the quality inspection, the machines are turned on and the testing data overwritten.

The New Computer Vision

The new computer vision, liberated from its hardware shackles and empowered by connectivity, unlimited data storage and Big Data-style statistical analysis, is beginning to change the role of vision in manufacturing. Instead of being a reactive tool to detect defects, computer vision is becoming a data collection tool supporting defect prevention initiatives, improving understanding of complex processes, and enabling greater collaboration across entire supply chains in real time.

With modern web services, once the data is collected it is easily aggregated into dashboards and distributed to production workers, quality engineers, and management, locally or around the globe. Manufacturers can share data with supply chain partners, making it easier to monitor their suppliers or to satisfy reporting requirements for customers.

One of our customers, a large manufacturer of high-quality bolts and other fasteners for automakers, is bringing this vision to life. Their system uses computer vision to analyze the grain pattern of bolts. If the pattern is wrong — if the grain lines end on a load-bearing surface — the bolt head can shear off when a factory worker torques it down, or worse, when it’s already holding an engine block in place.

The company is capturing images using a $100 scanner purchased at Best Buy. All the intelligence is in the software, running remotely on Amazon’s cloud computing platform. The system compares each image to thousands of other metal grain photos stored in the cloud, looking for patterns that correlate with part failure.
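A hedged sketch of what such a comparison step might look like (illustrative only, not Sight Machine's actual implementation): each grain image is reduced to a simple texture feature vector, then scored against a labelled reference library by nearest-neighbour lookup.

```python
# Illustrative sketch (not Sight Machine's implementation): reduce each grain
# image to a crude texture descriptor, then label a new scan by its nearest
# neighbours in a reference library with known failure outcomes.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def grain_features(img: np.ndarray) -> np.ndarray:
    # Histogram of horizontal intensity gradients as a simple texture summary.
    grad = np.diff(img.astype(float), axis=1)
    hist, _ = np.histogram(grad, bins=16, range=(-255, 255), density=True)
    return hist

rng = np.random.default_rng(2)
library = [rng.integers(0, 256, size=(64, 64)) for _ in range(1_000)]
labels = rng.integers(0, 2, size=1_000)  # 1 = the part later failed

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(np.array([grain_features(im) for im in library]), labels)

new_scan = rng.integers(0, 256, size=(64, 64))  # image from the cheap scanner
print("estimated failure risk:", knn.predict_proba([grain_features(new_scan)])[0][1])
```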

The bolt maker is now exploring extending its computer vision system to its steel supplier, which will capture images of metal grain from each batch of steel rods it ships to the fastener maker. The fastener maker will then be able to analyze increasingly massive data sets to correlate grain patterns in the steel rods with quality measurements in the finished bolts.

Instead of examining only a single station, large data sets let companies trace complex interactions down the production line and across the supply chain. Upstream stations may produce parts that are technically within tolerance, but when certain ranges of acceptable variation are combined, they cause downstream defects after installation.

For our bolt-making customer, the raw material (a steel rod) and the batch of bolts made from that rod may each be well within spec, but retrospective data analysis may show that certain combinations of grain pattern in the steel rods lead to higher failure rates on bolts used for specific applications.
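A minimal sketch of this kind of retrospective analysis, assuming hypothetical column names and synthetic data: supplier measurements are joined to downstream quality outcomes and then grouped to expose failure-prone combinations.

```python
# Minimal sketch of the retrospective analysis, assuming hypothetical column
# names and synthetic data: join supplier measurements to downstream quality
# outcomes, then group to expose failure-prone combinations.
import pandas as pd

rods = pd.DataFrame({
    "rod_batch": ["R1", "R2", "R3", "R4"],
    "grain_pattern": ["axial", "axial", "radial", "radial"],
})
bolts = pd.DataFrame({
    "rod_batch": ["R1", "R2", "R3", "R4"] * 50,
    "application": ["engine", "chassis"] * 100,
    "failed": [0, 0, 1, 0] * 50,
})

joined = bolts.merge(rods, on="rod_batch")
failure_rates = joined.groupby(["grain_pattern", "application"])["failed"].mean()
print(failure_rates)  # here, radial grain in engine applications stands out
```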

As automakers adopt the system, it will gain even more power. Should an automaker report that the fastener-maker’s bolts are breaking and leading to warranty repairs, the parts supplier now has the analytical tools to determine the source of the problem. They can run analysis to determine whether the failed bolts came from a particular batch of steel rods, or were made on a day when their line was adjusted to a specific tolerance – or whether the problem wasn’t with the bolt itself, but rather with the worker on the left side of the assembly line who consistently overtorques the engine bolts.

Once the captured data is in the cloud, such systems can store an unlimited amount of data indefinitely, for reanalysis and retrieval anytime. They let plants run correlations over time, track trends and identify root causes, and as new variables of interest arise, go back and analyze previously acquired data.

As each plant gets smarter, the whole system gets smarter. Like Google learning more about consumers with their every search and click, we’re able to aggregate our learnings from quality issues common across industries.

Ultimately, vision can turn physical world challenges into Big Data problems. We know how to solve these Big Data problems better and better every day.

(Written by Jon Sobel, CEO and co-founder of Sight Machine Inc.)

The Future of Computer Vision

Within 20 years, computer vision will be a commodity component within the fabric of the worldwide analytics infrastructure, similar to the telecommunications infrastructure of today, containing distributed analytics and database services. Application-specific analytics and intelligence will be added to all devices by default within the Internet of All Things (IoAT), including visual, audio, textual, numerical and sensor analytics. A few new Neural Computing (NC) architectures will be standardized in silicon, applicable to all forms of data.

Major government and corporate initiatives are currently underway, similar to the space race, to create artificial brains which will contribute to the NC of the future. Future systems will contain application-specific mixtures of NCs, CPUs, GPUs, sensor processors, and IO. The underlying technology will be a near zero-cost commodity, and the revenue will come from services, similar to phone or cable services.

Imaging devices will be more accurate, with more on-chip processing power for image processing and analytics. Image processing algorithms will be similar to those used today, with no major innovations expected. The computer vision community will standardize on a few feature descriptors and feature-learning architectures, enabling a generic NC platform for application-specific innovation and market growth.

Computer vision and analytics systems will be far superior to the primitive deep learning models in use today, combining deep learning and multivariate wide learning, with improved feature descriptor models and comprehensive training protocols enabled by ubiquitous databases containing labeled samples of any type of image or data, such as audio, text, financial records, and information about a person, place or thing. Personal privacy will virtually disappear.

Within 20 years, most mobile and hand-held devices will contain NCs connected to remote analytics services to enable personal, business, commercial, governmental, military, law enforcement and legal organizations to perform combined audio, visual, historical, and textual evaluations to enable shopping, tourism, employment interviews, banking, commerce, law enforcement or housing applications.

Neural computers will evaluate facial expression, body language and clothing style for emotions and intentions, as well as audio evaluation of the tone and rhythm of spoken words for latent intentions and assumptions, including analysis of the words from email, texts, blogs, and historical records from local governments, academic institutions, purchasing records, and other financial transactions.

The analytics will provide scenarios and what-if analysis and prediction of future behavior within a set of circumstances, for example allowing a commercial enterprise to design situations or opportunities to suit their preferences and influence purchasing behavior, or by allowing governments to develop policies and propaganda to test the reactions of a population, their preferences, intentions and personal beliefs.

Computer vision will be a central component of the future of the analytics infrastructure. Imagine government policy and business plans being designed around the predictions generated by an NC to form future programs and evaluation of each program by another NC to form recommendations, with the best recommendation being chosen by another NC to send to the final decision authority – a human…or an NC?

(Source: www.KrigResearch.com)