How Brain-computer Interfaces Work

As the power of modern computers grows alongside our understanding of the human brain, we move ever closer to making some pretty spectacular science fiction into reality. Imagine transmitting signals directly to someone’s brain that would allow them to see, hear or feel specific sensory inputs. Consider the potential to manipulate computers or machinery with nothing more than a thought. It isn’t about convenience — for severely disabled people, development of a brain-computer interface (BCI) could be the most important technological breakthrough in decades. In this article, we’ll learn all about how BCIs work, their limitations and where they could be headed in the future.

The Electric Brain

The reason a BCI works at all is because of the way our brains function. Our brains are filled with neurons, individual nerve cells connected to one another by dendrites and axons. Every time we think, move, feel or remember something, our neurons are at work. That work is carried out by small electric signals that zip from neuron to neuron as fast as 250 mph [source: Walker]. The signals are generated by differences in electric potential carried by ions on the membrane of each neuron.

Although the paths the signals take are insulated by something called myelin, some of the electric signal escapes. Scientists can detect those signals, interpret what they mean and use them to direct a device of some kind. It can also work the other way around. For example, researchers could figure out what signals are sent to the brain by the optic nerve when someone sees the color red. They could rig a camera that would send those exact signals into someone’s brain whenever the camera saw red, allowing a blind person to “see” without eyes.

One of the biggest challenges facing brain-computer interface researchers today is the basic mechanics of the interface itself. The easiest and least invasive method is a set of electrodes — a device known as an electroencephalograph (EEG) — attached to the scalp. The electrodes can read brain signals. However, the skull blocks a lot of the electrical signal, and it distorts what does get through.

To get a higher-resolution signal, scientists can implant electrodes directly into the gray matter of the brain itself, or on the surface of the brain, beneath the skull. This allows for much more direct reception of electric signals and allows electrode placement in the specific area of the brain where the appropriate signals are generated. This approach has many problems, however. It requires invasive surgery to implant the electrodes, and devices left in the brain long-term tend to cause the formation of scar tissue in the gray matter. This scar tissue ultimately blocks signals.

Regardless of the location of the electrodes, the basic mechanism is the same: The electrodes measure minute differences in the voltage between neurons. The signal is then amplified and filtered. In current BCI systems, it is then interpreted by a computer program, although you might be familiar with older analogue encephalographs, which displayed the signals via pens that automatically wrote out the patterns on a continuous sheet of paper.
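
To make that amplify-and-filter step concrete, here is a minimal sketch in Python, assuming NumPy and SciPy are available; the sampling rate, gain, and frequency band are illustrative choices rather than values from any particular BCI system.

```python
# A minimal sketch of the amplify-and-filter stage described above.
# The sampling rate, gain, band edges, and synthetic input are illustrative,
# not taken from any specific BCI system.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0    # sampling rate in Hz (assumed)
GAIN = 1e4    # amplifier gain turning microvolt-scale signals into volts (illustrative)

def amplify_and_filter(raw_uv, low_hz=8.0, high_hz=30.0, order=4):
    """Amplify a raw scalp voltage trace (in microvolts) and band-pass it.

    The 8-30 Hz band covers the mu/beta rhythms often used in motor-imagery
    BCIs; other systems choose different bands.
    """
    amplified = raw_uv * GAIN
    b, a = butter(order, [low_hz / (FS / 2), high_hz / (FS / 2)], btype="band")
    return filtfilt(b, a, amplified)   # zero-phase filtering

if __name__ == "__main__":
    t = np.arange(0, 2.0, 1.0 / FS)
    # Synthetic trace: a 12 Hz rhythm buried in slow drift and noise.
    raw = 5 * np.sin(2 * np.pi * 12 * t) + 50 * np.sin(2 * np.pi * 0.5 * t) + np.random.randn(t.size)
    clean = amplify_and_filter(raw)
    print(clean[:5])
```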

In the case of a sensory input BCI, the function happens in reverse. A computer converts a signal, such as one from a video camera, into the voltages necessary to trigger neurons. The signals are sent to an implant in the proper area of the brain, and if everything works correctly, the neurons fire and the subject receives a visual image corresponding to what the camera sees.

Another way to measure brain activity is with magnetic resonance imaging (MRI). An MRI machine is a massive, complicated device. It produces very high-resolution images of brain activity, but it can’t be used as part of a permanent or semipermanent BCI. Researchers use it to get benchmarks for certain brain functions or to map where in the brain electrodes should be placed to measure a specific function. For example, if researchers are attempting to implant electrodes that will allow someone to control a robotic arm with their thoughts, they might first put the subject into an MRI scanner and ask them to think about moving their actual arm. The MRI will show which area of the brain is active during arm movement, giving researchers a clearer target for electrode placement.

So, what are the real-life uses of a BCI? Read on to find out the possibilities.

One of the most exciting areas of BCI research is the development of devices that can be controlled by thoughts. Some of the applications of this technology may seem frivolous, such as the ability to control a video game by thought. If you think a remote control is convenient, imagine changing channels with your mind.

However, there’s a bigger picture — devices that would allow severely disabled people to function independently. For a quadriplegic, something as basic as controlling a computer cursor via mental commands would represent a revolutionary improvement in quality of life. But how do we turn those tiny voltage measurements into the movement of a robotic arm?

Early research used monkeys with implanted electrodes. The monkeys used a joystick to control a robotic arm, and scientists measured the signals coming from the electrodes. Eventually, they changed the controls so that the robotic arm was being controlled only by the signals coming from the electrodes, not the joystick.

A more difficult task is interpreting the brain signals for movement in someone who can’t physically move their own arm. With a task like that, the subject must “train” to use the device. With an EEG or implant in place, the subject would visualize closing his or her right hand. After many trials, the software can learn the signals associated with the thought of hand-closing. Software connected to a robotic hand is programmed to receive the “close hand” signal and interpret it to mean that the robotic hand should close. At that point, when the subject thinks about closing the hand, the signals are sent and the robotic hand closes.
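
A minimal sketch of that training step is shown below, assuming scikit-learn and using synthetic stand-in data; a real system would use recorded EEG epochs and a richer feature pipeline, but the shape of the procedure (label trials, extract a feature, fit a classifier) is the same.

```python
# A minimal sketch of the "training" step described above: repeated trials are
# labelled ("imagined hand close" vs. "rest"), a simple feature is extracted
# from each trial, and a linear classifier learns to separate the two.
# All data here are synthetic stand-ins; a real system would use recorded EEG epochs.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def band_power(epochs):
    """Log-variance per channel, a common motor-imagery feature."""
    return np.log(np.var(epochs, axis=-1))

# Synthetic epochs: 50 trials per class x 8 channels x 500 samples.
rest = rng.normal(0.0, 1.0, (50, 8, 500))
close_hand = rng.normal(0.0, 1.3, (50, 8, 500))   # imagined movement changes rhythm power

X = np.vstack([band_power(rest), band_power(close_hand)])
y = np.array([0] * 50 + [1] * 50)                 # 0 = rest, 1 = "close hand"

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```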

A similar method is used to manipulate a computer cursor, with the subject thinking about forward, left, right and back movements of the cursor. With enough practice, users can gain enough control over a cursor to draw a circle, access computer programs and control a TV [source: Ars Technica]. It could theoretically be expanded to allow users to “type” with their thoughts.
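
Turning those decoded intents into cursor motion can be as simple as mapping each class to a small position change. In the sketch below, `decode_intent` is a hypothetical placeholder standing in for a trained classifier like the one above; the step size and window loop are illustrative.

```python
# A minimal sketch of turning decoded intents into cursor movement.
# `decode_intent` is a hypothetical stand-in for a trained classifier,
# returning one of four labels per EEG window.
import random

STEP = 10  # pixels per decoded command (illustrative)
MOVES = {"forward": (0, -STEP), "back": (0, STEP), "left": (-STEP, 0), "right": (STEP, 0)}

def decode_intent(window):
    # Placeholder: a real system would classify the EEG window here.
    return random.choice(list(MOVES))

x, y = 400, 300                  # starting cursor position
for _ in range(20):              # one decoded command per window of EEG
    dx, dy = MOVES[decode_intent(None)]
    x, y = x + dx, y + dy
print("final cursor position:", (x, y))
```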

Once the basic mechanism of converting thoughts to computerized or robotic action is perfected, the potential uses for the technology are almost limitless. Instead of a robotic hand, disabled users could have robotic braces attached to their own limbs, allowing them to move and directly interact with the environment. This could even be accomplished without the “robotic” part of the device. Signals could be sent to the appropriate motor control nerves in the hands, bypassing a damaged section of the spinal cord and allowing actual movement of the subject’s own hands.

The most common and oldest way to use a BCI is a cochlear implant. For the average person, sound waves enter the ear and pass through several tiny organs that eventually pass the vibrations on to the auditory nerves in the form of electric signals. If the mechanism of the ear is severely damaged, that person will be unable to hear anything. However, the auditory nerves may be functioning perfectly well. They just aren’t receiving any signals.

A cochlear implant bypasses the nonfunctioning part of the ear, processes the sound waves into electric signals and passes them via electrodes right to the auditory nerves. The result: A previously deaf person can now hear. He might not hear perfectly, but it allows him to understand conversations.
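
The core of that processing step can be sketched as a filterbank: incoming audio is split into frequency bands, and each band's energy becomes the stimulation level for one electrode. The sketch below assumes NumPy and SciPy; the eight-channel layout, band edges, and sampling rate are illustrative, not the specification of any real implant.

```python
# A minimal sketch of cochlear-implant-style processing: audio is split into
# frequency bands and each band's energy becomes a stimulation level for one
# electrode. The 8-channel layout and band edges are illustrative only.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000                            # audio sampling rate (assumed)
EDGES = np.geomspace(200, 7000, 9)    # 8 logarithmically spaced bands

def electrode_levels(audio):
    """Return one energy value per electrode channel for a short audio frame."""
    levels = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        sos = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band", output="sos")
        band = sosfiltfilt(sos, audio)
        levels.append(np.sqrt(np.mean(band ** 2)))   # RMS energy of the band
    return np.array(levels)

if __name__ == "__main__":
    t = np.arange(0, 0.05, 1 / FS)
    frame = np.sin(2 * np.pi * 440 * t)              # a 440 Hz test tone
    print(np.round(electrode_levels(frame), 4))
```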

The processing of visual information by the brain is much more complex than that of audio information, so artificial eye development isn’t as advanced. Still, the principle is the same. Electrodes are implanted in or near the visual cortex, the area of the brain that processes visual information from the retinas. A pair of glasses holding small cameras is connected to a computer and, in turn, to the implants. After a training period similar to the one used for remote thought-controlled movement, the subject can see. Again, the vision isn’t perfect, but refinements in technology have improved it tremendously since it was first attempted in the 1970s. Jens Naumann was the recipient of a second-generation implant. He was completely blind, but now he can navigate New York City‘s subways by himself and even drive a car around a parking lot [source: CBC News]. In terms of science fiction becoming reality, this process gets very close. The terminals that connect the camera glasses to the electrodes in Naumann’s brain are similar to those used to connect the VISOR (Visual Instrument and Sensory Organ) worn by blind engineering officer Geordi La Forge in the “Star Trek: The Next Generation” TV show and films, and they’re both essentially the same technology. However, Naumann isn’t able to “see” invisible portions of the electromagnetic spectrum.

Although we already understand the basic principles behind BCIs, they don’t work perfectly. There are several reasons for this.

  1. The brain is incredibly complex. To say that all thoughts or actions are the result of simple electric signals in the brain is a gross understatement. There are about 100 billion neurons in a human brain [source: Greenfield]. Each neuron is constantly sending and receiving signals through a complex web of connections. There are chemical processes involved as well, which EEGs can’t pick up on.
  2. The signal is weak and prone to interference. EEGs measure tiny voltage potentials. Something as simple as the blinking eyelids of the subject can generate much stronger signals (a simple safeguard against such artifacts is sketched after this list). Refinements in EEGs and implants will probably overcome this problem to some extent in the future, but for now, reading brain signals is like listening to a bad phone connection. There’s lots of static.
  3. The equipment is less than portable. It’s far better than it used to be; early systems were hardwired to massive mainframe computers. But some BCIs still require a wired connection to the equipment, and those that are wireless require the subject to carry a computer that can weigh around 10 pounds. Like all technology, this equipment will surely become lighter and less cumbersome in the future.
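
As a concrete illustration of the interference problem in item 2, the sketch below simply drops any epoch whose peak amplitude looks more like an eye blink than brain activity; the threshold and the synthetic data are illustrative only.

```python
# A minimal sketch of the interference problem from item 2: eye blinks produce
# deflections far larger than the brain rhythms of interest, so one simple
# safeguard is to discard any epoch whose peak amplitude exceeds a threshold.
# The 100-microvolt threshold and synthetic data are illustrative only.
import numpy as np

BLINK_THRESHOLD_UV = 100.0

def reject_artifacts(epochs):
    """Keep only epochs whose peak absolute amplitude stays below the threshold."""
    peaks = np.max(np.abs(epochs), axis=(1, 2))
    return epochs[peaks < BLINK_THRESHOLD_UV]

rng = np.random.default_rng(1)
epochs = rng.normal(0, 10, (20, 8, 250))      # 20 epochs x 8 channels x 1 s at 250 Hz
epochs[3, 0, 100:120] += 400                  # simulate a blink on one epoch
print(len(reject_artifacts(epochs)), "of", len(epochs), "epochs kept")
```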

A few companies are pioneers in the field of BCI. Most of them are still in the research stages, though a few products are offered commercially.

  • Neural Signals is developing technology to restore speech to disabled people. An implant in an area of the brain associated with speech (Broca’s area) would transmit signals to a computer and then to a speaker. With training, the subject could learn to think each of the 39 phonemes in the English language and reconstruct speech through the computer and speaker [source: Neural Signals].
  • NASA has researched a similar system, although it reads electric signals from the nerves in the mouth and throat area, rather than directly from the brain. They succeeded in performing a Web search by mentally “typing” the term “NASA” into Google [source: New Scientist].
  • Cyberkinetics Neurotechnology Systems is marketing the BrainGate, a neural interface system that allows disabled people to control a wheelchair, robotic prosthesis or computer cursor [source: Cyberkinetics].
  • Japanese researchers have developed a preliminary BCI that allows the user to control their avatar in the online world Second Life [source: Ars Technica].

Artificial intelligence is learning to read your mind—and display what it sees

 

Artificial intelligence has taken us one baby step closer to the mind-reading machines of science fiction. Researchers have developed “deep learning” algorithms—roughly modeled on the human brain—to decipher, you guessed it, the human brain. First, they built a model of how the brain encodes information. As three women spent hours viewing hundreds of short videos, a functional MRI machine measured signals of activity in the visual cortex and elsewhere. A popular type of artificial neural network used for image processing learned to associate video images with brain activity. As the women watched additional clips, the algorithm’s predicted activity correlated with actual activity in a dozen brain regions. It also helped the scientists visualize which features each area of the cortex was processing. Another network decoded neural signals: Based on a participant’s brain activity, it could predict with about 50% accuracy what she was watching (by selecting one of 15 categories including bird, airplane, and exercise). If the network had trained on data from a different woman’s brain, it could still categorize the image with about 25% accuracy, the researchers report this month in Cerebral Cortex. The network could also partially reconstruct what a participant saw, turning brain activity into pixels, but the resulting images were little more than white blobs. The researchers hope their work will lead to the reconstruction of mental imagery, which uses some of the same brain circuits as visual processing. Translating from the mind’s eye into bits could allow people to express vivid thoughts or dreams to computers or to other people without words or mouse clicks, and could help those with strokes who have no other way to communicate.

(Source: Science/AAAS, by Matthew Hutson)
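
The decoding direction described in the study above (predicting which of 15 categories a participant was watching from her brain activity) can be illustrated with a small neural network. The sketch below uses PyTorch and synthetic data; the voxel count, layer sizes, and training loop are arbitrary stand-ins, not details from the paper.

```python
# A minimal sketch of the decoding direction described above: a small neural
# network maps a vector of visual-cortex voxel responses to one of 15 video
# categories. Voxel count, layer sizes, and the synthetic data are illustrative;
# the published work used far richer models and real fMRI recordings.
import torch
import torch.nn as nn

N_VOXELS, N_CLASSES = 2000, 15
decoder = nn.Sequential(
    nn.Linear(N_VOXELS, 256), nn.ReLU(),
    nn.Linear(256, N_CLASSES),
)

# Synthetic "fMRI responses" and category labels standing in for real data.
X = torch.randn(600, N_VOXELS)
y = torch.randint(0, N_CLASSES, (600,))

opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(20):                       # a brief training loop
    opt.zero_grad()
    loss = loss_fn(decoder(X), y)
    loss.backward()
    opt.step()

pred = decoder(X).argmax(dim=1)
print("accuracy on synthetic data:", (pred == y).float().mean().item())
```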

Why is AI (Finally) Happening Now?

AI research efforts have been underway for many decades, although university and technology industry research funding was largely curtailed when the technology’s early promises failed to materialize. AI algorithm development in the early days was also greatly constrained by a lack of adequate computing power and limited access to the large data sets necessary to train and test AI applications.

In recent years, the combination of several important factors has helped to rejuvenate interest in AI research and attract capital investment in AI venture start-ups and internal enterprise R&D projects. Key factors for AI’s resurgence include:

1. Significant advances in machine learning approaches and new techniques for more rapidly constructing and training complex algorithms;

2. The development of very powerful and inexpensive cloud computing resources and the employment of powerful graphics processing units (“GPUs”), which are now widely recognized as being very well suited to running AI programs. Today, several companies are developing CPUs specifically designed for running AI algorithms, and, longer term, Quantum Computing will likely also be utilized to build AI models. In addition, cluster computing technologies, such as Apache Spark and MapReduce, enable the development of complex machine learning models utilizing very large data sets;

3. The availability of very large structured and unstructured data sets. These data sets have been created by enterprise “big data” projects utilizing technologies such as Hadoop, the growing and widespread adoption of enterprise SaaS platforms, the growth of on-line retailers, massive social media networks, IoT sensor data and the proliferation of mobile technologies.

In addition to the above key factors, an increasingly more open approach to enterprise and cloud-based APIs (Application Programming Interface) and competition in cloud computing services have also helped spur on the development of AI.

We note that IBM Watson famously competed on Jeopardy! in February 2011 (the IBM Watson group was later formed in January 2014), an event that helped to significantly raise the public profile of recent advances in AI technology. Watson is IBM’s analytical computing system (see Exhibit 9) that is now being used in many different applications across multiple vertical industries.

Finally, in the last two years, universities, incubators, leading software companies, and venture capitalists have all been significantly increasing their funding of advanced data science research and new AI-related ventures, as evidenced by the proliferation of North American AI technology start-ups shown in Exhibits 10 and 11. Besides rapid growth in traditional venture capital investment, US corporate venture capital investment in AI is up 7x since 2013, reaching $1.77B in 2016 (126 deals) and $1.8B in 1H17 (88 deals), according to CB Insights. As in previous technology cycles, we anticipate that many of these start-ups will fail or be absorbed into the mainstream (positive cash flowing) enterprise software industry and other industry leaders.

Should AI be Considered a General Purpose Technology?

As discussed earlier, the field of AI comprises several different sub-disciplines, all of which continue to develop and evolve rapidly. We believe that AI should be considered a “general purpose” technology. That is, AI is a very broadly applicable set of technologies that can be used to automate, augment, and enhance countless organizational workflows, processes, decisions, and software applications. In some respects, we believe that AI can also be viewed as a foundational software tool as versatile as a spreadsheet or an engineering Computer-Aided Design (“CAD”) tool.

AI solutions can be deployed to outright replace repetitive human tasks, for both skilled and unskilled workers, or to augment the skills of workers involved in complex operations. In general, AI currently works well when automating repetitive, high-volume, low-value-added tasks, such as straightforward consumer product and service inquiries to call centres or internal IT help desk and HR department requests.

To be effective, AI solutions in use today typically require significant amounts of input or training data, such as historical business transactions, images (e.g., medical), or consumer retail purchasing data. With large amounts of data, AI algorithms can be rapidly trained to perform analysis, predict outcomes, provide recommendations, or make decisions. Importantly, data sources (aided by advances in distributed database technologies) are rapidly increasing as connected mobile devices, new IoT solutions, and the adoption of cloud-based applications (social, SaaS, etc.) continue to proliferate. In the future, as AI tools and methods improve, AI will likely require less data to be effective and several AI technologies will be combined to solve higher level, more complex process problems.
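
As a concrete illustration of that train-on-historical-data pattern, the sketch below fits a scikit-learn model to synthetic "transaction" records and predicts a made-up outcome label; the features, labels, and threshold rules are invented purely for illustration.

```python
# A minimal sketch of the train-on-historical-data pattern described above:
# given labelled past transactions, a model learns to predict an outcome
# (here, a made-up "flagged" label). Features and data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
amount = rng.lognormal(3, 1, n)               # transaction amount
hour = rng.integers(0, 24, n)                 # hour of day
merchant_risk = rng.random(n)                 # hypothetical merchant risk score
X = np.column_stack([amount, hour, merchant_risk])
# Synthetic label: large, late-night, risky-merchant transactions are more often flagged.
y = ((amount > 60) & (hour < 6) & (merchant_risk > 0.7)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", round(model.score(X_te, y_te), 3))
```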

How is AI Being Applied Today?

AI application development is currently taking place in a variety of settings, including AI startups (building AI-enabled solutions for vertical and horizontal markets), university-sponsored AI incubators, custom projects within corporate IT departments, and throughout both the traditional enterprise software and the consumer-focused software industry. New applications of AI technology are being touted in the media almost daily; however, we will note just a few examples here (also see Exhibit 13):

• Improved accuracy in credit card fraud detection;
• Investment portfolio management recommendations;
• Disease diagnosis based on medical imaging;
• Self-driving vehicles;
• Facial recognition for security uses;
• Automated image labelling and classification;
• Call centre automation and support;
• Voice- and text-based language translation;
• Product recommendations for on-line shoppers;
• Supply chain and manufacturing planning optimization;
• Predicting industrial equipment failure; and
• Monitoring large volume data flows from IoT networks.

(Source: Cantech Letter)

How is AI Disrupting the Financial Industry?

NEW YORK – Artificial intelligence (AI), along with other financial technology (fintech) innovations, is significantly changing the way financial businesses are run, especially in fields like trading, insurance and risk management, leading the traditional financial industry into a new era.

Robots replacing humans

Back in 2000, Goldman Sachs’ New York headquarters employed 600 traders, buying and selling stock on the orders of the investment bank’s clients. Today there are just two equity traders left, as automated trading programs have taken over the rest of the work.

Meanwhile, BlackRock, the world’s biggest money manager, also cut more than 40 jobs earlier this year, replacing some of its human portfolio managers with artificially intelligent, computerized stock-trading algorithms.

Those two big companies are not the only financial institutions replacing human jobs with robots.

By 2025, AI technologies will reduce employment in the capital markets by 230,000 jobs worldwide, according to a report by the financial services consultancy Opimas.

“Asset managers, analysts, traders, compliance administrators, back-office data collection and analysts are most likely to lose their jobs, because their jobs are easier to be replaced by automation and AI,” Henry Huang, an associate professor at Yeshiva University’s Sy Syms School of Business, told Xinhua.

“The net effect of this kind of automation will be more about increasing the productivity of the workforce than of robots simply replacing people,” said Richard Lumb, group chief executive of Accenture’s Financial Services operating group.

The best automated firms will outperform their competitors by making existing workforces more productive through AI, he added.

While humans are losing jobs in the financial industry, companies are enjoying the benefits brought by AI technologies.

“Initially AI will add the most value and have the largest impacts in compliance (especially anti-money laundering and know-your-customer functions), cybersecurity and robo-advice,” Lumb told Xinhua.

Wall Street embraces fintech

Facing rising pressures from fintech innovations, represented by AI, Wall Street financial institutions choose to embrace the new trend.

“In general, we see the outlook for fintech as strong. Demand for fintech by banks is growing because of regulatory and capital pressures, competition from large technology players like Google and Amazon and the abundance of new security threats,” Lumb said.

The FinTech Innovation Lab, an annual program launched in 2010 by Accenture and the Partnership Fund for New York City to foster fintech growth, has helped New York participants raise more than $440 million.

“The FinTech lab has proven to be a significant program for engagement between entrepreneurial technology companies and New York’s financial industry,” said James D. Robinson III, General Partner and Co-founder of RRE Ventures.

In New York City alone, fintech investment overall has increased from 216 million dollars in 2010 to 2.4 billion dollars in 2016.

“Big new frontiers are only just beginning to open up in fintech – from AI, block chain and robotics to biometrics, augmented reality and cybersecurity,” Lumb said.

Among all the fintech innovations, the block chain carries the highest expectations.

“The block chain will change the way people store information, which is real, spreading fast and cross-border, and its ‘de-centric’ feature will allow everyone to know what other people are doing. The application of block chain in finance will once again bring about a revolutionary impact on the industry, just like AI does,” said Huang.

Fintech in China

Although it is hard to tell which country is leading the fintech innovations, many experts agree that China has outperformed other countries in fintech services adoption.

“The work in China has been dramatically ahead of anywhere else in the world,” said Jim Bruene, founder of Finovate conferences, which showcase cutting-edge banking and financial technology.

With more intelligent, in-context financial services, especially commerce activities built around social media applications, “China is likely five or six years ahead of the United States,” Bruene told Xinhua.

The latest report by Ernst & Young showed that China’s fintech adoption rate came in at 69 percent on an index that measures users’ activity in various areas, including money transfer, payments, investments, borrowing and insurance, the highest among 20 major markets globally.

WeChat Pay, the e-payment platform built inside the 900-million-user Chinese social media application WeChat, is seen as the future of fintech services by many experts.

“Messaging is the next web browser; fintech and all other applications are going to live in a mobile messaging application like WeChat, just like they lived in web browsers,” said Greg Ratner, co-founder and chief technology officer of Troops, a U.S. artificial intelligence startup.

“It is going to be the future and is already happening in China. And I think it will come to the United States in the next five years,” Ratner told Xinhua.

According to Huang’s observation, there is a major difference between China and the United States in their fintech development models.

“In the US, banks are the main driver of fintech innovations, while in China, BAT (Baidu, Alibaba, Tencent) representing the enterprises contribute most to the fintech development,” Huang said.

“Considering the scale of banks in China, they should play a more important role in fintech innovations,” he suggested.

(Xinhua | Updated: 2017-09-18 10:12)

The Top Companies Working on Education in Virtual Reality and Augmented Reality

Virtual Reality and Augmented Reality are new technologies that have been making all kinds of waves recently, but VR and AR are not only going to be used for gaming; several companies are exploring applications of VR and AR in education. We retain around 90% of what we see and do, and that kind of experience has long been missing from education: students do not retain as much information from just reading a textbook as they do from more hands-on applications of what they’re learning. That is why VR and AR are so beneficial in education; these technologies engage students in a completely new way that is more fun for them, and they increase retention. Several companies are currently working on innovations in this space, and the work we’ve seen so far is surely a good sign of things to come.

Immersive VR Education

Lecture VR is a VR app by Immersive VR Education which simulates a lecture hall in virtual reality, while adding special effects that can’t be created in a traditional classroom setting. Lectures are accompanied by images, videos, and immersive experiences that enhance the lesson. Imagine learning about Apollo 11: while the instructor is lecturing, the classroom can be transformed into the interior of the spacecraft being discussed, adding much more to the lecture than would be traditionally possible. Another major asset of this type of learning is that students and professors can join remotely from anywhere in the world, which makes education more accessible on a global level.

Unimersiv

Unimersiv is a VR learning platform which releases educational content on a monthly basis. The content in Unimersiv’s app is more individualized and immersive, and at the moment the 3 educational experiences available on the app are: Explore the International Space Station, Anatomy VR, and Learn about Stonehenge in Virtual Reality. The diversity in this content shows the true potential in the range of things we can learn by using VR in education; and it also shows that Unimersiv is dedicated to creating educational content on a wide range of topics, which makes their potential truly unlimited.

Google Expeditions Pioneer Program

Google is also making waves in the space of VR education with their exciting Expeditions Pioneer Program. The purpose of the program is for Expeditions teams from Google to visit schools around the world and provide everything teachers need to take their students on a journey anywhere; the team will also assist the teachers in setting up and utilizing this technology. The VR experiences are meant to be like a really cool field trip which teachers would ordinarily never be able to take their students on; whether it’s to an underwater coral reef, or into a city like Barcelona, the potential is truly limitless here. The way the app works is that the students and the teacher will see the same things and be in the same session, but the teacher will be able to lecture and highlight certain things which are relevant to the lesson.

Alchemy VR

Alchemy VR is creating immersive educational experiences on an impressive scale. The experiences on Alchemy VR are like a narrative being told to the user, in which they get to see and experience a myriad of different things; one such example is exploring the Great Barrier Reef. What makes Alchemy VR stand out in this space is its partnerships, which contribute to the level of content it produces. Alchemy VR is partnered with Samsung, Google Expeditions, Sony, HTC, the Natural History Museum in London, and the Australian Museum in Sydney. Alchemy VR has actually made several projects for Google Expeditions and is set to release experiences on pyramids and the human body soon.

Discovery VR

Discovery has been telling stories like no one else for over 30 years, and now it is entering the modern era with the Discovery VR app. Some of your favorite content is available on the Discovery VR app, so you can experience some of your favorite Discovery shows in a whole new way. Discovery bringing content to virtual reality is great for the space of education in virtual reality; Discovery has been creating educational content for years and is a household name, and when people hear they can watch Deadliest Catch or Mythbusters in virtual reality, it might just get more people interested in the potential applications of virtual reality for education.

zSpace

zSpace is unique in the space of education in VR because of the technology it uses. zSpace feels that VR technology should not necessarily be as antisocial and one-sided as everyone putting on their own headset. zSpace has monitors that work similarly to 3D movies: a group of people wear glasses that feel much like 3D glasses, and these glasses make the content come off the screen. Additionally, zSpace users have a pen that they use to manipulate the on-screen objects. zSpace currently has content available for STEM education, medical training, and more general math and science experiences.

Curiscope

Curiscope is a startup company focusing on education in VR, and their Virtual Tee is already turning heads. The way it works is that one person wears a t-shirt while another person with a smartphone launches the app and is able to learn about the human body in a unique way. This is a unique way to utilize AR in education and it is surely a sign of good things to come from Curiscope.

WoofbertVR

WoofbertVR is focusing on bringing art to VR technology; it is looking to reshape the landscape of storytelling and revolutionize the way we look at art and culture. WoofbertVR is a unique way to visit an art gallery, either individually or in a classroom setting, giving students the opportunity to take a field trip they would never normally get to experience. Exploring a gallery using WoofbertVR’s app is nearly identical to how you would explore it in person: you can follow any path you choose, zoom in on paintings you are interested in, and listen to audio segments on certain exhibits. The only difference is that you do not need to take an expensive trip in order to visit a gallery that has some of your favorite paintings or pieces; now it can be done from the comfort of your own home.

Nearpod

Nearpod is an organization which combines VR and AR technology with traditional lesson plans in a classroom for a more immersive technology driven approach to learning. Nearpod utilizes 360 degree photos and videos in lesson plans, and also has something akin to a PowerPoint for students to use alongside the 360 photos and videos. There are also options to have students answer questions by typing into their laptop or tablet. Nearpod VR is giving us an idea of what a classroom of the future is going to look like, with lesson plans being enhanced with VR and AR technology and students being more engaged by using this new technology.

EON Reality

EON Reality is looking to change how teachers utilize technology in the classroom. Students and teachers can create blended learning environments with the EON Creator, an interactive tool that allows users to combine 3D content with videos, sound effects, notes, PowerPoint, and more. EON also has an EON Experience Portal where teachers and students can upload their work and explore the creations of other teachers and students. There is also the EON Coliseum, where students and teachers can meet virtually, which makes collaboration possible at any time and from nearly any location.

Schell Games

Schell Games is one of the largest independent game development companies in the United States, with its forte being what it refers to as transformation games, or games that change people for the better. Among the many games it has developed is Water Bears VR, a VR puzzle game aimed at kids that promotes systems thinking and spatial recognition. There has always been a challenge in education with engaging students, especially kids, and making learning fun has always been a practical solution to that challenge. With companies like Schell Games producing fun VR games for kids that have the added bonus of being educational, we have a fun solution to the challenge of getting kids interested in learning.

Gamar

Gamar is a company that is attempting to make museum visits more modern through the use of Augmented Reality. Gamar’s content consists of interactive experiences and supplemental information that can be paired with certain locations to enhance museum visits. One of the chief rules of a museum is not to touch anything, but with Gamar’s app, museum attendees can point their smartphone or tablet at an exhibit at one of the paired locations and get additional information on that exhibit. Museum visits can be tedious for kids when all they want to do is pull out their smartphones and play with them. Gamar has provided a solution to this problem by finding a way to engage kids in museums in a way that’s interesting to them and on their level. AR and VR technology are great tools for education, but if we can’t get kids to use them then it’s all for naught; that is why we need organizations like Gamar that find ways to get kids to use this technology in educational ways.

ThingLink

A unique take on 360 images has been developed for classroom use by ThingLink. Their technology allows teachers, or students, to annotate 360 images with various interactive tags, or links to different videos, applications or third-party tools. Imagine being able to look at a 360 picture of London and click on various things throughout the picture: one tag might be a video enhancing something you are looking at, while another tag might be a writing assignment or a place to record notes. This kind of all-in-one learning experience helps prevent the breaks in immersion that typically occur in learning that relies on VR or AR technology. If students do not have to look away from a 360 image in order to take notes or do an assignment, the immersion is never truly broken. ThingLink’s technology encourages students and teachers to utilize AR in the classroom for a unique one-stop experience, and it has the potential to shape adoption of VR and AR technology in the classroom on a broad scale due to its appeal to students and teachers alike, not just one or the other.

Virtual Reality and Augmented Reality are the next big step forward in education, and these technologies are good for educators and students alike. Educators will be able to capture the attention of their students like never before and get them more actively involved in the classroom. Students are always trying to do what’s cool and what’s new; utilizing VR and AR technology in classrooms will appeal to students because it’s the latest and greatest thing and it’s very cool to use. Using VR and AR technology will not only make learning more interesting and exciting to students but it will also increase retention because it engages them in a way which textbooks simply cannot compete with. These technologies give us a window into the future of education, and it probably won’t be long until it is standard for classrooms to come equipped with some form of VR or AR technology.

(Source: Touchstone Research)

Why Data Is The New Oil

Artificial intelligence is only as good as the data it crunches.

The field of artificial intelligence is red hot thanks in part to big companies like Google, Facebook, and Microsoft using AI-related techniques to train computers to recognize objects in photos and understand human language.

But these companies were only able to train their computers to perform these difficult feats because they have the enormous quantities of data that’s required. Companies that want to use artificial intelligence techniques like machine learning algorithms to improve their businesses will need to either have or acquire copious amounts of data, which will only become more valuable as the field of AI advances.

That’s one of the takeaways from a panel on artificial intelligence Monday in Aspen, Colo. during Fortune’s annual Brainstorm Tech conference.

“Data is the new oil,” Shivon Zilis, a partner with the venture capital firm Bloomberg Beta, said about data’s increasing value.

Although companies like Google, Facebook, and Amazon have open sourced their own artificial intelligence software so any programmer can access the code and use it to build their own apps, they are not making the necessary data available, Zilis explained. These companies are releasing these AI software toolkits for free so they can recruit more engineers to their companies, she said.

When it comes to competition, it’s the data these companies possess that’s more important than the actual AI software tools they use and release to the public.

David Kenny, the general manager of IBM’s Watson data crunching service, agreed with Zilis and said “the value of data goes up every day AI advances.”

“Data will become a currency,” Kenny said. He also explained that only 20% of the world’s information is stored on the Internet, with the other 80% being privately held within companies and organizations.

Additionally, Zilis discussed the history of so-called “AI winters,” in which the hype around AI failed to meet the general public’s expectations, leading to years of declining AI research during the mid-1970s and from the late 1980s to the early 1990s.

Today, however, the rise of the cloud and the ability to access tremendous amounts of computing power to analyze huge amounts of data has the potential to strengthen the existing AI-related algorithms and lead to new more powerful algorithms. Because of these technological developments, Zilis does not believe companies will lose interest in AI.

“Are we going to hit a winter?” asked Zilis. “Absolutely not.”

(Source: Fortune Magazine, written by Jonathan Vanian)

Using deep learning to analyze genetic mutations

The full article by David Beyer, “Deep learning meets genome biology,” expands on the points summarized below.

  • The application of deep learning to genomic medicine is off to a promising start; it could impact diagnostics, intensive care, pharmaceuticals and insurance.
  • The “genotype-phenotype divide”—our inability to connect genetics to disease phenotypes—is preventing genomics from advancing medicine to its potential.
  • Deep learning can bridge the genotype-phenotype divide by incorporating an exponentially growing amount of data and accounting for the multiple layers of complex biological processes that relate the genotype to the phenotype (a minimal model sketch follows this list).
  • Deep learning has been successful in applications where humans are naturally adept, such as image, text, and speech understanding. The human mind, however, isn’t intrinsically designed to understand the genome. This gap necessitates the application of “super-human intelligence” to the problem.
  • Efforts in this space must account for underlying biological mechanisms; overly simplistic, “black box” approaches will drive only limited value.
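
As a minimal sketch of the kind of model these bullets allude to, the code below trains a small 1D convolutional network on one-hot-encoded DNA windows to predict a binary molecular phenotype. It assumes PyTorch; the sequence length, architecture, and synthetic labels are illustrative and are not taken from Deep Genomics’ actual models.

```python
# A minimal sketch of the idea in the bullets above: a small 1D convolutional
# network reads one-hot-encoded DNA and predicts a molecular phenotype.
# The sequence length, architecture, and synthetic labels are illustrative only.
import torch
import torch.nn as nn

SEQ_LEN = 200                                  # base pairs per input window (assumed)

model = nn.Sequential(
    nn.Conv1d(4, 32, kernel_size=8),           # 4 channels: one per base (A, C, G, T)
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 1),                          # one score for a binary phenotype
)

# Synthetic one-hot sequences and labels standing in for real genomic data.
def random_one_hot(n):
    idx = torch.randint(0, 4, (n, SEQ_LEN))
    return nn.functional.one_hot(idx, 4).permute(0, 2, 1).float()  # (n, 4, SEQ_LEN)

X, y = random_one_hot(256), torch.randint(0, 2, (256, 1)).float()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(30):                         # brief training loop on synthetic data
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training loss:", round(loss.item(), 4))
```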

(Source: Deep Genomics)