How Brain-Computer Interfaces Work

As the power of modern computers grows alongside our understanding of the human brain, we move ever closer to making some pretty spectacular science fiction into reality. Imagine transmitting signals directly to someone’s brain that would allow them to see, hear or feel specific sensory inputs. Consider the potential to manipulate computers or machinery with nothing more than a thought. It isn’t just about convenience: for severely disabled people, the development of a brain-computer interface (BCI) could be the most important technological breakthrough in decades. In this article, we’ll learn all about how BCIs work, their limitations and where they could be headed in the future.

The Electric Brain

The reason a BCI works at all is because of the way our brains function. Our brains are filled with neurons, individual nerve cells connected to one another by dendrites and axons. Every time we think, move, feel or remember something, our neurons are at work. That work is carried out by small electric signals that zip from neuron to neuron as fast as 250 mph [source: Walker]. The signals are generated by differences in electric potential carried by ions on the membrane of each neuron.

Although the paths the signals take are insulated by something called myelin, some of the electric signal escapes. Scientists can detect those signals, interpret what they mean and use them to direct a device of some kind. It can also work the other way around. For example, researchers could figure out what signals are sent to the brain by the optic nerve when someone sees the color red. They could rig a camera that would send those exact signals into someone’s brain whenever the camera saw red, allowing a blind person to “see” without eyes.

One of the biggest challenges facing brain-computer interface researchers today is the basic mechanics of the interface itself. The easiest and least invasive method is a set of electrodes — a device known as an electroencephalograph (EEG) — attached to the scalp. The electrodes can read brain signals. However, the skull blocks a lot of the electrical signal, and it distorts what does get through.

To get a higher-resolution signal, scientists can implant electrodes directly into the gray matter of the brain itself, or on the surface of the brain, beneath the skull. This allows for much more direct reception of electric signals and allows electrode placement in the specific area of the brain where the appropriate signals are generated. This approach has many problems, however. It requires invasive surgery to implant the electrodes, and devices left in the brain long-term tend to cause the formation of scar tissue in the gray matter. This scar tissue ultimately blocks signals.

Regardless of the location of the electrodes, the basic mechanism is the same: The electrodes measure minute differences in the voltage between neurons. The signal is then amplified and filtered. In current BCI systems, it is then interpreted by a computer program, although you might be familiar with older analogue encephalographs, which displayed the signals via pens that automatically wrote out the patterns on a continuous sheet of paper.
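
To make that amplify-and-filter step concrete, here is a minimal Python sketch of how a program might band-pass filter raw scalp voltages before interpreting them. The sampling rate, the 8-30 Hz band and the simulated raw_eeg array are illustrative assumptions, not parameters of any particular BCI system.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Illustrative parameters; a real system's values would differ.
FS = 256             # sampling rate in Hz (assumed)
LOW, HIGH = 8, 30    # keep a band often used for motor-related rhythms

def bandpass(raw_eeg, fs=FS, low=LOW, high=HIGH, order=4):
    """Stand-in for the filtering stage: remove slow drift and
    high-frequency noise from a recorded voltage trace."""
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, raw_eeg)

# One second of simulated 'scalp voltage' (arbitrary units).
raw_eeg = np.random.randn(FS) * 10
clean = bandpass(raw_eeg)
print(clean.shape)  # (256,) - same length, now band-limited
```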

In the case of a sensory input BCI, the function happens in reverse. A computer converts a signal, such as one from a video camera, into the voltages necessary to trigger neurons. The signals are sent to an implant in the proper area of the brain, and if everything works correctly, the neurons fire and the subject receives a visual image corresponding to what the camera sees.
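
As a toy illustration of that reverse direction, the following sketch maps the brightness of a camera frame onto stimulation levels for a hypothetical grid of implanted electrodes. The grid size, the amplitude range and the pooling scheme are assumptions made purely for illustration; real visual prostheses use carefully calibrated, patient-specific values.

```python
import numpy as np

# Hypothetical 10x10 electrode grid and an assumed stimulation range.
GRID = (10, 10)
V_MIN, V_MAX = 0.0, 1.0  # arbitrary units for illustration

def frame_to_stimulation(frame):
    """Downsample a grayscale camera frame to the electrode grid and
    scale pixel brightness into stimulation amplitudes."""
    h, w = frame.shape
    gh, gw = GRID
    # Average-pool the frame into grid cells.
    pooled = frame[:h - h % gh, :w - w % gw].reshape(
        gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Map 0-255 brightness onto the assumed amplitude range.
    return V_MIN + (pooled / 255.0) * (V_MAX - V_MIN)

frame = np.random.randint(0, 256, size=(480, 640)).astype(float)
print(frame_to_stimulation(frame).shape)  # (10, 10)
```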

Another way to measure brain activity is with magnetic resonance imaging (MRI). An MRI machine is a massive, complicated device. It produces very high-resolution images of brain activity, but it can’t be used as part of a permanent or semipermanent BCI. Researchers use it to get benchmarks for certain brain functions or to map where in the brain electrodes should be placed to measure a specific function. For example, if researchers are attempting to implant electrodes that will allow someone to control a robotic arm with their thoughts, they might first put the subject into an MRI scanner and ask the subject to think about moving his or her actual arm. The MRI will show which area of the brain is active during arm movement, giving them a clearer target for electrode placement.

So, what are the real-life uses of a BCI? Read on to find out the possibilities.

One of the most exciting areas of BCI research is the development of devices that can be controlled by thoughts. Some of the applications of this technology may seem frivolous, such as the ability to control a video game by thought. If you think a remote control is convenient, imagine changing channels with your mind.

However, there’s a bigger picture — devices that would allow severely disabled people to function independently. For a quadriplegic, something as basic as controlling a computer cursor via mental commands would represent a revolutionary improvement in quality of life. But how do we turn those tiny voltage measurements into the movement of a robotic arm?

In early research, monkeys with implanted electrodes used a joystick to control a robotic arm while scientists measured the signals coming from the electrodes. Eventually, the researchers changed the controls so that the robotic arm was controlled only by the signals coming from the electrodes, not the joystick.

A more difficult task is interpreting the brain signals for movement in someone who can’t physically move their own arm. With a task like that, the subject must “train” to use the device. With an EEG or implant in place, the subject would visualize closing his or her right hand. After many trials, the software can learn the signals associated with the thought of hand-closing. Software connected to a robotic hand is programmed to receive the “close hand” signal and interpret it to mean that the robotic hand should close. At that point, when the subject thinks about closing the hand, the signals are sent and the robotic hand closes.
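
A minimal sketch of that training step, using simulated band-power features and an off-the-shelf classifier from scikit-learn, is shown below. The feature values and labels are invented for illustration; a real system would extract features from recorded EEG or implant trials.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Simulated training data standing in for many 'imagine closing your
# right hand' vs. 'rest' trials; each row is a feature vector.
rng = np.random.default_rng(0)
X_close = rng.normal(loc=1.0, size=(80, 16))   # imagined hand-closing
X_rest = rng.normal(loc=0.0, size=(80, 16))    # resting baseline
X = np.vstack([X_close, X_rest])
y = np.array([1] * 80 + [0] * 80)              # 1 = "close hand"

clf = LinearDiscriminantAnalysis().fit(X, y)

# At run time, a new feature vector is classified and the result is
# forwarded to the robotic hand's controller.
new_trial = rng.normal(loc=1.0, size=(1, 16))
if clf.predict(new_trial)[0] == 1:
    print("send CLOSE command to robotic hand")
```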

A similar method is used to manipulate a computer cursor, with the subject thinking about forward, left, right and back movements of the cursor. With enough practice, users can gain enough control over a cursor to draw a circle, access computer programs and control a TV [source: Ars Technica]. It could theoretically be expanded to allow users to “type” with their thoughts.
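
Once a decoder can label each thought as forward, back, left or right, turning those labels into cursor motion is simple bookkeeping, as in this hypothetical mapping (the class names and step size are assumptions, not part of any published protocol):

```python
# Hypothetical mapping from decoded direction classes to cursor deltas.
STEP = 5  # pixels per decoded command (assumed)

MOVES = {
    "forward": (0, -STEP),   # up on screen
    "back":    (0,  STEP),
    "left":    (-STEP, 0),
    "right":   ( STEP, 0),
}

def update_cursor(position, decoded_class):
    dx, dy = MOVES.get(decoded_class, (0, 0))  # ignore unknown classes
    return position[0] + dx, position[1] + dy

pos = (100, 100)
for thought in ["right", "right", "forward"]:
    pos = update_cursor(pos, thought)
print(pos)  # (110, 95)
```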

Once the basic mechanism of converting thoughts to computerized or robotic action is perfected, the potential uses for the technology are almost limitless. Instead of a robotic hand, disabled users could have robotic braces attached to their own limbs, allowing them to move and directly interact with the environment. This could even be accomplished without the “robotic” part of the device. Signals could be sent to the appropriate motor control nerves in the hands, bypassing a damaged section of the spinal cord and allowing actual movement of the subject’s own hands.

The oldest, and most common, use of BCI technology is the cochlear implant. For the average person, sound waves enter the ear and pass through several tiny organs that eventually deliver the vibrations to the auditory nerves in the form of electric signals. If the mechanism of the ear is severely damaged, that person will be unable to hear anything. However, the auditory nerves may be functioning perfectly well. They just aren’t receiving any signals.

A cochlear implant bypasses the nonfunctioning part of the ear, processes the sound waves into electric signals and passes them via electrodes right to the auditory nerves. The result: a previously deaf person can now hear. The hearing may not be perfect, but it allows the person to follow conversations.
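
Conceptually, that sound-processing step resembles a bank of band-pass filters, with each frequency band driving one electrode along the cochlea. The sketch below illustrates the idea only; the channel count, band edges and envelope estimate are assumptions, not the algorithm of any actual implant.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000        # assumed audio sampling rate
N_CHANNELS = 8    # assumed number of electrode channels
EDGES = np.logspace(np.log10(200), np.log10(7000), N_CHANNELS + 1)

def channel_envelopes(audio):
    """Split audio into frequency bands and return each band's average
    energy, a stand-in for per-electrode stimulation levels."""
    levels = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=FS, output="sos")
        band = sosfiltfilt(sos, audio)
        levels.append(np.abs(band).mean())   # crude envelope estimate
    return np.array(levels)

audio = np.random.randn(FS)  # one second of stand-in audio
print(channel_envelopes(audio).round(3))
```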

The processing of visual information by the brain is much more complex than that of audio information, so artificial eye development isn’t as advanced. Still, the principle is the same. Electrodes are implanted in or near the visual cortex, the area of the brain that processes visual information from the retinas. A pair of glasses holding small cameras is connected to a computer and, in turn, to the implants. After a training period similar to the one used for remote thought-controlled movement, the subject can see. Again, the vision isn’t perfect, but refinements in technology have improved it tremendously since it was first attempted in the 1970s.

Jens Naumann was the recipient of a second-generation implant. He was completely blind, but now he can navigate New York City’s subways by himself and even drive a car around a parking lot [source: CBC News]. In terms of science fiction becoming reality, this process gets very close. The terminals that connect the camera glasses to the electrodes in Naumann’s brain are similar to those used to connect the VISOR (Visual Instrument and Sensory Organ Replacement) worn by blind engineering officer Geordi La Forge in the “Star Trek: The Next Generation” TV show and films; the two are essentially the same technology. However, Naumann isn’t able to “see” invisible portions of the electromagnetic spectrum.

Although we already understand the basic principles behind BCIs, they don’t work perfectly. There are several reasons for this.

  1. The brain is incredibly complex. To say that all thoughts or actions are the result of simple electric signals in the brain is a gross oversimplification. There are about 100 billion neurons in a human brain [source: Greenfield]. Each neuron is constantly sending and receiving signals through a complex web of connections. There are chemical processes involved as well, which EEGs can’t pick up on.
  2. The signal is weak and prone to interference. EEGs measure tiny voltage potentials. Something as simple as the blinking eyelids of the subject can generate much stronger signals. Refinements in EEGs and implants will probably overcome this problem to some extent in the future, but for now, reading brain signals is like listening to a bad phone connection. There’s lots of static. A toy example of screening out such artifacts is sketched after this list.
  3. The equipment is less than portable. It’s far better than it used to be; early systems were hardwired to massive mainframe computers. But some BCIs still require a wired connection to the equipment, and those that are wireless require the subject to carry a computer that can weigh around 10 pounds. Like all technology, this equipment will surely become lighter and less tethered in the future.
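
As an illustration of point 2, the toy Python snippet below discards recording epochs whose amplitude spikes past a threshold, the way a blink artifact might. The 100-microvolt threshold and the simulated data are assumptions for demonstration only.

```python
import numpy as np

# Toy artifact screening: epochs whose peak amplitude exceeds a
# threshold (as an eye blink might) are discarded.
BLINK_THRESHOLD_UV = 100.0

def reject_blink_epochs(epochs):
    """epochs: array of shape (n_epochs, n_samples), in microvolts."""
    peaks = np.abs(epochs).max(axis=1)
    keep = peaks < BLINK_THRESHOLD_UV
    return epochs[keep], int((~keep).sum())

rng = np.random.default_rng(1)
epochs = rng.standard_normal((20, 256)) * 20   # clean-ish background EEG
epochs[3] += 300                               # one blink-contaminated epoch
clean, n_dropped = reject_blink_epochs(epochs)
print(clean.shape, n_dropped)  # the contaminated epoch is dropped
```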

A few companies are pioneers in the field of BCI. Most of them are still in the research stages, though a few products are offered commercially.

  • Neural Signals is developing technology to restore speech to disabled people. An implant in an area of the brain associated with speech (Broca’s area) would transmit signals to a computer and then to a speaker. With training, the subject could learn to think each of the 39 phonemes in the English language and reconstruct speech through the computer and speaker [source: Neural Signals].
  • NASA has researched a similar system, although it reads electric signals from the nerves in the mouth and throat area, rather than directly from the brain. They succeeded in performing a Web search by mentally “typing” the term “NASA” into Google [source: New Scientist].
  • Cyberkinetics Neurotechnology Systems is marketing the BrainGate, a neural interface system that allows disabled people to control a wheelchair, robotic prosthesis or computer cursor [source: Cyberkinetics].
  • Japanese researchers have developed a preliminary BCI that allows the user to control their avatar in the online world Second Life [source: Ars Technica].

Artificial intelligence is learning to read your mind—and display what it sees

 

Artificial intelligence has taken us one baby step closer to the mind-reading machines of science fiction. Researchers have developed “deep learning” algorithms—roughly modeled on the human brain—to decipher, you guessed it, the human brain. First, they built a model of how the brain encodes information. As three women spent hours viewing hundreds of short videos, a functional MRI machine measured signals of activity in the visual cortex and elsewhere. A popular type of artificial neural network used for image processing learned to associate video images with brain activity. As the women watched additional clips, the algorithm’s predicted activity correlated with actual activity in a dozen brain regions. It also helped the scientists visualize which features each area of the cortex was processing. Another network decoded neural signals: Based on a participant’s brain activity, it could predict with about 50% accuracy what she was watching (by selecting one of 15 categories including bird, airplane, and exercise). If the network had trained on data from a different woman’s brain, it could still categorize the image with about 25% accuracy, the researchers report this month in Cerebral Cortex. The network could also partially reconstruct what a participant saw, turning brain activity into pixels, but the resulting images were little more than white blobs. The researchers hope their work will lead to the reconstruction of mental imagery, which uses some of the same brain circuits as visual processing. Translating from the mind’s eye into bits could allow people to express vivid thoughts or dreams to computers or to other people without words or mouse clicks, and could help those with strokes who have no other way to communicate.
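
Stripped of the deep-learning machinery, the decoding step in such studies is a multiclass classification problem: given a pattern of brain activity, predict which of several categories the viewer was watching. The sketch below illustrates only that framing, using simulated "voxel" features and logistic regression; it is not the network or the data from the Cerebral Cortex study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Simulated stand-in data: 1,500 'scans', each a 200-voxel activity
# pattern labeled with one of 15 viewing categories.
rng = np.random.default_rng(0)
n_scans, n_voxels, n_classes = 1500, 200, 15
y = rng.integers(0, n_classes, size=n_scans)
centers = rng.normal(size=(n_classes, n_voxels))   # one pattern per category
X = centers[y] + rng.normal(scale=2.0, size=(n_scans, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("decoding accuracy:", round(decoder.score(X_te, y_te), 2))
# Chance level with 15 categories is about 0.07 (1 in 15).
```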

(Source: Science and AAAS, by Matthew Hutson)

Novel CRISPR-derived ‘base editors’ surgically alter DNA or RNA, offering new ways to fix mutations

Since the start of the CRISPR craze 5 years ago, scientists have raced to invent ever-more-versatile or efficient variations of this powerful tool, which vastly simplifies the editing of DNA. Two studies published in Science and Nature this week broaden CRISPR’s reach further still, honing a subtler approach to modifying genetic material that’s called base editing. One study extends a strategy for editing DNA, whereas the other breaks new ground by base editing its molecular cousin, RNA.

Both open new avenues for genetic research and even curing diseases. “One shouldn’t view base editors as better than CRISPR—they’re just different,” says David Liu, a chemist at Harvard University who pioneered DNA base editing in a paper in Nature last year and co-authored the latest Nature paper. “It’s like, what’s better, a boat or a car?”

CRISPR, adapted from a primitive bacterial immune system, does its handiwork by first cutting the double-stranded DNA at a target site in a genome. Base editing, in contrast, does not cut the double helix, but instead uses enzymes to precisely rearrange some of the atoms in one of the four bases that make up DNA or RNA, converting the base into a different one without altering the bases around it. That ability greatly increases the options for altering genetic material. “It’s a very worthwhile addition and it’s here to stay,” says CRISPR researcher Erik Sontheimer of the University of Massachusetts Medical School in Worcester.

Many human diseases are caused by the mutation of a single base. CRISPR has difficulty correcting these so-called point mutations efficiently and cleanly, so base editing could provide a more effective approach. After Liu’s initial report, a group in China used DNA base editing to correct a disease-causing mutation in human embryos cloned from a patient with a genetic blood disorder.

Conventional CRISPR uses a guide RNA (gRNA) coupled with an enzyme known as a nuclease, most commonly Cas9, that together attach to a specific stretch of DNA bases; the nuclease then snips the double helix. A cellular repair mechanism attempts to rejoin the cut DNA ends, but occasionally inserts or deletes bases, which turns the DNA code into gibberish and can knock out a targeted gene. “Gene editing based on nucleases is very good at inactivating genes,” says CRISPR researcher Feng Zhang of the Broad Institute in Cambridge, Massachusetts.

Yet CRISPR, he notes, “is less efficient at making precise changes.” To fix a point mutation, a CRISPR-Cas9 system must also introduce a strand of “donor” DNA that has the correct base and then rely on a second cellular mechanism called homology-directed repair (HDR). But HDR works poorly unless cells are dividing, which means this strategy doesn’t function in, say, brain and muscle cells that no longer copy themselves. Even in dividing cells, the donor DNA rarely slots into the cut spot.


[Graphic: Getting to the point of mutations. Base editors borrow from CRISPR’s components, guide RNAs (gRNAs) and Cas9 or other nucleases, but they don’t cut the double helix; instead, they chemically alter single bases with deaminase enzymes such as TadA and ADAR.]

Base-editing systems, which borrow heavily from CRISPR’s tool kit, readily work in nondividing cells. DNA has four nucleotide bases—A, C, T, and G—and base editing changes one to another. In Liu’s 2016 study, his team fused gRNA with a “dead” Cas9 (dCas9) that cannot cut the whole double helix but still unzips it at the correct spot. To this complex the researchers tethered an enzyme, APOBEC1, which triggers a series of chemical reactions that ultimately change C to T. DNA’s base-pairing rules, which specify that a T on one DNA strand pairs with an A on the opposite strand, govern a subsequent change. The dCas9 was further modified to nick the unedited strand, which gooses the cell’s DNA repair mechanism into converting the G that originally paired with C into an A that pairs with the new T.
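
The base-pairing bookkeeping described above can be illustrated with a toy simulation: change a targeted C to T on one strand, then let "repair" of the opposite strand convert the orphaned G to an A so the two strands pair again. The sequences and target position below are invented for illustration and carry no biological meaning.

```python
# Toy model of cytosine base editing: C -> T on the edited strand,
# followed by repair of the complementary strand (G -> A).
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    return "".join(PAIR[b] for b in strand)

def cytosine_base_edit(strand, pos):
    """Return the edited strand and its repaired complement."""
    if strand[pos] != "C":
        raise ValueError("target base is not a C")
    edited = strand[:pos] + "T" + strand[pos + 1:]
    return edited, complement(edited)   # repair restores base pairing

top = "ATGCCGTA"
edited_top, edited_bottom = cytosine_base_edit(top, pos=3)
print(top, "->", edited_top)                 # ATGCCGTA -> ATGTCGTA
print(complement(top), "->", edited_bottom)  # TACGGCAT -> TACAGCAT
```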

That first DNA base editor could not address the most common point mutations associated with human diseases—accounting for about half—which have A•T where there should be G•C. The new editor from Liu’s group can now make this fix. The team again fused gRNA with a dCas9, but there is no known enzyme that can convert A to G in DNA. So the lab developed one from TadA, an enzyme in the bacterium Escherichia coli. The new enzyme converts A to a base called inosine, or I. Either a cellular repair mechanism or the process of the DNA copying itself then changes the I to a G. “The big deal here is engineering the TadA enzyme to do something fairly unnatural,” says George Church of Harvard, who studies CRISPR. “My hat is off to them.”

Zhang’s team created its RNA base-editor system by fusing gRNA with a different dead nuclease, dCas13, and a natural enzyme that converts A to I in RNA. Unlike in DNA, that’s where the changes stop. The I-containing RNA simply performs as if it had a G in that spot.

Because RNA carries the genetic message from DNA to the cell’s protein-making factories, or can directly perform acts such as gene regulation, it, too, is an appealing target for therapies. But an RNA only sticks around in a cell for a short time. That means RNA base editors likely would have to be repeatedly administered to work as a therapeutic, which Zhang and his co-authors suggest may make sense for transient conditions, such as localized inflammation.

Although the short-lived nature of RNA makes base editing less attractive for many therapies, Sontheimer sees an upside, too. “In some ways, it’s safer to work on RNA,” he says. Researchers worry that genome editing could accidentally affect the wrong part of the genome—a change that would be permanent with a DNA base editor. “If there’s some degree of off targeting, you’re not permanently etching those mistakes into the underlying genome” with an RNA base editor, Sontheimer says.

Church says base editing should be evaluated “case-by-case” for whether it offers advantages over CRISPR and other technologies that alter nucleic acids. “People make it sound like [changing bases] was not possible before. In fact it was hard or just inefficient,” he notes.

Zhang and Liu stress that it could be several years before base-editing therapies enter clinical trials—and longer until it’s clear whether the strategy offers advantages over existing gene therapies. “It’s both scientifically short-sighted and long-term incorrect to conclude that base editing is going to be a better way to do human genetic therapy,” Liu says. What’s already clear, however, is that powerful alternatives to standard CRISPR are now in the game.

(Source: Science and AAAS,  by Jon Cohen)

These gene-edited pigs are hardy and lean—but how will they taste?

“Lean” may not be the term you associate with a good bacon strip or pork chop. But these leaner, cold-hardier piglets, created through CRISPR gene editing, could be a hit with the pork industry. The threat of hypothermia forces cold-climate farms to invest in heat lamps and other accommodations for their shivering piglets. And fatter pig breeds—though tasty—tend to grow more slowly and consume more feed than leaner ones to produce the same amount of meat. As an alternative to conventional breeding, researchers used the gene-editing technology CRISPR to introduce a gene called UCP1. Thought to have disappeared from the ancestors of modern pigs about 20 million years ago, the gene helps cells dissipate more heat and burn fat. Twelve transgenic piglets endowed with a mouse UCP1 gene were better able to maintain their body temperature than their unmodified counterparts when they were exposed to cold for a 4-hour period, the authors report today in the Proceedings of the National Academy of Sciences. And when the pigs were killed, fat made up less of their carcass weight—about 15% versus 20% in unmodified controls—while their average percentage of lean meat increased from about 50% to 53%. Will less fat make them less tasty? The authors don’t expect UCP1 to reduce the fat that accumulates in muscle fibers and contributes to flavor, but they’re now producing more pigs to make sure.

(Source: Biology, Plants & Animals, by Kelly Servick)

Why is AI (Finally) Happening Now?

AI research efforts have been underway for many decades, although university and technology industry research funding was at times largely curtailed as early promises of the technology failed to materialize. AI algorithm development in the early days was also greatly constrained by a lack of adequate computing power and limited access to the large data sets necessary to train and test AI applications.

In recent years, the combination of several important factors has helped to rejuvenate interest in AI research and attract capital investment in AI venture start-ups and internal enterprise R&D projects. Key factors for AI’s resurgence include:

1. Significant advances in machine learning approaches and new techniques for more rapidly constructing and training complex algorithms;

2. The development of very powerful and inexpensive cloud computing resources and the employment of powerful graphics processing units (“GPUs”), which are now widely recognized as being very well suited to running AI programs. Today, several companies are developing processors specifically designed for running AI algorithms, and, longer term, quantum computing will likely also be utilized to build AI models. In addition, cluster computing technologies, such as Apache Spark and MapReduce, enable the development of complex machine learning models utilizing very large data sets;

3. The availability of very large structured and unstructured data sets. These data sets have been created by enterprise “big data” projects utilizing technologies such as Hadoop, the growing and widespread adoption of enterprise SaaS platforms, the growth of on-line retailers, massive social media networks, IoT sensor data and the proliferation of mobile technologies.

In addition to the above key factors, an increasingly open approach to enterprise and cloud-based APIs (application programming interfaces) and competition in cloud computing services have also helped spur the development of AI.

We note that IBM Watson famously competed on Jeopardy! in February 2011 (the IBM Watson group was later formed in January 2014), an event that helped to significantly raise the public profile of recent advances in AI technology. Watson is IBM’s analytical computing system (see Exhibit 9) that is now being used in many different applications across multiple vertical industries.

Finally, in the last two years, universities, incubators, leading software companies, and venture capitalists have all been significantly increasing their funding of advanced data science research and new AI-related ventures, as evidenced by the proliferation of North American AI technology start-ups shown in Exhibits 10 and 11. Besides rapid growth in traditional venture capital investment, US corporate venture capital investment in AI is up 7x since 2013, reaching $1.77B in 2016 (126 deals) and $1.8B in 1H17 (88 deals), according to CB Insights. As in previous technology cycles, we anticipate that many of these start-ups will fail or be absorbed into the mainstream (positive cash flowing) enterprise software industry and other industry leaders.

Should AI be Considered a General Purpose Technology?

As discussed earlier, the field of AI is comprised of several different sub-disciplines, all of which are continuing to develop and rapidly evolve. We believe that AI should be considered a “general purpose” technology. That is, AI is a very broadly applicable set of technologies that can be used to automate, augment, and enhance countless organizational workflows, processes, decisions, and software applications. In some respects, we believe that AI can also be viewed as a foundational software tool as versatile as a spreadsheet or engineering Computer-Aided Design (“CAD”) tool.

AI solutions can be deployed to outright replace repetitive human tasks, for both skilled and unskilled workers, or to augment the skills of workers involved in complex operations. In general, AI currently works well when automating repetitive, high-volume, low-value-added tasks, such as straightforward consumer product and service inquiries to call centres or internal IT help desks and HR department requests.

To be effective, AI solutions in use today typically require significant amounts of input or training data, such as historical business transactions, images (e.g., medical), or consumer retail purchasing data. With large amounts of data, AI algorithms can be rapidly trained to perform analysis, predict outcomes, provide recommendations, or make decisions. Importantly, data sources (aided by advances in distributed database technologies) are rapidly increasing as connected mobile devices, new IoT solutions, and the adoption of cloud-based applications (social, SaaS, etc.) continue to proliferate. In the future, as AI tools and methods improve, AI will likely require less data to be effective and several AI technologies will be combined to solve higher level, more complex process problems.
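
The workflow described here, large amounts of labeled historical data in and predictions or recommendations out, is essentially supervised machine learning. The sketch below shows that pattern on synthetic "transaction" data with scikit-learn; the features, labels and model choice are illustrative assumptions, not a recommendation for any specific business problem.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 'historical business transactions': a few
# numeric features and a binary outcome (e.g., churn / no churn).
rng = np.random.default_rng(42)
n = 10_000
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

# Predict outcomes for held-out records: the 'recommendation' step.
print("holdout accuracy:", round(model.score(X_te, y_te), 3))
```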

How is AI Being Applied Today?

AI application development is currently taking place in a variety of settings, including AI startups (building AI-enabled solutions for vertical and horizontal markets), university-sponsored AI incubators, custom projects within corporate IT departments, and throughout both the traditional enterprise software and the consumer-focused software industry. New applications of AI technology are being touted in the media almost daily; here we note just a few examples (also see Exhibit 13):

• Improved accuracy in credit card fraud detection;
• Investment portfolio management recommendations;
• Disease diagnosis based on medical imaging;
• Self-driving vehicles;
• Facial recognition for security uses;
• Automated image labelling and classification;
• Call centre automation and support;
• Voice- and text-based language translation;
• Product recommendations for on-line shoppers;
• Supply chain and manufacturing planning optimization;
• Predicting industrial equipment failure; and
• Monitoring large volume data flows from IoT networks.

(Source: Cantech Letter)

How is AI disrupting the financial industry

NEW YORK – Artificial intelligence (AI), along with other financial technology (fintech) innovations, is significantly changing the way financial businesses are run, especially in fields like trading, insurance and risk management, leading the traditional financial industry into a new era.

Robots replacing humans

Back in 2000, Goldman Sachs’ New York headquarters employed 600 traders, buying and selling stock on the orders of the investment bank’s clients. Today there are just two equity traders left, as automated trading programs have taken over the rest of the work.

Meanwhile, BlackRock, the world’s biggest money manager, also cut more than 40 jobs earlier this year, replacing some of its human portfolio managers with artificially intelligent, computerized stock-trading algorithms.

Those two big companies are not the only financial institutions replacing human jobs with robots.

By 2025, AI technologies will reduce the number of employees in the capital markets by 230,000 worldwide, according to a report by the financial services consultancy Opimas.

“Asset managers, analysts, traders, compliance administrators, back-office data collection and analysts are most likely to lose their jobs, because their jobs are easier to be replaced by automation and AI,” Henry Huang, an associate professor at Yeshiva University’s Sy Syms School of Business, told Xinhua.

“The net effect of this kind of automation will be more about increasing the productivity of the workforce than of robots simply replacing people,” said Richard Lumb, group chief executive of Accenture’s Financial Services operating group.

The best automated firms will outperform their competitors by making existing workforces more productive through AI, he added.

While humans are losing jobs in the financial industry, companies are enjoying the benefits brought by AI technologies.

“Initially AI will add the most value and have the largest impacts in compliance (especially anti-money laundering and know-your-customer functions), cybersecurity and robo-advice,” Lumb told Xinhua.

Wall Street embraces fintech

Facing rising pressures from fintech innovations, represented by AI, Wall Street financial institutions choose to embrace the new trend.

“In general, we see the outlook for fintech as strong. Demand for fintech by banks is growing because of regulatory and capital pressures, competition from large technology players like Google and Amazon and the abundance of new security threats,” Lumb said.

The FinTech Innovation Lab, an annual program launched in 2010 by Accenture and the Partnership Fund for New York City to foster fintech growth, has helped New York participants raise more than $440 million.

“The FinTech lab has proven to be a significant program for engagement between entrepreneurial technology companies and New York’s financial industry,” said James D. Robinson III, General Partner and Co-founder of RRE Ventures.

In New York City alone, overall fintech investment has increased from $216 million in 2010 to $2.4 billion in 2016.

“Big new frontiers are only just beginning to open up in fintech – from AI, block chain and robotics to biometrics, augmented reality and cybersecurity,” Lumb said.

Among all the fintech innovations, the block chain carries the highest expectations.

“The block chain will change the way people store information, which is real, spreading fast and cross-border, and its ‘de-centric’ feature will allow everyone to know what other people are doing. The application of block chain in finance will once again bring about a revolutionary impact on the industry, just like AI does,” said Huang.

Fintech in China

Although it is hard to tell which country is leading the fintech innovations, many experts agree that China has outperformed other countries in fintech services adoption.

“The work in China has been dramatically ahead of anywhere else in the world,” said Jim Bruene, founder of Finovate conferences, which showcase cutting-edge banking and financial technology.

With more intelligent, in-context financial services, especially commerce activities built around social media applications, “China is likely five or six years ahead of the United States,” Bruene told Xinhua.

The latest report by Ernst & Young showed that China’s fintech adoption rate came in at 69 percent in an index that measures users’ activity in various areas, including money transfer, payments, investments, borrowing and insurance, the highest among 20 major markets globally.

WeChat Pay, the e-payment platform built inside the 900-million-user Chinese social media application WeChat, is seen as the future of fintech services by many experts.

“Messaging is the next web browser, fintech and all other applications are going to live in a mobile messaging application like WeChat, just like how they lived in web browsers,” said Greg Ratner, co-founder and chief technology officer of Troops, a U.S. artificial intelligence startup.

“It is going to be the future and is already happening in China. And I think it will come to the United States in the next five years,” Ratner told Xinhua.

According to Huang’s observation, there is a major difference between China and the United States in their fintech development models.

“In the US, banks are the main driver of fintech innovations, while in China, BAT (Baidu, Alibaba, Tencent) representing the enterprises contribute most to the fintech development,” Huang said.

“Considering the scale of banks in China, they should play a more important role in fintech innovations,” he suggested.

(Xinhua | Updated: 2017-09-18 10:12)

Gene Editing: Promises and Challenges

Source: Harvard T. H. Chan School of Public Health


Presented jointly with NBC News Digital
May 19, 2017

In labs and in clinical trials, scientists are seeking ways to rewrite DNA, a building block of life. Tools such as zinc-finger nucleases (ZFNs), TAL effector nucleases (TALENs) and, more recently, CRISPR/Cas9 have the power to seek out and replace faulty DNA. The possibilities seem almost limitless: with the ability to edit DNA at will, researchers theoretically could wipe out malaria-causing mosquitos, make disease- and pest-proof crops without the need for pesticides, and cure genetic diseases, such as sickle cell anemia and cystic fibrosis. Cancer is another target, with human clinical trials using CRISPR already underway, while, in separate efforts, HIV has been reportedly eliminated in mice thanks to the tool.

But scientists and ethicists alike are worried about the speed at which the gene editing field is moving, and about the implications of the results. In this panel, we discussed the promises and challenges presented by gene editing for individual and public health. What scientific and ethical hurdles must be overcome before tools like CRISPR can move safely and more widely out of the lab and into fields, farms, and hospitals?