Artificial intelligence is learning to read your mind—and display what it sees

 

Artificial intelligence has taken us one baby step closer to the mind-reading machines of science fiction. Researchers have developed “deep learning” algorithms—roughly modeled on the human brain—to decipher, you guessed it, the human brain. First, they built a model of how the brain encodes information. As three women spent hours viewing hundreds of short videos, a functional MRI machine measured signals of activity in the visual cortex and elsewhere. A popular type of artificial neural network used for image processing learned to associate video images with brain activity. As the women watched additional clips, the algorithm’s predicted activity correlated with actual activity in a dozen brain regions. It also helped the scientists visualize which features each area of the cortex was processing. Another network decoded neural signals: Based on a participant’s brain activity, it could predict with about 50% accuracy what she was watching (by selecting one of 15 categories including bird, airplane, and exercise). If the network had trained on data from a different woman’s brain, it could still categorize the image with about 25% accuracy, the researchers report this month in Cerebral Cortex. The network could also partially reconstruct what a participant saw, turning brain activity into pixels, but the resulting images were little more than white blobs. The researchers hope their work will lead to the reconstruction of mental imagery, which uses some of the same brain circuits as visual processing. Translating from the mind’s eye into bits could allow people to express vivid thoughts or dreams to computers or to other people without words or mouse clicks, and could help those with strokes who have no other way to communicate.
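At its core, the decoding step the researchers describe (predicting which of 15 categories a participant was watching from her brain activity) is a multi-class classification problem. The sketch below is a minimal, hypothetical illustration using simulated voxel data and a simple linear classifier; the actual study used deep neural networks on recorded visual-cortex signals, so every variable name, shape, and number here is an assumption, not the authors' method.

```python
# Illustrative sketch only: decoding a viewed-video category from fMRI activity.
# The real study used deep neural networks on recorded cortical signals; here we
# simulate data and use a simple linear classifier to show the general idea.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_voxels, n_categories = 3000, 500, 15     # assumed sizes
X = rng.normal(size=(n_samples, n_voxels))            # stand-in for voxel activity patterns
y = rng.integers(0, n_categories, size=n_samples)     # category watched (bird, airplane, ...)

# Within-subject decoding: train and test on the same (simulated) brain.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
decoder = LogisticRegression(max_iter=1000)
decoder.fit(X_train, y_train)

# On random data, accuracy hovers near chance (1/15); the study reported roughly 50%
# within-subject and about 25% when the decoder was trained on a different person's brain.
print("decoding accuracy:", decoder.score(X_test, y_test))
```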

(Source: Science and AAAS, by Matthew Hutson)

Novel CRISPR-derived ‘base editors’ surgically alter DNA or RNA, offering new ways to fix mutations

Since the start of the CRISPR craze 5 years ago, scientists have raced to invent ever-more-versatile or efficient variations of this powerful tool, which vastly simplifies the editing of DNA. Two studies published in Science and Nature this week broaden CRISPR’s reach further still, honing a subtler approach to modifying genetic material that’s called base editing. One study extends a strategy for editing DNA, whereas the other breaks new ground by base editing its molecular cousin, RNA.

Both open new avenues for genetic research and even curing diseases. “One shouldn’t view base editors as better than CRISPR—they’re just different,” says David Liu, a chemist at Harvard University who pioneered DNA base editing in a paper in Nature last year and co-authored the latest Nature paper. “It’s like, what’s better, a boat or a car?”

CRISPR, adapted from a primitive bacterial immune system, does its handiwork by first cutting the double-stranded DNA at a target site in a genome. Base editing, in contrast, does not cut the double helix, but instead uses enzymes to precisely rearrange some of the atoms in one of the four bases that make up DNA or RNA, converting the base into a different one without altering the bases around it. That ability greatly increases the options for altering genetic material. “It’s a very worthwhile addition and it’s here to stay,” says CRISPR researcher Erik Sontheimer of the University of Massachusetts Medical School in Worcester.

Many human diseases are caused by the mutation of a single base. CRISPR has difficulty correcting these so-called point mutations efficiently and cleanly, so base editing could provide a more effective approach. After Liu’s initial report, a group in China used DNA base editing to correct a disease-causing mutation in human embryos cloned from a patient with a genetic blood disorder.

Conventional CRISPR uses a guide RNA (gRNA) coupled with an enzyme known as a nuclease, most commonly Cas9, that together attach to a specific stretch of DNA bases; the nuclease then snips the double helix. A cellular repair mechanism attempts to rejoin the cut DNA ends, but occasionally inserts or deletes bases, which turns the DNA code into gibberish and can knock out a targeted gene. “Gene editing based on nucleases is very good at inactivating genes,” says CRISPR researcher Feng Zhang of the Broad Institute in Cambridge, Massachusetts.

Yet CRISPR, he notes, “is less efficient at making precise changes.” To fix a point mutation, a CRISPR-Cas9 system must also introduce a strand of “donor” DNA that has the correct base and then rely on a second cellular mechanism called homology-directed repair (HDR). But HDR works poorly unless cells are dividing, which means this strategy doesn’t function in, say, brain and muscle cells that no longer copy themselves. Even in dividing cells, the donor DNA rarely slots into the cut spot.


Figure: Getting to the point of mutations. Base editors borrow from CRISPR's components (guide RNAs and Cas9 or other nucleases) but do not cut the double helix; instead they chemically alter single bases with deaminase enzymes such as TadA and ADAR.

Base-editing systems, which borrow heavily from CRISPR’s tool kit, readily work in nondividing cells. DNA has four nucleotide bases—A, C, T, and G—and base editing changes one to another. In Liu’s 2016 study, his team fused gRNA with a “dead” Cas9 (dCas9) that cannot cut the whole double helix but still unzips it at the correct spot. To this complex the researchers tethered an enzyme, APOBEC1, which triggers a series of chemical reactions that ultimately change C to T. DNA’s base-pairing rules, which specify that a T on one DNA strand pairs with an A on the opposite strand, govern a subsequent change. The dCas9 was further modified to nick the unedited strand, which gooses the cell’s DNA repair mechanism into converting the G that originally paired with C into an A that pairs with the new T.

That first DNA base editor could not address the most common point mutations associated with human diseases—accounting for about half—which have A•T where there should be G•C. The new editor from Liu’s group can now make this fix. The team again fused gRNA with a dCas9, but there is no known enzyme that can convert A to G in DNA. So the lab developed one from TadA, an enzyme in the bacterium Escherichia coli. The new enzyme converts A to a base called inosine, or I. Either a cellular repair mechanism or the process of the DNA copying itself then changes the I to a G. “The big deal here is engineering the TadA enzyme to do something fairly unnatural,” says George Church of Harvard, who studies CRISPR. “My hat is off to them.”
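Purely as an illustrative aid (not taken from either paper), the toy sketch below models the base-pairing bookkeeping described above: a cytosine base editor converts C to T and repair of the nicked complementary strand turns the paired G into an A, while the new adenine base editor converts A to inosine, which the cell reads and copies as G. The sequences and function names are made up for illustration.

```python
# Toy model of the base-pairing bookkeeping behind DNA base editing.
# Purely illustrative; real editors (APOBEC1- or TadA-based fusions with dCas9)
# act chemically on one strand and rely on cellular repair for the other.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the base-paired partner strand (written in the same orientation, for simplicity)."""
    return "".join(COMPLEMENT[b] for b in strand)

def cytosine_base_edit(strand: str, pos: int) -> tuple[str, str]:
    """C to T on the edited strand; repair of the nicked strand turns the paired G into A (C•G becomes T•A)."""
    assert strand[pos] == "C", "cytosine base editors target a C"
    edited = strand[:pos] + "T" + strand[pos + 1:]
    return edited, complement(edited)

def adenine_base_edit(strand: str, pos: int) -> tuple[str, str]:
    """A to inosine on the edited strand; inosine is read as G, so the A•T pair ends up as G•C."""
    assert strand[pos] == "A", "adenine base editors target an A"
    edited = strand[:pos] + "G" + strand[pos + 1:]   # inosine read and replicated as G
    return edited, complement(edited)

print(cytosine_base_edit("GACCT", 2))  # ('GATCT', 'CTAGA')
print(adenine_base_edit("GAACT", 2))   # ('GAGCT', 'CTCGA')
```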

Zhang’s team created its RNA base-editor system by fusing gRNA with a different dead nuclease, dCas13, and a natural enzyme that converts A to I in RNA. Unlike in DNA, that’s where the changes stop. The I-containing RNA simply performs as if it had a G in that spot.

Because RNA carries the genetic message from DNA to the cell's protein-making factories, or can directly perform acts such as gene regulation, it, too, is an appealing target for therapies. But an RNA only sticks around in a cell for a short time. That means RNA base editors likely would have to be repeatedly administered to work as a therapeutic, which Zhang and his co-authors suggest may make sense for transient conditions, such as localized inflammation.

Although the short-lived nature of RNA makes base editing less attractive for many therapies, Sontheimer sees an upside, too. “In some ways, it’s safer to work on RNA,” he says. Researchers worry that genome editing could accidentally affect the wrong part of the genome—a change that would be permanent with a DNA base editor. “If there’s some degree of off targeting, you’re not permanently etching those mistakes into the underlying genome” with an RNA base editor, Sontheimer says.

Church says base editing should be evaluated “case-by-case” for whether it offers advantages over CRISPR and other technologies that alter nucleic acids. “People make it sound like [changing bases] was not possible before. In fact it was hard or just inefficient,” he notes.

Zhang and Liu stress that it could be several years before base-editing therapies enter clinical trials—and longer until it’s clear whether the strategy offers advantages over existing gene therapies. “It’s both scientifically short-sighted and long-term incorrect to conclude that base editing is going to be a better way to do human genetic therapy,” Liu says. What’s already clear, however, is that powerful alternatives to standard CRISPR are now in the game.

(Source: Science and AAAS, by Jon Cohen)

These gene-edited pigs are hardy and lean—but how will they taste?

“Lean” may not be the term you associate with a good bacon strip or pork chop. But these leaner, cold-hardier piglets, created through CRISPR gene editing, could be a hit with the pork industry. The threat of hypothermia forces cold-climate farms to invest in heat lamps and other accommodations for their shivering piglets. And fatter pig breeds—though tasty—tend to grow more slowly and consume more feed than leaner ones to produce the same amount of meat. As an alternative to conventional breeding, researchers used the gene-editing technology CRISPR to introduce a gene called UCP1. Thought to have disappeared from the ancestors of modern pigs about 20 million years ago, the gene helps cells dissipate more heat and burn fat. Twelve transgenic piglets endowed with a mouse UCP1 gene were better able to maintain their body temperature than their unmodified counterparts when they were exposed to cold for a 4-hour period, the authors report today in the Proceedings of the National Academy of Sciences. And when the pigs were killed, fat made up less of their carcass weight—about 15% versus 20% in unmodified controls—while their average percentage of lean meat increased from about 50% to 53%. Will less fat make them less tasty? The authors don’t expect UCP1 to reduce the fat that accumulates in muscle fibers and contributes to flavor, but they’re now producing more pigs to make sure.

(Source: Science and AAAS, by Kelly Servick)

Why is AI (Finally) Happening Now?

AI research efforts have been underway for many decades, although university and technology industry research funding was repeatedly curtailed when early promises of the technology failed to materialize. Early AI algorithm development was also greatly constrained by a lack of adequate computing power and by limited access to the large data sets needed to train and test AI applications.

In recent years, the combination of several important factors has helped to rejuvenate interest in AI research and attract capital investment in AI venture start-ups and internal enterprise R&D projects. Key factors for AI’s resurgence include:

1. Significant advances in machine learning approaches and new techniques for more rapidly constructing and training complex algorithms;

2. The development of very powerful and inexpensive cloud computing resources and the adoption of powerful graphics processing units (“GPUs”), which are now widely recognized as being very well suited to running AI programs. Today, several companies are developing processors designed specifically for running AI algorithms, and, longer term, quantum computing will likely also be used to build AI models. In addition, cluster computing technologies, such as Apache Spark and MapReduce, enable the development of complex machine learning models on very large data sets (see the sketch after this list);

3. The availability of very large structured and unstructured data sets, created by enterprise “big data” projects built on technologies such as Hadoop, the widespread adoption of enterprise SaaS platforms, the growth of online retailers, massive social media networks, IoT sensor data, and the proliferation of mobile technologies.
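To make the cluster computing point in item 2 concrete, the following is a minimal, hypothetical sketch of training a model with Apache Spark's MLlib over a large data set; the file path, column names, and data are placeholders, not a reference from the report.

```python
# Minimal sketch of distributed model training with Apache Spark MLlib.
# The input path and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ai-training-sketch").getOrCreate()

# A large structured data set (e.g., historical transactions) stored as Parquet.
df = spark.read.parquet("hdfs:///data/transactions.parquet")

# Assemble raw columns into the single feature vector MLlib estimators expect.
assembler = VectorAssembler(inputCols=["amount", "merchant_id", "hour"],
                            outputCol="features")
train_df = assembler.transform(df)

# Fit a simple classifier (e.g., fraud vs. not fraud) across the cluster.
model = LogisticRegression(featuresCol="features", labelCol="label").fit(train_df)
print("training AUC:", model.summary.areaUnderROC)

spark.stop()
```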

In addition to the above key factors, an increasingly open approach to enterprise and cloud-based APIs (application programming interfaces) and competition in cloud computing services have also helped spur the development of AI.

We note that IBM Watson famously competed on Jeopardy! in February 2011 (the IBM Watson group was later formed in January 2014), an event that helped to significantly raise the public profile of recent advances in AI technology. Watson is IBM’s analytical computing system (see Exhibit 9) that is now being used in many different applications across multiple vertical industries.

Finally, in the last two years, universities, incubators, leading software companies, and venture capitalists have all significantly increased their funding of advanced data science research and new AI-related ventures, as evidenced by the proliferation of North American AI technology start-ups shown in Exhibits 10 and 11. Besides rapid growth in traditional venture capital investment, US corporate venture capital investment in AI is up 7x since 2013, reaching $1.77B in 2016 (126 deals) and $1.8B in 1H17 (88 deals), according to CB Insights. As in previous technology cycles, we anticipate that many of these start-ups will fail or be absorbed into the mainstream (cash-flow-positive) enterprise software industry and other industry leaders.

Should AI be Considered a General Purpose Technology?

As discussed earlier, the field of AI comprises several different sub-disciplines, all of which continue to develop and evolve rapidly. We believe that AI should be considered a “general purpose” technology. That is, AI is a very broadly applicable set of technologies that can be used to automate, augment, and enhance countless organizational workflows, processes, decisions, and software applications. In some respects, we believe that AI can also be viewed as a foundational software tool as versatile as a spreadsheet or engineering Computer-Aided Design (“CAD”) tool.

AI solutions can be deployed to outright replace repetitive human tasks, for both skilled and unskilled workers, or to augment the skills of workers involved in complex operations. In general, AI currently works well when automating repetitive, high-volume, low-value-added tasks, such as straightforward consumer product and service inquiries to call centres or internal IT help desk and HR department requests.

To be effective, AI solutions in use today typically require significant amounts of input or training data, such as historical business transactions, images (e.g., medical), or consumer retail purchasing data. With large amounts of data, AI algorithms can be rapidly trained to perform analysis, predict outcomes, provide recommendations, or make decisions. Importantly, data sources (aided by advances in distributed database technologies) are rapidly increasing as connected mobile devices, new IoT solutions, and the adoption of cloud-based applications (social, SaaS, etc.) continue to proliferate. In the future, as AI tools and methods improve, AI will likely require less data to be effective and several AI technologies will be combined to solve higher level, more complex process problems.
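As a concrete, hypothetical illustration of this train-then-predict workflow, the sketch below fits a model on simulated "historical" sensor readings and then predicts an outcome (equipment failure) for a newly observed record; every feature name, value, and threshold is an assumption made purely for illustration.

```python
# Illustrative sketch of the train-then-predict workflow described above,
# using simulated historical sensor data to predict equipment failure.
# All feature names, values, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Simulated history: temperature, vibration, and hours since last service per machine.
n_machines = 5000
X = np.column_stack([
    rng.normal(70, 10, n_machines),     # temperature (C)
    rng.normal(0.5, 0.2, n_machines),   # vibration (g)
    rng.uniform(0, 10000, n_machines),  # hours since last service
])
# Toy label: machines that run hot and shake, or are overdue for service, tend to fail.
y = (((X[:, 0] > 80) & (X[:, 1] > 0.6)) | (X[:, 2] > 9000)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Predict the outcome (fail / no fail) for a newly observed machine.
print("prediction for new machine:", model.predict([[85.0, 0.7, 9500.0]]))
```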

How is AI Being Applied Today?

AI application development is currently taking place in a variety of settings, including AI start-ups (building AI-enabled solutions for vertical and horizontal markets), university-sponsored AI incubators, custom projects within corporate IT departments, and throughout both the traditional enterprise software industry and the consumer-focused software industry. New applications of AI technology are touted in the media almost daily; we note just a few examples here (see also Exhibit 13):

• Improved accuracy in credit card fraud detection;
• Investment portfolio management recommendations;
• Disease diagnosis based on medical imaging;
• Self-driving vehicles;
• Facial recognition for security uses;
• Automated image labelling and classification;
• Call centre automation and support;
• Voice- and text-based language translation;
• Product recommendations for online shoppers;
• Supply chain and manufacturing planning optimization;
• Predicting industrial equipment failure; and
• Monitoring large-volume data flows from IoT networks.

(Source: Cantech Letter)

How is AI disrupting the financial industry?

NEW YORK – Artificial intelligence (AI), along with other financial technology (fintech) innovations, is significantly changing the way financial businesses are run, especially in fields like trading, insurance and risk management, leading the traditional financial industry into a new era.

Robots replacing humans

Back in 2000, Goldman Sachs's New York headquarters employed 600 traders, buying and selling stock on the orders of the investment bank's clients. Today there are just two equity traders left, as automated trading programs have taken over the rest of the work.

Meanwhile, BlackRock, the world’s biggest money manager, also cut more than 40 jobs earlier this year, replacing some of its human portfolio managers with artificially intelligent, computerized stock-trading algorithms.

Those two big companies are not the only financial institutions replacing human jobs with robots.

By 2025, AI technologies will have displaced about 230,000 workers in capital markets worldwide, according to a report by the financial services consultancy Opimas.

“Asset managers, analysts, traders, compliance administrators, back-office data collection and analysts are most likely to lose their jobs, because their jobs are easier to be replaced by automation and AI,” Henry Huang, an associate professor at Yeshiva University’s Sy Syms School of Business, told Xinhua.

“The net effect of this kind of automation will be more about increasing the productivity of the workforce than of robots simply replacing people,” said Richard Lumb, group chief executive of Accenture’s Financial Services operating group.

The best automated firms will outperform their competitors by making existing workforces more productive through AI, he added.

While humans are losing jobs in the financial industry, companies are enjoying the benefits brought by AI technologies.

“Initially AI will add the most value and have the largest impacts in compliance (especially anti-money laundering and know-your-customer functions), cybersecurity and robo-advice,” Lumb told Xinhua.

Wall Street embraces fintech

Facing rising pressure from fintech innovations such as AI, Wall Street financial institutions have chosen to embrace the new trend.

“In general, we see the outlook for fintech as strong. Demand for fintech by banks is growing because of regulatory and capital pressures, competition from large technology players like Google and Amazon and the abundance of new security threats,” Lumb said.

The FinTech Innovation Lab, an annual program launched in 2010 by Accenture and the Partnership Fund for New York City to foster fintech growth, has helped New York participants raise more than $440 million.

“The FinTech lab has proven to be a significant program for engagement between entrepreneurial technology companies and New York’s financial industry,” said James D. Robinson III, General Partner and Co-founder of RRE Ventures.

In New York City alone, overall fintech investment has increased from $216 million in 2010 to $2.4 billion in 2016.

“Big new frontiers are only just beginning to open up in fintech, from AI, blockchain and robotics to biometrics, augmented reality and cybersecurity,” Lumb said.

Among all the fintech innovations, blockchain carries the highest expectations.

“Blockchain will change the way people store information, which is authentic, spreads fast and crosses borders, and its decentralized feature will allow everyone to know what other people are doing. The application of blockchain in finance will once again have a revolutionary impact on the industry, just as AI does,” said Huang.

Fintech in China

Although it is hard to tell which country is leading the fintech innovations, many experts agree that China has outperformed other countries in fintech services adoption.

“The work in China has been dramatically ahead of anywhere else in the world,” said Jim Bruene, founder of Finovate conferences, which showcase cutting-edge banking and financial technology.

With more intelligent, in-context financial services, especially commerce activities built around social media applications, “China is likely five or six years ahead of the United States,” Bruene told Xinhua.

The latest report by Ernst & Young showed that China’s fintech adoption rate came in at 69 percent, the highest among 20 major markets globally, on an index that measures users’ activity in various areas, including money transfers, payments, investments, borrowing and insurance.

WeChat Pay, the e-payment platform built into the 900-million-user Chinese social media application WeChat, is seen by many experts as the future of fintech services.

“Messaging is the next web browser; fintech and all other applications are going to live in a mobile messaging application like WeChat, just like they lived in web browsers,” said Greg Ratner, co-founder and chief technology officer of Troops, a U.S. artificial intelligence startup.

“It is going to be the future and is already happening in China. And I think it will come to the United States in the next five years,” Ratner told Xinhua.

According to Huang, there is a major difference between China and the United States in their fintech development models.

“In the US, banks are the main drivers of fintech innovation, while in China the BAT enterprises (Baidu, Alibaba, Tencent) contribute most to fintech development,” Huang said.

“Considering the scale of banks in China, they should play a more important role in fintech innovations,” he suggested.

(Xinhua | Updated: 2017-09-18 10:12)

Gene Editing: Promises and Challenges

Source: Harvard T. H. Chan School of Public Health

Presented jointly with NBC News Digital
May 19, 2017

In labs and in clinical trials, scientists are seeking ways to rewrite DNA, a building block of life. Tools such as zinc-finger nucleases (ZFNs), TAL effector nucleases (TALENs) and, more recently, CRISPR/Cas9 have the power to seek out and replace faulty DNA. The possibilities seem almost limitless: with the ability to edit DNA at will, researchers theoretically could wipe out malaria-causing mosquitos, make disease- and pest-proof crops without the need for pesticides, and cure genetic diseases, such as sickle cell anemia and cystic fibrosis. Cancer is another target, with human clinical trials using CRISPR already underway, while, in separate efforts, HIV has been reportedly eliminated in mice thanks to the tool.

But scientists and ethicists alike are worried about the speed at which the gene editing field is moving — and the implications of the results. In this panel, we discussed the promises and challenges presented by gene editing for individual and public health. What scientific and ethical hurdles must be overcome before tools like CRISPR and others can move safely and more widely out of the lab and into fields, farms, and hospitals?