Tiny Hard Drive Uses Single Atoms to Store Data

The proof-of-concept device can pack hundreds of times as much data per square inch as today’s most advanced commercially available data-storage technologies.

By manipulating the interactions between individual atoms, scientists report they have created a device that can pack hundreds of times more information per square inch than the best currently available data-storage technologies.

The working prototype is part of a decades-long attempt to shrink electronics down to the atomic level, a feat scientists believe would allow them to store information much more efficiently, in less space and more cheaply. By comparison, tech companies today build warehouse-sized data centers to store the billions of photos, videos and posts consumers upload to the internet daily. Corporations including International Business Machines Corp. and Hewlett Packard Enterprise Co. also have explored research to reduce such space needs.

The so-called atomic-scale memory, described in a paper published on Monday in the scientific journal Nature Nanotechnology, can hold one kilobyte, the equivalent of roughly a paragraph of text.

It may not sound “very impressive,” said Franz Himpsel, a professor emeritus of physics at the University of Wisconsin, Madison, who wasn’t involved in the study. But “I would call it a breakthrough.”

Most previous attempts at encoding information with atoms, including his own, managed roughly one byte, Dr. Himpsel said. And data could be stored only once. To store new information, the “disk” had to be re-formatted, like CD-Rs popular in the ’90s.

With the new device, “we can rewrite it as often as we like,” said Sander Otte, an experimental physicist at Delft University of Technology in the Netherlands and the lead author on the new paper.

The researchers first stored a portion of Charles Darwin’s “On the Origin of Species” on the device. They then replaced that with 160 words from a 1959 lecture by physicist Richard Feynman in which he imagined a world powered by devices running on atomic-scale memory.

To build their prototype, the scientists peppered a flat copper bed with about 60,000 chlorine atoms scattered at random, purposely leaving roughly 8,000 empty spaces among them. A mapping algorithm guided the tiny, copper-coated tip of a high-tech microscope to gently pull each chlorine atom to a predetermined location, creating a precise arrangement of atoms and neighboring “holes.”

The team also crafted a language for their device. The stored information is encoded in the patterns of holes between atoms. The atom-tugging needle reads them as ones and zeros, turning them into regular binary code.

The researchers marked up the grid with instructions that cued the software where it should direct the needle to write and read data. For instance, a three-hole diagonal line marked the end of a file.
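The scheme above amounts to ordinary binary encoding, with atom/vacancy positions standing in for bits. A minimal sketch of the idea, assuming a simple 8-bits-per-character ASCII layout (the paper's actual grid layout and markers differ; this is a toy model, not the Delft team's encoding):

```python
# Illustrative sketch: text as a pattern of "atom" (1) and "hole" (0)
# positions, the way bits are mapped onto chlorine-atom placements.
# The real device's block layout and marker codes are not modeled here.

def text_to_bits(text: str) -> list[int]:
    """Convert a string to a flat list of bits (8 per character)."""
    return [int(b) for ch in text.encode("ascii") for b in f"{ch:08b}"]

def bits_to_text(bits: list[int]) -> str:
    """Reassemble 8-bit groups back into characters."""
    chars = []
    for i in range(0, len(bits), 8):
        byte = bits[i:i + 8]
        chars.append(chr(int("".join(map(str, byte)), 2)))
    return "".join(chars)

message = "EVOLUTION"
pattern = text_to_bits(message)   # 1 = atom present, 0 = vacancy ("hole")
assert bits_to_text(pattern) == message
```

Because a vacancy can be refilled by tugging a neighboring atom into it, the same pattern can be erased and rewritten, which is what distinguishes this device from earlier write-once atomic memories.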

“That’s what I really love in this work,” said Elke Scheer, a nanoscientist at the University of Konstanz in Germany not involved with the study. “It’s not just physics. It’s also informatics.”

Writing the initial data to the device took about a week, though the rewriting process takes just a few hours, Dr. Otte said.

“It’s automated, so it’s 10 times faster than previous examples,” said Christopher Lutz, a staff scientist at IBM Research-Almaden in San Jose, Calif. Still, “this is very exploratory. It’s important not to see this one-kilobyte memory result as something that can be taken directly to a product.”

Reading the stored data is much too slow to have practical applications soon. Plus, the device is stable for only a few hours at extremely low temperatures. To be competitive with today’s hard drives, the memory would have to persist for years and work in warmer temperatures, said Victor Zhirnov, chief scientist at the Semiconductor Research Corp., a research consortium based in Durham, N.C.

When Dr. Otte’s team took the memory out of the extremely low-temperature environment in which it was built and stored, the information it held was lost. Next, his team will explore other metal surfaces as well as elements similar to, but heavier than, chlorine, to see if that improves the device’s stability.

“There’s many combinations to play with,” he said.

(Written by Daniela Hernandez, Wall Street Journal. Further readings: Nature Nanotechnology.)

Why Data Is The New Oil

Artificial intelligence is only as good as the data it crunches.

The field of artificial intelligence is red hot thanks in part to big companies like Google, Facebook, and Microsoft using AI-related techniques to train computers to recognize objects in photos and understand human language.

But these companies were only able to train their computers to perform these difficult feats because they have the enormous quantities of data required. Companies that want to use artificial intelligence techniques like machine learning to improve their businesses will need to have or acquire copious amounts of data, which will only become more valuable as the field of AI advances.

That’s one of the takeaways from a panel on artificial intelligence Monday in Aspen, Colo. during Fortune’s annual Brainstorm Tech conference.

“Data is the new oil,” Shivon Zilis, a partner with the venture capital firm Bloomberg Beta, said about data’s increasing value.

Although companies like Google, Facebook, and Amazon have open-sourced their own artificial intelligence software so any programmer can access the code and use it to build apps, they are not making the necessary data available, Zilis explained. These companies release their AI software toolkits for free so they can recruit more engineers, she said.

When it comes to competition, it’s the data these companies possess that matters more than the AI software tools they use and release to the public.

David Kenny, the general manager of IBM’s Watson data-crunching service, agreed with Zilis and said “the value of data goes up every day AI advances.”

“Data will become a currency,” Kenny said. He also explained that only 20% of the world’s information is stored on the Internet, with the other 80% being privately held within companies and organizations.

Additionally, Zilis discussed the history of so-called “AI winters”: periods in the mid-1970s and from the late 1980s to the early 1990s when the hype around AI failed to meet the public’s expectations, leading to years of declining research.

Today, however, the rise of the cloud and the ability to access tremendous computing power to analyze huge amounts of data have the potential to strengthen existing AI algorithms and lead to new, more powerful ones. Because of these technological developments, Zilis does not believe companies will lose interest in AI.

“Are we going to hit a winter?” asked Zilis. “Absolutely not.”

( Source: Fortune Magazine, written by Jonathan Vanian)

Global Genomics Market Outlook: 2015-2020

Genomics is the discipline that analyzes the structure and function of genomes. It uses various sampling, sequencing, and data analysis and interpretation techniques to decode, assemble, and analyze genomes. Knowledge of an organism’s complete set of DNA helps identify certain genetic diseases, determine the best course of treatment, and contribute to precision medicine.

With the significant decrease in the sequencing costs and rising investments in the pharmaceutical industry, the global genomics market is forecast to grow at a CAGR of 15.1% to be worth $19,938.6 million by 2020.
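The quoted endpoint and growth rate imply a base-year market size. A quick back-of-the-envelope check, assuming a five-year 2015–2020 horizon as suggested by the section title (the report itself only states the 2020 endpoint):

```python
# Back-of-the-envelope check on the forecast: a market reaching
# $19,938.6M in 2020 at a 15.1% CAGR implies the base value below.
# (The 5-year 2015 -> 2020 horizon is an assumption.)

end_value = 19_938.6      # $M in 2020, from the report
cagr = 0.151              # 15.1% compound annual growth rate
years = 5                 # assumed forecast horizon

base_value = end_value / (1 + cagr) ** years
print(f"Implied 2015 market size: ${base_value:,.1f}M")  # ~ $9,870M
```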

This growth is further driven by technological innovations in bioinformatics, increasing clinical capabilities, and more clinically relevant sequencing timescales. However, the need for significant clinical investment, a lack of funding in emerging markets, rising consolidation (mainly in the instruments market), and ethical and legal challenges will constrain industry growth during the forecast period.

The global genomics market is segmented by methods, technology, instruments, consumables, services, and geography. The genomics industry is still at a nascent stage, with many untapped markets across the globe. The sequencing method, however, is relatively mature, especially in developed markets. As the scale of genome data grows, the data analysis and interpretation market is expected to grow at a significant rate in the near future. Next-generation DNA sequencing (NGS) technology has transformed biomedical research, making genome and RNA sequencing an affordable and commonly used tool for a wide variety of research applications. As a result, the market has been stressed to manage the enormous data output from this process: the complexity and sheer volume of data generated by NGS has led genomic centers to form bioinformatics teams to analyze it.

North America is the largest market in the global genomics industry and is expected to dominate during the forecast period, with the U.S. contributing a major share, followed by Europe and Asia-Pacific. The Asian markets, especially India and China, are expected to see a boost in demand for genomics during the forecast period as a result of economic development, increasing genetic research and development activity, drastically reduced costs of mass-scale genetic testing, and the growing focus of major players on the region.

The key players in the global genomics market are Affymetrix, Inc., Agilent Technologies, BGI (Beijing Genomics Institute), Illumina, Inc., Thermo Fisher Scientific, Inc., Bio-Rad Laboratories, Inc., Cepheid, GE Healthcare, Qiagen N.V., Roche Holding AG, Pacific Biosciences of California, Inc., Oxford Nanopore Technologies Ltd., Beckman Coulter Genomics, Inc., PerkinElmer, Inc., DNASTAR, Inc., Genomatix Software GmbH, and GenoLogics Life Sciences Software, Inc.

The global genomics market is segmented by methods, technology, instruments, consumables, services, and geography:

Genomics Methods/Stages
– Sampling
– Sequencing
– Analysis
– Interpretation
– Application

Genomics Technology
– PCR
– Sequencing
– Microarray
– Nucleic acid Extraction & Purification

Genomics Instruments
– PCR
– NGS Platforms
– DNA Microarrays
– Nucleic acid Extraction and Purification Systems
– DNA Sequencers
– Others

NGS Platforms
– Illumina
– Thermo
– Roche
– Pacific Biosciences

Genomics Consumables
– PCR
– DNA Sequencing
– Nucleic acid extraction and purification systems
– Genechips
– Microarrays
– Others

Genomics Services
– Laboratory Services
– Software

Genomics Market, By Geography
– North America
o U.S.
o Canada
– Europe
o U.K.
o Germany
o France
o Italy
o Spain
o Rest of Europe
– Asia-Pacific
o Japan
o China
o India
o Rest of Asia-Pacific
– Rest of the World
o Latin America
o Middle East and Africa

(Source: PRNewswire)

Using deep learning to analyze genetic mutations

The full article, written by David Beyer, can be found here: Deep learning meets genome biology. Key takeaways:

  • The application of deep learning to genomic medicine is off to a promising start; it could impact diagnostics, intensive care, pharmaceuticals and insurance.
  • The “genotype-phenotype divide”—our inability to connect genetics to disease phenotypes—is preventing genomics from advancing medicine to its potential.
  • Deep learning can bridge the genotype-phenotype divide, by incorporating an exponentially growing amount of data, and accounting for the multiple layers of complex biological processes that relate the genotype to the phenotype.
  • Deep learning has been successful in applications where humans are naturally adept, such as image, text, and speech understanding. The human mind, however, isn’t intrinsically designed to understand the genome. This gap necessitates the application of “super-human intelligence” to the problem.
  • Efforts in this space must account for underlying biological mechanisms; overly simplistic, “black box” approaches will drive only limited value.
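Any model bridging the genotype-phenotype divide first has to turn a DNA sequence into the numeric input a deep network consumes. A minimal, generic illustration of that encoding step (this is a standard one-hot scheme, not Deep Genomics' actual pipeline):

```python
# Toy illustration of the first step a genotype-to-phenotype model needs:
# one-hot encoding a DNA sequence into the numeric array a network takes.
# Generic sketch only; not the method used by Deep Genomics.

BASES = "ACGT"

def one_hot(seq: str) -> list[list[int]]:
    """One-hot encode a DNA sequence: one length-4 vector per base."""
    return [[1 if base == b else 0 for b in BASES] for base in seq.upper()]

encoded = one_hot("GATTACA")
assert len(encoded) == 7
assert encoded[0] == [0, 0, 1, 0]  # 'G' occupies the third slot of A,C,G,T
```

Downstream layers then learn which combinations of these encoded positions correlate with, say, splicing changes or disease phenotypes.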

(Source: Deep Genomics)

Liberating Machine Vision From the Machines

Until recently, computer vision — used most widely in manufacturing — and mainstream computing technology have existed in parallel worlds. Along with other factory floor technologies, computer vision tends to be machine-specific and hardware-driven, and makes little if any use of the Internet. Many of the advances we take for granted in modern computing — ubiquitous connectivity, unlimited data storage in the cloud, insights drawn from massive unstructured data sets — have yet to be applied systematically to the factory floor in general and to computer vision specifically.

It’s no surprise when you consider that until recently most computer vision software was written by computer vision hardware makers, built on embedded systems without open APIs. What comes to mind when you think of the software that came bundled with your scanner, your Wi-Fi router, your car’s navigation system? Balky, inflexible and unintuitive. The software isn’t much more than a utility to run the hardware.

But this closed world is being broken open by a convergence of emerging technologies:

  • The proliferation of cheap, high pixel-density camera sensors
  • Open implementations of vision algorithms, machine learning, and statistical tools
  • Large amounts of cheap computing power, becoming virtually limitless in the cloud

These technologies offer all the raw materials needed for a massive shift in how computer vision is practiced. It’s a shift from focusing on the raw material of visual data — the pixels and bitmaps generated by specific cameras — to extracting data from images and using statistical and data science techniques to draw insights.

This new approach to computer vision has a powerful application amid an American manufacturing renaissance emphasizing rapid product cycles and mass customization. Whereas the archetypal American factory was built around systematic, repeatable function, modern manufacturing is about flexibility, adaptability and high efficiency. We’ve gone from Henry Ford’s “any colour he wants so long as it is black” to Google’s Moto X phone — customer-configured, manufactured in the U.S. and delivered within four days.

Unrelenting Quality Demands

But that need for flexibility on the manufacturing line is in tension with unrelenting quality demands that manufacturers face across industries and down supply chains. Despite huge investments in quality control, automakers recalled nearly as many cars as they sold in the U.S. in 2012. Ford and GM made warranty payments of $5.7 billion in 2012, more than half of the $10.5 billion they reported in net income. Automakers are now paying suppliers prices based on benchmarks like defects per million, terminating those who fall below thresholds, and pushing liability for warranty claims down to their suppliers.

While automation has transformed much of manufacturing, a surprising amount of quality control is still done by hand or otherwise relies on human judgement. Many types of inspection require visual evaluation, but manufacturers’ experience with computer vision in quality control has been a frustrating one. Walk into a factory and ask the manager about computer vision, and you are likely to hear a variant of, “Oh yeah, we tried that, it didn’t work very well, we had to throw it out.”

Existing machine vision uses a 30-year-old architecture that’s capital-intensive and severely constrained in its abilities. Today’s computer vision systems operate as stand-alone islands, rarely connected to the Internet. Every time needs change, each installation has to be manually reprogrammed, unit by unit.

Worse still, little data is kept, making it difficult to spot trends or find correlations among multiple variables. Most manufacturing quality inspection by machine vision today is pass/fail. If the initial inspections of a production run pass the quality inspection, the machines are turned on and the testing data overwritten.

The New Computer Vision

The new computer vision, liberated from its hardware shackles and empowered by connectivity, unlimited data storage and Big Data-style statistical analysis, is beginning to change the role of vision in manufacturing. Instead of being a reactive tool to detect defects, computer vision is becoming a data collection tool supporting defect prevention initiatives, improving understanding of complex processes, and enabling greater collaboration across entire supply chains in real time.

With modern web services, once the data is collected it is easily aggregated into dashboards and distributed to production workers, quality engineers, and management, locally or around the globe. Manufacturers can share data with supply chain partners, making it easier to monitor their suppliers or to satisfy reporting requirements for customers.

One of our customers, a large manufacturer of high-quality bolts and other fasteners to automakers, is bringing this vision to life. Their system uses computer vision to analyze the grain pattern of bolts. If the pattern is wrong — if the grain lines end on a load-bearing surface — the bolt head can shear off when a factory worker torques it down, or worse, when it’s already holding an engine block in place.

The company is capturing images using a $100 scanner purchased at Best Buy. All the intelligence is in the software, running remotely on Amazon’s cloud computing platform. The system compares each image to thousands of other metal grain photos stored in the cloud, looking for patterns that correlate with part failure.

The bolt maker is now exploring extending its computer vision system to its steel supplier, which will capture images of metal grain from each batch of steel rods it ships to the fastener maker. The fastener maker will then be able to analyze increasingly massive data sets to correlate grain patterns in the steel rods with quality measurements in the finished bolts.

Instead of examining only a single station, large data sets let companies trace complex interactions down the production line and across the supply chain. Upstream stations may produce parts that are technically within tolerance, but when certain ranges of acceptable variation are combined, they cause downstream defects after installation.

For our bolt-making customer, the raw material (a steel rod) and the batch of bolts made from that rod may each be well within spec, but retrospective data analysis may show that certain combinations of grain pattern in the steel rods lead to higher failure rates on bolts used for specific applications.
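The retrospective analysis described above boils down to computing correlations between an upstream feature and a downstream failure rate. A minimal sketch, with entirely hypothetical per-batch data and field names (Sight Machine's actual pipeline is not described in this detail):

```python
# Sketch of the retrospective analysis: correlating a measured grain
# feature in steel-rod batches with failure rates of the bolts made
# from them. All data and variable names below are hypothetical.

from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-batch records: grain-line angle vs. observed failures
grain_angle  = [12.0, 15.5, 9.8, 21.3, 18.1, 25.0]
failure_rate = [0.002, 0.004, 0.001, 0.009, 0.006, 0.013]

r = pearson(grain_angle, failure_rate)
print(f"correlation: {r:.2f}")  # a strong positive r flags the feature
```

With data kept in the cloud rather than overwritten, the same computation can be rerun at any time as new variables of interest arise.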

As automakers adopt the system, it will gain even more power. Should an automaker report that the fastener maker’s bolts are breaking and leading to warranty repairs, the parts supplier now has the analytical tools to determine the source of the problem. They can run analysis to determine whether the failed bolts came from a particular batch of steel rods, or were made on a day when their line was adjusted to a specific tolerance – or whether the problem wasn’t with the bolt itself, but rather with the worker on the left side of the assembly line who consistently overtorques the engine bolts.

Once the captured data is in the cloud, such systems can store an unlimited amount of data indefinitely, for reanalysis and retrieval anytime. They let plants run correlations over time, track trends and identify root causes, and as new variables of interest arise, go back and analyze previously acquired data.

As each plant gets smarter, the whole system gets smarter. Like Google learning more about consumers with their every search and click, we’re able to aggregate our learnings from quality issues common across industries.

Ultimately, vision can turn physical world challenges into Big Data problems. We know how to solve these Big Data problems better and better every day.

(Written by Jon Sobel, CEO and co-founder of Sight Machine Inc.)
