Selected RSS news

What kind of nanomachines will advanced nanotechnology use?

Dr. Richard Jones

Long-term readers of Nanodot will be familiar with the work of Richard Jones, a UK physicist and author of Soft Machines: Nanotechnology and Life, reviewed in Foresight Update Number 55 (2005), page 10. Jones follows Eric Drexler’s lead in Engines of Creation in arguing that the molecular machinery found in nature provides an existence proof of an advanced nanotechnology of enormous capabilities. However, he points to the very different physics governing biomolecular machinery operating in an aqueous environment, on the one hand, and macroscopic machine tools of steel and other hard metals, on the other. He then argues that rigid diamondoid structures doing atomically precise mechanochemistry, as later presented by Drexler in Nanosystems, although at least theoretically feasible, do not form a practical path to advanced nanotechnology. This stance occasioned several very useful and informative debates on the relative strengths and weaknesses of different approaches to advanced nanotechnology, both on his Soft Machines blog and here on Nanodot (for example, “Debate with ‘Soft Machines’ continues”, “Which way(s) to advanced nanotechnology?”, and “Recent commentary”). An illuminating interview with Richard Jones over at h+ Magazine, “Going Soft on Nanotech”, not only presents Jones’s current views but spotlights the lack of substantial effort since 2008 toward resolving these issues:

… RJ: I’m both a fan of Eric Drexler and a critic — though perhaps it would be most correct to say I’m a critic of many of his fans. Like many people, I was inspired by the vision of Engines of Creation, in outlining what would be possible if we could make functional machines and devices at the nanoscale. If Engines set out the vision in general terms, Nanosystems was a very thorough attempt to lay out one possible concrete realisation of that vision. Looking back at it twenty years on, two things strike me about it. One was already pointed out by Drexler himself — it says virtually nothing about electrons and photons, so the huge potential that nanostructures have to control their interaction, which forms the basis of the actually existing nanotechnologies that underlie electronic and optoelectronic devices, is unexplored. The other has only become obvious since the writing of the book. Engines of Creation draws a lot on the example of cell biology as an existence proof that advanced nanoscale machines, operating with atom precision, can be made. This represents one of Drexler’s most original contributions to the formation of the idea of nanotechnology — Feynman’s famous 1959 lecture, in contrast, had very little to say about biology. Since Nanosystems was written, though, we’ve discovered a huge amount about the mechanisms of how the nanomachines of biology actually work, and even more importantly, why they work in the way they do; what this tells us is that biology doesn’t use the paradigm of scaling down macroscopic mechanical engineering that underlies Nanosystems. So while it’s right to say that biology gives us an existence proof for advanced nanotechnology, it doesn’t at all support the idea that the mechanical engineering paradigm is the best way to achieve it. The view I’ve come to is that, on the contrary, the project of scaling down mechanical engineering to atomic dimensions will be very much more difficult than many of Drexler’s followers think. …

H+: In 2005, you proposed six important things MNT [an acronym for “molecular nanotechnology”, otherwise termed “molecular manufacturing” or “atomically precise manufacturing”] proponents could do to bolster the feasibility of MNT. They are listed below. How much progress have they made meeting your challenge?

  • Do more detailed and realistic computer modeling of hypothesized nanomachine components (gears, shafts, etc.) to determine if they would hold their shapes and not disintegrate or bend if actually built.
  • Re-do computer simulations of MNT nanomachines, this time using realistic assumptions about Brownian motion and thermal noise. The nanomachines’ “hard” parts would be more like rubber, and they would experience intense turbulence at all times. Delicate nanomachine mechanisms could be easily destroyed.
  • Re-do nanomachine computer simulations to realistically account for friction between the moving parts and for heat buildup. Heat and vibration could destroy nanomachines.
  • Do more detailed computer simulations of Drexler’s nano-scale motor to make sure it would actually work. He never modeled it down to the level of individual atoms. The motor is a critical component in Drexler’s theoretical nanomachine designs as it powers many of the moving parts.
  • Design a nano-scale valve or pump that selectively moves matter between the nanomachine’s enclosed inner area and the ambient environment. To be useful, nanomachines would need to “ingest” matter, modify it internally, and then expel it, but they would also have to block entry of unwanted matter that would jam up their exposed nano-moving parts. A valve or pump that is 100% accurate at discriminating between good and bad foreign materials is thus needed.
  • Flesh out a convincing, multi-year implementation plan for building the first MNT nanomachines. Either top-down or bottom-up approaches may be pursued. In either case, the plan must be technically feasible and must make sense.


RJ: Not a great deal, as far as I can tell. There was some progress made on points 1, 2 and 3 following the introduction of the software tool Nanoengineer by the company Nanorex, but this seems to have come to a halt around 2008. I don’t know of any progress on 4 and 5. The 2007 Technology Roadmap for Productive Nanosystems from Battelle and the Foresight Nanotech Institute has a good list of things that need to be done to achieve progress with a number of different approaches to making functional nanoscale systems, including MNT approaches, but it does not go into a great deal of detail about how to do them.

In the remainder of the interview, Prof. Jones explains what he means by “soft machines” and describes his current work in the area. He also comments on advances in nanotechnology, on the rate of technology advancement in general, on investment bubbles, on prospects for near-term nanotechnology in various fields, and on things to fear from the development of nanotechnology. His biggest fear: “But rather than worrying about runaway technology, my biggest fear now is the opposite — that we won’t devote enough resources to get the innovation we need.” On this last point, I personally have to agree completely. Other excellent items for further thought:

… But history seems to suggest that having more advanced technologies makes societies less, not more, equal. Access to new technologies gives access to power, and people don’t seem very good at sharing power. …

In the past we had many laboratories (in both the private sector and the public sector) that connected basic science to the people who could convert technological innovations into new processes and products — one thinks in the USA of great institutions like Bell Laboratories. These applied technology labs have been run down or liquidated, and their place has not been fully taken by the new world of spin-outs and venture capital backed start-ups, whose time horizons are too short to develop truly radical innovations in the material and biological realms. So we need to do something to fill that gap. …

On these and other points, the interview is very well worth reading in full. As to Prof. Jones’s view on the difficulty of implementing MNT as presented in Nanosystems, the issues he raises remain largely unanswered. The Roadmap for Productive Nanosystems, released nearly seven years ago, was only a first step toward a complete roadmap, and to my knowledge it has not been extended or updated. Progress with soft nanomachines, especially in structural DNA nanotechnology, has been substantial, but no one has published a detailed implementation path by which such progress could be extended to making and breaking covalent bonds with atomic precision, and from there to nanofactories capable of general-purpose atomically precise manufacturing. There is much to be done, and the necessary investment is, to my knowledge, nowhere in sight.
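The Brownian-motion concern in point 2 of the list above is easy to quantify with equipartition: a harmonic degree of freedom of stiffness k fluctuates with RMS amplitude sqrt(kT/k). Here is a minimal sketch; the stiffness values are order-of-magnitude assumptions for illustration, not figures from the interview or from Nanosystems:

```python
# Back-of-envelope estimate: how large are thermal fluctuations relative
# to the stiffness of a hypothetical nanoscale spring-like component at
# room temperature?
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K
kT = k_B * T        # ~4.1e-21 J, i.e. ~4.1 pN*nm

def rms_fluctuation_nm(k_newton_per_meter):
    """Equipartition: RMS thermal displacement (nm) of a harmonic mode
    of stiffness k, <x^2> = kT / k."""
    return math.sqrt(kT / k_newton_per_meter) * 1e9

# Assumed stiffness values (orders of magnitude only):
stiff_covalent = 100.0   # N/m, roughly covalent-bond-like stiffness
soft_protein   = 0.1     # N/m, roughly a floppy protein domain

print(f"kT at 300 K: {kT:.2e} J")
print(f"RMS jitter, stiff part: {rms_fluctuation_nm(stiff_covalent):.4f} nm")
print(f"RMS jitter, soft part:  {rms_fluctuation_nm(soft_protein):.2f} nm")
```

Even a very stiff part jitters by thousandths of a nanometer, while a soft, protein-like part jitters by a significant fraction of a bond length, which is why designs at this scale must either tolerate or exploit thermal motion rather than ignore it.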
—James Lewis, PhD


Proof of principle for nanoscale assembly line

microfluidic molecular assembly line

The assembly carrier moves through several reaction chambers where different molecules bind to its surface. The graph below shows the trajectory of a single shuttle. (Graphics: from Steuerwald et al. 2014)

One step toward nanofactories for atomically precise manufacturing would be the development of nanoscale production lines for assembling molecular cargo or other nanostructures into larger functional devices. Over the past few years we have cited here various advances toward this goal based on structural DNA nanotechnology, such as DNA walkers moving along tracks formed by DNA origami: DNA-based ‘robotic’ assembly begins (2010), DNA molecular robots learn to walk in any direction along a branched track (2011), AFM visualization of molecular robot moving along DNA scaffold (with video) (2011), and DNA motor navigates network of DNA tracks (2012). Back in 2006 another possibility was pointed out by bionanotechnologist Viola Vogel, working with natural motor proteins and cytoskeletal components, in an interview cited here: “Maybe in the future we can build an assembly line to assemble nanosystems into working devices, like a car assembly line, but at the nanoscale.” The future has apparently arrived. A hat tip to nanotech-now for drawing our attention to this news release from Prof. Vogel’s group at ETH Zürich announcing an important proof of principle demonstration, “Nanoscale assembly line”:

ETH researchers have realised a long-held dream: inspired by an industrial assembly line, they have developed a nanoscale production line for the assembly of biological molecules.

Cars, planes and many electronic products are now built with the help of sophisticated assembly lines. Mobile assembly carriers, on to which the objects are fixed, are an important part of these assembly lines. In the case of a car body, the assembly components are attached in various work stages arranged in a precise spatial and chronological sequence, resulting in a complete vehicle at the end of the line.

The creation of such an assembly line at molecular level has been a long-held dream of many nanoscientists. “It would enable us to assemble new complex substances or materials for specific applications,” says Professor Viola Vogel, head of the Laboratory of Applied Mechanobiology at ETH Zurich. Vogel has been working on this ambitious project together with her team and has recently made an important step. In a paper published in the latest issue of the Royal Society of Chemistry’s Lab on a Chip journal [“Nanoshuttles propelled by motor proteins sequentially assemble molecular cargo in a microfluidic device” abstract; full text requires payment], the ETH researchers presented a molecular assembly line featuring all the elements of a conventional production line: a mobile assembly carrier, an assembly object, assembly components attached at various assembly stations and a motor (including fuel) for the assembly carrier to transport the object from one assembly station to the next.

Production line three times thinner than a hair

At the nano level, the assembly line takes the form of a microfluid platform into which an aqueous solution is pumped. This platform is essentially a canal system with the main canal just 30 micrometres wide — three times thinner than a human hair. Several inflows and outflows lead to and from the canal at right angles. The platform was developed by Vogel’s PhD student Dirk Steuerwald and the prototype was created in the clean room at the IBM Research Centre in Rüschlikon.

The canal system is fitted with a carpet made of the motor protein kinesin. This protein has two mobile heads that are moved by the energy-rich molecule ATP, which supplies the cells of humans and other life forms with energy and is therefore the fuel of choice in this artificial system.

Assembling molecules step-by-step

The ETH researchers used microtubules as assembly carriers. Microtubules are string-like protein polymers that together with kinesin transport cargo around the cells. With its mobile heads, kinesin binds to the microtubules and propels them forward along the surface of the device. This propulsion is further supported by the current generated by the fluid being pumped into the canal system. Five inflows and outflows direct the current in the main canal and divide it into strictly separated segments: a loading area, from where the assembly carriers depart, two assembly stations and two end stations, where the cargo is delivered.

The researchers can add the objects to the system through the lines that supply the assembly segments. In their most recent work, they tested the system using NeutrAvidin, the first molecule that binds to the nanoshuttle. A second component — a single, short strand of genetic material (DNA) — then binds to the NeutrAvidin, creating a small molecular complex.

Technical applications are still a long way off

Although Vogel’s team has achieved a long-held dream with this work, the ETH professor remains cautious: “The system is still in its infancy. We’re still far away from a technical application.” Vogel believes they have shown merely that the principle works.

She points out that although the construction of such a molecular nanoshuttle system may look easy, a great deal of creative effort and knowledge from different disciplines goes into every single component of the system. The creation of a functional unit from individual components remains a big challenge. “We have put a lot of thought into how to design the mechanical properties of bonds to bind the cargo to the shuttles and then unload it again in the right place.”

The use of biological motors for technical applications is not easy. Molecular engines such as kinesin have to be removed from their biological context and integrated into an artificial entity without any loss of their functionality. The researchers also had to consider how to build the assembly carriers and what the ‘tracks’ and assembly stations would look like. “These are all separate problems that we have now managed to combine into a functioning whole,” says Vogel.

Sophisticated products from the nano assembly line

The researchers envision numerous applications, including the selective modification of organic molecules such as protein and DNA, the assembly of nanotechnological components or small organic polymers, or the chemical alteration of carbon nanotubes. “We need to continue to optimise the system and learn more about how we can design the individual components of this nanoshuttle system to make these applications possible in the future,” says the ETH professor. The conditions for further research in this field are excellent: her group is now part of the new NCCR in Basel — Molecular Systems Engineering: Engineering functional molecular modules to factories.

Apparently the major innovation here was combining nanoshuttle-mediated active transport and pressure-driven passive transport in a single microfluidic device. Perhaps the next challenge for the microfluidic approach is to see how many protocols can be implemented in a single chip. Whether it will prove possible to scale up either of these very different approaches to the point of making practical, complex nanodevices remains to be seen, but having two feasible approaches should improve the odds of something working.
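For a feel of how cargo pickup in such a device depends on kinetics, here is a minimal sketch of one shuttle transiting an assembly station: a shuttle gliding at kinesin-like speed through a station of length L picks up cargo with first-order rate k_on while inside. The gliding speed, station length, and binding rates below are round-number assumptions for illustration, not values from Steuerwald et al.:

```python
# Probability that a gliding shuttle loads its cargo during one transit
# of an assembly station, modeled as first-order binding kinetics.
import math

def loading_probability(v_um_s, station_um, k_on_per_s):
    """P(cargo binds) = 1 - exp(-k_on * residence_time)."""
    residence = station_um / v_um_s          # seconds spent in the station
    return 1.0 - math.exp(-k_on_per_s * residence)

v = 0.8          # um/s, kinesin-like gliding speed (assumed)
station = 100.0  # um, assumed station length
for k_on in (0.005, 0.02, 0.1):             # per-second binding rates (assumed)
    p = loading_probability(v, station, k_on)
    print(f"k_on = {k_on:5.3f}/s -> P(load) = {p:.2f}")
```

The trade-off this exposes is generic: slower flow or longer stations raise loading probability but cut throughput, which is presumably one of the optimization problems any scaled-up version of the device would face.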
—James Lewis, PhD


Seeing and touching a single synthetic molecular machine

a single synthetic molecular machine

Schematic illustration of single-molecule motion capture and manipulation of a 1-nm-sized synthetic molecular machine by optical microscopy using a bead probe. A large bead attached to the rotor part of the synthetic molecular bearing (a double-decker porphyrin) traces its motion. Credit: Tomohiro Ikeda

Molecular machines are a central component of efforts to develop atomically precise manufacturing. Optical microscopy and optical trap manipulation of single molecules, made possible by attachment of micrometer-scale beads, have facilitated greater understanding of the workings of biomolecular machines. For example, a 2008 paper published in Cell (“Intramolecular Strain Coordinates Kinesin Stepping Behavior along Microtubules“) revealed how the kinesin molecular motor coordinates its two motor domains to achieve one-way stepping along microtubule proteins. When additional peptides were inserted into the mechanical “neck linker” elements that span the two motor domains, tension was reduced, and as a result, the motor’s velocity was reduced. Motor velocity returned to near normal when external tension was applied via an optical trap operating on a 920 nm diameter bead attached via an antibody to the molecular motor. Nanotechnologists can use similar techniques to study a wide variety of biomolecular machines, including naturally occurring molecular motors with typical length scales of 10 nm. Until now, however, it has not been possible to use similar approaches to study smaller synthetic molecular machines, with typical length scales on the order of one nm. A hat tip to Asian Scientist for reprinting this press release from the University of Tokyo, “Seeing and touching a single 1-nm-sized synthetic molecular machine”:

Single-molecule imaging and manipulation with optical microscopy using a bead probe as a marker (single–molecule “motion capturing”) unveils fundamental properties of biomolecular machines such as direction of motion, step size and force the molecule exerts, which cannot be resolved by whole-molecule measurements. As a result, it has become an essential method for research of biomolecular machines. In addition, single-molecule motion capturing could also become a powerful tool to develop “synthetic” molecular machines. However, it is difficult to apply the conventional method to individual molecules because the size of a typical synthetic molecular machine is only 1 nm, about one-tenth the size of a biomolecular machine. This miniaturization of the target molecule causes significant problems such as low efficiency of the bead probe immobilization reaction and undesired interaction between the surfaces of the bead and substrate.

[University of Tokyo researchers] captured the motion of a single synthetic machine about 1 nm in size for the first time. In this experiment, the researchers resolved the problems in the conventional method which are caused by the small size of the target and successfully visualized the rotational motion of a single double-decker porphyrin (DD), known as a synthetic molecular bearing, by imaging a bead on DD. Furthermore the researchers successfully manipulated the motion of the single DD molecule by applying an external force to the bead.

This method, that allows us to “see and touch” single synthetic molecular machines, provides currently the only strategy to verify and evaluate the performance of synthetic molecular motors generating force, one of the ultimate goals in the development of synthetic molecular machines. For example, if it were possible to create a light-driven synthetic molecular motor connected to a biomolecular motor, it should then be possible to establish a tailor-made energy conversion system that can control various chemical reactions by application of light. Therefore this seminal technique will contribute to establishment of tailor-made energy conversion systems based on molecular machines.

This research has been featured on the back cover of the journal Angewandte Chemie International Edition [abstract, full text PDF courtesy of senior author].

The 1-nm-sized double-decker porphyrin (DD) used here is a meso-tetraaryl DD. The potential barrier for rotation of a DD depends on the bulkiness of the substituents on the porphyrin ring side chains and on the ionic radius of the central metal ion, in this case a cerium ion (Ce(III)). Substituents included terminal azide groups to enable click chemistry for linking to an alkyne-modified magnetic bead or glass substrate. The rotational dynamics of the 200-nm magnetic bead were visualized using optical microscopy, revealing rotation only in discrete steps of 90°, as expected from the symmetry of the DD. The motions visualized are Brownian and passive. It would be interesting to see similar results from the application of force by a molecular machine.
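The discrete 90° stepping can be caricatured as thermally driven hopping among four symmetric wells. A toy simulation, with an arbitrary assumed hop probability (nothing here is fitted to the paper’s data), shows the signature of a passive Brownian rotor: discrete steps with no net directional rotation:

```python
# Toy model of a passive Brownian rotor with four wells 90 degrees apart.
# With no applied torque, forward and backward hops are equally likely,
# so the rotor steps randomly with no net rotation, matching the passive
# behavior reported for the bead-labeled DD bearing.
import random

def simulate_rotor(n_steps, hop_prob, seed=1):
    """Return the angle trajectory (degrees) of a 4-well Brownian rotor."""
    rng = random.Random(seed)
    angle = 0
    traj = [angle]
    for _ in range(n_steps):
        if rng.random() < hop_prob:                     # thermally activated hop
            angle += 90 if rng.random() < 0.5 else -90  # unbiased direction
        traj.append(angle)
    return traj

traj = simulate_rotor(n_steps=10000, hop_prob=0.02)
hops = sum(1 for a, b in zip(traj, traj[1:]) if a != b)
print(f"hops observed: {hops}")
print(f"net rotation: {traj[-1]} degrees")
```

A driven (non-passive) motor would show up in such a trajectory as a bias between +90° and -90° hops, which is exactly the kind of signal force application by a molecular machine should produce.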
—James Lewis, PhD


Recent cases of ‘accessible’ high-tech: Open source chips & origami robots

From "An origami robot transforming from flat to 3D. Photo courtesy of Seth Kroll, Wyss Institute."

Nanotech promises more commonplace access to advanced technology as material and fabrication costs fall and traditional barriers to innovation are removed. Examples are already being seen globally: more access to laptops and cell phones in developing countries, desktop 3D printers, a surge in establishment of shared-use research facilities, etc.

A couple of recent cases getting attention include the latest release of a RISC-based open source chip design from UC Berkeley, and self-folding ‘origami’ robots developed at the Wyss Institute and published in Science.

About the chips:

Fed up with the limitations of current computer chips and their related intellectual property, a team of researchers at the University of California, Berkeley, is pushing an open source alternative. The RISC-V instruction set architecture was originally developed at the university to help teach computer architecture to students, but now its creators want to push it into the mainstream to help propel emerging markets such as cloud computing and the internet of things.

One of the researchers leading the charge behind RISC-V is David Patterson, the project’s creator and also the creator of the original RISC instruction set in the 1980s. He views the issue as one centered around innovation. Popular chip architectures historically have been locked down behind strict licensing rules by companies such as Intel, ARM and IBM (although IBM has opened this up a bit for industry partners with its OpenPower foundation). Even for companies that can afford licenses, he argues, the instruction sets they receive can be complex and bloated, requiring a fair amount of effort to shape around the desired outcome.

But Patterson appears to be looking out for the little guy — small companies or researchers that want to develop their own chips for their own specialized applications, but don’t have deep pockets. That requires being able to experiment with the underlying instruction set, experiment with chip designs and share that work openly without fearing a violation of license terms.

About the robots:

The robots are made from paper, plastic and electronic components. Networks of circuits deliver heat created by a battery to the areas of the robot that need to fold. The plastic, which was made to transform into a preset shape when exposed to temperatures higher than 212 degrees Fahrenheit, then begins its transformation. The robots created at Harvard took about four minutes to turn into their final 3D shape.

The robots could also be an interesting alternative to 3D printing, according to Rob Wood, the senior author on a study about the robots that appeared in the journal Science today. He said the starting materials to build the robots are all off-the-shelf and makeable with tools like laser cutters, so they are relatively cheap. The technique is similar to 3D printing in that it is especially suited to making between 100 and 1,000 units of an object, but it’s faster. And like 3D printing, it can be used to make items at a scale far too tiny for human hands.

-Posted by Stephanie C


Surprisingly real value from virtual reality

Looks can be deceiving — these gamers may be engaged in highly cooperative, albeit remote, team objectives. Credit: Reuters

Speaking of big computation, cyberspace isn’t yet as potent as Neal Stephenson portrayed in Snow Crash and subsequent books, but it’s getting there. A new article in the Wall Street Journal online, “Can World of Warcraft Game Skills Help Land a Job?”, states that some job seekers are adding gaming skills to their resumes to indicate their ability to work productively in large, remote teams:

Gamers’ ability to accomplish complex tasks across virtual teams could be seen as a plus for some companies.

“This capability to engage in strategy-building, team-building, knowledge-sharing and problem-solving remotely is really important,” said Ms. LeGoues, currently vice president of transformation at the YAI Network of nonprofits.

The topic was also featured on the Harvard Business Review blog, and was picked up at Yahoo Finance (video and article here), noting that skill-based hobbies such as chess and even golf have been touted in LinkedIn profiles for some time:

…Still, excluding people who work at video game companies, less than 2,000 have mentioned World of Warcraft on their resumes on LinkedIn. More than 250,000 people list chess on their LinkedIn profile, mostly in the fields of IT, computer software and finance. That beats the 116,000 who list golfing skills, mainly in the fields of finance, real estate and marketing and advertising. Poker is less common, listed on only 43,000 profiles, and about half are people who work in the gaming industry. The rest, about 22,000, are concentrated in IT, advertising, and marketing and finance.

MIT researcher Michael Schrage says a whole bunch of modern, digital pursuits such as fantasy baseball and Minecraft should eventually become appealing to hiring companies, since they signify modern skills.

Feeling skeptical? Consider flight simulators: used for decades to help train pilots, they have only recently achieved high enough levels of computational sophistication to effectively prepare pilots for dangerous flight conditions that were previously relegated to ‘on the job training’.

A very mainstream news broadcast shows a simulator at Sea-Tac airport and describes the real-life skills gained from incredibly realistic virtual flight experiences:

It’s not so much the flying, said [Director of Flight Training Capt. Douglas] Burton. He says what pilot training is really focused on is helping pilots make better decisions, especially during difficult situations involving bad weather, mechanical troubles or a combination of the two.

“They get lost in the scenario,” said Burton.

The reality of the simulation helps draw the pilots in, to work through those decisions while safely on the ground and get ready for the time if they ever confront that type of reality in the air.

Flight simulation may feel sufficiently focused and applied to be a common-sense use of virtual reality, but it’s a short walk over to the large-scale cooperative, distributed, and remote teamwork that is becoming increasingly important with globalization.

Social-based online games exercise skill sets related to ‘virtual cooperation’ specifically in the context of remote interactions and may at least serve as a filter for finding job candidates with particular aptitude in these areas.

-Posted by Stephanie C


Big computation brings your ideas into 3D

Hyve3D credit: University of Montreal

What 3D printers have done to facilitate fabrication, 3D drawing programs are now doing to facilitate design. Two systems referred to as “powerful” and “spectacular” are being highlighted at the SIGGRAPH 2014 conference in Vancouver this week:

True2Form (out of University of British Columbia) brings 2D sketches into 3D (excerpt from SD reprint):

…”In line-drawings, designers and artists use descriptive curves and informative viewpoints to convey the full shape of an object,” says Alla Sheffer, a professor in UBC’s Dept. of Computer Science. “Our system mimics the results of human three-dimensional shape inference to lift a sketch curve network into 3-D, while preserving fidelity to the original sketch.”

True2Form uses powerful mathematics to interpret artists’ strokes automatically lifting drawings off of the page. It produces convincing, complex 3-D shapes computed from individual sketches, automatically corrected to account for inherent drawing inaccuracy…

Hyve3D (out of University of Montreal) delivers collaborative, real-time 3D sketching (excerpt from SD reprint):

…For example, as the designers are immersed in their work, this could mean designing the outside of a car, and then actually getting into it to work on the interior detailing. Hyve-3D stands for “Hybrid Virtual Environment 3D.” Univalor, the university’s technology commercialization unit, is supporting the market launch of the system.

The 3D images are the result of an optical illusion created by a widescreen high-resolution projector, a specially designed 5m-diameter spherically concave fabric screen and a 16-inch dome mirror projecting the image onto the screen…

While industrial/commercial/military applications are pretty obvious, the potential for classrooms (remember trying to visualize molecular stereochemistry by pointing your fingers in unnatural directions?!) and for individual innovators (as these types of systems arrive in collaborative facilities or as costs/size come down to allow desktop versions) makes the imagination soar.
-Posted by Stephanie C


Tunable Assembly of Nanoparticles for (Photovoltaic) Devices

credit: Venkataraman et al., University of Massachusetts Amherst

Photovoltaics are an interesting case where atomic precision is not necessary to achieve potentially dramatic global impacts. Even an “ok efficiency” device that is easy to manufacture with reduced environmental hazard could have significant beneficial effects on energy resources and on device fabrication processes (which could, in turn, contribute to developments toward APM).

The struggle to balance ease of manufacture and device efficiency is a major driver behind current research efforts. Two recent publications out of Massachusetts alone make the point: research from the University of Massachusetts Amherst describes the fabrication of (very low efficiency) photovoltaic devices via tunable self-assembly of aqueous nanoparticle dispersions (organic nanospheres); the work is published in Nano Letters and the press release is reprinted here (excerpt below). Research from MIT uses quantum dots to reach a notable 9% efficiency (high for QD-based devices); this work is published in ACS Nano and the press release is also reprinted here.

A team of materials chemists, polymer scientists, device physicists and others at the University of Massachusetts Amherst today report a breakthrough technique for controlling molecular assembly of nanoparticles over multiple length scales that should allow faster, cheaper, more ecologically friendly manufacture of organic photovoltaics and other electronic devices. Details are in the current issue of Nano Letters.

Lead investigator, chemist Dhandapani Venkataraman, points out that the new techniques successfully address two major goals for device manufacture: controlling molecular assembly and avoiding toxic solvents like chlorobenzene. “Now we have a rational way of controlling this assembly in a water-based system,” he says. “It’s a completely new way to look at problems. With this technique we can force it into the exact structure that you want.”

-Posted by Stephanie C


Nanotechnology-based next generation memory nears mass production

This scanning electron microscope image and schematic show the design and composition of new RRAM memory devices based on porous silicon oxide that were created at Rice University. Credit: Tour Group/Rice University

Investment in the ultimate promise of advanced or molecular nanotechnology, that is, molecular manufacturing or atomically precise manufacturing, may well rest upon the success of current nanoscale science and incremental nanotechnology. Computation represents a major area of investment for current nanotechnology. One researcher who has contributed greatly to both atomically precise devices leading toward molecular manufacturing, on the one hand, and current commercializable nanotechnology, on the other hand, is James Tour of Rice University, winner of the 2008 Foresight Institute Feynman Prize in the Experimental category. A hat tip to KurzweilAI for reprinting this Rice University news release “Rice’s silicon oxide memories catch manufacturers’ eye“:

Rice University’s breakthrough silicon oxide technology for high-density, next-generation computer memory is one step closer to mass production, thanks to a refinement that will allow manufacturers to fabricate devices at room temperature with conventional production methods.

First discovered five years ago, Rice’s silicon oxide memories are a type of two-terminal, “resistive random-access memory” (RRAM) technology. In a new paper available online in the American Chemical Society journal Nano Letters [abstract], a Rice team led by chemist James Tour compared its RRAM technology to more than a dozen competing versions.

“This memory is superior to all other two-terminal unipolar resistive memories by almost every metric,” Tour said. “And because our devices use silicon oxide — the most studied material on Earth — the underlying physics are both well-understood and easy to implement in existing fabrication facilities.” Tour is Rice’s T.T. and W.F. Chao Chair in Chemistry and professor of computer science and of materials science and nanoengineering.

Tour and colleagues began work on their breakthrough RRAM technology more than five years ago. The basic concept behind resistive memory devices is the insertion of a dielectric material — one that won’t normally conduct electricity — between two wires. When a sufficiently high voltage is applied across the wires, a narrow conduction path can be formed through the dielectric material.

The presence or absence of these conduction pathways can be used to represent the binary 1s and 0s of digital data. Research with a number of dielectric materials over the past decade has shown that such conduction pathways can be formed, broken and reformed thousands of times, which means RRAM can be used as the basis of rewritable random-access memory.
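The set/reset mechanism the two paragraphs above describe can be sketched as a toy state machine for a single cell. This is purely illustrative; the threshold voltages here are invented for the sketch and are not taken from the Rice work:

```python
# Toy model of a two-terminal resistive memory (RRAM) cell.
# A conduction filament through the dielectric encodes one bit:
# filament present -> low resistance  -> logical 1
# filament absent  -> high resistance -> logical 0
# The SET/RESET thresholds below are hypothetical values.

class RRAMCell:
    SET_V = 2.0      # voltage that forms a filament (hypothetical)
    RESET_V = -2.0   # voltage that ruptures it (hypothetical)

    def __init__(self):
        self.filament = False  # starts in the high-resistance state (0)

    def apply(self, volts):
        """Form or break the conduction path depending on polarity."""
        if volts >= self.SET_V:
            self.filament = True
        elif volts <= self.RESET_V:
            self.filament = False

    def read(self):
        """Reading the resistance state does not disturb the cell."""
        return 1 if self.filament else 0

cell = RRAMCell()
cell.apply(2.5)      # SET: form a conduction path
assert cell.read() == 1
cell.apply(-2.5)     # RESET: break the path
assert cell.read() == 0
```

The point of the model is only that writing is a voltage event while reading is passive, which is why such a cell is nonvolatile and, as the next paragraph notes, can be cycled thousands of times.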

RRAM is under development worldwide and expected to supplant flash memory technology in the marketplace within a few years because it is faster than flash and can pack far more information into less space. For example, manufacturers have announced plans for RRAM prototype chips that will be capable of storing about one terabyte of data on a device the size of a postage stamp — more than 50 times the data density of current flash memory technology.

The key ingredient of Rice’s RRAM is its dielectric component, silicon oxide. Silicon is the most abundant element on Earth and the basic ingredient in conventional microchips. Microelectronics fabrication technologies based on silicon are widespread and easily understood, but until the 2010 discovery of conductive filament pathways in silicon oxide in Tour’s lab, the material wasn’t considered an option for RRAM.

Since then, Tour’s team has raced to further develop its RRAM and even used it for exotic new devices like transparent flexible memory chips. At the same time, the researchers also conducted countless tests to compare the performance of silicon oxide memories with competing dielectric RRAM technologies.

“Our technology is the only one that satisfies every market requirement, both from a production and a performance standpoint, for nonvolatile memory,” Tour said. “It can be manufactured at room temperature, has an extremely low forming voltage, high on-off ratio, low power consumption, nine-bit capacity per cell, exceptional switching speeds and excellent cycling endurance.”

In the latest study, a team headed by lead author and Rice postdoctoral researcher Gunuk Wang showed that using a porous version of silicon oxide could dramatically improve Rice’s RRAM in several ways. First, the porous material reduced the forming voltage — the power needed to form conduction pathways — to less than two volts, a 13-fold improvement over the team’s previous best and a number that stacks up against competing RRAM technologies. In addition, the porous silicon oxide also allowed Tour’s team to eliminate the need for a “device edge structure.”
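The quoted figures also imply what the earlier forming voltage must have been. A quick back-of-envelope check:

```python
# "Less than two volts, a 13-fold improvement over the team's
# previous best" implies the earlier, nonporous devices needed
# roughly 2 V * 13 = 26 V to form a conduction path.
new_forming_v = 2.0   # upper bound quoted for porous silicon oxide
improvement = 13
implied_previous_v = new_forming_v * improvement
print(implied_previous_v)  # 26.0
```

A ~26 V forming step would be far outside what standard CMOS-compatible circuitry tolerates, which is why the drop to under 2 V matters for manufacturability.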

“That means we can take a sheet of porous silicon oxide and just drop down electrodes without having to fabricate edges,” Tour said. “When we made our initial announcement about silicon oxide in 2010, one of the first questions I got from industry was whether we could do this without fabricating edges. At the time we could not, but the change to porous silicon oxide finally allows us to do that.”

Wang said, “We also demonstrated that the porous silicon oxide material increased the endurance cycles more than 100 times as compared with previous nonporous silicon oxide memories. Finally, the porous silicon oxide material has a capacity of up to nine bits per cell, which is the highest number among oxide-based memories, and the multi-bit capacity is unaffected by high temperatures.”
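To put “nine bits per cell” in perspective: a multi-level cell must reliably distinguish 2^9 resistance states, versus just two states for a conventional single-bit cell. The arithmetic:

```python
# A nine-bit cell must distinguish 2**9 resistance levels,
# versus 2 levels for a conventional single-bit cell.
bits_per_cell = 9
levels = 2 ** bits_per_cell
print(levels)              # 512 distinguishable states per cell
# Storage gain over single-bit cells in the same array:
print(bits_per_cell / 1)   # 9.0x more bits per cell
```

Holding 512 stable, temperature-insensitive resistance levels in one cell is what makes this figure notable among oxide-based memories.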

Tour said the latest developments with porous silicon oxide — reduced forming voltage, elimination of need for edge fabrication, excellent endurance cycling and multi-bit capacity — are extremely appealing to memory companies.

“This is a major accomplishment, and we’ve already been approached by companies interested in licensing this new technology,” he said.

Prof. Tour’s contributions to nanotechnology for computation go back at least to a paper he presented at the 1997 Fifth Foresight Conference on Molecular Nanotechnology titled “Molecular Scale Electronics. Syntheses and Testing“.
—James Lewis, PhD


Emergence of nanobiotechnology points to importance of deep collaboration

(credit: Elicia Maine et al./Nature Nanotechnology)

We’ve suggested that biotechnology could serve as an enabling technology for the development of atomically precise manufacturing (through an open-source biological parts repository for building molecular machines and through a ground-breaking innovation to have robots do biotech experiments “in the cloud”). Whether or not the confluence of biotechnology and nanotechnology advances the development of atomically precise manufacturing, it has already opened a path to new products and new markets. An article over at KurzweilAI reports two studies on opportunities in the emerging nanobiotechnology industry “Innovation management and the emergence of the nanobiotechnology industry“:

The confluence of nanotechnology and biotechnology is creating opportunities and an emerging industry, nanobiotechnology, with tremendous potential for economic and social value creation, according to an international research team at MIT, Simon Fraser University, and the University of New South Wales.

The medical applications of nanobiotechnology are promising, including effectively targeted drug delivery — imagine highly efficacious cancer treatment with few side effects — and real time, minimally invasive diagnostics. But there is little known about the emergence of this industry or of ways to reap the possible benefits. …

Innovation, and the growth of new industries, is thought to be more likely when a firm occupies the confluence or convergence of distinct streams of emerging technology. Research progress at the intersection of fields is probably more likely to occur when cross-disciplinary new product development teams are designed by the organization and when routines and processes are designed to support cross-disciplinary learning. A confluence of technologies is characterized both by the bringing together of formerly disparate fields of knowledge, and by the creation of new product markets. When a confluence of technology streams occurs, rich opportunities for experiment and progress may result, suggests Professor James Utterback of MIT.

Strategies that one might suggest as particularly useful for enabling innovation include importing ideas from broad networks and diverse sources, hiring people from diverse backgrounds, and creating an environment conducive to deep collaboration. This would include co-location of diverse groups and creating a culture which encourages vigorous debate and differences in perspective.

The Foresight Institute was perhaps the first organization to perceive the different technological threads that would weave the fabric of nanotechnology, and the value of bringing them together in the hope of seeding collaborations. Starting in 1989, Foresight Conferences on nanotechnology have brought together experts in fields as diverse as biotechnology, surface physics, chemistry, materials science, and computation to encourage cross-fertilization and collaborations, shaping nanotechnology as it exists today, and in the hope of paving the way to productive nanosystems and atomically precise manufacturing tomorrow.
—James Lewis, PhD


Biotech lab in the cloud lowers entry barrier to nanotech research

credit The Emerald Cloud Laboratory

Recently we cited an alliance between Foresight and parts of the Bay Area biotech community to build an open-source biological parts repository, and we expressed the hope that spreading the meme that biological machines are indeed machines that can be engineered might seed efforts toward the open-source development of molecular manufacturing. We thank Desiree D. Dudley of the Synthetic Neurobiology Group, MIT, for letting us know of another Bay Area biotech initiative that might also speed development of molecular manufacturing (also known as atomically precise manufacturing). Emerald Therapeutics, whose co-founder DJ Kleinbaum spoke on “Democratizing Biotechnology” at the 2014 Foresight Technical Conference, has just announced The Emerald Cloud Laboratory, “a web-based life sciences lab, developed by scientists for scientists.” From Ashlee Vance at Bloomberg Businessweek “Emerald Therapeutics: Biotech Lab for Hire“:

There’s a basic formula these days for anyone looking to develop a cure for a disease. Along with a good idea, you need $20 million, a team of about 30 scientists, and a year to set up the lab equipment to start testing your theory. From there, the grunt work begins, as your team of well-paid researchers squirts fluid into test tubes, feeds chemicals into machines, and analyzes the results from thousands of experiments. If you luck out and discover something useful, then it’s time to pray that the desired result can be replicated.

Emerald Therapeutics, a 17-person startup in Silicon Valley, claims to have modernized much of this burdensome process, which might make drug discovery faster and cheaper. On July 1 the company unveiled a service that lets other labs send it instructions for their experiments via the Web. Robots then complete the work. The idea is a variation on the cloud-computing model, in which companies rent computers by the hour from Amazon.com (AMZN), Google (GOOG), and Microsoft (MSFT) instead of buying and managing their own equipment. In this case, biotech startups could offload some of their basic tasks—counting cells one at a time or isolating proteins—freeing their researchers to work on more complex jobs and analyze results. To control the myriad lab machines, Emerald has developed its own computer language and management software. The company is charging clients $1 to $100 per experiment and has vowed to return results within a day. …

As Allison Proffitt at Bio-IT World puts it:

… Don’t outsource experiments to another lab; let the robots do them!

That is the vision of Emerald Therapeutics, a Menlo Park company that announced its latest financing round today ($13.5 million), and its flagship product: Emerald Cloud Laboratory (ECL), a state-of-the-art life sciences laboratory that scientists can remotely access via the Internet where automated robotics conduct experiments exactly as specified by the user.

A user simply sits down at his computer, specifies the parameters for a given experiment, and the robots do the work. Experiments are executed using Emerald’s robotically automated systems, and within 24-48 hours all of the data generated is cataloged and placed into a database where it is accessible to the user and can be analyzed and shared with other scientists. …
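The submit-and-poll workflow described above might look something like the following on the client side. Every name here (submit_experiment, poll, the result fields) is invented for illustration; the real Emerald Cloud Laboratory is driven by its own symbolic command language, not this API:

```python
# Hypothetical sketch of a remote-lab workflow: specify experiment
# parameters, submit the job, and retrieve cataloged results.
import json

def submit_experiment(protocol, params):
    """Queue an experiment and return a job record (simulated)."""
    return {"id": 1, "protocol": protocol, "params": params,
            "status": "queued"}

def poll(job):
    """Simulate the robots finishing; results land in a database
    within the 24-48 hours the article describes."""
    job["status"] = "complete"
    job["results"] = {"bands_detected": 3}  # placeholder data
    return job

job = submit_experiment("WesternBlot",
                        {"sample": "lysate-A", "antibody": "anti-GFP"})
done = poll(job)
print(json.dumps(done["results"]))
```

The design point worth noting is that the experiment becomes a data structure: once protocols are parameterized this way, they can be versioned, shared, and re-run exactly, which is what makes the results reproducible and shareable.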

James Temple adds:

… The nearly 20-person company has packed a 5,000-square-foot facility in a little office park in Silicon Valley with more than $2 million worth of mass spectrometers, automated pipettes and microscopes, capable of carrying out remote life sciences experiments under controlled conditions. …

… “We really hope this system, in addition to supporting labs that are currently doing research, will also enable people to do their own research and form new companies for a fraction of the cost it currently takes,” said Daniel Jerome Kleinbaum, the company’s co-founder and co-CEO, who earned a PhD in organic chemistry at Stanford. …

… The cloud lab can currently carry out 40 standard life sciences experiments, such as Western blot tests that identify specific proteins in tissue. The company expects to support up to 120 within the next year and a half. …

Biotechnology—experiments with DNA, proteins, RNA—is fundamental to the modular molecular composite nanosystems (MMCNs) route to atomically precise productive nanosystems, and to the new field of synthetic biology as another possible path to atomically precise manufacturing. The connections between biotechnology and nanotechnology are not unknown to Emerald’s co-founders. Emerald’s other co-founder Brian Frezza studied under two Foresight Feynman prize-winners—M. Reza Ghadiri and Nadrian Seeman. At prices like $1 to $100 per experiment, the Emerald Cloud Laboratory could make it possible for anyone with good ideas, from small start-ups to open source communities, to contribute to the development of molecular manufacturing.
—James Lewis, PhD
