Selected RSS news

Roles of materials research and polymer chemistry in developing nanotechnology

Nanoarchitectonics: a new materials horizon for nanotechnology. Credit: Materials Horizons, Ariga et al. DOI: 10.1039/C5MH00012B

A review written by a group at the National Institute for Materials Science, Ibaraki, Japan, and published in nature.com’s Polymer Journal introduces the concept of “nanoarchitectonics” and explores why nanotechnology is not just an extension of microtechnology: “What are the emerging concepts and challenges in NANO? Nanoarchitectonics, hand-operating nanotechnology and mechanobiology” [abstract]:

Most of us may mistakenly believe that sciences within the nano regime are a simple extension of what is observed in micrometer regions. We may be misled to think that nanotechnology is merely a far advanced version of microtechnology. These thoughts are basically wrong. For true developments in nanosciences and related engineering outputs, a simple transformation of technology concepts from micro to nano may not be perfect. A novel concept, nanoarchitectonics, has emerged in conjunction with well-known nanotechnology. In the first part of this review, the concept and examples of nanoarchitectonics will be introduced. In the concept of nanoarchitectonics, materials are architected through controlled harmonized interactions to create unexpected functions. The second emerging concept is to control nano-functions by easy macroscopic mechanical actions. To utilize sophisticated forefront science in daily life, high-tech-driven strategies must be replaced by low-tech-driven strategies. As a required novel concept, hand-operation nanotechnology can control nano and molecular systems through incredibly easy action. Hand-motion-like macroscopic mechanical motions will be described in this review as the second emerging concept. These concepts are related bio-processes that create the third emerging concept, mechanobiology and related mechano-control of bio-functions. According to this story flow, we provide some incredible recent examples such as atom-level switches, operation of molecular machines by hand-like easy motions, and mechanical control of cell fate. To promote and activate science and technology based on these emerging concepts in nanotechnology, the contribution and participation of polymer scientists are crucial. We hope that some readers would have interests within what we describe.

The authors introduce the concept of nanoarchitectonics with the observation that working at the nanoscale introduces difficult-to-control factors such as thermal and statistical fluctuations, mutual interactions with surrounding components, and quantum effects. Nanoarchitectonics thus requires harmonizing effects not encountered in microscale design. Strategies include fabrication by organizing nanoscale building blocks, even when individual components are unreliable, and organizing functions on the basis of mutual interactions. Early examples include atomic switches based on the manipulation of atoms across small gaps, which, unlike conventional physical switches, can be modified to mimic the behavior of synapses. Collective responses to stimulus inputs may allow self-organized devices to unexpectedly create higher functions.

Another example is the automatic regulation of material release from silica capsules with hierarchic pore architectures, in which submicron-scale empty reservoirs are surrounded by silica walls with nanostructured pore channels, assembled layer-by-layer from silica nanoparticles and polyelectrolytes. Even without external stimuli, water is released in stepwise ON-OFF cycles through non-equilibrated evaporation from the nanoscale pore channels and capillary movement from those pores into the central reservoir.
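The competition described above can be caricatured in a few lines of code. This is an invented toy model, not the authors' system: every quantity and rate below is arbitrary, and it only illustrates how fast evaporation from the channels plus delayed capillary refilling produces a staircase of cumulative release without any external stimulus.

```python
# Toy model (not from the paper): stepwise ON-OFF release arising when
# evaporation empties the pore channels faster than capillary flow refills
# them from the central reservoir. All units and rates are illustrative.

def simulate_release(reservoir=10.0, channel_cap=1.0, evap_rate=0.25,
                     refill_delay=10, steps=100):
    """Return cumulative water released after each time step."""
    channel = channel_cap        # pore channels start full
    wait = 0                     # steps until capillary refill completes
    total = 0.0
    history = []
    for _ in range(steps):
        if channel > 0:                          # ON: evaporation releases water
            out = min(evap_rate, channel)
            channel -= out
            total += out
            if channel == 0:                     # channels dry out -> OFF phase
                wait = refill_delay
        else:                                    # OFF: no release while refilling
            wait -= 1
            if wait <= 0 and reservoir >= channel_cap:
                reservoir -= channel_cap         # capillary flow refills channels
                channel = channel_cap
        history.append(total)
    return history
```

Plotting the returned cumulative release against time gives the stepwise ON-OFF staircase the authors describe.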

The second emerging concept the authors introduce is “hand-operating nanotechnology,” in which molecular machines are controlled by macroscopic mechanical action, based upon interfacial supramolecular chemistry. The authors cite examples in which the efficiency of molecular recognition is increased at interfaces, noting improvements in molecular recognition obtained by changing the interfacial environment. Molecular assembly in the confined spaces of interfaces restricts material diffusion and molecular motion, resulting in highly oriented structures with nano- and molecular-scale precision. At interfaces, macroscopic actions in the xy plane can be coupled to nano/molecular functions in the z dimension. Thus nanoscale functions could, in principle, be controlled by actions as simple as hand motions.

One example is provided by a steroid cyclophane embedded at an air-water interface. This molecular machine has a central ring that converts from an open ring to a closed cavity upon high-pressure lateral compression, corresponding to a shift in molecular area at the air-water interface from 7 nm² to 2 nm². This change can be correlated with the capture of a guest molecule.

The third emerging concept the authors present is mechanobiology and mechanical access to biological function. Mechanobiology examines how biomolecules, cells, and tissues respond to mechanical stresses, measures mechanical parameters in biological systems, and develops mechanical control of biological activities. Envisioning mechanobiology as an outgrowth of nanoarchitectonics and interfacial mechanical control, the authors see it providing an attractive target for materials research: the development of mechano-structural, stimuli-responsive polymeric materials for cell fate control. For example, they cite a hydrogel that becomes much softer on UV irradiation, modulating the stiffness of the extracellular matrix and thereby the behavior of cells in culture. Another example involves determining the differentiation and lineage specification of stem cells through the elasticity of the microenvironment.

This review presents an interesting roadmap into one developing sector of nanotechnology. It will be interesting to see what polymer chemists can contribute to the current stew of DNA nanotechnology, protein design, synthetic organic chemistry, surface science, etc.
—James Lewis, PhD


Multiple advances in de novo protein design and prediction

David Baker, UW professor of biochemistry, in his lab at the Institute for Protein Design. Credit: UW Institute for Protein Design

Concluding our brief update on Eric Drexler’s 1981 proposal that de novo protein design provides a path from biotechnology to general capabilities for molecular manipulation, we return to this University of Washington news release “Big moves in protein structure prediction and design”:

In addition to [their recent reports] on modular construction of proteins with repeating motifs [de novo protein design and rational design of protein architectures not found in nature], here are some other recent developments [from the research group of David Baker at the University of Washington Institute for Protein Design]:

Evolution offers clues to shaping proteins: The function of many proteins tends to stay the same across species, even after their amino acid sequences have changed over billions of years of evolution. Locating pairs of amino acids that co-evolved helps researchers infer their proximity when the molecule folds. UW graduate student Sergey Ovchinnikov applied this co-evolution sequence analysis in an eLife paper published Sept. 3, 2015, “Large-scale determination of previously unsolved protein structures using evolutionary information.” [Open Access] The effort illuminated for the first time the structures of 58 families of proteins that have hundreds of thousands of additional, structurally related family members.
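The co-evolution idea can be made concrete with a much-simplified sketch: columns of a multiple sequence alignment that co-vary across species tend to be in contact in the folded structure. The paper uses far more sophisticated statistical models; the raw mutual-information score and the tiny invented alignment below only illustrate the signal being exploited.

```python
# Simplified illustration of co-evolution analysis: score how strongly two
# alignment columns co-vary using mutual information. Real contact-prediction
# methods use global statistical models, not this raw pairwise score.
from collections import Counter
from math import log

def mutual_information(alignment, i, j):
    """Mutual information between columns i and j of equal-length sequences."""
    n = len(alignment)
    pi = Counter(seq[i] for seq in alignment)   # marginal counts, column i
    pj = Counter(seq[j] for seq in alignment)   # marginal counts, column j
    pij = Counter((seq[i], seq[j]) for seq in alignment)
    mi = 0.0
    for (a, b), count in pij.items():
        p_ab = count / n
        mi += p_ab * log(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# Invented toy alignment: columns 0 and 4 co-vary perfectly (A..A vs K..E),
# while column 1 is invariant and carries no co-evolution signal.
msa = ["ARNDA", "KRNDE", "ARQDA", "KRQDE", "ARNDA", "KRNDE"]
```

Ranking all column pairs by such a score (in practice, by a better-calibrated one) highlights residue pairs likely to be close in space.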

“This achievement was a grand slam home run in the history of protein structure prediction,” said Baker.

Barrel-fold design: Baker lab postdoctoral fellow Po-Ssu Huang, together with Birte Höcker at the Max Planck Institute for Developmental Biology in Germany, discovered the elusive design principles for a barrel-shaped fold underpinning many natural enzyme molecules. The custom-designed barrel folds, built at the IPD, were presented Nov. 23, 2015, in the Nature Chemical Biology paper, “De novo design of a four-fold symmetric TIM-barrel protein with atomic-level accuracy.” [abstract, full text PDF courtesy of the Baker lab] This achievement has opened the door for bioengineers to generate totally new enzymes that speed up chemical reactions by positioning smaller molecules in custom barrel compartments.

Self-assembling apparatus: Ordered protein arrays along a flat plane are found in bacteria, the heart, and other muscles. Overcoming protein interaction complexities, researchers at the IPD and the Janelia Research Campus of the Howard Hughes Medical Institute programmed proteins to self-assemble into novel symmetric, 2-dimensional sheets of protein lattice patterns. UW graduate student Shane Gonen in the Baker lab, with his brother Tamir Gonen at Janelia, described their work in the June 19, 2015, issue of Science, “Design of ordered two-dimensional arrays mediated by non-covalent protein-protein interfaces.” [abstract, full text PDF courtesy of the Baker lab] This research has applications for self-assembling protein nanomaterials, especially efficient sensors or light harvesters.

Precision sculpting: Protein designers are continuously refining principles for fashioning ideal protein structures. A paper in the Oct. 6, 2015, Proceedings of the National Academy of Sciences, “Control over overall shape and size in de novo designed proteins,” further explains methods for systematically varying protein architecture inspired by nature. Such finesse could optimize designed proteins to take on the proper shapes for the desired functions. This work was led by Baker lab graduate student Yu-Ru Lin in collaboration with Nobuyasu Koga at the Institute for Molecular Science in Japan.

Each of the advances reported in the above papers is noteworthy in itself, but what seems especially exciting is that multiple advances in de novo protein design have occurred in such a short time in one place. Other significant advances related to the ones covered above and in the two previous posts can be seen in the titles of the papers listed on the Baker Lab publications page. This technology is clearly being developed to pursue an array of goals covering a number of time frames, but this progress also gives hope that momentum may be building along the de novo protein design path to atomically precise manufacturing.
—James Lewis, PhD


Rational design of protein architectures not found in nature

Designed monomeric repeat architectures. Credit: L. Doyle et al. Nature.

Continuing our coverage of several major advances in de novo protein design recently reported by the research group of David A. Baker, we turn to the second of the two research papers the group published two months ago in Nature: “Rational design of alpha-helical tandem repeat proteins with closed architecture” [abstract, full text PDF courtesy of the Baker lab]. The paper concerns the rational design of a class of proteins that play important roles in binding macromolecules, as scaffolds, and as building blocks for assembling more complex materials. The University of Washington news release we cited last time, “Big moves in protein structure prediction and design,” continues to explain the significance of understanding and designing protein structures:

… The protein structure problem is figuring out how a protein’s chemical makeup predetermines its molecular structure, and in turn, its biological role. UW researchers have developed powerful algorithms to make unprecedented, accurate, blind predictions about the structure of large proteins of more than 200 amino acids in length. This has opened the door to predicting the structures for hundreds of thousands of recently discovered proteins in the ocean, soil, and gut microbiome.

Equally difficult is designing amino acid sequences that will fold into new protein structures.

Researchers have now shown the possibility of doing this with precision for protein folds inspired by naturally occurring proteins. More important, researchers can now devise amino acid sequences to fashion novel, previously unknown folds, far surpassing what is predicted to occur in the natural world.

The new proteins are designed with help from volunteers around the globe participating in the Rosetta@home distributed computing project. The custom-designed amino acid sequences are encoded in synthetic genes, the proteins are produced in the laboratory, and their structures are revealed through x-ray crystallography. The computer models in almost all cases match the experimentally determined crystal structures with near atomic level accuracy.

Researchers have also reported new protein designs, all with near atomic level accuracy, for such shapes as barrels, sheets, rings and screws. This adds to previous achievements in designing protein cubes and spheres, and suggests the possibility of making a totally new class of protein materials.

By furthering advances such as these, researchers hope to build proteins for critical tasks in medical, environmental and industrial arenas. Examples of their goals are nanoscale tools that: boost the immune response against HIV and other recalcitrant viruses, block the flu virus so that it cannot infect cells, target drugs to cancer cells while reducing side effects, stop allergens from causing symptoms, neutralize deposits, called amyloids, thought to damage vital tissues in Alzheimer’s disease, mop up medications in the body as an antidote, and fulfill other diagnostic and therapeutic needs. Scientists are also interested in new proteins for biofuels and clean energy. …

The previous paper we reviewed here reported that a fully automated design protocol generates dozens of designs for proteins based on helix-loop-helix-loop repeat units that are very stable, have crystal structures that match the design, have very different overall shapes, and are unrelated to any natural protein. This paper presents the validation of computational methods for de novo design of protein architectures to achieve specified geometric criteria without reference to existing protein family sequences and structures.

The authors note that the overall architecture of tandem repeat protein structures—dictated by the internal geometry and the local packing of the repeat building blocks—ranges from extended super-helical folds that bind DNA, RNA, or other proteins to compact, closed conformations with internal cavities suitable for binding small molecules and catalysis. They employ their computational de novo design methods to design a series of α-solenoid repeat structures, termed α-toroids, constrained to juxtapose the amino and carboxy termini of the proteins.

The closed tandem repeat architecture chosen by the authors for this paper imposes simple geometric constraints: the rise of the repeats around a central axis must be zero, and the curvature multiplied by the number of repeats must equal a multiple of 360°. Applying their design procedures produced “a diverse array of toroidal structures”. The authors explain that they focused primarily on designs with left-handed bundles because the closed, left-handed α-solenoid appeared to be absent from the structural database of known protein structures. Five families of α-toroid monomeric repeat architectures were selected for experimental characterization: a left-handed 3-repeat family, left- and right-handed 6-repeat families, a left-handed 9-repeat family, and a 12-repeat design formed by adding three repeats to one of the 9-repeat designs.
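The two geometric constraints stated above can be written as a tiny closure test: zero rise per repeat, and per-repeat twist summing to a whole number of turns. The function and tolerance below are our own framing of those conditions, not code from the paper.

```python
# Closure conditions for a tandem-repeat toroid, as stated in the text:
# (1) the rise of each repeat along the central axis must be zero, and
# (2) twist-per-repeat times the number of repeats must be a multiple of 360.
# Parameter names and the tolerance are illustrative.

def closes_into_toroid(rise_per_repeat, twist_per_repeat_deg, n_repeats,
                       tol=1e-6):
    """True if n_repeats stacked repeats return to the starting position."""
    if abs(rise_per_repeat) > tol:       # any net rise gives an open spiral
        return False
    remainder = (twist_per_repeat_deg * n_repeats) % 360.0
    return remainder < tol or abs(remainder - 360.0) < tol
```

For example, a 9-repeat toroid closes with 40° of twist per repeat (9 × 40° = 360°), matching the left-handed 9-repeat family characterized in the paper.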

To increase the chances of successful expression, purification, and crystallization of representatives of each family, multiple designed sequences from each family were tested. Five crystal structures were determined, representing four of the designed toroid families, and all show left-handed α-helical toroids with the intended geometries. The deviation between design model and experimental structure increased with the number of repeats, from 0.06 nm for 3 repeats to 0.09 nm for 6 repeats to 0.11 nm for 9 or 12 repeats. All five structures were stable to heat and to changes in protein and salt concentrations.

From these successes the authors conclude that the apparent absence of this fold from the current protein structure database is not due to physical constraints on the structure itself. Possibly such folds exist in natural proteins that have not yet been characterized, or that region of fold space has simply not been sampled by natural evolution. The results thus confirm the conclusion of the paper we reviewed here last week: the known structures of natural proteins are only a small part of the structure space available to rationally designed proteins.
—James Lewis, PhD


De novo protein design space extends far beyond biology

Foldit is a protein molecule modeling program used by citizen scientists worldwide to contribute to protein design research. Credit: University of Washington Institute for Protein Design

In his first (1981) publication on what he later (1986) termed nanotechnology Eric Drexler pointed to molecular engineering as a pathway from current biotechnology toward “general capabilities for molecular manipulation”, more recently described as “high-throughput atomically precise manufacturing“. Specifically, he pointed to de novo protein design as a path leading eventually to complex non-biological machinery, suggesting that designing proteins to fold as needed will be easier than predicting how natural proteins will fold. Accordingly de novo protein design has been one of our favorite topics on Nanodot—for example, these milestones from the past five years: “Designing protein-protein interactions for advanced nanotechnology“, “Gamers, citizen science, and protein structures“, “Crowd-sourced protein design a promising path to advanced nanotechnology“, “Nanotechnology milestone: general method for designing stable proteins“, “Computational design of protein-small molecule interactions“.

This past year, several major advances in de novo protein design have been reported by the research group of David A. Baker at the University of Washington, who shared the 2004 Feynman Prize in Nanotechnology in the Theory category, and their collaborators at the Fred Hutchinson Cancer Research Center. A hat tip to ScienceDaily for reprinting this University of Washington news release, “Big moves in protein structure prediction and design“:

Custom design with atomic level accuracy enables researchers to craft a whole new world of proteins

The potential of modular design for brand new proteins that do not yet exist in the natural world is explored Dec. 16 in the journal Nature. The reports are the latest in a recent series of developments toward custom-designing proteins.

Naturally occurring proteins are the nanoscale machines that carry out nearly all the essential functions in living things.

While it has been known for more than 40 years that a protein’s sequence of amino acids determines its shape, it has been challenging for scientists to predict a protein’s three-dimensional structure from its amino acid sequence. Conversely, it has been difficult for scientists to devise brand new amino acid sequences that fold up into hitherto unseen structures. A protein’s structure dictates the types of biochemical and biological tasks it can perform.

The Nature letters look at one type of natural construction: proteins formed of repeat copies of a structural component. The researchers examined the potential for creating new types of these proteins. Just as the manufacturing industry was revolutionized by interchangeable parts, originating protein molecules with the right twists, turns and connections for their modular assembly would be a bold direction for biotechnology.

The letters are, “Exploring the repeat protein universe through computational design” [abstract, full text PDF courtesy of the Baker lab] and “Rational design of alpha-helical tandem repeat proteins with closed architecture.” [abstract, full text PDF courtesy of the Baker lab] The findings suggest the possibilities for producing protein geometries that exceed what nature has achieved. The work was led by postdoctoral fellows TJ Brunette, Fabio Parmeggiani and Po-Ssu Huang in David Baker’s lab at the University of Washington, and Lindsey Doyle and Phil Bradley at the Fred Hutchinson Cancer Research Institute in Seattle.

In addition, over the past several months, researchers at the UW Institute for Protein Design (IPD), Fred Hutch, and other institutions have described several advances in two longstanding problem areas in building new proteins from scratch.

“It has been a watershed year for protein structure predictions and design,” said Baker, a UW professor of biochemistry, Howard Hughes Medical Institute investigator, and head of the IPD. …

Because this news release reports a number of major advances that are not individually described, we will only consider the first of the above two Nature letters in this post. Subsequent posts will consider other research results highlighted by this news release. “Exploring the repeat protein universe through computational design” presents a completely automatic protocol to design variations on a specified structural motif. The protocol produces folds that are completely unlike those found in nature, suggesting that known families of protein structures sample only a small part of the polypeptide structure space, and thus opening up a wide array of new possibilities for molecular engineering.

Noting the widespread occurrence in nature of families of proteins made of multiple tandem copies of a repeating structural motif, that some of these naturally occurring repeat proteins have been re-engineered for molecular recognition and molecular scaffolding applications, and that all known designed repeat structures have been based on naturally occurring protein families, the authors ask if these families cover all stable repeat structures that can be built from the 20 genetically coded amino acids, or if natural evolution has only sampled a small subset of what is possible.

To explore the range of possible repeat protein structures, they generated new protein backbone arrangements and designed sequences, unrelated to any naturally occurring repeat protein, predicted to fold into those structures. Since they knew from natural proteins that a wide variety of curvatures can be generated by tandem repeating a helix-loop-helix-loop structural repeat, they chose a motif with two helices varied from 10 to 28 residues and two turns from 1 to 4 residues. A completely automated design process was used to generate designs fitting the chosen motif with low energies and complementary core side-chain packing. All the designs have an overall helical structure, and are thus classified on the basis of three parameters defining a helix: (1) the radius, (2) the twist between adjacent repeats around the helix axis, and (3) the translation between adjacent repeats along the helical axis.
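The three parameters above can be made concrete with a short sketch that places successive repeat-unit origins along a superhelix. This is our own illustration of the parameterization, not the Rosetta design code: it generates only repeat origins, not a protein backbone.

```python
# Illustrative sketch of the helical parameterization described in the text:
# repeat k sits at angle k*twist about the helix axis, at distance `radius`
# from it, and k*translation along it. Names and conventions are ours.
from math import cos, sin, radians

def repeat_origins(radius, twist_deg, translation, n_repeats):
    """Return (x, y, z) positions of repeat-unit origins along the superhelix."""
    points = []
    for k in range(n_repeats):
        theta = radians(twist_deg * k)
        points.append((radius * cos(theta),
                       radius * sin(theta),
                       translation * k))
    return points
```

Zero twist with nonzero translation gives the linear, untwisted shapes; nonzero twist with zero translation traces a flat toroid, matching the range of shapes reported below.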

The 761 designed helical repeat proteins (DHRs) that passed all of the filters used to check for stable helices cover a much larger range of these parameters than do the native repeat families found in natural proteins. Of these, 83 designs, none related to natural proteins, were selected for experimental characterization, expressed in Escherichia coli, and purified. 72 of these were stably folded at 95°C, and 53 were monomeric. Crystal structures were solved for 15 designs and found to match the design over both the protein backbone and the hydrophobic core side chains. The designs have very different overall shapes: for example, linear and untwisted, linear and twisted, spiral, and a flat toroid. The authors conclude:

… The crystal structures illustrate both the wide range of twist and curvature sampled by our repeat protein generation process and the accuracy with which these can be designed. … The design models and sequences are very different from each other and from naturally occurring repeat proteins, without any significant sequence or structural homology to known proteins …

The fully automatic design protocol that this research uses to design examples of one structural motif clearly demonstrates that known, natural proteins are “… only the tip of the iceberg of what is possible for polypeptide chains …” The skills and software for designing proteins that fold predictably appear to have reached a level of maturity at which a serious attempt to investigate designed proteins as a path to general capabilities in molecular engineering, and eventually to high-throughput atomically precise manufacturing, as proposed by Drexler in 1981, would not be unreasonable.
—James Lewis, PhD


Foresight advisor MIT Prof. Marvin Minsky (1927-2016)

Marvin Minsky at One Laptop per Child office, Cambridge Mass. 2008 (credit: Bcjordan/Wikimedia Commons)

“We are greatly saddened to hear of the death of Marvin Minsky, age 88. A pioneer in artificial intelligence, Marvin served as an Advisor to Foresight Institute from its earliest days, extending back to our predecessor organization, the MIT Nanotechnology Study Group. He wrote the Foreword to the first nanotechnology book, Engines of Creation, and was the dissertation advisor for the first-ever PhD in Molecular Nanotechnology, granted by MIT to K. Eric Drexler. Marvin’s genius and humor are well-known, and his insights will be immensely missed.”
—Christine Peterson, co-founder Foresight Institute


Conference video: Nanoscale Materials, Devices, and Processing Predicted from First Principles

Credit: William Goddard

A select set of videos from the 2013 Foresight Technical Conference: Illuminating Atomic Precision, held January 11-13, 2013 in Palo Alto, have been made available on Vimeo. Videos have been posted of those presentations for which the speakers have consented. Other presentations contained confidential information and will not be posted.

The seventh speaker at the Computation and Molecular Nanotechnologies session, William Goddard, presented “Nanoscale Materials, Devices, and Processing Predicted from First Principles” (https://vimeo.com/62119945, video length 34:22). Prof. Goddard addressed some of the method developments that allow modeling of large-scale systems, followed by examples. He noted that the grand vision over the past 25 years has been that theory can be used to predict something useful. To predict new systems where there is no empirical data, it is necessary to start from first principles.

Prof. Goddard reviewed the advances that enabled going from first principles to nanoscopic systems of interest. Starting from quantum mechanics, which can describe a few hundred—perhaps a thousand—atoms, it was necessary to reach realistic temperatures, pressures, and concentrations in systems with millions or billions of atoms.

In describing work in his group, Prof Goddard noted that 40% of their efforts were spent developing new methods to solve problems, but they get only 5% of their funding for doing that because people want answers to problems, not new methods.

Prof. Goddard proceeded to discuss four of the breakthroughs his group has made in addressing these issues: (1) the eFF approach for doing nonadiabatic quantum mechanics on systems of highly excited electrons; (2) the ReaxFF approach for doing reactions on millions of atoms, with good descriptions of barriers; (3) a fast way of computing entropy, and hence free energy, on the same time scale as molecular dynamics; and (4) coarse-grained force fields for DNA and RNA nanotechnology, used to assemble devices.

Prof. Goddard continued that there are four applications of interest for the eFF approach:

  • modeling the processes involved in damage-free low-energy electron enhanced etching
  • understanding what happens when electrons are ejected from cracked semiconductors like silicon
  • examining how an applied field drives electrons through a material, causing excitations that lead to dielectric breakdown
  • simulating the process of an STM forming images based on the structure of the tip and the surface

The issue in eFF, Prof. Goddard continued, is that the electrons are in highly excited states, perhaps the millionth excited state rather than the first or second, hopping off with plasma possibly nearby. The approach taken is to simplify the electrons by describing them as Gaussian functions and evolving the normal electrostatic interactions of electrons and nuclei with each other. One serious approximation is made by taking the wave function as a Hartree product (multiplying together the million orbitals of a million electrons) and then propagating this wave function with the Hamiltonian. The Pauli principle is introduced by recognizing that it forces two orbitals of the same spin to be orthogonal to each other. Working out what kind of force the orthogonalization of two Gaussians implies yields three scaling parameters, all near 1, obtained from calculations on small systems.
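One ingredient of that Pauli treatment can be shown concretely. The overlap of two normalized s-type Gaussian wave packets has a standard closed form, reproduced below; the full eFF Pauli potential built on this overlap, with its three fitted scaling parameters, is not reproduced here, and the function name is ours.

```python
# Analytic overlap integral of two normalized 3D s-type Gaussian wave packets
# with exponents a and b whose centers are a distance R apart. In eFF-style
# models, the Pauli repulsion between same-spin electrons is a function of
# this overlap; only the overlap itself is shown.
from math import exp, sqrt

def gaussian_overlap(a, b, R):
    """Overlap S of two normalized s-type Gaussians (exponents a, b; separation R)."""
    prefactor = (2.0 * sqrt(a * b) / (a + b)) ** 1.5
    return prefactor * exp(-(a * b / (a + b)) * R * R)
```

Identical coincident Gaussians overlap completely (S = 1), and the overlap, and hence the Pauli penalty, falls off smoothly as the wave packets separate.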

One of the problems Prof. Goddard’s group applied this procedure to was understanding how low-energy electron enhanced etching of semiconductors yields the incredibly smooth surfaces that are observed. They speculated that this might be an Auger effect: perhaps the electrons excite a core electron, and when that vacancy is filled there is enough energy to kick out a second electron, breaking a bond. They used eFF to simulate the Auger process and find out what the electrons are actually doing. When they ionized the down-spin carbon 1s electron in a diamondoid with a couple hundred atoms, four electrons in the four bonds to that atom rushed in to try to fill the hole. The Auger electron is kicked out, and in the particular case Prof. Goddard presented, a CH radical is also kicked out. Repeating the simulation many times identifies all of the possible reactions that can happen. One result was that all of the carbon atoms that get etched away come from the surface, explaining why this process leads to smooth surfaces with no damage. Prof. Goddard noted that no other quantum mechanics-based method can handle Auger decay processes, and that eFF can handle highly excited electronic states. Current efforts in this area include looking at whether electrons can be tuned to have differential effects even without masks on the surface.

Another eFF success Prof. Goddard noted is simulating what happens when silicon fractures. It has been experimentally observed that electrons leave the system, producing current flow and positive and negative voltage fluctuations as the crack propagates through the silicon. The eFF approach predicts the experimentally observed propagation rate, which no classical approach has achieved.

The second breakthrough Prof. Goddard presented is the ReaxFF reactive force field, a solution to the problem that even a simple nanodevice to be modeled, such as a 25×54×27 nm block, contains 3.7 million atoms, far too many for any quantum mechanics (QM) based method. Their reactive force field, however, describes the atoms almost as well as QM does, and 3.7 million atoms is a practical size. In the ReaxFF system, charges are allowed to flow as bonds break and form, just as in QM. Bonds are also allowed to change bond order and energy as they stretch. All the parameters come from QM. There is no dependence on empirical data, which is important for these materials-based systems because there is almost never enough empirical data to determine a system. With ReaxFF, Prof. Goddard continued, one force field can describe vanadium metal, vanadium oxide, and vanadium oxidation states II, III, IV and V. The example he presented was the successful prediction of hot-spot formation in a material segment of 3.7 million atoms.
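The central ReaxFF idea named above, bond order as a smooth function of interatomic distance so that bonds can break and re-form during dynamics, can be sketched directly. The functional form below is the published uncorrected sigma-bond-order expression; the parameter values are illustrative stand-ins, not a real force field.

```python
# Sketch of the ReaxFF bond-order concept: bond order decays smoothly with
# interatomic distance, so bond breaking and forming emerge naturally from
# dynamics. Parameter values here are illustrative, not fitted to QM.
from math import exp

def sigma_bond_order(r, r0=1.39, p1=-0.1, p2=6.0):
    """Uncorrected sigma bond order for interatomic distance r (angstroms)."""
    return exp(p1 * (r / r0) ** p2)
```

Near the equilibrium distance r0 the bond order is close to 1 (a full bond); by twice that distance it has decayed essentially to zero, so no explicit bond-breaking rules are needed.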

The third breakthrough that Prof Goddard presented addresses the difficulty of getting free energies. Usually talk of energies is restricted to enthalpies, or in the case of QM, enthalpies at 0 K. We want, Prof Goddard continued, to be able to calculate at the appropriate temperature and pressure and get free energies. THe method they developed has the same costs as doing dynamics, while previous methods for doing entropies were about 3000 times slower. One example looking at a branched chain DNA structure of about 50,000 atoms to determine how free energy, enthalpy, and entropy change over about 100 ns revealed that this process is dominated by entropy—a hydrophobic effect of release of water from the system into the surrounding solvent shell that dominates the system. Entropy calculations are fundamentally an equilibrium situation, but all that is needed is an equilibrium calculation of 10 to 20 ps. To do the calculation a density of states was obtained from molecular dynamics calculations, which was used to calculate partition function and entropy, etc.

The fourth breakthrough that Prof Goddard presented addressed the DNA origami technique: using DNA to organize large systems. A typical problem could involve 1856 DNA base-pairs and a structure 2.5×17.7×60.5 nm involving more than 360,000 atoms including counterions and water molecules. THe estimated time to complete a simulation of 1.5 µs using 100 processors would be about 4 years. Their solution was to invent a coarse-grained force field in which each sugar moiety is replaced with one bead (pseudo atom), each phosphate with one bead, and each base with three connected beads. In collaboration with DNA origami experimentalists, Prof Goddard’s group simulated using DNA origami to assemble structures of carbon nanotubes. They designed sequences for the linker and DNA hooks to ensure that various assumptions made to design the nanostructures were realistic. They worked out conditions to deposit the carbon nanotubes on either face of the origami structure and for depositing the nanostructures on silicon dioxide and for positioning various nanoparticles on the DNA origami.
—James Lewis, PhD

VN:F [1.9.17_1161]
Rating: 0.0/10 (0 votes cast)
VN:F [1.9.17_1161]
Rating: 0 (from 0 votes)

Conference video: Mythbusting Knowledge Transfer Mechanisms through Science Gateways

Credit: Gerhard Klimeck

A select set of videos from the 2013 Foresight Technical Conference: Illuminating Atomic Precision, held January 11-13, 2013 in Palo Alto, have been made available on vimeo. Videos have been posted of those presentations for which the speakers have consented. Other presentations contained confidential information and will not be posted.

The fifth speaker at the Computation and Molecular Nanotechnologies session, Gerhard Klimeck, presented “Mythbusting Knowledge Transfer Mechanisms through Science Gateways”. https://vimeo.com/62119946 – video length 40:28. Prof. Klimeck addressed novel ways to disseminate nanotechnology simulations to broader audiences, using as an example a user facility, the web site nanoHUB. About 12,000 users sign up for accounts and run half a million simulations throughout the year. Even more people view lectures, seminars, and tutorials, for which no account is needed. Looking at a display of the use of the facility from day to day, Prof. Klimeck asked whether the bursts of activity that appear show signs of knowledge transfer happening. Some of this data illustrates the mythbusting over the past ten years—all the things we were told we could not do. For example, the facility used to have about a thousand users per year; then the tools were made interactive, and that number suddenly grew to the roughly 12,000 of today. Lectures and seminars were introduced and led to dramatic growth. Citing recent interest in MOOCs (massive open online courses), Prof. Klimeck noted that nanoHUB had effectively been a MOOC for a number of years.

Prof. Klimeck shared a video about nanoHUB that illustrated the real research tools that are available as simulations users can run, for example, nanopores and quantum dots. He emphasized that the facility is completely free.

Looking back to 1965 and the advent of Moore’s Law, Prof. Klimeck pointed out that the enormous technological and economic progress embodied in Moore’s Law arose in part due to two pieces of software: Berkeley SPICE and Stanford SUPREM, used for circuit simulation and device processing. What is not generally recognized, Prof. Klimeck notes, is that SPICE started as a teaching tool. It was created in graduate school to teach about circuits. The creators continued to develop it by giving tapes of the program to students and inviting them to break the program so they could fix it. They knew that software is never perfect, and they invited user participation. Other tools that existed at the same time were kept proprietary, and thus no one knows their names anymore, but Simulation Program with Integrated Circuit Emphasis revolutionized the world. Similarly, the Stanford University PRocEss Modeling tool started out as a research tool that students took along with them to industry. Those two tools generated a new industry, said Prof. Klimeck, noting that Intel has a capitalization of $85B, and the total industry $280B.

Inspired by the benefits created by SPICE and SUPREM, Prof. Klimeck asked what benefits nanotechnology research can create for the world. “How do you get stuff out of this cornucopia of research and make it useful?”

To illustrate what anyone can do today using nanoHUB, Prof. Klimeck showed a molecular dynamics animation of carbon nanotubes and an animation of the eigenstates of quantum dots. The free availability of these simulations means they are available to experimentalists, and they can vet the tools so they can be improved. But can these tools rapidly be migrated to education, and can this migration have economic impact?

Although a gateway like nanoHUB need not be specific to nanotechnology, Prof. Klimeck said he was unaware of any other science gateways that have had an impact comparable to that of nanoHUB. Focusing on why it has proven difficult to replicate the success of nanoHUB, Prof. Klimeck noted that the whole process of placing things on the web is very difficult. One problem is that most code is written by one person to be used by that same person, so the code is very user unfriendly—it tends to appear as gobbledygook to anyone except the person who wrote it. In addition to being ‘user friendly’, the code must be ‘developer friendly’. It must be easy to make interfaces and put simulations out for the community to try. Finally, the code has to be accessible, so someone has to operate the facility to allow people to use the code.

Bringing these three stakeholders—users, developers, and facility operators—together has been difficult, in part because of myths that have developed. Prof. Klimeck related that he has been told in no uncertain terms that codes must be rewritten to use them for education, or that you must rewrite your whole code to put it on the web, or you must write your own code to do research, that is, you must understand every line of code to reuse someone else’s code.

Prof. Klimeck considered the case of a researcher with a cool code useful for some scientific question, who writes a proposal to put the code on the web by hiring a web developer, who speaks a different language, to put the code on the web. The process of rewriting the code typically takes 2-3 years, but what researcher wants to leave his code unmodified for 2-3 years during this process? So the researcher is frustrated: he is answering questions from the person rewriting the code, he no longer trusts the code, money is being spent, and no new research is getting done. Thus people conclude that such web sites cannot be used for real research—they can only be toy applications that perhaps can be used for education, but not for deep research.

nanoHUB is different, Prof. Klimeck explained, because they eliminated the middle person, and published 175 tools in 4 years. There is no code rewrite; the researcher retains ownership of the code, and the process takes only one or two weeks instead of two or three years. To make this possible, nanoHUB supplies a whole software ecosystem, very similar to Linux, that is easy to learn. This system has produced hundreds of tools and formed developer networks, producing small collaborations that can end up serving thousands of users, and helping some researchers earn early promotions through their involvement in nanoHUB.

As to the use of nanoHUB in education, Prof. Klimeck showed that knowledge transfer out of research and into education involved 14,521 students, taking 761 courses, in 189 institutions. Another interesting statistic he presented was that more than half of the tools coming out of research were adopted in classrooms in less than 6 months. Further, there were 960 citations of these tools in the literature, showing that other researchers were indeed using the tools that researchers had placed on nanoHUB. Finally, as with SPICE and SUPREM, many of the tools on nanoHUB find dual use in research and in education.
—James Lewis, PhD


Conference video: Nanoscale Materials, Devices, and Processing Predicted from First Principles

Credit: William Goddard

A select set of videos from the 2013 Foresight Technical Conference: Illuminating Atomic Precision, held January 11-13, 2013 in Palo Alto, have been made available on vimeo. Videos have been posted of those presentations for which the speakers have consented. Other presentations contained confidential information and will not be posted.

The seventh speaker at the Computation and Molecular Nanotechnologies session, William Goddard, presented “Nanoscale Materials, Devices, and Processing Predicted from First Principles”. https://vimeo.com/62119945 – video length 34:22. Prof. Goddard addressed some of the method developments that allow modeling of large-scale systems, followed by some examples. He noted that the grand vision over the past 25 years has been that theory can be used to predict something useful. To predict new systems where there is no empirical data, it is necessary to start with first principles.

Prof Goddard reviewed the advances that enabled going from first principles to nanoscopic systems of interest. Starting from quantum mechanics to describe a few hundred—perhaps a thousand—atoms, it was necessary to describe realistic temperatures, pressures, and concentrations in systems with millions or billions of atoms.

In describing work in his group, Prof Goddard noted that 40% of their efforts were spent developing new methods to solve problems, but they get only 5% of their funding for doing that because people want answers to problems, not new methods.

Prof Goddard proceeded to discuss four of the breakthroughs that they have made in addressing these issues: (1) the eFF approach for doing nonadiabatic quantum mechanics on systems of highly excited electrons; (2) the ReaxFF approach for doing reactions on millions of atoms, with good descriptions of barriers; (3) a fast way of computing entropies, and thus free energies, on the same time scale as the dynamics; (4) coarse-grained force fields for DNA and RNA nanotechnology for assembling devices.

Prof. Goddard continued that there are four applications of interest for the eFF approach:

  • modeling the processes involved in damage-free low-energy electron enhanced etching
  • understanding what happens when electrons are ejected from cracked semiconductors like silicon
  • looking at what happens when a field is applied and electrons come through and cause excitations leading to dielectric breakdown
  • simulating the process of an STM forming images based on the structure of the tip and the surface

The issue in eFF, Prof. Goddard continued, is that electrons are in highly excited states, hopping off perhaps with plasma nearby—not the first or second excited state but perhaps the millionth excited state. The approach taken is to simplify the electrons by describing them as Gaussian functions, and to evolve the normal electrostatic interactions between electrons and nuclei and of electrons with each other. One serious approximation is made by taking the wave function as a Hartree product (multiplying together the million orbitals of a million electrons), which is then propagated with the Hamiltonian. The Pauli principle is introduced by recognizing that it forces two orbitals with the same spin to be orthogonal to each other. Working out what kind of force the orthogonalization of two Gaussians leads to yields three scaling parameters, all near 1, obtained from calculations on small systems.
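The flavor of this description can be conveyed with a toy calculation (illustrative only; the function names and simplified energy expressions here are our sketch, not eFF's actual parameterization): each electron is a spherical Gaussian wave packet whose size sets its kinetic energy, and whose Coulomb interactions are erf-screened at short range.

```python
import math

def kinetic(s):
    # Kinetic energy (atomic units, up to convention-dependent factors)
    # of a spherical Gaussian wave packet of width s: smaller packets
    # cost more kinetic energy, which keeps electrons from collapsing
    # onto the nuclei
    return 1.5 / s**2

def coulomb(q1, q2, r, s1=0.0, s2=0.0):
    # Coulomb interaction between two charges a distance r apart; if
    # either charge is a Gaussian cloud (width > 0), the 1/r singularity
    # is erf-screened and the energy stays finite even at contact
    s = math.sqrt(s1**2 + s2**2)
    if s == 0.0:
        return q1 * q2 / r
    if r == 0.0:
        return q1 * q2 * math.sqrt(2.0 / math.pi) / s
    return q1 * q2 * math.erf(r / (s * math.sqrt(2.0))) / r
```

At large separations the screened form reduces to the bare 1/r law, while at contact it stays finite; that finiteness is part of what lets packet dynamics run stably even for the highly excited states described above.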

One of the problems Prof Goddard’s group applied this procedure to was understanding what happens with low-energy electron enhanced etching of semiconductors to produce the incredibly smooth surfaces that are seen. They speculated that this might be an Auger effect: that what the electrons do is excite a silicon core electron, and when that vacancy is filled there is enough energy to kick out a second electron, breaking the bond. They used the eFF method to simulate the Auger process and find out what the electrons are actually doing. When they ionized the down-spin carbon 1s electron in a diamondoid with a couple hundred atoms, the four electrons in the four bonds to that atom rushed in to try to fill the hole. The Auger electron is kicked out, and in the particular case Prof Goddard presented, a CH radical is also kicked out. Repeating the simulation many times identifies all of the possible reactions that can happen. One result was that all of the carbon atoms that get etched away come from the surface, explaining why this process leads to smooth surfaces with no damage. Prof Goddard noted that no other quantum mechanics-based method can handle Auger decay processes, and that eFF can handle highly excited electronic states. Current efforts in this area include looking at whether electrons can be tuned to have differential effects even without masks on the surface.

Another eFF success that Prof Goddard noted is simulating what happens when silicon fractures. It has been experimentally observed that electrons leave the system, producing current flow and positive and negative voltage fluctuations as the crack propagates through the silicon. The eFF approach predicts the experimentally observed propagation rate, something no classical approach has achieved.

The second breakthrough that Prof Goddard presented is their ReaxFF reactive force field, a solution to the problem that even a simple nanodevice to be modeled, such as something 25x54x27 nm, contains 3.7 million atoms, which is too many to be practical for any quantum mechanics (QM) based method. Their reactive force field, however, describes the atoms almost as well as QM does, and 3.7 million atoms is a practical size. In the ReaxFF system, charges are allowed to flow in the system, just as in QM, as bonds break and form. The bonds are also allowed to change bond order and energy as the bond stretches. All the parameters come from QM. There is no dependence on empirical data, which is important with these materials-based systems because there is almost never enough empirical data to determine a system. With ReaxFF, Prof Goddard continued, one force field can describe vanadium metal, vanadium oxide, and vanadium oxidation states II, III, IV, and V. The example he presented was the successful prediction of hot spot formation in a material segment of 3.7 million atoms.
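The bond-order idea can be illustrated with a minimal sketch (the parameter values below are illustrative stand-ins, not a fitted ReaxFF parameter set): a ReaxFF-style sigma bond order decays smoothly from about 1 near the equilibrium bond length toward 0 as the bond stretches, so bond breaking emerges continuously from the distance dependence rather than being switched off by hand.

```python
import math

def sigma_bond_order(r, r0=1.52, p1=-0.1, p2=6.0):
    # ReaxFF-style uncorrected sigma bond order as a function of
    # interatomic distance r: near 1 for r <= r0 and decaying smoothly
    # to 0 as the bond stretches (r0, p1, p2 are illustrative values)
    return math.exp(p1 * (r / r0) ** p2)

def bond_energy(r, d_sigma=100.0):
    # Bond energy proportional to bond order, so the energy (and hence
    # the force) goes smoothly to zero as the bond dissociates
    return -d_sigma * sigma_bond_order(r)
```

Because the same smooth function covers bonded and dissociated geometries, one parameter set can follow a bond through breaking and re-forming, which is what lets a single force field span the metal, oxide, and multiple oxidation states mentioned above.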

The third breakthrough that Prof Goddard presented addresses the difficulty of getting free energies. Usually talk of energies is restricted to enthalpies, or in the case of QM, enthalpies at 0 K. We want, Prof Goddard continued, to be able to calculate at the appropriate temperature and pressure and get free energies. The method they developed has the same cost as doing dynamics, while previous methods for computing entropies were about 3000 times slower. One example, looking at a branched-chain DNA structure of about 50,000 atoms to determine how free energy, enthalpy, and entropy change over about 100 ns, revealed that this process is dominated by entropy—a hydrophobic effect of release of water from the system into the surrounding solvent shell. Entropy calculations fundamentally require an equilibrium situation, but all that is needed is an equilibrium calculation of 10 to 20 ps. To do the calculation, a density of states was obtained from molecular dynamics calculations and used to calculate the partition function, and from it the entropy and the other thermodynamic functions.
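The last step, weighting a density of states to obtain thermodynamics, can be sketched as follows (a minimal stand-in for the actual method; in practice the density of states comes from the short equilibrium trajectory, and low-frequency diffusive motion needs separate treatment): each vibrational frequency contributes the entropy of a quantum harmonic oscillator, and summing over the density-of-states bins gives the total.

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s

def mode_entropy(freq_hz, T=300.0):
    # Entropy (in units of kB) of one quantum harmonic oscillator of
    # frequency freq_hz at temperature T; low frequencies contribute
    # more entropy than stiff high-frequency modes
    x = H * freq_hz / (KB * T)
    return x / math.expm1(x) - math.log(-math.expm1(-x))

def entropy_from_dos(freqs, weights, T=300.0):
    # Total entropy (J/K) from a discretized vibrational density of
    # states: frequencies of the bins and the mode count in each bin
    return KB * sum(w * mode_entropy(f, T) for f, w in zip(freqs, weights))
```

With the entropy in hand, the free energy follows from the enthalpy already available from the dynamics, which is the quantity the DNA example above shows to be entropy-dominated.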

The fourth breakthrough that Prof Goddard presented addressed the DNA origami technique: using DNA to organize large systems. A typical problem could involve 1856 DNA base pairs and a structure 2.5×17.7×60.5 nm involving more than 360,000 atoms, including counterions and water molecules. The estimated time to complete a simulation of 1.5 µs using 100 processors would be about 4 years. Their solution was to invent a coarse-grained force field in which each sugar moiety is replaced with one bead (pseudo atom), each phosphate with one bead, and each base with three connected beads. In collaboration with DNA origami experimentalists, Prof Goddard’s group simulated using DNA origami to assemble structures of carbon nanotubes. They designed sequences for the linker and DNA hooks to ensure that various assumptions made in designing the nanostructures were realistic. They worked out conditions to deposit the carbon nanotubes on either face of the origami structure, to deposit the nanostructures on silicon dioxide, and to position various nanoparticles on the DNA origami.
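The bead mapping itself is straightforward to express (a schematic sketch of the mapping step only; a real coarse-grained force field also needs bonded and nonbonded interactions fitted between the beads): each group of atoms is replaced by a pseudo atom at its center of mass.

```python
def coarse_grain(atoms, mapping):
    # atoms: name -> (mass, (x, y, z)); mapping: bead name -> atom names.
    # For DNA this would be one bead per sugar, one per phosphate, and
    # three per base, as in the scheme described above.
    beads = {}
    for bead, names in mapping.items():
        m_total = sum(atoms[n][0] for n in names)
        beads[bead] = tuple(
            sum(atoms[n][0] * atoms[n][1][i] for n in names) / m_total
            for i in range(3)
        )
    return beads
```

Replacing dozens of atoms per nucleotide with five beads cuts the particle count by an order of magnitude and removes the fastest motions, which is what brings a 1.5 µs simulation down from years to something tractable.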
—James Lewis, PhD


Molecular arm grabs, transports, releases molecular cargo

Cartoon representation of a small-molecule robot able to transport a molecular cargo (shown in red) in either direction from blue-to-green or green-to-blue platform sites. Credit: Leigh Group, University of Manchester

A few months ago we pointed to a very thorough review of artificial molecular machines authored by Prof. David Leigh (winner of the 2007 Feynman Prize in Nanotechnology, Theory category) and three colleagues. Over at ChemistryWorld Simon Hadlington describes recent work from Prof. Leigh’s group using a molecular robot arm to pick up, transport, and release a molecular cargo. “Molecular robot opens the way to nano-assembly lines“:

UK chemists have devised a nanoscale robot that can grasp a cargo molecule, pick it up, place it in a new position some distance away and release it. At no time does the cargo dissociate from the machine or exchange with other molecules. While such a sequence of actions is trivial on a macroscopic scale, to achieve it synthetically with small molecules is unprecedented and could mark the start of a new era of molecular robotics. Multiple similar robots in sequence could, for example, replicate a factory’s assembly line to build increasingly complex molecular structures. …

The research paper was published last month in Nature Chemistry, “Pick-up, transport and release of a molecular cargo using a small-molecule robotic arm” [abstract]. The abstract rightly notes that to date the concept of similarly manipulating molecular fragments has only been explored with biomolecular building blocks (specifically structural DNA nanotechnology). This advance implements the concept using small synthetic molecules, with movement provided by inducing conformational and configurational changes within an embedded hydrazone molecular rotary switch.
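The operating cycle can be caricatured as a small state machine (purely conceptual; the state names and transitions are our shorthand for the published sequence, not the paper's notation), with the E/Z isomerization of the hydrazone switch swinging the arm between platforms:

```python
# Each state is (cargo location, switch isomer). The cycle: bind cargo
# at the blue site, isomerize the hydrazone switch (E -> Z) to swing
# the arm ~2 nm, then release the cargo at the green site.
STEPS = [
    (("blue", "E"), ("arm", "E")),   # pick-up: cargo bonds to the arm
    (("arm", "E"), ("arm", "Z")),    # transport: switch isomerizes, arm swings
    (("arm", "Z"), ("green", "Z")),  # release: cargo set down at green site
]

def run_cycle(state=("blue", "E")):
    # Follow the transitions in order; throughout, the cargo stays bound
    # to the machine except at the deliberate pick-up and release steps
    path = [state]
    for src, dst in STEPS:
        assert state == src
        state = dst
        path.append(state)
    return path
```

Running the switch in the opposite direction would move cargo green-to-blue, matching the bidirectionality noted in the figure caption.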

Chemical structure of a molecular robotic arm (shown in black) able to reposition a molecular cargo (shown in red) in either direction from blue-to-green or green-to-blue platform sites. Credit: Leigh Group, University of Manchester

The sequence of steps is also illustrated on the Leigh Group’s web site “Molecular Robotics“:

… Now chemists at the University of Manchester have made a molecular machine with a ‘robotic arm’ that is able to pick up a molecular cargo, reposition it, set it down and release it at a second site approximately 2 nm (0.000002 mm) away from the starting position … The relocation of molecular fragments with a nanoscale robotic arm—making and breaking chemical bonds in a process during which the cargo is unable to exchange with others in the bulk—is the first step towards the controlled manipulation of molecular-level structures through programmable small-molecule robotics. …

By the way, a few months ago when we pointed to the comprehensive 126-page review of artificial molecular machines by Prof. Leigh and his colleagues, we noted that “Unfortunately, at this writing, it only seems to exist behind a pay wall”. Fortunately, sometime between then and now the review “Artificial Molecular Machines” was changed to an open access article. For those who would like a less comprehensive and more succinct treatment of important aspects of the same topic, Prof. Leigh and another colleague published an open access essay in Angewandte Chemie International Edition last summer “Rise of the Molecular Machines“. Between this small molecule approach and the biomolecular approach we have multiple ways to pursue complex systems of molecular machinery.
—James Lewis, PhD


DNA nanotechnology controls which molecules enter cells

Lock and key mechanism (courtesy of Dr Stefan Howorka and Dr Jonathan Burns, UCL)

A useful addition to the toolkits for nanomedicine and synthetic biology would be a cell membrane pore to exclude or admit predetermined molecules on demand. A hat tip to Kurzweil Accelerating Intelligence for describing and pointing to this UCL news release “DNA ‘building blocks’ pave the way for improved drug delivery“:

DNA has been used as a ‘molecular building block’ to construct synthetic bio-inspired pores which will improve the way drugs are delivered and help advance the field of synthetic biology, according to scientists from UCL and Nanion Technologies.

The study, published today in Nature Nanotechnology [abstract] and funded by the Biotechnology and Biological Sciences Research Council (BBSRC), Leverhulme Trust and UCL Chemistry, shows how DNA can be used to build stable and predictable pores that have a defined shape and charge to control which molecules can pass through the pore and when.

Lead author, Dr Stefan Howorka (UCL Chemistry), said: “Natural biological pores made of proteins are essential for transporting cargo into and out of biological cells but they are hard to design from scratch. DNA offers a whole new strategy for constructing highly specific synthetic pores that we can open and close on demand. We’ve engineered our pores to act like doors – the door unlocks only when provided with the right key. By building these pores into drug carriers, we think it will allow for much more precise targeting of therapeutics.”

Many therapeutics including anti-cancer drugs can be ferried around the body in tiny carriers called vesicles which are targeted to different tissues using biological markers. Previously, releasing the drugs from inside the vesicles was triggered with temperature-induced leaky vesicle walls or with inserted peptide channels, which are less rigid and predictable than DNA.

Using DNA building blocks, the team designed pores with pre-determined structures and defined properties which were precisely anchored into the walls – or membranes – of vesicles.

“Our pores take the shape of an open barrel made of six DNA staves. We designed a molecular gate to close off one entrance but then re-open the channel when a specific molecule binds. Anchors with high membrane affinity were attached to tether the water-soluble pores into the oily membrane,” explained first author, Dr Jonathan Burns (UCL Chemistry).

Using electrophysiology techniques, the researchers verified that the pore vertically spanned the surface of the membrane and was stable with an internal width of 2 nm, which is an appropriate size for small drug molecules to fit through.

The gate’s lock and release mechanism was then tested with electrophysiology techniques as well as with fluorophores, which are of equivalent size to small molecules. As the DNA pore had a net negative charge, fluorophores with a net negative charge moved through with more ease than those with a net positive charge, showing selectivity for which cargo could exit. Removing the lock with a matching key increased traffic 140-fold compared to a mismatched key.

Co-author Astrid Seifert who works with Dr Niels Fertig at Nanion Technologies, said: “We were able to precisely analyse the performance of each of the pores we created. We first inserted pores in membranes and then tested the biophysical response of each channel using advanced microchips. We’ve not only developed a new way to design highly specific pores but also an automated method to test their properties in situ, which will be important for testing pores being used for targeted drug delivery in the future.”

The researchers plan on testing the synthetic pores in a variety of scenarios including the release of anti-cancer drugs to cells and the development of pores that release pharmaceutically active biomolecules.

Dr Howorka added, “Our approach is a big step forward in building and using synthetic biological structures and promises a new era in pore design and synthetic biology. We have demonstrated such precise control over the behaviour of the pore, both in terms of selectivity and in terms of responsiveness that we believe that the method paves the way for a wide range of applications from drug delivery to nanosensing.”
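As a back-of-envelope check on the selectivity numbers quoted above, in a simple Boltzmann-barrier picture (our simplification, not the authors' analysis), a 140-fold change in passage rate corresponds to a free-energy barrier change of kT ln(140), about 4.9 kT, or roughly 12 kJ/mol at room temperature:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def barrier_change(rate_ratio, T=298.0):
    # Free-energy barrier difference (J/mol) implied by a given ratio
    # of passage rates, assuming Arrhenius-like barrier crossing:
    # ratio = exp(dG / (R * T))
    return R * T * math.log(rate_ratio)
```

That is a few hydrogen bonds' worth of free energy, a plausible scale for the lock strand binding and unbinding that opens and closes the gate.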

A variety of applications are suggested by the ability to build such a precisely controlled cellular valve. It will be interesting to see which ones appear soon. Perhaps a point to clarify: how difficult will it be to insert such engineered pores into various types of cells in living animals/patients? How well do the pores hold up in the living organism?
—James Lewis, PhD
