California Institute of Technology


A Technology Powerhouse


Owner: Brian

School members: 2

Description:

The California Institute of Technology (Caltech) is a private research university situated in Pasadena, California. Caltech has six academic divisions with a strong emphasis on science and engineering. Its 124-acre primary campus is located approximately 11 miles northeast of downtown Los Angeles.

Founded as a preparatory and vocational school by Amos G. Throop in 1891, the college attracted leading early 20th-century scientists such as Robert Andrews Millikan and George Ellery Hale. Despite its relatively small size, 31 Caltech alumni and faculty have won the Nobel Prize, and 66 have won the United States National Medal of Science or Technology. There are 110 faculty members who have been elected to the National Academies. Caltech managed $333 million in sponsored research in 2011 and held an endowment of $1.76 billion in 2012.

Caltech was ranked first in the 2012–2013 Times Higher Education World University Rankings for the second year running, as well as ranking first in Engineering & Technology and Physical Sciences.

Honors: A Technology Powerhouse


Caltech News

News from www.caltech.edu

Computer scientists at Caltech have designed DNA molecules that can carry out reprogrammable computations, for the first time creating so-called algorithmic self-assembly in which the same "hardware" can be configured to run different "software."

In a paper publishing in Nature on March 21, a team headed by Caltech's Erik Winfree (PhD '98), professor of computer science, computation and neural systems, and bioengineering, showed how the DNA computations could execute six-bit algorithms that perform simple tasks. The system is analogous to a computer, but instead of using transistors and diodes, it uses molecules to represent a six-bit binary number (for example, 011001) as input, during computation, and as output. One such algorithm determines whether the number of 1-bits in the input is odd or even (the example above would be odd, since it has three 1-bits); another determines whether the input is a palindrome; and yet another generates random numbers.
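The three algorithms mentioned are simple enough to state in a few lines of ordinary code. The sketch below (plain Python, purely an illustration; the actual system computes with DNA tiles, not software) captures just the logic of each task on a six-bit input:

```python
import random

def is_odd_parity(bits: str) -> bool:
    """True if the number of 1-bits is odd (e.g. "011001" has three 1-bits)."""
    return bits.count("1") % 2 == 1

def is_palindrome(bits: str) -> bool:
    """True if the bit string reads the same forwards and backwards."""
    return bits == bits[::-1]

def random_six_bits(rng: random.Random) -> str:
    """A random six-bit string, analogous to the random-number circuit."""
    return "".join(rng.choice("01") for _ in range(6))

print(is_odd_parity("011001"))   # three 1-bits -> True
print(is_palindrome("011110"))   # True
```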

"Think of them as nano apps," says Damien Woods, professor of computer science at Maynooth University near Dublin, Ireland, and one of two lead authors of the study. "The ability to run any type of software program without having to change the hardware is what allowed computers to become so useful. We are implementing that idea in molecules, essentially embedding an algorithm within chemistry to control chemical processes."

The system works by self-assembly: small, specially designed DNA strands stick together to build a logic circuit while simultaneously executing the circuit algorithm. Starting with the original six bits that represent the input, the system adds row after row of molecules—progressively running the algorithm. Modern digital electronic computers use electricity flowing through circuits to manipulate information; here, the rows of DNA strands sticking together perform the computation. The end result is a test tube filled with billions of completed algorithms, each one resembling a knitted scarf of DNA, representing a readout of the computation. The pattern on each "scarf" gives you the solution to the algorithm that you were running. The system can be reprogrammed to run a different algorithm by simply selecting a different subset of strands from the roughly 700 that constitute the system.
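Conceptually, this row-by-row growth resembles a one-dimensional cellular automaton, where each new row is determined locally from the one beneath it. The sketch below is an assumption for illustration only: the XOR-of-neighbors rule is invented here as a stand-in, and the real tile-attachment rules differ.

```python
def next_row(row):
    """Compute the next row: each bit is the XOR of its two neighbors
    (wrapping around), a stand-in for local tile-attachment rules."""
    n = len(row)
    return [row[(i - 1) % n] ^ row[(i + 1) % n] for i in range(n)]

def grow(input_bits, rows):
    """Run the 'algorithm' by stacking rows, like the knitted DNA scarf."""
    history = [list(input_bits)]
    for _ in range(rows):
        history.append(next_row(history[-1]))
    return history

scarf = grow([0, 1, 1, 0, 0, 1], 4)  # start from the six-bit input
for row in scarf:
    print("".join(map(str, row)))    # the "pattern on the scarf"
```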

"We were surprised by the versatility of programs we were able to design, despite being limited to six-bit inputs," says David Doty, fellow lead author and assistant professor of computer science at the University of California, Davis. "When we began experiments, we had only designed three programs. But once we started using the system, we realized just how much potential it has. It was the same excitement we felt the first time we programmed a computer, and we became intensely curious about what else these strands could do. By the end, we had designed and run a total of 21 circuits."

The researchers were able to experimentally demonstrate six-bit molecular algorithms for a diverse set of tasks. In mathematics, their circuits tested inputs to assess if they were multiples of three, performed equality checks, and counted to 63. Other circuits drew "pictures" on the DNA "scarves," such as a zigzag, a double helix, and irregularly spaced diamonds. Probabilistic behaviors were also demonstrated, including random walks, as well as a clever algorithm (originally developed by computer pioneer John von Neumann) for obtaining a fair 50/50 random choice from a biased coin.
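Two of these tasks translate directly into short conventional programs. The sketch below (ordinary Python, not the molecular implementation) shows a remainder-tracking divisibility-by-three check and von Neumann's fair-coin trick:

```python
import random

def multiple_of_three(bits: str) -> bool:
    """Digit-by-digit check that a binary string is divisible by 3,
    tracking only the running remainder -- the kind of small
    finite-state computation such circuits can implement."""
    r = 0
    for b in bits:
        r = (2 * r + int(b)) % 3
    return r == 0

def von_neumann_fair_flip(biased_flip) -> int:
    """von Neumann's trick: flip twice, keep HT as heads and TH as tails,
    discard HH and TT. The kept outcomes each occur with probability
    p*(1-p), so the result is fair regardless of the coin's bias."""
    while True:
        a, b = biased_flip(), biased_flip()
        if a != b:
            return a

rng = random.Random(42)
biased = lambda: 1 if rng.random() < 0.8 else 0   # an 80/20 biased coin
samples = [von_neumann_fair_flip(biased) for _ in range(10_000)]
print(multiple_of_three("010101"))   # 21 is a multiple of 3 -> True
print(sum(samples) / len(samples))   # close to 0.5 despite the bias
```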

Both Woods and Doty were theoretical computer scientists when beginning this research, so they had to learn a new set of "wet lab" skills that are typically more in the wheelhouse of bioengineers and biophysicists. "When engineering requires crossing disciplines, there is a significant barrier to entry," says Winfree. "Computer engineering overcame this barrier by designing machines that are reprogrammable at a high level—so today's programmers don't need to know transistor physics. Our goal in this work was to show that molecular systems similarly can be programmed at a high level, so that in the future, tomorrow's molecular programmers can unleash their creativity without having to master multiple disciplines."

"Unlike previous experiments on molecules specially designed to execute a single computation, reprogramming our system to solve these different problems was as simple as choosing different test tubes to mix together," Woods says. "We were programming at the lab bench."

Although DNA computers have the potential to perform more complex computations than the ones featured in the Nature paper, Winfree cautions that one should not expect them to start replacing the standard silicon microchip computers. That is not the point of this research. "These are rudimentary computations, but they have the power to teach us more about how simple molecular processes like self-assembly can encode information and carry out algorithms. Biology is proof that chemistry is inherently information-based and can store information that can direct algorithmic behavior at the molecular level," he says.

The paper is titled "Diverse and robust molecular algorithms using reprogrammable DNA self-assembly." Co-authors of the paper include Cameron Myhrvold, Joy Hui, and Peng Yin of Harvard University, and Felix Zhou of the University of Oxford. Funding to support this research came from the National Science Foundation and NASA.

Researchers at Caltech have designed a way to levitate and propel objects using only light, by creating specific nanoscale patterning on the objects' surfaces.

Though still theoretical, the work is a step toward developing a spacecraft that could reach the nearest planet outside of our solar system in 20 years, powered and accelerated only by light.

A paper describing the research appears online in the March 18 issue of the journal Nature Photonics. The research was done in the laboratory of Harry Atwater, Howard Hughes Professor of Applied Physics and Materials Science in Caltech's Division of Engineering and Applied Science.

Decades ago, the development of so-called optical tweezers enabled scientists to move and manipulate tiny objects, like nanoparticles, using the radiative pressure from a sharply focused beam of laser light. This work formed the basis for the 2018 Nobel Prize in Physics. However, optical tweezers are only able to manipulate very small objects and only at very short distances.

Ognjen Ilic, postdoctoral scholar and the study's first author, gives an analogy: "One can levitate a ping pong ball using a steady stream of air from a hair dryer. But it wouldn't work if the ping pong ball were too big, or if it were too far away from the hair dryer, and so on."

With this new research, objects of many different shapes and sizes—from micrometers to meters—could be manipulated with a light beam. The key is to create specific nanoscale patterns on an object's surface. This patterning interacts with light in such a way that the object can right itself when perturbed, creating a restoring torque to keep it in the light beam. Thus, rather than requiring highly focused laser beams, the objects' patterning is designed to "encode" their own stability. The light source can also be millions of miles away.
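As a rough intuition for this self-righting behavior, one can model the object's tilt angle with a restoring torque plus a little damping. The toy simulation below is purely an illustrative assumption (the paper's optical physics is far richer): a perturbed object settles back toward alignment with the beam.

```python
def simulate(theta0, k=1.0, c=0.5, dt=0.01, steps=5000):
    """Integrate theta'' = -k*theta - c*theta': a restoring torque
    proportional to the tilt angle, plus damping (semi-implicit Euler)."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        alpha = -k * theta - c * omega   # angular acceleration
        omega += alpha * dt
        theta += omega * dt
    return theta

final = simulate(theta0=0.3)  # start tilted 0.3 rad off the beam axis
print(abs(final))             # decays toward 0: the object rights itself
```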

"We have come up with a method that could levitate macroscopic objects," says Atwater, who is also the director of the Joint Center for Artificial Photosynthesis. "There is an audaciously interesting application to use this technique as a means for propulsion of a new generation of spacecraft. We're a long way from actually doing that, but we are in the process of testing out the principles."

In theory, this spacecraft could be patterned with nanoscale structures and accelerated by an Earth-based laser light. Without needing to carry fuel, the spacecraft could reach very high, even relativistic speeds and possibly travel to other stars.

Atwater also envisions that the technology could be used here on Earth to enable rapid manufacturing of ever-smaller objects, like circuit boards.

The paper is titled "Self-stabilizing photonic levitation and propulsion of nanostructured macroscopic objects." Funding was provided by the Air Force Office of Scientific Research.

Many humans are able to unconsciously detect changes in Earth-strength magnetic fields, according to scientists at Caltech and the University of Tokyo.

The study, led by geoscientist Joseph Kirschvink (BS, MS '75) and neuroscientist Shin Shimojo at Caltech as well as neuroengineer Ayumu Matani at the University of Tokyo, offers experimental evidence that human brain waves respond to controlled changes in Earth-strength magnetic fields. Kirschvink and Shimojo say this is the first concrete evidence of a new human sense: magnetoreception. Their findings were published by the journal eNeuro on March 18.

"Many animals have magnetoreception, so why not us?" asks Connie Wang, Caltech graduate student and lead author of the eNeuro study. For example, honeybees, salmon, turtles, birds, whales, and bats use the geomagnetic field to help them navigate, and dogs can be trained to locate buried magnets. It has long been theorized that humans may share a similar ability. However, despite a flurry of research attempting to test for it in the '80s, it has never been conclusively demonstrated.

"Aristotle described the five basic senses as including vision, hearing, taste, smell, and touch," says Kirschvink, co-corresponding author of the eNeuro study and Nico and Marilyn Van Wingen Professor of Geobiology. "However, he did not consider gravity, temperature, pain, balance, and several other internal stimuli that we now know are part of the human nervous system. Our animal ancestry argues that geomagnetic field sensors should also be there representing not the sixth sense but perhaps the 10th or 11th human sense to be discovered."

To try to determine whether humans do sense magnetic fields, Kirschvink and Shimojo built an isolated radiofrequency-shielded chamber and had participants sit in silence and utter darkness for an hour. During that time, they shifted the magnetic field silently around the chamber and measured participants' brain waves via electrodes positioned at 64 locations on their heads.

The test was performed with 34 human participants from a wide age range and a variety of ethnicities. During a given session, the participants consciously experienced nothing more interesting than sitting alone in the dark. However, among many participants, changes in their brain waves correlated with changes in the magnetic field around them. Specifically, the researchers tracked the alpha rhythm in the brain, which occurs at between 8 and 13 Hertz and is a measure of whether the brain is being engaged or is in a resting or "autopilot" mode. When a human brain is unengaged, the alpha power is high. When something catches its attention, consciously or unconsciously, its alpha power drops. Several other sensory stimuli like vision, hearing, and touch are known to cause abrupt drops in the amplitude of alpha waves in the first few seconds after the stimulus.

The experiments showed that, in some participants, alpha power began to drop from baseline levels immediately after magnetic stimulation, decreasing by as much as 60 percent over several hundred milliseconds, then recovering to baseline a few seconds after the stimulus. "This is a classic, well-studied brain wave response to a sensory input, termed event-related desynchronization, or alpha-ERD," says Shimojo, Gertrude Baltimore Professor of Experimental Psychology and affiliated faculty member of the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech.
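The alpha-ERD measurement can be sketched schematically: compare alpha-band power before and after the stimulus. In the toy example below (a simplification for illustration, not the study's actual EEG pipeline), a synthetic 10 Hz signal loses 60 percent of its power at the stimulus:

```python
import math

FS = 250  # assumed sampling rate, in Hz

def power(signal):
    """Mean squared amplitude -- a crude power estimate for a signal
    already dominated by the 8-13 Hz alpha band."""
    return sum(x * x for x in signal) / len(signal)

def alpha_wave(amplitude, seconds):
    """A pure 10 Hz sinusoid standing in for the resting alpha rhythm."""
    n = int(FS * seconds)
    return [amplitude * math.sin(2 * math.pi * 10 * t / FS) for t in range(n)]

baseline = alpha_wave(1.0, 2)        # pre-stimulus alpha rhythm
post = alpha_wave(0.4 ** 0.5, 2)     # amplitude scaled so power falls 60%

drop = 1 - power(post) / power(baseline)
print(f"alpha power drop: {drop:.0%}")   # prints "alpha power drop: 60%"
```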

The tests further revealed that the brain appears to be actively processing magnetic information and rejecting signals that are not "natural." For example, when the vertical component of the magnetic field pointed steadily upward during the experiments, there were no corresponding changes in brain waves. Because the magnetic field normally points down in the Northern Hemisphere, it seems that the brain is ignoring signals that are obviously “wrong.” This component of the study could be verified by replicating the experiment in the Southern Hemisphere, Kirschvink suggests, where the opposite pattern should hold.

"Alpha-ERD is a strong neural signature of sensory detection and the resulting attention shift. The fact that we see it in response to simple magnetic rotations like we experience when turning or shaking our head is powerful evidence for human magnetoreception. The large individual differences we found are also intriguing with regard to human evolution and the influences of modern life," says Shimojo. "As for the next step, we ought to try bringing this into conscious awareness."

One of the challenges in early attempts to test human magnetoreception was the difficulty of making sure that those changes in brain waves were, indeed, correlated to the magnetic field and not to some other confounding effect. For example, if the coils generating the magnetic field around the chamber created an audible hum, that might be enough to trigger a change in alpha power in participants.

To address those issues, the chamber used in this study was not only pitch black and isolated; the copper wires for altering the magnetic field were also wrapped and cemented in place in duplicate: each coil has a pair of wires rather than a single strand. When current is directed through these wire pairs in the same direction, the magnetic field in the chamber is altered. However, running the current in opposite directions through the wires in the pairs cancels their magnetic fields, while yielding the same electrical heating and mechanical artifacts. Computers completely controlled the experiments and recorded the data, and results were processed automatically with turnkey scripts involving no subjective steps. In this fashion, the team was able to show that the human brains did, indeed, respond to the magnetic field as opposed to just the energizing of the coils themselves.
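The logic of the paired-wire control can be captured in a few lines: the field scales with the signed sum of the two currents, while resistive heating scales with the sum of their squares. A minimal sketch (illustrative only, not the chamber's engineering):

```python
def field(i1, i2):
    """Net field contribution, proportional to the signed current sum."""
    return i1 + i2

def heating(i1, i2):
    """Resistive heating, proportional to the sum of squared currents."""
    return i1**2 + i2**2

active = (1.0, 1.0)    # currents in the same direction: field present
sham = (1.0, -1.0)     # opposite directions: field cancels

print(field(*active), field(*sham))       # 2.0 vs 0.0
print(heating(*active), heating(*sham))   # identical: 2.0 and 2.0
```

Because the active and sham conditions heat and vibrate the coils identically, any brain-wave difference between them can be attributed to the magnetic field itself.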

"Our results rule out electrical induction and the 'quantum compass' hypotheses for the magnetic sense," says Kirschvink, naming two possibilities that have been proposed for explaining the mechanism behind magnetoreception. Kirschvink suggests instead that the results implicate biological magnetite as the sensory agent for human magnetoreception. In 1962, Heinz A. Lowenstam, a Caltech professor from 1954 until his death in 1993, discovered that magnetite, a naturally magnetic mineral, occurs in mollusk teeth. Since then, biological magnetite has been found to exist in organisms from bacteria to humans and has been linked to the geomagnetic sense in many of them.

By developing and demonstrating a robust methodology for testing humans for magnetoreception, Kirschvink says he hopes this study can act as a roadmap for other researchers who are interested in attempting to replicate and extend this research. "Given the known presence of highly evolved geomagnetic navigation systems in species across the animal kingdom, it is perhaps not surprising that we might retain at least some functioning neural components, especially given the nomadic hunter-gatherer lifestyle of our not-too-distant ancestors. The full extent of this inheritance remains to be discovered," he says.

The paper is titled "Transduction of the Geomagnetic Field as Evidenced from Alpha-band Activity in the Human Brain." In addition to Kirschvink, Shimojo, and Wang, co-authors include Ayumu Matani of the University of Tokyo, Caltech staff members Daw-An Wu (PhD '06) and Isaac Hilburn (BS '04), former Caltech undergraduates Christopher Cousté (BS '17) and Jacob Abrahams (BS '17), former University of Tokyo graduate student Yuki Mizuhara, and Princeton University student Sam Bernstein. This research was supported initially by the Human Frontiers Science Program and more recently by the RadioBio program of the Defense Advanced Research Projects Agency (DARPA) to the Caltech group, by the Japanese Science and Technology Agency (CREST) to Wang and Shimojo, and by the Japan Society for the Promotion of Science to the University of Tokyo group.

On any typical day, Millikan Pond is not an exciting place to be. Tuesday, March 12, was not a typical day.

Instead of a serene reflecting pool occasionally populated by a wayward duck, the pond transformed into an aquatic arena where amphibious robots duked it out in Caltech's annual ME72 design competition. The competition serves as the final exam for the ME72 Engineering Design Laboratory course, which is taught by Michael Mello (PhD '12), a lecturer and research scientist in the mechanical and civil engineering department and the Division of Engineering and Applied Science.

The event challenged four student teams to build three robots each. The robots had to be capable of traversing both land and water and collecting floating balls. Most of all, they needed to be durable, because although the competition does not ask the robots to battle each other, it can get rough on the field of play.

So, for the past several months, students in the class toiled away in the Jim Hall Design and Prototyping Lab, designing, machining, testing, and tweaking until they had three of the most pond-worthy crafts they could devise.

When the big day finally arrived, the four teams (Pirates of the Millikean, Finding Waboba, Undescided, and Mistletein) gathered around the pond in front of a crowd of onlookers and tested their robots' mettle.

Round by round, the bots splashed and crashed around the pond, greedily grabbing colorful floating balls and carrying them to the aquatic equivalents of football's end zones. If they were agile enough to deliver the balls into floating goals, their teams earned even more points.

Sometimes, like children getting banged up while roughhousing in the pool, the robots needed an adult to step in.

And there were repairs.

Lots …

…and lots …

…of repairs.

In the end, team Mistletein, whose bots Horse and Liquor made abundant and effective use of chicken wire, emerged as the victors.

But truly, on this day, everyone was a winner.

On Friday afternoons, Caltech computer science students visit public schools in Pasadena to help third-, fourth-, and fifth-graders learn to code. Their work is part of a recently introduced course in which Caltech undergrads study and practice strategies for teaching programming to children.

“It reminds our students why they were first inspired about computer science,” says Claire Ralph, lecturer and outreach director for Caltech’s computing and mathematical sciences department, part of the Division of Engineering and Applied Science. “It’s an opportunity to give back, another way to have an impact on the field.”

As part of the course, which was created in collaboration with Caltech’s Center for Teaching, Learning, and Outreach, students meet weekly to discuss, develop, and eventually deliver lesson plans.

“We start with basic concepts and, by the end, students have coded their own games in Scratch [a visual programming language developed for children],” says Caltech senior Anna Resnick, who helps lead the class as a teaching assistant. “A few have even told us they want to be programmers someday.”

Each year, members of the Caltech community serve thousands of Pasadena-area students through workshops, tutoring programs, science fairs, and public events. The coding initiative started about five years ago when a Pasadena Unified School District teacher requested Caltech’s help with computer science instruction, says Mitch Aiken, the Institute’s associate director for educational outreach. Around the same time, a group of first-year students at Caltech expressed interest in teaching coding.

An initial pilot program, in which student volunteers visited schools to deliver programming lessons, proved promising, Aiken recalls. But organizers determined that more students would be able to consistently commit time to the project if it were part of a formal class rather than a volunteer effort.

Now, through an undergraduate computer science course introduced last spring, Caltech students teach coding to schoolchildren using district-provided Chromebooks. The children’s lessons are conducted by the Caltech students over about a seven-month period and come at no cost to the schools, which enroll predominantly underserved families.

“I’ve always loved teaching, helping people understand things,” Caltech senior Steven Brotz says. “The kids are all familiar with computer games. We have the chance to help them understand how those games get created.”

For participants—undergrads and elementary schoolers alike—the experience can also make computer science seem a little more accessible, Ralph says.

“For Caltech students, it’s a good reminder of how far they’ve come,” she says. “It can be easy to underestimate how much you’ve learned and how much you know. You have to really understand something well to be able to explain it to a fifth-grader.”

On a recent afternoon, Alix Espino, a Caltech senior, introduced the week’s lesson to third-graders at Jefferson School.

“This week, we’re going to work with something called a variable,” she told them.

After the lesson, Espino said she hopes the time she spends with younger students encourages them to consider careers in computer science.

“I felt like it was important for me to get involved because there are not a lot of Latinos in tech, and this school is predominantly Latino,” Espino said. “I thought I could be a good role model.”

Totem magazine recently concluded its “Art of Science” contest celebrating the aesthetics of scientific research and the way in which science and art inform each other.

The winners of the recent contest were (from first to third place): “Turing Model for Animal Patterns” by undergraduate Michelle Dan; “Butterfly Wings in a Penrose Tiling” by undergraduate George Stathopoulos; and “Building Nano Epcot” by graduate students Carlos Portela and Bryce Edwards.

Fireflies, heart cells, clocks, and power grids all do it—they can spontaneously sync up, sending signals out in unison. For centuries, scientists have been perplexed by this self-organizing behavior, coming up with theories and experiments that make up the science of sync. But despite progress being made in the field, mysteries still persist—in particular how networks of completely identical elements can fall out of sync.

Now, in a new study in the March 8 issue of the journal Science, Caltech researchers have shown experimentally how a simple network of identical synchronized nanomachines can give rise to out-of-sync, complex states. Imagine a line of Rockette dancers: When they are all kicking at the same time, they are in sync. One of the complex states observed to arise from the simple network would be akin to the Rockette dancers kicking their legs "out of phase" with each other, where every other dancer is kicking a leg up, while the dancers in between have just finished a kick.

The findings experimentally demonstrate that even simple networks can lead to complexity, and this knowledge, in turn, may ultimately lead to new tools for controlling those networks. For example, by better understanding how heart cells or power grids display complexity in seemingly uniform networks, researchers may be able to develop new tools for pushing those networks back into rhythm.

"We want to learn how we can just tickle, or gently push, a system in the right direction to set it back into a synced state," says Michael L. Roukes, the Frank J. Roshek Professor of Physics, Applied Physics, and Bioengineering at Caltech, and principal investigator of the new Science study. "This could perhaps engender a form of new, less harsh defibrillators, for example, for shocking the heart back into rhythm."

Synchronized oscillations were first noted as far back as the 1600s, when the Dutch scientist Christiaan Huygens, known for discovering the Saturnian moon Titan, noted that two pendulum clocks hung from a common support would eventually come to tick in unison. Through the centuries, mathematicians and other scientists have come up with various ways to explain the strange phenomenon, seen also in heart and brain cells, fireflies, clouds of cold atoms, the circadian rhythms of animals, and many other systems.

In essence, these networks consist of two or more oscillators (the nodes of the network), which have the ability to tick on their own, sending out repeated signals. The nodes must also be connected in some way to each other (through the network edges), so that they can communicate and send out messages about their various states.

But it has also been observed since the early 2000s that these networks, even when consisting of identical oscillators, can spontaneously flip out of sync and evolve into complex patterns. To better understand what is going on, Roukes and colleagues began to develop networks of oscillating nanomechanical devices. They started by just connecting two, and now, in the new study, have developed an interconnected system of eight.
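A textbook stand-in for such a network is the Kuramoto model of coupled phase oscillators. The sketch below is an illustration only (the paper's nanoelectromechanical equations differ): eight identical Kuramoto oscillators on a nearest-neighbor ring settle either into full sync or into a "twisted" wave in which each node leads its neighbor by a fixed phase, a minimal cousin of the out-of-sync states described above.

```python
import math, random

N, K, DT, STEPS = 8, 1.0, 0.01, 20000  # eight nodes, coupling strength K

def step(phases):
    """One Euler step: each node is pulled toward its two ring neighbors."""
    return [p + DT * K * sum(math.sin(phases[j] - p)
                             for j in ((i - 1) % N, (i + 1) % N))
            for i, p in enumerate(phases)]

def neighbor_gaps(phases):
    """Phase difference to the next node on the ring, wrapped to (-pi, pi]."""
    return [math.atan2(math.sin(phases[(i + 1) % N] - phases[i]),
                       math.cos(phases[(i + 1) % N] - phases[i]))
            for i in range(N)]

rng = random.Random(1)
phases = [rng.uniform(0, 2 * math.pi) for _ in range(N)]
for _ in range(STEPS):
    phases = step(phases)

gaps = neighbor_gaps(phases)
print([round(g, 3) for g in gaps])  # uniform gaps: sync (0) or a twisted wave
```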

To the team's surprise, the eight-node system spontaneously evolved into various exotic, complex states. "This is the first experimental demonstration that these many distinct, complex states can occur in the same simple system," says co-author James Crutchfield, a visiting associate in physics at Caltech and a professor of physics at UC Davis.

To return to the Rockettes metaphor, another example of one of these complex states would be if every other dancer were kicking a leg up, while the dancers in between were doing something entirely different, like waving their hats. And the examples get even more nuanced than this, with pairs of dancers doing the same movements in between pairs of other dancers doing something different.

"The perplexing feature of these particular states is that the Rockettes in our metaphor can only see their nearest neighbor, yet manage to be coordinating with their neighbor's neighbor," says lead author Matthew Matheny, a research scientist at Caltech and member of the Kavli Nanoscience Institute.

"We didn't know what we were going to see," says Matheny. "But what these experiments are telling us is that you can get complexity out of a very simple system. This was something that was hinted at before but not shown experimentally until now."

"These exotic states arising from a simple system are what we call emergent," says Roukes. "The whole is greater than the sum of the parts."

The researchers hope to continue to build increasingly complex networks and observe what happens when more than eight nodes are connected. They say that the more they can understand about how the networks evolve over time, the more they can precisely control them in useful ways. And eventually they may even be able to apply what they are learning to model and better understand the human brain—one of the most complex networks that we know of, with not just eight nodes but 200 billion neurons connected to each other typically by thousands of synaptic edges.

"Decades after the first theories of the science of sync, and we are just finally beginning to understand what's going on," says Roukes. "It's going to take quite a while before we understand the unbelievably complex network of our brain."

The new Science study, titled, "Exotic States in a Simple Network of Nanoelectromechanical Oscillators," was funded by the U.S. Army and the Intel Corporation. Other Caltech authors include Warren Fon, a research engineer; Jarvis Li, a graduate student; and M.C. Cross, a professor of theoretical physics, emeritus.

Kavya Sreedhar, a senior double majoring in electrical engineering and business, economics, and management, has been named to this year's class of Knight-Hennessy Scholars, a graduate-level scholarship program founded by Stanford University.

The program provides full tuition, room and board, and a living stipend to its scholars to study in any Stanford graduate school. It also provides leadership training and will bring the scholars into contact with national and world leaders.

Sreedhar plans to pursue a PhD in electrical engineering, focused on circuits and hardware research for machine learning and artificial intelligence applications. She is joined by 67 other students chosen from a pool of 4,424 applicants for the program's 2019 cohort.

The application process included spending a weekend at Stanford, where Sreedhar and 150 other finalists interacted with Stanford faculty and leaders; participated in group leadership development activities and social events; and had individual interviews with program administrators, faculty, and the 2018 cohort.

"I am really excited about the offer and the community of Knight-Hennessy scholars," Sreedhar says. "I had the opportunity to meet some really amazing people from around the world working toward all types of graduate degrees and passionate about various causes. I am really looking forward to staying in touch with them."

As a student at Caltech, she conducted research with Richard Abbott, lead electronics engineer for LIGO, and enrolled in Physics 11, a freshman seminar research class that only admits four to eight students each year. Admission to the course is contingent on completing two challenging problems devised for each year's applicants. She also chaired the Academics and Research Committee for the Associated Students of Caltech and was president of the Caltech Y.

Sreedhar has also interned with Microsoft's automated machine learning team, Intel's wearables technology group, Digimarc's intellectual property team, and Hard Valuable Fun, a tech incubator. In addition, she has a second-degree black belt in taekwondo and plays the piano.

"Kavya Sreedhar, as a member of the second class of Knight-Hennessy Scholars and graduate student at Stanford, will bring to this experience her commitment to her community, excellence in academics, and her desire to find innovative solutions to significant problems in technology," says Lauren Stolper, director of Fellowships Advising, Study Abroad and the Career Development Center.

For more information about the Knight-Hennessy Scholars program, visit its website.

Caltech's Edward M. Stolper has been selected to receive the Geological Society of London's highest award, the Wollaston Medal, this year.

Named for William Hyde Wollaston, who discovered the elements palladium and rhodium, the Wollaston Medal is presented to earth scientists whose research has had a substantial impact on pure or applied aspects of geology. Stolper, the William E. Leonhard Professor of Geology, was recognized for his achievements in geology both on and off the earth.

"A leading igneous petrologist, Professor Stolper has studied processes on Earth, Mars, and asteroids. He was first to propose that the SNC meteorites originated from Mars, and in a more than 40-year research career has published key papers on the petrogenesis of meteorites, terrestrial basalts, island arc volcanics, and Martian rocks," notes the Society's announcement.

Stolper has studied igneous rocks from the Earth, from the Moon and Mars, and from meteorites that represent the earliest history of the solar system. He has also contributed to understanding the processes by which planetary interiors melt; the roles of water, carbon dioxide, and sulfur in igneous processes; and the oxidation states of magmas and the Earth's interior. Currently, his lab is studying a range of questions related to the petrogenesis of igneous rocks on the Earth and other planets.

Among other honors, he is the recipient of the V. M. Goldschmidt Award from the Geochemical Society, the highest award of the international geochemical community; the Arthur L. Day Medal from the Geological Society of America, awarded for distinction in the application of physics and chemistry to the solution of geologic problems; the Arthur Holmes Medal from the European Geosciences Union, awarded for scientific achievements in terrestrial (or extraterrestrial) materials sciences; and the Roebling Medal, the highest award from the Mineralogical Society of America for outstanding original research in mineralogy. He holds honorary degrees from the University of Edinburgh (2008), the Hebrew University of Jerusalem (2012), and the University of Bristol (2018), and he was named an honorary alumnus of Caltech in 2004. Both the mineral stolperite and the asteroid 7551 Edstolper are named after him.

The Wollaston Medal was first awarded in 1831 to William Smith, in honor of his first-of-its-kind 1815 geological map of England, Wales, and part of Scotland. Among the many others who have received the award in the almost two centuries over which it has been given are Charles Darwin, who received it in 1859, and three other Caltech scientists—Gerald Wasserburg, Peter Wyllie, and Samuel Epstein. Stolper will be presented with the medal on June 6 at Burlington House in London.

Geophysicists at Caltech have created a new method for determining earthquake hazards by measuring how fast energy is building up on faults in a specific region, and then comparing that to how much is being released through fault creep and earthquakes.

They applied the new method to the faults underneath central Los Angeles and found that, over the long term, the strongest earthquake likely to occur along those faults is between magnitude 6.8 and 7.1, and that a magnitude 6.8—about 50 percent stronger than the 1994 Northridge earthquake—could occur on average roughly every 300 years.
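A tenth of a magnitude unit is a bigger jump than it looks, because seismic moment grows by a factor of 10^1.5 per unit of moment magnitude. A minimal sketch of that arithmetic (the Mw 6.7 value for Northridge is the commonly cited figure, not a number from this article):

```python
import math

def moment_ratio(m1: float, m2: float) -> float:
    """Ratio of seismic moments for two moment magnitudes.

    Seismic moment scales as 10**(1.5 * Mw), so a small step in
    magnitude is a large step in released energy.
    """
    return 10 ** (1.5 * (m2 - m1))

# Northridge 1994 is usually listed as Mw 6.7; the study's lower
# scenario is Mw 6.8.
ratio = moment_ratio(6.7, 6.8)
print(f"Mw 6.8 releases {ratio:.2f}x the moment of Mw 6.7")
# 10**0.15 is roughly 1.4, i.e. on the order of the article's
# "about 50 percent stronger"
```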

That is not to say that a larger earthquake beneath central L.A. is impossible, the researchers say; rather, they find that the crust beneath Los Angeles does not appear to be squeezed from south to north fast enough to make such an earthquake quite as likely.

The method also allows for an assessment of the likelihood of smaller earthquakes. Excluding aftershocks, the probability that a magnitude 6.0 or greater earthquake will occur in central L.A. over any given 10-year period is about 9 percent, while the chance of a magnitude 6.5 or greater earthquake is about 2 percent.
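Recurrence intervals and fixed-window probabilities like these can be converted back and forth if earthquakes are treated as a Poisson (time-independent) process. That assumption is a simplification on my part, not something the article states, but it shows how the two kinds of numbers relate:

```python
import math

def prob_in_window(recurrence_years: float, window_years: float) -> float:
    """Probability of at least one event in a window, assuming a
    Poisson (time-independent) process with the given mean
    recurrence interval."""
    return 1.0 - math.exp(-window_years / recurrence_years)

def implied_recurrence(prob: float, window_years: float) -> float:
    """Invert the Poisson formula: mean recurrence interval implied
    by an exceedance probability over a window."""
    return -window_years / math.log(1.0 - prob)

# A ~300-year average recurrence (the article's Mw >= 6.8 scenario)
# corresponds to a modest 10-year probability:
print(f"{prob_in_window(300, 10):.1%}")

# A 9% chance of Mw >= 6.0 in any 10-year window implies a mean
# recurrence interval of roughly a century:
print(f"{implied_recurrence(0.09, 10):.0f} years")
```

Under this model the 10-year chance of the largest scenario comes out near 3 percent, which is why even a "300-year earthquake" is a real consideration for hazard planning over a building's lifetime.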

A paper describing these findings was published by Geophysical Research Letters on February 27.

These levels of seismic hazard are somewhat lower but do not differ significantly from what has already been predicted by the Working Group on California Earthquake Probabilities. But that is actually the point, the Caltech scientists say.

Current state-of-the-art methods for assessing the seismic hazard of an area involve generating a detailed assessment of the kinds of earthquake ruptures that can be expected along each fault, a complicated process that relies on supercomputers to generate a final model. By contrast, the new method—developed by Caltech graduate student Chris Rollins and Jean-Philippe Avouac, Earle C. Anthony Professor of Geology and Mechanical and Civil Engineering—is much simpler, relying on the strain budget and the overall earthquake statistics in a region.

"We basically ask, 'Given that central L.A. is being squeezed from north to south at a few millimeters per year, what can we say about how often earthquakes of various magnitudes might occur in the area, and how large earthquakes might get?'" Rollins says.

When one tectonic plate pushes against another, elastic strain builds up along the boundary between the two plates. The strain increases until one plate either creeps slowly past the other or jerks violently past it. The violent jerks are felt as earthquakes.

Fortunately, the gradual bending of the crust between earthquakes can be measured at the surface by studying how the earth's surface deforms. In a previous study (done in collaboration with Caltech research software engineer Walter Landry; Don Argus of the Jet Propulsion Laboratory, which is managed by Caltech for NASA; and Sylvain Barbot of USC), Avouac and Rollins measured ground displacement using permanent global positioning system (GPS) stations that are part of the Plate Boundary Observatory network, supported by the National Science Foundation (NSF) and NASA. The GPS measurements revealed how fast the land beneath L.A. is being bent. From that, the researchers calculated how much strain was being released by creep and how much was being stored as elastic strain available to drive earthquakes.
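The strain-budget idea can be illustrated with back-of-the-envelope arithmetic. On a locked fault, seismic moment accumulates at a rate of (shear modulus) × (fault area) × (slip-deficit rate). Every number below is an assumption chosen for the sketch, not a value from the study:

```python
import math

# Illustrative strain-budget arithmetic. All inputs are assumed
# round numbers, not values from the Rollins & Avouac study.
mu = 3.0e10        # shear modulus of crustal rock, Pa (typical value)
length_m = 40e3    # assumed fault length, m
depth_m = 15e3     # assumed locked depth, m
v = 3e-3           # assumed loading (slip-deficit) rate, m/yr

# Moment accumulation rate, N*m per year: mu * A * v
moment_rate = mu * (length_m * depth_m) * v

def moment_to_mw(m0: float) -> float:
    """Moment magnitude from seismic moment in N*m
    (standard Hanks-Kanamori relation)."""
    return (math.log10(m0) - 9.1) / 1.5

# If all moment accumulated over ~300 years were released in a
# single earthquake, its magnitude would be:
mw = moment_to_mw(moment_rate * 300)
print(f"Mw ≈ {mw:.1f}")
```

With these toy inputs the single-event magnitude lands in the high-6 range, the same ballpark as the study's scenarios; the real analysis, of course, measures the loading rate from GPS rather than assuming it.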

The new study assesses whether that earthquake strain is most likely to be released by frequent small earthquakes or by one very large one, or something in between. Avouac and Rollins examined the historical record of earthquakes in Los Angeles from 1932 to 2017, as recorded by the Southern California Seismic Network, and selected the scenario that best fit the region's observed behavior.
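Fitting a region's observed earthquake statistics typically means fitting a Gutenberg-Richter magnitude-frequency relation, log10 N(≥M) = a − b·M, to the catalog. The study's actual model selection is more involved, but a toy sketch of the b-value estimate (Aki's maximum-likelihood formula) such fits rest on, using a synthetic catalog:

```python
import math
import random

def b_value(mags, m_min):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= m_min
    in a Gutenberg-Richter relation log10 N(>=M) = a - b*M."""
    above = [m for m in mags if m >= m_min]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - m_min)

# Synthetic catalog drawn from a G-R law with b = 1.0: the exceedance
# probability P(M >= m) = 10**(-b*(m - m_min)) makes magnitudes above
# m_min exponentially distributed with rate b*ln(10).
random.seed(0)
catalog = [3.0 + random.expovariate(1.0 * math.log(10)) for _ in range(20000)]

print(f"b ≈ {b_value(catalog, 3.0):.2f}")   # recovers roughly 1.0
```

A b-value near 1 (each unit drop in magnitude means about ten times as many earthquakes) is typical of tectonic regions, which is what makes the historical catalog informative about how often large, rare events should occur.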

"Estimating the magnitude and frequency of the most extreme events, which can't be assumed to be known from history or instrumental observations, is very hard. Our method provides a framework to solve that problem and calculate earthquake probabilities," says Avouac.

This new method of estimating earthquake likelihood can be easily applied to other areas, offering a way to assess seismic hazards based on physical principles. "We are now refining the method to take into account the time distribution of past earthquakes, to make the forecasts more accurate, and we are adapting the framework so that it can apply to induced seismicity," Avouac says.

The study is titled "A geodesy- and seismicity-based local earthquake likelihood model for central Los Angeles." This research was supported by a NASA Earth and Space Science Fellowship.