California Institute of Technology

A Technology Powerhouse

Owner: Brian

School members: 2

Description:

The California Institute of Technology (Caltech) is a private research university in Pasadena, California. Caltech has six academic divisions with a strong emphasis on science and engineering. Its 124-acre primary campus is located approximately 11 miles northeast of downtown Los Angeles.

Founded as a preparatory and vocational school by Amos G. Throop in 1891, the college attracted leading early 20th-century scientists such as Robert Andrews Millikan and George Ellery Hale. Despite its relatively small size, 31 Caltech alumni and faculty have won the Nobel Prize and 66 have won the United States National Medal of Science or Technology. There are 110 faculty members who have been elected to the National Academies. Caltech managed $333 million in sponsored research in 2011 and held an endowment of $1.76 billion in 2012.

Caltech was ranked first in the 2012–2013 Times Higher Education World University Rankings for the second year running, as well as ranking first in Engineering & Technology and Physical Sciences.

Honors: A Technology Powerhouse

Caltech News

Telescopes around the globe and in space raced to witness a dramatic stellar explosion
News Writer: Whitney Clavin
A collage of observatories used in the Caltech studies of Cow: ALMA, NuSTAR, and the Submillimeter Array. Credit: ALMA (ESO/NAOJ/NRAO); NASA/JPL-Caltech; SAO (Glen Petitpas)

On June 16, 2018, a brilliant stellar explosion unlike any seen before went off in the skies, quickly capturing the attention of astronomers around the globe. First spotted by the ATLAS survey in Hawaii, the event was dubbed AT2018cow through a randomized naming system, and soon earned the nickname "Cow." Just three days after exploding, Cow had become 10 times brighter than a typical supernova—a powerful explosion that heralds the death of a massive star.

"We still don't know what this is, although it is one of the most intensely studied cosmic events in history," says Anna Ho (MS '17), a graduate student at Caltech and lead author of a new study about the event. Cow was likely a supernova, she says, although some scientists have proposed that it instead may have been caused by a black hole ripping apart a type of star called a white dwarf. 

In the hours, days, and weeks after the event, telescopes on the ground and in space set their sights on Cow, witnessing a dramatic increase in brightness across the electromagnetic spectrum, from high-energy X-rays to low-energy radio waves. Ho and her colleagues observed millimeter-wave light, which is slightly higher in energy than radio waves. 

"We've never seen a supernova this bright in millimeter waves," she says. "We were shocked." 

Ho presented these results on January 10 at the 233rd meeting of the American Astronomical Society in Seattle. She and her team began observing Cow five days after it exploded using the Submillimeter Array in Hawaii, and soon after using the National Science Foundation-funded Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. They observed the event, which has since declined in brightness in millimeter waves, on and off for a total of 80 days.

Ho says that the millimeter-wave data reveal that a shock wave is traveling outward from the explosion at one-tenth the speed of light. "The millimeter-wave data tell us about the early evolution of these fast-paced events, and about their impact on the environment," says Ho. By combining the millimeter-wave data with publicly available X-ray data, the team was also able to conclude that Cow is likely "engine-driven," and that a central object formed from a supernova—such as a black hole or dense dead star called a magnetar—was behind the flurry of activity.

In the future, Ho says that astronomers should be able to observe more real-time cosmic events like this in millimeter wavelengths, thanks to state-of-the-art surveys like ATLAS and Caltech's Zwicky Transient Facility (ZTF) at Palomar Observatory, which catch these events more quickly than before. "You have to act fast to catch the millimeter waves, but when you do, you are given a new window into what is happening in these brilliant explosions."

Caltech research scientist Brian Grefenstette is co-author of another recent study on Cow, led by Raffaella Margutti of the Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) at Northwestern University. The team used, among other telescopes, NASA's Nuclear Spectroscopic Telescope Array, or NuSTAR, to observe high-energy X-rays emitted in the explosion. NuSTAR captured an unusual "bump" in the high-energy X-ray spectrum, which also suggests an engine-driven explosion generated by a black hole or magnetar formed in a supernova.

"Thanks to the agility of the operations team, NuSTAR got on target about a week after Cow was discovered," says Grefenstette, who is a NuSTAR instrument scientist. "Astronomers think that this may be the first live view of a newborn compact object, such as a black hole, glowing brightly in X-rays. Because it is embedded in the ejecta of the supernova explosion, the engine would normally be hidden from view. The high-energy X-ray observations with NuSTAR are essential to piecing together the puzzle of what happened in this dramatic event."

The millimeter-wave study led by Ho, titled "AT2018cow: a luminous millimeter transient," has been accepted for publication in The Astrophysical Journal. Ho is funded by the Global Relay of Observatories Watching Transients Happen (GROWTH) program. Other Caltech authors include Professor of Theoretical Astrophysics Sterl Phinney; Visiting Associate in Physics Vikram Ravi; the George Ellery Hale Professor of Astronomy and Planetary Science Shri Kulkarni; graduate student Nikita Kamraj; and Assistant Professor of Astronomy Mansi Kasliwal.

The study led by Margutti with co-author Grefenstette, titled "An embedded X-ray Source Shines Through the Aspherical AT2018cow: Revealing the Inner Workings of the Most Luminous Fast-Evolving Optical Transients," has been accepted for publication in The Astrophysical Journal.

NuSTAR is led by Caltech's Fiona Harrison, the Benjamin M. Rosen Professor of Physics, and managed by the Jet Propulsion Laboratory for NASA's Science Mission Directorate in Washington. Caltech manages JPL for NASA.

Related Links: GROWTH feature story on Cow supernova
Wed, 09 Jan 2019
New research looks at possible links between magnetars and extragalactic radio bursts
News Writer: Whitney Clavin
Illustration of a magnetar—a rotating neutron star with incredibly powerful magnetic fields. Credit: NASA/CXC/M.Weiss

In a new Caltech-led study, researchers from campus and the Jet Propulsion Laboratory (JPL) have analyzed pulses of radio waves coming from a magnetar—a rotating, dense, dead star with a strong magnetic field—that is located near the supermassive black hole at the heart of the Milky Way galaxy. The new research provides clues that magnetars like this one, lying in close proximity to a black hole, could perhaps be linked to the source of "fast radio bursts," or FRBs. FRBs are high-energy blasts that originate beyond our galaxy but whose exact nature is unknown.

"Our observations show that a radio magnetar can emit pulses with many of the same characteristics as those seen in some FRBs," says Caltech graduate student Aaron Pearlman, who presented the results today at the 233rd meeting of the American Astronomical Society in Seattle. "Other astronomers have also proposed that magnetars near black holes could be behind FRBs, but more research is needed to confirm these suspicions."

The research team was led by Walid Majid, a visiting associate at Caltech and principal research scientist at JPL, which is managed by Caltech for NASA, and Tom Prince, the Ira S. Bowen Professor of Physics at Caltech. The team looked at the magnetar named PSR J1745-2900, located in the Milky Way's galactic center, using the largest of NASA's Deep Space Network radio dishes in Australia. PSR J1745-2900 was initially spotted by NASA's Swift X-ray telescope, and later determined to be a magnetar by NASA's Nuclear Spectroscopic Telescope Array (NuSTAR), in 2013. 

"PSR J1745-2900 is an amazing object. It's a fascinating magnetar, but it also has been used as a probe of the conditions near the Milky Way's supermassive black hole," says Fiona Harrison, the Benjamin M. Rosen Professor of Physics at Caltech and the principal investigator of NuSTAR. "It's interesting that there could be a connection between PSR J1745-2900 and the enigmatic FRBs."

Magnetars are a rare subtype of a group of objects called pulsars; pulsars, in turn, belong to a class of rotating dead stars known as neutron stars. Magnetars are thought to be young pulsars that spin more slowly than ordinary pulsars and have much stronger magnetic fields, which suggests that perhaps all pulsars go through a magnetar-like phase in their lifetime.

The magnetar PSR J1745-2900 is the closest-known pulsar to the supermassive black hole at the center of the galaxy, separated by a distance of only 0.3 light-years, and it is the only pulsar known to be gravitationally bound to the black hole and the environment around it. 

In addition to discovering similarities between the galactic-center magnetar and FRBs, the researchers also gleaned new details about the magnetar's radio pulses. Using one of the Deep Space Network's largest radio antennas, the scientists were able to analyze individual pulses emitted by the star every time it rotated, a feat that is very rare in radio studies of pulsars. They found that some pulses were stretched, or broadened, by a larger amount than predicted when compared to previous measurements of the magnetar's average pulse behavior. Moreover, this behavior varied from pulse to pulse.

"We are seeing these changes in the individual components of each pulse on a very fast time scale. This behavior is very unusual for a magnetar," says Pearlman. The radio components, he notes, are separated by only 30 milliseconds on average.

One theory to explain the signal variability involves clumps of plasma moving at high speeds near the magnetar. Other scientists had proposed that such clumps might exist; in the new study, the researchers suggest that the movement of these clumps may be one cause of the observed signal variability. Another theory proposes that the variability is intrinsic to the magnetar itself.

"Understanding this signal variability will help in future studies of both magnetars and pulsars at the center of our galaxy," says Pearlman.

In the future, Pearlman and his colleagues hope to use the Deep Space Network radio dish to solve another outstanding pulsar mystery: Why are there so few pulsars near the galactic center? Their goal is to find a non-magnetar pulsar near the galactic-center black hole.

"Finding a stable pulsar in a close, gravitationally bound orbit with the supermassive black hole at the galactic center could prove to be the Holy Grail for testing theories of gravity," says Pearlman. "If we find one, we can do all sorts of new, unprecedented tests of Albert Einstein's general theory of relativity." 

The new study, titled, "Pulse Morphology of the Galactic Center Magnetar PSR J1745-2900," appeared in the October 20, 2018, issue of The Astrophysical Journal and was funded by a Research and Technology Development grant through a contract with NASA; JPL and Caltech's President's and Director's Fund; the Department of Defense; and the National Science Foundation. Other authors include Jonathon Kocz of Caltech and Shinji Horiuchi of the CSIRO (Commonwealth Scientific and Industrial Research Organization) Astronomy & Space Science, Canberra Deep Space Communication Complex.

Tue, 08 Jan 2019
Omer Tamuz. Credit: Caltech

Random walks—trajectories formed by successions of random steps—have been studied for more than a hundred years as important models in physics, computer science, finance, and economics, and as interesting mathematical objects in their own right. Still, many simple questions remain unanswered, and are the subject of current research. During his January 16 Watson Lecture, Omer Tamuz, this year's Biedebach Memorial Lecturer, will describe some classical results, introduce random walks on groups and graphs, present a few open questions regarding their long-run behavior, and talk about the solution to a longstanding problem as well as a surprising connection to economics. 

Tamuz is an assistant professor of economics and mathematics at Caltech whose research focuses on microeconomic theory, including game theory, learning and information, as well as probability, ergodic theory, and group theory. He also studies machine learning and statistics. Tamuz received his bachelor's degree in computer science and physics from Tel Aviv University in 2006 and his PhD in mathematics from the Weizmann Institute of Science in 2013. From 2013 until 2015, Tamuz was a Schramm Postdoctoral Fellow at the MIT math department / Microsoft Research.

The lecture—which will be held at 8 p.m. on Wednesday, January 16, in Beckman Auditorium—is a free event; no tickets or reservations are required.

Named for the late Caltech professor Earnest C. Watson, who founded the series in 1922, the Watson Lectures present Caltech and JPL researchers describing their work to the public. Many past Watson Lectures are available online at Caltech's YouTube site.

Mon, 07 Jan 2019
Harold Brown, 1927–2019

Harold Brown, Caltech president emeritus, life member of the Caltech Board of Trustees, and former secretary of the U.S. Department of Defense (DOD), passed away on January 4, 2019. He was 91 years old.

Brown was president of Caltech from 1969 until 1977. During his presidency, he made significant changes to the undergraduate curriculum, establishing programs in independent studies and in applied physics, and turning the environmental engineering program into a degree option. He also developed a campus master plan, purchasing surrounding lots to make space for new buildings and creating an identifiable character for the Institute. Most significant, perhaps, were his efforts to open Caltech to female undergraduates. In 1970, the Board of Trustees voted to admit women. However, the board made the admission of women conditional upon the building of new student houses, which they knew would take at least two years. Not wanting to wait, Brown pressed for an arrangement that would set aside corridors for women in existing houses. Because of his persistence, Caltech began admitting women in the fall of that year.

"Harold Brown's accomplishments in any one arena were significant enough for a gratifying life's work," says David L. Lee (PhD '74), chair of the Caltech Board of Trustees. "Taken together, the different chapters of Harold's life reflect an individual with absolutely selfless dedication to others—whether students, colleagues across various industries, or citizens of our nation. We are grateful for Harold's service at critical times to our Institute and in our nation's history."

In an interview with Engineering & Science magazine during his tenure, Brown noted that he was surprised by how quickly he became "extremely proud" of Caltech: "It's a very infectious spirit, and as you get to see what's going on, you see the quality of the research in science and technology, its variety, and the really outstanding nature of the people. You inevitably become very proud of the place, very protective of it, very loyal to it."

"Harold's life was a brilliant example of service to colleagues and country, informed by a restless intellect and a broad-minded commitment to talent," says Caltech president Thomas Rosenbaum, the Sonja and William Davidow Presidential Chair and professor of physics. "He memorably shaped the Caltech of today."

On January 20, 1977, Brown was named secretary of defense by President Jimmy Carter and he was confirmed by the U.S. Senate the same day. He took the oath of office on January 21, becoming the first scientist to hold the position, in which he remained until January 1981.

As secretary, Brown helped to develop and presided over America's nuclear arsenal of intercontinental ballistic missiles, bombers, and nuclear submarines, and negotiated the Strategic Arms Limitation Talks II treaty between the U.S. and the Soviet Union.

After his tenure in the DOD, Brown became chairman of the Foreign Policy Institute of The Johns Hopkins University's Paul H. Nitze School of Advanced International Studies and was a distinguished visiting professor at the Nitze School until 1992. He was a retired partner in Warburg Pincus LLC, a director of Chemical Engineering Partners and of Philip Morris International, and a trustee emeritus of RAND and of the Trilateral Commission (North America). Brown served as counselor at the Center for Strategic and International Studies from 1992 until his passing.

Harold Brown was born on September 19, 1927, in New York, New York. He graduated from Columbia University with an AB degree in 1945, an AM in 1946, and a PhD in physics in 1949, at the age of 21. After working as a teacher and doing postdoctoral research for a short time, Brown joined the University of California Radiation Laboratory at Berkeley in 1950. When the lab's offshoot, the E.O. Lawrence Radiation Laboratory in Livermore, California, was founded in 1952, Brown became a staff member. He became the lab's director in 1960.

He served as the director of defense research and engineering in the DOD from 1961–65 and as secretary of the Air Force from 1965–69.

Among other advisory positions, Brown was a consultant to the Air Force Scientific Advisory Board from 1956–57 and a member of the board from 1958–61; a member of the Polaris Steering Committee from 1956–58; Senior Scientific Advisor to the U.S. Delegation to the Conference on Discontinuance of Nuclear Weapons Tests from November 1958 to February 1959; and a member of the Scientific Advisory Committee on Ballistic Missiles to the secretary of defense from 1958–61. Brown also was a consultant to several panels of the President's Science Advisory Committee from 1958 to 1960 and was appointed a member of the Committee in 1961.

Brown was reelected to the Caltech Board of Trustees in 1985 and was made a life member in 2010. During his tenure on the board, Brown served as a member of the Executive Committee, the Audit and Compliance Committee, the Buildings and Grounds Committee, the Business and Finance Committee, the Investment Committee, and the Nominating Committee. At the time of his death, he was a consulting participant of the Technology Transfer Committee and the Jet Propulsion Laboratory Committee.

Among his many honors, Brown was the recipient of the Presidential Medal of Freedom (1981) and the Fermi Award (1993), and was a member of the National Academy of Sciences and of the National Academy of Engineering.

Colene Brown, his wife of more than six decades, passed away last year. He is survived by daughters Deborah and Ellen and other family.

Sat, 05 Jan 2019
Caltech researchers discuss AI and the future of seismology
News Writer: Elise Cutts
A snapshot of seismic data taken at a single station during the peak of an aftershock sequence. Credit: Zachary Ross/Caltech

Understanding earthquakes is a challenging problem—not only because they are potentially dangerous but also because they are complicated phenomena that are difficult to study. Interpreting the massive, often convoluted data sets that are recorded by earthquake monitoring networks is a herculean task for seismologists, but the effort involved in producing accurate analyses could significantly improve the development of reliable earthquake early-warning systems. 

A promising new collaboration between Caltech seismologists and computer scientists using artificial intelligence (AI)—computer systems capable of learning and performing tasks that previously required humans—aims to improve the automated processes that identify earthquake waves and assess the strength, speed, and direction of shaking in real time. The collaboration includes researchers from the divisions of Geological and Planetary Sciences and Engineering and Applied Science, and is part of Caltech's AI4Science Initiative to apply AI to the big-data problems faced by scientists throughout the Institute. Powered by advanced hardware and machine-learning algorithms, modern AI has the potential to revolutionize seismological data tools and make all of us a little safer from earthquakes. 

Recently, Caltech's Yisong Yue, an assistant professor of computing and mathematical sciences, sat down with his collaborators, Research Professor of Geophysics Egill Hauksson, Postdoctoral Scholar in Geophysics Zachary Ross, and Associate Staff Seismologist Men-Andrin Meier, to discuss the new project and future of AI and earthquake science. 

What seismological problem inspired you to include AI in your research?

Meier: One of the things that I work on is earthquake early warning. Early warning requires us to try to detect earthquakes very rapidly and predict the shaking that they will produce later so that you can get a few seconds to maybe tens of seconds of warning before the shaking starts. 

Hauksson: It has to be done very quickly—that's the game. The earthquake waves will hit the closest monitoring station first, and if we can recognize them immediately, then we can send out an alert before the waves travel farther. 

Meier: You only have a few seconds of seismogram to decide whether it is an earthquake, which would mean sending out an alert, or if it is instead a nuisance signal—a truck driving by one of our seismometers or something like that. We have too many false classifications, too many false alerts, and people don't like that. This is a classic machine-learning problem: you have some data and you need to make a realistic and accurate classification. So, we reached out to Caltech's computing and mathematical science (CMS) department and started working on it with them.
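
As a rough sketch of the classification task Meier describes (illustrative only, not the Caltech/CMS team's actual system; the waveforms and features below are synthetic toys), a few seconds of signal can be reduced to simple features and handed to an off-the-shelf classifier:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs, seconds = 100, 3  # 100 Hz sampling, 3-second window

def toy_waveform(is_quake):
    """Generate a synthetic 3-second trace: emergent oscillation vs. short burst."""
    t = np.arange(fs * seconds) / fs
    noise = 0.5 * rng.standard_normal(t.size)
    if is_quake:
        # "Earthquake": decaying 5 Hz oscillation riding on noise
        return noise + 3.0 * np.exp(-t) * np.sin(2 * np.pi * 5 * t)
    # "Nuisance": a brief impulsive burst (e.g., a passing truck)
    return noise + 4.0 * (t < 0.3) * rng.standard_normal(t.size)

def features(x):
    """Hand-built features: peak amplitude, overall energy, dominant frequency."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return [np.max(np.abs(x)), np.std(x), freqs[np.argmax(spec)]]

labels = rng.integers(0, 2, 500)                       # 1 = earthquake, 0 = nuisance
X = np.array([features(toy_waveform(bool(y))) for y in labels])
clf = LogisticRegression().fit(X, labels)
print("training accuracy:", clf.score(X, labels))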

Why is AI a good tool for improving earthquake monitoring systems?

Yue: The reasons why AI can be a good tool have to do with scale and complexity coupled with an abundant amount of data. Earthquake monitoring systems generate massive data sets that need to be processed in order to provide useful information to scientists. AI can do that faster and more accurately than humans can, and even find patterns that would otherwise escape the human eye. Furthermore, the patterns we hope to extract are hard for rule-based systems to adequately capture, and so the advanced pattern-matching abilities of modern deep learning can offer performance superior to that of existing automated earthquake-monitoring algorithms.

Ross: In a big aftershock sequence, for example, you could have events that are spaced every 10 seconds, rapid fire, all day long. We use maybe 400 stations in Southern California to monitor earthquakes, and the waves caused by each different earthquake will hit them all at different times. 

Yue: When you have multiple earthquakes, and the sensors are all firing at different locations, you want to be able to unscramble which data belong to which earthquake. Cleaning up and analyzing the data takes time. But once you train a machine-learning algorithm—a computer program that learns by studying examples as opposed to through explicit programing—to do this, it could make an assessment really quickly. That's the value.

How else will AI help seismologists?

Yue: We are not just interested in the occasional very big earthquake that happens every few years or so. We are interested in the earthquakes of all sizes that happen every day. AI has the potential to identify small earthquakes that are currently indistinguishable from background noise.

Ross: On average we see about 50 or so earthquakes each day in Southern California, and we have a mandate from the U.S. Geological Survey to monitor each one. There are many more, but they're just too small for us to detect with existing technology. And the smaller they are, the more often they occur. What we are trying to do is monitor, locate, detect, and characterize each and every one of those events to build "earthquake catalogs." All of this analysis is starting to reveal the very intricate details of the physical processes that drive earthquakes. Those details were not really visible before.

Why hasn't anyone applied AI to seismology before?

Ross: Only in the last year or two has seismology started to seriously consider AI technology. Part of it has to do with the dramatic increase in computer processing power that we have seen just within the past decade.

What is the long-term goal of this collaboration?

Meier: Ultimately, we want to build an algorithm that mimics what human experts do. A human seismologist can feel an earthquake or see a seismogram and immediately tell a lot of things about that earthquake just from experience. That has been really difficult to teach to a computer. With artificial intelligence, we can get much closer to how a human expert would treat the problem. We are getting much closer to creating a "virtual seismologist."

Why do we need a "virtual seismologist"?

Yue: Fundamentally, both in seismology and beyond, the reason that you want to do this kind of thing is scale and complexity. If you can train an AI that learns, then you can take a specialized skill set and make it available to anyone. The other issue is complexity. You could have a human look at detailed seismic data for a long time and uncover small earthquakes. Or you could just have an algorithm learn to pick out the patterns that matter much faster.

Meier: The detailed information that we're gathering helps us figure out the physics of earthquakes—why they fizzle out along certain faults and trigger big quakes along others, and how often they occur. 

Will creating a "virtual seismologist" mean the end of human seismologists?

Ross: Having talked to a range of students, I can say with fairly high confidence that most of them don't want to do cataloguing work. [Laughs.] They would rather be doing more exciting work.

Yue: Imagine that you're a musician and before you can become a musician, first you have to build your own piano. So you spend five years building your piano, and then you become a musician. Now we have an automated way of building pianos—are we going to destroy musicians' jobs? No, we are actually empowering a new generation of musicians. We have other problems that they could be working on.

Thu, 03 Jan 2019
Through Base 11, high-achieving community college students work and study in labs at Caltech
News Writer: Robert Perkins
Base 11 students at Caltech

This summer, Maria Hernandez—a student at Santa Monica Community College—lived in Caltech student housing and spent her days in Beverley McKeon's lab, building an autonomous submersible robot from scratch.

This was the second summer in a row that Hernandez participated in a program through the nonprofit organization Base 11, which connects high-achieving, underrepresented students from community colleges throughout the country with top research institutions like Caltech. Base 11 describes its mission as tackling the lack of qualified workers in STEM (science, technology, engineering, and math) fields, which in turn is fueled by an underrepresentation of women and minorities. Based in California, Base 11 seeks to create a pipeline of science and engineering students from community colleges into the workforce.  

Hernandez, who lives with her parents in East Los Angeles, is a first-generation college student who leaves her house every weekday morning at 5 o'clock and spends all day on campus in Santa Monica, either in class or working as a tutor to help pay tuition, finally returning home around midnight to sleep. Through Base 11, she has had the opportunity to instead spend summers working at labs throughout Southern California, including with women engineers—like McKeon—who have given her the encouragement and guidance she has needed to chart her career path. 

"This program gave me the inspiration to become an engineer," says Hernandez, now in her fourth year of college. "Throughout high school, I was always good at math, but I never really knew what engineering was. The closest thing to an engineer in the community I grew up in was a mechanic."

Caltech has partnered with Base 11 to bring around a dozen students to campus each year to work with Caltech faculty members. Caltech's involvement in the program was spearheaded originally by Guruswami Ravichandran, John E. Goode, Jr., Professor of Aerospace and Mechanical Engineering and Otis Booth Leadership Chair of the Division of Engineering and Applied Science, working in concert with Caltech volunteer and donor Foster Stanback, who has close connections with Base 11.

"These are low-resource, high-potential students who are creating top-quality research products," says Ravichandran. 

Spending the summer at Caltech or another academic institution that Base 11 partners with has impacts beyond academics. "Most of these students are the first in their families to go to college, and so they have never even imagined themselves in a setting like Caltech," says Ingrid Ellerbe, executive director of Base 11. "To be made to feel at home there, to receive mentorship from people they can relate to, and to realize that they can meaningfully contribute to research is a truly life-changing experience that empowers them to believe they can achieve things they never thought possible."

Over the past two years, Beverley McKeon, Theodore von Kármán Professor of Aeronautics, and 2018 recipient of the Northrop Grumman Prize for Excellence in Teaching, has taken the lead in coordinating the relationship with Base 11. The students participate in one of two programs, one that runs for the full academic year, and one that runs during the summer. The academic-year program draws students from community colleges to campus once per month, nine times a year, to work with Caltech graduate students on research projects.

For the summer program, four students each year come to campus through the Caltech Student-Faculty Programs office to work on a research project in a faculty member's lab, akin to a Summer Undergraduate Research Fellowship (SURF). These Base 11 students work with a student mentor from the lab and ultimately present their findings in a seminar day at the end of the summer.

David Huynh, a graduate student in McKeon's lab, served as a student mentor from 2014 to 2017, helping students work on projects related to physics and fluid mechanics. 

"They all came in pretty gung ho, but they didn't know exactly what they wanted to do, which is the perfect mindset," Huynh says. "They were excited to learn—but about anything, not just what they wanted to do. They were motivated to learn and gain from this experience, which makes it a lot of fun."

That energy and curiosity translated into impressive and successful final projects. "I have been blown away by the presentations that the Base 11 interns made. Their performance is as good or better than the undergraduates coming from the best of the best schools for summer research fellowships," Ravichandran says.

McKeon agrees. "Students who come have been really outstanding, and the research that they've done has been indistinguishable from any of the other summer programs. They have thrived in an environment that can be quite new and challenging," she says.

Exposure to laboratories is one of the key components of the program for Base 11 students, she says, giving participating students the opportunity to do research and then decide if it is something they would want to pursue as a career.

"Part of the point of the program is to enable people who may not be initially comfortable at a place like Caltech to feel at home in a technical environment and see whether research or graduate school is something they're interested in," McKeon adds. "For some, it's the first step toward a long and brilliant academic career. An equally valid outcome is that the student discovers that academia is not what they want. Either way, they've been able to sample it and make that decision," she says.

The program has so far been funded by Foster Stanback and his wife Coco. Caltech's Office of Technology Transfer and Corporate Partnerships is currently seeking additional collaborations and partnerships with the aerospace industry to ensure that the program continues at Caltech.

Related Links: Summer Activities at Caltech
Wed, 02 Jan 2019
Caltech biochemistry professor Bil Clemons (right) and graduate student Shyam Saladi. Credit: Caltech

Integral membrane proteins (IMPs): They're unquestionably important and notoriously elusive. Caltech biochemistry professor Bil Clemons is on a mission to take some of the guesswork out of how we study them.

Step 1: Define the problem

The cell membrane, just a few nanometers thick, is what separates the interior of a cell from the cell's environment. Integral membrane proteins, as their name suggests, are embedded in cell membranes. In contact with both the cell's interior and everything around the cell, IMPs are gatekeepers, transporters, and conduits of information, and they enable cells to communicate with each other.

Because of their complicated habitat—partway in the cell, partway out, and partly in the cell membrane—researchers find it extremely difficult to successfully extract IMPs for study.

Read the full story in The Caltech Effect

Thu, 20 Dec 2018
The technology could be used to develop more sophisticated nanomachines with reconfigurable parts
News Writer: Emily Velasco
An artist's rendering of a game of tic-tac-toe played with DNA tiles. Credit: Caltech

Move over Mona Lisa, here comes tic-tac-toe.

It was just about a year ago that Caltech scientists in the laboratory of Lulu Qian, assistant professor of bioengineering, announced they had used a technique known as DNA origami to create tiles that could be designed to self-assemble into larger nanostructures that carry predesigned patterns. They chose to make the world's smallest version of the iconic Mona Lisa.

The feat was impressive, but the technique had a limitation similar to that of Leonardo da Vinci's oil paints: Once the image was created, it could not easily be changed.

Now, the Caltech team has made another leap forward with the technology. They have created new tiles that are more dynamic, allowing the researchers to reshape already-built DNA structures. When Caltech's Paul Rothemund (BS '94) pioneered DNA origami more than a decade ago, he used the technique to build a smiley face. Qian's team can now turn that smile into a frown, and then, if they want, turn that frown upside down. And they have gone even further, fashioning a microscopic game of tic-tac-toe in which players place their X's and O's by adding special DNA tiles to the board.

"We developed a mechanism to program the dynamic interactions between complex DNA nanostructures," says Qian. "Using this mechanism, we created the world's smallest game board for playing tic-tac-toe, where every move involves molecular self-reconfiguration for swapping in and out hundreds of DNA strands at once."

Putting the Pieces Together

That swapping mechanism combines two previously developed DNA nanotechnologies. It uses the building blocks from one and the general concept from the other: self-assembling tiles, which were used to create the tiny Mona Lisa; and strand displacement, which has been used by Qian's team to build DNA robots.

Both technologies make use of DNA's ability to be programmed through the arrangement of its molecules. Each strand of DNA consists of a backbone and four types of molecules known as bases. These bases—adenine, thymine, cytosine, and guanine, abbreviated as A, T, C, and G—can be arranged in any order, with the order representing information that can be used by cells, or in this case by engineered nanomachines.

The second property of DNA that makes it useful for building nanostructures is that the A, T, C, and G bases have a natural tendency to pair up with their counterparts. The A base pairs with T, and C pairs with G. By extension, any sequence of bases will want to pair up with a complementary sequence. For example, ATTAGCA will want to pair up with TAATCGT.


A pair of complementary DNA sequences bonded together.

However, a sequence can also pair up with a partially matching sequence. If ATTAGCA and TAATACC were put together, their ATTA and TAAT portions would pair up, and the nonmatching portions would dangle off the ends. The more closely two strands complement each other, the more attracted they are to each other, and the more strongly they bond.


Partially paired DNA strands leave unpaired sequences dangling off the ends.

To picture what happens in strand displacement, imagine two people who are dating and have several things in common. Amy likes dogs, hiking, movies, and going to the beach. Adam likes dogs, hiking, and wine tasting. They bond over their shared interest in dogs and hiking. Then another person comes into the picture. Eddie happens to like dogs, hiking, movies, and bowling. Amy realizes she has three things in common with Eddie, and only two in common with Adam. Amy and Eddie find themselves strongly attracted to each other, and Adam gets dumped—like a displaced DNA strand.


Amy and Adam paired up like complementary DNA strands.


Eddie and Amy have more in common and their bond is stronger. As in DNA strand displacement, Amy leaves with Eddie.


Adam is now alone, much like a displaced strand of DNA.
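
The complement rule, partial pairing, and displacement described above can be captured in a small illustrative sketch (a simple match count stands in for binding strength here; real DNA thermodynamics is far richer):

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(seq):
    # Base-by-base complement, e.g. ATTAGCA -> TAATCGT.
    return "".join(COMPLEMENT[b] for b in seq)

def paired_positions(strand_a, strand_b):
    # Count aligned positions where the two strands can base-pair.
    return sum(1 for a, b in zip(strand_a, strand_b) if COMPLEMENT[a] == b)

def displaces(anchor, bound, incoming):
    # The incoming strand wins if it pairs with the anchor at more positions.
    return paired_positions(anchor, incoming) > paired_positions(anchor, bound)

print(complement("ATTAGCA"))                        # TAATCGT
print(paired_positions("ATTAGCA", "TAATACC"))       # 4: only the ATTA/TAAT portions pair
print(displaces("ATTAGCA", "TAATACC", "TAATCGT"))   # True: the full complement takes over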

The other technology, self-assembling tiles, is more straightforward to explain. Essentially, the tiles, though all square in shape, are designed to behave like the pieces of a jigsaw puzzle. Each tile has its own place in the assembled picture, and it only fits in that spot.

In creating their new technology, Qian's team imbued self-assembling tiles with displacement abilities. The result is tiles that can find their designated spot in a structure and then kick out the tile that already occupies that position. Whereas Eddie merely bonded with one person, causing another to be kicked to the curb, the tiles are more like an adopted child who connects so strongly with a new family that they take the title of "favorite" away from biological offspring.

"In this work, we invented the mechanism of tile displacement, which follows the abstract principle of strand displacement but occurs at a larger scale between DNA origami structures," says Qian's former graduate student Philip Petersen (PhD '18), lead author of the study. "This is the first mechanism that can be used to program dynamic behaviors in systems of multiple interacting DNA origami structures."

Let's Play

To get the tic-tac-toe game started, Qian's team mixed up a solution of blank board tiles in a test tube. Once the board assembled itself, the players took turns adding either X tiles or O tiles to the solution. Because of the programmable nature of the DNA they are made from, the tiles were designed to slide into specific spots on the board, replacing the blank tiles that had been there. An X tile could be designed to only slide into the lower left-hand corner of the board, for example. Players could put an X or an O in any blank spot they wanted by using tiles designed to go where they wanted. After six days of riveting gameplay, player X emerged victorious.

Obviously, no parents will be rushing out to buy their children a tic-tac-toe game that takes almost a week to play, but tic-tac-toe is not really the point, says Grigory Tikhomirov, senior postdoctoral scholar and co-first author of the study. The goal is to use the technology to develop nanomachines that can be modified or repaired after they have already been built.

"When you get a flat tire, you will likely just replace it instead of buying a new car. Such a manual repair is not possible for nanoscale machines," he says. "But with this tile displacement process we discovered, it becomes possible to replace and upgrade multiple parts of engineered nanoscale machines to make them more efficient and sophisticated."

Their paper, titled "Information-based autonomous reconfiguration in systems of interacting DNA nanostructures," appears in the December 18 issue of Nature Communications. Funding was provided by the Burroughs Wellcome Fund, the Shurl and Kay Curci Foundation, the National Institutes of Health, and the National Science Foundation.

Wed, 19 Dec 2018
Caltech's planetary visualization lab played a key role in helping choose the Mars 2020 landing site
News Writer: Robert Perkins
A topographical map of Jezero Crater, where the Mars 2020 mission will land. Credit: Murray Lab/Caltech

In November, NASA announced that the upcoming Mars 2020 rover mission will be sent to Jezero Crater, a 28-mile-wide feature on the western edge of Isidis Planitia, a giant impact basin just north of the Martian equator. 

The decision to go to Jezero Crater was the subject of much debate among Mars 2020 scientists. In the end, it came down to a choice between two highly favored landing sites, Jezero and a nearby site called Northeast Syrtis, each of which presented different research opportunities. To determine which to target, the Mars 2020 team relied on assistance from the Bruce Murray Laboratory for Planetary Visualization at Caltech, where researchers compiled disparate data sets about Mars to create multilayered, easy-to-read maps of the landing sites and a 3-D visualization tool. 

"The Murray Lab was incredibly helpful as we determined where to go in 2020," says Ken Farley, the W. M. Keck Foundation Professor of Geochemistry and project scientist for Mars 2020 at JPL, which Caltech manages for NASA. "The lab has the ability to assemble and manipulate the data in a way that makes it accessible to anyone regardless of their background."

Known as the Murray Lab for short, the facility was established on the second floor of the Charles Arms Laboratory of the Geological Sciences in 2016 to develop next-generation image-processing capabilities for planetary scientists. The facility is named for the late planetary science pioneer Bruce Murray, a faculty member at Caltech for nearly 50 years and director of JPL from 1976 to 1982. On the surface, the lab appears to be just a room with an enormous TV, a long couch, and a lot of computer servers humming away in the background. In reality, it is both a state-of-the-art image-processing facility and meeting space for examining and discussing those images.

For the Mars 2020 mission, a team led by Murray Lab manager Jay Dickson, research scientist in image processing, processed hundreds of gigabytes of data from NASA—images, topographical maps, thermal data, and so on—into files that can be opened using Google Earth. Scientists could do flyovers and generate perspective views based on the data in a free, easy-to-use interface. 


A Google Earth flyover of Jezero Crater created using data compiled by the Murray Lab. Credit: Caltech
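
As a rough sketch of what a Google Earth-ready product can look like (not the Murray Lab's actual pipeline; the file names and bounding box below are placeholders), a single image tile can be wrapped in a KML ground overlay:

# Minimal sketch: write a KML GroundOverlay that Google Earth can open.
# The image name and coordinates are illustrative placeholders only.
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{name}</name>
    <Icon><href>{image}</href></Icon>
    <LatLonBox>
      <north>{north}</north>
      <south>{south}</south>
      <east>{east}</east>
      <west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>
"""

def write_overlay_kml(path, name, image, north, south, east, west):
    """Write a single-image ground overlay as a KML file."""
    with open(path, "w") as f:
        f.write(KML_TEMPLATE.format(name=name, image=image, north=north,
                                    south=south, east=east, west=west))

# Hypothetical mosaic tile covering a region near Jezero Crater.
write_overlay_kml("jezero_overlay.kml", "CTX mosaic tile (placeholder)",
                  "ctx_mosaic_tile.png", 19.0, 18.0, 78.0, 77.0)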

For previous Mars missions, specialist scientists would pore over data sets and satellite images of ground features and then debate possible landing sites armed with PowerPoint presentations. 

"It's not that easy to just look at images of Mars in a useful way," says Dickson, who was brought on to the Mars 2020 working group during the summer of 2017. A challenge was that most of the available information about the potential landing sites was gleaned from satellite measurements, and yet many of the scientists tasked with helping to decide which landing site to choose were geologists more used to working directly with samples or in a landscape, not studying it from above.

To overcome that, the Murray Lab team first stitched together images captured by the Context Camera (CTX) on JPL's Mars Reconnaissance Orbiter (MRO) and then folded in information from disparate data sets—mineralogy data, for example, and temperature maps that reveal whether a patch of ground is rock or soil. 

The final product, which allows users to navigate a region and seamlessly call up relevant data, represents a democratization of information processing, because it allows a more diverse group of researchers to make informed decisions about the landing site regardless of their skill set, says Bethany Ehlmann, a professor of planetary science, JPL research scientist, and member of the Mars 2020 team. "The Murray Lab allowed more people on the Mars 2020 team to get involved in evaluating landing sites."

The Murray Lab's involvement did not end with the creation of these multilayered maps. They also developed a 3-D visualization tool that could be viewed at the Murray Lab. As the home to an 84-inch, 4K resolution 3-D display, the Murray Lab became a regular meeting spot where Mars 2020 scientists could "fly" around the landscapes of the two main potential landing sites (as well as a third, "Midway," located midway between the two) and debate their merits. 

The Mars 2020 team finally arrived at what Farley calls an "excellent compromise" solution. The rover, which will collect and ultimately cache samples for possible retrieval by a future mission, will land at Jezero and—if the mission lasts long enough—eventually head toward Midway.

"We know how to land at both locations, so the cache would be safe at either place. And the value of multiple cached samples would be enhanced beyond what you'd get at any one landing site," Farley says. 

Sample retrieval aside, the Murray Lab has changed the way that landing sites will be evaluated, which could bring more potential partners into the lab for future projects, Dickson says. "This has been a great vehicle for introducing people to the lab," he says. 

All high-resolution data produced by the Murray Lab are available for streaming using Google Earth through the Murray Lab's website (murray-lab.caltech.edu/Mars2020/). The Murray Lab has received support from Foster and Coco Stanback and the Twenty-Seven Foundation.

Related Links: JPL News: NASA Announces Landing Site for Mars 2020 Rover | Where to Land Mars 2020: A Conversation with Ken Farley
Wed, 19 Dec 2018
Baltimore chairs an international committee to discuss human genome editing
News Writer: Lori Dajose
Gene Editing and Ethics: A Q&A with David Baltimore. David Baltimore answers questions about emerging technologies, CRISPR, and the ethical debates in the field of gene editing. The Nobel laureate and Millikan Professor of Biology at Caltech recently returned from the Second International Summit on Human Genome Editing in Hong Kong. Credit: Caltech

Less than a decade ago, scientists gained the unprecedented ability to alter the genetic code of living organisms with the development of a tool called CRISPR-Cas9. In late 2015, recognizing the power of the CRISPR technology, a group of scientists held the first International Summit on Human Gene Editing. Led by Caltech's David Baltimore, president emeritus and Robert Andrews Millikan Professor of Biology, the group concluded that gene editing technology was far too underdeveloped to be used on humans.

This year, on the eve of the second international summit held in November in Hong Kong, a scientist announced that he had already edited the genomes of human embryos and inserted them into their mother's uterus—in spite of an international agreement not to carry out such an insertion—and that the twin babies had just been born. We sat down with Baltimore to discuss this report and the outcomes of the second international conference.

What was the motivation to hold this summit for the second time?

At the first summit, we made a clear distinction between somatic gene editing (where no edited genes can be passed down to the next generation) and germline gene editing (where the edited genes are passed down). We concluded that somatic editing was like any other medical intervention: once it is deemed safe and effective, it should become part of ordinary medicine. However, germline editing raises many questions, both practical and moral, and, we concluded, it needed much more study before it could be considered for human use. We did encourage further research to perfect the methods. 

The second summit, three years after the first, was meant as an update—to take stock of the advances of the previous three years and to decide how perspectives had changed. We asked questions about how research was progressing on somatic editing and recognized that many investigators were initiating clinical trials focused on a variety of diseases. We saw that new, safer methods of germline editing had been developed, but we concluded that the moral and practical uncertainties remained to be resolved, and we continued to believe that it would be irresponsible to initiate trials in humans. 

The day before the summit, a scientist announced that he had modified the genomes of two embryos, and that they had been successfully carried to term and born. What was the committee's response to this?

We were surprised, to say the least, when we heard just before the meeting began that somebody was going to announce that he had actually implanted gene-edited embryos back into a woman and that she had given birth to two children. What had been planned as a largely academic discussion became a media circus. This announcement was a very serious ethical challenge.

In a sense, the publicity that surrounded this basically irresponsible act had a plus side. It focused the attention of the general public on certain activities of modern science. Sometimes, it takes a very dramatic situation for nonscientists to pay attention long enough to recognize the advances of science; it produced a teaching moment. I think you'll find that many of the people who were at this meeting in Hong Kong are now back in their own countries and cities and laboratories, where they are being asked to talk to their local radio stations, talk to their local community organizations, and that's positive, although it does in no way justify the actions of Dr. He Jiankui, the scientist who carried out the human germline editing. 

You and the committee described being disturbed by this research, but are you hopeful that one day germline editing could be conducted safely and responsibly?

I certainly hope that we will reach that point. It's part of my general belief that modern medicine will have the ability to ameliorate much of the burden of the diseases that we still suffer with as human beings, like cancer, inherited disease, heart disease. I'm hopeful that we will ameliorate those and that the world will become, in that sense, a better place because of modern biology.

Other than ameliorating disease, can genetic engineering solve other global problems?

Most global problems have a strong social dimension, and social problems are not solved by genetic tricks; they're endemic to the culture of our society. However, the general worry is that if we develop the ability to modify the germline, only wealthy people will be able to take advantage of that, and so it may exacerbate the difference between the opportunities available to the wealthy and to the impoverished. This concern applies to all medical advances and is not specific to gene editing. It's a social problem created by medical successes, and we have to think about how to make such successful treatments widely available.

What do you see as the future of the field of genome editing?

The science is advancing very rapidly. New ways of using the technology are being invented continually, so it will just get more and more effective and powerful over time. That's what I see happening, in particular in the area of somatic gene therapy.

We can do a lot with somatic gene editing. Some of it involves the direct modification of an inherited genetic problem. For example, sickle cell disease is caused by a single mutation in the hemoglobin beta gene. We could directly correct that or modify other genes to provide a replacement for the defective beta gene. That would majorly improve the lives of people who inherit sickle cell disease.

Another application of somatic gene editing is immunotherapy for cancer. We are today treating people who have cancer by modifying their immune cells and making them attack the cancer cells and kill them. You're actually getting the body to clean itself up, in a sense.

Both of those things are in clinical trials today, and we expect this will become a part of medicine within the next few years.

I suspect that as time passes, we will want to rethink whether gene editing ought to be used in modifying the germline.

There are thousands of single-gene defects in humans. We're going to see, I think, some pressure to use germline technology in people where the medical need is great, such as sickle cell hemoglobin disease, Huntington's disease, and others. I think we'll find situations in which the benefit-risk ratio is very much in favor of the benefit. At that point, I think there's a moral argument to be made that we have to use gene editing because we can improve the lives of people.

What are the challenges in doing germline modification? Where do we need to be cautious?

There are two kinds of practical challenges in using the technology. One is an off-target effect: you want to modify a gene at a certain position, and you inadvertently cause a change somewhere else in the genome. These are accidental errors that would be passed on to later generations and need to be carefully avoided.

The other problem is that, if the edit is done as the cells are dividing in the embryo, you could have a situation in which the embryo becomes what we call mosaic: some of its cells are edited, some of its cells are not edited.

I think people who work with this technology are reasonably comfortable that off-target effects can be assayed and minimized. But there are real questions about whether we know how to handle the mosaicism. 

What are some of the moral and ethical boundaries surrounding gene editing?

Because germline editing involves making alterations in the genome that would be passed down through the generations, it should only be done when we have a clear idea of what the consequences of the gene alterations will be. Right now, I believe that limits the use of germline alteration to genes with predictable behaviors, like that which causes Huntington's disease. 

Some people oppose gene alteration on basically religious grounds. They would say there should never be gene modification. For instance, if you believe that humans are perfect, then you may not want to modify them even if they're not healthy.   

Then there are, I think, lots of other people who believe that if there is a way to make the lives of people better, we should do it. Those people now have to make another distinction: that distinction is between a modification that is only in your own body and a modification that is inherited by your offspring. That's a fundamental difference, not because of the mechanics of it but because of the moral status of the individual; by modifying the genes, have you modified some essence of the individual? Again, there are people on both sides of that question.

We're going to debate these questions over the next years and decades, and there are always going to be people on both sides of the issue. We will have to decide to go one way or another. I think it's pretty clear where I would go, but I don't have any more important status than anybody else in this discussion, and so it really will come down to what the majority of people think is the right way to behave.

It seems that your camp of researchers must also determine which modifications will actually improve people's lives and which are for aesthetic preferences or are simply superfluous.

Yes, that is a fundamental distinction that is very hard to make. When is a gene alteration a way of improving an individual's health and when is it an aesthetic preference or a socially desirable characteristic? That's a conversation that's going on with the whole world today. I have emphasized the easy case, which is where an individual has genes that are in some way driving ill health. But how about genes that people would just like to see in their children? Blue eyes, or intelligence, or the like. I think the general feeling is that we shouldn't be doing that, but there is a concern that once we perfect the methods for improving health, the same methods could be used for other purposes. That is a "slippery-slope" argument, and people are even saying we should not use the methods for dealing with serious diseases because it opens up the slippery-slope concern.  

Predicting all the consequences of a gene alteration is difficult. For instance, in the U.S., sickle cell disease is clearly something we would want to avoid if possible—but in Africa, the sickle cell trait protects an individual against malaria and therefore has a positive consequence as well as a negative one. So there is a risk/benefit calculus to consider for any gene alteration, and we simply may not know enough to make the judgment confidently. So we must ask whether we know enough to make a judgment, or would we be best off taking a humble stance in the face of uncertainty. Thus, our advances in science face us with a mixture of practical and moral questions, and opportunities that are not easily resolved.   

 

The First International Summit on Human Gene Editing was convened by the Chinese Academy of Sciences, the Royal Society, the U.S. National Academy of Sciences, and the U.S. National Academy of Medicine. The Second International Summit on Human Genome Editing was convened by the Academy of Sciences of Hong Kong, the Royal Society, the U.S. National Academy of Sciences, and the U.S. National Academy of Medicine.

Wed, 19 Dec 2018