Check out a story about our work predicting outcomes of HIV drugs, authored by Matt Luchette for Bio-IT World.
From Pleuni Pennings:
… overall, drug resistance is not as big a problem as one may think. Treatments have become very good, which means that the rate of evolution of drug resistance is low. At the same time, many new drugs have become available so that when drug resistance evolves, the patient can be switched to another set of drugs. However, in poor countries, where viral genotyping, viral load monitoring and many new drugs are not available, drug resistance still poses a serious threat to people’s health.
I learned a couple of cool things about HIV recombination recently:
1) There are recombination hotspots in the HIV genome. When two viral genomes cohabit in the same virus capsid, their “offspring” virus will inherit a bit from each parent. A hotspot is a likely location for genetic crossover: If the offspring has genetic material from parent “A” upstream of a hotspot, then it is particularly likely to have genetic material from parent “B” downstream of a hotspot. I learned about this issue from Atila Iamarino, whose team in São Paulo has been chasing down new recombinant strains in Brazil, which seem to be popping up despite the spread of antiretroviral therapy.
The existence of hotspots in HIV is not surprising, as recombination hotspots show up in many species when people have searched for them…
2) … But knowing about these hotspots might give us some clues about ways to slow down the evolution of resistance. Many people discuss drug resistance as if it were an on-or-off switch — either a bug is resistant or it ain’t. This hasty approximation has caught on because of its clinical convenience: if you are a doctor treating a patient, then you want to know, simply: Do I use this drug or not? Yes or no? At the point of care, a complicated or nuanced answer is not helpful. But one of the benefits of being a non-clinical researcher is that I can take a step back from the point of care and think about the broader picture, which includes viral evolution.

Imagine a drug where resistance is a sliding scale: a single mutation may confer weak resistance, but as the number of mutations increases, it becomes harder to use the drug successfully (e.g., a patient will need to be very careful about taking 100% of their pills, or will need to switch drugs). Say that one lucky virus within a treated person has just evolved three resistance mutations; this bug is now a major threat to continued treatment success. If the three mutations are far apart on the genome — especially if recombination hotspots lie between them — then recombination is quite likely to break up those resistance mutations when the virus pairs with a typical, nonresistant virus, and so its viral offspring will carry fewer resistance mutations.

Recombination hotspots can actually prevent the growth of resistance, so it is a good idea to choose drugs (or combinations of drugs) in which the “killer” resistance mutations are spread far apart on the genome and have hotspots separating them.
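To make that intuition concrete, here is a tiny Monte Carlo sketch — my own toy illustration, not a published model — in which a triple-mutant virus recombines with a wild-type partner. The crossover probability at each hotspot is a made-up number chosen just for illustration:

```python
import random

# Toy sketch (my own illustration, not a published model): a resistant
# genome carries mutations at three loci, with a recombination hotspot
# between each adjacent pair. An offspring genome is built left to right,
# copying from one parent and switching parents at each hotspot with some
# probability. The resistant parent carries the mutation at every locus;
# the wild-type parent carries none.

HOTSPOT_CROSSOVER_P = 0.4   # assumed per-hotspot crossover probability
N_OFFSPRING = 100_000

def offspring_mutation_count(rng):
    """Count resistance mutations inherited by one recombinant offspring."""
    copying_from_resistant = rng.random() < 0.5  # random starting parent
    count = 0
    for locus in range(3):
        # a hotspot sits before locus 1 and before locus 2
        if locus > 0 and rng.random() < HOTSPOT_CROSSOVER_P:
            copying_from_resistant = not copying_from_resistant
        if copying_from_resistant:
            count += 1
    return count

rng = random.Random(0)
counts = [offspring_mutation_count(rng) for _ in range(N_OFFSPRING)]
frac_fully_resistant = sum(c == 3 for c in counts) / N_OFFSPRING
print(f"fraction of offspring keeping all 3 mutations: {frac_fully_resistant:.3f}")
```

With a 40% crossover chance at each of the two hotspots, fewer than one in five recombinant offspring keeps all three mutations — the rest get knocked back down the resistance ladder.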
There are caveats — and there are cases where recombination could work in the opposite direction, causing strongly drug-resistant viruses to persist in a patient for a long time, after they have spread widely — but the above story is the one best supported by the modeling literature so far.
There are millions of scenarios that HIV researchers would love to test out — what would happen if the typical infected New Yorker found out their HIV status a week sooner? Or what if the average Parisian infected with the virus waited a month longer to begin treatment? What would happen if a thousand people in Johannesburg were all of a sudden stricken with a drug-resistant strain? In each case, we would want to know how each individual patient would fare, how best to intervene and treat them, and how the size of the epidemic would change as a result of different interventions.
But it’s neither possible (nor ethical!) to test every scenario that an enterprising epidemiologist dreams up. That’s where computer models come in — using them, we can simulate cases almost as fast as we dream them up. Positively Aware’s Rick Guasco writes about how these models may change the face of HIV treatment. And he featured some work that we did at Harvard & JHU on finding regimens to fight drug resistance. Right after I spoke with Rick about our project, I worried that I had inundated him with far too many details about mutation rates, dose-response curves, and goodness knows what else came to mind, but he distilled everything beautifully — of course, that is why he is a journalist and I am not! (Note to self: develop a clearer style for chatting with journalists…)
See the rest of the issue for a tour of other ways that new technology is changing life with HIV.
If I toss a coin, it is certain that I will get heads or tails, but that outcome depends on my tossing the coin, which I may not necessarily do. Likewise, any particular universe may follow from the existence of a multiverse, but the existence of the multiverse remains to be explained. In particular, the universe-generating process assumed by some multiverse theories is itself contingent because it depends on the action of laws assumed by the theory. The latter might be called meta-laws, since they form the basis for the origin of the individual universes, each with its own individual set of laws. So what determines the meta-laws? Either we must introduce meta-meta-laws, and so on in infinite regression, or we must hold that the meta-laws themselves are necessary — in which case we have, in effect, merely redefined the fundamental universe as one that contains many universes. Either way, we are still left without ultimate explanations as to why that universe exists or has the characteristics it does.
It is possible to be a naturalist without embracing scientism, but it does take some work.
We have two kidneys, mayflies lay packets of thousands of eggs, and duplicate genes abound in all living things. Gallifreyans even have two hearts. Redundancy is a universal fact of biology — but how does it play a role in HIV?
For HIV infection to grow within a person, it must spread from one T-cell to another. Typically, individual virus particles bud from an infected cell, diffuse through the lymph, and then enter a second T-cell. It just takes a single virus to infect that second cell — and so the many viruses exiting from one cell can infect many other cells. This mode is called cell-free transmission. But there is a second way that T-cell infection can occur: An infected T-cell can “link up” to an uninfected T-cell, forming a synapse between the two. Tens or hundreds of virus particles may then shuttle directly through the synapse, infecting the new cell. This second mode of transmission is called synaptic transmission or direct cell-to-cell transmission.
Why should doctors & patients care? It turns out that this mode of transmission may give the virus a way to avoid the effects of drug treatment. My back-of-the-envelope calculation suggests that synaptic transmission can halve or quarter a patient’s effective drug dose!
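I won’t reproduce the actual envelope here, but the flavor of the arithmetic goes something like this — the Hill-curve form and every number below are my own illustrative assumptions, not the original calculation:

```python
# Hedged reconstruction of the kind of back-of-the-envelope logic alluded
# to above (my own assumptions, not the author's actual calculation).
# With a Hill-type dose-response, the chance that a single virion slips
# past the drug at dose D is f(D) = 1 / (1 + (D / IC50)**m).
# If a synapse delivers n virions at once, the cell is infected unless
# the drug blocks every one: P_infect = 1 - (1 - f(D))**n, which at high
# doses is roughly n * f(D). Matching that to f(D_eff) gives
# D_eff ≈ D / n**(1/m): n virions act like an n**(1/m)-fold dose cut.

def slip_probability(dose, ic50=1.0, hill=3.0):
    """Per-virion probability of establishing infection despite the drug."""
    return 1.0 / (1.0 + (dose / ic50) ** hill)

def effective_dose_reduction(n_virions, hill=3.0):
    """Approximate fold-reduction in effective dose from n-virion synaptic transfer."""
    return n_virions ** (1.0 / hill)

for n in (10, 100):
    print(f"{n} virions per synapse ≈ {effective_dose_reduction(n):.1f}-fold dose cut")
```

With a Hill coefficient of 3 (steep slopes in this range have been reported for some antiretroviral classes), ten virions per synapse acts like roughly halving the dose, and a hundred virions like quartering it — the “halve or quarter” range mentioned above.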
But why the virus evolved to spread in this manner is a bit of a puzzle — if only a single particle is needed to infect a cell, why waste hundreds all at once? Does the virus benefit from this alternate lifecycle, or is this redundant mode of transmission a quirky side-effect (a “spandrel”) of T-cell physiology that is just wasteful, from the virus’ perspective?
In PLoS ONE last month, Komarova, Levy, and Wodarz investigate exactly this question by modeling how the virus can evolve to maximize growth of the infection. Using a model of immune response, they conclude that there is room for two opposing viral strategies: a “stealth” strategy where just one or a few viruses infect each cell, and a “saturation” strategy where many viruses infect each cell, ganging up to overwhelm the person’s immune response. Synaptic transmission may have evolved to let HIV “gang up” on the intracellular immune response.
This saturation strategy is analogous to the commonly observed adaptation known as predator satiation — a strategy used by prey species (such as the mayfly laying thousands of eggs!) to flood their immediate environment with many more offspring than the cohabiting predator species can possibly consume. KLW’s model shows a way in which viral saturation can likewise be adaptive.
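For concreteness, here is a toy model of the tradeoff — my own sketch, not KLW’s actual model. Each target cell has an intracellular defense that can soak up some number of incoming virions, and the virus must split a fixed virion budget between many lightly loaded cells (“stealth”) or a few heavily loaded ones (“saturation”). The defense distributions and budget below are invented for illustration:

```python
import random

# Toy model (my own sketch, not KLW's actual model): each target cell
# carries an intracellular defense that neutralizes up to k incoming
# virions, with k drawn at random per cell. A cell becomes infected only
# if more virions arrive than its defense can absorb. A virus with a
# fixed virion budget can play "stealth" (1 virion per cell, many
# attempts) or "saturation" (many virions per cell, few attempts).

def expected_infections(budget, group_size, defense_sampler, trials=2_000, seed=0):
    """Mean number of cells infected per trial for a given strategy."""
    rng = random.Random(seed)
    attempts_per_trial = budget // group_size
    total = 0
    for _ in range(trials):
        infected = 0
        for _ in range(attempts_per_trial):
            k = defense_sampler(rng)
            if group_size > k:       # defense saturated: cell infected
                infected += 1
        total += infected
    return total / trials

weak = lambda rng: rng.choice([0, 0, 0, 1])     # most cells poorly defended
strong = lambda rng: rng.choice([2, 5, 10, 20])  # every cell stops lone virions

for env, sampler in (("weak immunity", weak), ("strong immunity", strong)):
    stealth = expected_infections(100, 1, sampler)
    saturation = expected_infections(100, 25, sampler)
    print(f"{env}: stealth ≈ {stealth:.1f} cells, saturation ≈ {saturation:.1f} cells")
```

When most cells are weakly defended, stealth infects far more cells per virion spent; when every cell can neutralize at least a couple of virions, lone virions never get through and only saturation succeeds — room for both strategies, depending on the immune environment.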
While they do not say so explicitly, an extension of KLW’s model might also explain the coexistence of cell-free and synaptic modes of transmission, as a way for the virus to “bet-hedge” — using the cell-free mode to spread quickly in permissive immune environments, and using the synaptic mode to ensure safe passage from cell to cell when confronted with a stronger immune response. Even with the uncertainty surrounding exactly how to model the immune response, KLW’s theory should provide a helpful framework for virologists who want to think about the evolutionary causes and pathological effects of synaptic transmission.
I woke up the other day and said to myself,
“You’re a mathematical biologist, and you’re doing pretty cool stuff, but there’s an entire world of mathy-bio types doing amazing things… and you know so little about them! You’ve read such a tiny fraction of the literature out there — so what do you know?”
This blog is my attempt to push myself to read a slightly larger fraction of what’s out there, to think about it, to write about it, and — if my dear readers are so inclined — to have conversations about it. Welcome to all!