We caught up with Dr. Gerald Lushington, Editor-in-Chief, to ask about his latest updates and activities!
In conjunction with Bentham Science Publishers’ interview series of journal editors, it is a privilege to introduce Dr. Gerald Lushington, Editor-in-Chief of the journal Combinatorial Chemistry and High Throughput Screening (CCHTS).
As a corporate executive and professional consultant specializing in a wide range of biomedical technologies, I first heard about Gerry roughly four years ago, while I was serving as an external evaluator of bioinformatics and biomedical informatics programs in the greater Kansas City Area. In the course of my evaluations, I met with a fair number of life sciences professionals who spoke very highly of Lushington’s research and technical acumen.
Intrigued by this background, I telephoned Gerry in the fall of 2016, and soon we were exploring consulting collaborations, as well as possible commercialization of some novel antimicrobial peptide formulations that he had co-invented. While the antimicrobial peptides did not gain immediate traction with investors, our discussions led to other compelling applications for peptides. The most interesting discussions tended to converge on our common interests in neuroscience.
In particular, Gerry and I had one memorable discussion in a cold December parking garage after having left an antimicrobial peptide planning meeting. Our breath hanging in the frosty air, Gerry was describing his recent paper on neurodegenerative disorders (in particular Alzheimer’s disease and Amyotrophic Lateral Sclerosis, but also Parkinson’s, Huntington’s and others) and the recent research confirming that many neuropathies involve conformationally induced protein misfolding similar to what had first been identified in the 1980s for prion diseases. At the time of our discussion, Lushington believed that pathological protein misfolding might be triggered within stress granules formed under destabilized conditions (viral threats, trauma, intoxication, etc.) in the brain. As he spoke, I found myself reflecting on my long involvement with neuropsychiatric research (especially schizophrenia, but also other disorders), wondering if his models could also play a role in explaining broader mental illness. These thoughts became the seed of what would become TheraPeptics, through which (using models far more sophisticated than our early parking garage sketches) we are now polishing very novel, highly consistent, and druggable models for Alzheimer’s, neuropsychiatric lupus, bipolar disorder, diabetic neuropathies, and various other somewhat related pathologies.
If the scope of TheraPeptics targets is broad, then so are Gerry’s research interests. In recent years, in addition to his regular CCHTS editorials, Gerry has researched and written a spate of papers on antimicrobial drug design, neurodegeneration, cognitive physiology and drug toxicology. It’s all done here, in a small laboratory which is really no more than an office with a few bookshelves and, most crucially, a collection of computers.
“That’s all I need,” he says. “I’m a different breed than, say, analytical biologists, synthetic chemists, clinical researchers, in that my work is all on the computer. Of course, those disciplines have computers too; virtually all scientists use software to set up, run, and analyze their experiments. But modern laboratory research increasingly teams up with a whole other field called ‘computational science’ that uses sophisticated algorithms to explore the broader implications of laboratory data, predicting behavior at different levels of abstraction – different time scales, different environments, or under more realistic real-world conditions.”
I’m not sure exactly what Dr. Lushington means, so I ask him for an example, and the interview is off and running.
Gerald H. Lushington (GHL): An example? Sure, let me tell you about the bees.
Anthony C. Barnes (ACB): The bees?
GHL: Right. A year and a half ago, I met a brilliant young scientific mind, Mary Zgurzynsky, who had come from a family of bee keepers. She had done some low-tech, yet very thorough, behavioral tests of the effect of the herbicide glyphosate on the homing ability of honey bees. Using basic but meticulous sampling strategies, she had convincingly shown that glyphosate impaired the ability of bees to return efficiently to their hive. That much was clear, but she wasn’t certain why it was having that effect.
ACB: Glyphosate. That being the main ingredient in RoundUp.
GHL: Exactly. As you probably know, glyphosate was carefully engineered to target weeds rather than food crops, and has been subject to laborious toxicology tests to rule out threats to animals or humans cultivating or consuming glyphosate-treated crops. Yet here we were confronted with clear, unbiased data suggesting a significant ‘intoxication’ effect.
ACB: Ah. When you say ‘intoxication’, you’re referring to something like the effect of alcohol on humans?
GHL: Sort of, yes. You see, on top of the basic statistical evidence that glyphosate-dosed bees took longer to return to the hive, Mary also had important qualitative observations. The bees were still trying to return home, and would generally still make it, but they were far less direct. When challenged with mazes back to the hive entrance, intoxicated bees took an unusually large number of wrong turns, and needed to pause more often.
ACB: Would you say that they paused because they were exhausted?
GHL: That would have been my guess. Fortunately, Mary has a much keener eye for fine detail than I’ll ever have. There were lots of subtle behavioral clues suggesting that the ‘rest stops’ weren’t very ‘restful’. The affected bees kept making micro-motions normally associated with elevated stress, and they frequently extended their proboscis, as though they weren’t seeing very well and needed to rely more heavily on their sense of smell.
ACB: So, their sensory perception was degraded.
GHL: Yes, a very important hint. But that wasn’t the only clue. Their motor control was also deficient, and so was their cognitive ability. A healthy honey bee is a brilliantly programmed little calculator, almost like a tiny Roomba, with a detailed virtual map of where it’s gone and where it needs to go next. Not so with the glyphosate bees – not only would they take a lot of wrong turns, but they would often take the same wrong turn more than once.
ACB: Cognitive, sensory and motor skills all affected? Sounds more complicated than the average bender.
GHL: I agree. And this complexity, itself, was a clue.
ACB: So, the two of you figured out the mechanism? Using computational science?
GHL: I think so, yes. After a few false starts, we came to the hypothesis that glyphosate was acting like a glutamate mimic.
ACB: Aha! Glutamate, the neurotransmitter.
GHL: Exactly. Under the weight of evidence, it seemed the best explanation for the diverse set of behaviors. After data mining followed by extensive computational modeling, it was the only answer that held up to rigorous scrutiny.
ACB: How did you reach your hypothesis?
GHL: Well… To begin with, we looked in the PubChem database for any reports of glyphosate binding to known neuroreceptors, in humans or otherwise. We also checked to see whether there was evidence it might interfere with insect metabolism, since hypoglycemia can mimic various neurological symptoms, but screening data on glyphosate was really quite sparse. That basically told me that, although the substance had been subject to a lot of toxicology studies, it had rarely ever been included in biochemically specific targeted screens. That’s not surprising, since biochemical targeting is used more for pharmacology than toxicology, but it’s likely a key reason why glyphosate was never considered a possible neuromodulator. However, all that said, absence of evidence is not evidence of absence. I turned away from glyphosate, and began to mine for screening data on well-assayed molecules whose shape and charge distribution were similar to glyphosate. That’s when the lightbulb flashed. Among well-tested analog molecules that were most similar to glyphosate, there was a lot of glutamate receptor activity.
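The analog-mining step Lushington describes boils down to ranking well-assayed molecules by structural similarity to a query compound. Below is a minimal, purely illustrative sketch of that idea using Tanimoto similarity over fingerprint bit sets; the fingerprints and compound names are invented placeholders, not actual PubChem data, and a real workflow would generate fingerprints with a cheminformatics toolkit such as RDKit.

```python
# Illustrative similarity mining: rank "library" compounds by Tanimoto
# similarity to a query. Fingerprints here are placeholder sets of "on"
# bit indices, not real chemical descriptors.

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def rank_analogs(query_fp, library):
    """Return (name, similarity) pairs sorted from most to least similar."""
    scored = [(name, tanimoto(query_fp, fp)) for name, fp in library.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    query = {1, 4, 7, 9, 12}            # hypothetical query fingerprint
    library = {
        "analog_A": {1, 4, 7, 9, 15},   # shares 4 of 5 query bits
        "analog_B": {2, 5, 8, 11, 14},  # shares no query bits
        "analog_C": {1, 4, 12, 20},     # shares 3 query bits
    }
    for name, score in rank_analogs(query, library):
        print(f"{name}: {score:.2f}")
```

Once the top-ranked analogs are identified, their bioassay records can be inspected for recurring target activity, which is where the glutamate receptor signal emerged in the account above.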
ACB: So you used cheminformatics and bioinformatics to pinpoint a hypothesis. And then came validation?
GHL: Exactly. We were pretty sure we were onto something, but the glutamate hypothesis was quite novel and potentially controversial, so we knew it had to be defensible. At that point, we didn’t have the resources to do rigorous benchtop assays, so we decided to hit it hard with predictive molecular simulations, comparing glyphosate to relevant positive controls, including glutamate itself, as well as other compounds known from past biochemical, pharmacological and toxicological studies to interact with glutamate receptors.
ACB: So, you used molecular docking and molecular dynamics simulations?
GHL: Right. We started with docking, which is widely used to predict whether a compound has the right shape and charge profile to bind to a receptor. Our study was different than most, though. In drug design, you often dock a large number of compounds against a small number of targets, but our goal was instead to dock a small number of compounds into numerous receptors to see which target best explains the apparent toxic effect. Thus, we used homology modeling to compute the structures of 72 unique honey bee glutamate receptor isoforms, and docked glyphosate and six control compounds into each of those receptors. We also threw in a dozen human glutamate receptor isoforms for comparison.
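The bookkeeping behind this "inverse docking" strategy, a few compounds scored against many receptor models, can be sketched as below. The receptor names and docking scores are invented placeholders (by convention, more negative = stronger predicted binding), not results from the study itself.

```python
# Illustrative inverse-docking summary: given a table of docking scores
# (receptor -> {compound: score}), list receptors where a query compound
# is predicted to outcompete a reference ligand. Scores are placeholders.

def outcompeted_receptors(scores, query, reference):
    """Receptors where `query` scores better (more negative) than `reference`."""
    hits = []
    for receptor, per_compound in scores.items():
        if per_compound[query] < per_compound[reference]:
            hits.append(receptor)
    return hits

if __name__ == "__main__":
    # Hypothetical score table for three of the many receptor models.
    scores = {
        "bee_AMPA_iso1":   {"glyphosate": -9.1, "glutamate": -7.8},
        "bee_NMDA_iso3":   {"glyphosate": -8.0, "glutamate": -7.6},
        "human_AMPA_iso1": {"glyphosate": -6.2, "glutamate": -8.5},
    }
    print(outcompeted_receptors(scores, "glyphosate", "glutamate"))
```

Aggregating such per-receptor comparisons across all isoforms is what lets the study ask which target best explains the toxic effect, rather than which compound best fits one target.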
ACB: And what did you learn?
GHL: Well, the docking studies gave clear and consistent results. Glyphosate was generally not predicted to do much to humans, usually failing to outcompete glutamate in binding to human glutamate receptors. That agrees with the general opinion that glyphosate is not a significant human neurotoxin. However, the situation for honey bee glutamate receptors was markedly different. Glyphosate was predicted to outcompete glutamate in nearly all honey bee AMPA receptor isoforms, as well as many NMDA receptors. Furthermore, in many of the bee simulations, glyphosate was predicted to outcompete some, or even most, of the previously known positive control compounds that are experimentally known to modulate these proteins. We then chose to carefully validate trends for a representative subset of interesting receptors via accelerated molecular dynamics (MD) studies. MD simulations are often at least as quantitatively accurate as instrumental techniques like surface plasmon resonance, and they give substantially more detailed information about the nature of specific binding events.
ACB: And the dynamics simulations confirmed the docking results?
GHL: Confirmed and clarified. For starters, MD showed us that the one potentially relevant non-glutamate receptor, phosphoglycerate mutase, was likely not part of the toxicology mechanism, since glyphosate was clearly shown not to outcompete the native substrate (3-phosphoglycerate). That simplified our narrative. MD then confirmed that metabotropic and kainate glutamate receptors produce ambiguous results, so we discounted them as probable causes of glyphosate intoxication. That eliminated everything except NMDA and AMPA receptors and, for those, there was much stronger evidence – a modest but significant advantage for glyphosate competing for honey bee NMDA receptors, and a major effect predicted for AMPA.
ACB: Where AMPA receptors are important for motor control and sensory perception, while NMDA is important for cognition and memory. And you were able to learn all this using a few desktop computers and a small server?
GHL: Yes. I’ll admit we can’t tackle every problem at this level. Some science is far more data intensive and might instead demand a huge bank of high speed processors, but there are still many important experiments that can benefit from someone with a good desktop computer, a solid grasp of the principles being explored, and expertise with the right software. Many of my research achievements have involved being one of those ‘someones’. Of my nearly 200 publications, most involved helping other analytical scientists to predict, rationalize, or confirm insight lurking in their data.
ACB: So, most of your career has focused on helping applied scientists to pursue their ideas?
GHL: I’d say so, yes. A bit of my work has been more abstract – in my Ph.D. I developed novel quantum chemical methods and, in the years since, I’ve devised a few more algorithms as the need arose – things like a modified approach for receptor-based QSAR, and the application of feature biclustering to parse complex, multimodal pharmacology and toxicology data sets into mechanistically consistent subsets. But for the most part, I’ve used existing methods in ways that have helped others extract more insight from their experiments.
ACB: But the hypothesis attracting most of your attention now is your own. The misfold-based neurodegenerative model?
GHL: ‘Our’ own, I’d say, since we’ve thrashed through everything together. But yes, in a sense, I’ve had to scale back a lot of side projects recently to hone my focus on the mystery of neurodegeneration. By comparison to my earlier jack-of-all-trades project portfolio, it feels a bit narrow but, on the other hand, our work ties together the efforts of a big community. What we’re doing is combining a number of interesting observations, each made in isolation in diverse subfields of neuroscience, then exploring how these observations are all mutually consistent. Within that consistency, we’ve found what appear to be interesting commonalities across a variety of neuropathologies; insight that we hope may lead to novel treatments.
ACB: A variety of neuropathologies. In particular Alzheimer’s, but also lupus, bipolar disorder, and others.
GHL: Yes. There are distinctions among all neuropathologies, which is what makes them different diseases, but it’s been fascinating to find so many similarities – specific proteins that are upregulated consistently over quite a number of different disorders; similar inflammatory biomarkers; shared risk factors, and so forth. Right now, I’m looking at an intriguing number of biochemical analogies between Alzheimer’s, bipolar disorder, type 2 diabetes, glaucoma, COPD and a few other diseases. These are definitely different maladies, affecting different cells in different parts of the body, but I’d argue that some key symptoms all share common biomolecular malfunctions, common metabolic dysregulation, and I believe even share a common glycolipid target that localizes the pathology onto cell or synaptic membranes.
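The cross-disease comparison described here, looking for proteins and biomarkers consistently upregulated across nominally unrelated disorders, amounts to intersecting per-disease marker sets. The sketch below shows only the mechanics; the disorder and marker names are generic placeholders, not the actual biomarker data from this work.

```python
# Illustrative cross-disease commonality search: intersect per-disorder
# sets of upregulated markers. Names are placeholders, not real data.

from functools import reduce

def shared_markers(profiles):
    """Markers that appear in every disorder profile supplied."""
    return reduce(set.intersection, (set(p) for p in profiles.values()))

if __name__ == "__main__":
    profiles = {
        "disorder_1": {"marker_A", "marker_B", "marker_C"},
        "disorder_2": {"marker_A", "marker_C", "marker_D"},
        "disorder_3": {"marker_A", "marker_C", "marker_E"},
    }
    print(sorted(shared_markers(profiles)))
```

In practice such overlaps would be weighted by effect size and study quality rather than treated as simple set membership, but the basic move, pooling observations from separate disease literatures, is the same.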
ACB: Common proteins, shared metabolic dysfunction, and similar targets? Without revealing all of your secrets, can you share with readers a bit about how you homed in on these hypotheses?
GHL: Sure. At least 90% of the core ideas that we’ve incorporated into our model are well established in the scientific literature, and it was mostly a matter of combining biomarker knowledge with an understanding of how protein structural changes affect function and interaction profiles. It’s been a bit like solving a jigsaw puzzle. The only novel part of our approach is pulling puzzle pieces from a bunch of different boxes. A huge amount of Alzheimer’s research, for example, has involved researchers only pulling puzzle pieces from the box labeled ‘Amyloid hypothesis’, while not finding the time to sort through those other boxes labeled ‘Neuroinflammation’, or ‘Metabolic disorders’, or the like.
ACB: Not many Alzheimer’s researchers read papers about diabetes, or even bipolar disorder.
GHL: Exactly! Fewer still ever read much about COPD or glaucoma. Yet, in and around all of those different pathologies, there are very similar processes taking place, and it makes sense that some of those shared processes correlate with, or maybe partly cause, core pathologies in multiple different diseases.
ACB: Again, while being careful about intellectual property, could you sketch out an idea of what you think might be happening in these diseases?
GHL: I think so. It would come as little surprise to anyone studying neurodegeneration, that I do believe that Amyloid beta (Abeta) plays a key role in Alzheimer’s pathology. It might surprise some Alzheimer’s researchers, though, to learn that Abeta is pathologically relevant to bipolar, diabetes, various eye diseases, and COPD.
ACB: Does Amyloid beta behave similarly in all of those diseases?
GHL: No and yes. Abeta probably doesn’t form Alzheimer’s-like plaques in bipolar disorder, diabetes or COPD. Abeta does aggregate in ocular cataracts, but it’s not the primary component of those plaques. So, in that respect, the disorders are different. Yet, in all of these pathologies, there’s strong evidence that Abeta is chemically modified. Specifically, Abeta is ‘glycated’, which was the subject of a recent article of ours in CCHTS. There’s further evidence that the oligomers formed from glycated Abeta are structurally different from the oligomers, fibrils and plaques formed from non-glycated Abeta. Unfortunately, the vast majority of experiments conducted on Abeta aggregation have focused on the non-glycated, physiological form of the peptide.
ACB: So, if pathogenic amyloid has a structure that is different than what most people have studied, it might explain why so many Alzheimer’s drug candidates have failed to pass phase 2 or phase 3 clinical trials.
GHL: That’s my belief, yes. People have known about amyloid aggregation for as long as they’ve known about Alzheimer’s, but it’s been a constant struggle to grasp why it’s so neurotoxic. Billions of dollars have been spent trying to pharmaceutically dissolve amyloid plaques, or trying to suppress amyloid precursor protein, or influence how the precursor is cleaved, or to bind and clear any Abeta oligomer, but the outcomes have ranged from outright harm, to no effect, to marginal possible benefits inadequate to satisfy FDA efficacy standards. Alzheimer’s research has been… well, not quite like shooting in the dark, but maybe like hunting in a dim, thick forest, where your prey is well camouflaged. What we hope is that clues gathered from a broader spectrum of biochemistry and physiology will clarify the situation. Like high-tech hunters using pattern recognition to pinpoint a target, we hope to confidently spot the precise molecular interactions that drive Alzheimer’s pathology, and ultimately use the information to devise new medicines.
ACB: And the publication on this work is still pending.
GHL: For the time being, yes, in consideration of intellectual property. However, we are preparing to publish soon a synopsis of some of the key points of evidence, covering links between Abeta, glycation, inflammation and specific neuropathies. And the rest will follow, hopefully, before too long. I’m a huge proponent of open sharing of knowledge and data. We would never have reached our own conclusions without access to tremendously useful data and publications from numerous other sources. It’s a difficult environment right now for journals to thrive in, but the effective dissemination of knowledge is utterly crucial in the face of huge societal challenges like treating Alzheimer’s Disease.
ACB: Journals. That brings us back full circle to the reason we’re here. We recently published our short review on glycation pathology in the journal that you edit – Combinatorial Chemistry and High Throughput Screening. Do you see being a journal editor as an extension of your scientific work?
GHL: I think so. There’s a temptation to talk about the differences, since administering a broad-focus journal like CCHTS requires a wide-angle lens, while most scientific studies are very narrow. In truth, though, the scope of CCHTS – its blend of synthetic and analytical chemistry, analytical biology, pharmacology and toxicology, pharmacognosy and natural products – means that ideas from each of those disciplines have helped to open our eyes to the underlying mechanisms and potential treatments for odd neuropathologies.
ACB: So, you implied that journals were facing challenges in today’s research environment. What are some of those challenges?
GHL: To be honest, the ‘publish or perish’ mentality, which has now spread like wildfire across all six inhabited continents, is stretching the capacity of scientific publishers to be high-quality adjudicators of truth. Despite some helpful push-back from voices of reason who say that the quality of science is far more important than quantity, there’s now a huge torrent of rushed manuscripts being submitted. Many studies reaching our desk are significantly deficient in terms of the hard, unglamorous labor of rigorous validation. A few years back, I helped Rathnam Chaguturu (CCHTS Emeritus Editor) to write several reviews on scientific reproducibility, in light of reports like that of Begley and Ellis (https://www.ncbi.nlm.nih.gov/pubmed/22460880) finding that as few as 10% of landmark cancer biology studies were demonstrably reproducible. There are many ways a well-intentioned study may eventually fail in external validation, but the best guarantee of failure is to not pursue a solid battery of tests in-house before submitting the work for publication.
ACB: And what does a journal editor do about the problem?
GHL: Send weaker manuscripts back for improvement. A lot of deficient papers are interesting, and even plausible. A lot of them I would enjoy seeing in print, as long as I felt confident the results could be trusted. So, rather than outright reject every deficient paper, my Section Editors and I spend time identifying validation tests that, if performed, might really boost impact and believability. The authors then have the choice of adding validation comparable to what we’ve suggested, or submitting their paper elsewhere. Intervening like this is a lot of work for editors, but our publisher, Bentham Science, has been very supportive of the process. This reflects Bentham’s business model, which is geared toward sustaining journals through subscriptions rather than exorbitant publication costs. To keep selling subscriptions, you need high quality articles that are really going to get read and cited.
ACB: Where do you see the journal heading from here?
GHL: Well, I think we’re quite solidly settled on our niche of reporting accelerated technologies for research at the boundary of chemistry and biology. That general mandate is likely to remain the same, even though the actual terms ‘combinatorial chemistry’ and ‘high throughput screening’ are vestiges of early-2000s research. No matter. There are still plenty of journals with titles more outdated than ours, and one has to weigh the value of name recognition. In any case, our continued relevance will require sustaining and growing our impact factor (which has been recovering well after some weak years in mid-decade) and being early adopters of emerging technological foci and applications. The advent of new artificial intelligence, for example, is reforming how biomarkers are perceived and validated. We’ve seen a real surge in biomarker papers in the last 18 months.
ACB: And what prospects do you see out on the horizon?
GHL: As in, something speculative that might someday erupt into prominence? I see something that impacts both publishers like Bentham and information scientists like ourselves – a resurgence of text mining. Right now, with the huge increase in scientific publishing, there’s the same sort of high-volume ‘chatter’ going on that AI-oriented economists mine for financial trends, and that geopolitical risk analysts exploit to assess regime instability or terror threats. No single scientific researcher has time to pay attention to more than a tiny fraction of the chatter in a given discipline, and only a fraction of those papers are really rock-solid science anyway, but artificial intelligence tools are becoming robust enough to uncover and assess emerging trends, and that will help us assemble semi-connected bits, keywords and ideas in ways that should lead to transformational new hypotheses. Despite how chaotic things may seem in biomedical research right now, I feel really confident saying that amazing insight is taking shape, gradually bubbling its way into the open where it will become tomorrow’s scientific breakthroughs.
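A toy version of the literature "chatter" mining described above can be sketched as follows: count term frequencies in an early and a recent bucket of abstract text and flag terms whose usage is accelerating. The abstracts, terms, and growth threshold below are invented for illustration; real pipelines would run NLP toolkits over full-text corpora.

```python
# Illustrative trend mining over (invented) abstract snippets: flag terms
# whose recent frequency is at least `min_growth` times their early one.

from collections import Counter

def term_counts(abstracts):
    """Whitespace-tokenized, lowercased term counts across abstracts."""
    counts = Counter()
    for text in abstracts:
        counts.update(text.lower().split())
    return counts

def rising_terms(early, recent, min_growth=2.0):
    """Terms growing by at least `min_growth` x between the two buckets."""
    early_c, recent_c = term_counts(early), term_counts(recent)
    return sorted(
        term for term, n in recent_c.items()
        # 0.5 stands in for terms unseen early, so any new term can qualify
        if n >= min_growth * early_c.get(term, 0.5)
    )

if __name__ == "__main__":
    early = ["receptor docking study", "receptor binding assay"]
    recent = ["biomarker discovery pipeline", "biomarker validation", "docking biomarker"]
    print(rising_terms(early, recent))
```

Raw frequency counts like these are crude; production systems would normalize by corpus size and use phrase detection, but the core idea of surfacing accelerating topics is the same.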