Shotgun Searching For Drugs

The search for new drugs is daunting, expensive, and risky.

If chemicals are confined to molecular weights of less than 600 Da and consist of common atoms, the chemistry space is estimated to contain 10^40 to 10^100 molecules, an impossibly large space to search for potential drugs [1]. To cope with this vastness, "maximal chemical diversity" [2] was applied in constructing large experimental screening libraries. Such libraries have been directed at biological "targets" (proteins) to identify active molecules, in the hope that some of these "hits" may someday become drugs. The current target space is very small: fewer than 500 targets account for all known drugs [3]. This number may expand to several thousand in the near future as genomics-based technologies uncover new target opportunities [4]. For example, mapping of the human genome has identified over 3000 transcription factors, 580 protein kinases, 560 G-protein coupled receptors, 200 proteases, 130 ion transporters, 120 phosphatases, over 80 cation channels, and 60 nuclear hormone receptors [5].
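A back-of-envelope calculation makes the vastness argument concrete. The sketch below (the 3-million-per-year screening rate is taken from the figures cited later in this section; the rest is simple arithmetic, not data from the cited studies) shows that even the low estimate of chemistry space could never be screened exhaustively:

```python
# Back-of-envelope: even the LOW estimate of drug-like chemistry space
# (10^40 molecules) is far beyond any conceivable screening campaign.
SPACE_LOW = 10**40          # lower estimate of chemistry space [1]
RATE_PER_YEAR = 3 * 10**6   # molecules a large company may screen per year

years_needed = SPACE_LOW / RATE_PER_YEAR
print(f"Years to screen 10^40 molecules at 3M/year: {years_needed:.1e}")
# On the order of 10^33 years -- many orders of magnitude longer than
# the age of the universe (~1.4e10 years).
```

This is why a "maximally diverse" library, however large, can only ever sample a vanishing fraction of the space.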

Although screening throughputs have increased massively since the early 1990s, lead discovery productivity has not increased accordingly [6-8]. Lipinski has concluded that maximal chemical diversity is an inefficient library design strategy, given the enormous size of the chemistry space, and especially because clinically useful drugs appear to exist as small, tight clusters in chemistry space: ''one can make the argument that screening truly diverse libraries for drug activity is the fastest way for a company to go bankrupt because the screening yield will be so low'' [1]. Hits are made in pharmaceutical companies, but this is because the most effective (not necessarily the largest) screening libraries are highly focused, reflecting the putative tight clustering. Looking for ways to reduce the number of tests, to make the screens ''smarter,'' has enormous cost-reduction implications.

Absorption and Drug Development: Solubility, Permeability, and Charge State. By Alex Avdeef. ISBN 0-471-423653. Copyright © 2003 John Wiley & Sons, Inc.

The emergence of combinatorial methods in the 1990s has led to enormous numbers of new chemical entities (NCEs) [9]. These are the molecules of the newest screening libraries. A large pharmaceutical company may screen 3 million molecules for biological activity each year, yielding some 30,000 hits. Most of these molecules, however potent, do not have the right physical, metabolic, and safety properties. Large pharmaceutical companies can cope with about 30 molecules taken into development each year. In a good year, three molecules reach the product stage; some years see none. These are rough numbers, recited at various conferences.
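The rough figures above can be turned into attrition rates. The following sketch simply works through that arithmetic using the conference-recited numbers quoted in this section:

```python
# Illustrative screening-funnel arithmetic from the rough numbers above.
screened = 3_000_000   # molecules screened per year
hits = 30_000          # biological-activity hits
development = 30       # molecules taken into development
products = 3           # products in a good year

hit_rate = hits / screened        # 0.01  -> a 1% hit rate
dev_rate = development / hits     # 0.001 -> 1 in 1000 hits advances
overall = products / screened     # 1e-06 -> 1 product per million screened
print(f"hit rate {hit_rate:.1%}, hits to development {dev_rate:.2%}, "
      f"overall yield {overall:.0e}")
```

One product per million molecules screened is the scale of attrition that motivates smarter, more focused screens.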

A drug product may cost as much as $880 million to bring to market. It has been estimated that about 30% of the molecules that reach development are eventually rejected because of ADME (absorption, distribution, metabolism, excretion) problems. Much more money is spent on compounds that fail than on those that succeed [10,11]. The industry has begun to respond by attempting to screen out molecules with inappropriate ADME properties during discovery, before they reach development. That, however, raises another challenge: how to do the additional screening quickly enough while keeping costs down [6,12].
