Computational Toxicology for Drug Safety and a Sustainable Environment
Ebook · 494 pages · 4 hours

About this ebook

Computational Toxicology for Drug Safety and a Sustainable Environment is a primer on computational techniques in environmental toxicology for scholars. The book presents nine in-depth chapters, authored by expert academicians and scientists, aimed at giving readers an understanding of how computational models, software and algorithms are used to predict the toxicological profiles of chemical compounds. The book also aims to help academics view toxicological assessment through the lens of sustainability by providing an overview of recent developments in environmentally friendly practices. The chapters review the strengths and weaknesses of existing methodologies and cover new developments in computational tools to explain how researchers aim to obtain accurate results. Each chapter features a simple introduction and a list of references to benefit a broad range of academic readers.

List of topics:
1. Applications of computational toxicology in pharmaceuticals, environmental and industrial practices
2. Verification, validation and sensitivity studies of computational models used in toxicology assessment
3. Computational toxicological approaches for drug profiling and development of online clinical repositories
4. How to neutralize chemicals that kill environment and humans: an application of computational toxicology
5. Adverse environmental impact of pharmaceutical waste and its computational assessment
6. Computational aspects of organochlorine compounds: DFT study and molecular docking calculations
7. In-silico studies of anisole and glyoxylic acid derivatives
8. Computational toxicology studies of chemical compounds released from firecrackers
9. Computational nanotoxicology and its applications

Readership
Graduate and postgraduate students, academics and researchers in pharmacology, computational biology, toxicology and environmental science programs.

Language: English
Release date: Mar 10, 2001
ISBN: 9789815196986



    Book preview

    Computational Toxicology for Drug Safety and a Sustainable Environment - Tahmeena Khan

    Applications of Computational Toxicology in Pharmaceuticals, Environmental and Industrial Practices

    Nidhi Singh¹, *, Seema Joshi¹, Jaya Pandey²

    ¹ Department of Chemistry, Isabella Thoburn College, Lucknow, U.P., India

    ² Amity Institute of Applied Sciences, Amity University, Lucknow, U.P., India

    Abstract

    Computational toxicology is a rapidly developing field that uses computational algorithms and mathematical models for a better understanding of the toxicity of compounds and test systems. This recent branch combines various fields, encompassing chemistry, computer science, biology, biochemistry, mathematics, and engineering. This chapter focuses on the usage of computational toxicology in various fields. This multifaceted field finds application in almost every pharmaceutical and industrial process, which in turn enables safer environmental practices. Computational toxicology has revolutionized drug discovery, as it has helped in the production of significantly more efficient drug molecules through time-saving and cost-effective methods. It has also proved a boon for various industries, ranging from often-used cosmetics to daily-use food products, as toxicological assessment of their chemical constituents enables quicker and safer production. These computational assessments also reduce chemical wastage and thereby support healthier environmental practices. Besides this, pollutant categorization and waste management through computational tools have been favoured by many agencies working for environmental sustainability. To sum up, computational technology has transformed the processes and practices followed in pharmaceutics, environmental protection and industry, and has paved the way for efficient, cost-effective, and less hazardous routes.

    Keywords: Computational toxicology, Drug discovery, Environment, Industry, Pharmaceutics.


    * Corresponding author Nidhi Singh: Department of Chemistry, Isabella Thoburn College, Lucknow, U.P., India; E-mail: nidhi.singh23081993@gmail.com

    INTRODUCTION

    Computational toxicology is a rapidly advancing technology that uses mathematical models built from integrated data, delivered through accessible computer-based software applications or programs, to predict the metabolic and toxic properties of chemicals, drugs, edible items, pollutants and other substances [1]. Such predictions can help reduce synthesis time and improve the efficiency of many products without detrimental effects on the environment. Computational toxicology integrates various disciplines, including chemistry, mathematics, biochemistry, medicine, computer science, biology and engineering [2, 3]. An integrated approach to the various scientific fields in computational toxicology is depicted in Fig. (1). Besides toxicological predictions, it also predicts the metabolic interactions of chemicals at the cellular and molecular levels in biological systems, making it a useful branch of study in multifarious fields [4]. The integrative approaches for toxicological research are modelled into computational tools for easy use by researchers and scientists [5]. This predictive modelling has greatly reduced the time consumed in the production of drugs, cosmetics, and food products, the unnecessary hazardous effects of chemical wastage on the environment, and the reliance on in vivo methods and animal testing, while improving the efficacy of drugs and cosmetic products with minimal health hazard risks [6].

    Fig. (1). Computational toxicology as an integrated sub-discipline of various disciplines.

    The integrated computational models for toxicological assessment are prepared through sequential steps: identification of user needs, data collection, expert assessment, data cleanup, data harmonization or standardization, and finally toxicity assessment [7, 8]. These basic steps form the basis of each artificial intelligence-based predictive model in computational toxicology. The first step ensures that the demand of the user is met, i.e., a clear picture of user needs is established. For example, if the goal is toxicity assessment of a hazardous pollutant, data should be collected accordingly; if it is an assessment of the toxicity of a chemical compound or of the permissible limits of a component in a product, the data must be collected as per those needs; and if a manufacturer intends to prepare a new formulation, data on the comparative toxicological limits of the various chemical components must be curated [9, 10]. A clear start yields the most reliable prediction models; thus, identifying the appropriate user needs helps define the regulatory endpoints for predictive assessment [11]. The second step is data collection according to the requirements of the user. Sufficient metadata and reproducible data are the key requirements for the development of a reliable model. Data are collected from primary data reports, aggregated reports, repositories such as PubChem, or from existing computational predictive models.

    The third step is expert assessment, in which subject matter experts evaluate the data, add context to existing or incomplete records, and remove irrelevant data. The fourth step is data cleanup, where erroneous data are identified and sorted out for a better and more efficient assessment [12-14]. This step addresses changes in spelling, special characters, and typographical errors incompatible with the computational tools, and resolves these inconsistencies through automated workflow processing of the data. The next step is data harmonization or standardization, where the sorted data are standardized to be compatible with integrated chemical environments and to increase interoperability with resources such as the EPA CompTox Chemicals Dashboard. In this step, data are standardized as per authoritative and regulatory standards [15, 16]. The final step uses the standardized data, in conjunction with an integrated chemical environment or other descriptors, for toxicity assessment. These sequential steps are diagrammatically explained in Fig. (2).
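    As a purely illustrative sketch of the cleanup and harmonization steps (a minimal example using the Python pandas library, with a small in-memory table whose column names and values are invented stand-ins for a real repository export), such a workflow might look like this:

import pandas as pd

# Toy data standing in for a raw export from a repository such as PubChem;
# the column names and values are invented for illustration only.
raw = pd.DataFrame({
    "Chemical Name ": ["Benzene", "benzene", "Toluene"],
    "CASRN": ["71-43-2", "71-43-2", " 108-88-3"],
    "LD50 (mg/kg)": ["930", "930", "636"],
})

clean = (
    raw
    .rename(columns=lambda c: c.strip().lower().replace(" ", "_"))  # harmonize headers
    .assign(
        chemical_name=lambda d: d["chemical_name"].str.strip().str.lower(),  # normalize names
        casrn=lambda d: d["casrn"].str.strip(),                              # remove stray spaces
    )
    .drop_duplicates()                    # remove duplicate records
    .astype({"ld50_(mg/kg)": float})      # enforce a numeric dose column
)
print(clean)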

    Computational toxicology has numerous advantages over traditional toxicology testing methods. It is a time-saving, cost-effective and eco-friendly approach compared with the in vivo approach, in which actual animal models are used for toxicity prediction studies at the cost of animal lives, chemicals and time. These in silico and in vitro models are accurate as well as advantageous in terms of time, economics and ecological practice. Thus, computational toxicology is highly advantageous over traditional toxicology testing.

    Fig. (2). Sequential steps for the development of a computational model for toxicity assessment.

    Applications of Computational Toxicology

    Computational toxicology has a wide range of applications in pharmaceutics, diagnostics, therapeutics, synthesis, cosmetics, food and beverages, and environmental risk assessment. Toxicogenomics, metabolomics, and proteomics create the data sets on which computational software operates to assess gene or protein expression, or metabolite generation, at a particular cell, tissue, or organ level [17, 18]. This supports risk assessment of different response pathways for toxic or non-toxic outcomes and the identification of major gene products that might regulate the biological behaviours leading to toxicity [19]. Prediction of the toxicology of chemical compounds via QSAR (quantitative structure-activity relationship) modelling was one of the first applications of computational toxicology [20]. QSAR models combine chemical and biological descriptors for a better assessment of the toxicity of the chemical component itself as well as of its toxic effects upon interaction with biological systems [21]. More recently, high-throughput screening of chemicals for their biological responses has proved to be a boon for pharmaceutical industries, saving time, labour and cost [22]. High-throughput assays are also conducted by the National Institutes of Health's Chemical Genomics Center for the identification of biological processes that might affect the environment. Apart from risk assessment, computational models can help predict the mode of action of a chemical component or effluent, the dose-response behaviour of drugs, and the limits of pollutant exposure in the environment [23].
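    To make the QSAR idea concrete, the following minimal sketch fits a regression model that maps chemical descriptors to a toxicity endpoint. It uses scikit-learn, and the descriptor matrix and endpoint values are randomly generated placeholders standing in for curated experimental data:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Placeholder data: 200 "compounds" described by 6 descriptors (e.g., logP, MW, TPSA, ...)
# and a continuous toxicity endpoint (e.g., log LC50). Real QSAR work would use
# experimentally measured endpoints and carefully validated descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = X @ rng.normal(size=6) + rng.normal(scale=0.3, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("External-set R^2:", round(r2_score(y_test, model.predict(X_test)), 3))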

    Computational toxicology helps in the prediction of dosage (dose range or limit), toxicity endpoints, physicochemical properties, and health effects and risks. These predictions are governed by SAR (structure-activity relationship) models. This technology helps in understanding the mechanism or mode of action and the metabolism of a compound, which can guide the researcher to the best possible route with minimum toxic effects [24]. It also assesses chemical-biological molecular interactions and health risks, and it helps enhance the MRL (Manufacturing Readiness Levels) approach in the processing and production of drugs and food products, as shown in Fig. (3) [25]. Computational toxicology can help predict populations susceptible to a particular health hazard in a short time by detecting and simulating the pollutants prevailing in that area [26]. Thus, overall, this interdisciplinary approach has proved to be a boon for the medical, environmental, and industrial fields by reducing the risks and time frames of diagnosis and production and by enhancing efficiency and cost-effectiveness.

    Fig. (3). Manufacturing readiness levels.

    Applications of Computational Toxicology in Pharmaceuticals

    Computational toxicology has played a critical role in diagnostic and therapeutic areas. High-throughput screening of chemical compounds, molecular simulations and docking studies of drug-biosystem interactions have greatly reduced the risk of drug failures and enhanced the efficiency of drug production [27, 28]. Computational studies can also suggest safety-driven modifications to a drug at an early stage, enabling its development as a more potent and less toxic molecule. This was not possible earlier, when drugs were removed from the market only after they had already caused substantial loss and harm. For example, nefazodone, an antidepressant, had to be withdrawn from the market in 2003 owing to its high hepatotoxicity [29]. In in vitro assays, the drug had also shown safety liabilities in terms of general toxicity and mitochondrial dysfunction. Since the advent of computational toxicology, such issues can be addressed at an early stage of drug development, so that the risks of health hazards are reduced [30].

    The drug discovery process has been highly optimized by investigative and computational toxicology. In this process, the molecular target and disease are fixed first, so that the target site or specific receptor protein is known; data are then extracted from chemical space and drug screening is performed [31]. This screening provides the best lead compounds, which can be further optimized, advanced to clinical trials and finally taken to marketing and production. Computational screening saves time and also limits the use of animal models, thus proving more beneficial than traditional synthetic processes [32]. Besides, the screening also detects chemical components of a drug that are incompatible with the receptor, which can be modified or substituted with other groups to alter toxicity and yield more efficient drug molecules. For QSAR-based drug discovery, chemogenomic data are first accumulated from databases, and then chemical descriptors are calculated as an aid for wet-lab or bench chemists [33]. These chemical descriptors include values for log P, log S, pKa/pKb, total polar surface area (TPSA), molecular polar surface area (MPSA), molecular volume, molecular weight (M.W.), the number of hydrogen bond acceptors (HBA), the number of hydrogen bond donors (HBD) and the number of rotatable bonds (nrotb). These descriptors are then checked for agreement or non-agreement with Lipinski's Rule of Five (Ro5). This rule states that a good, orally active, and biocompatible drug molecule should fulfil the following criteria:

    • Log P < 5. The calculated octanol-water partition coefficient should not exceed 5.

    • M.W. < 500 daltons. The molecular weight of the potential drug molecule should be less than 500 daltons.

    • Nrotb < 10. The number of rotatable bonds should be less than 10.

    • HBA ≤ 10. The number of hydrogen bond acceptors (nitrogen and oxygen atoms) should be limited to 10 or fewer.

    • HBD ≤ 5. The number of hydrogen bond donors (N-H and O-H bonds) should not exceed 5.

    If the screened molecule violates more than one of these criteria, it may have poor or problematic bioavailability [34]. Thus, these predictions help in assessing the potency of screened drug molecules at an initial stage and can support the selection of lead compounds in much less time and in a cost-effective way. Besides, the chemical descriptors of the drug candidates are also screened against target families such as kinase inhibitors, ion-channel modulators, nuclear receptor ligands, protease inhibitors, enzyme inhibitors and G-protein coupled receptor (GPCR) ligands. If the bioactivity scores for the screened molecules lie between -0.5 and 0.0, the compounds are considered fairly potent and moderately active; if the score is greater than 0.0, the compounds are considered highly potent; and if the score is less than -0.5, the compound is a poor candidate for a drug molecule and requires modification for improvement [35-38]. These studies can easily filter a large number of molecules and can help separate ordinary chemical compounds from lead drug molecules in a cost-effective and time-constrained manner. A process depiction of computationally modelled drug discovery can be seen in Fig. (4).
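    As an illustration of this kind of descriptor-based filtering, the minimal sketch below computes the Ro5-relevant descriptors for a molecule from its SMILES string using the open-source RDKit toolkit (assuming RDKit is installed; the SMILES shown, aspirin, is a toy example and is not drawn from this chapter):

from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def rule_of_five_report(smiles: str) -> dict:
    """Compute Ro5-relevant descriptors and count violations for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    props = {
        "MW": Descriptors.MolWt(mol),                # molecular weight (daltons)
        "LogP": Crippen.MolLogP(mol),                # calculated octanol-water partition coefficient
        "HBD": Lipinski.NumHDonors(mol),             # hydrogen bond donors
        "HBA": Lipinski.NumHAcceptors(mol),          # hydrogen bond acceptors
        "RotB": Descriptors.NumRotatableBonds(mol),  # rotatable bonds
        "TPSA": Descriptors.TPSA(mol),               # topological polar surface area
    }
    props["Ro5_violations"] = sum([
        props["MW"] >= 500,
        props["LogP"] > 5,
        props["HBD"] > 5,
        props["HBA"] > 10,
    ])
    return props

print(rule_of_five_report("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin as a toy example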

    Fig. (4). Application of computational toxicology studies in drug discovery/pharmaceutics.

    Molecular docking and molecular simulation studies have played a major role in the drug discovery side of pharmaceutics [39]. Docking is not only a tool for screening potent drug molecules but also helps in designing them by recognizing the targeted active binding site in the receptor [40]. Simulation studies help predict the possible risks of a particular chemical molecule when used in in vivo models, thereby limiting the use of animal models to drug molecules with a high probability of success [41, 42]. Molecular docking is an in silico method for identifying the best binding pose of a ligand within the active site of the targeted receptor, based on docking scores, as shown in Fig. (5). The poses are rank-ordered by score, and the most compatible receptor site-ligand binding mode is used for the development or prediction of a potent drug molecule [43]. The technique combines and optimizes various parameters such as hydrophobic, steric, and electrostatic interactions. When the active site is unknown, blind docking is used to predict a suitable target active site for the synthesized compound, which can be guided by knowledge of the compound's mechanism of action. Such docking can also suggest an agonist or antagonist mode of action for a receptor site and synthesized ligand [44]. Various docking software packages, based on ensemble, induced-fit and rigid-receptor models, are available and can readily flag the risk of a drug candidate before it is used in in vivo models.
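    A minimal sketch of how such a docking run might be launched from Python is shown below. It assumes the AutoDock Vina command-line tool is installed and that receptor.pdbqt and ligand.pdbqt are hypothetical, already-prepared input files; the grid-box coordinates are placeholders that would normally be centred on the known or predicted active site:

import subprocess

# Hypothetical, pre-prepared PDBQT files and placeholder grid-box coordinates.
cmd = [
    "vina",
    "--receptor", "receptor.pdbqt",
    "--ligand", "ligand.pdbqt",
    "--center_x", "10.0", "--center_y", "12.5", "--center_z", "-4.0",  # box centre on the active site
    "--size_x", "20", "--size_y", "20", "--size_z", "20",              # box dimensions in angstroms
    "--exhaustiveness", "8",
    "--out", "docked_poses.pdbqt",
]
subprocess.run(cmd, check=True)
# The output file contains docked poses rank-ordered by predicted binding affinity (kcal/mol).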

    Fig. (5). Molecular docking of the chemical compound at the active site.

    Computational toxicology simulation studies can also be used to evaluate the interaction of genes in developmental processes. For example, critical genes involved in somite formation in vertebrate embryos can be modelled with respect to the development of somitic boundaries and their positional information, which could help assess whether a particular toxic compound prevalent in an area causes a particular disease in the residents [45]. Thus, this could also serve as a diagnostic tool. Besides, physiologically based pharmacokinetic (PBPK) models help project the relationship between administered doses and delivered doses, i.e., between the amount of drug taken by the patient and the amount of drug metabolized in the body [46]. Such a model uses a realistic description of mammalian physiology and biochemistry as the basis of its algorithm. It can support risk assessment in patients, address the critical gap between in vitro and in vivo models, and attempt to provide dose-response relationships between chemicals and biological systems. This might help regulatory bodies define dosage limits for a particular drug in different age groups and set worldwide standards for dosage [47].
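    Full PBPK models describe many tissues, blood flows and clearance processes. As a deliberately simplified stand-in (not a full PBPK model), the sketch below solves a one-compartment model with first-order absorption and elimination using SciPy; the dose and rate constants are invented, and the example only illustrates how an administered dose relates to the amount reaching the plasma compartment over time:

import numpy as np
from scipy.integrate import solve_ivp

def one_compartment(t, y, ka, ke):
    """First-order absorption from the gut and first-order elimination from plasma."""
    gut, plasma = y
    return [-ka * gut, ka * gut - ke * plasma]

dose_mg = 100.0      # hypothetical administered oral dose
ka, ke = 1.0, 0.2    # hypothetical absorption and elimination rate constants (1/h)

sol = solve_ivp(one_compartment, (0.0, 24.0), [dose_mg, 0.0],
                args=(ka, ke), t_eval=np.linspace(0.0, 24.0, 49))
print(f"Peak amount in the plasma compartment: {sol.y[1].max():.1f} mg")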

    Thus, computational toxicology has proved to be a blessing for pharmaceutics in terms of drug discovery (including both synthesis and modification), drug-gene interactions, the diagnosis of diseases prevalent in a particular area, and the risk assessment of dose intake, thereby helping to maintain drug-toxicity standards.

    Applications of Computational Toxicology in Environmental Practices

    The advent of computational toxicology has played a pivotal role in maintaining a safe environment through the risk assessment of toxic pollutants discharged into it. The risk assessment ability of PBPK models can avoid many hazardous synthetic or interactive procedures and is helpful for environmental protection. Scientific standardization agencies rely on these innovations and technologies for setting standard limits on toxicity from hazardous pollutant or effluent exposures. Computational toxicology can also help better characterize effluents in terms of target-site exposure. The dose-response relationships predicted through computational models can be useful in regulating doses so that toxic emissions into the environment are minimized [48, 49]. The U.S. EPA (United States Environmental Protection Agency) has developed many large data resource centres to assist this data-intensive technology. Along with PBPK models, benchmark dose (BMD) models also assist in risk assessment of toxic compounds [50]. The BMD model analyzes all the experimentally accumulated information on a dose-response relationship curve and, with minimal extrapolation, provides human health guidance against toxic substances or gives the threshold of toxicological concern (TTC) [51, 52]. The TTC for compounds can be assigned as per Cramer's classification (a small worked example follows the list):

    • Substances with simple chemical structures, identified metabolic pathways and non-hazardous end products are proposed to have a low toxicity profile (Cramer Class I); their TTC is 30 µg/kg body weight per day.

    • Substances with relatively complex chemical structures, with only a predicted mechanism of action and no definite metabolic pathway, are suggested to possess a certain level of toxicity (Cramer Class II); their TTC is 9 µg/kg body weight per day.

    • Substances with complex chemical structures that interact with environmental factors and have an unidentified metabolic pathway are labelled as having a high toxicity profile (Cramer Class III), with a TTC of 1.5 µg/kg body weight per day.
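    As a small worked example of how these per-kilogram thresholds translate into whole-body exposure limits (assuming, purely for illustration, a 60 kg adult):

# TTC values for the Cramer classes listed above (µg per kg body weight per day).
TTC_UG_PER_KG_DAY = {"Class I": 30.0, "Class II": 9.0, "Class III": 1.5}

def daily_exposure_threshold(cramer_class: str, body_weight_kg: float = 60.0) -> float:
    """Scale the per-kg TTC to a whole-body daily exposure threshold in µg/day."""
    return TTC_UG_PER_KG_DAY[cramer_class] * body_weight_kg

# For a 60 kg adult: Class I -> 1800 µg/day, Class II -> 540 µg/day, Class III -> 90 µg/day.
for cls in TTC_UG_PER_KG_DAY:
    print(f"{cls}: {daily_exposure_threshold(cls):.0f} µg/day")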

    These models help in the prediction of adverse outcome pathways (AOPs), which can support the management of environmental repercussions by suggesting alternative routes or limits on compound dosage. An AOP is a conceptual framework that depicts current knowledge of the association between a molecular initiating event and an adverse outcome, at a level of biological organization relevant to risk assessment, as depicted in Fig. (6). This could help various regulatory authorities maintain ecotoxicological standards [53].

    Fig. (6). Adverse outcome pathway depicting organism and population responses.

    Besides these direct applications, computational toxicology has implicitly facilitated a major transformation in environmental sustainability procedures. High-throughput screening, QSAR studies and pharmacokinetic profiling of drug molecules have greatly reduced the chemical effluents produced in the laboratory during synthesis [54]. Earlier, hundreds of molecules might be synthesized to obtain a single lead molecule for drug discovery, and the resulting chemical waste was dumped from laboratories into the natural environment. With the advent of computational toxicology, these practices have been transformed: computational screening of thousands of chemical compounds can provide four to five lead molecules, and once the lead molecules are identified, only these are synthesized and optimized for drug discovery. This saves a great deal of chemicals and time, and because fewer chemicals are used, less chemical waste is disposed of and less harm is done to the environment [55]. It also reduces the cost of each drug synthesis, making the approach cost-efficient as well. Apart from this, when current computational toxicology tools are used instead of traditional synthetic methods, only a few in vivo or animal models are required for testing, rather than testing each molecule synthesized by traditional routes. This helps in time conservation, cost reduction and animal preservation, thereby helping to maintain the ecological pyramid of the environment [56].

    This interdisciplinary combination of computing and chemistry goes hand in hand with green chemistry, a branch that focuses on designing products and processes that minimize the use and generation of hazardous chemicals. This has helped chemistry transform its conventional synthetic methods. Green chemistry targets alternative, sustainable technologies, and many of its principles are already reflected in computational toxicology studies, such as less hazardous chemical effluents, the design of safer chemicals, design for energy efficiency, reduced derivatization, real-time analysis for pollution prevention, and safer alternative routes [57, 58].

    Thus, computational toxicology has helped a great deal in transforming environmental practices that cause harm to the environment and has provided a much safer and more reliable alternative for environmental protection without hindering production and development. It also reinforces the concept of green chemistry, which remains the best alternative to conventional methods to date.

    Applications of Computational Toxicology in Industrial Practices

    Computational toxicology has played a key role in industries as well. It has improved cost efficiency and
