
Applications of Artificial Intelligence Techniques in the Petroleum Industry

Ebook · 531 pages · 3 hours


About this ebook

Applications of Artificial Intelligence Techniques in the Petroleum Industry gives engineers a critical resource to help them understand the machine learning that will solve specific engineering challenges. The reference begins with fundamentals, covering preprocessing of data, types of intelligent models, and training and optimization algorithms. The book moves on to methodically address artificial intelligence technology and applications by the upstream sector, covering exploration, drilling, reservoir and production engineering. Final sections cover current gaps and future challenges.

  • Teaches how to apply machine learning algorithms that work best in exploration, drilling, reservoir or production engineering
  • Helps readers increase their existing knowledge on intelligent data modeling, machine learning and artificial intelligence, with foundational chapters covering the preprocessing of data and training on algorithms
  • Provides tactics on how to cover complex projects such as shale gas, tight oils, and other types of unconventional reservoirs with more advanced model input
Language: English
Release date: Aug 26, 2020
ISBN: 9780128223857
Author

Abdolhossein Hemmati-Sarapardeh

Abdolhossein Hemmati-Sarapardeh is currently an assistant professor at Shahid Bahonar University of Kerman. He is also an adjunct professor at Jilin University and Northeast Petroleum University in China, and was previously a visiting scholar at the University of Calgary. He earned a PhD in petroleum engineering from the Amirkabir University of Technology, an MSc in hydrocarbon reservoir engineering from the Sharif University of Technology, and a BSc in petroleum engineering from the Amirkabir University of Technology. His research interests include enhanced oil recovery processes, heavy oil systems, nanotechnology, and applications of intelligent models in the petroleum industry. He was recognized as a distinguished MSc graduate and an honors PhD student, and received the National Elites Foundation Scholarship. He serves as an associate editor of the Journal of Petroleum Science and Engineering. He has published over 150 journal articles, three books, and several conference proceedings, and earned a patent in 2016.


    Book preview

    Applications of Artificial Intelligence Techniques in the Petroleum Industry - Abdolhossein Hemmati-Sarapardeh

    Applications of Artificial Intelligence Techniques in the Petroleum Industry

    Abdolhossein Hemmati-Sarapardeh

    Department of Petroleum Engineering, Shahid Bahonar University of Kerman, Kerman, Iran

    Aydin Larestani

    Department of Petroleum Engineering, Shahid Bahonar University of Kerman, Kerman, Iran

    Menad Nait Amar

    Département Etudes Thermodynamiques, Division Laboratoires, Sonatrach, Boumerdes, Algeria

    Laboratory of Hydrocarbons Physical Engineering, Faculty of Hydrocarbons and Chemistry, University of M’Hamed Bougara Boumerdes, Boumerdes, Algeria

    Sassan Hajirezaie

    Department of Civil and Environmental Engineering, Princeton University, NJ, United States

    Table of Contents

    Cover image

    Title page

    Copyright

    About the author

    Chapter 1. Introduction

    Abstract

    Contents

    1.1 Overview

    1.2 Preprocessing of data

    1.3 Processing of data

    1.4 Postprocessing of data

    1.5 Applicability domain of a model

    1.6 Sensitivity analysis on models’ inputs

    1.7 The areas of intelligent models applications in the petroleum industry

    References

    Chapter 2. Intelligent models

    Abstract

    Contents

    2.1 Artificial neural networks

    2.2 Fuzzy logic systems

    2.3 Adaptive neuro-fuzzy inference system

    2.4 Support vector machine

    2.5 Decision tree

    2.6 Group method of data handling

    2.7 Genetic programming

    2.8 Gene expression programming

    2.9 Case-based reasoning

    2.10 Committee machine intelligent system

    References

    Chapter 3. Training and optimization algorithms

    Abstract

    Contents

    3.1 Overview

    3.2 Genetic algorithm

    3.3 Differential evolution

    3.4 Particle swarm optimization

    3.5 Ant colony optimization

    3.6 Artificial bee colony

    3.7 Firefly algorithm

    3.8 Imperialist competitive algorithm

    3.9 Simulated annealing

    3.10 Coupled simulated annealing

    3.11 Gravitational search algorithm

    3.12 Cuckoo optimization algorithm

    3.13 Gray wolf optimization

    3.14 Whale optimization algorithm

    3.15 Levenberg–Marquardt algorithm

    3.16 Bayesian regularization algorithm

    3.17 Scaled conjugate gradient algorithm

    3.18 Resilient backpropagation algorithm

    References

    Chapter 4. Application of intelligent models in reservoir and production engineering

    Abstract

    Contents

    4.1 Reservoir fluid properties

    4.2 Rock properties

    4.3 Enhanced oil recovery

    4.4 Well test analysis

    4.5 Formation damage

    4.6 Asphaltene

    4.7 Production pipelines

    4.8 Wax

    4.9 Other applications

    References

    Chapter 5. Application of intelligent models in drilling engineering

    Abstract

    Contents

    5.1 Drilling fluids

    5.2 Lost circulation problem

    5.3 Stuck pipe

    5.4 Flow patterns and frictional pressure loss of two-phase fluids

    5.5 Rate of penetration

    5.6 Other applications

    References

    Chapter 6. Application of intelligent models in exploration engineering

    Abstract

    Contents

    6.1 Overview

    6.2 Geochemistry

    6.3 Geophysics

    6.4 Petrophysics

    6.5 Geomechanical characterization of organic-rich shales

    6.6 Brittleness index in shale gas and tight oils

    6.7 Total organic carbon determination

    6.8 Shear wave velocity

    6.9 Flow units

    6.10 Facies identification from well log

    References

    Chapter 7. Weaknesses and strengths of intelligent models in petroleum industry

    Abstract

    Contents

    7.1 Overview

    7.2 Intelligent models versus theoretical models

    7.3 Intelligent models versus empirical correlations

    7.4 Effect of the number of actual data

    7.5 Validation of the developed models

    References

    Index

    Copyright

    Gulf Professional Publishing is an imprint of Elsevier

    50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

    The Boulevard, Langford Lane, Kidlington, Oxford, OX5 1GB, United Kingdom

    Copyright © 2020 Elsevier Inc. All rights reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

    This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

    Notices

    Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

    Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

    To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

    British Library Cataloguing-in-Publication Data

    A catalogue record for this book is available from the British Library

    Library of Congress Cataloging-in-Publication Data

    A catalog record for this book is available from the Library of Congress

    ISBN: 978-0-12-818680-0

    For Information on all Gulf Professional Publishing publications visit our website at https://www.elsevier.com/books-and-journals

    Publisher: Brian Romer

    Acquisitions Editor: Katie Hammon

    Editorial Project Manager: Ali Afzal-Khan

    Production Project Manager: Sojan P. Pazhayattil

    Cover Designer: Christian J. Bilbow

    Typeset by MPS Limited, Chennai, India

    About the author

    Abdolhossein Hemmati-Sarapardeh is currently an assistant professor in the Department of Petroleum Engineering, Shahid Bahonar University of Kerman, Iran. He is also an adjunct professor at the College of Construction Engineering, Jilin University, Changchun, China, and a visiting scholar at the Institute of Research and Development and the Faculty of Environment and Chemical Engineering, Duy Tan University, Da Nang, Vietnam. He was previously a visiting scholar at the University of Calgary, Canada. He earned a PhD in petroleum engineering from the Amirkabir University of Technology, an MSc in hydrocarbon reservoir engineering from the Sharif University of Technology, and a BSc in petroleum engineering from the Amirkabir University of Technology. His research interests include enhanced oil recovery processes, heavy oil systems, nanotechnology, and applications of intelligent models in the petroleum industry. He was recognized as a distinguished MSc graduate and an honors PhD student, and received the National Elites Foundation Scholarship. He was named an Outstanding Reviewer by five prestigious Elsevier journals, including Fuel and the Journal of Petroleum Science and Engineering. He has published over 90 journal articles and several conference proceedings, and earned one patent.

    Sassan Hajirezaie is currently a PhD candidate in civil and environmental engineering at Princeton University. He earned a Master of Science in petroleum engineering from the University of Oklahoma and a Bachelor of Science in petroleum engineering from Sharif University of Technology. His research focuses on carbon capture and storage (CCS), the application of machine learning models in unconventional oil and gas production, and renewable energy sources. He has published many journal articles, served as a peer reviewer for several journals, and is a member of the Society of Petroleum Engineers and the American Geophysical Union.

    Menad Nait Amar received his BSc, MSc, and PhD degrees in petroleum/reservoir engineering from the University M’hamed Bougara of Boumerdes, Algeria, in 2013, 2015, and 2018, respectively. His research interests include machine learning, optimization, and data mining, and their applications in the oil industry. He is currently a research engineer at Sonatrach and an assistant professor in the Faculty of Hydrocarbons and Chemistry at the University M’hamed Bougara of Boumerdes, Algeria.

    Aydin Larestani is a research assistant in the Department of Petroleum Engineering, Shahid Bahonar University of Kerman, Iran. He is currently an MSc student there and a member of the Iranian Oil Industry Youth Committee in the World Petroleum Council. He ranked first in the MSc program in hydrocarbon reservoir engineering at Shahid Bahonar University of Kerman and first among BSc graduates in drilling engineering. He served as secretary of the petroleum engineering scientific association from 2015 to 2018. His research interests include applications of intelligent models in the petroleum industry, chemical enhanced oil recovery, thermal EOR, interfacial tension, and heavy oil.

    Chapter 1

    Introduction

    Abstract

    In this chapter, the main statistical and graphical approaches used to analyze the performance of artificial predictive models in the oil and gas industry are described. In addition, data preprocessing steps as necessary steps to refine a data bank before performing statistical and graphical error analyses are presented. These processes eliminate unreliable data points to ensure the development of more accurate predictive models. These unreliable data points could be false data or outliers that should be removed from a dataset. Error analysis techniques are used to evaluate the performance and accuracy of predictive models. The basis of these techniques is measuring the deviation of predictions from the measured data points using different mathematical formulations. Graphical error analyses, on the other hand, are used to enable an easier way to compare the performance of multiple predictive models and select the most accurate one. These techniques are a representation of the outcome from statistical techniques and are used to facilitate the process of model selection. In this chapter, first, a brief description of different procedures for data analysis is provided, and then some examples of these procedures are given. The definitions and formulations of statistical error analyses are presented along with specific examples using data from real oil and gas operations. In addition, different graphical error analyses with graphical examples are presented. Dew point pressure was chosen as a candidate parameter to describe these statistical and graphical methods better.

    Keywords

    Data processing; data cleaning; data integration; applicability domain; sensitivity analysis

    Contents

    Outline

    1.1 Overview

    1.2 Preprocessing of data

    1.2.1 Data cleaning

    1.2.2 Data integration

    1.2.3 Data transformation

    1.2.4 Data reduction

    1.2.5 Data discretization

    1.2.6 Data statistics

    1.3 Processing of data

    1.3.1 Data training

    1.3.2 Data validation and testing

    1.4 Postprocessing of data

    1.4.1 Statistical analyses for models’ evaluation

    1.4.2 Graphical error analysis for models’ evaluation

    1.5 Applicability domain of a model

    1.5.1 Identification of experimental data outliers

    1.6 Sensitivity analysis on models’ inputs

    1.6.1 Relevancy factor analysis

    1.7 The areas of intelligent models applications in the petroleum industry

    References

    1.1 Overview

    In this chapter, the main statistical and graphical approaches used to analyze the performance of artificial predictive models in the oil and gas industry are described. In addition, data preprocessing steps, as necessary steps to refine a data bank before performing statistical and graphical error analyses, are presented. These processes eliminate unreliable data points to ensure the development of more accurate predictive models. These unreliable data points could be false data or outliers that should be removed from a dataset. Error analysis techniques are used to evaluate the performance and accuracy of predictive models. The basis of these techniques is measuring the deviation of predictions from the measured data points using different mathematical formulations. Graphical error analyses, on the other hand, are used to enable an easier way to compare the performance of multiple predictive models and select the most accurate one. These techniques are a representation of the outcome from statistical techniques and are used to facilitate the process of model selection. In this chapter, first, a brief description of different procedures for data analysis is provided, and then some examples of these procedures are given. The definitions and formulations of statistical error analyses are presented along with specific examples using data from real oil and gas operations. In addition, different graphical error analyses with graphical examples are presented. Dew point pressure was chosen as a candidate parameter to describe these statistical and graphical methods better.

    1.2 Preprocessing of data

    Preprocessing of data refers to the steps that must be taken before the data are used for modeling. These steps are described next.

    1.2.1 Data cleaning

    Data cleaning consists of eliminating inconsistencies in the data, smoothing noisy data, and compensating for missing data points. The data that need to be cleaned include data that were added to the data bank by mistake as well as outliers. For example, when the data points represent the minimum miscibility pressure of gas–oil systems, negative values are not physically meaningful, since pressure cannot be negative; such points should be removed. In addition, to use a dataset for model development, every data point in the set must have all the inputs that the model uses for predictions. If a data point lacks even one of the inputs, it should be removed during the data cleaning stage. Outliers are another group that should be eliminated from a dataset. For example, if the target output of the model is reservoir temperature and one of the data points is 273 K, it should be removed, as a reservoir temperature cannot be 0°C.
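    The cleaning rules above can be sketched as follows (a minimal illustration with hypothetical field names and values, not the book's dataset): rows with a missing input, a negative pressure, or a physically impossible reservoir temperature are dropped.

```python
# Illustrative data-cleaning sketch; field names are hypothetical.
REQUIRED = ("pressure_psia", "temperature_K", "api_gravity")

def clean(records):
    kept = []
    for row in records:
        if any(row.get(k) is None for k in REQUIRED):
            continue  # missing input -> remove the data point
        if row["pressure_psia"] < 0:
            continue  # pressure cannot be negative
        if row["temperature_K"] <= 273.15:
            continue  # reservoir temperature cannot be 0 deg C or below
        kept.append(row)
    return kept

data = [
    {"pressure_psia": 3200.0, "temperature_K": 366.0, "api_gravity": 32.1},
    {"pressure_psia": -50.0,  "temperature_K": 360.0, "api_gravity": 30.0},  # bad pressure
    {"pressure_psia": 2800.0, "temperature_K": 273.0, "api_gravity": 28.5},  # 0 deg C "reservoir"
    {"pressure_psia": 2500.0, "temperature_K": None,  "api_gravity": 29.0},  # missing input
]
cleaned = clean(data)  # only the first record survives
```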

    1.2.2 Data integration

    Data can be represented in different ways. During this step, all the different representations are combined into a unique and consistent representation. For example, when the data represent the components of an injected gas, the percentages can be reported either individually or as ratios. Another example is the different ways of presenting critical temperature and pressure values when predicting the viscosity of a gas: these critical values can be reported either individually or as pseudocritical values. The units of the data points also need to be consistent. For example, if the predictive model uses temperature in Kelvin, all temperature values must be converted to Kelvin during the data integration stage.
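    Unit harmonization during integration can be sketched as below (the `(value, unit)` record layout is an assumption; the temperature conversion factors themselves are standard):

```python
# Convert mixed-unit temperature records to Kelvin during data integration.
def to_kelvin(value, unit):
    if unit == "K":
        return value
    if unit == "C":
        return value + 273.15
    if unit == "F":
        return (value - 32.0) * 5.0 / 9.0 + 273.15
    raise ValueError(f"unknown unit: {unit}")

mixed = [(93.0, "C"), (200.0, "F"), (366.15, "K")]
kelvin = [to_kelvin(v, u) for v, u in mixed]  # all values now in Kelvin
```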

    1.2.3 Data transformation

    Data transformation converts the data into a form suitable for model development, most commonly through normalization, for example,

    $$x_{\mathrm{norm}}=\frac{x-x_{\mathrm{avg}}}{x_{\mathrm{std}}}$$

    where $x_{\mathrm{avg}}$ and $x_{\mathrm{std}}$ are the mean and standard deviation of the data points. For variables spanning several orders of magnitude, the log of data points can be used for normalization.
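    A z-score form of normalization, one common choice for this step, can be sketched in pure Python on illustrative values:

```python
import math

def zscore(xs):
    """Normalize a 1-D list to zero mean and unit (population) standard deviation."""
    n = len(xs)
    mean = sum(xs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return [(x - mean) / std for x in xs]

scaled = zscore([10.0, 20.0, 30.0, 40.0])
# the scaled values have mean 0 and are symmetric about it
```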

    1.2.4 Data reduction

    Data reduction consists of reducing the representation of the data in a data bank. In some instances, there are more inputs than necessary, and their number can be reduced; dimensional analysis is often helpful in reducing the number of input parameters. Sometimes there are also more data points than needed, and a percentage of them can be discarded without hurting the model development process. In some cases, for example, when the heavy components of an oil need to be considered for model development, an average value over the input parameters (e.g., an average critical temperature) can be used as a single input, which decreases the required computation.

    1.2.5 Data discretization

    During data discretization, the range of a continuous attribute is divided into intervals, which allows several data points to be replaced by a single representative value. Generally, averaging the data points that fall within the same interval is an effective way to do this. For example, when there are many data points within certain ranges and one data point is a sufficient representative of each range, averaging over the interval reduces the computations.
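    The interval-averaging idea above can be sketched as follows (bin edges and pressure values are illustrative, not from the book):

```python
# Discretize a continuous attribute: replace each bin by the mean of its members.
def discretize(xs, edges):
    bins = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        members = [x for x in xs if lo <= x < hi]
        if members:
            bins.append(sum(members) / len(members))
    return bins

pressures = [1010, 1990, 2020, 2980, 3010, 1005]
reduced = discretize(pressures, [1000, 2000, 3000, 4000])
# six points collapse to one representative value per interval
```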

    1.2.6 Data statistics

    1.2.6.1 Skewness

    In addition to the minimum, maximum, average, and standard deviation of a dataset, other statistical parameters such as skewness and kurtosis are sometimes used to better describe the distribution of the data points. Skewness measures the asymmetry of the probability distribution of a random variable about its mean, and can be positive, negative, or undefined. In unimodal distributions, positive skewness occurs when the tail of the distribution extends toward the right and the mass of the distribution is concentrated on the left; negative skewness occurs when the tail extends toward the left and the mass is concentrated on the right. Zero skewness does not necessarily imply symmetry: a distribution can balance a long, thin tail on one side against a short, fat tail on the other.

    The formula and a schematic figure for negative and positive skewness are shown in the following equation and Fig. 1.1:

    $$\mathrm{Skewness}=\frac{\frac{1}{n}\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{3}}{\left[\frac{1}{n}\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}\right]^{3/2}} \qquad (1.1)$$

    Figure 1.1 Illustration of data distribution with respect to the skew values.

    1.2.6.2 Kurtosis

    The tailedness of the probability distribution of a random variable is measured by kurtosis. Like skewness, kurtosis is a measure of the shape of a probability distribution. A positive excess kurtosis means that many data points are located in the tails of the distribution, while a negative excess kurtosis means that few data points lie in the tails. For a dataset with n values, kurtosis is calculated as follows:

    $$\mathrm{Kurtosis}=\frac{\frac{1}{n}\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{4}}{\left[\frac{1}{n}\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}\right]^{2}} \qquad (1.2)$$

    A normal distribution has a kurtosis of 3. Some authors prefer to use excess kurtosis which is kurtosis minus 3.
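    Both statistics can be computed directly from Eqs. (1.1) and (1.2) using population moments; a pure-Python sketch on illustrative values:

```python
import math

def moments(xs):
    """Return (skewness, kurtosis) per Eqs. (1.1) and (1.2), population form."""
    n = len(xs)
    mean = sum(xs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    skew = sum(((x - mean) / std) ** 3 for x in xs) / n
    kurt = sum(((x - mean) / std) ** 4 for x in xs) / n
    return skew, kurt

skew, kurt = moments([1.0, 2.0, 3.0, 4.0, 5.0])
# the sample is symmetric, so its skewness is 0; excess kurtosis is kurt - 3
```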

    1.3 Processing of data

    1.3.1 Data training

    Data training is performed to construct a predictive model. Usually, 70%–80% of the data points are used to train the model, and the rest are used for validation and testing. As explained earlier, the dataset needs to go through the preprocessing steps before being used in the training stage.

    1.3.2 Data validation and testing

    To evaluate a model's ability to predict new data points, validation and testing are performed to investigate how accurately the current data points can be predicted, and therefore how reliably new predictions can be made. This step evaluates the effectiveness of the training stage. Usually, 15% of the data points are assigned to validation and 15% to testing. In some cases, the validation stage is skipped, and all the data points excluding the training set are used for testing.
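    One common way to realize the 70/15/15 partition described above is to shuffle indices with a fixed seed and slice them; this is an illustrative sketch, not a procedure from the book:

```python
import random

def split_indices(n, seed=42):
    """Shuffle 0..n-1 deterministically and split 70% / 15% / 15%."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = round(0.70 * n)
    n_val = round(0.15 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(100)
```

Working with indices rather than the data itself lets the same split drive any representation of the dataset (lists, arrays, dataframes).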

    1.4 Postprocessing of data

    1.4.1 Statistical analyses for models’ evaluation

    Statistical analysis methods are used to evaluate the performance of a model. This is generally done by comparing the model predictions with the experimental values by introducing various error calculation approaches. Here, some of the main statistical techniques are presented.

    1.4.1.1 Average percent relative error (APRE)

    In this technique, the relative deviation of the data points predicted by a model from the corresponding experimental data is calculated. This parameter, sometimes called the average relative deviation, is defined as follows:

    $$\mathrm{APRE}=E_{r}=\frac{1}{n}\sum_{i=1}^{n}E_{i} \qquad (1.3)$$

    where Ei is the relative deviation of a represented/predicted (denoted by rep./pred) value from an experimental (denoted by exp) value based on the following formula:

    $$E_{i}=\frac{O_{i}^{\exp}-O_{i}^{\mathrm{rep./pred.}}}{O_{i}^{\exp}}\times 100 \qquad (1.4)$$

    1.4.1.2 Average absolute percent relative error (AAPRE)

    Sometimes called the average absolute percent relative deviation, this technique is the same as the average percent relative error except that the absolute values of the errors are used in calculating the final error value, as shown in the following equation:

    $$\mathrm{AAPRE}=\frac{1}{n}\sum_{i=1}^{n}\left|E_{i}\right| \qquad (1.5)$$

    Care should be taken when interpreting APRE and AAPRE results. APRE is a measure of signed relative error, whereas AAPRE measures the absolute error, and the smaller the AAPRE of a model, the more accurate the model is. To clarify this issue, five model performances are reported in Table 1.1. Although all models have the same AAPRE, they have very different APRE results. Models A and B indicate a balanced distribution of outputs because their APRE values are relatively small. Model C underestimates the measured values: based on Eq. (1.4), predictions that are generally smaller than the experimental values yield a positive APRE. Similarly, model D overestimates the results, as its APRE is negative. The APRE reported for model E is unrealistic because AAPRE, the average of the absolute error values, cannot be smaller than the magnitude of APRE.

    Table 1.1

    Table 1.2 shows an example of how to calculate AAPRE from absolute percent relative error. By taking an average of the absolute percent relative error values, AAPRE can be obtained as reported in the same table.

    Table 1.2
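    The APRE and AAPRE computations of Eqs. (1.3)–(1.5) can be sketched as follows (illustrative experimental/predicted pairs, not the values of Table 1.2):

```python
def apre_aapre(exp, pred):
    errors = [(e - p) / e * 100.0 for e, p in zip(exp, pred)]  # Eq. (1.4)
    apre = sum(errors) / len(errors)                           # Eq. (1.3)
    aapre = sum(abs(er) for er in errors) / len(errors)        # Eq. (1.5)
    return apre, aapre

exp_vals = [100.0, 200.0, 400.0]
pred_vals = [90.0, 210.0, 400.0]
apre, aapre = apre_aapre(exp_vals, pred_vals)
# APRE = (10 - 5 + 0)/3 percent, AAPRE = (10 + 5 + 0)/3 = 5 percent
```

Note how the signed errors partially cancel in APRE while AAPRE keeps their magnitudes, which is exactly the distinction drawn in the discussion of Table 1.1.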

    1.4.1.3 Root mean square error (RMSE)

    This technique measures the dispersion of the prediction errors around zero. Generally, the smaller the RMSE generated by a model, the more accurately that model predicts the measured values. The root mean square error is calculated as follows:

    $$\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(O_{i}^{\exp}-O_{i}^{\mathrm{rep}}\right)^{2}} \qquad (1.6)$$

    where Oiexp and Oirep represent the experimental and predicted outputs, respectively.

    In the following equation the root mean square error (RMSE) calculation for the sample data points in Table 1.2 is presented:

    1.4.1.4 Standard deviation (SD)

    The degree of data scattering is quantified using this technique, and similar to RMSE, a smaller SD value generated by a model means a more accurate predictive model.

    SD is calculated as follows:

    $$\mathrm{SD}=\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\frac{O_{i}^{\exp}-O_{i}^{\mathrm{rep}}}{O_{i}^{\exp}}\right)^{2}} \qquad (1.7)$$

    1.4.1.5 Coefficient of determination (R²)

    This parameter quantifies how close the predicted values are to the experimental values. Models are compared through their R² values, and an R² closer to unity indicates a more accurate model.

    Coefficient of determination is calculated as follows:

    $$R^{2}=1-\frac{\sum_{i=1}^{n}\left(O_{i}^{\exp}-O_{i}^{\mathrm{rep}}\right)^{2}}{\sum_{i=1}^{n}\left(O_{i}^{\exp}-\overline{O^{\exp}}\right)^{2}} \qquad (1.8)$$
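    The three metrics of Eqs. (1.6)–(1.8) can be sketched together in pure Python; the SD function assumes a relative-deviation definition, which may differ in detail from the book's exact formula, and the sample values are illustrative:

```python
import math

def rmse(exp, pred):
    n = len(exp)
    return math.sqrt(sum((e - p) ** 2 for e, p in zip(exp, pred)) / n)  # Eq. (1.6)

def sd(exp, pred):
    n = len(exp)
    # relative-deviation form of the standard deviation (an assumption)
    return math.sqrt(sum(((e - p) / e) ** 2 for e, p in zip(exp, pred)) / (n - 1))

def r2(exp, pred):
    mean = sum(exp) / len(exp)
    ss_res = sum((e - p) ** 2 for e, p in zip(exp, pred))
    ss_tot = sum((e - mean) ** 2 for e in exp)
    return 1.0 - ss_res / ss_tot  # Eq. (1.8)

exp_vals = [10.0, 20.0, 30.0]
pred_vals = [11.0, 19.0, 30.0]
scores = (rmse(exp_vals, pred_vals), sd(exp_vals, pred_vals), r2(exp_vals, pred_vals))
```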

    1.4.2 Graphical error analysis for models’ evaluation

    Graphical approaches are used to visualize the performance of a model based on the error between the model’s predictions and experimental data. Here, some of the main graphical approaches are presented.

    Fig. 1.2 is an example of using graphical AAPRE analysis to compare the performance of the models in Table 1.2. As the figure shows, model 5 is the most accurate, while model 6 makes the least accurate predictions. This graphical technique is particularly helpful when numerous models must be evaluated.

    Figure 1.2 Graphical comparison of AAPRE values generated by different models. AAPRE, Average absolute percent relative error.

    1.4.2.1 Error distribution curve

    In this technique, the data points are plotted around the zero-error line to investigate whether or not the predictive model has an error trend as the values of predictions (outputs) increase.

    Fig. 1.3 is an example of using graphical error distribution analysis to evaluate the error trend of a model in predicting dew point pressure values. This figure represents a model that uniformly predicts the experimental values with high accuracy over a wide range of data points. As can be seen, the data points lie close to the zero-error line and follow the same trend regardless of whether the values increase or decrease. There are few scattered or outlier points, and there is no systematic or random separation of the data points from the zero-error line. This indicates that there is no error trend in the model performance as the pressure values increase, which makes the model a good candidate for any range of data points within its applicability domain. Such a model has been developed with a sufficient number of input data points during the training stage. A final remark is that even the small deviations from the zero-error line are balanced, which means the model does not suffer from even a small systematic error.

    Figure 1.3 Error distribution of a model that uniformly makes predictions.

    Fig. 1.4 shows a model that suffers from random error in predicting the experimental values. This is the least accurate model, with the largest AAPRE among all the example models presented in this chapter. The large AAPRE corresponds to large deviations from the zero-error line, as can be seen in the figure. This model is not appropriate for any range of data points, and all the data points deviate randomly from the zero-error line. Such poor performance can be caused by an inappropriate model structure or by insufficient data points during the model training stage. Using a better structure or a better training algorithm helps improve the performance of such models.

    Figure 1.4 Error distribution of a model that fails in making accurate predictions.

    Fig. 1.5 shows an example of a model that underestimates the measured data points. Under these conditions, the relative error is a positive number, which can be observed in this figure, and predictions deviate from the correct values. However, Fig. 1.5 indicates a model that underestimates only a portion of the data points (larger values) and does a fair job in uniformly predicting the other data points (smaller and medium-range numbers). This model would be a good candidate to use when only small- and medium-range values are to be predicted. Nevertheless, this model indicates an error trend and should not be considered a reliable model.

    Figure 1.5 Error distribution of a model that underestimates predictions.

    Fig. 1.6 shows a model that consistently underestimates the values and is not appropriate to use for any range of data points. In this case a systematic error is observed in predicting the data points, and almost all the experimental points are predicted to be smaller than their true values. The data points significantly deviate from the zero-error line, and a significant systematic error in predictions is evident. This error can be due to the lack of data points or wrong model structure during the model development stage.

    Figure 1.6 Error distribution of a model that systematically underestimates predictions.

    Fig. 1.7 represents a model that overestimates the experimental data points. Overestimation occurs when the predicted values tend to be larger than the experimental values, as described in Eq. (1.4). Under these conditions, the relative error is a negative number, as can be seen in this figure, and predictions deviate from the correct values. However, Fig. 1.7 indicates a model that overestimates only a portion of the data points (larger values) and does a relatively fair job in uniformly predicting the other data points (smaller and medium-range numbers). This model could therefore be a candidate when only small- and medium-range values are to be predicted.
