Computational Mathematics – Demidovich, Maron

We now come to Computational Mathematics by B. P. Demidovich and I. A. Maron.

The basic aim of this book is to give as far as possible a systematic and modern presentation of the most important methods and techniques of computational mathematics on the basis of the general course of higher mathematics taught in higher technical schools. The book has been arranged so that the basic portion constitutes a manual for the first cycle of studies in approximate computations for higher technical colleges. The text contains supplementary material which goes beyond the scope of the ordinary college course, but the reader can select those sections which interest him and omit any extra material without loss of continuity. The chapters and sections which may be dropped out in a first reading are marked with an asterisk.

For a full comprehension of the contents of this book, the reader should have a background of linear algebra and the theory of linear vector spaces. With the aim of making the text as self-contained as possible, the authors have included all the necessary starting material in these subjects. The appropriate chapters are completely independent of the basic text and can be omitted by readers who have already studied these sections.

A few words about the contents of the book. In the main it is devoted to the following problems: operations involving approximate numbers, computation of functions by means of series and iterative processes, approximate and numerical solution of algebraic and transcendental equations, computational methods of linear algebra, interpolation of functions, numerical differentiation and integration of functions, and the Monte Carlo method.

This book was translated from the Russian by George Yankovsky. It was first published by Mir Publishers in 1973, with reprints in 1976 and 1981. The book below is from the 1981 reprint.

All credits to the original uploader.

DJVU | OCR | 17.1 MB | Pages: 688
You can get the book here
For magnet / torrent links go here.
Password if needed: mirtitles

4-shared link here

Password, if required, for 4shared files: www.mirtitles.org

Facing problems while extracting? See FAQs

Table of Contents
PREFACE

INTRODUCTION. GENERAL RULES OF COMPUTATIONAL WORK

CHAPTER 1
APPROXIMATE NUMBERS 19

1.1 Absolute and relative errors 19
1.2 Basic sources of errors 22
1.3 Scientific notation. Significant digits. The number of correct
digits 23
1.4 Rounding of numbers 26
1.5 The relationship between the relative error of an approximate number
and the number of correct digits 27
1.6 Tables for determining the limiting relative error from the number
of correct digits and vice versa 30
1.7 The error of a sum 33
1.8 The error of a difference 35
1.9 The error of a product 37
1.10 The number of correct digits in a product 39
1.11 The error of a quotient 40
1.12 The number of correct digits in a quotient 41
1.13 The relative error of a power 41
1.14 The relative error of a root 41
1.15 Computations in which errors are not taken into exact account 42
1.16 General formula for errors 42
1.17 The inverse problem of the theory of errors 44
1.18 Accuracy in the determination of arguments from a tabulated
function 48
1.19 The method of bounds 50
1.20 The notion of a probability error estimate 52
References for Chapter 1 54

CHAPTER 2
SOME FACTS FROM THE THEORY OF CONTINUED FRACTIONS 55

2.1 The definition of a continued fraction 55
2.2 Converting a continued fraction to a simple fraction and vice
versa 56
2.3 Convergents 58
2.4 Nonterminating continued fractions 66
2.5 Expanding functions into continued fractions 72
References for Chapter 2 76

CHAPTER 3
COMPUTING THE VALUES OF FUNCTIONS 77

3.1 Computing the values of a polynomial. Horner’s scheme  77
3.2 The generalized Horner scheme 80
3.3 Computing the values of rational fractions 82
3.4 Approximating the sums of numerical series 83
3.5 Computing the values of an analytic function 89
3.6 Computing the values of exponential functions 91
3.7 Computing the values of a logarithmic function 95
3.8 Computing the values of trigonometric functions 98
3.9 Computing the values of hyperbolic functions 101
3.10 Using the method of iteration for approximating the values of a
function 103
3.11 Computing reciprocals 104
3.12 Computing square roots 107
3.13 Computing the reciprocal of a square root 111
3.14 Computing cube roots 112
References for Chapter 3 114

CHAPTER 4
APPROXIMATE SOLUTIONS OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS 115

4.1 Isolation of roots 115
4.2 Graphical solution of equations 119
4.3 The halving method 121
4.4 The method of proportional parts (method of chords) 122
4.5 Newton’s method (method of tangents) 127
4.6 Modified Newton method 135
4.7 Combination method 136
4.8 The method of iteration 138
4.9 The method of iteration for a system of two equations 152
4.10 Newton’s method for a system of two equations 156
4.11 Newton’s method for the case of complex roots 157
References for Chapter 4 161

CHAPTER 5
SPECIAL TECHNIQUES FOR APPROXIMATE SOLUTION OF EQUATIONS 162

5.1 General properties of algebraic equations 162
5.2 The bounds of real roots of algebraic equations 167
5.3 The method of alternating sums 169
5.4 Newton’s method 171
5.5 The number of real roots of a polynomial 173
5.6 The theorem of Budan-Fourier 175
5.7 The underlying principle of the method of Lobachevsky-Graeffe 179
5.8 The root-squaring process 182
5.9 The Lobachevsky-Graeffe method for the case of real and distinct
roots 184
5.10 The Lobachevsky-Graeffe method for the case of complex roots 187
5.11 The case of a pair of complex roots 190
5.12 The case of two pairs of complex roots 194
5.13 Bernoulli’s method 198
References for Chapter 5 202

CHAPTER 6
ACCELERATING THE CONVERGENCE OF SERIES 203

6.1 Accelerating the convergence of numerical series 203
6.2 Accelerating the convergence of power series by the Euler-Abel
method 209
6.3 Estimates of Fourier coefficients 213
6.4 Accelerating the convergence of Fourier trigonometric series by
the method of A. N. Krylov 217
6.5 Trigonometric approximation 225
References for Chapter 6 228

CHAPTER 7
MATRIX ALGEBRA 229

7.1 Basic definitions 229
7.2 Operations involving matrices 230
7.3 The transpose of a matrix 234
7.4 The inverse matrix 236
7.5 Powers of a matrix 240
7.6 Rational functions of a matrix 241
7.7 The absolute value and norm of a matrix 242
7.8 The rank of a matrix 248
7.9 The limit of a matrix 249
7.10 Series of matrices 251
7.11 Partitioned matrices 256
7.12 Matrix inversion by partitioning 260
7.13 Triangular matrices 265
7.14 Elementary transformations of matrices 268
7.15 Computation of determinants 269
References for Chapter 7 272

CHAPTER 8
SOLVING SYSTEMS OF LINEAR EQUATIONS 273

8.1 A general description of methods of solving systems of linear
equations 273
8.2 Solution by inversion of matrices. Cramer’s rule 273
8.3 The Gaussian method 277
8.4 Improving roots  284
8.5 The method of principal elements 287
8.6 Use of the Gaussian method in computing determinants 288
8.7 Inversion of matrices by the Gaussian method 290
8.8 Square-root method 293
8.9 The scheme of Khaletsky 296
8.10 The method of iteration 300
8.11 Reducing a linear system to a form convenient for iteration 307
8.12 The Seidel method 309
8.13 The case of a normal system 311
8.14 The method of relaxation 313
8.15 Correcting elements of an approximate inverse matrix 316
References for Chapter 8 321

CHAPTER 9
THE CONVERGENCE OF ITERATION PROCESSES FOR SYSTEMS OF LINEAR EQUATIONS 322

9.1 Sufficient conditions for the convergence of the iteration process 322
9.2 An estimate of the error of approximations in the iteration
process 324
9.3 First sufficient condition for convergence of the Seidel process 327
9.4 Estimating the error of approximations in the Seidel process by the m-norm 330
9.5 Second sufficient condition for convergence of the Seidel process 330
9.6 Estimating the error of approximations in the Seidel process by
the l-norm 332
9.7 Third sufficient condition for convergence of the Seidel process 333
References for Chapter 9 335

CHAPTER 10
ESSENTIALS OF THE THEORY OF LINEAR VECTOR SPACES 336

10.1 The concept of a linear vector space 336
10.2 The linear dependence of vectors 337
10.3 The scalar product of vectors 343
10.4 Orthogonal systems of vectors 345
10.5 Transformations of the coordinates of a vector when changing the basis 348
10.6 Orthogonal matrices 350
10.7 Orthogonalization of matrices 351
10.8 Applying orthogonalization methods to the solutions of linear
equations 358
10.9 The solution space of a homogeneous system 364
10.10 Linear transformations of variables 367
10.11 Inverse transformation 373
10.12 Eigenvectors and eigenvalues of a matrix 375
10.13 Similar matrices 380
10.14 Bilinear form of a matrix 384
10.15 Properties of symmetric matrices 384
10.16 Properties of matrices with real elements 389
References for Chapter 10 393

CHAPTER 11
ADDITIONAL FACTS ABOUT THE CONVERGENCE OF ITERATION PROCESSES FOR
SYSTEMS OF LINEAR EQUATIONS 394

11.1 The convergence of matrix power series 394
11.2 The Cayley-Hamilton theorem 397
11.3 Necessary and sufficient conditions for the convergence of the
process of iteration for a system of linear equations 398
11.4 Necessary and sufficient conditions for the convergence of the
Seidel process for a system of linear equations 400
11.5 Convergence of the Seidel process for a normal system 403
11.6 Methods for effectively checking the conditions of convergence 405
References for Chapter 11 409

CHAPTER 12
FINDING THE EIGENVALUES AND EIGENVECTORS OF A MATRIX 410

12.1 Introductory remarks 410
12.2 Expansion of secular determinants 410
12.3 The method of Danilevsky 412
12.4 Exceptional cases in the Danilevsky method 418
12.5 Computation of eigenvectors by the Danilevsky method 420
12.6 The method of Krylov 421
12.7 Computation of eigenvectors by the Krylov method 424
12.8 Leverrier’s method 426
12.9 On the method of undetermined coefficients 428
12.10 A comparison of different methods of expanding a secular
determinant 429
12.11 Finding the numerically largest eigenvalue of a matrix and the
corresponding eigenvector 430
12.12 The method of scalar products for finding the first eigenvalue
of a real matrix 436
12.13 Finding the second eigenvalue of a matrix and the second
eigenvector 439
12.14 The method of exhaustion 443
12.15 Finding the eigenvalues and eigenvectors of a positive definite
symmetric matrix 445
12.16 Using the coefficients of the characteristic polynomial of a
matrix for matrix inversion 450
12.17 The method of Lyusternik for accelerating the convergence of the
iteration process in the solution of a system of linear equations 453
References for Chapter 12  458

CHAPTER 13
APPROXIMATE SOLUTION OF SYSTEMS OF NONLINEAR EQUATIONS 459

13.1 Newton’s method 459
13.2 General remarks on the convergence of the Newton process 465
13.3 The existence of roots of a system and the convergence of the
Newton process 469
13.4 The rapidity of convergence of the Newton process 474
13.5 Uniqueness of solution 475
13.6 Stability of convergence of the Newton process under variations
of the initial approximation 478
13.7 The modified Newton method 481
13.8 The method of iteration 484
13.9 The notion of a contraction mapping 487
13.10 First sufficient condition for the convergence of the process of
iteration 491
13.11 Second sufficient condition for the convergence of the process
of iteration 493
13.12 The method of steepest descent (gradient method) 496
13.13 The method of steepest descent for the case of a system of
linear equations 501
13.14 The method of power series 504
References for Chapter 13 506

CHAPTER 14
THE INTERPOLATION OF FUNCTIONS 507

14.1 Finite differences of various orders 507
14.2 Difference table 510
14.3 Generalized power 517
14.4 Statement of the problem of interpolation 518
14.5 Newton’s first interpolation formula 519
14.6 Newton’s second interpolation formula 526
14.7 Table of central differences 530
14.8 Gaussian interpolation formulas 531
14.9 Stirling’s interpolation formula 533
14.10 Bessel’s interpolation formula 534
14.11 General description of interpolation formulas with constant
interval 536
14.12 Lagrange’s interpolation formula 539
14.13 Computing Lagrangian coefficients 543
14.14 Error estimate of Lagrange’s interpolation formula 547
14.15 Error estimates of Newton’s interpolation formulas 550
14.16 Error estimates of the central interpolation formulas 552
14.17 On the best choice of interpolation points 553
14.18 Divided differences 554
14.19 Newton’s interpolation formula for unequally spaced values of
the argument 556
14.20 Inverse interpolation for the case of equally spaced points 559
14.21 Inverse interpolation for the case of unequally spaced points 562
14.22 Finding the roots of an equation by inverse interpolation 564
14.23 The interpolation method for expanding a secular determinant 565
14.24 Interpolation of functions of two variables 567
14.25 Double differences of higher order 570
14.26 Newton’s interpolation formula for a function of two variables 571
References for Chapter 14 573

CHAPTER 15
APPROXIMATE DIFFERENTIATION 574

15.1 Statement of the problem 574
15.2 Formulas of approximate differentiation based on Newton’s first
interpolation formula 575
15.3 Formulas of approximate differentiation based on Stirling’s
formula 580
15.4 Formulas of numerical differentiation for equally spaced points 583
15.5 Graphical differentiation 586
15.6 On the approximate calculation of partial derivatives 588
References for Chapter 15  589

CHAPTER 16
APPROXIMATE INTEGRATION OF FUNCTIONS 590

16.1 General remarks 590
16.2 Newton-Cotes quadrature formulas 593
16.3 The trapezoidal formula and its remainder term 595
16.4 Simpson’s formula and its remainder term 596
16.5 Newton-Cotes formulas of higher orders 599
16.6 General trapezoidal formula (trapezoidal rule) 601
16.7 Simpson’s general formula (parabolic rule) 603
16.8 On Chebyshev’s quadrature formula 607
16.9 Gaussian quadrature formula 611
16.10 Some remarks on the accuracy of quadrature formulas 618
16.11 Richardson extrapolation 622
16.12 Bernoulli numbers 625
16.13 Euler-Maclaurin formula 628
16.14 Approximation of improper integrals 633
16.15 The method of Kantorovich for isolating singularities 635
16.16 Graphical integration 639
16.17 On cubature formulas 641
16.18 A cubature formula of Simpson type 644
References for Chapter 16 648

CHAPTER 17
THE MONTE CARLO METHOD 649

17.1 The idea of the Monte Carlo method 649
17.2 Random numbers 650
17.3 Ways of generating random numbers 653
17.4 Monte Carlo evaluation of multiple integrals 656
17.5 Solving systems of linear algebraic equations by the Monte
Carlo method 666
References for Chapter 17 674

COMPLETE LIST OF REFERENCES 675

INDEX 679 


14 Responses to Computational Mathematics – Demidovich, Maron

  1. CS says:

    Sir, I have a copy of ‘Problems in Descriptive Geometry’ by Kh. A. Arustamov, a Mir Publishers book of 1974. Can it be of any help to you?

  2. dimitrios says:

    Thank you!

    Your effort is a great relief for me, who loves the books by MIR Publishers!

    Thanks again!

  3. Where can I get this book in India?

  4. db.jan says:

    Can you help me?
    “the server this file is on is currently offline for maintenance, try again later, please contact support if this issue persists”

  5. m95 says:

    great effort, thanks

  6. Please re-upload the link.
    Thanks

  7. jerry says:

    Says invalid link, please help.

  8. Hari says:

    Hi admin,

    The download links appear to be broken. I am trying to download Computational Mathematics by Demidovich, but none of the available links are working.
    Can you please help me read this book?

    Thanks
    Mail: vivekv109@gmail.com
