Cambridge Series in Statistical and Probabilistic Mathematics
Series · 35 books · 1997–2019

Books in series

#1

Bootstrap Methods and their Application

1997

This book gives a broad and up-to-date coverage of bootstrap methods, with numerous applied examples, developed in a coherent way with the necessary theoretical basis. Applications include stratified data; finite populations; censored and missing data; linear, nonlinear, and smooth regression models; classification; time series and spatial problems. Special features of the book include extensive discussion of significance tests and confidence intervals; material on various diagnostic methods; and methods for efficient computation, including improved Monte Carlo simulation. Each chapter includes both practical and theoretical exercises. Included with the book is a disk of purpose-written S-Plus programs for implementing the methods described in the text. Computer algorithms are clearly described, and the code is supplied on a 3.5-inch, 1.4 MB disk for use with IBM computers and compatible machines. Users must have the S-Plus computer application.
#2

Markov Chains

1997

Markov chains are central to the understanding of random processes. This textbook, aimed at advanced undergraduate or MSc students with some background in basic probability theory, focuses on Markov chains and develops quickly a coherent and rigorous theory whilst showing also how actually to apply it. There are applications to simulation, economics, optimal control, genetics, queues and many other topics, and a careful selection of exercises and examples drawn both from theory and practice.
#3

Asymptotic Statistics

1998

This book is an introduction to the field of asymptotic statistics. The treatment is both practical and mathematically rigorous. In addition to most of the standard topics of an asymptotics course, including likelihood inference, M-estimation, the theory of asymptotic efficiency, U-statistics, and rank procedures, the book presents recent research topics such as semiparametric models, the bootstrap, and empirical processes and their applications. The topics are organized from the central idea of approximation by limit experiments, which gives the book one of its unifying themes. This entails mainly the local approximation of the classical i.i.d. setup with smooth parameters by location experiments involving a single, normally distributed observation. Thus, even the standard subjects of asymptotic statistics are presented in a novel way. Suitable as a text for a graduate or Master's level statistics course, this book will also give researchers in statistics, probability, and their applications an overview of the latest research in asymptotic statistics. —back cover
#4

Wavelet Methods for Time Series Analysis

2000

The analysis of time series data is essential to many areas of science, engineering, finance and economics. This introduction to wavelet analysis "from the ground level and up," and to wavelet-based statistical analysis of time series focuses on practical discrete time techniques, with detailed descriptions of the theory and algorithms needed to understand and implement the discrete wavelet transforms. Numerous examples illustrate the techniques on actual time series. The many embedded exercises—with complete solutions provided in the Appendix—allow readers to use the book for self-guided study. Additional exercises can be used in a classroom setting. A Web site offers access to the time series and wavelets used in the book, as well as information on accessing software in S-Plus and other languages. Students and researchers wishing to use wavelet methods to analyze time series will find this book essential.
#5

Bayesian Methods

An Analysis for Statisticians and Interdisciplinary Researchers

1999

This exposition of the Bayesian approach to statistics at a level suitable for final year undergraduate and Masters students is unique in presenting its subject with a practical flavor and an emphasis on mainstream statistics. It shows how to infer scientific, medical, and social conclusions from numerical data. The authors draw on many years of experience with practical and research programs and describe many new statistical methods, not available elsewhere. It will be essential reading for all statisticians, statistics students, and related interdisciplinary researchers.
#6

Empirical Processes in M-Estimation

2000

The theory of empirical processes provides valuable tools for the development of asymptotic theory in (nonparametric) statistical models, and makes it possible to give a unified treatment of various models. This book reveals the relation between the asymptotic behavior of M-estimators and the complexity of the parameter space, using entropy as a measure of complexity, presenting tools and methods to analyze nonparametric and, in some cases, semiparametric models. Graduate students and professionals in statistics, as well as those interested in applications, e.g. to econometrics, medical statistics, etc., will welcome this treatment.
#8

A User's Guide to Measure Theoretic Probability

2001

Rigorous probabilistic arguments, built on the foundation of measure theory introduced eighty years ago by Kolmogorov, have invaded many fields. Students of statistics, biostatistics, econometrics, finance, and other changing disciplines now find themselves needing to absorb theory beyond what they might have learned in the typical undergraduate, calculus-based probability course. This 2002 book grew from a one-semester course offered for many years to a mixed audience of graduate and undergraduate students who have not had the luxury of taking a course in measure theory. The core of the book covers the basic topics of independence, conditioning, martingales, convergence in distribution, and Fourier transforms. In addition there are numerous sections treating topics traditionally thought of as more advanced, such as coupling and the KMT strong approximation, option pricing via the equivalent martingale measure, and the isoperimetric inequality for Gaussian processes. The book is not just a presentation of mathematical theory, but is also a discussion of why that theory takes its current form. It will be a secure starting point for anyone who needs to invoke rigorous probabilistic arguments and understand what they mean.
#9

The Estimation and Tracking of Frequency

2001

Many electronic and acoustic signals can be modeled as sums of sinusoids and noise. However, the amplitudes, phases and frequencies of the sinusoids are often unknown and must be estimated in order to characterize the periodicity or near-periodicity of a signal and consequently to identify its source. Quinn and Hannan present and analyze several practical techniques used for such estimation. The problem of tracking slow frequency changes of a very noisy sinusoid over time is also considered. Rigorous analyses are presented via asymptotic or large sample theory, together with physical insight. The book focuses on achieving extremely accurate estimates when the signal to noise ratio is low but the sample size is large.
#10

Data Analysis and Graphics Using R

An Example-Based Approach

2003

Join the revolution ignited by the ground-breaking R system! Starting with an introduction to R, covering standard regression methods, then presenting more advanced topics, this book guides users through the practical and powerful tools that the R system provides. The emphasis is on hands-on analysis, graphical display and interpretation of data. The many worked examples, taken from real-world research, are accompanied by commentary on what is done and why. A website provides computer code and data sets, allowing readers to reproduce all analyses. Updates and solutions to selected exercises are also available. Assuming only basic statistical knowledge, the book is ideal for research scientists, final-year undergraduate or graduate level students of applied statistics, and practising statisticians. It is both for learning and for reference. This revised edition reflects changes in R since 2003 and has new material on survival analysis, random coefficient models, and the handling of high-dimensional data.
#11

Statistical Models

2003

Models and likelihood are the backbone of modern statistics and data analysis. The coverage is unrivaled, with sections on survival analysis, missing data, Markov chains, Markov random fields, point processes, graphical models, simulation and Markov chain Monte Carlo, estimating functions, asymptotic approximations, local likelihood and spline regressions as well as on more standard topics. Anthony Davison blends theory and practice to provide an integrated text for advanced undergraduate and graduate students, researchers and practitioners. Its comprehensive coverage makes this the standard text and reference in the subject.
#12

Semiparametric Regression

1999

Science abounds with problems where the data are noisy and the answer is not a straight line. Semiparametric regression analysis helps make sense of such data in application areas that include engineering, finance, medicine and public health. The book is geared towards researchers and professionals with little background in regression as well as statistically oriented scientists (biostatisticians, econometricians, quantitative social scientists, and epidemiologists) with knowledge of regression and the desire to begin using more flexible semiparametric models.
#15

Measure Theory and Filtering

Introduction and Applications

2004

Aimed primarily at those outside of the field of statistics, this book not only provides an accessible introduction to measure theory, stochastic calculus, and stochastic processes, with particular emphasis on martingales and Brownian motion, but develops into an excellent user's guide to filtering. Including exercises for students, it will be a complete resource for engineers, signal processing researchers, or anyone with an interest in practical implementation of filtering techniques, in particular, the Kalman filter. Three separate chapters concentrate on applications arising in finance, genetics, and population modelling.
#16

Essentials of Statistical Inference

2005

This textbook presents the concepts and results underlying the Bayesian, frequentist, and Fisherian approaches to statistical inference, with particular emphasis on the contrasts between them. Aimed at advanced undergraduates and graduate students in mathematics and related disciplines, it covers basic mathematical theory as well as more advanced material, including such contemporary topics as Bayesian computation, higher-order likelihood theory, predictive inference, bootstrap methods, and conditional inference.
#17

Elements of Distribution Theory

2005

This detailed introduction to distribution theory is designed as a text for the probability portion of the first year statistical theory sequence for Master's and PhD students in statistics, biostatistics, and econometrics. The text uses no measure theory, requiring only a background in calculus and linear algebra. Topics range from the basic distribution and density functions, expectation, conditioning, characteristic functions, cumulants, convergence in distribution and the central limit theorem to more advanced concepts such as exchangeability, models with a group structure, asymptotic approximations to integrals and orthogonal polynomials. An appendix gives a detailed summary of the mathematical definitions and results that are used in the book.
#18

Statistical Mechanics of Disordered Systems

A Mathematical Perspective

2006

This self-contained book is a graduate-level introduction for mathematicians and for physicists interested in the mathematical foundations of the field, and can be used as a textbook for a two-semester course on mathematical statistical mechanics. It assumes only basic knowledge of classical physics and, on the mathematics side, a good working knowledge of graduate-level probability theory. The book starts with a concise introduction to statistical mechanics, proceeds to disordered lattice spin systems, and concludes with a presentation of the latest developments in the mathematical understanding of mean-field spin glass models. In particular, progress towards a rigorous understanding of the replica symmetry-breaking solutions of the Sherrington-Kirkpatrick spin glass models, due to Guerra, Aizenman-Sims-Starr and Talagrand, is reviewed in some detail.
#19

The Coordinate-Free Approach to Linear Models

2006

This book is about the coordinate-free, or geometric, approach to the theory of linear models; more precisely, Model I ANOVA and linear regression models with nonrandom predictors in a finite-dimensional setting. This approach is more insightful, more elegant, more direct, and simpler than the more common matrix approach to linear regression, analysis of variance, and analysis of covariance models in statistics. The book discusses the intuition behind and optimal properties of various methods of estimating and testing hypotheses about unknown parameters in the models. Topics covered include inner product spaces, orthogonal projections, orthogonal spaces, Tjur experimental designs, basic distribution theory, the geometric version of the Gauss-Markov theorem, optimal and nonoptimal properties of Gauss-Markov, Bayes, and shrinkage estimators under the assumption of normality, the optimal properties of F-tests, and the analysis of covariance and missing observations.
#20

Random Graph Dynamics

2006

The theory of random graphs began in the late 1950s in several papers by Erdős and Rényi. In the late twentieth century, the notion of six degrees of separation, meaning that any two people on the planet can be connected by a short chain of people who know each other, inspired Strogatz and Watts to define the small world random graph in which each site is connected to k close neighbors, but also has long-range connections. At about the same time, it was observed in human social and sexual networks and on the Internet that the number of neighbors of an individual or computer has a power law distribution. This inspired Barabási and Albert to define the preferential attachment model, which has these properties. These two papers have led to an explosion of research. While this literature is extensive, many of the papers are based on simulations and nonrigorous arguments. The purpose of this book is to use a wide variety of mathematical arguments to obtain insights into the properties of these graphs. A unique feature of this book is the interest in the dynamics of processes taking place on the graphs in addition to their geometric properties, such as connectedness and diameter.
#21

Networks

Optimisation and Evolution

2007

Point-to-point vs hub-and-spoke. Questions of network design are real and involve many billions of dollars. Yet little is known about optimising design - nearly all work concerns optimising flow assuming a given design. This foundational book tackles optimisation of network structure itself, deriving comprehensible and realistic design principles. With fixed material cost rates, a natural class of models implies the optimality of direct source-destination connections, but considerations of variable load and environmental intrusion then enforce trunking in the optimal design, producing an arterial or hierarchical net. Its determination requires a continuum formulation, which can however be simplified once a discrete structure begins to emerge. Connections are made with the masterly work of Bendsøe and Sigmund on optimal mechanical structures and also with neural, processing and communication networks, including those of the Internet and the World Wide Web. Technical appendices are provided on random graphs and polymer models and on the Klimov index.
#22

Saddlepoint Approximations with Applications

2007

Modern statistical methods use complex, sophisticated models that can lead to intractable computations. Saddlepoint approximations can be the answer. Written from the user's point of view, this book explains in clear language how such approximate probability computations are made, taking readers from the very beginnings to current applications. The core material is presented in chapters 1-6 at an elementary mathematical level. Chapters 7-9 then give a highly readable account of higher-order asymptotic inference. Later chapters address areas where saddlepoint methods have had substantial impact: multivariate testing, stochastic systems and applied probability, bootstrap implementation in the transform domain, and Bayesian computation and inference. No previous background in the area is required. Data examples from real applications demonstrate the practical value of the methods. Ideal for graduate students and researchers in statistics, biostatistics, electrical engineering, econometrics, and applied mathematics, this is both an entry-level text and a valuable reference.
#23

Applied Asymptotics

Case Studies in Small-Sample Statistics

2007

In fields such as biology, medical sciences, sociology, and economics researchers often face the situation where the number of available observations, or the amount of available information, is sufficiently small that approximations based on the normal distribution may be unreliable. Theoretical work over the last quarter-century has led to new likelihood-based methods that lead to very accurate approximations in finite samples, but this work has had limited impact on statistical practice. This book illustrates by means of realistic examples and case studies how to use the new theory, and investigates how and when it makes a difference to the resulting inference. The treatment is oriented towards practice and comes with code in the R language (available from the web) which enables the methods to be applied in a range of situations of interest to practitioners. The analysis includes some comparisons of higher order likelihood inference with bootstrap or Bayesian methods.
#24

Random Networks for Communication

From Statistical Physics to Information Systems

2007

When is a random network (almost) connected? How much information can it carry? How can you find a particular destination within the network? And how do you approach these questions - and others - when the network is random? The analysis of communication networks requires a fascinating synthesis of random graph theory, stochastic geometry and percolation theory to provide models for both structure and information flow. This book is the first comprehensive introduction for graduate students and scientists to techniques and problems in the field of spatial random networks. The selection of material is driven by applications arising in engineering, and the treatment is both readable and mathematically rigorous. Though mainly concerned with information-flow-related questions motivated by wireless data networks, the models developed are also of interest in a broader context, ranging from engineering to social networks, biology, and physics.
#25

Design of Comparative Experiments

2008

Design of Comparative Experiments develops a coherent framework for thinking about factors that affect experiments and their relationships, including the use of Hasse diagrams. These diagrams are used to elucidate structure, calculate degrees of freedom and allocate treatment sub-spaces to appropriate strata. Good design considers units and treatments first, and then allocates treatments to units. Based on a one-term course the author has taught since 1989, the book is ideal for advanced undergraduate and beginning graduate courses. This book should be on the shelf of every practicing statistician who designs experiments.
#27

Model Selection and Model Averaging

2008

Choosing a model is central to all statistical work with data. We have seen rapid advances in model fitting and in the theoretical understanding of model selection, yet this book is the first to synthesize research and practice from this active field. Model choice criteria are explained, discussed and compared, including the AIC, BIC, DIC and FIC. The uncertainties involved with model selection are tackled with discussions of frequentist and Bayesian methods; model averaging schemes are presented. Real-data examples are complemented by derivations providing deeper insight into the methodology, and instructive exercises build familiarity with the methods. The companion website features data sets and R code.
#28

Bayesian Nonparametrics

2010

Bayesian nonparametrics works - theoretically, computationally. The theory provides highly flexible models whose complexity grows appropriately with the amount of data. Computational issues, though challenging, are no longer intractable. All that is needed is an entry point, and this intelligent book is the perfect guide to what can seem a forbidding landscape. Tutorial chapters by Ghosal, Lijoi and Prünster, Teh and Jordan, and Dunson advance from theory, to basic models and hierarchical modeling, to applications and implementation, particularly in computer science and biostatistics. These are complemented by companion chapters by the editors and Griffin and Quintana, providing additional models, examining computational issues, identifying future growth areas, and giving links to related topics. This coherent text gives ready access both to underlying principles and to state-of-the-art practice. Specific examples are drawn from information retrieval, NLP, machine vision, computational biology, biostatistics, and bioinformatics.
#29

From Finite Sample to Asymptotic Methods in Statistics

2009

Exact statistical inference may be employed in diverse fields of science and technology. As problems become more complex and sample sizes become larger, mathematical and computational difficulties can arise that require the use of approximate statistical methods. Such methods are justified by asymptotic arguments but are still based on the concepts and principles that underlie exact statistical inference. With this in perspective, this book presents a broad view of exact statistical inference and the development of asymptotic statistical inference, providing a justification for the use of asymptotic methods for large samples. Methodological results are developed on a concrete and yet rigorous mathematical level and are applied to a variety of problems that include categorical data, regression, and survival analyses. This book is designed as a textbook for advanced undergraduate or beginning graduate students in statistics, biostatistics, or applied statistics but may also be used as a reference for academic researchers.
#30

Brownian Motion

2010

This eagerly awaited textbook covers everything the graduate student in probability wants to know about Brownian motion, as well as the latest research in the area. Starting with the construction of Brownian motion, the book then proceeds to sample path properties like continuity and nowhere differentiability. Notions of fractal dimension are introduced early and are used throughout the book to describe fine properties of Brownian paths. The relation of Brownian motion and random walk is explored from several viewpoints, including a development of the theory of Brownian local times from random walk embeddings. Stochastic integration is introduced as a tool and an accessible treatment of the potential theory of Brownian motion clears the path for an extensive treatment of intersections of Brownian paths. An investigation of exceptional points on the Brownian path and an appendix on SLE processes, by Oded Schramm and Wendelin Werner, lead directly to recent research themes.
#31

Probability

Theory and Examples

2010

This book is an introduction to probability theory covering laws of large numbers, central limit theorems, random walks, martingales, Markov chains, ergodic theorems, and Brownian motion. It is a comprehensive treatment concentrating on the results that are the most useful for applications. Its philosophy is that the best way to learn probability is to see it in action, so there are 200 examples and 450 problems.
#33

Stochastic Processes

2011

This comprehensive guide to stochastic processes gives a complete overview of the theory and addresses the most important applications. Pitched at a level accessible to beginning graduate students and researchers from applied disciplines, it is both a course book and a rich resource for individual readers. Subjects covered include Brownian motion, stochastic calculus, stochastic differential equations, Markov processes, weak convergence of processes and semigroup theory. Applications include the Black–Scholes formula for the pricing of derivatives in financial mathematics, the Kalman–Bucy filter used in the US space program and also theoretical applications to partial differential equations and analysis. Short, readable chapters aim for clarity rather than full generality. More than 350 exercises are included to help readers put their new-found knowledge to the test and to prepare them for tackling the research literature.
#34

Regression for Categorical Data

2011

This book introduces basic and advanced concepts of categorical regression with a focus on the structuring constituents of regression, including regularization techniques to structure predictors. In addition to standard methods such as the logit and probit model and extensions to multivariate settings, the author presents more recent developments in flexible and high-dimensional regression, which allow weakening of assumptions on the structuring of the predictor and yield fits that are closer to the data. A generalized linear model is used as a unifying framework whenever possible, in particular for the parametric models treated within this framework. Many topics not normally included in books on categorical data analysis are treated here, such as nonparametric regression; selection of predictors by regularized estimation procedures; alternative models like the hurdle model and zero-inflated regression models for count data; and non-standard tree-based ensemble methods, which provide excellent tools for prediction and the handling of both nominal and ordered categorical predictors. The book is accompanied by an R package that contains data sets and code for all the examples.
#36

Statistical Principles for the Design of Experiments

Applications to Real Experiments

2012

This book is about the statistical principles behind the design of effective experiments and focuses on the practical needs of applied statisticians and experimenters engaged in design, implementation and analysis. Emphasising the logical principles of statistical design, rather than mathematical calculation, the authors demonstrate how all available information can be used to extract the clearest answers to many questions. The principles are illustrated with a wide range of examples drawn from real experiments in medicine, industry, agriculture and many experimental disciplines. Numerous exercises are given to help the reader practise techniques and to appreciate the difference that good design can make to an experimental research project. Based on Roger Mead's excellent Design of Experiments, this new edition is thoroughly revised and updated to include modern methods relevant to applications in industry, engineering and modern biology. It also contains seven new chapters on contemporary topics, including restricted randomisation and fractional replication.
#37

Quantum Stochastics

2015

The classical probability theory initiated by Kolmogorov and its quantum counterpart, pioneered by von Neumann, were created at about the same time in the 1930s, but development of the quantum theory has trailed far behind. Although highly appealing, the quantum theory has a steep learning curve, requiring tools from both probability and analysis and a facility for combining the two viewpoints. This book is a systematic, self-contained account of the core of quantum probability and quantum stochastic processes for graduate students and researchers. The only assumed background is knowledge of the basic theory of Hilbert spaces, bounded linear operators, and classical Markov processes. From there, the book introduces additional tools from analysis, and then builds the quantum probability framework needed to support applications to quantum control and quantum information and communication. These include quantum noise, quantum stochastic calculus, stochastic quantum differential equations, quantum Markov semigroups and processes, and large-time asymptotic behavior of quantum Markov semigroups.
#38

Nonparametric Estimation under Shape Constraints

Estimators, Algorithms and Asymptotics

2014

This book treats the latest developments in the theory of order-restricted inference, with special attention to nonparametric methods and algorithmic aspects. Among the topics treated are current status and interval censoring models, competing risk models, and deconvolution. Methods of order restricted inference are used in computing maximum likelihood estimators and developing distribution theory for inverse problems of this type. The authors have been active in developing these tools and present the state of the art and the open problems in the field. The earlier chapters provide an introduction to the subject, while the later chapters are written with graduate students and researchers in mathematical statistics in mind. Each chapter ends with a set of exercises of varying difficulty. The theory is illustrated with the analysis of real-life data, which are mostly medical in nature.
#39

Large Sample Covariance Matrices and High-Dimensional Data Analysis

2015

High-dimensional data appear in many fields, and their analysis has become increasingly important in modern statistics. However, it has long been observed that several well-known methods in multivariate analysis become inefficient, or even misleading, when the data dimension p is larger than, say, several tens. A seminal example is the well-known inefficiency of Hotelling's T²-test in such cases. This example shows that classical large sample limits may no longer hold for high-dimensional data; statisticians must seek new limiting theorems in these instances. Thus, the theory of random matrices (RMT) serves as a much-needed and welcome alternative framework. Based on the authors' own research, this book provides a first-hand introduction to new high-dimensional statistical methods derived from RMT. The book begins with a detailed introduction to useful tools from RMT, and then presents a series of high-dimensional problems with solutions provided by RMT methods.
#47

High-Dimensional Probability

An Introduction with Applications in Data Science

2018

High-dimensional probability offers insight into the behavior of random vectors, random matrices, random subspaces, and objects used to quantify uncertainty in high dimensions. Drawing on ideas from probability, analysis, and geometry, it lends itself to applications in mathematics, statistics, theoretical computer science, signal processing, optimization, and more. This book is the first to integrate theory, key tools, and modern applications of high-dimensional probability. Concentration inequalities form the core, and it covers both classical results such as Hoeffding's and Chernoff's inequalities and modern developments such as the matrix Bernstein inequality. It then introduces the powerful methods based on stochastic processes, including such tools as Slepian's, Sudakov's, and Dudley's inequalities, as well as generic chaining and bounds based on VC dimension. A broad range of illustrations is embedded throughout, including classical and modern results for covariance estimation, clustering, networks, semidefinite programming, coding, dimension reduction, matrix completion, machine learning, compressed sensing, and sparse regression.
#48

High-Dimensional Statistics

A Non-Asymptotic Viewpoint

2019

Recent years have witnessed an explosion in the volume and variety of data collected in all scientific disciplines and industrial settings. Such massive data sets present a number of challenges to researchers in statistics and machine learning. This book provides a self-contained introduction to the area of high-dimensional statistics, aimed at the first-year graduate level. It includes chapters that are focused on core methodology and theory - including tail bounds, concentration inequalities, uniform laws and empirical processes, and random matrices - as well as chapters devoted to in-depth exploration of particular model classes - including sparse linear models, matrix models with rank constraints, graphical models, and various types of non-parametric models. With hundreds of worked examples and exercises, this text is intended both for courses and for self-study by graduate students and researchers in statistics, machine learning, and related fields who must understand, apply, and adapt modern statistical methods suited to large-scale data.
