Gaussian PDF: 1 to 10 of 100 results fetched - page 1

STATISTICS with MATLAB. GAUSSIAN PROCESS REGRESSION and BAYESIAN OPTIMIZATION

https://www.amazon.com/STATISTICS-GAUSSIAN-REGRESSION-BAYESI...
Statistics and Machine Learning Toolbox provides functions and apps to describe, analyze, and model data. You can use descriptive statistics and plots for exploratory data analysis, fit probability distributions to data, generate random numbers for Monte Carlo simulations, and perform hypothesis tests. Regression and classification algorithms let you draw inferences from data and build predictive models. For multidimensional data analysis, the toolbox provides feature selection, stepwise regression, principal component analysis (PCA), regularization, and other dimensionality reduction methods that let you identify the variables or features that impact your model. The toolbox provides supervised and unsupervised machine learning algorithms, including support vector machines (SVMs), boosted and bagged decision trees, k-nearest neighbors, k-means, k-medoids, hierarchical clustering, Gaussian mixture models, and hidden Markov models. Many of the statistics and machine learning algorithms can be used for computations on data sets that are too big to be stored in memory.
Gaussian process regression (GPR) models are nonparametric kernel-based probabilistic models; you can train a GPR model using the fitrgp function. Gaussian mixture models (GMMs) are often used for data clustering: fitted GMMs usually cluster by assigning each query data point to the multivariate normal component that maximizes the component posterior probability given the data. This book develops Gaussian process regression (GPR), clustering with Gaussian mixture models, and Bayesian optimization using MATLAB (see the Python sketch after this entry). The most important topics in the book are the following:
- Gaussian Mixture Models (GMM): Create, Fit, and Simulate
- Gaussian Process Regression Models
- Kernel (Covariance) Function Options
- Exact GPR Method
- Fully Independent Conditional Approximation for GPR Models
- Approximating the Kernel Function
- Parameter Estimation and Prediction
- Block Coordinate Descent Approximation for GPR Models
- Clustering Using Gaussian Mixture Models
- Cluster Data from a Mixture of Gaussian Distributions
- Tune Gaussian Mixture Models
- Bayesian Optimization Algorithm
- Parallel Bayesian Optimization
- Parallel Bayesian Algorithm
- Bayesian Optimization Plot Functions
- Bayesian Optimization Output Functions
Author: G. Peck
Publication date: 08/15/2018
Kindle book details: Kindle Edition, 88 pages
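
The description above names two concrete workflows: training a GPR model (MATLAB's fitrgp) and clustering by assigning each point to the GMM component with the highest posterior probability. The book itself works in MATLAB; as a rough sketch of the same ideas, here is a Python analogue using scikit-learn. The data, kernel choice, and parameter values are illustrative assumptions, not the book's code:

```python
# Python analogue of the MATLAB workflows described above (the book uses
# fitrgp and fitgmdist; scikit-learn equivalents are sketched here).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# --- Gaussian process regression: nonparametric kernel-based model ---
X = rng.uniform(0, 5, size=(40, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
gpr.fit(X, y)
mean, std = gpr.predict([[2.5]], return_std=True)  # predictive mean and uncertainty

# --- GMM clustering: assign each point to the component with the
# highest posterior probability, as the description states ---
pts = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
gmm = GaussianMixture(n_components=2, random_state=0).fit(pts)
labels = gmm.predict(pts)            # argmax over component posteriors
posteriors = gmm.predict_proba(pts)  # the posterior probabilities themselves
```

Note that GaussianMixture.predict returns exactly the maximum-posterior assignment the blurb describes, while predict_proba exposes the posteriors themselves.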

Gaussian Processes on Trees: From Spin Glasses to Branching Brownian Motion (Cambridge Studies in Advanced Mathematics Book 163)

https://www.amazon.com/Gaussian-Processes-Trees-Branching-Ma...
Branching Brownian motion (BBM) is a classical object in probability theory with deep connections to partial differential equations. This book highlights the connection to classical extreme value theory and to the theory of mean-field spin glasses in statistical mechanics. Starting with a concise review of classical extreme value statistics and a basic introduction to mean-field spin glasses, the author then focuses on branching Brownian motion. Here, the classical results of Bramson on the asymptotics of solutions of the F-KPP equation are reviewed in detail and applied to the recent construction of the extremal process of BBM. The extension of these results to branching Brownian motion with variable speed is then explained. As a self-contained exposition that is accessible to graduate students with some background in probability theory, this book makes a good introduction for anyone interested in entering this exciting field of mathematics.
Author: Anton Bovier
Published by: Cambridge University Press | Publication date: 10/20/2016
Kindle book details: Kindle Edition, 211 pages

Markov Processes, Gaussian Processes, and Local Times (Cambridge Studies in Advanced Mathematics Book 100)

https://www.amazon.com/Processes-Gaussian-Cambridge-Advanced...
This book was first published in 2006. Written by two of the foremost researchers in the field, this book studies the local times of Markov processes by employing isomorphism theorems that relate them to certain associated Gaussian processes. It builds to this material through self-contained but harmonized 'mini-courses' on the relevant ingredients, which assume only knowledge of measure-theoretic probability. The streamlined selection of topics creates an easy entrance for students and experts in related fields. The book starts by developing the fundamentals of Markov process theory and then of Gaussian process theory, including sample path properties. It then proceeds to more advanced results, bringing the reader to the heart of contemporary research. It presents the remarkable isomorphism theorems of Dynkin and Eisenbaum and then shows how they can be applied to obtain new properties of Markov processes by using well-established techniques in Gaussian process theory. This original, readable book will appeal to both researchers and advanced graduate students.
Published by: Cambridge University Press | Publication date: 07/24/2006
Kindle book details: Kindle Edition, 632 pages

Financial Modeling Under Non-Gaussian Distributions (Springer Finance)

https://www.amazon.com/Financial-Modeling-Non-Gaussian-Distr...
This book examines non-Gaussian distributions. It addresses the causes and consequences of non-normality and time dependency in both asset returns and option prices. The book is written for non-mathematicians who want to model financial market prices, so the emphasis throughout is on practice. There are abundant empirical illustrations of the models and techniques described, many of which could be equally applied to other financial time series.
Published by: Springer | Publication date: 04/05/2007
Kindle book details: Kindle Edition, 541 pages

Unsupervised Machine Learning in Python: Master Data Science and Machine Learning with Cluster Analysis, Gaussian Mixture Models, and Principal Components Analysis

https://www.amazon.com/Unsupervised-Machine-Learning-Python-...
In a real-world environment, you can imagine that a robot or an artificial intelligence won't always have access to the optimal answer, or maybe there isn't an optimal correct answer. You'd want that robot to be able to explore the world on its own, and learn things just by looking for patterns.
Think about the large amounts of data being collected today by the likes of the NSA, Google, and other organizations. No human could possibly sift through all that data manually. It was reported recently in the Washington Post and Wall Street Journal that the National Security Agency collects so much surveillance data that it is no longer effective. Could automated pattern discovery solve this problem?
Do you ever wonder how we get the data that we use in our supervised machine learning algorithms? Kaggle always seems to provide us with a nice CSV, complete with Xs and corresponding Ys. If you haven't been involved in acquiring data yourself, you might not have thought about this, but someone has to make this data! A lot of the time this involves manual labor. Sometimes you don't have access to the correct information, or it is infeasible or costly to acquire. You still want to have some idea of the structure of the data. This is where unsupervised machine learning comes into play.
In this book we are first going to talk about clustering. This is where, instead of training on labels, we try to create our own labels by grouping together data that looks alike. The two methods of clustering we'll talk about are k-means clustering and hierarchical clustering. Next, because in machine learning we like to talk about probability distributions, we'll go into Gaussian mixture models and kernel density estimation, where we talk about how to learn the probability distribution of a set of data. One interesting fact is that under certain conditions, Gaussian mixture models and k-means clustering are exactly the same; we'll prove how this is the case (see the sketch after this entry). Lastly, we'll look at the theory behind principal components analysis (PCA). PCA has many useful applications: visualization, dimensionality reduction, denoising, and de-correlation. You will see how it allows us to take a different perspective on latent variables, which first appear when we talk about k-means clustering and GMMs.
All the algorithms we'll talk about in this course are staples in machine learning and data science, so if you want to know how to automatically find patterns in your data with data mining and pattern extraction, without needing someone to put in manual work to label that data, then this book is for you. All of the materials required to follow along in this book are free: you just need to be able to download and install Python, NumPy, SciPy, Matplotlib, and scikit-learn.
Publication date: 05/22/2016
Kindle book details: Kindle Edition, 38 pages
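
The blurb's claim that k-means and Gaussian mixture models coincide "under certain conditions" (the book proves the limiting argument) can be checked empirically: with well-separated clusters and spherical covariances, a GMM's hard assignments typically agree with the k-means partition. A minimal sketch, assuming scikit-learn (which the book recommends installing); the synthetic blob data is an illustrative assumption:

```python
# Empirical check of the k-means / GMM connection mentioned above:
# with spherical covariances and well-separated clusters, the GMM's
# hard assignments usually match the k-means partition.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(1)
# Three tight 2-D blobs centered at 0, 3, and 6 (hypothetical toy data)
X = np.vstack([rng.normal(loc, 0.3, (150, 2)) for loc in (0.0, 3.0, 6.0)])

km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
gmm = GaussianMixture(n_components=3, covariance_type="spherical",
                      random_state=1).fit(X)

# Labels may be permuted between the two models, so compare partitions
# with a permutation-invariant score; ~1.0 means they agree.
print(adjusted_rand_score(km.labels_, gmm.predict(X)))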

Gaussian Markov Random Fields: Theory and Applications (Chapman & Hall/CRC Monographs on Statistics & Applied Probability Book 104)

https://www.amazon.com/Gaussian-Markov-Random-Fields-Applica...
No description available
Published by: CRC Press | Publication date: 04/16/2007
Kindle book details: Kindle Edition, 280 pages

Lectures on Gaussian Processes (SpringerBriefs in Mathematics)

https://www.amazon.com/Lectures-Gaussian-Processes-SpringerB...
Gaussian processes can be viewed as a far-reaching infinite-dimensional extension of classical normal random variables. Their theory provides a powerful range of tools for probabilistic modelling in various academic and technical domains such as statistics, forecasting, finance, information transmission, and machine learning, to mention just a few. The objective of these Briefs is to present a quick and condensed treatment of the core theory that a reader must understand in order to make their own independent contributions. The primary intended readership is PhD/Masters students and researchers working in pure or applied mathematics. The first chapters introduce the essentials of the classical theory of Gaussian processes and measures, with the core notions of reproducing kernel, integral representation, isoperimetric property, and large deviation principle. Since brevity is a priority for teaching and learning purposes, certain technical details and proofs are omitted. The later chapters touch on important recent issues not sufficiently reflected in the literature, such as small deviations, expansions, and quantization of processes. In university teaching, one can build a one-semester advanced course upon these Briefs.
Published by: Springer | Publication date: 01/11/2012
Kindle book details: Kindle Edition, 134 pages

Statistical Rethinking: A Bayesian Course with Examples in R and Stan (Chapman & Hall/CRC Texts in Statistical Science Book 122)

https://www.amazon.com/Statistical-Rethinking-Bayesian-Examp...
Statistical Rethinking: A Bayesian Course with Examples in R and Stan builds readers' knowledge of and confidence in statistical modeling. Reflecting the need for even minor programming in today's model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work.
The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. It covers everything from the basics of regression to multilevel models. The author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, this book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling.
Web Resource: The book is accompanied by an R package (rethinking) that is available on the author's website and GitHub. The two core functions (map and map2stan) of this package allow a variety of statistical models to be constructed from standard model formulas.
Published by: Chapman and Hall/CRC | Publication date: 01/03/2018
Kindle book details: Kindle Edition, 487 pages

Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance (Stochastic Modeling Series Book 1)

https://www.amazon.com/Stable-Non-Gaussian-Random-Processes-...
This book serves as a standard reference, making this area accessible not only to researchers in probability and statistics, but also to graduate students and practitioners. The book assumes only a first-year graduate course in probability. Each chapter begins with a brief overview and concludes with a wide range of exercises at varying levels of difficulty. The authors supply detailed hints for the more challenging problems, and cover many advances made in recent years.
Published by: Routledge | Publication date: 11/22/2017
Kindle book details: Kindle Edition, 632 pages

Gaussian and Non-Gaussian Linear Time Series and Random Fields (Springer Series in Statistics)

https://www.amazon.com/Gaussian-Non-Gaussian-Linear-Springer...
The principal focus here is on autoregressive moving average models and analogous random fields, with probabilistic and statistical questions also being discussed. The book contrasts Gaussian models with noncausal or noninvertible (nonminimum phase) non-Gaussian models and deals with problems of prediction and estimation. New results for nonminimum phase non-Gaussian processes are exposited and open questions are noted. Intended as a text for graduate students in statistics, mathematics, engineering, the natural sciences, and economics, it requires only an initial background in probability theory and statistics. Notes on background, history, and open problems are given at the end of the book.
Published by: Springer | Publication date: 09/27/2012
Kindle book details: Kindle Edition, 247 pages