
Today, Games User Research (GUR) forms an integral component of the development of any kind of interactive entertainment. User research stands as the primary source of business intelligence in the incredibly competitive game industry. This book aims to provide the foundational, accessible, go-to resource for people interested in GUR. It is a community-driven effort, written by passionate professionals and researchers in the GUR community as a handbook and guide for everyone interested in user research and games. The book bridges the current knowledge gaps in Games User Research, building the go-to volume for everyone working with games, with an emphasis on those new to the field.

This monograph addresses the need to clarify basic mathematical concepts at the crossroad between gravitation and quantum physics. Selected mathematical and theoretical topics are exposed within a not-too-short, integrated approach that exploits standard and non-standard notions in natural geometric language. The role of structure groups can be regarded as secondary even in the treatment of the gauge fields themselves. Two-spinors yield a partly original ‘minimal geometric data’ approach to Einstein–Cartan–Maxwell–Dirac fields. The gravitational field is jointly represented by a spinor connection and by a soldering form (a ‘tetrad’) valued in a vector bundle naturally constructed from the assumed 2-spinor bundle. We give a presentation of electroweak theory that dispenses with group-related notions, and we introduce a non-standard, natural extension of it. Also within the 2-spinor approach we present: a non-standard view of gauge freedom; a first-order Lagrangian theory of fields with arbitrary spin; an original treatment of Lie derivatives of spinors and spinor connections. Furthermore we introduce an original formulation of Lagrangian field theories based on covariant differentials, which works in the classical and quantum field theories alike and simplifies calculations. We offer a precise mathematical approach to quantum bundles and quantum fields, including ghosts, BRST symmetry and antifields, treating the geometry of quantum bundles and their jet prolongations in terms of Frölicher's notion of smoothness. We propose an approach to quantum particle physics based on the notion of detector, and illustrate the basic scattering computations in that context.

Inductive logic (also known as confirmation theory) seeks to determine the extent to which the premisses of an argument entail its conclusion. This book offers an introduction to the field of inductive logic and develops a new Bayesian inductive logic. Chapter 1 introduces perhaps the simplest and most natural account of inductive logic, classical inductive logic, which is attributable to Ludwig Wittgenstein. Classical inductive logic is seen to fail in a crucial way, so there is a need to develop more sophisticated inductive logics. Chapter 2 presents enough logic and probability theory for the reader to begin to study inductive logic, while Chapter 3 introduces the ways in which logic and probability can be combined in an inductive logic. Chapter 4 analyses the most influential approach to inductive logic, due to W.E. Johnson and Rudolf Carnap. Again, this logic is seen to be inadequate. Chapter 5 shows how an alternative approach to inductive logic follows naturally from the philosophical theory of objective Bayesian epistemology. This approach preserves the inferences that classical inductive logic gets right (Chapter 6). On the other hand, it also offers a way out of the problems that beset classical inductive logic (Chapter 7). Chapter 8 defends the approach by tackling several key criticisms that are often levelled at inductive logic. Chapter 9 presents a formal justification of the version of objective Bayesianism which underpins the approach. Chapter 10 explains what has been achieved and poses some open questions.
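The classical account lends itself to a small computation. The sketch below is our own illustration, not code from the book: it takes the degree to which a premiss entails a conclusion to be the proportion of truth-value assignments satisfying the premiss that also satisfy the conclusion, in the spirit of Wittgenstein's truth-grounds; all names here are hypothetical.

```python
from itertools import product

def entailment_degree(premiss, conclusion, atoms):
    """Classical (Wittgensteinian) degree of partial entailment: the
    proportion of truth-value assignments satisfying the premiss that
    also satisfy the conclusion.  `premiss` and `conclusion` map a dict
    of atomic truth values to a bool; the premiss must be satisfiable."""
    worlds = [dict(zip(atoms, vals))
              for vals in product([True, False], repeat=len(atoms))]
    premiss_worlds = [w for w in worlds if premiss(w)]
    return sum(conclusion(w) for w in premiss_worlds) / len(premiss_worlds)

# "a or b" partially entails "a": 3 assignments satisfy the premiss,
# and 2 of those satisfy the conclusion, so the degree is 2/3.
degree = entailment_degree(lambda w: w["a"] or w["b"],
                           lambda w: w["a"],
                           ["a", "b"])
```

Full (deductive) entailment comes out as degree 1; the crucial failure discussed in Chapter 1 is that an account of this kind cannot learn from experience, which motivates the more sophisticated logics developed later in the book.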

Cryptography is a vital technology that underpins the security of information in computer networks. This book presents an introduction to the role that cryptography plays in providing information security for technologies such as the Internet, mobile phones, payment cards, and wireless local area networks. Focusing on the fundamental principles that ground modern cryptography as they arise in modern applications, it avoids both an over-reliance on transient current technologies and an overwhelming amount of theoretical research. A short appendix is included for those looking for a deeper appreciation of some of the concepts involved. By the end of this book, the reader will not only be able to understand the practical issues concerned with the deployment of cryptographic mechanisms, including the management of cryptographic keys, but will also be able to interpret future developments in this increasingly important area of technology.

Spectral methods have long been popular in direct and large eddy simulation of turbulent flows, but their use in areas with complex-geometry computational domains has historically been much more limited. More recently, the need to find accurate solutions to the viscous flow equations around complex configurations has led to the development of high-order discretization procedures on unstructured meshes, which are also recognized as more efficient for the solution of time-dependent oscillatory solutions over long time periods. This book, an updated edition of the original text, presents the recent and significant progress in multi-domain spectral methods at both the fundamental and application level. Containing material on discontinuous Galerkin methods, non-tensorial nodal spectral element methods in simplex domains, and stabilization and filtering techniques, this text introduces the use of spectral/hp element methods with particular emphasis on their application to unstructured meshes. It provides a detailed explanation of the key concepts underlying the methods, along with practical examples of their derivation and application.

This book addresses a basic question in differential geometry that was first considered by the physicists Stanley Deser and Adam Schwimmer in 1993 in their study of conformal anomalies. The question concerns conformally invariant functionals on the space of Riemannian metrics over a given manifold. These functionals act on a metric by first constructing a Riemannian scalar out of it, and then integrating this scalar over the manifold. Suppose this integral remains invariant under conformal rescalings of the underlying metric. What information can one then deduce about the Riemannian scalar? The Deser–Schwimmer conjecture asserts that the Riemannian scalar must be a linear combination of three obvious candidates, each of which clearly satisfies the required property: a local conformal invariant, a divergence of a Riemannian vector field, and the Chern–Gauss–Bonnet integrand. The book provides a proof of this conjecture. The result itself sheds light on the algebraic structure of conformal anomalies, which appear in many settings in theoretical physics. It also clarifies the geometric significance of the renormalized volume of asymptotically hyperbolic Einstein manifolds. The methods introduced here make an interesting connection between algebraic properties of local invariants—such as the classical Riemannian invariants and the more recently studied conformal invariants—and the study of global invariants, in this case conformally invariant integrals.
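Schematically, and in our own notation rather than the book's, the conjecture says that conformal invariance of the integral forces a three-term decomposition of the integrand:

```latex
\int_M P(g)\, dV_g = \int_M P(e^{2\phi} g)\, dV_{e^{2\phi} g}
\quad \text{for all } \phi \in C^\infty(M)
\;\Longrightarrow\;
P(g) = W(g) + \operatorname{div}_i T^i(g) + c \cdot \operatorname{Pfaff}(R),
```

where $W(g)$ is a local conformal invariant, $T^i(g)$ a natural Riemannian vector field, $\operatorname{Pfaff}(R)$ the Chern–Gauss–Bonnet (Pfaffian) integrand, and $c$ a constant.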

This book presents an important breakthrough in arithmetic geometry. In 2014, this book's author delivered a series of lectures at the University of California, Berkeley, on new ideas in the theory of p-adic geometry. Building on his discovery of perfectoid spaces, the author introduced the concept of “diamonds,” which are to perfectoid spaces what algebraic spaces are to schemes. The introduction of diamonds, along with the development of mixed-characteristic shtukas, set the stage for a critical advance in the discipline. This book shows that the moduli space of mixed-characteristic shtukas is a diamond, raising the possibility of using the cohomology of such spaces to attack the Langlands conjectures for a reductive group over a p-adic field. The book follows the informal style of the original Berkeley lectures, with one chapter per lecture. It explores p-adic and perfectoid spaces before laying out the newer theory of shtukas and their moduli spaces. Points of contact with other threads of the subject, including p-divisible groups, p-adic Hodge theory, and Rapoport–Zink spaces, are thoroughly explained.

The Error of Truth recounts the astonishing and unexpected tale of how quantitative thinking was invented and rose to primacy in our lives in the nineteenth and early twentieth centuries, bringing us to an entirely new perspective on what we know about the world and how we know it—even on what we each think about ourselves. Quantitative thinking is our inclination to view natural and everyday phenomena through a lens of measurable events, with forecasts, odds, predictions, and likelihood playing a dominant part. This worldview, or Weltanschauung, is unlike anything humankind had before, and it came about because of a momentous human achievement: namely, we had learned how to measure uncertainty. Probability as a science had been invented. Through probability theory, we now had correlations, reliable predictions, regressions, the bell-shaped curve for studying social phenomena, and the psychometrics of educational testing. Significantly, these developments in mathematics happened during a relatively short period in world history: roughly, the 130-year period from 1790 to 1920, from about the close of the Enlightenment, through the Napoleonic era and the Industrial Revolutions, to the end of World War I. Quantification is now everywhere in our daily lives, such as in the ubiquitous microchip in smartphones, cars, and appliances, in the Bayesian logic of artificial intelligence, and in applications in business, engineering, medicine, economics, and elsewhere. Probability is the foundation of our quantitative thinking. Here we see its story: when, why, and how it came to be and changed us forever.

This text provides an introduction to the theoretical, practical, and numerical aspects of image registration, with special emphasis on medical imaging. Given a so-called reference and a template image, the goal of image registration is to find a reasonable transformation such that the transformed template is similar to the reference image. Image registration is utilized whenever information obtained from different viewpoints, times, and sensors needs to be combined or compared, and unwanted distortion needs to be eliminated. The book provides a systematic introduction to image registration and discusses the basic mathematical principles, including aspects of approximation theory, image processing, numerics, optimization, partial differential equations, and statistics, with a strong focus on numerical methods. A unified variational approach is introduced that enables a separation into data-related issues, like image-feature- or image-intensity-based similarity measures, and problem-inherent regularization, like elastic or diffusion registration. This general framework is further used for the explanation and classification of established methods as well as the design of new schemes and building blocks, including landmark, thin-plate spline, mutual information, elastic, fluid, demon, diffusion, and curvature registration.
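The registration problem can be illustrated in miniature. The toy sketch below is our own (names and all), not code from the book: it registers one-dimensional "images" by exhaustively searching integer translations under a sum-of-squared-differences similarity measure.

```python
def ssd(a, b):
    """Sum of squared differences: a simple intensity-based similarity
    measure (smaller means more similar)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def shift(img, t, fill=0.0):
    """Translate a 1-D image by t pixels, padding with `fill`."""
    n = len(img)
    return [img[i - t] if 0 <= i - t < n else fill for i in range(n)]

def register_translation(reference, template, max_shift=5):
    """Return the integer translation that makes the transformed
    template most similar to the reference."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda t: ssd(reference, shift(template, t)))

reference = [0, 0, 1, 2, 1, 0, 0]
template  = [0, 1, 2, 1, 0, 0, 0]   # the same feature, one pixel to the left
best_t = register_translation(reference, template)   # shifting by +1 aligns them
```

In the book's variational framework, this brute-force search over a tiny transformation space is replaced by continuous similarity measures and regularizers, minimized with the tools of numerical optimization.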

This book is an introduction to the model-based approach to survey sampling. It consists of three parts, with Part I focusing on estimation of population totals. Chapters 1 and 2 introduce survey sampling and the model-based approach, respectively. Chapter 3 considers the simplest possible model, the homogeneous population model, which is then extended to stratified populations in Chapter 4. Chapter 5 discusses simple linear regression models for populations, and Chapter 6 considers clustered populations. The general linear population model is then used to integrate these results in Chapter 7. Part II of this book considers the properties of estimators based on incorrectly specified models. Chapter 8 develops robust sample designs that lead to unbiased predictors under model misspecification, and shows how flexible modelling methods like nonparametric regression can be used in survey sampling. Chapter 9 extends this development to misspecification-robust prediction variance estimators, and Chapter 10 completes Part II of the book with an exploration of outlier-robust sample survey estimation. Chapters 11 to 17 constitute Part III of the book and show how model-based methods can be used in a variety of problem areas of modern survey sampling. They cover (in order) prediction of nonlinear population quantities, subsampling approaches to prediction variance estimation, design and estimation for multipurpose surveys, prediction for domains, small area estimation, efficient prediction of population distribution functions, and the use of transformations in survey inference. The book is designed to be accessible to undergraduate- and graduate-level students with a good grounding in statistics, and to applied survey statisticians seeking an introduction to model-based survey design and estimation.
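To give the flavour of Part I: under the homogeneous population model of Chapter 3, every unit shares a common mean, so the model-based predictor of a population total is the observed sample total plus the sample mean imputed over the non-sampled units. A minimal sketch (the function and figures are our own hypothetical illustration, not taken from the book):

```python
def predict_total(sample, N):
    """Model-based prediction of a population total under the homogeneous
    population model: the observed total for the n sampled units, plus
    the sample mean imputed for each of the N - n non-sampled units."""
    n = len(sample)
    sample_total = sum(sample)
    sample_mean = sample_total / n
    return sample_total + (N - n) * sample_mean   # algebraically N * mean

# A sample of n = 3 values from a population of N = 100 units:
estimate = predict_total([4.0, 6.0, 5.0], N=100)   # 15 + 97 * 5.0 = 500.0
```

Part II's concern is precisely what happens to predictors of this kind when the assumed model is wrong.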

A central concern of number theory is the study of local-to-global principles, which describe the behavior of a global field K in terms of the behavior of various completions of K. This book looks at a specific example of a local-to-global principle: Weil's conjecture on the Tamagawa number of a semisimple algebraic group G over K. In the case where K is the function field of an algebraic curve X, this conjecture counts the number of G-bundles on X (global information) in terms of the reduction of G at the points of X (local information). The goal of this book is to give a conceptual proof of Weil's conjecture, based on the geometry of the moduli stack of G-bundles. Inspired by ideas from algebraic topology, it introduces a theory of factorization homology in the setting of ℓ-adic sheaves. Using this theory, the authors articulate a different local-to-global principle: a product formula that expresses the cohomology of the moduli stack of G-bundles (a global object) as a tensor product of local factors. Using a version of the Grothendieck–Lefschetz trace formula, the book shows that this product formula implies Weil's conjecture. The proof of the product formula will appear in a sequel volume.
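In its classical adelic form, Weil's conjecture states that the Tamagawa number of a simply connected semisimple group equals one; over a function field this becomes a mass formula for G-bundles. Schematically (our paraphrase, with d = dim G, X a smooth projective curve of genus g over F_q, and κ(x) the residue field at a point x):

```latex
\tau(G) \;=\; \operatorname{vol}\bigl(G(\mathbb{A}_K)/G(K)\bigr) \;=\; 1,
\qquad\text{equivalently}\qquad
\sum_{P \in \operatorname{Bun}_G(\mathbb{F}_q)}
\frac{1}{\lvert \operatorname{Aut}(P) \rvert}
\;=\; q^{(g-1)d} \prod_{x \in X}
\frac{\lvert \kappa(x) \rvert^{d}}{\lvert G(\kappa(x)) \rvert}.
```

The left-hand sum is the global count of G-bundles weighted by automorphisms; the right-hand product runs over the local data at the points of X, which is exactly the local-to-global shape the book exploits.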

Most of our everyday life experiences are multisensory in nature, i.e. they consist of what we see, hear, feel, taste, smell, and much more. Almost any experience, such as eating a meal or going to the cinema, involves a magnificent sensory world. In recent years, many of these experiences have been increasingly transformed through technological advancements such as multisensory devices and intelligent systems. This book takes the reader on a journey that begins with the fundamentals of multisensory experiences, moves through the relationship between the senses and technology, and finishes by considering what the future of those experiences may look like, and our responsibility in it. The book seeks to empower the reader to shape his or her own and other people’s experiences by considering the multisensory worlds in which we live. This book is a powerful and personal story about the authors’ passion for, and viewpoint on, multisensory experiences.

The last 25 years have seen a small revolution in our approach to the understanding of new technology and information systems. It has become a founding assumption of computer-supported cooperative work and human–computer interaction that in the future, if not already, most computer applications will be socially embedded, in the sense that they will become infrastructures (in some sense) for the development of the social practices which they are designed to support. Assuming that IT artifacts have to be understood in this sociotechnical way, traditional criteria for good design in computer science, such as performance, reliability, stability, or usability, arguably need to be supplemented by methods and perspectives which illuminate the way in which technology and social practice are mutually elaborating. This book concerns the philosophy, conceptual apparatus, and methodological concerns which will inform the development of a systematic and long-term human-centered approach to the IT product life cycle, addressing issues concerned with appropriation and infrastructuring. This entails an orientation to “practice-based computing.” The book contains a number of chapters which examine both the conceptual foundations of such an approach and a number of empirical case studies that exemplify it.

Proving in the Elementary Mathematics Classroom addresses a fundamental problem in children’s learning that has received relatively little research attention: Although proving and related concepts (e.g., proof, argumentation, conjecturing) are core to mathematics as a sense-making activity, they currently have a marginal place in elementary classrooms internationally. This book takes a step toward addressing this problem by examining how the place of proving in elementary students’ mathematical work can be elevated through the purposeful design and implementation of mathematics tasks, specifically proving tasks. In particular, the book draws on relevant research and theory and classroom episodes with 8–9-year-olds from England and the United States to examine different kinds of proving tasks and the proving activity they can help generate in the elementary classroom. It examines further the role of elementary teachers in mediating the relationship between proving tasks and proving activity, including major mathematical and pedagogical issues that can arise for them as they implement each kind of proving task in the classroom. In addition to its research contribution in the intersection of the scholarly areas of teaching/learning proving and task design/implementation, the book has important implications for teaching, curricular resources, and teacher education. For example, the book identifies different kinds of proving tasks whose balanced representation in the mathematics classroom and in curricular resources can support a rounded set of learning experiences for elementary students related to proving. It identifies further important mathematical ideas and pedagogical practices related to proving that can be studied in teacher education.

Motivated by the theory of turbulence in fluids, the physicist and chemist Lars Onsager conjectured in 1949 that weak solutions to the incompressible Euler equations might fail to conserve energy if their spatial regularity was below 1/3-Hölder. This book uses the method of convex integration to achieve the best-known results regarding nonuniqueness of solutions and Onsager's conjecture. Presented with an emphasis on the intuition behind the method, the ideas introduced here now play a pivotal role in the ongoing study of weak solutions to fluid dynamics equations. The construction itself—an intricate algorithm with hidden symmetries—mixes together transport equations, algebra, the method of nonstationary phase, underdetermined partial differential equations (PDEs), and specially designed high-frequency waves built using nonlinear phase functions. The powerful “Main Lemma”—used here to construct nonzero solutions with compact support in time and to prove nonuniqueness of solutions to the initial value problem—has been extended to a broad range of applications that are surveyed in the appendix. Appropriate for students and researchers studying nonlinear PDEs, this book aims to be as self-contained as possible and pinpoints the main difficulties that presently stand in the way of a full solution to Onsager's conjecture.
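For orientation, the objects in question are weak solutions of the incompressible Euler equations and their kinetic energy (our schematic statement, not the book's notation):

```latex
\partial_t v + \operatorname{div}(v \otimes v) + \nabla p = 0,
\qquad \operatorname{div} v = 0,
\qquad
E(t) = \tfrac12 \int \lvert v(t,x) \rvert^2 \, dx .
```

Onsager's dichotomy concerns the Hölder exponent of v in the spatial variables: above the 1/3 threshold, E(t) is conserved; below it, conservation may fail, and that is the regime attacked here by convex integration.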

Pattern recognition prowess served our ancestors well. However, today we are confronted by a deluge of data that are far more abstract, complicated, and difficult to interpret than were annual seasons and the sounds of predators. The number of possible patterns that can be identified relative to the number that are genuinely useful has grown exponentially—which means that the chance that a discovered pattern is useful is rapidly approaching zero. Coincidental streaks, clusters, and correlations are the norm—not the exception. Computer algorithms can easily identify an essentially unlimited number of phantom patterns and relationships that vanish when confronted with fresh data. The paradox of big data is that the more data we ransack for patterns, the more likely it is that what we find will be worthless. Our challenge is to overcome our inherited inclination to think that all patterns are meaningful.
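The paradox is easy to demonstrate. The simulation below is our own illustration, not code from the book: it ransacks two hundred series of pure noise for the best-correlated pair, then confronts that "discovery" with fresh noise, where it evaporates.

```python
import random

def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n_series, n_obs = 200, 20
train = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_series)]

# Ransack the data: among ~20,000 pairs of pure-noise series,
# find the pair with the strongest in-sample correlation.
best_pair = max(((i, j) for i in range(n_series) for j in range(i + 1, n_series)),
                key=lambda p: abs(corr(train[p[0]], train[p[1]])))
in_sample = abs(corr(train[best_pair[0]], train[best_pair[1]]))

# Confront the "discovery" with fresh data: the pattern vanishes.
fresh = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_series)]
out_of_sample = abs(corr(fresh[best_pair[0]], fresh[best_pair[1]]))
# in_sample is typically strong; out_of_sample typically hovers near zero.
```

The discovered correlation is an artifact of searching many comparisons, exactly the phantom pattern the book warns against.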

Based on lectures given at Zhejiang University in Hangzhou, China, and Johns Hopkins University, this book introduces eigenfunctions on Riemannian manifolds. The book gives a proof of the sharp Weyl formula for the distribution of eigenvalues of Laplace–Beltrami operators, as well as an improved version of the Weyl formula, the Duistermaat–Guillemin theorem, under natural assumptions on the geodesic flow. The book shows that there is quantum ergodicity of eigenfunctions if the geodesic flow is ergodic. It begins with a treatment of the Hadamard parametrix before proving the first main result, the sharp Weyl formula. The book avoids the use of Tauberian estimates and instead relies on sup-norm estimates for eigenfunctions. It also gives a rapid introduction to the method of stationary phase and the basics of the theory of pseudodifferential operators and microlocal analysis. These are used to prove the Duistermaat–Guillemin theorem. Turning to the related topic of quantum ergodicity, the book demonstrates that if the long-term geodesic flow is uniformly distributed, most eigenfunctions exhibit a similar behavior, in the sense that their mass becomes equidistributed as their frequencies go to infinity.
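The first main result can be stated compactly. With $0 = \lambda_0^2 \le \lambda_1^2 \le \cdots$ the eigenvalues of $-\Delta_g$ on a compact $n$-dimensional Riemannian manifold $(M, g)$, and $\omega_n$ the volume of the unit ball in $\mathbb{R}^n$, the sharp Weyl formula reads (our schematic statement):

```latex
N(\lambda) \;=\; \#\{\, j : \lambda_j \le \lambda \,\}
\;=\; \frac{\omega_n \operatorname{Vol}(M)}{(2\pi)^n}\, \lambda^n
\;+\; O(\lambda^{n-1}).
```

The Duistermaat–Guillemin theorem sharpens the remainder to $o(\lambda^{n-1})$ when the set of periodic geodesics has measure zero.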

This book provides an introduction to algebraic cycles on complex algebraic varieties, to the major conjectures relating them to cohomology, and even more precisely to Hodge structures on cohomology. The book is intended for both students and researchers, and not only presents a survey of the geometric methods developed in the last thirty years to understand the famous Bloch–Beilinson conjectures, but also examines recent work by the author. It focuses on two central objects: the diagonal of a variety—and the partial Bloch–Srinivas type decompositions it may have depending on the size of Chow groups—as well as its small diagonal, which is the right object to consider in order to understand the ring structure on Chow groups and cohomology. An exploration of a sampling of recent works by the author looks at the relation, conjectured in general by Bloch and Beilinson, between the coniveau of general complete intersections and their Chow groups, and a very particular property satisfied by the Chow ring of K3 surfaces and conjecturally by hyper-Kähler manifolds. In particular, the book delves into arguments originating in Nori's work that have been further developed by others.

Understanding change is essential in most scientific fields. This is highlighted by the importance of issues such as shifts in public health and changes in public opinion regarding politicians and policies. Nevertheless, our measurements of the world around us are often imperfect. For example, measurements of attitudes might be biased by social desirability, while estimates of health may be marred by low sensitivity and specificity. In this book we tackle the important issue of how to understand and estimate change in the context of data that are imperfect and exhibit measurement error. The book brings together the latest advances in the area of estimating change in the presence of measurement error from a number of different fields, such as survey methodology, sociology, psychology, statistics, and health. Furthermore, it covers the entire process, from the best ways of collecting longitudinal data, to statistical models to estimate change under uncertainty, to examples of researchers applying these methods in the real world. The book introduces the reader to essential issues of longitudinal data collection such as memory effects, panel conditioning (or mere measurement effects), the use of administrative data, and the collection of multi-mode longitudinal data. It also introduces the reader to some of the most important models used in this area, including quasi-simplex models, latent growth models, latent Markov chains, and equivalence/DIF testing. Further, it discusses the use of vignettes in the context of longitudinal data and estimation methods for multilevel models of change in the presence of measurement error.

This book is devoted to the mathematical modelling of electromagnetic materials. Electromagnetism in matter is developed with particular emphasis on material effects, which are ascribed to memory in time and nonlocality. Within the mathematical modelling, thermodynamics of continuous media plays a central role in that it places significant restrictions on the constitutive equations. Further, as shown in connection with uniqueness, existence and stability, variational settings, and wave propagation, a correct formulation of the pertinent problems is based on the knowledge of the thermodynamic restrictions for the material. The book is divided into four parts. Part I (chapters 1 to 4) reviews the basic concepts of electromagnetism, starting from the integral form of Maxwell’s equations and then turning attention to the physical motivation for materials with memory. Part II (chapters 5 to 9) deals with thermodynamics of systems with memory and applications to evolution and initial/boundary-value problems. It contains developments and results which are unusual in textbooks on electromagnetism and arise from the research literature, mainly post-1960s. Part III (chapters 10 to 12) outlines some topics of materials modelling — nonlinearity, nonlocality, superconductivity, and magnetic hysteresis — which are of great interest both in mathematics and in applications.