Hand DJ, 2010, Theory of Stochastic Processes: With Applications to Financial Mathematics and Risk Theory by Dmytro Gusak, Alexander Kukush, Alexey Kulik, Yuliya Mishura, Andrey Pilipenko, International Statistical Review, Vol: 78, Pages: 461-461
Hand DJ, 2010, Econophysics and Companies: Statistical Life and Death in Complex Business Networks by Hideaki Aoyama, Yoshi Fujiwara, Yuichi Ikeda, Hiroshi Iyetomi, Wataru Souma, International Statistical Review, Vol: 78, Pages: 445-445
Hand DJ, 2010, Introduction to Social Statistics: The Logic of Statistical Reasoning by Thomas Dietz, Linda Kalof, International Statistical Review, Vol: 78, Pages: 326-327
Hand DJ, 2010, Machine Learning: An Algorithmic Perspective by Stephen Marsland, International Statistical Review, Vol: 78, Pages: 325-325
Hand DJ, 2010, The Role of Statistics in Business and Industry by Gerald J. Hahn, Necip Doganaksoy, International Statistical Review, Vol: 78, Pages: 152-153
Hand DJ, 2010, Statistical Analysis of Network Data: Methods and Models by Eric D. Kolaczyk, International Statistical Review, Vol: 78, Pages: 135-135
Hand DJ, 2010, Text Mining: Classification, Clustering, and Applications edited by Ashok Srivastava, Mehran Sahami, International Statistical Review, Vol: 78, Pages: 134-135
Hand DJ, 2010, The Laws of Coincidence, 19th International Conference on Computational Statistics (COMPSTAT'2010), Publisher: PHYSICA-VERLAG GMBH & CO, Pages: 33-43
Hand DJ, 2010, Fraud Detection in Telecommunications and Banking: Discussion of Becker, Volinsky, and Wilks (2010) and Sudjianto et al. (2010), TECHNOMETRICS, Vol: 52, Pages: 34-38, ISSN: 0040-1706
Hand DJ, et al., 2010, Evaluating diagnostic tests: The area under the ROC curve and the balance of errors, STATISTICS IN MEDICINE, Vol: 29, Pages: 1502-1510, ISSN: 0277-6715
Because accurate diagnosis lies at the heart of medicine, it is important to be able to evaluate the effectiveness of diagnostic tests. A variety of accuracy measures are used. One particularly widely used measure is the AUC, the area under the receiver operating characteristic (ROC) curve. This measure has a well-understood weakness when comparing ROC curves which cross. However, it also has the more fundamental weakness of failing to balance different kinds of misdiagnoses effectively. This is not merely an aspect of the inevitable arbitrariness in choosing a performance measure, but is a core property of the way the AUC is defined. This property is explored, and an alternative, the H measure, is described.
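The AUC discussed above has a standard probabilistic interpretation: it equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case (the normalised Mann-Whitney U statistic). As a minimal illustrative sketch (not code from the paper; the H measure itself additionally requires a distribution over misclassification cost ratios, which is not shown here):

```python
# Illustrative sketch: AUC as the probability that a random positive case
# outscores a random negative case; ties count as half.
def auc(pos_scores, neg_scores):
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Perfect separation of the classes gives AUC = 1.0; a degenerate scorer
# that assigns every case the same score gives 0.5.
print(auc([0.9, 0.8, 0.7], [0.3, 0.2, 0.1]))  # -> 1.0
```

The paper's criticism can be seen from this formulation: averaging over all score thresholds implicitly weights misclassification costs by a distribution that depends on the classifier itself, which is what the H measure is designed to avoid.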
Hand DJ, Zhou F, et al., 2010, Evaluating models for classifying customers in retail banking collections, JOURNAL OF THE OPERATIONAL RESEARCH SOCIETY, Vol: 61, Pages: 1540-1547, ISSN: 0160-5682
Heard NA, Weston DJ, Platanioti K, et al., 2010, BAYESIAN ANOMALY DETECTION METHODS FOR SOCIAL NETWORKS, ANNALS OF APPLIED STATISTICS, Vol: 4, Pages: 645-662, ISSN: 1932-6157
Pavlidis NG, Adams NM, Nicholson D, et al., 2010, Prospects for Bandit Solutions in Sensor Management, COMPUTER JOURNAL, Vol: 53, Pages: 1370-1383, ISSN: 0010-4620
Yu K, Ally AK, Yang S, et al., 2010, Kernel quantile-based estimation of expected shortfall, JOURNAL OF RISK, Vol: 12, Pages: 15-32, ISSN: 1465-1211
Anagnostopoulos C, Tasoulis DK, Adams NM, et al., 2009, Temporally adaptive estimation of logistic classifiers on data streams, ADVANCES IN DATA ANALYSIS AND CLASSIFICATION, Vol: 3, Pages: 243-261, ISSN: 1862-5347
Bentham J, Hand DJ, et al., 2009, Detecting New Kinds of Patient Safety Incidents, 12th International Conference on Discovery Science, Publisher: SPRINGER-VERLAG BERLIN, Pages: 51-65, ISSN: 0302-9743
We present a novel approach to discovering small groups of anomalously similar pieces of free text. The UK's National Reporting and Learning System (NRLS) contains free text and categorical variables describing several million patient safety incidents that have occurred in the National Health Service. The groups of interest represent previously unknown incident types. The task is particularly challenging because the free text descriptions are of random lengths, from very short to quite extensive, and include arbitrary abbreviations and misspellings, as well as technical medical terms. Incidents of the same type may also be described in various different ways. The aim of the analysis is to produce a global, numerical model of the text, such that the relative positions of the incidents in the model space reflect their meanings. A high dimensional vector space model of the text passages is produced; TF-IDF term weighting is applied, reflecting the differing importance of particular words to a description's meaning. The dimensionality of the model space is reduced, using principal component and linear discriminant analysis. The supervised analysis uses categorical variables from the NRLS, and allows incidents of similar meaning to be positioned close to one another in the model space. Anomaly detection tools are then used to find small groups of descriptions that are more similar than one would expect. The results are evaluated by having the groups assessed qualitatively by domain experts to see whether they are of substantive interest. © 2009 Springer Berlin Heidelberg.
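The TF-IDF term weighting mentioned in the abstract down-weights words that appear in many descriptions and up-weights words distinctive to a few. A minimal sketch under assumed conventions (raw term counts for TF, log inverse document frequency for IDF; the toy documents are hypothetical stand-ins for NRLS incident text, not real data):

```python
import math
from collections import Counter

# Hypothetical toy corpus standing in for free-text incident descriptions.
docs = [
    "patient fell from bed",
    "patient given wrong dose",
    "wrong dose recorded on chart",
]

def tf_idf(docs):
    """Return one {term: weight} dict per document.
    TF = raw count in the document; IDF = log(N / document frequency)."""
    n = len(docs)
    tokenised = [d.split() for d in docs]
    df = Counter(t for toks in tokenised for t in set(toks))
    return [
        {t: c * math.log(n / df[t]) for t, c in Counter(toks).items()}
        for toks in tokenised
    ]

weights = tf_idf(docs)
# "patient" occurs in two of the three documents, so it receives a lower
# weight than terms such as "fell" that occur in only one.
```

In the vector space model described above, each document becomes one such weight vector, and the subsequent dimensionality reduction and anomaly detection operate on those vectors.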
Hand DJ, 2009, Observed Confidence Levels: Theory and Application by Alan M. Polansky, International Statistical Review, Vol: 77, Pages: 316-316
Hand DJ, 2009, Forecasting with Exponential Smoothing: The State Space Approach by Rob J. Hyndman, Anne B. Koehler, J. Keith Ord, Ralph D. Snyder, International Statistical Review, Vol: 77, Pages: 315-316
Hand DJ, 2009, Statistical DNA Forensics: Theory, Methods and Computation by Wing Kam Fung, Yue-Qing Hu, International Statistical Review, Vol: 77, Pages: 314-315
Hand DJ, 2009, Encyclopedia of Quantitative Risk Analysis and Assessment edited by Edward L. Melnick, Brian S. Everitt, International Statistical Review, Vol: 77, Pages: 313-314
Hand DJ, 2009, Did over-reliance on financial models for risk assessment create the financial crisis?, The Foundation for Science and Technology, Pages: 28-29
Hand DJ, 2009, Mathematics and statistics in finance, A Global Village, Pages: 11-15
Hand DJ, 2009, How we are fighting the battle against fraud, Credit Collections and Risk, Pages: 47-48
Hand DJ, 2009, Mining the past to determine the future: Rejoinder, INTERNATIONAL JOURNAL OF FORECASTING, Vol: 25, Pages: 461-462, ISSN: 0169-2070
Hand DJ, 2009, Measuring classifier performance: a coherent alternative to the area under the ROC curve, MACHINE LEARNING, Vol: 77, Pages: 103-123, ISSN: 0885-6125
Hand DJ, 2009, Modern statistics: the myth and the magic, JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES A-STATISTICS IN SOCIETY, Vol: 172, Pages: 287-306, ISSN: 0964-1998
The paper is a personal exploration of the puzzling contradiction between the fundamental excitement of statistics and its poor public image. It begins with the historical foundations and proceeds through the role of applications and the dramatic impact of the computer in shaping the discipline. The mismatch between the reality of statistics and its public perception arises from a number of dichotomies, some of which are explored. In particular, although statistics is perhaps typically seen as an impersonal discipline, in some sense it is very personal, and many of its applications are aimed at providing unique benefit to individuals. This benefit depends on the creation of detailed data sets describing individuals, but the contrary view is that this represents an invasion of privacy. Some observations on statistical education are made, and issues which will affect the future health of the discipline are examined. Copyright (c) 2009 Royal Statistical Society.
Hand DJ, 2009, Mining the past to determine the future: Problems and possibilities, INTERNATIONAL JOURNAL OF FORECASTING, Vol: 25, Pages: 441-451, ISSN: 0169-2070
Technological advances mean that vast data sets are increasingly common. Such data sets provide us with unparalleled opportunities for modelling and predicting the likely outcome of future events. However, they may also bring with them new challenges and difficulties. An awareness of these, and of the weaknesses as well as the possibilities of these large data sets, is necessary if useful forecasts are to be made. This paper looks at some of these difficulties, illustrating them with applications from various areas.
Hand DJ, Yu K, 2009, Justifying adverse actions with new scorecard technologies, Journal of Financial Transformation, Vol: 26, Pages: 87-91
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.