Chen C-M, Kwasnicki RM, Curto VF, et al., 2019, Tissue Oxygenation Sensor and an Active In Vitro Phantom for Sensor Validation, IEEE SENSORS JOURNAL, Vol: 19, Pages: 8233-8240, ISSN: 1530-437X
Sun Y, Lo FPW, Lo B, 2019, EEG-based user identification system using 1D-convolutional long short-term memory neural networks, Expert Systems with Applications, Vol: 125, Pages: 259-267, ISSN: 0957-4174
© 2019 Elsevier Ltd Electroencephalographic (EEG) signals have been widely used in medical applications, yet the use of EEG signals for user identification in healthcare and Internet of Things (IoT) systems has only gained interest in the last few years. The advantages of EEG-based user identification systems lie in their dynamic properties and uniqueness among different individuals. However, for this same reason, manually designed features are not always well adapted to the task. Therefore, a novel approach based on a 1D Convolutional Long Short-Term Memory Neural Network (1D-Convolutional LSTM) for EEG-based user identification is proposed in this paper. The performance of the proposed approach was validated with a public database consisting of EEG data from 109 subjects. The experimental results showed that the proposed network achieves a very high average accuracy of 99.58% when using only 16 channels of EEG signals, outperforming state-of-the-art EEG-based user identification methods. The combined use of CNNs and LSTMs in the proposed 1D-Convolutional LSTM can greatly improve the accuracy of user identification systems by exploiting the spatiotemporal features of the EEG signals, while lowering system cost by reducing the number of EEG electrodes used.
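The abstract above describes stacking a 1D convolution (spatial features across EEG channels) in front of an LSTM (temporal features). A minimal sketch of this idea in PyTorch is shown below; the layer sizes, kernel width and window length are assumptions for illustration, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class ConvLSTMIdentifier(nn.Module):
    """Illustrative 1D-Convolutional LSTM for per-window user identification.
    Hyperparameters are assumptions, not the paper's exact architecture."""
    def __init__(self, n_channels=16, n_subjects=109):
        super().__init__()
        # 1D convolution extracts spatial features across EEG channels
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM models temporal dependencies in the convolved sequence
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_subjects)

    def forward(self, x):            # x: (batch, channels, time)
        h = self.conv(x)             # (batch, 32, time//2)
        h = h.transpose(1, 2)        # (batch, time//2, 32) for the LSTM
        _, (hn, _) = self.lstm(h)    # hn: (1, batch, 64) final hidden state
        return self.fc(hn[-1])       # (batch, n_subjects) class scores

model = ConvLSTMIdentifier()
scores = model(torch.randn(4, 16, 160))  # 4 windows, 16 channels, 160 samples
print(scores.shape)  # torch.Size([4, 109])
```

The classifier head would be trained with a standard cross-entropy loss, one class per enrolled subject.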
McCrory M, Sun M, Sazonov E, et al., 2019, Methodology for objective, passive, image- and sensor-based assessment of dietary intake, meal-timing, and food-related activity in Ghana and Kenya (P13-028-19)., Current Developments in Nutrition, Vol: 3, Pages: 1247-1247, ISSN: 2475-2991
Objectives: Herein we describe a new system we have developed for assessment of dietary intake, meal timing, and food-related activities, adapted for use in low- and middle-income countries. Methods: System components include one or more wearable cameras (the Automatic Ingestion Monitor-2 (AIM), an eyeglasses-mounted wearable chewing sensor and micro-camera; ear-worn camera; the eButton, a camera attached to clothes; and eHat, a camera attached to a visor worn by the mother when feeding infants and toddlers), and custom software for evaluation of dietary intake from food-based images and sensor-detected food intake. General protocol: The primary caregiver of the family uses one or more wearable cameras during all waking hours. The cameras aim directly in front of the participant and capture images every few seconds, thereby providing multiple images of all food-related activities throughout the day. The camera may be temporarily removed for short periods to preserve privacy, such as during bathing and personal care. For analysis, images and sensor signals are processed by the study team in custom software. The images are time-stamped, arranged in chronological order, and linked with sensor-detected eating occasions. The software also incorporates food composition databases of choice such as the West African Foods Database, a Kenyan Foods Database, and the USDA Food Composition Database, allowing for image-based dietary assessment by trained nutritionists. Images can be linked with nutritional analysis and tagged with an activity label (e.g., food shopping, child feeding, cooking, eating). Assessment of food-related activities such as food-shopping, food gathering from gardens, cooking, and feeding of other family members by the primary caregiver can help provide context for dietary intake and additional information to increase accuracy of dietary assessment and analysis of eating behavior. Examples of the latter include assessment of specific ingredients in prepared
Berthelot M, Henry FP, Hunter J, et al., Pervasive wearable device for free tissue transfer monitoring based on advanced data analysis: clinical study report, Journal of Biomedical Optics, ISSN: 1083-3668
Free tissue transfer (FTT) surgery for breast reconstruction following mastectomy has become a routine operation with high success rates. Although failure is low, it can have a devastating impact on patient recovery, prognosis and psychological well-being. Continuous and objective monitoring of tissue oxygen saturation (StO2) has been shown to reduce failure rates through rapid detection of postoperative vascular complications. We have developed a pervasive wearable wireless device that employs near infrared spectroscopy (NIRS) to continuously monitor FTT via StO2 measurement. Previously tested on different models, this paper introduces the results of a clinical study. The goal of the study is to demonstrate that the developed device can reliably detect StO2 variations in a clinical setting: 14 patients were recruited. Advanced data analysis was performed on the StO2 variations, the relative StO2 gradient change, and the classification of the StO2 within different clusters of blood occlusion level (from 0% to 100% in 25% steps) based on previous studies made on a vascular phantom and animals. The outcomes of the clinical study concur with previous experimental results and the expected biological responses. This suggests the device is able to correctly detect perfusion changes and provide real-time assessment of the viability of the FTT in a clinical setting.
Guo Y, Sun M, Lo FPW, et al., 2019, Visual guidance and automatic control for robotic personalized stent graft manufacturing, Pages: 8740-8746, ISSN: 1050-4729
© 2019 IEEE. Personalized stent grafts are designed to treat Abdominal Aortic Aneurysms (AAA). Due to individual differences in arterial structures, a stent graft has to be custom made for each AAA patient. Robotic platforms for autonomous personalized stent graft manufacturing have recently been proposed which rely upon stereo vision systems for coordinating multiple robots in fabricating customized stent grafts. This paper proposes a novel hybrid vision system for real-time visual servoing for personalized stent graft manufacturing. To coordinate the robotic arms, this system projects a dynamic stereo microscope coordinate system onto a static wide-angle-view stereo webcam coordinate system. The multiple stereo camera configuration enables accurate localization of the needle in 3D during the sewing process. The scale-invariant feature transform (SIFT) method and color filtering are implemented for stereo matching and feature identification for object localization. To maintain a clear view of the sewing process, a visual servoing system is developed to guide the stereo microscopes in tracking the needle movements. The deep deterministic policy gradient (DDPG) reinforcement learning algorithm is developed for real-time intelligent robotic control. Experimental results have shown that the robotic arm can learn to reach the desired targets autonomously.
Sun Y, Lo B, 2019, An artificial neural network framework for gait based biometrics, IEEE Journal of Biomedical and Health Informatics, Vol: 23, Pages: 987-998, ISSN: 2168-2194
As the popularity of wearable and implantable Body Sensor Network (BSN) devices increases, there is a growing concern regarding the data security of such power-constrained miniaturized medical devices. With limited computational power, BSN devices are often not able to provide strong security mechanisms to protect sensitive personal and health information, such as one's physiological data. Consequently, many new methods of securing Wireless Body Area Networks (WBANs) have been proposed recently. One effective solution is the Biometric Cryptosystem (BCS) approach. BCS exploits physiological and behavioral biometric traits, including face, iris, fingerprints, Electrocardiogram (ECG), and Photoplethysmography (PPG). In this paper, we propose a new BCS approach for securing wireless communications for wearable and implantable healthcare devices using gait signal energy variations and an Artificial Neural Network (ANN) framework. By simultaneously extracting similar features from BSN sensors using our approach, binary keys can be generated on demand without user intervention. Through an extensive analysis of our BCS approach using a gait dataset, the results have shown that the binary keys generated using our approach have high entropy for all subjects. The keys can pass both NIST and Dieharder statistical tests with high efficiency. The experimental results also show the robustness of the proposed approach in terms of the similarity of intra-class keys and the discriminability of the inter-class keys.
Rosa BG, Anastasova-Ivanova S, Lo B, et al., 2019, Towards a Fully Automatic Food Intake Recognition System Using Acoustic, Image Capturing and Glucose Measurements, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886
Singh RK, Varghese RJ, Liu J, et al., 2019, A multi-sensor fusion approach for intention detection, Biosystems and Biorobotics, Pages: 454-458
© Springer Nature Switzerland AG 2019. For assistive devices to seamlessly and promptly assist users with activities of daily living (ADL), it is important to understand the user's intention. Current assistive systems are mostly driven by unimodal sensory input, which limits their accuracy and responsiveness. In this paper, we propose a context-aware sensor fusion framework for detecting intention in assistive robotic devices, which fuses information from a wearable video camera and wearable inertial measurement unit (IMU) sensors. A Naive Bayes classifier is used to predict the intent to move from the IMU data and the object classification results from the video data. The proposed approach can achieve an accuracy of 85.2% in detecting movement intention.
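The fusion step described above — a Naive Bayes classifier over IMU features and camera object-classification output — can be sketched as follows. All data here is synthetic, and the feature layout (one IMU motion-energy feature plus one detected-object class) is an assumption for illustration, not the paper's actual feature set:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Toy fused feature vector per time window: [IMU motion-energy feature,
# detected-object class from the camera (0=none, 1=cup, 2=door handle)].
# Labels: 1 = intends to move/reach, 0 = no intention. All values synthetic.
n = 200
imu_energy = np.concatenate([rng.normal(0.2, 0.05, n), rng.normal(0.8, 0.05, n)])
obj_class  = np.concatenate([np.zeros(n, dtype=int), rng.integers(1, 3, n)])
X = np.column_stack([imu_energy, obj_class])
y = np.repeat([0, 1], n)

# Naive Bayes fuses both modalities under a conditional-independence
# assumption: each feature contributes an independent likelihood term.
clf = GaussianNB().fit(X, y)
print(clf.predict([[0.85, 2]]))  # [1] -> movement intention detected
```

In a real system the IMU feature and object label would be computed from the live sensor streams for each window before being fed to the classifier.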
Bernstein A, Varghese RJ, Liu J, et al., 2019, An Assistive Ankle Joint Exoskeleton for Gait Impairment, Biosystems and Biorobotics, Pages: 658-662
© 2019, Springer Nature Switzerland AG. Motor rehabilitation and assistance post-stroke are becoming a major concern for healthcare services with an increasingly aging population. Wearable robots can be a technological solution to support gait rehabilitation and to provide assistance enabling users to carry out activities of daily living independently. To address the need for long-term assistance for stroke survivors suffering from drop foot, this paper proposes a low-cost, assistive ankle joint exoskeleton for gait assistance. The proposed exoskeleton is designed to provide ankle-foot support, thus enabling a normal walking gait. Baseline gait readings were recorded from two force sensors attached to a custom-built shoe insole of the exoskeleton. From our experiments, the average maximum forces during heel-strike (63.95 N) and toe-off (54.84 N) were found, in addition to the average period of a gait cycle (1.45 s). The timing and force data were used to control the actuation of the exoskeleton's tendons to prevent the foot from preemptively hitting the ground during the swing phase.
Zhang Y, Zhang Y, Lo B, et al., 2019, Wearable ECG signal processing for automated cardiac arrhythmia classification using CFASE-based feature selection, ISSN: 0266-4720
© 2019 John Wiley & Sons, Ltd Classification of electrocardiogram (ECG) signals is essential for the automatic diagnosis of cardiovascular disease. With the recent advancement of low-cost wearable ECG devices, it has become more feasible to utilize ECG for cardiac arrhythmia classification in daily life. In this paper, we propose a lightweight approach to classify five types of cardiac arrhythmia, namely, normal beat (N), atrial premature contraction (A), premature ventricular contraction (V), left bundle branch block beat (L), and right bundle branch block beat (R). A combined method of frequency analysis and Shannon entropy is applied to extract appropriate statistical features. An information gain criterion is employed to select features; the results show that 10 highly effective features can obtain performance measures comparable to those obtained using the complete feature set. The selected features are then fed to Random Forest, K-Nearest Neighbour, and J48 classifiers. To evaluate classification performance, tenfold cross-validation is used to verify the effectiveness of our method. Experimental results show that the Random Forest classifier demonstrates significant performance with the highest sensitivity of 98.1%, specificity of 99.5%, precision of 98.1%, and accuracy of 98.08%, outperforming other representative approaches for automated cardiac arrhythmia classification.
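A generic version of the select-then-classify pipeline described above can be sketched with scikit-learn. This is an illustrative stand-in only: the paper's CFASE-based selection is replaced by a mutual-information score (an information-gain-style criterion), and the ECG features by synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for per-beat ECG feature vectors (5 arrhythmia classes).
X, y = make_classification(n_samples=500, n_features=40, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

# Keep the 10 features scoring highest on mutual information, then classify
# the reduced feature vectors with a Random Forest.
clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),
    RandomForestClassifier(n_estimators=100, random_state=0),
)

# Tenfold cross-validation, matching the evaluation protocol in the abstract.
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

Wrapping selection and classification in one pipeline ensures the feature scores are re-computed on each training fold, avoiding selection leakage into the test folds.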
Lo FP-W, Sun Y, Qiu J, et al., 2019, A Novel Vision-based Approach for Dietary Assessment using Deep Learning View Synthesis, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886
Zhang K, Chen C-M, Anastasova S, et al., 2019, Roll-to-Roll processable OTFT-based Amplifier and Application for pH sensing, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886
Qiu J, Lo FP-W, Lo B, 2019, Assessing Individual Dietary Intake in Food Sharing Scenarios with a 360 Camera and Deep Learning, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886
Sun Y, Lo FP-W, Lo B, 2019, A Deep Learning Approach on Gender and Age Recognition using a Single Inertial Sensor, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886
Chen S, Kang L, Lu Y, et al., 2019, Discriminative Information Added by Wearable Sensors for Early Screening - a Case Study on Diabetic Peripheral Neuropathy, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886
Ahmed MR, Zhang Y, Feng Z, et al., 2019, Neuroimaging and Machine Learning for Dementia Diagnosis: Recent Advancements and Future Prospects., IEEE Rev Biomed Eng, Vol: 12, Pages: 19-33
Dementia, a chronic and progressive cognitive decline of brain function caused by disease or impairment, is becoming more prevalent due to the aging population. A major challenge in dementia is achieving accurate and timely diagnosis. In recent years, neuroimaging with computer-aided algorithms has made remarkable advances in addressing this challenge. The success of these approaches is mostly attributed to the application of machine learning techniques for neuroimaging. In this review paper, we present a comprehensive survey of automated diagnostic approaches for dementia using medical image analysis and machine learning algorithms published in recent years. Based on a rigorous review of the existing works, we have found that, while most of the studies have focused on Alzheimer's disease, the identification of other types of dementia remains a major challenge; multimodal imaging analysis with deep learning approaches has shown promising results in diagnosing these other types of dementia. The main contributions of this review paper are as follows. 1) Based on a detailed analysis of the existing literature, this paper discusses neuroimaging procedures for dementia diagnosis. 2) It systematically explains the most recent machine learning techniques and, in particular, deep learning approaches for early detection of dementia.
Lo FP-W, Sun Y, Qiu J, et al., 2018, Food Volume Estimation Based on Deep Learning View Synthesis from a Single Depth Map, NUTRIENTS, Vol: 10, ISSN: 2072-6643
Teachasrisaksakul K, Wu L, Yang G-Z, et al., 2018, Hand Gesture Recognition with Inertial Sensors., 40th International Conference of the IEEE Engineering in Medicine and Biology Society, Publisher: IEEE, Pages: 3517-3520, ISSN: 1557-170X
Dyscalculia is a learning difficulty hindering fundamental arithmetical competence. Children with dyscalculia often have difficulties in engaging in lessons taught with traditional teaching methods. In contrast, an educational game is an attractive alternative. Recent educational studies have shown that gestures can have a positive impact on learning. With the recent development of low-cost wearable sensors, a gesture-based educational game could be used as a tool to improve learning outcomes, particularly for children with dyscalculia. In this paper, two generic gesture recognition methods are proposed for developing an interactive educational game with wearable inertial sensors. The first method is a multilayer perceptron classifier based on the accelerometer and gyroscope readings to recognize hand gestures. As the gyroscope is more power-demanding and not all low-cost wearable devices have one, we have simplified the method using a nearest centroid classifier for classifying hand gestures with only the accelerometer readings. The method has been integrated into open-source educational games. Experimental results based on 5 subjects have demonstrated the accuracy of inertial-sensor-based hand gesture recognition. The results have shown that both methods can recognize 15 different hand gestures with an accuracy of over 93%.
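The simplified accelerometer-only method described above — a nearest centroid classifier over per-window accelerometer features — can be sketched as below. The six-feature layout and the gesture data are invented for illustration; the paper's actual features are not specified here:

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)

# Toy stand-in for accelerometer feature vectors: e.g. mean and std of each
# of the 3 axes over a gesture window (6 features), 3 gesture classes.
def make_gestures(centre, n=20):
    return rng.normal(loc=centre, scale=0.1, size=(n, 6))

X = np.vstack([make_gestures(c) for c in ([0] * 6, [1] * 6, [2] * 6)])
y = np.repeat([0, 1, 2], 20)

# Nearest centroid: store one mean feature vector per gesture class, then
# label a new window by its closest centroid -- cheap enough for wearables.
clf = NearestCentroid().fit(X, y)
probe = rng.normal(loc=[1] * 6, scale=0.1, size=(1, 6))
print(clf.predict(probe))  # [1]
```

At inference time only the per-class centroids need to be stored, which is why this variant suits low-cost devices better than the multilayer perceptron.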
Berthelot M, Lo B, Yang G-Z, et al., 2018, Pilot study: Free flap monitoring using a new tissue oxygen saturation (StO2) device, European Journal of Surgical Oncology, Vol: 44, Pages: 900-900, ISSN: 0748-7983
Gu X, Deligianni F, Lo B, et al., 2018, Markerless gait analysis based on a single RGB camera, International Conference on Wearable and Implantable Body Sensor Networks, Publisher: IEEE, ISSN: 2376-8894
Gait analysis is an important tool for monitoring and preventing injuries, as well as for quantifying functional decline in neurological diseases and elderly people. In most cases, it is more meaningful to monitor patients in natural living environments with low-end equipment such as cameras and wearable sensors. However, inertial sensors cannot provide enough detail on angular dynamics. This paper presents a method that uses a single RGB camera to track the 2D joint coordinates with state-of-the-art vision algorithms. Reconstruction of the 3D trajectories uses a sparse representation of an active shape model. Subsequently, we extract gait features and validate our results against a state-of-the-art commercial multi-camera tracking system. Our results are comparable to those from the current literature based on depth cameras and optical markers to extract gait characteristics.
Sun Y, Lo B, Random number generation using inertial measurement unit signals for on-body IoT devices, Living in the Internet of Things: Cybersecurity of the IoT - A PETRAS, IoTUK and IET Event, Publisher: IET
With the increasing popularity of wearable and implantable technologies for medical applications, there is a growing concern about the security and data protection of on-body Internet-of-Things (IoT) devices. As a solution, cryptographic systems are often adopted to encrypt the data, and a Random Number Generator (RNG) is of vital importance to such systems. This paper proposes a new random number generation method for securing on-body IoT devices based on temporal signal variations of the outputs of the Inertial Measurement Units (IMUs) worn by the users while walking. As most new wearable and implantable devices have built-in IMUs and walking gait signals can be extracted from these body sensors, this method can be applied and integrated into the cryptographic systems of these new devices. To generate the random numbers, this method divides IMU signals into gait cycles and generates bits by comparing energy differences between the sensor signals in a gait cycle and the averaged IMU signals over multiple gait cycles. The generated bits are then re-indexed in descending order by the absolute values of the associated energy differences to further randomise the data and generate high-entropy random numbers. Two datasets were used in the studies to generate random numbers, which were rigorously tested and passed four well-known randomness test suites, namely NIST-STS, ENT, Dieharder, and RaBiGeTe.
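The bit-generation procedure described above can be sketched as follows. This is an illustrative reading of the abstract, not the authors' code: the energy definition, fixed-length cycle segmentation and the random stand-in signal are all assumptions:

```python
import numpy as np

def gait_random_bits(signal, cycle_len):
    """Sketch of the described scheme: one bit per gait cycle, then the bits
    re-indexed by the magnitude of each cycle's energy deviation."""
    n_cycles = len(signal) // cycle_len
    cycles = signal[:n_cycles * cycle_len].reshape(n_cycles, cycle_len)
    energies = (cycles ** 2).sum(axis=1)     # per-cycle signal energy
    diffs = energies - energies.mean()       # deviation from the averaged cycles
    bits = (diffs > 0).astype(int)           # bit from the sign of the deviation
    order = np.argsort(-np.abs(diffs))       # descending |deviation| re-indexing
    return bits[order]

rng = np.random.default_rng(1)
imu = rng.normal(size=1024)      # stand-in for a walking accelerometer trace
key_bits = gait_random_bits(imu, cycle_len=64)
print(key_bits.shape)  # (16,)
```

A real implementation would segment cycles by detected gait events (e.g. heel strikes) rather than a fixed window length, and would validate the output with the randomness suites named in the abstract.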
Lo BPL, Innovative Sensing Technologies for Developing Countries, IEEE Biomedical and Health Informatics BHI 2018
Friedl KE, Hixson JD, Buller MJ, et al., 2018, Guest editorial - 13th Body Sensor Networks Symposium, IEEE Journal of Biomedical and Health Informatics, Vol: 22, Pages: 3-4, ISSN: 2168-2194
Berthelot M, Yang G-Z, Lo B, 2018, Tomographic Probe for Perfusion Analysis in Deep Layer Tissue, 15th International Conference on Biomedical and Health Informatics (BHI) and Wearable and Implantable Body Sensor Networks (BSN) of the IEEE-Engineering-in-Medicine-and-Biology-Society, Publisher: IEEE, Pages: 86-89, ISSN: 2376-8886
Sun Y, Yang G, Lo B, An artificial neural network framework for lower limb motion signal estimation with foot-mounted inertial sensors, IEEE Conference on Body Sensor Networks (BSN) 2018, Publisher: IEEE
This paper proposes a novel artificial neural network based method for real-time gait analysis with a minimal number of Inertial Measurement Units (IMUs). Accurate lower limb attitude estimation has great potential for clinical gait diagnosis for orthopaedic patients and patients with neurological diseases. However, the use of multiple wearable sensors hinders the ubiquitous use of inertial sensors for detailed gait analysis. This paper proposes the use of two IMUs mounted on the shoes to estimate the IMU signals at the shin, thigh and waist for accurate attitude estimation of the lower limbs. By using the artificial neural network framework, gait parameters such as the angle, velocity and displacement of the IMUs can be estimated. The experimental results have shown that the proposed method can accurately estimate the IMU signals on the lower limbs based only on the IMU signals on the shoes, which demonstrates its potential for lower limb motion tracking and real-time gait analysis.
Gao A, Lo P, Lo B, Food volume estimation for quantifying dietary intake with a wearable camera, Body Sensor Networks Conference 2018, Publisher: IEEE
A novel food volume measurement technique is proposed in this paper for accurate quantification of the daily dietary intake of the user. The technique is based on simultaneous localisation and mapping (SLAM), a modified version of the convex hull algorithm, and a 3D mesh object reconstruction technique. This paper explores the feasibility of applying SLAM techniques for continuous food volume measurement with a monocular wearable camera. A sparse map is generated by SLAM after capturing the images of the food item with the camera, and the multiple convex hull algorithm is applied to form a 3D mesh object. The volume of the target object can then be computed based on the mesh object. Compared to previous volume measurement techniques, the proposed method can measure the food volume continuously with no prior information such as a pre-defined food shape model. Experiments have been carried out to evaluate this new technique and have shown the feasibility and accuracy of the proposed algorithm in measuring food volume.
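The final volume computation — taking the sparse point cloud produced by SLAM and measuring the volume of a hull fitted to it — can be illustrated with SciPy. Note this uses a single convex hull and a synthetic point cloud, not the paper's multiple convex hull variant or real SLAM output:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Stand-in for a 3D point cloud of a food item recovered by SLAM: points
# sampled inside a unit cube (a real SLAM map would be sparser and noisier).
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(500, 3))

# The convex hull of the point cloud gives a watertight mesh whose enclosed
# volume approximates the object volume (an overestimate for concave items,
# which is why the paper modifies the plain convex hull).
hull = ConvexHull(points)
print(f"hull volume: {hull.volume:.3f}")  # close to 1.0 for a filled unit cube
```

Converting the hull volume to dietary intake would then require the camera's metric scale, which monocular SLAM must recover separately.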
Lo BPL, Guo Y, Zhang Y, et al., Automated epileptic seizure detection by analyzing wearable EEG signals using extended correlation-based feature selection, IEEE BSN 2018, Publisher: IEEE
The electroencephalogram (EEG), which measures the electrical activity of the brain, has been widely employed for diagnosing epilepsy, a type of brain abnormality. With the advancement of low-cost wearable brain-computer interface devices, it is possible to monitor EEG for epileptic seizure detection in daily use. However, it is still challenging to develop seizure classification algorithms with considerably higher accuracy and lower complexity. In this study, we propose a lightweight method which reduces the number of features for a multiclass classification to identify three different seizure statuses (i.e., healthy, interictal and epileptic seizure) from EEG signals acquired with a wearable EEG sensor, using Extended Correlation-Based Feature Selection (ECFS). More specifically, there are three steps in our proposed approach. Firstly, the EEG signals are segmented into five frequency bands; secondly, we extract the features while eliminating the unnecessary feature space with the proposed ECFS method. Finally, the features are fed into five different classification algorithms, including Random Forest, Support Vector Machine, Logistic Model Trees, RBF Network and Multilayer Perceptron. Experimental results have shown that Logistic Model Trees provide the highest accuracy of 97.6% compared to the other classifiers.
Berthelot ME, Yang GZ, Lo B, 2017, A self-calibrated tissue viability sensor for free flap monitoring, IEEE Journal of Biomedical and Health Informatics, Vol: 22, Pages: 5-14, ISSN: 2168-2194
In fasciocutaneous free flap surgery, close postoperative monitoring is crucial for detecting flap failure, as around 10% of cases require additional surgery due to compromised anastomosis. Different biochemical and biophysical techniques have been developed for continuous flap monitoring; however, they all have shortcomings in terms of reliability, elevated cost, potential risks to the patient and inability to adapt to the patient's phenotype. A wearable wireless device based on near infrared spectroscopy (NIRS) has been developed for continuous blood flow and perfusion monitoring by quantifying tissue oxygen saturation (StO2). This miniaturized and low-cost device is designed for postoperative monitoring of flap viability. With self-calibration, the device can adapt itself to the characteristics of the patient's skin, such as tone and thickness. An extensive study was conducted with 32 volunteers. The experimental results show that the device can obtain reliable StO2 measurements across different phenotypes (age, sex, skin tone and thickness). To assess its ability to detect flap failure, the sensor was validated with an animal study. Free groin flaps were performed on 16 Sprague Dawley rats. Results demonstrate the accuracy of the sensor in assessing flap viability and identifying the origin of failure (venous or arterial thrombosis).
Deligianni F, Wong CW, Lo B, et al., 2017, A fusion framework to estimate plantar ground force distributions and ankle dynamics, Information Fusion, Vol: 41, Pages: 255-263, ISSN: 1566-2535
Gait analysis plays an important role in several conditions, including the rehabilitation of patients with orthopaedic problems and the monitoring of neurological conditions, mental health problems and the well-being of elderly subjects. It also constitutes an index of good posture, and thus it can be used to prevent injuries in athletes and to monitor mental health in typical subjects. Usually, accurate gait analysis is based on the measurement of ankle dynamics and ground reaction forces. Therefore, it requires expensive multi-camera systems and pressure sensors, which cannot be easily employed in a free-living environment. We propose a fusion framework that uses an ear-worn activity recognition (e-AR) sensor and a single video camera to estimate foot angle during key gait events. To this end, we use canonical correlation analysis with a fused-lasso penalty in a two-step approach that first learns a model of the timing distribution of ground reaction forces based on the e-AR signal only, and subsequently models the eversion/inversion as well as the dorsiflexion of the ankle based on the combined features of the e-AR sensor and the video. The results show that incorporating invariant features of angular ankle information from the video recordings substantially improves the estimation of the foot progression angle.
Sun Y, Wong C, Yang GZ, et al., 2017, Secure key generation using gait features for Body Sensor Networks, IEEE EMBS Annual International Body Sensor Networks Conference, Publisher: IEEE, Pages: 206-210
With the increasing popularity of wearable and Body Sensor Network (BSN) technologies, there is a growing concern about the security and data protection of such low-power pervasive devices. With very limited computational power, BSN sensors often cannot provide the necessary data protection when collecting and processing sensitive personal information. Since conventional network security schemes are too computationally demanding for miniaturized BSN sensors, new methods of securing BSNs have been proposed, among which the Biometric Cryptosystem (BCS) appears to be an effective solution. With regard to BCS security solutions, physiological traits, such as an individual's face, iris, fingerprint, electrocardiogram (ECG), and photoplethysmogram (PPG), have been widely exploited. However, behavioural traits such as gait are rarely studied. In this paper, a novel lightweight symmetric key generation scheme based on the timing information of gait is proposed. By extracting similar timing information from gait acceleration signals simultaneously from body-worn sensors, symmetric keys can be generated on all the sensor nodes at the same time. Based on the characteristics of the generated keys and BSNs, a fuzzy commitment based key distribution scheme is also developed to distribute the keys amongst the sensor nodes.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.