
Network testing in the 5G era

Mobile network operators need to test the stability and performance of their networks in order to ensure good service. Because of the enormous amounts of data involved, this is hardly possible with manual methods, so operators are turning to artificial intelligence to solve the challenge.

With the advent of the fifth generation of mobile communications, network testers are confronted with a novel situation. Many aspects of 5G – diverse frequency bands, network operators’ different rollout programs, the breadth of applications such as IoT, conventional mobile communications, traffic networking, and so on – are leading to highly differentiated networks and test data.

Analysing this data in the usual aggregated form quickly leads to distorted results and incorrect interpretations. AI offers a good way out of this dilemma. Algorithm-based methods only reflect the specific theories they encode, and those theories may not be ideal, but the data itself is reliable. AI methods such as pattern recognition can evaluate data sets without preconceptions and discover relationships that would remain hidden from human analysts.

Big data needs AI

The term “artificial intelligence” has been bandied about a lot in recent years, often without a clear definition of what it means, and with no differentiation between systems that are able to learn (a characteristic of AI) and systems that are simply based on complex algorithms.

The term “machine learning” is a bit more specific. Here the goal is to automatically derive general rules from a large volume of data. After completion of the learning process, yes/no decisions can be made based on multidimensional dependencies or features.

The decision rules are learned by interpolating between real data points rather than being formulated by human experts. This method requires very large data volumes and an intensive training phase, but in the application phase it can correctly interpret new measurement data almost instantaneously.
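As a toy illustration of how a decision rule can come directly from labelled data points rather than from hand-written expert rules, consider a nearest-neighbour classifier, a deliberately minimal stand-in for the far more complex models used in real network testing. All features and labels below are invented:

```python
# Minimal illustration: a 1-nearest-neighbour classifier "learns" its
# decision rule directly from labelled examples instead of from
# hand-written expert rules. Features and labels here are synthetic.

def nearest_neighbour_predict(training_data, point):
    """Return the label of the training sample closest to `point`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(training_data, key=lambda sample: sq_dist(sample[0], point))
    return best[1]

# Two-dimensional measurements labelled good (1) / bad (0) connection.
training = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.8), 1),
    ((0.2, 0.1), 0), ((0.1, 0.3), 0), ((0.3, 0.2), 0),
]

print(nearest_neighbour_predict(training, (0.85, 0.75)))  # prints 1, near the "good" cluster
print(nearest_neighbour_predict(training, (0.15, 0.2)))   # prints 0, near the "bad" cluster
```

The "training phase" here is trivial (storing the samples), but the key property matches the text: the rule is implicit in the data, not written down by an expert.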

Supervised and unsupervised learning

Machine learning can be roughly divided into two types: supervised and unsupervised.

The goal of supervised learning is to find statistical relationships between the data and events or predefined labels in order to generate estimations for unknown inputs. A widely used application is object recognition, in which the presence and position of a particular object in an image (e.g. “A cat is/is not present in the picture”) is determined through multi-stage interpretation of patterns (edges, coloured areas, etc.).

For training, the learning software is presented with images labelled by humans and works out characteristics that allow decisions to be made. These rules are concealed in the neural network of the AI system rather than being formulated in algorithms.

An example of non-visual pattern recognition is the determination of the call stability score (CSS) for network tests.

Unsupervised learning works without labels. The algorithms have to independently recognise patterns or multidimensional data aggregates in order to derive usable conclusions from them, for example with the aim of measuring differences between new and known data points. A typical task for unsupervised learning is anomaly detection, which identifies unusual data without the support of experts.
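A minimal sketch of the anomaly-detection idea, using a simple median-based outlier test rather than any particular commercial implementation; the latency values and the 3.5 threshold are illustrative assumptions, not from any real measurement campaign:

```python
# Sketch of label-free anomaly detection: points far from the bulk of
# the data, measured by the modified z-score (median / MAD based), are
# flagged without any expert-provided labels.
import statistics

def flag_anomalies(values, threshold=3.5):
    """Flag points whose modified z-score exceeds `threshold`."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

# Mostly similar latency measurements (ms) with one obvious outlier.
latencies = [20, 21, 19, 22, 20, 21, 19, 20, 22, 21, 95]
print(flag_anomalies(latencies))  # prints [95]
```

The median-based score is used here instead of a mean-based one because a single extreme value would otherwise inflate the standard deviation and mask itself.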

AI methods

In response to the needs of network operators, Rohde & Schwarz uses AI methods for applications such as simplifying the optimisation of mobile networks and improving the assessment of qualitative differences between providers.

The Data Intelligence Lab, established in 2018, tackles these issues and supports Rohde & Schwarz R&D departments with data-based analysis methods. These approaches are especially promising for testing mobile networks, where such large amounts of data are generated that manual analysis and rule formulation are no longer practical. Machine learning makes it possible to use the information hidden in large data sets, for example to derive new assessment metrics such as the call stability score.

Call stability score

The call stability score is a new assessment metric for reliable communications. A suddenly dropped phone call is an annoying experience, which is why mobile network operators have been testing voice quality and connection stability for many years.

The most popular statistic is the call drop rate (CDR). But since the number of dropped calls is very low in mature networks, it is necessary to make a large number of calls in order to obtain a statistically significant value. Consequently, drive test campaigns are long and expensive.
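A back-of-the-envelope calculation (my addition, not from the article) shows why a low drop rate forces long campaigns: the standard sample-size formula for estimating a binomial proportion, applied to an assumed drop rate and an assumed acceptable margin of error:

```python
# Rough sample-size estimate for measuring a binomial proportion
# (here, the call drop rate) to a given relative margin of error at
# 95% confidence. The 0.5% drop rate is an assumed example value.
import math

def calls_needed(drop_rate, rel_margin=0.2, z=1.96):
    """Calls needed so the 95% CI half-width is rel_margin * drop_rate."""
    e = rel_margin * drop_rate
    return math.ceil(z**2 * drop_rate * (1 - drop_rate) / e**2)

print(calls_needed(0.005))  # roughly 19,000 test calls for a 0.5% drop rate
```

Even with generous assumptions, measuring a 0.5% drop rate to within ±20% of its value requires on the order of tens of thousands of calls, which is what makes drive test campaigns so long and expensive.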

Above: Display of a network optimisation scenario using the R&S SmartAnalytics analysis software’s call stability score. Average CSS values are shown on the upper left, arranged by region. On the upper right, especially low scores are marked for later analysis. The exact values are listed in the table with additional information (shown here are sample results for demo purposes).

Therefore, Rohde & Schwarz uses a method that replaces the binary call status (successfully completed or dropped) with a finely graduated analogue value. This is done with a statistical, AI-generated model that links the transmission conditions to the call status.

The CSS derived from the model allows the reliability of the mobile connection to be measured over the entire call duration and classified based on quality.

The diagnostic also covers unstable calls that were successfully completed but, according to the data, were not far from being dropped. In conventional CDR statistics, those unstable calls would be counted positively as successful calls, distorting the network quality assessment.

The CSS value is based on information gathered from millions of test calls and incorporated in the model during the learning process. The assessment is conclusive right from the first call. The network call quality is registered more accurately and with less test effort.

In practice, for every nine seconds of a call, measurement data is sent to the statistical model as a time series. The model assesses the data based on the learned rules and outputs a number between 0 and 1.

The higher the number, the lower the likelihood of a drop occurring in that nine-second interval. The CSS measurement is part of the R&S SmartAnalytics analysis platform.
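As a hypothetical sketch of the shape of such a scoring step, a logistic model can map a nine-second window of measurements to a value between 0 and 1. The feature names and weights below are invented for illustration; the actual R&S model is learned from millions of test calls and is not public:

```python
# Hypothetical CSS-style scoring step: a logistic model maps a window
# of radio measurements to a score in (0, 1), where higher means a
# drop is less likely. Weights and features are invented examples.
import math

def window_score(features, weights, bias):
    """Logistic score in (0, 1); higher means a drop is less likely."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Toy features per nine-second window: (signal quality, interference)
weights, bias = (4.0, -3.0), 0.5
good_window = window_score((0.9, 0.1), weights, bias)  # strong, clean signal
poor_window = window_score((0.2, 0.9), weights, bias)  # weak, noisy signal
print(round(good_window, 2), round(poor_window, 2))
```

The point of the sketch is the interface, not the model: each window yields a graduated stability value instead of waiting for a binary dropped/completed outcome at the end of the call.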

Another AI-driven function in this software suite is anomaly detection using unsupervised learning. In both cases, the use of artificial intelligence leads to results that are not possible with conventional means.

AI methods will be used more and more in the future to maximize exploitation of the information content of measurement data.

Author details: Dr. Alexandros Andre Chaaraoui is Data Scientist and Project Leader, Rohde & Schwarz


Published at Mon, 26 Oct 2020 00:00:00 +0000

The Exponential Growth of AI in Brain Care and Treatment

Source: GDJ/Pixabay

Advances in computer science are helping to accelerate a broad spectrum of scientific research. The more complex the problem, the greater the potential for artificial intelligence (AI) machine learning to help identify patterns and make predictions. How widely is machine learning being used in treating diseases and disorders of the brain? A new study published earlier this month in the science journal APL Bioengineering examines the state-of-the-art uses of AI for brain disease, and shows there has been exponential growth over the past decade.

The biological brain has been the inspiration for artificial neural networks, a type of artificial intelligence (AI) machine learning model. Deep learning methods use artificial neural networks, and their pattern-recognition capabilities have contributed greatly to the current AI renaissance. Studying the brain often requires processing reams of complex imaging data, a daunting and time-intensive task for human scientists. Thus, machine learning can be a useful tool in the treatment of brain disorders and diseases.

In a study funded by Horizon 2020, the European Union's research and innovation programme, a team of Italian researchers from Politecnico di Milano and the University of Calabria set out to find the different ways artificial intelligence is being used for brain care and identify the important clinical applications.

To achieve this, the team of Alice Segato, Aldo Marzullo, Francesco Calimeri and Elena De Momi searched the PubMed, Scopus, and Web of Science databases for papers dating as far back as January 1, 2008, using the keywords "artificial intelligence" and "brain". The query identified 2,696 scientific papers, which screening narrowed down to 154. The team then performed a systematic review of these papers.

According to the researchers, there was “an exponential growth, in the latest ten years, of the number of studies evaluating AI models as an assisting tool across multiple paradigms of brain care,” and these paradigms include “diagnosis with anatomical information, diagnosis with morphological information, diagnosis with connectivity information, candidate selection for surgical treatment, target definition for surgical treatment, trajectory definition for surgical treatment, modeling of tissue deformation for intra-operative assistance, and prediction of patient outcome for postoperative assessment.”

The team found that AI was being used for patients with a variety of brain disorders such as Parkinson’s disease, brain tumors, epilepsy, cerebrovascular abnormalities, brain lesions, and brain injuries.

The types of algorithms were quite varied. These include regression algorithms (linear regression, logistic regression); instance-based algorithms (K-nearest neighbor (KNN), support vector machines (SVM)); Bayesian algorithms (naïve Bayes (NB)); clustering algorithms (K-means, Fuzzy C-means); hidden Markov models (HMM); sparse autoencoders (SAE); artificial neural network and deep learning algorithms (fully connected neural networks (FCNN), convolutional neural networks (CNN), corrective learning networks (CLNet), recurrent neural networks (RNN), recurrent fuzzy neural networks (RFNN), long short-term memory networks (LSTM), deep belief networks (DBN), extreme learning machines (ELM)); dimensionality reduction algorithms (linear discriminant analysis (LDA)); ensemble algorithms (AdaBoost, random forest (RF), gradient boosting machines (GBM), gradient boosted regression trees (GBRT)); sparse multi-view task-centralize (Sparse MVTC); genetic algorithms (GA); natural language processing (NLP); graph-based semi-supervision (GBS); multivariate analysis; and supervised LOCATE (locally adaptive threshold estimation).

A vast majority, 121 papers to be precise, used artificial intelligence for diagnosing brain disease and disorders. “This includes classification using anatomical information, morphological information, and connectivity information for neurological disorders, brain tumors, brain lesion, brain injury, Parkinson’s disease, epilepsy and cerebral artery, schizophrenia, Alzheimer’s disease, autism disorder, and multiple sclerosis. CT, MRI, PET, SC, and FC data were used as input features for the development of classification algorithm,” wrote the researchers.

The most common pathologies diagnosed were brain tumors, neurological disorders, Alzheimer’s disease, and autism disorder. The data most commonly used for diagnosis were MRI (51 percent) and FC (31 percent). The AI methods most used were convolutional neural networks (30 percent), support vector machines (23 percent), random forest (12 percent), and artificial neural networks (7 percent).

“The use of artificial intelligence techniques is gradually bringing efficient theoretical solutions to a large number of real-world clinical problems related to the brain,” the researchers concluded. “Specifically, in recent years, thanks to the accumulation of relevant data and the development of increasingly effective algorithms, it has been possible to significantly increase the understanding of complex brain mechanisms.”

Copyright © 2020 Cami Rosso All rights reserved.

Published at Sun, 25 Oct 2020 23:48:45 +0000