Microsoft Unveils 5G/Telco Playbook With ‘Azure For Operators’

Microsoft on Monday laid out its playbook for partnering with communications service providers: a carrier-grade platform for edge and cloud computing designed to help network operators realize the full potential of 5G technology.

The No. 2 cloud provider’s new Azure for Operators telco strategy is fortified by its past and current telco-related work—including its partnerships with operators such as AT&T and T-Mobile, and its development of Azure Edge Zones—and acquisitions of telco-geared software makers Affirmed Networks and Metaswitch earlier this year.

“Today starts a new chapter in our close collaboration with the telecommunications industry to unlock the power of 5G and bring cloud and edge closer than ever,” Jason Zander, executive vice president of Microsoft Azure, said in a blog post Monday. “We’re building a carrier-grade cloud and bringing more Microsoft technology to the operator’s edge. This, in combination with our developer ecosystem, will help operators to future-proof their networks, drive down costs and create new services and business models.”

Using Microsoft Azure and its artificial intelligence (AI) and machine-learning capabilities, operators will be able to automate their operations and offer new services including ultrareliable, low-latency connectivity, mixed-reality communications services, network slicing and highly scalable Internet of Things (IoT) applications to help transform industries, Zander said.

Last summer, Microsoft Azure and AT&T unveiled a multiyear strategic alliance to leverage AI and 5G using AT&T’s network and the Azure cloud platform to market integrated solutions in areas including voice, collaboration, edge, IoT, public safety and cybersecurity. Microsoft is now AT&T’s preferred cloud provider for non-network applications.

“Since that announcement, we’ve made considerable progress on our journey to become a ‘public cloud-first’ company,” Igal Elbaz, AT&T’s senior vice president of wireless, said in a statement. “Microsoft’s recent and bold acquisitions in the wireless core space will further support our long-term strategy of using public cloud for network workloads.”

In addition to AT&T, inaugural partners for Azure for Operators include systems integrators Accenture and Tech Mahindra, and Amdocs, Etisalat, Hewlett Packard Enterprise, Intel, Mavenir, Red Hat, Samsung, Telstra, Tillman Digital Cities, Verizon and VMware.

“We want to bring, effectively, the cloud economical models to the operators and carriers,” Yousef Khalidi, corporate vice president of Azure Networking, told CRN. “Until not long ago, most of the public clouds—I’m referring to us and the two other big ones—were mostly designed and catering for the enterprise space, and that was a 10- to 12-year journey to get us there. But if you look really at meeting the needs of the whole segment of the telecommunications sector, we did not really meet their core network needs. We definitely ran their enterprise back-office applications, line of business, CRM, etc., but not the core networks. So we realized there’s an opportunity to help our customers better here.”

To do so, Microsoft needed to have the right technology set, the right people and the right mindset to understand what those customers need to better serve their own customers, Khalidi said.

“Their needs are carrier-grade networks, software that can run mobile and wired and wireless networks,” he said. “All of us are going through an inflection point with 5G. They also have a need to introduce compute in their workloads, which is something, frankly, we understand quite well.”

Click through to read more about the Azure for Operators strategy unveiled by Microsoft, which last week said it had joined the 5G Open Innovation Lab—a global ecosystem of developers, enterprises and government institutions—as a founding partner to help startups with its engineering and technology resources.

Published at Mon, 28 Sep 2020 20:03:45 +0000

DARPA sets sights on making AI self-aware of complex time dimensions

The Defense Advanced Research Projects Agency (DARPA) is setting its sights on developing an AI system with a detailed self-understanding of the time dimensions of its learned knowledge.

DARPA’s Time-Aware Machine Intelligence (TAMI) research program and incubator is looking to develop “a new class of neural network architectures that incorporate an explicit time dimension as a fundamental building block for network knowledge representation,” according to the TAMI program solicitation.

The overall goal is to create an AI system that will be able to “think in and about time” when exercising its learned task knowledge in task performance.

The Challenge

Current neural networks do not explicitly model the inherent time characteristics of their encoded knowledge.

Consequently, state-of-the-art machine learning does not have the expressive capability to reason with encoded knowledge using time.

The Proposed Solution

TAMI’s vision is for an AI system to develop a detailed self-understanding of the time dimensions of its learned knowledge and eventually be able to “think in and about time” when exercising its learned task knowledge in task performance.

How and Why

Large amounts of data samples are needed to feed neural networks; however, each data sample exists only in a specific time frame.

To understand what this means and looks like, the solicitation points out:

Consider neural networks designed for inference. Such neural networks derive abstract task knowledge from the analysis of a large number of data samples.

Each data sample exists only in a specific time. For example, features given by a vehicle data sample are associated with that specific vehicle’s age (e.g., rust and dents) and, therefore, are explicitly dependent on time.

Neural networks incorporate such information as static activation weights; however, using the example above, the activation of these weights should ideally be conditioned on time.
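The solicitation states the idea but not an implementation. As a rough illustration only (the class name and the linear-drift form `W(t) = W_base + t * W_drift` are my own assumptions, not anything proposed by TAMI), a toy NumPy sketch of weights conditioned on a sample's time stamp might look like this:

```python
import numpy as np

class TimeConditionedLayer:
    """Toy linear layer whose effective weights are a function of time.

    Instead of a single static weight matrix W, the layer computes
    W(t) = W_base + t * W_drift, so the same input vector produces
    different activations depending on when the sample was observed.
    This is purely illustrative, not an architecture from the TAMI
    solicitation.
    """

    def __init__(self, rng, n_in, n_out):
        self.w_base = rng.standard_normal((n_in, n_out)) * 0.1
        self.w_drift = rng.standard_normal((n_in, n_out)) * 0.01

    def forward(self, x, t):
        # Effective weights depend on the sample's time stamp t.
        w_t = self.w_base + t * self.w_drift
        return x @ w_t


rng = np.random.default_rng(0)
layer = TimeConditionedLayer(rng, n_in=4, n_out=2)
x = rng.standard_normal(4)

# The same feature vector activates differently at t=0 vs. t=10,
# mimicking features (e.g., rust and dents) whose meaning shifts
# with a vehicle's age.
y_now = layer.forward(x, t=0.0)
y_later = layer.forward(x, t=10.0)
print(np.allclose(y_now, y_later))  # False: activations are time-dependent
```

A static network corresponds to forcing `W_drift` to zero; the learning mechanism DARPA describes would, in effect, have to learn how (and whether) weights should vary with time rather than assuming a fixed functional form like the one above.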

What DARPA wants, according to the solicitation, is a learning mechanism that enables “a self awareness of the complex time-conditioned property of neural networks’ knowledge encoding.”

The TAMI research program will have two phases:

  1. Feasibility Study: Performers will develop theories and computational methods to answer fundamental questions regarding time cognition in machine learning.
  2. Proof of Concept Demo: Performers are expected to prototype time-aware meta-learning methods into computational models and demonstrate whether the new model could provide novel machine intelligence capabilities that current state-of-the-art machine learning architectures cannot achieve.

For the feasibility study, DARPA seeks answers to such questions as:

  • How can time attributes co-evolve with task learning itself?
  • What association mechanisms should be used to represent the interactions between the time dimension and the other dimensions of the encoded knowledge?
  • How should implicit time-dependent information not directly observable in the data be captured?
  • And more

In a nutshell, the TAMI program will look to develop new, time-aware neural network architectures that introduce a meta-learning capability into machine learning, and this meta-learning will enable a neural network to capture the time-dependencies of its encoded knowledge.

TAMI Inspired by Time Processing Mechanisms in Human Brains

According to the solicitation:

TAMI draws inspiration from ongoing research on time processing mechanisms in human brains.

A large number of computational models have been introduced in computational neuroscience to explain time perception mechanisms in the brain.

TAMI will go a step further from such research to develop and prototype concrete computational models. TAMI will leverage the latest research on meta-learning in neural networks.

TAMI Program Manager’s Background and Experience

While the TAMI program solicitation does not mention how the research would translate into real-world applications for the Department of Defense (e.g., department-wide AI adoption, autonomous vehicles, weapons systems, drone swarms or surveillance), the program manager’s background may offer a few clues.

Dr. Jiangying Zhou is leading the TAMI program, and she has been a program manager for DARPA since November 2018.

She is also the program manager of at least four other DARPA research programs:

  • Revolutionary Enhancement of Visibility by Exploiting Active Light-fields (REVEAL) — to develop a comprehensive theoretical framework to enable the development of new imaging hardware and software technologies.
  • Competency-Aware Machine Learning (CAML) — to make AI and Machine Learning systems more trustworthy by programming systems to communicate their decision-making and strategies with their human counterparts.
  • Artificial Intelligence Research Associate (AIRA) — to elevate AI to the role of an insightful and trusted collaborator in the scientific process.
  • Nature as Computer (NAC) — to “crack computation problems unsolvable by classical models, such as developing simulations for hypersonic flight, materials for massively distributed sensing and control, and robust network optimization and analysis.”

Combined, the programs that Dr. Zhou leads aim to make AI more robust and trustworthy while pushing the limits of imaging, sensing and computational technologies. That is broadly the aim of DARPA’s larger Artificial Intelligence Exploration (AIE) program: turning machines into collaborative partners for national defense.

According to Dr. Zhou’s bio, her areas of research include:

  • Machine Learning
  • Artificial Intelligence
  • Data Analytics
  • Intelligence, Surveillance and Reconnaissance (ISR) Exploitation Technologies

Previously, Dr. Zhou spent over 10 years as an engineer at Teledyne Scientific and Imaging (a subsidiary of Teledyne Technologies Inc.), where she worked on “sensor exploitation, signal and image processing, and pattern recognition” for public and private entities.

Currently, Teledyne Scientific and Imaging comprises:

  • Teledyne Scientific Company
  • Teledyne Imaging Sensors

The Teledyne Scientific Company specializes in:

  • Advanced wireless systems for uses such as battlefield surveillance and factory monitoring
  • 3D video and audio environments for applications in augmented and virtual reality
  • Lip reading, eye tracking, and speech recognition to facilitate hands-free control of computer functions in battlefields and call centers
  • And more

Teledyne Imaging Sensors bills itself as a leader in high-performance imaging systems for military, space, astronomy, and commercial applications that include:

  • Infrared & visible sensors
  • Read-Out Integrated Circuits
  • Infrared scientific and tactical cameras
  • Camera electronics embedded with advanced algorithms
  • Laser eye and sensor protection devices & filters

Last year, another Teledyne subsidiary, Teledyne Instruments, was awarded a $22 million contract to supply the US Navy with autonomous underwater vehicles (AUVs) and related monitoring and communications acoustic systems.

Published at Mon, 28 Sep 2020 19:27:44 +0000
