14.2 CRA I: Functions, Components, and Design Rules
Cognitive Radio Architecture
Figure 14.1: The CRA augments SDR with computational intelligence and learning capacity
(© Dr. Joseph Mitola III, used with permission).
The detailed allocation of functions to components, with interfaces among the
components, requires closer consideration of the SDR component as the foundation of the CRA.
SDRs include a hardware platform with RF access and computational resources,
plus at least one software-defined personality. The SDR Forum has defined its
SCA, and the Object Management Group (OMG) has defined its SRA.
These are similar fine-grained architecture constructs enabling reduced-cost wireless connectivity with next-generation plug-and-play. These SDR architectures
are defined in Unified Modeling Language (UML) object models, Common
Object Request Broker Architecture (CORBA) Interface Definition Language (IDL),
and XML descriptions of the UML models. The SDR Forum and OMG
standards describe the technical details of SDR both for radio engineering and
for an initial level of wireless air interface (“waveform”) plug-and-play. The
SCA/SRA was sketched in 1996 at the ﬁrst US Department of Defense (DoD)
inspired modular multifunctional information transfer system (MMITS) Forum,
was developed by the DoD in the 1990s and the architecture is now in use by the
US military . This architecture emphasizes plug-and-play wireless personalities
on computationally capable mobile nodes where network connectivity is often
intermittent at best.
The commercial wireless community, in contrast, led by cell phone giants
Motorola, Ericsson, and Nokia, envisions a much simpler architecture for mobile
wireless devices, consisting of two application programming interfaces (APIs): one
for the service provider and another for the network operator. These players define a
knowledge plane in future intelligent wireless networks that is not dissimilar
to a distributed CWN. That community promotes the business model of
user → service provider → network operator → large manufacturer → device, in
which the user buys mobile devices consistent with services from a service provider,
and the technical emphasis is on intelligence in the network. This perspective no
doubt will yield computationally intelligent networks in the near- to mid-term.
The CRA developed in this text, however, envisions computational intelligence that creates ad hoc, flexible networks with the intelligence in the mobile
device. This technical perspective enables the business model of user → device →
heterogeneous networks, typical of the Internet model in which the user buys a
device (e.g., a wireless laptop) that can connect to the Internet via any available
Internet service provider (ISP). The CRA builds on both the SCA/SRA and the
commercial API model, but integrates Semantic Web intelligence in RXML for
more of an Internet business model. This chapter describes how SDR, AACR, and
iCR form a continuum facilitated by RXML.
14.2.1 AACR Node Functional Components
A simple CRA includes the functional components shown in Figure 14.2. A functional component is a black box to which functions have been allocated, but for
which an implementation is not specified. Thus, while the applications component
is likely to be primarily software, the nature of those software components is yet
to be determined. User interface functions, however, may include optimized hardware (e.g., for computing video flow vectors in real time to assist scene perception). At the level of abstraction of this figure, the components are functional.
These functional components are as follows:
1. The user sensory perception (SP), which includes haptic, acoustic, and video
sensing and perception functions.
Figure 14.2: Minimal AACR node architecture (© Dr. Joseph Mitola III, used with permission).
2. The local environment sensors (location, temperature, accelerometer, etc.).
3. The system applications (sys apps), the media-independent services such as playing
a network game.
4. The SDR functions, which include RF sensing and SDR applications.
5. The cognition functions (symbol grounding for system control, planning, and learning).
6. The local effector functions (speech synthesis, text, graphics, and multimedia displays).
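As a concrete aid, the six functional components above can be sketched as a simple data structure. The names and allocated functions below are a hypothetical Python rendering for illustration only; the CRA does not define these as APIs.

```python
from dataclasses import dataclass

# Hypothetical sketch of the six AACR functional components and the
# functions allocated to each. A functional component is a black box:
# nothing here implies a hardware or software implementation.
@dataclass
class FunctionalComponent:
    name: str
    functions: list

AACR_COMPONENTS = [
    FunctionalComponent("user_sp", ["haptic", "acoustic", "video sensing/perception"]),
    FunctionalComponent("environment", ["location", "temperature", "accelerometer"]),
    FunctionalComponent("sys_apps", ["media-independent services"]),
    FunctionalComponent("sdr", ["RF sensing", "SDR applications"]),
    FunctionalComponent("cognition", ["symbol grounding", "planning", "learning"]),
    FunctionalComponent("effectors", ["speech synthesis", "text/graphics/multimedia"]),
]

def component(name):
    """Look up a functional component by its (illustrative) name."""
    return next(c for c in AACR_COMPONENTS if c.name == name)
```

The flat list mirrors the figure's level of abstraction: components are enumerated and functions allocated, with implementation left open.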
These functional components are embodied on an iCR platform, a hardware
realization of the six functions. To support the capabilities described in the prior
chapters, these components go beyond SDR in critical ways. First, the user interface goes well beyond buttons and displays. The traditional user interface has
been partitioned into a substantial user sensory subsystem and a set of local effectors. The user sensory interface includes buttons (the haptic interface) and microphones (the audio interface) to include acoustic sensing that is directional, capable
of handling multiple speakers simultaneously, and able to include full motion
video with visual scene perception. In addition, the audio subsystem does not just
encode audio for (possible) transmission; it also parses and interprets the audio
from designated speakers, such as the user, for a high-performance spoken
natural language (NL) interface. Similarly, the text subsystem parses and interprets the language to track the user’s information states, detecting plans and
potential communications and information needs unobtrusively as the user
conducts normal activities. The local effectors synthesize speech along with traditional text, graphics, and multimedia displays.
Sys apps are those information services that deﬁne value for the user.
Historically, voice communications with a phone book, text messaging, and the
exchange of images or video clips comprised the core value proposition of sys
apps for SDR. These applications were generally integral to the SDR application,
such as data services via general packet radio service (GPRS), which is really a
wireless SDR personality more than an information service. AACR sys apps
break the service out of the SDR waveform, so that the user need not be limited
by details of wireless connectivity unless that is of particular interest. Should the
user care whether he or she plays the distributed video game via 802.11 or Bluetooth
over the last 3 m? Probably not. The typical user might care if the AACR wants to
switch to third generation (3G) at $5 per minute, but a particularly afﬂuent user
might not care and would leave all that up to the AACR.
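The cost-sensitivity decision just described can be sketched as a link-selection policy. The link names, prices, and cost-ceiling parameter below are illustrative assumptions, not values or APIs from the CRA.

```python
# Hypothetical link-selection policy: over the last few meters, prefer the
# cheapest adequate link unless the user has opted to ignore cost (the
# "affluent user" case, modeled as an unbounded cost ceiling).
LINKS = [
    {"name": "802.11", "cost_per_min": 0.00, "available": True},
    {"name": "Bluetooth", "cost_per_min": 0.00, "available": False},
    {"name": "3G", "cost_per_min": 5.00, "available": True},
]

def select_link(links, cost_ceiling_per_min):
    """Pick the cheapest available link the user's cost ceiling permits."""
    candidates = [l for l in links
                  if l["available"] and l["cost_per_min"] <= cost_ceiling_per_min]
    return min(candidates, key=lambda l: l["cost_per_min"], default=None)

typical = select_link(LINKS, cost_ceiling_per_min=0.10)       # cost-conscious user
affluent = select_link(LINKS, cost_ceiling_per_min=float("inf"))
# While free Wi-Fi is available, both users get 802.11; the ceilings only
# diverge when the free links disappear and only 3G at $5/min remains.
```

The point of the sketch is that the AACR, not the user, carries the burden of this choice; the user expresses at most a preference.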
The cognition component provides all the cognition functions—from the
semantic grounding of entities in the perception system to the control of the overall system through planning and initiating actions—learning user preferences and
RF situations in the process.
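A minimal sketch of this cognition cycle follows, assuming hypothetical class and method names; the CRA does not define these APIs.

```python
# Hypothetical sketch of the cognition component's cycle: ground percepts
# from the other subsystems, plan and initiate an action, and learn user
# preferences from the user's direct actions along the way.
class Cognition:
    def __init__(self):
        self.preferences = {}     # learned user preferences, keyed by context

    def ground(self, percepts):
        """Semantic grounding: keep only percepts that carry usable values."""
        return {k: v for k, v in percepts.items() if v is not None}

    def plan_and_act(self, grounded):
        """Stub planner: choose an action for the grounded situation."""
        return {"action": "maintain", "context": grounded}

    def learn(self, user_action):
        """Record a direct user action as a preference to imitate later."""
        self.preferences[user_action["context"]] = user_action["choice"]

    def step(self, percepts, user_action=None):
        grounded = self.ground(percepts)
        if user_action:
            self.learn(user_action)
        return self.plan_and_act(grounded)
```

The `step` method mirrors the prose: grounding feeds planning and action, while any observed user action updates the learned preferences.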
Each of these subsystems contains its own processing, local memory, integral
power conversion, built-in test (BIT), and related technical features.
AACR consists of six functional components: user SP, environment, effectors,
SDR, sys apps, and cognition. These components support external communications and internal reasoning about the outside world by using the RXML
syntax. Given the top-level outline of these functional components along with the
requirement that they be embodied in physical hardware and software (the “platform”), the six functional components are deﬁned ontologically in the equation in
Figure 14.3. In part, this equation states that the hardware–software platform and
the functional components of the AACR are independent.

Figure 14.3: Components of the AACR. The AACR is defined to be an iCR platform, consisting of six functional components, using the RXML syntax.

Platform-independent
computer languages such as Java are well understood. This ontological perspective envisions platform independence as an architecture design principle for
AACR. In other words, the burden is on the (software) functional components to
adapt to whatever RF–hardware–operating system platform might be available.
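The platform-independence principle can be illustrated with a sketch in which a functional component codes only to an abstract platform interface. All class names here are hypothetical.

```python
# Sketch of platform independence: the (software) functional component
# adapts to whatever RF-hardware-OS platform is present, discovering its
# capabilities at run time rather than assuming them.
from abc import ABC, abstractmethod

class Platform(ABC):
    @abstractmethod
    def capabilities(self) -> set: ...

class HandsetPlatform(Platform):
    def capabilities(self):
        return {"audio", "video", "gps"}

class HeadlessPlatform(Platform):
    def capabilities(self):
        return {"audio"}

class UserSensoryPerception:
    """Functional component: uses whatever sensing the platform offers."""
    def __init__(self, platform: Platform):
        self.modes = platform.capabilities() & {"audio", "video", "haptic"}

    def can_perceive(self, mode):
        return mode in self.modes
```

The same `UserSensoryPerception` component runs unchanged on either platform, degrading gracefully when a sensing mode is absent.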
14.2.2 Design Rules Include Functional Component Interfaces
The six functional components (see Tables 14.1(a) and 14.1(b)) imply
associated functional interfaces. In architecture, design rules may include a list of
the quantities and types of components as well as the interfaces among those
components. This section addresses the interfaces among the functional components.
The AACR N-squared diagram of Table 14.1(a) characterizes AACR interfaces. These constitute an initial set of AACR APIs. In some ways, these APIs
augment the established SDR APIs. This is entirely new and much needed in
order for basic AACRs to accommodate even the basic ideas of the Defense
Advanced Research Projects Agency (DARPA) NeXt-Generation (XG) radio program.
In other ways, these APIs supersede the existing SDR APIs. In particular, the
SDR user interface becomes the user sensory and effector API. User sensory APIs
include acoustics, voice, and video, and the effector APIs include speech synthesis and displays.
Table 14.1(a): AACR N-squared diagram. This matrix characterizes internal interfaces
between functional processes. Interface notes 1–36 are explained in Table 14.1(b).
P: primary; A: afferent; E: efferent; C: control; M: multimedia; D: data; S: secondary; others not
designated P or S are ancillary.
Footnotes: a, information services API; b, CAPI.
Table 14.1(b): Explanations of interface notes for functional processes shown in Table 14.1(a).
Cross-media correlation interfaces (video–acoustic, haptic–speech, etc.) to limit
search and reduce uncertainty (e.g., if video indicates the user is not talking, acoustics
may be ignored or processed less aggressively for command inputs than if the user is talking).
Environment sensors parameterize user sensor-perception. Temperature below
freezing may limit video.
Sys apps may focus scene perception by identifying entities, range, expected sounds
for video, audio, and spatial perception processing.
SDR applications may provide expectations of user input to the perception system
to improve probability of detection and correct classiﬁcation of perceived inputs.
This is the primary control efferent path from cognition to the control of the user
SP subsystem, controlling speech recognition, acoustic signal processing, video
processing, and related SP. Plans from cognition may set expectations for user scene
perception, improving perception.
Effectors may supply a replica of the effect to user perception so that self-generated
effects (e.g., synthesized speech) may be accurately attributed to the AACR itself,
validated as having been expressed, and/or canceled from the scene perception.
Perception of rain, buildings, indoor/outdoor can set GPS integration parameters.
Environment sensors would consist of location sensing such as GPS or GLONASS;
ambient temperature; light level to detect inside versus outside locations; possibly
smell sensors to detect spoiled food, ﬁre, etc. There seems to be little beneﬁt in
enabling interfaces among these elements directly.
Data from the sys apps to environment sensors would also be minimal.
Data from the SDR personalities to the environment sensors would be minimal.
Data from the cognition system to the environment sensors controls those sensors,
turning them on and off, setting control parameters, and establishing internal paths
from the environment sensors.
Data from effectors directly to environment sensors would be minimal.
Data from the user SP system to sys apps is a primary afferent path for multimedia
streams and entity states that affect information services implemented as sys apps.
Speech, images, and video to be transmitted move along this path for delivery by the
relevant sys apps or information service to the relevant wired or SDR communications path. Sys apps overcomes the limitations of individual paths by maintaining
continuity of conversations, data integrity, and application coherence (e.g., for
multimedia games). Whereas the cognition function sets up, tears down, and
orchestrates the sys apps, the primary API between the user scene and the information service consists of this interface and its companions—the environment afferent
path, the effector efferent path, and the SDR afferent and efferent paths.
Data on this path assists sys apps in providing location awareness to services.
Different information services interoperate by passing control information through
the cognition interfaces and by passing domain multimedia ﬂows through this interface. The cognition system sets up and tears down these interfaces.
This is the primary afferent path from external communications to the AACR. It
includes control and multimedia information flows for all the information services.
Following the SDR Forum’s SCA, this path embraces wired as well as wireless interfaces.
Through this path, the AACR exerts control over the information services provided to the user.
Effectors may provide incidental feedback to information services through this afferent path, but the use of this path is deprecated. Information services are supposed to
control and obtain feedback through the mediation of the cognition subsystem.
Although the SP system may send data directly to the SDR subsystem (e.g., to satisfy security rules that user biometrics be provided directly to the wireless security
subsystem), the use of this path is deprecated. Perception subsystem information is
supposed to be interpreted by the cognition system so that accurate information, not
raw data, can be conveyed to other subsystems.
Environment sensors such as GPS historically have accessed SDR waveforms
directly (e.g., providing timing data for air interface signal generation). The cognition system may establish such paths in cases where cognition provides little or no
value added, such as providing a precise timing reference from GPS to an SDR
waveform. The use of this path is deprecated because all of the environment sensors,
including GPS, are unreliable. Cognition has the capability to “de-glitch” GPS (e.g.,
recognize from video that the AACR is in an urban canyon and therefore not allow
GPS to report directly, but report to the GPS subscribers, on behalf of GPS, location
estimates based perhaps on landmark correlation, dead reckoning, etc.).
Sys apps–SDR This is the primary efferent path from information services to SDR through the information services API.
The linking of different wireless services directly to each other is deprecated. If an
incoming voice service needs to be connected to an outgoing voice service, there
should be a bridging service in sys apps through which the SDR waveforms communicate with each other. That service should be set up and torn down by the cognition system.
Cognition–SDR This is the primary control interface, replacing the control interface of the SDR SCA
and the OMG SRA.
Effectors–SDR Effectors such as speech synthesis and displays should not need to provide state
information directly to SDR waveforms, but if needed, the cognition function should
set up and tear down these interfaces.
This is the primary afferent ﬂow for the results from acoustics, speech, images,
video, video ﬂow, and other sensor-perception subsystems. The primary results
passed across this interface should be the specific states of entities in the scene,
which would include scene characteristics such as the recognition of landmarks,
known vehicles, furniture, and the like. In other words, this is the interface by which
the presence of entities in the local scene is established and their characteristics
are made known to the cognition system.
Environment–Cognition This is the primary afferent flow for environment sensors.
This is the interface through which information services request services and receive
support from the AACR platform. This is also the control interface by which cognition sets up, monitors, and tears down information services.
SDR–Cognition This is the primary afferent interface by which the state of waveforms, including a
distinguished RF-sensor waveform, is made known to the cognition system. The
cognition system can establish primary and backup waveforms for information services, enabling the services to select paths in real time for low-latency services. Those
paths are set up and monitored for quality and validity (e.g., obeying XG rules) by
the cognition system, however.
Cognition–Cognition The cognition system as defined in this six-component architecture entails (1) orienting
to information from RF sensors in the SDR subsystem and from scene
sensors in the user SP and environment sensors; (2) planning; (3) making decisions;
and (4) initiating actions, including the control over all of the cognition resources of
the node. The user may directly control any of the elements of the systems via
paths through the cognition system that enable it to monitor what the user is doing in
order to learn from a user’s direct actions, such as manually tuning in the user’s
favorite radio station when the AACR either failed to do so properly or was not asked.
This is the primary afferent ﬂow for status information from the effector subsystem,
including speech synthesis, displays, and the like.
In general, the user SP system should not interface directly to the effectors, but
should be routed through the cognition system for observation.
The environment system should not interface directly to the effectors. This path is deprecated.
Sys apps may display streams, generate speech, and otherwise directly control any
effectors once the paths and constraints have been established by the cognition system.
This path may be used if the cognition system establishes a path, such as from an
SDR’s voice track to a speaker. Generally, however, the SDR should provide streams
to the information services of the sys apps. This path may be necessary for legacy
compatibility during the migration from SDR through AACR to iCR, but it is deprecated.
This is the primary efferent path for the control of effectors. Information services
provide the streams to the effectors, but cognition sets them up, establishes paths, and
monitors the information flows for support to the user’s needs.
These paths are deprecated, but may be needed for legacy compatibility.
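Taken together, the interface rules of Tables 14.1(a) and 14.1(b) amount to an N-squared map from (source, destination) component pairs to an interface status. A partial sketch follows; the status labels are illustrative, and only a few of the pairs discussed above are encoded.

```python
# Hypothetical encoding of the N-squared diagram's flavor: each directed
# component pair maps to an interface status. Pairs not listed default to
# "ancillary", echoing the table legend.
COMPONENTS = ["user_sp", "environment", "sys_apps", "sdr", "cognition", "effectors"]

INTERFACES = {
    ("user_sp", "sys_apps"): "primary-afferent",   # multimedia streams, entity states
    ("sys_apps", "sdr"): "primary-efferent",       # information services to SDR
    ("sdr", "cognition"): "primary-afferent",      # waveform state to cognition
    ("cognition", "sdr"): "primary-control",       # replaces the SCA/SRA control interface
    ("cognition", "effectors"): "primary-efferent",
    ("user_sp", "sdr"): "deprecated",              # raw SP data should route via cognition
    ("environment", "effectors"): "deprecated",
}

def status(src, dst):
    """Interface status for a directed pair; unlisted pairs are ancillary."""
    return INTERFACES.get((src, dst), "ancillary")
```

Such a table makes the design rules machine-checkable: a cognition system could refuse to establish a path whose status is deprecated unless legacy compatibility demands it.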