14.2 CRA I: Functions, Components, and Design Rules


wireless networks (CWNs), and ad hoc reasoning with users, all the while learning

from experience.



[Figure 14.1 labels: a cognition layer (radio knowledge in the RXML/RKRL representation language, user knowledge, structured and ad hoc reasoning, learning from experience) wrapped around SWR/SDR software modules (equalizer algorithm software, baseband modem, back-end control, INFOSEC, XML, user interface) and SDR hardware (antenna, RF, modem, INFOSEC, baseband, user interface).]

Figure 14.1: The CRA augments SDR with computational intelligence and learning capacity (© Dr. Joseph Mitola III, used with permission).



The detailed allocation of functions to components, with interfaces among those components, requires closer consideration of the SDR component as the foundation of the CRA.

SDR Components

SDRs include a hardware platform with RF access and computational resources,

plus at least one software-defined personality. The SDR Forum has defined its

SCA [3] and the Object Management Group (OMG) has defined its SRA [4].

These are similar fine-grained architecture constructs enabling reduced-cost wireless connectivity with next-generation plug-and-play. These SDR architectures

are defined in Unified Modeling Language (UML) object models [5], Common

Object Request Broker Architecture (CORBA) Interface Definition Language (IDL)

[7], and XML descriptions of the UML models. The SDR Forum and OMG

standards describe the technical details of SDR both for radio engineering and

for an initial level of wireless air interface (“waveform”) plug-and-play. The

SCA/SRA was sketched in 1996 at the first US Department of Defense (DoD)

inspired modular multifunctional information transfer system (MMITS) Forum,


was developed by the DoD in the 1990s and the architecture is now in use by the

US military [7]. This architecture emphasizes plug-and-play wireless personalities

on computationally capable mobile nodes where network connectivity is often

intermittent at best.
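For orientation only, the flavor of the component contracts that such plug-and-play architectures standardize can be suggested in code. The following Java sketch is hypothetical; the interface and method names are invented here and are not the actual SCA/SRA CORBA IDL or XML.

```java
/** Hypothetical sketch of a plug-and-play waveform component lifecycle (not the SCA/SRA API). */
interface WaveformComponent {
    void initialize();                                          // claim resources on the SDR platform
    void configure(java.util.Map<String, String> properties);   // set waveform ("personality") parameters
    void start();                                               // begin processing the air interface
    void stop();                                                // halt processing
    void release();                                             // free resources for the next personality
}
```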

The commercial wireless community [8], in contrast, led by cell phone giants

Motorola, Ericsson, and Nokia, envisions a much simpler architecture for mobile

wireless devices, consisting of two application programming interfaces (APIs)—one

for the service provider and another for the network operator. Those users (the service provider and the network operator) define a knowledge plane in future intelligent wireless networks that is not dissimilar from a distributed CWN. That community promotes the business model of the

user → service provider → network operator → large manufacturer → device, in

which the user buys mobile devices consistent with services from a service provider,

and the technical emphasis is on intelligence in the network. This perspective no

doubt will yield computationally intelligent networks in the near- to mid-term.

The CRA developed in this text, however, envisions computational intelligence that creates ad hoc, flexible networks, with the intelligence residing in the mobile

device. This technical perspective enables the business model of user → device →

heterogeneous networks, typical of the Internet model in which the user buys a

device (e.g., a wireless laptop) that can connect to the Internet via any available

Internet service provider (ISP). The CRA builds on both the SCA/SRA and the

commercial API model, but integrates Semantic Web intelligence in RXML for

more of an Internet business model. This chapter describes how SDR, AACR, and

iCR form a continuum facilitated by RXML.

AACR Node Functional Components

A simple CRA includes the functional components shown in Figure 14.2. A functional component is a black box to which functions have been allocated, but for

which implementation is not specified. Thus, while the applications component

is likely to be primarily software, the nature of those software components is yet

to be determined. User interface functions, however, may include optimized hardware (e.g., for computing video flow vectors in real time to assist scene perception). At the level of abstraction of this figure, the components are functional,

not physical.

These functional components are as follows:

1. The user sensory perception (SP), which includes haptic, acoustic, and video sensing and perception functions.
2. The local environment sensors (location, temperature, accelerometer, compass, etc.).
3. The system applications (sys apps): media-independent services such as playing a network game.
4. The SDR functions, which include RF sensing and SDR applications.
5. The cognition functions (symbol grounding for system control, planning, and learning).
6. The local effector functions (speech synthesis; text, graphics, and multimedia displays).

[Figure 14.2 labels: user interface functions, environment sensor functions, effector functions, SDR functions, applications, and cognition functions inside the CR node, with the user, the environment, radio networks, and other networks outside it.]

Figure 14.2: Minimal AACR node architecture (© Dr. Joseph Mitola III, used with permission).
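As a reading aid only (this is not part of the CRA specification, and all of the Java names below are invented), the six functional components can be pictured as interfaces composed into a node:

```java
/** Illustrative sketch of the six AACR functional components (hypothetical names). */
interface UserSensoryPerception { void perceive(); }            // haptic, acoustic, and video sensing
interface EnvironmentSensors    { void sample(); }              // location, temperature, accelerometer, etc.
interface SysApps               { void provideService(); }      // media-independent information services
interface SdrFunctions          { void runWaveform(); }         // RF sensing and SDR applications
interface CognitionFunctions    { void orientPlanDecideAct(); } // symbol grounding, planning, learning
interface LocalEffectors        { void render(); }              // speech synthesis, text, graphics, displays

/** A minimal AACR node is the composition of the six components. */
record AacrNode(UserSensoryPerception userSp,
                EnvironmentSensors environment,
                SysApps sysApps,
                SdrFunctions sdr,
                CognitionFunctions cognition,
                LocalEffectors effectors) { }
```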

These functional components are embodied on an iCR platform, a hardware

realization of the six functions. To support the capabilities described in the prior

chapters, these components go beyond SDR in critical ways. First, the user interface goes well beyond buttons and displays. The traditional user interface has

been partitioned into a substantial user sensory subsystem and a set of local effectors. The user sensory interface includes buttons (the haptic interface) and microphones (the audio interface), extends acoustic sensing to be directional and capable of handling multiple speakers simultaneously, and adds full-motion video with visual scene perception. In addition, the audio subsystem does not just encode audio for (possible) transmission; it also parses and interprets the audio from designated speakers, such as the <Owner/>, for a high-performance spoken natural language (NL) interface. Similarly, the text subsystem parses and interprets the language to track the user's information states, detecting plans and

potential communications and information needs unobtrusively as the user


conducts normal activities. The local effectors synthesize speech along with traditional text, graphics, and multimedia displays.
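One way to picture that partitioning of the user interface is as a pipeline from raw sensing to inferred needs. The stages and names below are purely illustrative, not part of the architecture.

```java
/** Illustrative user sensory-perception pipeline (hypothetical names). */
interface SpokenLanguagePipeline {
    byte[] capture();                           // directional, multi-speaker acoustic sensing
    String transcribe(byte[] audio);            // speech-to-text for designated speakers
    ParsedUtterance parse(String text);         // parse and interpret for the spoken NL interface
    void updateUserModel(ParsedUtterance u);    // track the user's information states
    java.util.List<String> inferNeeds();        // unobtrusively detect plans and information needs
}

record ParsedUtterance(String text) { }
```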

Sys apps are those information services that define value for the user.

Historically, voice communications with a phone book, text messaging, and the

exchange of images or video clips comprised the core value proposition of sys

apps for SDR. These applications were generally integral to the SDR application,

such as data services via general packet radio service (GPRS), which is really a

wireless SDR personality more than an information service. AACR sys apps

break the service out of the SDR waveform, so that the user need not be limited

by details of wireless connectivity unless that is of particular interest. Should the

user care whether he or she plays the distributed video game via 802.11 or Bluetooth

over the last 3 m? Probably not. The typical user might care if the AACR wants to

switch to third generation (3G) at $5 per minute, but a particularly affluent user

might not care and would leave all that up to the AACR.
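The point that the waveform becomes an implementation detail can be made concrete with a toy policy: a sys app asks for connectivity and the radio chooses whatever bearer satisfies the user's cost preference. This is an illustrative sketch only; the class and method names are invented.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/** Toy link-agnostic bearer selection (illustrative; not an AACR API). */
record Bearer(String name, double costPerMinuteUsd, double throughputMbps) { }

class BearerPolicy {
    /** Pick the cheapest bearer that meets the throughput need, subject to the user's cost ceiling. */
    Optional<Bearer> select(List<Bearer> available, double neededMbps, double maxCostPerMinute) {
        return available.stream()
                .filter(b -> b.throughputMbps() >= neededMbps)
                .filter(b -> b.costPerMinuteUsd() <= maxCostPerMinute)
                .min(Comparator.comparingDouble(Bearer::costPerMinuteUsd));
    }
}
```

With a policy like this, an 802.11 or Bluetooth hop over the last 3 m is chosen when it is free and adequate, and a $5-per-minute 3G link is selected only if the user's ceiling permits it.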

The cognition component provides all the cognition functions—from the

semantic grounding of entities in the perception system to the control of the overall system through planning and initiating actions—learning user preferences and

RF situations in the process.
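A minimal rendering of that flow, assuming an observe, orient, plan, decide, act loop with learning (the type and method names are invented for illustration):

```java
/** Illustrative cognition loop (hypothetical names; not a normative CRA interface). */
interface CognitionCycle {
    Percepts observe();                  // gather grounded symbols from user SP, environment, and RF sensors
    Situation orient(Percepts p);        // bind percepts to known contexts
    Plan plan(Situation s);              // generate candidate actions
    Action decide(Plan candidates);      // choose an action consistent with learned user preferences
    void act(Action a);                  // control SDR, sys apps, and effectors
    void learn(Situation s, Action a);   // update user-preference and RF-situation models
}

// Placeholder types for the sketch.
record Percepts() { }
record Situation() { }
record Plan() { }
record Action() { }
```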

Each of these subsystems contains its own processing, local memory, integral

power conversion, built-in test (BIT), and related technical features.

The Ontological <Self/>

The AACR <Self/> consists of six functional components: user SP, environment, effectors, SDR, sys apps, and cognition. Those components of the <Self/> enable external communications and internal reasoning about the <Self/> by using the RXML syntax. Given the top-level outline of these functional components, along with the requirement that they be embodied in physical hardware and software (the “platform”), the six functional components are defined ontologically in the equation in Figure 14.3. In part, this equation states that the hardware–software platform and the functional components of the AACR are independent.

Figure 14.3: Components of the AACR <Self/>. The AACR is defined to be an iCR platform, consisting of six functional components using the RXML syntax.

Platform-independent computer languages such as Java are well understood. This ontological perspective envisions platform independence as an architecture design principle for AACR. In other words, the burden is on the (software) functional components to adapt to whatever RF–hardware–operating system platform might be available.
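That design principle can also be stated in code form as a binding step that every (software) functional component must implement. The sketch below uses invented names and is not an AACR API.

```java
/** Illustrative platform abstraction: software components adapt to whatever platform is present. */
interface RadioPlatform {
    boolean supportsRfRange(double minHz, double maxHz);   // query RF capability
    long availableMips();                                  // query compute capacity
}

interface FunctionalComponent {
    /** Adapt this component to the RF-hardware-operating system platform actually available. */
    void bindTo(RadioPlatform platform);
}
```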



14.2.2 Design Rules Include Functional Component Interfaces

The six functional components (see Tables 14.1(a) and 14.1(b)) imply

associated functional interfaces. In architecture, design rules may include a list of

the quantities and types of components as well as the interfaces among those

components. This section addresses the interfaces among the functional

components.

The AACR N-squared diagram of Table 14.1(a) characterizes AACR interfaces. These constitute an initial set of AACR APIs. In some ways, these APIs

augment the established SDR APIs. This is entirely new and much needed in

order for basic AACRs to accommodate even the basic ideas of the Defense

Advanced Research Projects Agency (DARPA) NeXt-Generation (XG) radio

communications program.

In other ways, these APIs supersede the existing SDR APIs. In particular, the

SDR user interface becomes the user sensory and effector API. User sensory APIs

include acoustics, voice, and video, and the effector APIs include speech synthesis and displays.



Table 14.1(a): AACR N-squared diagram. This matrix characterizes internal interfaces between functional processes. Interface notes 1–36 are explained in Table 14.1(b).

From\To        User SP     Environment   Sys apps      SDR         Cognition     Effectors
User SP        1           7             13 PA(a)      19          25 PA(b)      31
Environment    2           8             14 SA(a)      20          26 PA(b)      32
Sys apps       3           9             15 SCM(a)     21 SD(a)    27 PDC(a,b)   33 PEM(a)
SDR            4           10            16 PD(a)      22 SD       28 PC(b)      34 SD
Cognition      5 PEC(b)    11 PEC(b)     17 PC(a,b)    23 PAE(b)   29 SC(b)      35 PE(b)
Effectors      6 SC        12            18(a)         24          30 PCD(b)     36

P: primary; A: afferent; E: efferent; C: control; M: multimedia; D: data; S: secondary; others not designated P or S are ancillary.
(a) Information services API; (b) CAPI.
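To make the design rules machine-checkable, the N-squared relationships could, for example, be encoded as a lookup keyed by (from, to) component pairs. The sketch below is illustrative only; the enum, record, and method names are invented, and the designation strings simply echo Table 14.1(a).

```java
import java.util.EnumMap;
import java.util.Map;

/** Illustrative encoding of the Table 14.1(a) design rules (hypothetical names). */
public class NSquaredRules {

    /** The six CRA functional components. */
    enum Component { USER_SP, ENVIRONMENT, SYS_APPS, SDR, COGNITION, EFFECTORS }

    /** One cell of the N-squared diagram: interface note number plus its designation. */
    record InterfaceRule(int note, String designation, boolean deprecated) { }

    private final Map<Component, Map<Component, InterfaceRule>> rules =
            new EnumMap<>(Component.class);

    void define(Component from, Component to, int note, String designation, boolean deprecated) {
        rules.computeIfAbsent(from, k -> new EnumMap<>(Component.class))
             .put(to, new InterfaceRule(note, designation, deprecated));
    }

    InterfaceRule lookup(Component from, Component to) {
        return rules.getOrDefault(from, Map.of()).get(to);
    }

    NSquaredRules() {
        // A few cells from Table 14.1(a); the full table has 36 entries.
        define(Component.COGNITION, Component.USER_SP, 5, "PEC", false);  // note 5: primary control efferent path
        define(Component.USER_SP, Component.SYS_APPS, 13, "PA", false);   // note 13: primary afferent multimedia path
        define(Component.SDR, Component.SDR, 22, "SD", true);             // note 22: direct SDR-SDR linking deprecated
        define(Component.COGNITION, Component.SDR, 23, "PAE", false);     // note 23: primary control interface
    }
}
```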



Table 14.1(b): Explanations of interface notes for functional processes shown in Table 14.1(a).

Note 1 (User SP–User SP): Cross-media correlation interfaces (video-acoustic, haptic-speech, etc.) to limit search and reduce uncertainty (e.g., if video indicates the user is not talking, acoustics may be ignored or processed less aggressively for command inputs than if the user is speaking).

Note 2 (Environment–User SP): Environment sensors parameterize user sensor-perception. Temperature below freezing may limit video.

Note 3 (Sys apps–User SP): Sys apps may focus scene perception by identifying entities, range, and expected sounds for video, audio, and spatial perception processing.

Note 4 (SDR–User SP): SDR applications may provide expectations of user input to the perception system to improve the probability of detection and correct classification of perceived inputs.

Note 5 (Cognition–User SP): This is the primary control efferent path from cognition to the control of the user SP subsystem, controlling speech recognition, acoustic signal processing, video processing, and related SP. Plans from cognition may set expectations for user scene perception, improving perception.

Note 6 (Effectors–User SP): Effectors may supply a replica of the effect to user perception so that self-generated effects (e.g., synthesized speech) may be accurately attributed to the <Self/>, validated as having been expressed, and/or canceled from the scene perception to limit search.

Note 7 (User SP–Environment): Perception of rain, buildings, or indoor/outdoor settings can set GPS integration parameters.

Note 8 (Environment–Environment): Environment sensors would consist of location sensing such as GPS or GLONASS; ambient temperature; light level to detect inside versus outside locations; and possibly smell sensors to detect spoiled food, fire, etc. There seems to be little benefit in enabling interfaces among these elements directly.

Note 9 (Sys apps–Environment): Data from the sys apps to environment sensors would also be minimal.

Note 10 (SDR–Environment): Data from the SDR personalities to the environment sensors would be minimal.

Note 11 (Cognition–Environment, primary control path): Data from the cognition system to the environment sensors controls those sensors, turning them on and off, setting control parameters, and establishing internal paths from the environment sensors.

Note 12 (Effectors–Environment): Data from effectors directly to environment sensors would be minimal.

Note 13 (User SP–Sys apps): Data from the user SP system to sys apps is a primary afferent path for multimedia streams and entity states that affect information services implemented as sys apps. Speech, images, and video to be transmitted move along this path for delivery by the relevant sys app or information service to the relevant wired or SDR communications path. Sys apps overcome the limitations of individual paths by maintaining continuity of conversations, data integrity, and application coherence (e.g., for multimedia games). Whereas the cognition function sets up, tears down, and orchestrates the sys apps, the primary API between the user scene and the information service consists of this interface and its companions: the environment afferent path, the effector efferent path, and the SDR afferent and efferent paths.

Note 14 (Environment–Sys apps): Data on this path assists sys apps in providing location awareness to services.



Note 15 (Sys apps–Sys apps): Different information services interoperate by passing control information through the cognition interfaces and by passing domain multimedia flows through this interface. The cognition system sets up and tears down these interfaces.

Note 16 (SDR–Sys apps): This is the primary afferent path from external communications to the AACR. It includes control and multimedia information flows for all the information services. Following the SDR Forum's SCA, this path embraces wired as well as wireless interfaces.

Note 17 (Cognition–Sys apps): Through this path, the AACR exerts control over the information services provided to the user.

Note 18 (Effectors–Sys apps): Effectors may provide incidental feedback to information services through this afferent path, but the use of this path is deprecated. Information services are supposed to control and obtain feedback through the mediation of the cognition subsystem.

Note 19 (User SP–SDR): Although the SP system may send data directly to the SDR subsystem (e.g., to satisfy security rules that user biometrics be provided directly to the wireless security subsystem), the use of this path is deprecated. Perception subsystem information is supposed to be interpreted by the cognition system so that accurate information, not raw data, can be conveyed to other subsystems.

Note 20 (Environment–SDR): Environment sensors such as GPS historically have accessed SDR waveforms directly (e.g., providing timing data for air interface signal generation). The cognition system may establish such paths in cases where cognition provides little or no value added, such as providing a precise timing reference from GPS to an SDR waveform. The use of this path is deprecated because all of the environment sensors, including GPS, are unreliable. Cognition has the capability to "de-glitch" GPS (e.g., recognize from video that the <Self/> is in an urban canyon and therefore not allow GPS to report directly, but report to the GPS subscribers, on behalf of GPS, location estimates based perhaps on landmark correlation, dead reckoning, etc.).

Note 21 (Sys apps–SDR): This is the primary efferent path from information services to SDR through the services API.

Note 22 (SDR–SDR): The linking of different wireless services directly to each other is deprecated. If an incoming voice service needs to be connected to an outgoing voice service, there should be a bridging service in sys apps through which the SDR waveforms communicate with each other. That service should be set up and taken down by the cognition system.

Note 23 (Cognition–SDR): This is the primary control interface, replacing the control interface of the SDR SCA and the OMG SRA.

Note 24 (Effectors–SDR): Effectors such as speech synthesis and displays should not need to provide state information directly to SDR waveforms, but if needed, the cognition function should set up and tear down these interfaces.

Note 25 (User SP–Cognition): This is the primary afferent flow for the results from acoustics, speech, images, video, video flow, and other sensor-perception subsystems. The primary results passed across this interface should be the specific states of entities in the scene, which would include scene characteristics such as the recognition of landmarks, known vehicles, furniture, and the like. In other words, this is the interface by which the presence of entities in the local scene is established and their characteristics are made known to the cognition system.

Note 26 (Environment–Cognition): This is the primary afferent flow for environment sensors.

Note 27 (Sys apps–Cognition): This is the interface through which information services request services and receive support from the AACR platform. This is also the control interface by which cognition sets up, monitors, and tears down information services.

Note 28 (SDR–Cognition): This is the primary afferent interface by which the state of waveforms, including a distinguished RF-sensor waveform, is made known to the cognition system. The cognition system can establish primary and backup waveforms for information services, enabling the services to select paths in real time for low-latency services. Those paths are set up and monitored for quality and validity (e.g., obeying XG rules) by the cognition system, however.

Note 29 (Cognition–Cognition): The cognition system as defined in this six-component architecture entails (1) orienting to information from sensors in the SDR subsystem and from scene sensors in the user SP and environment sensors; (2) planning; (3) making decisions; and (4) initiating actions, including the control over all of the cognition resources of the <Self/>. The user may directly control any of the elements of the system via paths through the cognition system that enable it to monitor what the user is doing in order to learn from a user's direct actions, such as manually tuning in the user's favorite radio station when the <Self/> either failed to do so properly or was not asked.

Note 30 (Effectors–Cognition): This is the primary afferent flow for status information from the effector subsystem, including speech synthesis, displays, and the like.

Note 31 (User SP–Effectors): In general, the user SP system should not interface directly to the effectors; such flows should be routed through the cognition system for observation.

Note 32 (Environment–Effectors): The environment system should not interface directly to the effectors. This path is deprecated.

Note 33 (Sys apps–Effectors): Sys apps may display streams, generate speech, and otherwise directly control any effectors once the paths and constraints have been established by the cognition subsystem.

Note 34 (SDR–Effectors): This path may be used if the cognition system establishes a path, such as from an SDR's voice track to a speaker. Generally, however, the SDR should provide streams to the information services of the sys apps. This path may be necessary for legacy compatibility during the migration from SDR through AACR to iCR, but it is deprecated.

Note 35 (Cognition–Effectors): This is the primary efferent path for the control of effectors. Information services provide the streams to the effectors, but cognition sets them up, establishes paths, and monitors the information flows in support of the user's intent.

Note 36 (Effectors–Effectors): These paths are deprecated, but may be needed for legacy compatibility.
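The recurring rule in these notes, that direct component-to-component paths are deprecated while cognition sets up, monitors, and tears down paths, could be expressed along the following lines. The sketch uses invented names and is not a normative CRA interface.

```java
/** Illustrative cognition-mediated path management (hypothetical names). */
interface PathBroker {
    /** Cognition establishes a path, e.g., a sys-apps bridging service between two SDR waveforms. */
    PathHandle setUp(String fromComponent, String toComponent, String purpose);

    void monitor(PathHandle path);     // check quality and policy compliance (e.g., XG rules)

    void tearDown(PathHandle path);    // release the path when the service ends
}

record PathHandle(long id) { }
```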


