Navigation Satellite Selection Using Neural Networks



Cleveland State University

EngagedScholarship@CSU Electrical Engineering & Computer Science Faculty Publications

Electrical Engineering & Computer Science Department

5-1995

Navigation Satellite Selection Using Neural Networks Daniel J. Simon Cleveland State University, [email protected]

Hossny El-Sherief TRW System Integration Group

Follow this and additional works at: https://engagedscholarship.csuohio.edu/enece_facpub Part of the Digital Communications and Networking Commons

Publisher's Statement NOTICE: this is the author's version of a work that was accepted for publication in Neurocomputing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Neurocomputing, 7, 3, (05-01-1995); 10.1016/0925-2312(94)00024-M. Original Citation: Dan Simon, Hossny El-Sherief. (1995) Navigation satellite selection using neural networks. Neurocomputing, 7(3), 247-258, doi: 10.1016/0925-2312(94)00024-M.

Repository Citation Simon, Daniel J. and El-Sherief, Hossny, "Navigation Satellite Selection Using Neural Networks" (1995). Electrical Engineering & Computer Science Faculty Publications. 133. https://engagedscholarship.csuohio.edu/enece_facpub/133 This Article is brought to you for free and open access by the Electrical Engineering & Computer Science Department at EngagedScholarship@CSU. It has been accepted for inclusion in Electrical Engineering & Computer Science Faculty Publications by an authorized administrator of EngagedScholarship@CSU. For more information, please contact [email protected].

Navigation satellite selection using neural networks

Dan Simon a,*, Hossny El-Sherief b

a TRW Test Laboratory, 4051 N. Higley Road, Mesa, AZ 85215, USA.
b Manager, Guidance Systems and Control Department, TRW Systems Integration Group, Building 582, Room 1051, PO Box 1310, San Bernardino, CA 92402, USA.

The application of neural networks to optimal satellite subset selection for navigation use is discussed. The methods presented in this paper are general enough to be applicable regardless of how many satellite signals are being processed by the receiver. The optimal satellite subset is chosen by minimizing a quantity known as Geometric Dilution of Precision (GDOP), which is given by the trace of the inverse of the measurement matrix. An artificial neural network learns the functional relationships between the entries of a measurement matrix and the eigenvalues of its inverse, and thus generates GDOP without inverting a matrix. Simulation results are given, and the computational benefit of neural network-based satellite selection is discussed.

Keywords: Neural networks; Global Positioning System; Geometric dilution of precision; Approximation; Classification

1. Introduction

A Global Positioning System (GPS) receiver generates a user position and time by measuring the range from the user to four or more GPS satellites [8,9,3], but a GPS receiver can process only a subset of the available satellite signals. For instance, there may be nine satellites visible, but the receiver hardware may be limited to processing no more than six satellites. So before processing, the receiver must decide which subset to use. The optimal choice can be made by using the subset which results in the smallest magnification of satellite errors onto the resultant user position and time. This magnification can be determined for each satellite subset by inverting a 4 × 4 matrix.

Phillips [16] presented a simple and geometrically intuitive approach to this problem under the assumption that the GPS receiver processes exactly four satellite signals and the user is not concerned with obtaining an accurate time reference. He showed that the optimal satellite set is the set which minimizes a certain geometrical measurement of the tetrahedron formed by the four satellites and the user.

The approach taken in this paper is applicable to any number of satellite signals. It is based on the learning properties of artificial neurons, and as such gives an approximate rather than an exact answer. Its primary advantage lies in the fact that no matrix inversions are required. This translates into less required computational time for satellite subset selection, and the ability to begin navigating sooner with the best satellite subset.

2. Geometric dilution of precision (GDOP)

A user's GPS receiver measures a set of n ranges (R₁, R₂, ..., Rₙ) between the user and n GPS satellites. The GPS satellites are at positions (xᵢ, yᵢ, zᵢ), i = 1, ..., n. The four unknowns which the user needs to determine are the offset T between receiver time and GPS time, and the user position (x, y, z). We denote the user's best estimate of time offset and position as T̂ and (x̂, ŷ, ẑ). We denote the corresponding best estimates of range as (R̂₁, R̂₂, ..., R̂ₙ). The errors between the true and estimated quantities are denoted by

Δx = x − x̂   (1)
Δy = y − ŷ   (2)
Δz = z − ẑ   (3)
ΔT = T − T̂   (4)
ΔRᵢ = Rᵢ − R̂ᵢ   (5)

The errors of the user's estimate of time and position can be determined by solving the following n simultaneous nonlinear equations for Δx, Δy, Δz, and ΔT [10],

(x̂ + Δx − xᵢ)² + (ŷ + Δy − yᵢ)² + (ẑ + Δz − zᵢ)² = (R̂ᵢ + ΔRᵢ − cT̂ − cΔT)²   (i = 1, ..., n)   (6)

where c is the speed of light. These equations can be linearized to obtain the matrix equation

[ΔR₁]   [a₁₁  a₁₂  a₁₃  1] [ Δx ]
[ΔR₂] = [a₂₁  a₂₂  a₂₃  1] [ Δy ]
[ ⋮ ]   [ ⋮    ⋮    ⋮   ⋮] [ Δz ]
[ΔRₙ]   [aₙ₁  aₙ₂  aₙ₃  1] [cΔT]   (7)

The n × 4 matrix in Eq. 7 is called the measurement matrix, and its elements are given by

aᵢ₁ = (x̂ − xᵢ)/(R̂ᵢ − cT̂)
aᵢ₂ = (ŷ − yᵢ)/(R̂ᵢ − cT̂)   (8)
aᵢ₃ = (ẑ − zᵢ)/(R̂ᵢ − cT̂)

Eq. 7 can be written more compactly as

A x̄ = r̄   (9)

where x̄ = (Δx, Δy, Δz, cΔT)ᵀ and r̄ = (ΔR₁, ..., ΔRₙ)ᵀ.
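As a concrete sketch, the elements of Eq. 8 can be assembled into the measurement matrix with a few lines of NumPy. The function name and argument layout here are illustrative, not from the paper:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def measurement_matrix(user_est, T_est, sat_pos, range_est):
    """Build the n x 4 measurement matrix A of Eqs. (7)-(8).

    user_est:  estimated user position (x̂, ŷ, ẑ), shape (3,)
    T_est:     estimated receiver clock offset T̂, in seconds
    sat_pos:   satellite positions (x_i, y_i, z_i), shape (n, 3)
    range_est: estimated ranges R̂_i, shape (n,)
    """
    denom = (np.asarray(range_est) - C * T_est)[:, None]  # R̂_i - cT̂
    a = (np.asarray(user_est) - sat_pos) / denom          # columns a_i1, a_i2, a_i3
    ones = np.ones((sat_pos.shape[0], 1))                 # fourth column of Eq. (7)
    return np.hstack([a, ones])
```

With the clock offset estimate at zero, each row of A reduces to the negative unit line-of-sight vector from the user to the satellite, followed by a one.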

The least-squares solution for x̄ is given by [15]

x̄ = (AᵀA)⁻¹ Aᵀ r̄   (10)

AᵀA is invertible if A has full rank. The uncertainty of the solution for user position and time is therefore related to the uncertainty of the measured ranges as follows:

cov(x̄) = (AᵀA)⁻¹ Aᵀ cov(r̄) A (AᵀA)⁻¹   (11)

r

If the covariance of is normalized to an identity matrix, we obtain a simplified expression for the covariance of user position and time as

cov(r)

= I = cov(.i') = (A1A) -1.

(12)

A useful scalar measure of the uncertainty of the solution of the user position and time is the trace of the above matrix. This quantifies the magnification of GPS range measurement errors (e.g. due to satellite and receiver inaccuracies) onto user position and time errors. GOOP is thus defined as GOOP = [ trace(A1A) -

1] 1/2

.

(13)
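Eq. 13, together with the brute-force subset search described in the Introduction, can be sketched directly in NumPy. The function names are illustrative; `best_subset` performs exactly the per-subset 4 × 4 inversion whose cost the paper aims to avoid:

```python
import itertools
import numpy as np

def gdop(A):
    """Eq. (13): GDOP = sqrt(trace((A^T A)^{-1})) for an n x 4 measurement matrix."""
    return np.sqrt(np.trace(np.linalg.inv(A.T @ A)))

def best_subset(A, k):
    """Exhaustively search all k-row subsets of A for the minimum-GDOP subset,
    inverting one 4 x 4 matrix per candidate subset."""
    rows = range(A.shape[0])
    return min(itertools.combinations(rows, k),
               key=lambda idx: gdop(A[list(idx)]))
```

For nine visible satellites and a six-channel receiver, this search evaluates C(9,6) = 84 subsets, each requiring a matrix inversion.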

So GDOP can be calculated by inverting a 4 × 4 matrix. But how can GDOP be computed without resorting to matrix inversion? The way that humans learn most effectively is through induction, which is reasoning from the particular to the general. A computer algorithm can be designed to inductively generate a mathematical function by generalizing from known input/output relationships ([22], p. 182). Two ways of doing this are reviewed in the following sections.

3. Neural network-based approximation

The term backpropagation refers to a general learning rule which is implemented in an artificial neural network. Good overviews can be found in [1,13,19]. A typical backpropagation network has three layers of neurons: an input layer, a middle layer (also called a hidden layer), and an output layer. The outputs of the input layer neurons are weighted to activate the middle layer neurons, and the outputs of the middle layer neurons are weighted to activate the output layer

neurons. The effectiveness of backpropagation in learning complex, multidimensional functions can be partially explained by Kolmogorov's Theorem, which was extended to neural networks by Hecht-Nielsen [1,5,7]. This theorem states that any functional ℜᵐ → ℜⁿ mapping can be exactly represented by a three-layer neural network with (2m + 1) middle-layer neurons, assuming that the input components are normalized to lie in the range [0,1].

Knowing that GDOP is equal to the square root of the trace of (AᵀA)⁻¹ (see Eq. 13), we will use the following general facts about the trace and eigenvalues of a matrix to compute GDOP.
(1) The trace of a matrix is equal to the sum of its eigenvalues ([6], p. 40). More specifically, the GDOP of a GPS navigation solution is equal to the square root of the sum of the eigenvalues of (AᵀA)⁻¹, where A is defined in Eqs. 7-9.
(2) If (AᵀA) has eigenvalues λᵢ then (AᵀA)⁻¹ has the eigenvalues λᵢ⁻¹ ([15], p. 292).
(3) The determinant of a matrix is equal to the product of its eigenvalues ([4], p. 332).
(4) If (AᵀA) has eigenvalues λᵢ then (AᵀA)ᵏ has the eigenvalues λᵢᵏ, where k is any positive integer ([6], p. 43).

Using λ to denote the four-element vector of the eigenvalues of AᵀA, we can

define the four functions

f₁(λ) = λ₁ + λ₂ + λ₃ + λ₄ = trace(AᵀA)   (14)
f₂(λ) = λ₁² + λ₂² + λ₃² + λ₄² = trace[(AᵀA)²]   (15)
f₃(λ) = λ₁³ + λ₂³ + λ₃³ + λ₄³ = trace[(AᵀA)³]   (16)
f₄(λ) = λ₁λ₂λ₃λ₄ = det(AᵀA)   (17)

Using the above notation, the GDOP which we wish to calculate is precisely given by

GDOP = (λ₁⁻¹ + λ₂⁻¹ + λ₃⁻¹ + λ₄⁻¹)^(1/2)   (18)

So GDOP can be viewed as the scalar functional of the ℜ⁴ → ℜ⁴ mapping

GDOP = GDOP[f(λ)]   (19)

The mapping from f(λ) to GDOP cannot be determined analytically. But this complex, nonlinear mapping is the type of problem at which neural networks excel. A neural network can be designed with four easily computable inputs (f₁, f₂, f₃, f₄), one hidden layer, and four outputs (λ₁⁻¹, λ₂⁻¹, λ₃⁻¹, λ₄⁻¹). The outputs can then be summed to give the square of GDOP.

At this point it is of interest to investigate the solution space of Eqs. 14-17. That is, given fᵢ (i = 1, 2, 3, 4), is there a unique solution for λᵢ (i = 1, 2, 3, 4)? We know that there is at least one solution. But if there is more than one solution, then a neural network may converge to the wrong solution and hence produce an

incorrect GDOP for a given satellite subset. The following lemma states that the solution to Eqs. 14-17 is unique, and thus implies that a well-trained network will produce an accurate value for GDOP.

Lemma 1. Consider two 4 × 4 matrices A and B such that trace(A) = trace(B), trace(A²) = trace(B²), trace(A³) = trace(B³), and det(A) = det(B). Then A and B have the same eigenvalues.

Proof. See Appendix A. □
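Lemma 1 can be illustrated numerically: the four features determine the characteristic polynomial of AᵀA through Newton's identities, so the eigenvalues are recoverable uniquely. This direct reconstruction is not the paper's method (the network learns the mapping instead); it is only a sketch of why the mapping is well-posed:

```python
import numpy as np

def eigs_from_features(f1, f2, f3, f4):
    """Recover the eigenvalues of a 4 x 4 matrix from the power sums
    trace(M), trace(M^2), trace(M^3) and det(M) via Newton's identities."""
    e1 = f1                              # elementary symmetric polynomials
    e2 = (e1 * f1 - f2) / 2.0
    e3 = (f3 - e1 * f2 + e2 * f1) / 3.0
    e4 = f4
    # Characteristic polynomial: x^4 - e1 x^3 + e2 x^2 - e3 x + e4 = 0
    return np.sort(np.roots([1.0, -e1, e2, -e3, e4]).real)
```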

4. Neural network-based classification

Note that in picking a satellite subset to use for navigation, the receiver does not need to compute GDOP for every satellite subset. It only needs to find a subset which gives a satisfactorily low GDOP. So it can be argued that using backpropagation to find an approximating function to GDOP[f(λ)] (see Eq. 19) entails more work than necessary. A more efficient approach is to create a network which classifies satellite subsets according to GDOP. If a network can be trained to classify a satellite group into one of n sets (S₁, ..., Sₙ) according to GDOP, then those satellite groups which are classed in the best set are candidates for navigation use.

Classification may be the most popular application of neural networks. So it is not surprising that there are many different network architectures which have been proposed for classification. One recently proposed architecture is the Optimal Interpolative Net (OI Net) [20,21]. The OI Net is a three-layer classification network which grows during training according to how many neurons are necessary for correct classification. The efficient recursive learning formulation presented in [20] makes the OI Net an attractive architecture. In addition, fault tolerance can be implemented in OI Net training in a straightforward manner [17,18]. The OI Net can be used to select a good group of satellites with which to navigate by classifying groups according to GDOP. Given a group G of satellites, the OI Net can be trained to classify the group as

G ∈ Sᵢ   iff   GDOP(G)
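The text is cut off here, but the classification rule being set up is a binning of GDOP values into the classes S₁, ..., Sₙ. A minimal sketch of such a labeling, with hypothetical class boundaries (the thresholds below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical GDOP class boundaries: any group with GDOP below 3
# falls in the best class S1, and so on.
THRESHOLDS = np.array([3.0, 6.0, 10.0])

def gdop_class(g):
    """Map a GDOP value to a 0-based class index i, i.e. G ∈ S_{i+1}."""
    return int(np.digitize(g, THRESHOLDS))
```

Groups landing in class 0 (the best set) would be the candidates for navigation use.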
