8 Test 6 - Comparing Performance When Not Using the Bottom Depth Lines

Underwater Terrain Navigation


Fig. 13. Test 5: The graphs show that the performance of the algorithm improves slightly when 7% of the particles are dead reckoned, but the difference is not as clear as the one obtained when using the high-accuracy INS. By dead reckoning 7% or 15% of the particles, the position error is kept below 100 m for 50 min.

More information about the configuration and performance can be found in Table 7.

Table 7. Test setup for graphs in Fig. 13

Evaluation method
Magnetic Field          –       –       –       –
Depth ∩ Magnetism       85      79.05   72.25   0
Skip PF (Only INS)      0       –       –       –
PF Mean (m)             167.4   161.4   150.7   –
PF Covariance           43000   41700   36000   –
KF Mean (m)             –       –       –       –
KF Covariance           –       –       –       –

The algorithm for the medium-accuracy INS also shows good performance. It maintains a position accuracy comparable to when using the depth lines until 40 min, see Fig. 16 and a video on https://youtu.be/p69zQzMSciU. After about 45 min, it has problems tracking the position due to the large velocity error of the INS.



M. Lager et al.

Further Development and Testing of the Algorithm

The algorithm has only been tested with simulated data so far, but the plan for the future is to test it on a ship, using digital sea charts and magnetic maps. The position can then be compared to a GPS position. When testing at sea, it will also be important to keep track of the water level, as the tide will influence the performance significantly due to the importance of correct bottom depth measurements.

Fig. 14. In this image, the particle cloud is initially (top left image) spread around the ship (indicated by the square). When passing a grounding (top right image), most particles are discarded and the cloud splits into smaller sub-clouds. The mean of the particle cloud moves to the biggest sub-cloud (bottom left image), far from the correct position. A few moments later (bottom right image), the particles located in the wrong location have been discarded after further evaluation. A position has been estimated by a KF, using the position estimate from the particle filter along with the INS data. The Kalman position estimate is indicated by a pentagon. In the four images, it can be seen that the KF gives a more stable and accurate position estimate, as it does not respond to the fast position variations of the particle filter.



Fig. 15. Test 6:1: The high-accuracy INS is used. The performance is affected by whether or not the bottom depth lines in the sea chart are used. Even though the performance is lower without them, the algorithm still manages to track the correct position during the 24 h test. A mean position error of 34.0 m is maintained.

Fig. 16. Test 6:2: The medium-accuracy INS is used. It shows good performance during 45 min, even though the bottom depth lines are not being used. After this time, it has problems tracking the correct position.





It has already been shown that PF algorithms can be used for estimating positions [2,17,19,20], and it has even been shown that it is possible to accurately estimate a ship's position if high-resolution maps are available [3–15,21]. This paper describes how this technique can be used in solutions more suitable for real-world scenarios, where only normal sea charts and magnetic field maps are available. The paper has shown how the performance varies depending on whether one or multiple inputs to the PF are used, and how the performance can be enhanced by using a KF. It has also been shown how the performance of the INS influences the position accuracy when using the PF.

When fusing the different PF methods in a 20 h long test, the mean position error for the high-end INS was 10.2 m when using the bottom depth lines, and 30.5 m when not using them. This is not as good as GNSS-based accuracy, but probably accurate enough for most navigation purposes. When using the medium-accuracy INS, the mean position error for a 40 min long test was 35.4 m when using the bottom depth lines, and 37.1 m when not using them.

Whether it is possible to navigate accurately enough without GNSS systems, relying only on high-performance navigation sensors, standard sea charts and magnetic maps, remains unclear. Further testing with real ship data in multiple areas is therefore necessary.

Acknowledgment. This work was partially supported by the Wallenberg Autonomous Systems and Software Program (WASP).





References

1. Humphreys, T.E., Ledvina, B.M., Psiaki, M.L., O'Hanlon, B.W., Kintner Jr., P.M.: Assessing the spoofing threat: development of a portable GPS civilian spoofer. In: Proceedings of the ION GNSS International Technical Meeting of the Satellite Division, vol. 55 (2008)
2. Gustafsson, F., Gunnarsson, F., Bergman, N., Forssell, U., Jansson, U., Karlsson, R., Nordlund, P.-J.: Particle filters for positioning, navigation, and tracking. IEEE Trans. Signal Process. 50(2), 425–437 (2002). https://doi.org/10.1109/78.
3. Nygren, I.: Terrain navigation for underwater vehicles. Ph.D. dissertation, Dept. Electr. Eng., Royal Institute of Technology (KTH), Sweden (2005)
4. Karlsson, R., Gustafsson, F.: Particle filter for underwater terrain navigation. In: IEEE Workshop on Statistical Signal Processing, pp. 526–529 (2003). https://doi.
5. Ånonsen, K.B., Hallingstad, O.: Sigma point Kalman filter for underwater terrain-based navigation. In: IFAC Proceedings, vol. 40(17), pp. 106–110 (2007). https://
6. Nordlund, P.-J.: Sequential Monte Carlo filters and integrated navigation. Ph.D. dissertation, Dept. Electr. Eng., Linköping University, Linköping (2002)
7. Karlsson, R., Gustafsson, F.: Bayesian surface and underwater navigation. IEEE Trans. Signal Process. 54(11), 4204–4213 (2006). https://doi.org/10.1109/TSP.
8. Karlsson, R., Gustafsson, F., Karlsson, T.: Particle filtering and Cramér-Rao lower bound for underwater navigation. In: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 6, pp. 65–68. IEEE (2003)
9. Donovan, G.T.: Position error correction for an autonomous underwater vehicle inertial navigation system (INS) using a particle filter. IEEE J. Oceanic Eng. 37(3), 431–445 (2012)
10. Nakatani, T., Ura, T., Sakamaki, T., Kojima, J.: Terrain based localization for pinpoint observation of deep seafloors. In: OCEANS 2009 - EUROPE, pp. 1–6 (2009). https://doi.org/10.1109/OCEANSE.2009.5278194
11. Fairfield, N., Wettergreen, D.: Active localization on the ocean floor with multibeam sonar. In: Proceedings of the OCEANS Conference, Quebec City, Canada (2008). https://doi.org/10.1109/OCEANS.2008.5151853
12. Ånonsen, K.B., Hallingstad, O.: Terrain aided underwater navigation using point mass and particle filters. In: 2006 IEEE/ION Position, Location, and Navigation Symposium, pp. 1027–1035. IEEE (2006)
13. Meduna, D.K., Rock, S.M., McEwen, R.: Low-cost terrain relative navigation for long-range AUVs. In: Proceedings of the OCEANS Conference, Quebec City, Canada (2008). https://doi.org/10.1109/OCEANS.2008.5152043
14. Carreno, S., Wilson, P., Ridao, P., Petillot, Y.: A survey on terrain based navigation for AUVs. In: Proceedings of the OCEANS Conference (2010). https://doi.org/10.1109/
15. Melo, J., Matos, A.: Survey on advances on terrain based navigation for autonomous underwater vehicles. Ocean Engineering 139, 250–264 (2017). https://doi.org/10.1016/j.oceaneng.2017.04.047
16. Le Grand, E., Thrun, S.: 3-axis magnetic field mapping and fusion for indoor localization. In: 2012 IEEE Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), pp. 358–364 (2012). https://doi.org/10.1109/MFI.2012.
17. Frassl, M., Angermann, M., Lichtenstern, M., Robertson, P., Julian, B.J., Doniec, M.: Magnetic maps of indoor environments for precise localization of legged and non-legged locomotion. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 913–920 (2013). https://doi.org/10.1109/IROS.
18. Lager, M., Topp, E.A., Malec, J.: Underwater terrain navigation using standard sea charts and magnetic field maps. In: 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI) (2017). https://
19. Robertson, P., Angermann, M., Krach, B., Khider, M.: Inertial-based joint mapping and positioning for pedestrian navigation. In: Proceedings of the ION GNSS 2009
20. Dellaert, F., Fox, D., Burgard, W., Thrun, S.: Monte Carlo localization for mobile robots. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 1999), pp. 1322–1328 (1999). https://doi.org/10.1109/
21. Klein, L.A.: Sensor and Data Fusion - A Tool for Information Assessment and Decision Making. SPIE Press (2012). ISBN: 9780819491336

Supervised Calibration Method for Improving Contrast and Intensity of LIDAR Laser Beams

Mohammad Aldibaja ✉, Naoki Suganuma, Keisuke Yoneda, Ryo Yanase, and Akisue Kuramoto

Autonomous Vehicle Research Unit, Kanazawa University, Kanazawa 920-1192, Japan


Abstract. Calibration of LIDAR laser beams in terms of contrast and intensity levels is very important for map generation and localization of autonomous vehicles. In this paper, we explain a simple semi-calibration method based on matching the shape and distribution of histograms. A laser beam output is selected as the reference of the calibration process after manually tuning its intensity and contrast parameters to describe the road marks with prominent reflectivity. The histograms of the other laser beams are then aligned to the reference histogram, and the calibration parameters of each beam are obtained. The experimental results have verified that the proposed method is reliable, provides a considerable enhancement of the map image quality, and improves the localization accuracy during autonomous driving.

Keywords: Autonomous vehicles · LIDAR · Road mapping systems · Localization · Autonomous driving · Calibration



Introduction

Mapping and localization always come together to enable autonomous driving. The map must be of a high definition level to describe the road surfaces and the surrounding environment accurately with respect to the real world. Localization is achieved by comparing the observation data obtained by sensors with the maps. The intuitive comparison strategy is to calculate a matching score based on the common features between the sensor and map data. The matching calculation is a three-dimensional problem: the map details may be positioned with error relative to the real world, the details may be represented in low quality, and the environmental conditions in the observation data may change. For the first issue, we use post-processed measurements of a very accurate GNSS/INS system. In addition, a framework based on SLAM technology is being developed to deal with critical situations such as mapping inside long tunnels [1]. The second issue is overcome by implementing a very robust mapping method that accumulates the LIDAR point clouds in the absolute coordinate system [2]. The third issue is very likely to occur, and researchers usually try to develop localization strategies that deal robustly with the differences in the observation data [3–6]. These issues

© Springer International Publishing AG, part of Springer Nature 2018

S. Lee et al. (Eds.): MFI 2017, LNEE 501, pp. 210–218, 2018.


Improving Intensity and Contrast of LIDAR Laser Beams


influence each other during autonomous driving; however, there is another technical factor needed to guarantee localization stability, i.e., calibration of the LIDAR laser beams. Every point in the environment is scanned by each beam with respect to the vehicle motion. Therefore, the reflected value of a point must be the same for all beams. This increases the map quality in terms of illustrating the road texture and structure, i.e., no ghosting is generated and the contrast between the road surface and the painted marks is very high. Notably, Thrun et al. have proposed an unsupervised calibration method based on learning the variance in the intensity levels of the laser beams [7]. The core idea is to iteratively minimize an energy function that encodes the intensity differences between neighboring points. In addition, a Bayesian generative model is incorporated to adjust the beam reflectivity by filtering out the uncertainty and noise characteristics. Thus, each beam is represented by 256 reflectivity values corresponding to the gray levels in the image domain. The parameters are optimized by the Bayes model and the map images are regenerated accordingly. This method provides good image quality if the parameters are properly initialized. On the other hand, it uses an iterative framework that consumes time, the results are not guaranteed for different road patterns, and it reduces the contrast of the road marks.

In this paper, we introduce a semi-calibration method that simply enhances the contrast and consumes less processing time. The method has been utilized for three years as a supplement to the mapping system that is used to generate maps in different cities in Japan. In order to provide a complete picture of the performance evaluation, the mapping and localization systems are briefly explained in the next sections.


Mapping System

The key issue in implementing a mapping system is the strategy of accumulating the LIDAR point clouds while taking the vehicle heading direction into account. Accordingly, the designed system accumulates and generates the map images directly in the absolute coordinate system. Other issues are the storage size, the saving strategy, and providing global labeling IDs to the map data. These steps are summarized in Fig. 1, and each of them is roughly described as follows: the vehicle position is first approximated to the closest pixel in the image domain according to the pixel resolution. The LIDAR frame is then generated by down-sampling the point cloud into a 2D image which encodes the road surface at a height of 30 cm, as shown in Fig. 1a. Another LIDAR frame is generated for the next movement step of the vehicle, as in Fig. 1b. This frame is accumulated onto the previous one by considering the maximum top-left corner as the mapping reference, regardless of the heading angle. The intensity values of the common pixels between the two frames are averaged, and an accumulated image is obtained, as illustrated in Figs. 1c–d. The size of the accumulated image increases with the vehicle motion, and the saving procedure is applied when a size threshold is exceeded, as in Fig. 1e. The accumulated image is then divided into sub-images of a fixed size. Global labeling IDs are given to these images with respect to the top-left corner of the accumulated image, as demonstrated in Fig. 1f. Hence, the images are saved with their corresponding IDs in BMP format, and the accumulation process is reinitialized continuously.
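The accumulate-and-average step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function name, the running sum/count representation, and the tiny 4 × 4 canvas are our own assumptions.

```python
import numpy as np

def accumulate(map_sum, count, frame, offset):
    """Accumulate one LIDAR intensity frame into a growing map image.

    map_sum / count : running per-pixel intensity sum and hit count
    frame           : 2D intensity frame (0 = no return at that pixel)
    offset          : (row, col) of the frame's top-left corner in map coords
    Common pixels between successive frames end up averaged, as in the paper.
    """
    r, c = offset
    h, w = frame.shape
    hit = frame > 0                          # only pixels the beams actually hit
    map_sum[r:r + h, c:c + w][hit] += frame[hit]
    count[r:r + h, c:c + w][hit] += 1

# Two overlapping 2x2 frames on a 4x4 map canvas
canvas = np.zeros((4, 4))
hits = np.zeros((4, 4))
f1 = np.array([[100.0, 100.0], [100.0, 100.0]])
f2 = np.array([[200.0, 200.0], [200.0, 200.0]])
accumulate(canvas, hits, f1, (0, 0))
accumulate(canvas, hits, f2, (0, 1))         # overlaps f1 in one column
averaged = np.divide(canvas, hits, out=np.zeros_like(canvas), where=hits > 0)
print(averaged[0, 1])  # overlapping pixel: (100 + 200) / 2 = 150.0
```

Keeping a separate hit count (instead of averaging pairwise) makes the result independent of how many frames cover a pixel.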


M. Aldibaja et al.

Fig. 1. Mapping system. (a) First LIDAR frame. (b) Second frame. (c) Accumulation map. (d) Accumulated image. (e) Large map image divided into sub-images. (f) Sub-images with labeling IDs.


Localization System

During autonomous driving, the labeling ID of the driving area can be obtained from the vehicle position in the absolute coordinate system. Thus, the saved map image with the same ID is retrieved, and the three neighboring sub-images are obtained by adding one size-unit to the driving area ID, as illustrated in Fig. 2a. The four images are combined into one large image, and this image is cropped to a size of 256 × 256 around the vehicle, as in Fig. 2b. The cropped image is called the small map image and is used for localization. The localization system uses a dead reckoning method to estimate the vehicle position [4]. Dead reckoning integrates the vehicle velocity with the previous position to estimate the current position, taking the elapsed time into account. In order to improve the accuracy of dead reckoning, an offset vector is calculated to correct the estimate based on the LIDAR observation data.
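The dead reckoning update described here is a simple velocity integration with an additive correction. As a hedged sketch (function and argument names are ours, not from the paper):

```python
import numpy as np

def dead_reckon(prev_pos, velocity, dt, offset=np.zeros(2)):
    """One dead-reckoning step: integrate velocity over the elapsed time dt,
    then apply the map-matching offset (if available) as a correction."""
    return prev_pos + velocity * dt + offset

pos = np.array([0.0, 0.0])
pos = dead_reckon(pos, np.array([2.0, 0.5]), dt=0.1)           # pure prediction
pos = dead_reckon(pos, np.array([2.0, 0.5]), dt=0.1,
                  offset=np.array([-0.02, 0.01]))              # LIDAR-based correction
print(pos)  # [0.38 0.11]
```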



Fig. 2. Localization system. (a) Large map image from four sub-images. (b) Small map image around the vehicle. (c) LIDAR image. (d) Correlation matrix. (e) Matching image: the LIDAR image overlaps the map image according to the highest peak in the correlation matrix.

As the LIDAR image describes the environment at the actual vehicle position, the offset is computed by calculating the normalized cross-correlation between the LIDAR and small-map images, as shown in Figs. 2c–d. The highest score indicates the position of the LIDAR image relative to the small map image, as demonstrated in Fig. 2e. This system has been used to conduct autonomous driving on public roads in the cities of Suzu, Kanazawa and Abashiri in Japan. In addition, the system has been modified to overcome various critical situations with changing environmental and weather conditions while providing high accuracy of longitudinal and lateral localization [8].
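The normalized cross-correlation search can be sketched as a brute-force slide of the LIDAR image over the small map image. This is an illustrative NumPy version under our own naming and with synthetic data; a production system would use an FFT-based or library implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation score of two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_offset(lidar, small_map):
    """Slide the LIDAR image over the small map image and return the shift
    with the highest NCC score (the peak of the correlation matrix)."""
    H, W = small_map.shape
    h, w = lidar.shape
    best, arg = -2.0, (0, 0)
    for dr in range(H - h + 1):
        for dc in range(W - w + 1):
            s = ncc(lidar, small_map[dr:dr + h, dc:dc + w])
            if s > best:
                best, arg = s, (dr, dc)
    return arg, best

rng = np.random.default_rng(0)
small_map = rng.random((32, 32))
lidar = small_map[5:21, 7:23].copy()          # patch taken at offset (5, 7)
offset, score = best_offset(lidar, small_map)
print(offset)  # (5, 7)
```

Subtracting the means makes the score invariant to global intensity shifts, which is exactly why the beam calibration described next matters: uncalibrated beams distort local intensities, not just the global level.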


Calibration System

4.1 Problem Statement

Calibration is performed to generate map images that represent the road marks in high contrast compared to the road surface. This is very important for enabling many applications and algorithms such as lane line detection, curb extraction, painted-sign filtration, waypoint/traffic-sign assignment and so on [9, 10]. In addition, robust calibration allows map generation across a reasonable range of changing weather conditions. This preserves a very smooth distribution of the intensity levels between maps generated on different dates. Furthermore, the LIDAR image quality can be improved using the calibration parameters, which significantly affects the matching result between the LIDAR and map images during autonomous driving. Figure 3 shows a Google Maps image of a parking area and the corresponding map image generated using the raw/uncalibrated LIDAR data. One can observe that the lane marks are hardly recognizable because of the small intensity change of the laser beams (low contrast). In addition, the vehicle trajectories appear as annular patterns due to the different reflectivity values of each laser beam in the same scanned area. Accordingly, calculating the localization offsets using this map image is risky, as there are no prominent matching features to be extracted.



Fig. 3. Google Maps image and the corresponding map image using uncalibrated LIDAR data.

4.2 Proposed Method



Each laser beam LsrID is described by two parameters $a_{\mathrm{LsrID}}, b_{\mathrm{LsrID}}$, and the reflectivity values $I_{\mathrm{LsrID}}$ range from 0 to 255. A pixel $I$ in a map image is obtained by combining the contributions of the $N$ laser beams, as clarified in Eq. (1):

$$I = \frac{1}{N} \sum_{\mathrm{LsrID}=0}^{N-1} \left( a_{\mathrm{LsrID}}\, I_{\mathrm{LsrID}} + b_{\mathrm{LsrID}} \right) \qquad (1)$$


A simple calibration strategy is implemented to adjust the slope and intercept values of a linear function which describes the intensity-contrast relationship. The core idea is

Fig. 4. Proposed calibration method. (a) Decomposing the map image into laser beam frames and adjusting the reference frame. (b) Adjusting the histogram's mean and variance.

Improving Intensity and Contrast of LIDAR Laser Beams


to collect some LIDAR frames on a very well painted road surface that represents lane lines sharply, e.g., parking areas. The LIDAR frame is decomposed into 64 frames that represent the area scanned by each beam individually, as illustrated in Fig. 4a for the parking area in Fig. 3. The decomposed frames are visually checked to select a reference frame that possesses dense reflectivity and contributes significantly to representing the lane lines and the road marks, such as LsrID = 0 in Fig. 4a. The contrast and the intensity of the selected frame are manually readjusted to obtain a better appearance and texture of the road marks. The relationship between the frame before and after adjustment is simply explained by Eq. (2):

$$I_{\mathrm{ref,Adj}} = a_{\mathrm{Adj}}\, I_{\mathrm{LsrID}} + b_{\mathrm{Adj}} \qquad (2)$$


Accordingly, the other beams/frames must be aligned to reflect the same intensity values as the adjusted frame over the $M$ common pixels. This can be achieved by creating histograms over the common pixels between the reference frame and each laser beam output. Figure 4b shows an illustration of the histogram, which can be described by its variance and mean components $\sigma, \mu$. The relationship between the histograms before and after adjustment is governed by the two parameters $a_{\mathrm{LsrID}}, b_{\mathrm{LsrID}}$, as in Fig. 4c. In order to calculate these parameters with respect to the reference frame, the set of equations (3) is used. Consequently, each histogram is aligned to have the same variance and mean values over the existing pixels:

$$\mu_{\mathrm{ref,Adj}} = a_{\mathrm{LsrID}}\, \mu_{\mathrm{LsrID}} + b_{\mathrm{LsrID}}$$

$$\mu_{\mathrm{ref,Adj}} + \sigma_{\mathrm{ref,Adj}} = a_{\mathrm{LsrID}} \left( \mu_{\mathrm{LsrID}} + \sigma_{\mathrm{LsrID}} \right) + b_{\mathrm{LsrID}}$$

$$\mu = \frac{1}{M} \sum_{i=0}^{M-1} I(u_i, v_i), \qquad \sigma^2 = \frac{1}{M} \sum_{i=0}^{M-1} \left\{ I(u_i, v_i) - \mu \right\}^2, \qquad (u_i, v_i):\ I_{\mathrm{ref,Adj}}(u_i, v_i) \neq 0,\ I_{\mathrm{LsrID}}(u_i, v_i) \neq 0$$

$$a_{\mathrm{LsrID}} = \frac{\sigma_{\mathrm{ref,Adj}}}{\sigma_{\mathrm{LsrID}}}, \qquad b_{\mathrm{LsrID}} = \mu_{\mathrm{ref,Adj}} - a_{\mathrm{LsrID}}\, \mu_{\mathrm{LsrID}} \qquad (3)$$


The above steps are applied to the remaining 63 laser beams, and the parameters of each of them are obtained. Accordingly, the intensity values of the common pixels are unified and the calibration process is achieved. In addition, the obtained parameters are used during autonomous driving to correct the LIDAR frames.
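Equations (3) reduce to a per-beam gain and offset that match the mean and variance of a beam's histogram to the reference frame over the common nonzero pixels. A minimal NumPy sketch of this step (the function name and the synthetic test frames are our own assumptions, not the authors' code):

```python
import numpy as np

def beam_calibration(ref_adj, beam):
    """Compute a_LsrID, b_LsrID aligning one beam's intensity statistics
    (mean and std over the common nonzero pixels) to the manually adjusted
    reference frame, following the gain/offset form of Eq. (3)."""
    common = (ref_adj > 0) & (beam > 0)        # pixels hit by both beams
    mu_r, sd_r = ref_adj[common].mean(), ref_adj[common].std()
    mu_b, sd_b = beam[common].mean(), beam[common].std()
    a = sd_r / sd_b                            # matches the variances
    b = mu_r - a * mu_b                        # matches the means
    return a, b

# Synthetic example: a bright, high-contrast reference and a dim beam
rng = np.random.default_rng(1)
ref = rng.normal(180, 20, (64, 64)).clip(1, 255)
dim = rng.normal(60, 5, (64, 64)).clip(1, 255)
a, b = beam_calibration(ref, dim)
corrected = a * dim + b
print(round(corrected.mean()), round(ref.mean()))  # means now agree
```

After the correction, the beam's mean and standard deviation over the common pixels equal those of the reference frame, so all 64 beams contribute consistent intensities to the averaged map pixels of Eq. (1).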


Experimental Results and Discussion

Calibration is very important for maintaining a smooth intensity distribution in map data that is collected on different days. The contrast between the road surface and the painted marks must be high in order to recognize the lateral and longitudinal features for estimating the vehicle position. The proposed calibration method is very simple and effective in meeting these conditions because it deals with the raw outputs of the LIDAR beams.