The research of Last et al. into Europe's DGNSS examines the value of DGNSS PRC interpolation and whether it would cause problems for the user [15]. The clock-bias question arises from the difference between user and GPS satellite time, which is usually resolved by calculating this difference while locking to the frequency and phase difference. DGNSS introduces a further latency, the time-to-calculation at the reference station, which is typically of no concern since all latencies introduced by a single station are the same. During their test, the authors assessed the extent of multi-beacon coverage in the United Kingdom, where three beacons was common and seven was the maximum; on the European mainland, the maximum was 23. Testing consisted of four DGNSS receivers recording transmissions for 24 hours, with a locally placed Ashtech receiver providing truth values. They discovered that the effect of merging different clock biases was negligible, minimized by the averaging and weighting applied when combining the PRCs. The combination method weighted the PRCs by the inverse of the user-beacon ranges, improving the correlation between calculated and actual PRC (an approach termed the Regional Area Augmentation System, RAAS). Also compared were position solutions computed using a single beacon (23 km away) and RAAS (beacons 219, 358, and 419 km away). They found that the single-beacon position solutions were only slightly better, suggesting that RAAS solutions might be useful. Their work also suggests that further work should explore a combination of two beacons, and that a RAAS could extend the boundaries of the current DGNSS system.
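
The inverse-range weighting described above can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation; the beacon ranges echo the distances quoted in the abstract, but the PRC values are invented.

```python
# Hypothetical sketch of a RAAS-style combination: pseudorange
# corrections (PRCs) from several beacons are merged with weights
# proportional to the inverse of the user-beacon range, so nearer
# beacons dominate. PRC values here are illustrative only.

def combine_prcs(prcs, ranges):
    """Weighted mean of PRCs (metres) using inverse-range weights (km)."""
    weights = [1.0 / r for r in ranges]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, prcs)) / total

# Three beacons at 219, 358, and 419 km, each broadcasting a PRC in metres.
prc = combine_prcs([2.1, 1.7, 2.4], [219.0, 358.0, 419.0])
```

With equal ranges this reduces to a plain average, which is why merging beacons with different clock biases tends to wash those biases out.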

Tropospheric heterogeneity (differential tropospheric delay) can lead to misinterpretation of DInSAR results. A between-site and between-epoch double-differencing algorithm has been proposed to derive tropospheric corrections to radar results from GPS measurements. These GPS observations can be made either by a network of continuous GPS (CGPS) stations or by GPS campaigns synchronised to the radar satellite flyover. In order to correct the radar result on a pixel-by-pixel basis, the GPS-derived corrections have to be interpolated. Three interpolation methods (inverse distance weighted, spline, and Kriging) have been investigated. Using GPS data from the SCIGN network, it has been found that the inverse distance weighted and Kriging interpolation techniques are the more suitable. Differential corrections of as much as several centimetres may have to be applied in order to ensure sub-cm accuracy for the radar result. It appears optimal to estimate the tropospheric delay from GPS data at 5-minute intervals. The algorithm and procedures developed in this paper could easily be implemented in a CGPS network data centre. The interpolated image of between-site, single-differenced tropospheric delays can be generated as a routine product to assist radar interferometry, in a manner similar to the SLC radar images.
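
Of the three interpolation methods compared, inverse distance weighting is the simplest to sketch. The snippet below is a generic IDW interpolator for station-derived delay values at a radar pixel; the station coordinates and delay values are invented for illustration and are not from the study.

```python
# Minimal inverse-distance-weighted (IDW) interpolation of GPS-derived
# tropospheric delay corrections at an arbitrary pixel location.
import math

def idw(x, y, stations, power=2.0):
    """Interpolate a delay at (x, y) from (sx, sy, delay) station tuples."""
    num = den = 0.0
    for sx, sy, d in stations:
        dist = math.hypot(x - sx, y - sy)
        if dist == 0.0:
            return d  # pixel coincides with a station: use its value directly
        w = dist ** -power
        num += w * d
        den += w
    return num / den

# Three hypothetical stations with single-differenced delays in centimetres.
stations = [(0.0, 0.0, 1.2), (10.0, 0.0, 2.0), (0.0, 10.0, 0.8)]
delay = idw(4.0, 4.0, stations)
```

The interpolated value always lies within the range of the station values, which is one reason IDW behaves robustly for pixels inside the station network.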

propagation, transmission, router, processing, computer … etc.)” and “packet dropout (caused by network congestion, multi-path fading, faulty network drivers… etc.)” on the system performance [14]. To mitigate these effects, several methods have been proposed. Seiler et al. adopted H∞ control theory to stabilize NCS and analyzed the

The offshore energy industries use differential Global Positioning System (DGPS) methods extensively to provide positioning for activities such as seismic exploration and pipeline inspection surveys. It has long been a criticism of DGPS that the quality measures delivered by most processing algorithms do not truly reflect the real quality of the derived positions and underestimate the likely sizes of the errors. In practice, this problem is often resolved by quoting extremely conservative upper error bounds so that users can be sure that real positioning errors always fall within them. Significant improvements have been seen in both solutions and quality measures for post-processed applications with the use of reverse-engineered stochastic models. A common goal for researchers involved with single-frequency DGPS positioning is to identify a general-case stochastic algorithm that can be relied upon to describe correctly, in real time, the quality of position solutions. Many groups have found that an elevation-weighting function is preferable to a model that incorrectly assumes all code pseudoranges are of equal precision and uncorrelated. However, such functions cannot accurately represent the high-frequency variations within DGPS measurements.
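
A common form of the elevation-weighting function mentioned above assigns each code pseudorange a standard deviation that grows as the satellite drops toward the horizon, often sigma(el) = a + b / sin(el). The coefficients below are illustrative defaults, not values from the cited work.

```python
# Sketch of an elevation-dependent pseudorange weighting model.
# Coefficients a and b are hypothetical; real values are tuned per
# receiver and environment.
import math

def pseudorange_sigma(elevation_deg, a=0.3, b=0.5):
    """Return an elevation-dependent pseudorange sigma in metres."""
    el = math.radians(elevation_deg)
    return a + b / math.sin(el)

# Low-elevation satellites are down-weighted relative to high ones.
sigma_low = pseudorange_sigma(10.0)
sigma_high = pseudorange_sigma(80.0)
```

In a least-squares position solution each observation would then be weighted by 1/sigma², replacing the equal-precision assumption criticised in the abstract. As the abstract notes, this still cannot capture high-frequency measurement variations.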

The Global Positioning System (GPS) has proven to be an accurate positioning sensor. However, it is subject to several sources of error, such as ionospheric and tropospheric effects, satellite clock errors, orbit data errors, receiver errors, and multipath effects, which reduce the accuracy of low-cost GPS receivers. These error sources also limit the use of single-frequency GPS receivers because of their less accurate data. It is therefore important to reduce the effect of errors on GPS systems. To cope with these errors and enhance GPS accuracy, the Differential GPS (DGPS) method can be used. The problem with this method is the slow updating of the differential corrections. In this paper, three algorithms based on Kalman filtering (KF) are proposed to predict real-time corrections for DGPS systems. The efficiency of the proposed algorithms is verified on the basis of actual data. The experimental results obtained in field tests confirm the high potential of these methods for obtaining accurate positioning data. The results show that the KF with a variable transition matrix outperforms the other methods, making it possible to reduce the root mean square (RMS) of positioning errors in low-cost GPS receivers to less than one metre.
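
The general idea of predicting a correction between updates can be sketched with a small Kalman filter whose state is the correction and its rate of change. This illustrates only the generic KF mechanics; the paper's variable-transition-matrix variant and its tuning are not reproduced, and all numbers below are illustrative.

```python
# Scalar-measurement Kalman filter over a (correction, correction-rate)
# state with constant-rate dynamics F = [[1, dt], [0, 1]], process noise
# q and measurement noise r (hypothetical values).

def kf_predict_update(x, P, z, dt, q=1e-4, r=0.25):
    """One predict/update cycle. x = [prc, prc_rate], P = 2x2 covariance."""
    # Predict: x <- F x,  P <- F P F^T + Q
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # Update with a scalar measurement z of the correction (H = [1, 0])
    s = Pp[0][0] + r
    k = [Pp[0][0] / s, Pp[1][0] / s]
    innov = z - xp[0]
    xn = [xp[0] + k[0] * innov, xp[1] + k[1] * innov]
    Pn = [[(1 - k[0]) * Pp[0][0], (1 - k[0]) * Pp[0][1]],
          [Pp[1][0] - k[1] * Pp[0][0], Pp[1][1] - k[1] * Pp[0][1]]]
    return xn, Pn

x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for z in [2.0, 2.1, 2.2, 2.3]:  # corrections drifting at 0.1 m per epoch
    x, P = kf_predict_update(x, P, z, dt=1.0)
```

Between correction broadcasts, the predict step alone extrapolates the correction forward, which is exactly what mitigates the slow update rate the abstract identifies.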

In general, GPS TEC is calculated with the so-called geometry-free linear combination of two frequencies (L1–L2). Hardware biases are usually determined in a relative sense because they remain in the ionospheric TEC after subtracting measurements at different frequencies. These differences in the hardware biases of GPS code measurements are called differential code biases (DCBs) (Mannucci et al., 1998; Meza, 1999). Choi et al. (2011) showed that the receiver DCBs estimated from GPS measurements reach up to a few tens of nanoseconds. GPS satellite DCBs have a range at the level of a few nanoseconds (ns) and exhibit very gradual drifts over periods of several months. They have a long-term stability with a root mean square (RMS) error of about 0.2 ns (Wilson and Mannucci, 1994). These DCBs can seriously affect ionospheric TEC estimation. Therefore, it is necessary to precisely estimate GPS satellite and
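
The geometry-free code combination can be sketched as follows: geometric terms cancel in the L2-minus-L1 difference, leaving the dispersive ionospheric delay (plus the DCBs discussed above, which this sketch does not calibrate). The frequencies are the standard GPS L1/L2 values; the pseudorange numbers are illustrative.

```python
# Slant TEC from the L1/L2 code pseudorange difference (geometry-free
# combination), before any DCB correction is applied.

F1 = 1575.42e6  # GPS L1 frequency, Hz
F2 = 1227.60e6  # GPS L2 frequency, Hz

def slant_tec(p1, p2):
    """Slant TEC in TECU from code pseudoranges (metres) on L1 and L2."""
    k = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))  # electrons/m^2 per metre
    return k * (p2 - p1) / 1e16                     # 1 TECU = 1e16 el/m^2

# A 3.2 m L2-minus-L1 code delay corresponds to roughly 30 TECU.
tec = slant_tec(21_000_000.0, 21_000_003.2)
```

Since about 0.1 m of inter-frequency delay maps to roughly 1 TECU, satellite and receiver DCBs of a few nanoseconds (around a metre of delay) are indeed large enough to corrupt TEC estimates if left uncalibrated.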

Valbuena et al. (2010) [8] examined the accuracy and precision of GPS receivers under forest canopies. True coordinates obtained from a total-station traverse were compared against GPS/GLONASS occupations computed from one navigation-grade and three survey-grade receivers. The horizontal component of the absolute error was a better descriptor of the performance of the GPS/GLONASS receivers than the precision computed by the proprietary software. Differences among methods and receivers were only observed for recording periods between 5 and 15 minutes.

While a procedure to automatically determine initial relative positions is a significant missing piece in our goal of a complete localization system, this paper will show that our novel method is already usable for highly accurate tracking in applications such as land surveying or precision agriculture, where the initial node locations are already well known. It can also be used in its current state for applications involving temporal feature extraction, such as the mapping of certain events (e.g. a lane change or a left turn in a vehicle’s track) to a specific time tag, or in applications in which additional sensing modalities may enable periodic re-initialization of any of the participating nodes’ relative locations. More importantly, however, we will conclude this paper with an outline of how we plan to definitively solve the location-initialization problem without imposing the requirement of the stationary calibration phase present in all existing methods that are able to achieve centimeter-scale localization precision, tracking or otherwise.

Several studies have been conducted on enhancing receiver accuracy. Some work has examined integrating extra sensors, such as inertial sensors, with the GPS receiver [2-4]. Other work has addressed differential methods, such as Differential GPS (DGPS) and the Wide Area Augmentation System (WAAS), to increase accuracy [5,6]. Still other researchers have used software methods, such as neural networks (NNs) and fuzzy logic, to improve accuracy [7-9].

In mathematical neuroscience, stochastic differential equations (SDEs) have been utilized to model stochastic phenomena that range in scale from molecular transport in neurons, to neuronal firing, to networks of coupled neurons, to cognitive phenomena such as decision making [1]. Generally these SDEs are impossible to solve in closed form and must be tackled approximately using methods that include eigenfunction decompositions, WKB expansions, and variational methods in the Langevin or Fokker–Planck formalisms [2–4]. Often knowing what method to use is not obvious, and their application can be unwieldy, especially in higher dimensions. Here we demonstrate how methods adapted from statistical field theory can provide a unifying framework to produce perturbative approximations to the moments of SDEs [5–13]. Any stochastic, and even deterministic, system can be expressed in terms of a path integral to which asymptotic methods can be systematically applied. Often of interest are the moments of x(t) or the probability density function p(x, t). Path integral methods provide a convenient tool to compute quantities such as moments and transition probabilities perturbatively. They also make renormalization group methods

F^n_{j∓1} − F^n_j + O(Δt², ΔxΔt), (3.13) where the ± sign should be chosen according to whether v > 0 or v < 0. The logic behind the choice of the stencil in an upwind method is illustrated in Fig. 1.1, where we have shown a schematic diagram for the two possible values of the advection velocity. The upwind scheme (as well as all of the others we will consider here) is an example of an explicit scheme, that is, of a scheme where the solution at the new time-level n + 1 can be calculated explicitly from the quantities that are already known at the previous time-level n. This is to be contrasted with an implicit scheme, in which the finite-difference representation of the differential equation has, on the right-hand side, terms at the new time-level n + 1. Such methods in general require the solution of a number of coupled algebraic equations and will not be discussed further here.
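
The explicit upwind update described above can be sketched directly. The snippet below applies one first-order upwind step to the linear advection equation u_t + v u_x = 0 on a periodic grid; grid values and parameters are illustrative.

```python
# One explicit first-order upwind step: for v > 0 difference to the
# left (the upwind side), for v < 0 difference to the right, matching
# the sign choice in the stencil above.

def upwind_step(u, v, dt, dx):
    """One explicit upwind update of the periodic grid function u."""
    n = len(u)
    c = v * dt / dx  # Courant number; |c| <= 1 is required for stability
    new = [0.0] * n
    for j in range(n):
        if v > 0:
            new[j] = u[j] - c * (u[j] - u[(j - 1) % n])  # left (upwind) stencil
        else:
            new[j] = u[j] - c * (u[(j + 1) % n] - u[j])  # right (upwind) stencil
    return new

# A square pulse advected exactly one cell to the right when c = 1.
u0 = [0.0, 1.0, 1.0, 0.0, 0.0, 0.0]
u1 = upwind_step(u0, v=1.0, dt=1.0, dx=1.0)
```

Note that each new value depends only on known values at level n, which is precisely what makes the scheme explicit in the sense defined above.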

happen not to be the times at which a Runge-Kutta (RK) method needs to know them. Therefore, in the case of an RK method, the values of the approximations have to be interpolated with at least the accuracy one wishes to attain with the splitting, and this means a lot of additional computational effort. We can now summarize our results in Table 5.9, which shows which methods are practicable for each kind of splitting scheme.

The main disadvantage of fitting separate single-level quantile regressions is that the natural ordering among different quantiles cannot be ensured. Addressing this non-crossing issue, He [25] proposed a quantile regression method assuming the response variable to be heteroskedastic. Neocleous and Portnoy [33] proposed a method to estimate the quantile curve using linear interpolation from an estimated grid of quantile curves. Takeuchi [43] and Takeuchi et al. [44] proposed non-crossing quantile regression methods using support vector machines (SVMs) [46]. Later, Shim et al. [42] used a doubly penalized kernel machine (DPKM) for estimating non-crossing quantile curves.

functions of time t or constants. In the next section we present the Monte Carlo simulation, the method we will use for our experiments. In Section 3 we describe the numerical methods for SDEs: we present a stochastic Taylor expansion and obtain the Euler-Maruyama [11] and Milstein [12] methods from the truncated Ito-Taylor expansion. In Section 4 we consider a nonlinear SDE, which we solve and analyse using the two different methods, namely the EM and Milstein methods. We use MATLAB for our simulations and support our results with graphs and tables. The last section presents our conclusions and suggestions.
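
The two schemes named above differ only by one correction term. As a sketch (in Python rather than the paper's MATLAB), here are the Euler-Maruyama and Milstein steps for the scalar SDE dX = a(X) dt + b(X) dW, applied to geometric Brownian motion as a standard test equation; the parameter values are illustrative.

```python
# Euler-Maruyama vs. Milstein steps for geometric Brownian motion,
# a(x) = mu*x, b(x) = sigma*x (so b'(x) = sigma).
import random

def em_step(x, mu, sigma, dt, dw):
    """One Euler-Maruyama step."""
    return x + mu * x * dt + sigma * x * dw

def milstein_step(x, mu, sigma, dt, dw):
    """One Milstein step: EM plus the 0.5*b*b'*(dW^2 - dt) correction."""
    return em_step(x, mu, sigma, dt, dw) + 0.5 * sigma * sigma * x * (dw**2 - dt)

random.seed(0)
mu, sigma, dt, x_em, x_mil = 0.05, 0.2, 0.01, 1.0, 1.0
for _ in range(100):
    dw = random.gauss(0.0, dt ** 0.5)      # shared Brownian increment
    x_em = em_step(x_em, mu, sigma, dt, dw)
    x_mil = milstein_step(x_mil, mu, sigma, dt, dw)
```

Driving both schemes with the same Brownian increments, as done here, is the usual way to compare their strong convergence on a single path.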

This paper is intended to generalize and apply the classical direct methods in variational problems for differential forms, with the aim of enlarging the range of applications of the variational principle. Our work is motivated by Iwaniec and Lutoborski [], who first used differential forms to express variational problems.

The final distribution in Figure 4.9(a) reflects that the adaptive methods added more points where the function oscillates, due to the parameters in the analytic solution. The dense area in Figure 4.9(a) is divided into four quarters, which match Figure 4.9(b), in which the exact solution has high gradients in the same region. The plot of the absolute error in Figure 4.9(c) shows that the errors in the four quarters are equi-distributed and are relatively lower than the errors near the boundary. Figure 4.9(d) shows that the maximum error of the adaptive method converges towards the final step. We also tested a closely similar number of collocation points distributed uniformly at each step, since the uniform distribution requires a perfect-square number of points, whereas the adaptive method does not necessarily end with one. As Figure 4.9(d) shows, the final step of the uniform distribution could not reach the same accuracy as the adaptive method.

use of multi-reference stations. However, a network of GPS reference stations is not always available, and its implementation is costly. In addition, in certain cases (e.g., marine long-baseline applications), interpolated corrections are not reliable because the rover receiver is usually outside the network coverage area. A new approach based on a single-reference-station mode has been published by Kim and Langley (2007) that nullifies the effect of the differential ionosphere in an ambiguity search process; this method provides a number of interesting results.

This thesis has presented the design and practical demonstration of a flight control system capable of autonomously landing a fixed-wing UAV on a stationary platform, aided by a high-precision GPS. The project forms part of ongoing research in ATOL at Stellenbosch University, with the end goal of landing a fixed-wing UAV on a moving platform, for example on a ship’s deck. The selected airframe was equipped with the standard ESL avionics stack. The airframe’s aerodynamic coefficients were determined via AVL, and the equipped aircraft’s mass moments of inertia were obtained by the double pendulum method. The stall speed of the airframe was also determined in AVL and verified in practical flight tests. With a flight-ready aircraft, the scene was set for controller design and testing. The inner-loop and outer-loop controllers were designed based on the acceleration-based manoeuvre autopilot architecture [1]. In preparation for a practical flight test, the designed flight controllers were tested in SIL and HIL simulations to verify the performance of the FCS and to minimise risk. With an equipped airframe and practically flight-tested controllers, the scene was also set to test the developed landing state machine via virtual deck-landing tests. Then, with enough confidence gained in the FCS, Novatel’s high-precision GPS was integrated into the aircraft and the FCS OBC code to enable practical landings. Considerable time went into testing the FCS with the new DGPS to ensure the desired operation. Landing simulations were repeated in SIL and HIL to test the robustness of the FCS and the execution of the landing state machine, minimising risk before practical landing tests.
