Knowledge of position and orientation (pose) is a key ingredient in many applications, especially in the field of intelligent autonomous robots. The ability to acquire this information over time depends entirely on data coming from sensors. Sensor measurements provide the system's understanding of its state. However, merely gathering sensor data is not enough: the system must turn those data into a form that can be understood and acted on by the autonomous system. In general, estimating an accurate pose by inexpensive means is a basic goal of researchers. Because uncertainty in the sensors, the environment, and the robot itself is unavoidable, it is difficult to find an ideal approach for estimating a robot's position and orientation. For example, dead reckoning suffers from error that accumulates over time and from wheel slippage during movement. As another example, an Inertial Measurement Unit (IMU) can provide the position and attitude of the robot at a high rate; however, low-frequency noise and sensor biases are amplified by the integrative nature of the method, which leads to accumulated position and attitude errors.
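To make the accumulation effect concrete, the following sketch (a hypothetical illustration with assumed values, not data from any specific sensor) integrates a rate-gyro signal that carries a small constant bias. Even though the robot never turns, the integrated heading drifts away from the truth in proportion to elapsed time.

```python
# Hypothetical illustration of dead-reckoning drift: a constant gyro bias,
# once integrated, produces a heading error that grows linearly with time.
dt = 0.01          # sample period [s] (assumed)
bias = 0.001       # constant gyro bias [rad/s] (assumed)
true_rate = 0.0    # the robot actually holds its heading
steps = 60_000     # ten minutes of samples

heading_est = 0.0
for _ in range(steps):
    measured_rate = true_rate + bias   # biased measurement
    heading_est += measured_rate * dt  # dead-reckoning integration

# After 600 s the integrated heading error is bias * t = 0.6 rad (~34 deg),
# even though the true heading never changed.
print(heading_est)
```

Halving the bias halves the final error, but for any nonzero bias the error is unbounded as time goes on, which is why pure integration cannot be relied on alone.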
Currently, researchers are focusing more on sensor fusion. The goal of this approach is to combine data from two or more sensors so that the strengths of each sensor compensate for the weaknesses of the others. The fused estimate promises to be more consistent, more accurate, and more dependable than one obtained from a single data source.
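A minimal example of this idea is the complementary filter, shown below as a sketch with assumed toy data (it is not the method proposed in this work). A gyro integral is smooth but drifts with its bias, while an accelerometer tilt estimate is noisy but drift-free; blending the two with a gain `alpha` keeps the short-term smoothness of the gyro and the long-term stability of the accelerometer.

```python
import math

def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse a drifting gyro path with a noisy but unbiased tilt measurement.

    gyro_rates   -- angular-rate samples [rad/s] (low noise, biased)
    accel_angles -- absolute tilt from the accelerometer [rad] (noisy, unbiased)
    alpha        -- per-step trust in the gyro path (assumed gain)
    """
    angle = accel_angles[0]
    fused = []
    for rate, acc in zip(gyro_rates, accel_angles):
        # high-pass the gyro integral, low-pass the accelerometer tilt
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        fused.append(angle)
    return fused

# Toy data (assumed): the robot holds a tilt of 0.1 rad; the gyro reports
# pure bias, the accelerometer reports the true tilt plus sinusoidal noise.
dt = 0.01
n = 5000
gyro = [0.002] * n                                   # bias only, no motion
accel = [0.1 + 0.02 * math.sin(0.5 * k) for k in range(n)]
est = complementary_filter(gyro, accel, dt)
# The fused estimate stays close to the true 0.1 rad: the accelerometer
# bounds the gyro drift, while the gyro smooths the accelerometer noise.
```

Neither source alone achieves this: integrating the gyro diverges without bound, and the raw accelerometer fluctuates with every noise sample.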
Our goal is to produce a user-friendly, interactive, seamlessly navigating mobile robot that is able to accomplish tasks in a restaurant.