Week 6

Feb 20, 2018

Sensing and Processing

This week was very exciting, as we got our hands on the much-awaited Velodyne Lidar. We also got the ZED running with ROS and Rviz on the Jetson TX2. As of now, we plan to use the Velodyne to build a 3D map of our route and use that map both to determine the golf cart's position and to recognize foreign objects around it. We also plan to mount at least one 2D Lidar on the front of the vehicle, since we have decided that moving in reverse is not one of our priorities. This 2D Lidar will run in a continuous loop and handle close-proximity object detection, covering the Velodyne's front blind spot. The ZED stereo camera will be used primarily for lane line detection and, we hope, street sign recognition. We have not yet decided which method of GPS we will use, but we plan to combine position estimates from two sources, which gives us a more accurate sense of where we are in relation to our surroundings.
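As a rough illustration of what combining two position estimates could look like, here is a minimal sketch in Python; the coordinates, variances, and the inverse-variance weighting itself are placeholder assumptions, not our actual GPS setup:

# Minimal sketch: fuse two GPS fixes with an inverse-variance weighted average.
# The coordinates and variance values below are made-up examples.

def fuse_fixes(lat1, lon1, var1, lat2, lon2, var2):
    """Weighted average of two (lat, lon) fixes.

    Each fix is weighted by the inverse of its variance, so the more
    confident receiver pulls the combined estimate toward itself.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    lat = (w1 * lat1 + w2 * lat2) / (w1 + w2)
    lon = (w1 * lon1 + w2 * lon2) / (w1 + w2)
    return lat, lon

# Receiver A is noisier (variance 4.0) than receiver B (variance 1.0),
# so the fused fix lands closer to B's reading.
print(fuse_fixes(38.4330, -78.8610, 4.0, 38.4334, -78.8614, 1.0))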


Planning and Navigation

In order to communicate with the server, we are going to use MySQL to hold the information sent to and received from the front end app. This week was spent learning how to use MySQL and researching exactly how we can integrate it, code-wise, with the Apache server we previously created. We have begun converting the maps API code into a functioning ROS node, and there is now a function to export lists of points as JSON. We have also begun researching which code we can port from the Autoware project to our own. Finally, we have started researching non-holonomic configuration spaces, installing a Gazebo Velodyne sensor package, and figuring out how to alter the URDF file so that its parameters match the golf cart. We do not yet have all of the data on the golf cart needed to change these parameters; we will get this information from the motors team.
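For reference, the JSON export can be as simple as the sketch below; the (lat, lon) tuple format and output file name are assumptions for illustration, not the exact structure the maps API node uses:

# Sketch of exporting a list of route points as JSON.
# The (lat, lon) tuples and the output path are placeholder assumptions.
import json

def export_points(points, path="route_points.json"):
    """Write a list of (lat, lon) tuples to a JSON file."""
    data = [{"lat": lat, "lon": lon} for lat, lon in points]
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

export_points([(38.4330, -78.8610), (38.4334, -78.8614)])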


Motor Control

This week the motor control team successfully configured the linear actuator and its position readout, allowing us to identify the location of the linear actuator and stop it at any position. The team has continued to write code to further control the linear actuators based on what the planning team has sent. Lastly, we have started disassembling the top of the golf cart.
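In spirit, the stop-at-a-position logic amounts to a simple proportional loop like the sketch below; read_position, set_speed, and the gain, tolerance, and timing values are hypothetical stand-ins, not our actual actuator interface:

# Rough sketch of driving a linear actuator toward a target position.
# read_position() and set_speed() are hypothetical placeholders for
# whatever interface the actuator hardware actually exposes.
import time

def move_to(target, read_position, set_speed,
            gain=2.0, tolerance=0.5, period=0.05):
    """Proportional loop: drive toward target, stop once within tolerance."""
    while True:
        error = target - read_position()
        if abs(error) <= tolerance:
            set_speed(0)          # close enough: stop the actuator
            return
        set_speed(gain * error)   # command speed proportional to remaining error
        time.sleep(period)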


Front End Team:

Website:

We created a Squarespace account and have built the views for the site's functionality. An About Us page has also been added; it will have a picture of the group, a description of what we are doing, and a little bit about each team member, along with a "How You Can Help" section for future use.

App:

The logic is in place for the map function, but we are currently stuck on getting it to update when you select a location. The Sensor view is becoming more graphical in response to feedback from people we asked about the app. For functionality, a cancel button has been added to the goto function; this will allow the user to decide not to go somewhere if they change their mind. An About Us view has also been added to the app. This page will have a picture of the group, a description of what we are doing, and a little bit about each team member.

Database:

All of the initial values for the data set have been accumulated, and the CSV is ready for when the API is created.

We created the Add and Delete queries for the database, and we are almost finished getting all of the latitude/longitude coordinates into the CSV file that contains all of our data.
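To give a flavor of the Add and Delete queries, here is a minimal sketch using the mysql-connector Python package; the locations table, its columns, and the connection details are placeholders for illustration, not our actual schema or credentials:

# Sketch of Add/Delete queries against a hypothetical `locations` table.
# Table name, columns, and credentials are placeholder assumptions.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="cart", password="secret", database="golfcart"
)
cur = conn.cursor()

def add_location(name, lat, lon):
    """Insert one named location with its latitude/longitude."""
    cur.execute(
        "INSERT INTO locations (name, lat, lon) VALUES (%s, %s, %s)",
        (name, lat, lon),
    )
    conn.commit()

def delete_location(name):
    """Delete a location by name."""
    cur.execute("DELETE FROM locations WHERE name = %s", (name,))
    conn.commit()

add_location("Library", 38.4332, -78.8622)
delete_location("Library")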


