Detecting Road Lane Lines

This page describes how to build a lane-finding algorithm for a forward-facing camera using computer vision techniques with OpenCV: distortion correction, image rectification, colour transforms, Sobel edge detection and gradient thresholding, with lane curvature and vehicle displacement identified using polynomial mathematics. Below is a walkthrough of the key principles and techniques used in the project and its Python implementation. The information supplied is not exhaustive, and it is recommended to view the source code and comments on GitHub for a deeper understanding of the project. A link to the GitHub repository is here. A video of the project in action is available here.

The project can be combined with object detection; if you wish to learn more about how to perform object detection, a link is provided here.

 

This project page describes the following sections:

Image processing

Locating the road lines

Drawing a lane overlay

Image Processing

 

 

The first part of detecting lane markings is to process the camera images to eliminate changes in the road surface and glare from the sun or headlights, and to help see through the darkness of night or shadows cast on the surface of the road. This ensures that the road markings can be detected under different environmental conditions.

 

The process combines the RGB and HLS colour spaces with image thresholding, gradient thresholding and bitwise operations to generate a binary image of the road surface. Below is the code used to process the raw RGB input images.

 

import cv2
import numpy as np
import edge

 

# function that finds the road driving lane lines

# image : camera image in which the lane lines are to be located

# return : a masked image of only the lane lines

def driving_lane(self, image):

    # Convert to HLS colour space and separate the channels

    # hls for Sobel edge detection

    hls = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)

 

    # used on the lightness channel data for edges

    _, sxbinary = edge.threshold(hls[:, :, 1], thresh=(120, 255))

    sxbinary = edge.blur_gaussian(sxbinary, ksize=3)

    # find the edges in the channel data using sobel magnitudes

    sxbinary = edge.mag_thresh(sxbinary, sobel_kernel=3, thresh=(110, 255))

 

    s_channel = hls[:, :, 2] # use only the saturation channel data

    _, s_binary = edge.threshold(s_channel, (110, 255))

    _, r_thresh = edge.threshold(image[:, :, 2], thresh=(120, 255))

 

    rs_binary = cv2.bitwise_and(s_binary, r_thresh)

    return cv2.bitwise_or(rs_binary, sxbinary.astype(np.uint8))

 

The imported edge module contains simplified functions for performing various edge-detection operations on images. These functions are located in the "edge.py" file, available in the GitHub repository.
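The edge module itself is not reproduced on this page. As a rough guide only, a minimal sketch of the helpers called in this walkthrough might look like the following; the function names and signatures match the calls above, but the actual implementations in "edge.py" may differ.

import cv2
import numpy as np

def binary_array(array, thresh, value=255):
    # keep only the values that fall inside the threshold band
    binary = np.zeros_like(array, dtype=np.uint8)
    binary[(array >= thresh[0]) & (array <= thresh[1])] = value
    return binary

def threshold(channel, thresh=(128, 255), maxval=255):
    # plain binary threshold; returns (ret, image) like cv2.threshold
    return cv2.threshold(channel, thresh[0], maxval, cv2.THRESH_BINARY)

def blur_gaussian(channel, ksize=3):
    # Gaussian blur to suppress pixel-level noise before edge detection
    return cv2.GaussianBlur(channel, (ksize, ksize), 0)

def sobel(img_channel, orient='x', sobel_kernel=3):
    # first derivative in the chosen direction
    if orient == 'x':
        return cv2.Sobel(img_channel, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
    return cv2.Sobel(img_channel, cv2.CV_64F, 0, 1, ksize=sobel_kernel)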

The following code walkthrough will be tested on the following image.

 

 

 

The first part of the process is to convert the image colour space from RGB to HLS (Hue, Lightness, Saturation). Converting to HLS helps eliminate changes in the image due to lighting effects, as it breaks an image down into the colour used (Hue), that colour scaled between black and white (Lightness) and the intensity of the colour (Saturation). The following code converts the RGB image to the HLS colour space, with an example of the output following.

 

# hls for Sobel edge detection

hls = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)

 

 

Following this, Sobel edge detection is performed on the lightness channel of the HLS image. Sobel detects discontinuities in the data between neighbouring pixels. For detecting the road lines, the magnitude of the gradients in the X and Y directions is used.

 

# used on the lightness channel data for edges

_, sxbinary = edge.threshold(hls[:, :, 1], thresh=(120, 255))

sxbinary = edge.blur_gaussian(sxbinary, ksize=3)

# find the edges in the channel data using sobel magnitudes

sxbinary = edge.mag_thresh(sxbinary, sobel_kernel=3, thresh=(110, 255))

# calculates the magnitude of the Sobel edge gradients

# image : single-channel image data to search for edges

# sobel_kernel : size of the Sobel kernel

# thresh : (min, max) band of gradient magnitudes to keep

def mag_thresh(image, sobel_kernel=3, thresh=(0, 255)):

   

    # get the edges in the horizontal direction

    sobelx = np.absolute(sobel(image, orient='x', sobel_kernel=sobel_kernel))

    # get the edges in the vertical direction

    sobely = np.absolute(sobel(image, orient='y', sobel_kernel=sobel_kernel))

   

    # Calculate the edge magnitudes

    mag = np.sqrt(sobelx ** 2 + sobely ** 2)

    return binary_array(mag, thresh)

 

 

In addition to the Sobel edges, a binary threshold is also applied to the HLS saturation channel and to the R channel of the original RGB image. The saturation channel is best at separating out dull road colours, while the R channel is best at capturing high yellow and white values, as both colours contain a strong red component.

Note: OpenCV uses a BGR colour format instead of the standard RGB format.

s_channel = hls[:, :, 2] # use only the saturation channel data

_, s_binary = edge.threshold(s_channel, (110, 255))

_, r_thresh = edge.threshold(image[:, :, 2], thresh=(120, 255))

s_binary output data from the HLS saturation channel.

r_thresh output data from the R channel of an RGB image.

 

 

To locate the road lines, the s_binary and r_thresh images are bitwise ANDed together to remove image noise from shadows and variations in the road surface. The result is then combined with the Sobel edges to create the data containing the road markings.

 

rs_binary = cv2.bitwise_and(s_binary, r_thresh)

return cv2.bitwise_or(rs_binary, sxbinary.astype(np.uint8))

 

The final output data of the road lane markings is shown below.

 

 

 

To locate the road markings and correctly calculate the curvature of the lane and road, a perspective transform is performed. This also has the benefit of removing unneeded data outside the region where we expect the driving lane to be, such as other vehicles and trees.

 

To perform a perspective transformation, a region of interest (ROI) first needs to be located by calling an ROI function. This function requires three arguments to be set:

  • focal_point must be a tuple at the location of the road's vanishing point.

  • source_pts must be a list of tuples for the outer ROI points, i.e. the bottom-left and bottom-right corners of the image.

  • roi_height is the region-of-interest (ROI) cutoff height, measured from the source_pts towards the focal_point location as a fraction of the distance between them.
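The project's ROI function itself is not reproduced on this page (it is available in the GitHub repository). As a hedged sketch only, a function taking these three arguments could interpolate each source point towards the focal point and cut the region off at roi_height; the name calculate_roi is illustrative, not the project's:

# builds the trapezoidal ROI between the source points and the focal point
def calculate_roi(focal_point, source_pts, roi_height):
    fx, fy = focal_point
    (blx, bly), (brx, bry) = source_pts  # bottom-left, bottom-right corners
    # move each source point towards the vanishing point, stopping at the
    # roi_height fraction of the distance
    tl = (blx + (fx - blx) * roi_height, bly + (fy - bly) * roi_height)
    tr = (brx + (fx - brx) * roi_height, bry + (fy - bry) * roi_height)
    # clockwise order: bottom-left, top-left, top-right, bottom-right
    return np.float32([(blx, bly), tl, tr, (brx, bry)])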

 

An example of the ROI is below.

 

 

Once the ROI is located, a perspective transform is performed with the following code. 

# img : image to be transformed into the new perspective

# roi_pts : location points from the original image to be transformed.

# Points must be in a clockwise order.

# location_pts : the final location points in the image where the

# roi_pts will be located. If None is supplied, the new points

# will be the four corners of the supplied image in a

# clockwise order, starting at point (0,0).

# returns : the warped perspective image with the supplied points

def warp_image(self, img, roi_pts=None, location_pts=None, padding=(0,0)):

    if roi_pts is None:

        roi_pts = self.roi_pts

   

    if location_pts is None:

        location_pts = np.float32([[padding[0], self.h-padding[1]], # bot-left

                                   [padding[0], padding[1]], # top-left

                                   [self.w-padding[0], padding[1]], # top-right

                                   [self.w-padding[0], self.h-padding[1]]]) # bot-right

   

    # calculate the perspective transform matrix between the old and new points

    self.M = cv2.getPerspectiveTransform(roi_pts, location_pts)

   

    # Warp the image to the new perspective

    return cv2.warpPerspective(img, self.M, (self.w, self.h))

 

The roi_pts are the polygon corner location points, as observed in the ROI example above. The location_pts are the new locations of those points in the transformed image.
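As a usage sketch only, with illustrative corner values for a 1280x720 image rather than the project's calibrated points (lane is assumed to be an instance of the class containing warp_image, with self.w and self.h set to the image width and height):

# illustrative ROI corners (not the project's calibrated values),
# in clockwise order starting at the bottom-left
roi_pts = np.float32([[200, 720],    # bottom-left
                      [570, 460],    # top-left
                      [710, 460],    # top-right
                      [1080, 720]])  # bottom-right

# warp into a top-down view, keeping 100 pixels of horizontal padding
warped = lane.warp_image(binary_img, roi_pts=roi_pts, padding=(100, 0))

An example of the warped image can be seen below.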

 

 

Locating the Road Lines

 

 

To locate the lane lines in the warped image, a histogram and a sliding-window filter are applied; the window filter is explained later on. How the line polynomials are calculated depends on whether polynomials from a previous frame already exist. The code that determines which calculation to use is as follows.

# finds the location of the lane lines

# image : image to be searched for lines. Should be a warped image

# plot_line : True => plots the found lines onto the image for visualisation

# return : polynomial functions of the left and right lane lines

def lane_lines(self, image, plot_line=False):

    # is the input image a binary image or a multi-channel image

    if len(image.shape) > 2:

        # image has multiple channels. convert the image to a binary image

        # raise 'Lane.lane_lines input image needs to be a binary image'

        img = image[:,:,0]

        for channel in range(1, image.shape[2]):

            img = self.binary_image(img, image[:,:,channel])

    else:

        img = image.copy()

   

    # Does the program know where the lane lines are?

    if self.left_fit is None or self.right_fit is None:

        # Don't know where the lane lines are, so go and find them

        self.find_lane_lines(img, plot_line=plot_line, draw_square=plot_line)

    else:

        self.refresh_lane_lines(img, plot_line=plot_line)

   

    return self.left_fit, self.right_fit

 

As can be observed in the code above, if no polynomial function already exists for either the left or right lane marking, the system needs to search the warped image and locate the lines. This is done with the following code.

# finds the lane line locations within an image

# img : image to be searched for lines. Should be a warped image

# line_windows : how many windows are used in the search for the lane lines

# plot_line : True => plots the found lines onto the image for visualisation

# draw_square : draws the search box locations where the lane lines are found

# return : polynomial functions of the left and right lane lines

def find_lane_lines(self, img, line_windows=10, plot_line=False, draw_square=False):

    out_img = img.copy()

    # Set height of windows

    window_height = int(img.shape[0] / line_windows)

    # Identify the x and y positions of all nonzero pixels in the image

    nonzero = img.nonzero()

    nonzeroy = np.array(nonzero[0])

    nonzerox = np.array(nonzero[1])

    # Current positions to be updated for each window

    leftx, rightx = self.histogram_peaks(img)

    leftx_current = leftx

    rightx_current = rightx

    # Set the width of the windows +/- margin

    margin = 100

    # Set minimum number of pixels found to recenter window

    minpix = 50

    # Create empty lists to receive left and right lane pixel indices

    left_lane_inds = []

    right_lane_inds = []

 

    # Step through the windows one by one

    for window in range(line_windows):

        # Identify window boundaries in x and y (and right and left)

        win_y_low = img.shape[0] - (window + 1) * window_height

        win_y_high = img.shape[0] - window * window_height

        win_xleft_low = leftx_current - margin

        win_xleft_high = leftx_current + margin

        win_xright_low = rightx_current - margin

        win_xright_high = rightx_current + margin

        if draw_square:

            # Draw the windows on the visualization image

            cv2.rectangle(out_img, (win_xleft_low, win_y_low),
                          (win_xleft_high, win_y_high), (255, 255, 255), 2)

            cv2.rectangle(out_img, (win_xright_low, win_y_low),
                          (win_xright_high, win_y_high), (255, 255, 255), 2)

 

        # Identify the nonzero pixels in x and y within the window

        good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
                          (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]

        good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
                           (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]

 

        # Append these indices to the lists

        left_lane_inds.append(good_left_inds)

        right_lane_inds.append(good_right_inds)

 

        # If you found > minpix pixels, recenter next window on their mean position

        if len(good_left_inds) > minpix:

            leftx_current = int(np.mean(nonzerox[good_left_inds]))

        if len(good_right_inds) > minpix:

            rightx_current = int(np.mean(nonzerox[good_right_inds]))

 

    # Concatenate the arrays of indices

    left_lane_inds = np.concatenate(left_lane_inds)

    right_lane_inds = np.concatenate(right_lane_inds)

    # Extract left and right line pixel positions

    leftx = nonzerox[left_lane_inds]

    lefty = nonzeroy[left_lane_inds]

    rightx = nonzerox[right_lane_inds]

    righty = nonzeroy[right_lane_inds]

    # Fit a second order polynomial to each

    try:

        self.left_fit = np.polyfit(lefty, leftx, 2)

    except:

        self.left_fit = None

    try:

        self.right_fit = np.polyfit(righty, rightx, 2)

    except:

        self.right_fit = None

    if plot_line:

        # plot the line of best fit onto the image

        self.plot_best_fit(out_img, nonzerox, nonzeroy, left_lane_inds, right_lane_inds)

    return self.left_fit, self.right_fit

 

The code above can look daunting, but in short: it uses a histogram to locate the image peaks, then uses small windows to find where the highest concentration of points lies in the data, and uses their positions to calculate the polynomial function that represents each road lane line.

 

Now for an in-depth look at the code. When a binary image is supplied, the function tries to locate the positions of the highest concentration of pixels across the width of the image. This is achieved by finding the distribution of the data with a histogram.

# Current positions to be updated for each window

leftx, rightx = self.histogram_peaks(img)

leftx_current = leftx

rightx_current = rightx

 

As a histogram finds the distribution of data along an axis, it converts a 2D image into a 1D data array, with a loss of spatial information. That is, from the histogram peaks alone we do not know where the road line starts at the bottom of the image or where it ends at the top.
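The histogram_peaks helper is not shown above. A minimal sketch, assuming it column-sums the lower half of the binary image and takes the strongest column on each side of the image centre:

# finds the base x-positions of the left and right lane lines
def histogram_peaks(self, img):
    # sum the pixel values down each column of the lower half of the
    # image, where the lane lines are closest to the camera
    histogram = np.sum(img[img.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    # the strongest column on each side of the centre marks a line base
    leftx = np.argmax(histogram[:midpoint])
    rightx = np.argmax(histogram[midpoint:]) + midpoint
    return leftx, rightx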

 

To get around this, we can create small windows around the left and right peaks, and adjust the position of each window to follow the line from the bottom to the top of the image.

# Set the width of the windows +/- margin

margin = 100

   

# Step through the windows one by one

for window in range(line_windows): # default line_windows is 10

    # Identify window boundaries in x and y (and right and left)

    win_y_low = img.shape[0] - (window + 1) * window_height

    win_y_high = img.shape[0] - window * window_height

    win_xleft_low = leftx_current - margin

    win_xleft_high = leftx_current + margin

    win_xright_low = rightx_current - margin

    win_xright_high = rightx_current + margin

 

In both the left and right windows, the x and y coordinates of all the non-zero points within the window are located and stored.

 

# Identify the nonzero pixels in x and y within the window

good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >=                      win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]

good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >=                      win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]

 

# Append these indices to the lists

left_lane_inds.append(good_left_inds)

right_lane_inds.append(good_right_inds)

 

If the total number of points in a window exceeds the desired quantity, that window is considered to contain a road marking, and the mean location of its points is used to recentre the window for the next loop iteration.

# Set minimum number of pixels found to recenter window

minpix = 50

 

# If you found > minpix pixels, recenter next window on their mean position

if len(good_left_inds) > minpix:

    leftx_current = int(np.mean(nonzerox[good_left_inds]))

if len(good_right_inds) > minpix:

    rightx_current = int(np.mean(nonzerox[good_right_inds]))

Once all the points have been located, a 2nd-order polynomial is fitted to their positional coordinates. This polynomial then represents the road lane marking.

# Extract left and right line pixel positions

leftx = nonzerox[left_lane_inds]

lefty = nonzeroy[left_lane_inds]

rightx = nonzerox[right_lane_inds]

righty = nonzeroy[right_lane_inds]

# Fit a second order polynomial to each

try:

    self.left_fit = np.polyfit(lefty, leftx, 2)

except:

    self.left_fit = None

try:

    self.right_fit = np.polyfit(righty, rightx, 2)

except:

    self.right_fit = None
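The fit has the form x = Ay² + By + C, i.e. x as a function of y, because the warped lane lines are close to vertical. To draw a fitted line, the polynomial is evaluated at every image row. A short sketch follows; the names y_pts, left_pts and right_pts mirror the attributes used by the overlay code later, but are assumptions here.

# evaluate x = A*y^2 + B*y + C at every image row so the lines can be drawn
y_pts = np.linspace(0, img.shape[0] - 1, img.shape[0])
left_pts = self.left_fit[0] * y_pts ** 2 + self.left_fit[1] * y_pts + self.left_fit[2]
right_pts = self.right_fit[0] * y_pts ** 2 + self.right_fit[1] * y_pts + self.right_fit[2]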

 

 

The second option for calculating the polynomial is to use the previously calculated position and update it with the latest data. The code to accomplish this follows.

 

# uses the previous lane line locations to refresh the current lane line location

# img : image to be searched for lines. Should be a warped image

# plot_line : True => plots the found lines onto the image for visualisation

# return : quadratic functions of the left and right lane lines

def refresh_lane_lines(self, img, plot_line=False):

    nonzero = img.nonzero()

    nonzeroy = np.array(nonzero[0])

    nonzerox = np.array(nonzero[1])

    margin = 100

    left_lane_inds = (

        (nonzerox > (self.left_fit[0] * (nonzeroy ** 2) + self.left_fit[1] * nonzeroy +
                     self.left_fit[2] - margin))
        & (nonzerox < (self.left_fit[0] * (nonzeroy ** 2) + self.left_fit[1] * nonzeroy +
                       self.left_fit[2] + margin)))

    right_lane_inds = (

        (nonzerox > (self.right_fit[0] * (nonzeroy ** 2) + self.right_fit[1] * nonzeroy +
                     self.right_fit[2] - margin))
        & (nonzerox < (self.right_fit[0] * (nonzeroy ** 2) + self.right_fit[1] * nonzeroy +
                       self.right_fit[2] + margin)))

    # Again, extract left and right line pixel positions

    leftx = nonzerox[left_lane_inds]

    lefty = nonzeroy[left_lane_inds]

    rightx = nonzerox[right_lane_inds]

    righty = nonzeroy[right_lane_inds]

 

    # Fit a second order polynomial to each

    self.left_fit = np.polyfit(lefty, leftx, 2)

    self.right_fit = np.polyfit(righty, rightx, 2)

    if plot_line:

        # plot the line of best fit onto the image

        self.plot_best_fit(img, nonzerox, nonzeroy, left_lane_inds, right_lane_inds)

    return self.left_fit, self.right_fit

 

Refreshing the polynomial is similar to the finding method mentioned earlier, with one distinct difference: instead of sliding windows, it searches within a margin around the previous polynomial.

 

 

Using either method with dynamic noise, such as bumpy roads, can cause the detected lines to fluctuate between frames. To minimise this effect, an averaging filter is employed for both the left and right polynomials. The filter averages the polynomial coefficients over the past 32 calculations to reduce noise.

# calculates the moving average of the filter and keeps track

# of the time series data

# data : new data to be added to the queue to be filtered

# return : the filtered average for the filter, -1 if error

def moving_average(self, data):

    self.queue.appendleft(data)

    queue_length = len(self.queue)

    try:

        # find the moving average

        average = sum(self.queue) / queue_length

    except:

        average = -1

   

    if queue_length >= self.max_length:

        self.queue.pop()

   

    return average
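The queue and max_length attributes are initialised elsewhere in the project. A minimal sketch of that setup using collections.deque follows; the class name MovingAverage is hypothetical, and the 32-sample window matches the description above.

from collections import deque

class MovingAverage:
    def __init__(self, max_length=32):
        self.queue = deque()          # newest sample sits at the left
        self.max_length = max_length  # number of samples to average over
    # moving_average(self, data), as defined above, would be a method here

# one filter per lane line; numpy coefficient arrays average element-wise
left_filter = MovingAverage(max_length=32)
smoothed_left_fit = left_filter.moving_average(left_fit)  # left_fit from np.polyfit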

Below is an output graph of the filtered lane lines. The right lane line is shown in blue, the left lane line in orange, and their average is displayed in green.

 

Overlaying this onto the warped image, we get the following.

Note: Due to the filter, the polynomials may not always overlap the observed lines.

 

 

Drawing a Lane Overlay

 

 

Once we have located and calculated the lane line polynomials, we can project them back onto the surface of the road to check that the measurements and calculations are accurate. This is done with the following code.

 

# combines the found road lane with the camera image

# img : original camera image

# color : overlay color

# overlay_weight : weight factor of transparency of the overlay

# return : image with the overlaid lane

def overlay_lane(self, img, color=(0,255,0), overlay_weight=0.3):

    # Create an image to draw the lines on

    color_warp = np.zeros_like(img).astype(np.uint8)

    # Recast the x and y points into usable format for cv2.fillPoly()

    pts_left = np.array([np.transpose(np.vstack([self.left_pts, self.y_pts]))])

    pts_right = np.array([np.flipud(np.transpose(np.vstack([self.right_pts, self.y_pts])))])

    pts = np.hstack((pts_left, pts_right))

 

    # Draw the lane onto the warped blank image

    cv2.fillPoly(color_warp, np.int_([pts]), color)

 

    # Warp the blank back to original image space using inverse perspective matrix

    newwarp = self.inverse_warp_image(color_warp)

 

    # Combine the result with the original image

    return cv2.addWeighted(img, 1, newwarp, overlay_weight, 0)

 

The function creates a blank image onto which it draws a polygon whose boundaries are calculated from the filtered polynomials. This new image is in the same perspective as the warped image used to locate the lines, so a reverse perspective warp is required to bring it back into the original image perspective. Finally, all we need to do is combine the overlay image with the original image.
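The inverse_warp_image function is not shown on this page. A plausible minimal version, assuming warp_image has already stored the forward transform matrix in self.M, simply inverts that matrix:

# warps an overlay drawn in the top-down perspective back into the
# original camera perspective by inverting the forward transform matrix
def inverse_warp_image(self, img):
    Minv = np.linalg.inv(self.M)
    return cv2.warpPerspective(img, Minv, (self.w, self.h))

Combining the overlay with the original camera image produces the following image.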

 

 

 

That is it. These are the underlying concepts behind locating road surface lines for lane departure warning or self-driving cars. The complete project code can be viewed and downloaded from GitHub here.

 

A video of the lane detection system with object detection can be viewed below.

 

 


 

 
