Inferring Driving Behavior using a Smartphone
We consider the problem of inferring driving behavior. Previous work in this field has required dedicated sensors on vehicles or on the roadside, or the tracking of mobile phones by service providers. A driving behavior inference system built on smartphone sensors such as the gyroscope, accelerometer and GPS can determine the driver's speed and acceleration profiles as well as recognize driving events. Driving events, such as turns (left, right and U-turns) and overtakes, are well defined and characterized by signals like the z-component of the gyroscope and the vehicle speed, and can therefore be detected with supervised learning algorithms. This information can be used to monitor and improve driving efficiency through a voice-enabled smartphone app. Event recognition rates of 70.59%, 65.77%, 69% and 76% were achieved for left turns, right turns, U-turns and overtakes respectively. Our system differs from past driving pattern recognition work in that we fuse data from multiple sensors into a single classifier. Additionally, the driver's speed and acceleration profiles are determined to aid the feedback process. The system is a completely mobile, effective and inexpensive way to detect and recognize driving events, and can be distributed to a wide audience owing to the ubiquitous nature of smartphones.

This work was presented as a poster at Grace Hopper Celebration India 2014.
Sensor Data Collection
The latest mobile phones are equipped with many useful inputs for research, including:
• Camera (primary & secondary)
• Microphone
• 3-axis Accelerometer
• 3-axis Gyroscope
• Proximity
• Magnetometer
• GPS
to name just a few. These devices are powerful, inexpensive and versatile research platforms that make instrumenting a vehicle for data collection accessible to the general public as well as to academia.
All of the above smartphone sensor readings were logged at 1 Hz, and driving events such as left turns, right turns, U-turns and overtakes were manually labeled and saved in persistent storage for further processing. Because the events were manually labeled, supervised learning algorithms can be applied and their results verified, making good event recognition rates achievable.
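As a concrete illustration of the offline step, the sketch below loads such a labeled 1 Hz log into Python. The file name and column names (gyro_z, gps_speed, label) are assumptions for illustration; the actual log format produced by the collection app may differ.

    import pandas as pd

    def load_labeled_log(path):
        """Load a 1 Hz sensor log with manually labeled driving events."""
        df = pd.read_csv(path)
        # Columns assumed for illustration: gyro_z, gps_speed and a label column
        # holding "left", "right", "u_turn", "overtake" or "normal".
        features = df[["gyro_z", "gps_speed"]]
        labels = df["label"]
        return features, labels

    # Hypothetical file name for the logged drive.
    features, labels = load_labeled_log("drive_log.csv")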
Feature Identification
The coordinate system used by the Android Sensor API for smartphones is shown in Fig. 3.1; we follow the same conventions.

When a turn is taken, the phone tends to rotate with the vehicle, so the rate of rotation around the axis perpendicular to the horizontal plane (the z-axis in our case) shows a peak. For a left turn, the rotation is anti-clockwise and the sensor shows a maximum at the point of the turn, as depicted in Figure 3.2a. Similarly, for a right turn the rotation is clockwise and the sensor shows a minimum at the point of the turn, as shown in Figure 3.2b. An overtake is identified by a smaller z-gyroscope signal corresponding to the two lane changes, shown in Figure 3.3a, and is also characterized by an initial increase in speed, shown in Figure 3.3b. Thus, the z-gyroscope together with GPS speed defines an overtake.
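The peak/valley behaviour described above suggests a simple rule for sanity-checking the feature. The snippet below is only a sketch, not the classifier used in this work; the ±0.5 rad/s thresholds are assumed values for illustration.

    import numpy as np

    def label_turn_window(gyro_z, pos_thresh=0.5, neg_thresh=-0.5):
        """gyro_z: 1-D array of z-axis gyroscope readings (rad/s) for one window."""
        gyro_z = np.asarray(gyro_z, dtype=float)
        if gyro_z.max() > pos_thresh:    # sustained anti-clockwise rotation -> left turn
            return "left"
        if gyro_z.min() < neg_thresh:    # sustained clockwise rotation -> right turn
            return "right"
        return "normal"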


Event Detection using Supervised Methods
Speed and acceleration profiles of drivers are identified from the GPS speed and accelerometer values, using map-reduce operations over the dataset. For left, right and U-turns, we identified the z-component of the gyroscope as the defining feature; as shown in Fig. 3.2a, this was verified by the patterns observed when plotting gyroscope-z against sample number (time-sequenced) for each type of turn. The sensor data was smoothed by binning to reduce noise and then normalized, since SVMs are sensitive to feature scale. SVM classifiers were trained to distinguish these events (classes) from normal driving, using the scikit-learn library for Python, and the SVM hyperparameters were tuned to achieve higher accuracies. As an alternative to the supervised learning described above, the characteristic signals of the different driving events could be used for online detection (i.e. on the smartphone itself) with CPU-light operations: an application that detects the gyroscope-z pattern within a moving window can determine turns efficiently without the overhead of SVM training.
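A minimal sketch of this supervised step is shown below, assuming each training sample is a fixed-length window of gyroscope-z readings (with GPS speed appended for overtake detection). The bin size, kernel and C value are illustrative defaults, not the tuned parameters from this work.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def smooth_by_binning(signal, bin_size=5):
        """Average consecutive samples within each bin to reduce sensor noise."""
        signal = np.asarray(signal, dtype=float)
        n = len(signal) // bin_size * bin_size
        return signal[:n].reshape(-1, bin_size).mean(axis=1)

    def train_event_classifier(windows, labels, bin_size=5):
        """windows: equal-length sensor windows; labels: event name per window."""
        X = np.array([smooth_by_binning(w, bin_size) for w in windows])
        # Normalization matters because SVMs are not scale-invariant.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        return clf.fit(X, labels)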
The identified speed profile provides a good understanding of the user's driving style. The aggregate speed and acceleration profiles of all users can be used to give effective feedback to the driver through a voice-enabled smartphone app; for example, the app might say, "The typical driving speed for this non-rush hour is 40 kmph; you are driving at a slower pace." The accompanying figure shows the amount of time a user spent in different speed buckets.
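One plausible way to compute such a speed-bucket summary from the 1 Hz log (so each sample counts as one second of driving) is sketched below; the bucket edges in km/h are assumed for illustration.

    import pandas as pd

    def time_in_speed_buckets(gps_speed_kmph):
        """Return seconds spent in each speed bucket for a 1 Hz GPS speed series."""
        buckets = pd.cut(pd.Series(gps_speed_kmph),
                         bins=[0, 20, 40, 60, 80, 200],
                         labels=["0-20", "20-40", "40-60", "60-80", "80+"])
        return buckets.value_counts().sort_index()

    # Example: time_in_speed_buckets(features["gps_speed"])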

The confusion matrices for all driving events are shown below and are used to compute the event recognition rates. For overtake detection, fusing the gyroscope and speed signals improved the accuracy from 55.5% to 68.53%.
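For reference, per-event recognition rates can be read off a confusion matrix as the recall of each class. The sketch below assumes held-out true labels and classifier predictions; it is not the exact evaluation script used for the poster.

    from sklearn.metrics import confusion_matrix, recall_score

    def recognition_rates(y_true, y_pred,
                          events=("left", "right", "u_turn", "overtake")):
        """Confusion matrix and per-event recall over the listed driving events."""
        cm = confusion_matrix(y_true, y_pred, labels=list(events))
        rates = recall_score(y_true, y_pred, labels=list(events), average=None)
        return cm, dict(zip(events, rates))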
