Recent advances in computer vision and machine learning have made it possible to classify traffic characteristics from camera image data in real time. Information extracted from traffic images can supplement data from other sensors such as loop detectors, Bluetooth and WiFi sensors, and Dedicated Short-Range Communications (DSRC) roadside units. In this paper, we propose a method for near real-time estimation of traffic state variables, such as volume and speed, at locations where traffic cameras already exist. The proposed system allows municipalities and provinces to extend the utility of their existing camera systems with minimal additional resources. Specifically, we explore the application of convolutional neural networks (CNNs) to traffic image processing. We use existing loop detector data from Toronto highways as the ground truth to train and test the CNN to infer the macroscopic traffic flow characteristics of speed and flow from still images. The images were first preprocessed using temporal median stacking and image subtraction to identify vehicles in each lane. The model was then trained, using ground truth data from loop detectors, to estimate traffic speed and volume directly from the images for all vehicle types. The proposed model generates promising results, with up to 88.6 percent accuracy, depending on the bin size.
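
For illustration, a minimal sketch of the temporal median stacking and image subtraction step is shown below in Python with OpenCV and NumPy. This is not the paper's exact pipeline; the frame paths, stack size, and binarization threshold are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's exact implementation) of
# temporal median stacking followed by image subtraction to highlight
# moving vehicles in fixed-camera traffic images.
import glob

import cv2
import numpy as np

# Load a short stack of consecutive grayscale frames from one camera
# (the directory name and stack size of 30 are assumptions).
frame_paths = sorted(glob.glob("camera_frames/*.jpg"))[:30]
frames = np.stack(
    [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in frame_paths]
)

# Temporal median stacking: the per-pixel median over time approximates
# the static background, since a moving vehicle occupies any given
# pixel for only a small fraction of the frames.
background = np.median(frames, axis=0).astype(np.uint8)

# Image subtraction: the absolute difference between the current frame
# and the background retains mainly the moving vehicles.
current = frames[-1]
foreground = cv2.absdiff(current, background)

# Simple thresholding (threshold value is an assumption) yields a
# binary mask of candidate vehicle pixels, which downstream steps
# could then associate with individual lanes.
_, vehicle_mask = cv2.threshold(foreground, 30, 255, cv2.THRESH_BINARY)
cv2.imwrite("vehicle_mask.png", vehicle_mask)
```

The median-based background is attractive for fixed traffic cameras because it requires no manual background selection and remains robust as long as each pixel shows the road surface in the majority of the stacked frames.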