Ask a smart home device for the weather forecast, and it takes several seconds to respond. One reason for this latency is that connected devices don’t have enough memory or processing power to store and run the enormous machine-learning models needed to understand what a user is asking of them. The model is stored in a data center that may be hundreds of miles away, where the answer is computed and sent back to the device.
MIT researchers have created a new method for computing directly on these devices that drastically reduces this latency. Their technique shifts the memory-intensive steps of running a machine-learning model to a central server where components of the model are encoded onto light waves.
The waves are transmitted to a connected device using fiber optics, which enables tons of data to be sent lightning-fast through a network. The receiver then employs a simple optical device that rapidly performs computations using the parts of a model carried by those light waves.
This technique leads to more than a hundredfold improvement in energy efficiency compared to other methods. It could also improve security, since a user’s data don’t need to be transferred to a central location for computation.
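The division of labor described above can be sketched numerically. In this illustrative toy (all names are hypothetical, and an ordinary matrix-vector product stands in for the optical computation), a server streams the model's weights to an edge device a few rows at a time, and the device combines each arriving chunk with its local input, so the user's data never leaves the device:

```python
def stream_weights(weights, chunk_rows=2):
    """Server side: yield the weight matrix a few rows at a time,
    much as the light waves carry slices of the model."""
    for i in range(0, len(weights), chunk_rows):
        yield weights[i:i + chunk_rows]

def edge_inference(weight_stream, x):
    """Device side: multiply each incoming weight chunk with the
    local input vector x. The raw input stays on the device."""
    out = []
    for chunk in weight_stream:
        for row in chunk:
            out.append(sum(w * v for w, v in zip(row, x)))
    return out

# Toy 4x3 weight matrix held by the server, and a length-3
# input vector held by the device.
W = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1],
     [1, 1, 1]]
x = [2.0, 3.0, 4.0]

y = edge_inference(stream_weights(W), x)
print(y)  # [2.0, 3.0, 4.0, 9.0]
```

The sketch captures only the architecture, not the physics: in the researchers' setup the per-chunk multiply is performed optically by a simple receiver, which is what makes the device-side step fast and energy-efficient.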
…