My mom found this final report from CS411, and I noticed my lab-mate Dan Lenoski's name on it. I had not seen him in 29 years, and we reconnected this morning for breakfast. Turns out we are neighbors (I can see his house when I look out my front window), and the “small world” coincidences grew from there.

We were both grad student Research Assistants on Prof. John Hennessy’s DASH team at Stanford. I was fascinated by neural networks and wanted to study what we now call model and data parallelism, the two orthogonal ways to exploit parallelism in the algorithm. We only had a 16-processor machine at the time (an Encore Multimax), but we also did simulation work scaling up to 100 processors. Below are some of the pages from our final report.
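To make the distinction concrete, here is a minimal sketch of the two partitionings for a single fully connected layer, y = f(Wx). This is my own illustration, not code from the 1989 report; the processor count, layer sizes, and use of NumPy are arbitrary assumptions for the example. Data parallelism gives every worker the full weights but only a slice of the mini-batch; model parallelism gives every worker the full mini-batch but only a slice of the weights (a subset of the output neurons).

```python
# Illustrative sketch of data vs. model parallelism for one layer, y = f(W x).
# Hypothetical sizes; the "workers" are just loop iterations here.
import numpy as np

rng = np.random.default_rng(0)
P = 4                                   # hypothetical number of processors
n_in, n_out, batch = 8, 6, 12

W = rng.standard_normal((n_out, n_in))  # layer weights
X = rng.standard_normal((n_in, batch))  # a mini-batch of inputs

def act(z):
    return np.tanh(z)                   # any elementwise nonlinearity

# Data parallelism: each worker holds all of W but only a slice of the batch.
data_shards = np.array_split(X, P, axis=1)
Y_data = np.concatenate([act(W @ shard) for shard in data_shards], axis=1)

# Model parallelism: each worker holds a slice of W (some output neurons)
# and sees the full batch.
model_shards = np.array_split(W, P, axis=0)
Y_model = np.concatenate([act(shard @ X) for shard in model_shards], axis=0)

# Both partitionings reproduce the single-processor result.
assert np.allclose(Y_data, act(W @ X))
assert np.allclose(Y_model, act(W @ X))
```

The two schemes are orthogonal in the sense that they can be combined: partition the batch across groups of processors, and the weights within each group.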

Pretty amazing to see the rebirth of these neural networks as deep learning over the past 5 years.

I found Dan on LinkedIn and through his DASH team paper on their shared-memory multiprocessor (I left in 1990).

One response to “TBT: Our Machine Learning Project from 1989”

  1. We wrote: "Neural networks are brain-inspired models that are adapted to run on uniprocessor machines; as with the brain, their true potential may only be realized in a massively parallel application." We measured a 9x speedup with 10 processors and simulated a 72x speedup with 96 processors (roughly 90% and 75% parallel efficiency, respectively).

    Here is my textbook from those EE PhD days, referenced in the first sentence: The Path to Nervana, my PDP textbook from 1987. The future will now be accelerated.
