Could Bitcoin save Moore's Law?

I originally posted the following on my Google+ account in Jan 2014. My semiconductor friends thought it was a little off the wall. But I posted it anyway. 

Moore's Law is at risk, not because of the physics but because of the economics, as discussed on SemiWiki and elsewhere.

Bitcoin mining is taking off. In 2013, TSMC, AMD and others saw $200M in sales for bitcoin-related parts (from basically zero a year or two before). Link.

The Bitcoin network difficulty is growing exponentially; over the last several months it has doubled roughly every month. Link.

There are many arguments for why Bitcoin could become the underpinning of the whole global economy, or even just the internet economy. For example, ask Marc Andreessen [link]. Of course, it might not. But it might!

Given these points, Bitcoin might just become the biggest driver of semiconductor revenue. And since Moore's Law is most at risk for economic reasons, Bitcoin might just become the new driver of Moore's Law... 

What's cool is that it really could be happening: bitcoin mining company KnCMiner is one of the very first companies to tape out on TSMC's 16nm process node. Here. (TSMC manufactures about half of the world's foundry-made silicon, and 16nm is its newest, smallest node.) This upstart bitcoin (!) company beat Apple, Qualcomm, Nvidia and almost everybody else to the punch.

What gives? If you think about it, it's fully rational. Since they're building money-printing machines (more BTC every 10 minutes, whenever they win the lottery), they can calculate precisely how much money they expect to make based on how many Ghash/s they can run. Maximize the hash rate, minimize the power costs, and the difference is profit. Marcus Erlandsson, the CTO of KnCMiner, confirmed this when I chatted with him recently. Cool!
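That expected-profit calculation is easy to sketch. Here's a toy back-of-the-envelope version; all the numbers (hash rates, BTC price, power draw, electricity cost) are made-up illustrative assumptions, not KnCMiner's actual figures:

```python
# Hypothetical mining profitability estimate. All inputs are illustrative
# assumptions; this ignores pool fees, hardware cost, and difficulty growth.

def expected_daily_profit(hashrate_ths, network_hashrate_ths,
                          btc_price_usd, power_kw, electricity_usd_per_kwh):
    """Expected daily profit in USD for a miner of the given size."""
    BLOCKS_PER_DAY = 144       # one block every ~10 minutes
    BLOCK_REWARD_BTC = 25.0    # block reward circa 2014

    # Your share of each block "lottery" is your share of total hashpower.
    share = hashrate_ths / network_hashrate_ths
    revenue = share * BLOCKS_PER_DAY * BLOCK_REWARD_BTC * btc_price_usd
    cost = power_kw * 24 * electricity_usd_per_kwh
    return revenue - cost

# e.g. a hypothetical 100 TH/s rig against a 10,000 TH/s network,
# $800/BTC, 65 kW draw, $0.10/kWh
profit = expected_daily_profit(100, 10_000, 800.0, 65.0, 0.10)
```

The same formula, run forward under an assumed difficulty-growth curve, is how a miner decides whether a chip is worth taping out.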


Active Analytics, and Auto vs. Manual Design

In 2014, "Predictive Analytics" hit the mainstream. Many people got very excited about the idea that you could take a pinch of "big data" or "data mining", add in a dash of "visualization", and get "business value". I agree that it's valuable; I only use the air quotes because it was framed as something novel. This stuff has been going on for decades (though, to be fair, for much of that time it was with smaller datasets). For example, go to the appendix of Friedman's famous 1991 MARS paper and you'll find data mining + visualization for new insights. And then there's statistics + Tufte-style visualization. Then you have the likes of Spotfire and Tableau. We'd been doing this sort of thing at Solido since 2004, and at ADA before that, to help designers get insight into designing computer chips. My PhD included "knowledge extraction." It's great to see this tech starting to hit the mainstream - it's incredibly useful.

What's cool is that there's state of the art beyond predictive analytics. It's basically about closing the loop rather than working with a static dataset: get some data, do some analysis, then (automatically) find new data and repeat. The "find new data" part can be active, i.e. you can choose which sample to take next. You could also think of it as classic optimization, but with a visual element. I call it "Active Predictive Analytics", or "Active Analytics" for short. We've been doing this with a new tool at Solido, and designers really like it as a new style of design tool. It turns out to address auto vs. manual design too.

There's been a long-running debate over whether automatic or manual design is better, and both sides have really great arguments. But what if you could get the best of both worlds and reconcile manual vs. automatic design? That's what the tool turns out to do: if you want to design fully manually, i.e. you pull the design, you can. If you want fully automatic, i.e. the tool pushes the design, you can. But the cool thing is that it allows the shades of gray in between: it gives insight into what designs and design regions might be good, and you can easily pull the design with a visual editor. Call it supercharged manual design, if you will. I'm quite excited about this because it has applications far beyond circuits, for everything from deep learning to business intelligence to website optimization (an evolution from A/B testing to multi-armed bandits to this).
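The A/B-testing-to-bandit step in that evolution is worth making concrete. A minimal epsilon-greedy bandit sketch (conversion rates here are made-up numbers): instead of splitting traffic 50/50 for the whole test, it shifts traffic toward the better variant as evidence accumulates.

```python
import random

# Epsilon-greedy multi-armed bandit over two website variants.
# true_rates are hypothetical conversion rates, unknown to the algorithm.

random.seed(1)
true_rates = [0.05, 0.11]    # variant A converts at 5%, variant B at 11%
counts = [0, 0]              # times each variant was shown
successes = [0, 0]           # conversions observed per variant
EPSILON = 0.1                # fraction of traffic reserved for exploration

for visitor in range(5000):
    if random.random() < EPSILON or 0 in counts:
        arm = random.randrange(2)                                   # explore
    else:
        arm = max((0, 1), key=lambda a: successes[a] / counts[a])   # exploit
    counts[arm] += 1
    successes[arm] += random.random() < true_rates[arm]             # simulate a visit
```

By the end, most traffic has been routed to the better variant, while an A/B test would have spent half of it on the worse one.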

I gave an invited talk on this at the Berlin Machine Learning group in May 2014. Slides are here.


The Ultimate Bootstrap: AI & Moore's Law

People talk about a Moore's Law for gene sequencing, a Moore's Law for software, etc. But what about the Moore's Law? Transistors keep getting exponentially smaller. It's the bull that the other "Laws" ride, a "Silicon Midas Touch": once a technology gets touched by the silicon Moore's Law, that technology goes exponential. Moore's Law is a technology backbone that is driving humanity. I love that! It's a driving reason why I've spent 15+ years of my life in semiconductors, to help drive Moore's Law. I've co-created software enabling chip design on bleeding-edge process nodes. 

What's cool: it's AI-based software, which runs on the most advanced microprocessors. To design the next generation of microprocessors. For that smartphone in your pocket, for the servers powering Google, and for the companies designing the next gen of chips. Put another way: the computation drives new chip designs, and those new chip designs are used for new computations, ... ad infinitum. It's the ultimate bootstrap of silicon brains. The only thing clocking it is manufacturing speed.

I've given a couple talks about this. Here's one from 2013 I gave at a singularity meetup. And here's one I gave as an invited talk to the PyData Berlin conference (and the video too).


Predicting Black Swans for Fun and Profit

I've always been a big fan of Nassim Nicholas Taleb's writing, though not always of his conclusions. In "The Black Swan: The Impact of the Highly Improbable" he describes "black swan" events, which have extremely low probability but huge impact when they do happen. Partway through, he argues that they're so hard to predict that you shouldn't even bother; instead, protect yourself against the downside (if it's a negative event) or make sure you're exposed to the upside (if it's a positive one). I disagree: just because something's hard doesn't mean it's impossible. It's just a challenge! And it's worth going for if the upside to prediction is high.

Case in point: designing memory chips where the chance of failure is 1 in a billion or so. The Sonys and TSMCs of the world have huge motivation to estimate that value quite precisely. What's cool: they can now estimate these "black swans" with good confidence (using tech I helped develop), and they're very happy about it. It was hard, but not impossible!
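To see why this is hard but not impossible, here's a textbook importance-sampling sketch for a rare-event probability of roughly the same magnitude - NOT the actual algorithm used in industry, just the flavor of the problem. "Failure" here is a standard normal variable exceeding 6 sigma, whose true probability is about 9.9e-10:

```python
import math
import random

def normal_pdf(x, mu=0.0):
    """Standard-width normal density centered at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

THRESHOLD = 6.0    # "failure" = sample exceeds 6 sigma, P ~ 1e-9
N = 100_000

# Naive Monte Carlo: with only 1e5 samples we essentially never see a failure,
# so the estimate is uselessly zero.
random.seed(0)
naive = sum(random.gauss(0, 1) > THRESHOLD for _ in range(N)) / N

# Importance sampling: draw from a proposal centered on the failure region,
# and reweight each failing sample by the likelihood ratio p(x)/q(x).
random.seed(0)
total = 0.0
for _ in range(N):
    x = random.gauss(THRESHOLD, 1.0)        # proposal q, centered at 6 sigma
    if x > THRESHOLD:
        total += normal_pdf(x) / normal_pdf(x, mu=THRESHOLD)
estimate = total / N
```

With the same sample budget, the naive estimator sees nothing while the reweighted one lands within a few percent of 9.87e-10. Real memory-chip failure regions are high-dimensional and oddly shaped, which is what makes the production problem genuinely hard - but the same principle applies.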

I gave a talk on this at the Berlin Algorithms group in Feb 2014. The slides are here.



Artificial Intelligence and the Future of Cognitive Enhancement

I was invited to keynote Berlin's "Data Science Day" for 2014. They asked for something visionary, so I talked about cognitive enhancement (CogE), a longtime pet interest of mine that's related to my work at Solido. Whereas the first machine age was about augmenting muscles, our second machine age is about augmenting brains, i.e. CogE. Today's CogE includes examples like search and recommendation, plus the more extreme versions we see in designing advanced computer chips. Future CogE will continue to be catalyzed by the positive feedback cycle of AI & the "Silicon Midas Touch", and my favorite singularity scenario (BW++).

The slides are here.



Welcome! This is my first post. I've had a buildup of things I've been meaning to blog about, so the next several posts will be a flurry of activity while I get those off my chest. Many will be based on talks I've given in the last year. PS: in Saskatchewan, a flurry = mild wind + medium-sized snowflakes. Bigger flakes than you'd see in a tweetstorm.
