Bitmain Moves Toward Artificial Intelligence after Bitcoin Dominance
By its own reckoning, Bitmain built 70 percent of all the computers on the Bitcoin network. It makes specialized chips that perform the critical hash functions involved in mining bitcoin and packages those chips into the top-selling mining rig, the Antminer S9. Now the tech giant is looking to artificial intelligence.
Beyond Bitcoin and cryptocurrency mining, Jihan Wu, the co-founder of Bitmain, envisions a use case for the company outside blockchain and cryptocurrency. In an interview with the Institute of Electrical and Electronics Engineers (IEEE), Wu said that while Bitcoin's success is personal to him, Bitmain cannot rely solely on Bitcoin and has to explore other avenues for continued success.
Bitmain Capitalizes on Artificial Intelligence Trend
The search led to another of the decade's hottest trends, artificial intelligence (A.I.). On November 8, 2017, Bitmain's CEO, Micree Zhan, announced the company's new A.I. chip, the Sophon BM1680, at AIWORLD in Beijing, marking the first application of its chip-making capabilities beyond bitcoin. If things go to plan, thousands of Bitmain Sophon units could soon be training neural networks in vast data centers around the world.
Sophon, the A.I. division of Bitmain, wants to propel A.I. applications using application-specific integrated circuits (ASICs). Bitmain product marketer Allen Tang believes that AI and the blockchain are the "left leg and the right leg of the future." However, the bitcoin-mining giant faces stiff competition from the likes of Nvidia and Intel.
Micree Zhan explained in November 2017, “Bitmain saw trends in the AI business that were similar to the early days of Bitcoin, and so we started to explore AI toward the end of 2015. Now after only a year and a half, we have the mass-production chips in hand.”
The idea is to etch some of the most common deep-learning algorithms into silicon, greatly boosting efficiency. Users will be able to apply their own datasets and build their own models on these ASICs, allowing the resulting neural networks to generate results and learn from them far more quickly. Google's London-based DeepMind unit used a similar approach, running on Google's own Tensor Processing Unit chips, to train its AlphaGo artificial intelligence.
Superior Technology Chips
According to the company, the chip is specialized both for executing deep-learning algorithms, a task known as inference, and for training them. As IEEE's David Schneider notes, training and inference place different demands on a processor: training requires high-precision mathematics, while inference is well suited to low-precision arithmetic.
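To see why inference tolerates low precision, consider a minimal sketch (not Bitmain's implementation; the weights and inputs below are made up for illustration) in which trained floating-point weights are quantized to 8-bit integers, as low-precision inference hardware does, with only a tiny change in the result:

```python
# Hypothetical weights from an already-trained model, and one input vector.
weights = [0.81, -1.24, 0.33, 2.05]
inputs = [1.10, 0.40, -0.75, 0.52]

# Full-precision dot product (inference in float).
full = sum(w * x for w, x in zip(weights, inputs))

# Quantize weights to 8-bit integers in the range -127..127.
scale = max(abs(w) for w in weights) / 127.0
q_weights = [round(w / scale) for w in weights]

# Dequantize and recompute: the answer barely changes.
approx = sum(q * scale * x for q, x in zip(q_weights, inputs))

print(f"float: {full:.4f}  int8-quantized: {approx:.4f}")
```

Training, by contrast, accumulates many tiny gradient updates, where rounding error of this kind compounds, which is why it favors higher precision.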
According to its specs, the BM1680 uses 32-bit floating-point math. It can perform two teraflops (two trillion floating-point operations per second) and typically consumes 25 watts, though it can ramp up to 41 watts when running flat out. By comparison, Google's Tensor Processing Unit uses 8-bit math for inference.
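A quick back-of-envelope calculation from those published figures gives the chip's energy efficiency:

```python
# Published BM1680 figures: 2 teraflops peak, 25 W typical, 41 W maximum.
peak_flops = 2e12
typical_watts = 25
max_watts = 41

# Efficiency in billions of floating-point operations per joule.
gflops_per_watt_typical = peak_flops / typical_watts / 1e9
gflops_per_watt_max = peak_flops / max_watts / 1e9

print(f"typical: {gflops_per_watt_typical:.0f} GFLOPS/W")   # 80 GFLOPS/W
print(f"flat out: {gflops_per_watt_max:.1f} GFLOPS/W")      # ~48.8 GFLOPS/W
```

So at its typical draw the chip delivers about 80 GFLOPS per watt, dropping to roughly 49 GFLOPS per watt when running flat out.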
Currently sold on the company's site, the Sophon BM1680 is the heart of a card that serves as an accelerator for deep-learning applications.
Companies Lined up as Potential Customers
The company has been in talks with the giant Chinese conglomerates, including Alibaba, Tencent, and Baidu. Zhan says these companies are concerned about the cost, supply stability, and power consumption of GPU-based AI accelerators and are looking for a new vendor.
Bitmain also seems to be preparing to serve domestic demand as the Chinese government and Chinese companies increasingly adopt artificial intelligence.
How far Bitmain can go in the processing-card business remains to be seen.