
People recognition through edge computing

For decades we relied mostly on CPUs (Central Processing Units) to perform computations. Their performance grew rapidly thanks to increasing clock frequencies and to the number of transistors doubling approximately every two years, as described by Moore's Law. When clock frequencies could not go any higher due to heat dissipation issues, scaling continued under Moore's Law through multi-core processors. This scaling began to slow around 2012.

During the first decade of the 2000s many people, including myself, realised that CPUs are excellent for serial tasks but GPUs (Graphics Processing Units) are better suited to parallel tasks. I started programming GPUs in 2008 and developed the first multiple time-scales artificial recurrent neural networks and multi-GPU back-propagation-through-time algorithms, which were used to develop action and language acquisition for the humanoid robot iCub. This PhD research eventually led me to NVIDIA Research in Santa Clara, where I was probably the first person there to evaluate the performance of artificial neural networks and their training algorithms on GPUs. I say this because I had to convince NVIDIA to let me work on these topics and needed to explain why I believed they were so important. This was just before deep learning took off, and just before NVIDIA started investing heavily in the area.

GPUs are now used almost everywhere for training deep learning models, as training demands sustained, substantial processing power, often over a period of several days. However, once a model is trained it can be deployed on various other architectures that run inference far more efficiently, and in situations where standard GPUs would not be feasible.

Recent advances in chip technology have given birth to novel architectures designed to perform incredibly complex computations using very little power. This in turn has enabled novel solutions in the ever more popular domain of edge computing. Unlike cloud computing, edge computing processes data right at its source, resulting in real-time or near real-time data analysis, lower operating costs, reduced network traffic and improved application performance.

One such architecture, which I decided to evaluate, is the Intel Movidius Myriad 2 chip, which offers high performance at ultra-low power. I came across news about a brand-new mini-PCI Express module developed by UP featuring the Myriad 2 processor. I had previously used the Intel Movidius Neural Compute Stick, which has the same processor, but a mini PCI-e board opens up a whole new list of possibilities, so I decided to test it out. This new PCI-e module, called AI Core, enables Artificial Intelligence at the Edge. Thanks to its form factor it can be seamlessly integrated into most industrial edge computing devices. It is powered by the ultra-low-power, high-performance Intel® Movidius™ Myriad™ 2 2450 VPU with 512 MB of memory.

For this demonstration, a single AI Core module was plugged into a mini-PCI Express port of an UP Squared, the world's fastest x86 maker board, based on Intel's latest Apollo Lake platform and the successor to the Kickstarter-backed 2015 UP board. The UP Squared was set up with the Ubuntu 16.04 Linux distribution, and the AI Core module was used to run convolutional neural networks trained to recognise and localise various objects. These networks are based on the Caffe implementation of the mobilenet-ssd architecture. The following video shows three different models running on this setup in sequence at around 10 frames per second. None of these models is particularly well trained or optimised; the main point of the video is to demonstrate how this setup can be used for various edge applications.
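Once the Myriad 2 returns an output tensor, the raw values still need to be decoded into bounding boxes. The sketch below is a minimal, hypothetical decoder assuming the output layout commonly used by the Caffe mobilenet-ssd model under Intel's NCSDK (element 0 holds the number of valid detections, followed by 7-value records of [image_id, class_id, confidence, x_min, y_min, x_max, y_max] starting at index 7, with coordinates normalised to [0, 1]); the function name and threshold value are illustrative, not part of the setup described above.

```python
import numpy as np


def parse_ssd_output(output, conf_threshold=0.5):
    """Decode a flat mobilenet-ssd output tensor into detections.

    Assumes the common NCSDK layout: output[0] is the detection count,
    and each 7-value detection record starts at index 7 + i * 7.
    Returns a list of (class_id, confidence, (x1, y1, x2, y2)) tuples
    for detections at or above conf_threshold.
    """
    num_detections = int(output[0])
    detections = []
    for i in range(num_detections):
        base = 7 + i * 7
        _, class_id, confidence, x1, y1, x2, y2 = output[base:base + 7]
        if confidence >= conf_threshold:
            detections.append((int(class_id), float(confidence),
                               (float(x1), float(y1), float(x2), float(y2))))
    return detections


# Synthetic example: two detections, only one above the 0.5 threshold.
tensor = np.zeros(21, dtype=np.float32)
tensor[0] = 2
tensor[7:14] = [0, 15, 0.9, 0.1, 0.2, 0.5, 0.6]   # class 15, conf 0.9
tensor[14:21] = [0, 7, 0.3, 0.0, 0.0, 0.2, 0.2]   # class 7, conf 0.3 (dropped)
print(parse_ssd_output(tensor))
```

The normalised box coordinates would then be scaled by the frame's width and height before drawing the overlays seen in the video.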

As Head of Innovation at Cortexica, I look into emerging applications that I believe hold a lot of promise for the future. I am convinced that edge computing, and in particular the Intel Myriad family of chips on the AI Core module, will enable many exciting applications. Cortexica is already working with Cisco on some cool projects (https://emear.thecisconetwork.com/site/content/lang/en/id/8533), and I already see countless applications of edge computing in our business.
