Malcolm Penn, chairman and CEO, Future Horizons, sent me the Enable 450 newsletter. The goal of Enable 450 is a Co-ordination Action to enable an effective European 450mm equipment and materials network. Here, I am presenting a bit about the E450EDL – the European 450mm Equipment Demo Line.
The aim of the ENIAC E450EDL key enabling technology pilot project is to continue the engagement of the European semiconductor equipment and materials industry in the 450mm wafer-size transition that started with the ENIAC JU EEMI450 initiative and proceeded with subsequent publicly funded projects, among others NGC450, SOI450 and EEM450PR.
The demo line resulting from this project will enable first critical process-module development by combining imec infrastructure with tools that remain at the manufacturers' sites (a distributed pilot line). Multi-site processing will allow partners to participate in the world's first 450mm integration studies, enabled by the controlled exchange of 450mm wafers between sites.
The consortium comprises 41 members (from 11 different European countries) with many SMEs and research institutes. The project is organized in five technical work packages and a work package on management and co-ordination.
In the work package on integration and wafer processing, first critical modules will be developed to demonstrate the feasibility of processing on 450mm wafers. The main objective of the work package on lithography is to develop a wafer-stage test rig that can be implemented in the pilot-line system.
In the work package on front-end equipment, several tools will be developed, such as a plasma ion implant module, a plasma dry etch module, an RTP (rapid thermal processing) system and a single-wafer cleaning system.
Furthermore, in the dedicated work package on metrology, 450mm metrology tool types will be developed for, among others, dielectric film thickness and composition measurements, defect inspection, defect review and analysis, optical critical dimensions (CD), overlay (mask and wafer) and 3D metrology.
Finally, the work package on wafer handling and automation will provide a set of equipment to support the demo-line operations and facilitate the R&D dedicated to the process and metrology modules.
The project will last 36 months, beginning on October 1, 2013. The budget is €204.6 million, of which the ENIAC JU will fund €30.8 million. The project is still considering new members, so if you are interested, please contact ASML.
Thursday, March 28, 2013
Wednesday, March 27, 2013
Inertial micropump technology for microfluidic apps
At a MEMS Industry Group seminar in Orlando, US, Alexander Govyadinov, lead technologist, Hewlett-Packard Printing & Technology Development Organization, said microfluidics looks at the movement of small amounts of fluids through microchannels.
The current microfluidic applications include pharmaceutical and life science research, clinical and veterinary diagnostics, human point-of-care, analytical devices, environmental and industrial testing, and inhalers, micropumps and microneedles.
The microfluidic segment has been growing at a 20 percent CAGR. By 2016, the market is expected to reach $4.7 billion, representing over 1 billion microfluidic chips and substrates. The GM for synthetic biology reached $1 billion in 2012.
Every fluidic system needs a pump. Although external pumps are commonly used, there is a lack of simple, cheap and easy-to-integrate micro-pumps.
There is passive capillary pump operation using a capillary retention valve (CRV); in a capillary-driven microfluidic device, the chip is composed of microfluidic functional elements. There are rotary pumps as well, but rotating gears can be hard to integrate and require strong external actuators. Mostly, external pumps are available, including pneumatic/membrane micropumps, external piezo pumps and other active pumps.
In a thermal inkjet (TIJ), a voltage pulse heats the resistor and boils the fluid. Once the droplet has been ejected, the chamber is refilled by capillary forces. HP has an inertial pump for microfluidics, backed by a computational fluid dynamics (CFD) inertial pump model, from which an optimal resistor location can be derived. There are 2mm x 512 pump-channel arrays.
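The point about an optimal resistor location can be made concrete with a deliberately simplified lumped model (my own illustrative assumption, not HP's CFD model): the resistor's position along the channel sets the asymmetry between the two fluid arms, and the net pumping effect vanishes when it sits at the midpoint.

```python
# Toy lumped model of an inertial pump (illustrative only; HP's actual
# results come from a full CFD model). A resistor at position x in a
# channel of length L splits it into arms of length x and L - x; here
# the net flow is taken as proportional to the arm-length asymmetry,
# so it is zero at the midpoint and maximal near either end, with the
# sign indicating pumping direction.

def net_flow(x: float, L: float = 1.0) -> float:
    """Relative net flow for a resistor at position x in [0, L]."""
    if not 0.0 <= x <= L:
        raise ValueError("resistor must lie inside the channel")
    return (L - 2.0 * x) / L

# No net pumping with the resistor dead-center:
assert net_flow(0.5) == 0.0
```

In this sketch the strongest pumping comes from placing the resistor close to one reservoir, which matches the intuition behind optimizing resistor placement.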
The vision for future micropump applications includes generic fluidic networks with reversible pumps. Pump densities can be up to 1,000 per square inch. There are concepts such as a polymerase chain reactor and a µ-calorimeter total analysis system.
Microfluidics is a growing field, and the inertial pump is a new way to move fluids through microchannels.
Now, I’m on Wikipedia! ;) Thanks everyone!! :)
Today is Holi, the festival of colors. Well, it added some more color to my life as I was told that I have been listed on Wikipedia! You can see it here! ;)
I don’t really know who has added me there, or where they are getting all of their information. All I can humbly say is: thanks a lot, very sincerely, to Wikipedia! :)
Wikipedia has said I’ve been staying at my Delhi residence since 1984! Well, that’s the year my late father, Pramode Ranjan Chakraborty, bought this house. Later, in 1986, he, along with my late mother, Mrs Bina Chakraborty, moved to this house.
Why this huge gap between our buying the house and moving in? Well, not many folks know that my parents met with a near-fatal accident on Jan. 27, 1986, in the early hours of the day in New Delhi. They were going home by auto-rickshaw to our home at Greater Kailash-II, New Delhi, when an Ambassador car rammed into their auto-rickshaw full on!
That’s also the day my life changed completely! I was still a student, playing cricket with friends, when my aunt called us from Delhi. We rushed to Delhi, to find our parents badly injured! I personally had to say goodbye to cricket and turned my attention to finding work. I finally moved to Delhi in Nov. 1987, and that's where my entire life started!
It has been a great ride ever since! All the hard work done seems to have paid off. First, I must mention Gratian Vas, who took me in at Holy Faith International back in 1988. My first brush with electronics was at SBP Consultants & Engineers a year later, followed by Electronics For You. However, it was at DiSyCom magazine, under Arun Bhattacharjee, where I learned the ropes.
Later, I was hired by the late Ms Rashmi Bhushan to write for an electronic components magazine published by Asian Sources Media. That’s when my life changed significantly! Not only did Asian Sources Media, now Global Sources, hire me as the full-time telecom editor and take me to Hong Kong, it gave me a first-hand view of China and how it grew in the world of electronics! It has been a fascinating journey ever since!
Thereafter, it was at Reed Elsevier, in Singapore, where I had the late Ian Shelley, Michael Tan, Paul Beh and Swee Heng Tan for company. Everywhere, I learned a lot! That’s what I continue to do even today!
The world can give me as many awards and folks can call me anything, but I shall always remain, yours truly! :)
Friday, March 22, 2013
What technology SoC engineers need for next-gen chips?
About 318 engineers and managers completed a blind, anonymous survey on On-Chip Communications Networks (OCCNs), also referred to as "on-chip networks", defined as the entire interconnect fabric for an SoC. The on-chip communications network report was done by Sonics Inc. A summary of some of the highlights follows.
The average estimated time spent on designing, modifying and/or verifying on-chip communications networks was 28 percent (for the respondents that knew their estimated time).
The two biggest challenges for implementing OCCNs were meeting product specifications and balancing frequency, latency and throughput. Second tier challenges were integrating IP elements/sub-systems and getting timing closure.
As for 2013 SoC design expectations, a majority of respondents are targeting a core speed of at least 1 GHz for SoC design starts within the next 12 months, based on those respondents that knew their target core speeds. Forty percent of respondents expect to have 2-5 power-domain partitions for their next SoC design.
A variety of topologies are being considered for respondents’ next on-chip communications networks, including NoCs (half), followed by crossbars, multi-layer bus matrices and peripheral interconnects; respondents that knew their plans here were seriously considering an average of 1.7 different topologies.
Twenty percent of respondents stated they already had a commercial Network-on-Chip (NoC) implemented or plan to implement one in the next 12 months, while over a quarter plan to evaluate a NoC over the next 12 months. A NoC was defined as a configurable network interconnect that packetizes address/data for multicore SoCs.
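The survey's definition of a NoC, a configurable interconnect that packetizes address/data for multicore SoCs, can be illustrated with a toy sketch of the packetizing step. This is my own simplification for illustration, not any vendor's flit format:

```python
# Illustrative only: splitting a memory transaction into fixed-width
# "flits" (flow-control units), the basic step a NoC performs before
# routing traffic between cores. The 32-bit flit width, head-flit
# layout, and field names are assumptions for this sketch.

from dataclasses import dataclass
from typing import List

FLIT_BYTES = 4  # assumed 32-bit flit width

@dataclass
class Transaction:
    address: int   # 32-bit target address
    data: bytes    # write payload

def packetize(txn: Transaction) -> List[int]:
    """Head flit carries the address; body flits carry the data."""
    flits = [txn.address & 0xFFFFFFFF]
    for i in range(0, len(txn.data), FLIT_BYTES):
        chunk = txn.data[i:i + FLIT_BYTES]
        # zero-pad the final chunk to a full flit
        flits.append(int.from_bytes(chunk.ljust(FLIT_BYTES, b"\x00"), "big"))
    return flits

# A 5-byte write becomes one head flit plus two body flits:
flits = packetize(Transaction(address=0x80000000, data=b"\x01\x02\x03\x04\x05"))
```

A real NoC adds routing, virtual channels and flow control on top of this, which is exactly where the survey's later questions about virtual channels and wiring congestion come in.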
For respondents who had an opinion when commercial Networks-on-Chip became an important consideration versus internal development when implementing an SoC, 43 percent said they would consider commercial NoCs at 10 or fewer cores; approximately two-thirds said they would consider commercial NoCs at 20 or fewer cores.
The survey participants’ top three criteria for selecting a Network-on-Chip were scalability/adaptability, quality of service and system verification, followed by layout friendliness and support for power-domain partitioning. Half of the respondents saw reduced wiring congestion as the primary reason to use virtual channels, followed by increased throughput and meeting system concurrency with limited bandwidth.
Wednesday, March 20, 2013
Focus on SiC power electronics business 2020
SiC is currently implemented in several power systems and is gaining momentum and credibility.
Yole Developpement remains convinced that the most pertinent market for SiC lies in high and very high voltage (more than 1.2kV), where applications are less cost-driven and where few incumbent technologies can compete on performance. This transition is on its way, as several device/module makers have already planned such products in the short term.
Thus, even if EV/HEV skips SiC, the industry could expand into other applications. Now, only one question remains: is there enough business for so many contenders to live decently? Probably yes, as green techs are also expanding fast and strongly demand SiC. But any newcomer should carefully manage strategy and properly size capex according to the market size.
Power electronics industry outlook
Electronics systems were worth $122 billion in 2012, and will likely grow to $144 billion by 2020 at a CAGR of 1.9 percent. Power inverters will grow from $41 billion in 2012 to over $70 billion by 2020 at a CAGR of 7.2 percent. Semiconductor power devices (discretes and modules) will grow from $12.5 billion in 2012 to $21.9 billion by 2020 at a CAGR of 7.9 percent. Power wafers will grow from $912 million in 2012 to $1.3 billion by 2020 at a CAGR of 5.6 percent.
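These growth rates follow from the standard compound-annual-growth-rate formula, and readers can sanity-check any of the quoted pairs of endpoints with a few lines of Python (the eight-year 2012-2020 span is my reading of the forecast window):

```python
# Compound annual growth rate: the constant yearly rate that links a
# start value to an end value over a number of years.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# Power inverters: $41B (2012) to ~$70B (2020), an 8-year span.
# With these rounded endpoints the formula gives about 6.9 percent,
# consistent with the quoted 7.2 percent once "over $70 billion" is
# taken literally rather than as exactly $70B.
inverter_cagr = cagr(41e9, 70e9, 8)
```

Small gaps between a recomputed rate and a quoted one usually just reflect rounding of the endpoint figures.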
Looking at the power electronics market in 2012 by application and the main expectations to 2015, computer and office will account for 25 percent, industry and energy 24 percent, consumer electronics 18 percent, automotive and transport 17 percent, telecom 7 percent and others 9 percent.
The main trends expected for 2013-2015 are:
* Significant increase of automotive sector following EV and HEV ramp-up.
* Renewable energies and smart-grid implementation will drive industry sector ramp-up.
* Steady erosion of the consumer segment due to pressure on prices (however, volumes (units) will keep increasing).
The 2011 power device sales by region reveal that, overall, Asia is still the destination for more than 65 percent of power products. Most of the integrators are located in China, Japan or Korea. Europe is very dynamic as well, with top players in traction, grid, PV inverters, motor control, etc. Asia (excluding Japan) leads with 39 percent, followed by Japan with 27 percent, Europe with 21 percent and North America with 13 percent.
The 2011 revenues by company headquarters location reveal that the big names of the power electronics industry are historically from Japan: nine of the top-20 companies are Japanese, and there are very few power manufacturers elsewhere in Asia. Europe and the US share four of the top five companies. Japan leads with 42 percent, followed by Europe and North America with 28 percent each, and the rest of Asia with 2 percent.
Looking at the TAM comparison for SiC (and GaN), very high voltage, high voltage (2kV) and medium voltage (1.2kV) appear as more comfortable areas for SiC: the applications are less cost-driven and SiC's added value is obvious. Low voltage (0-900V) brings strong competition from traditional silicon technologies, SJ MOSFETs and GaN; these are cost-driven applications.
Tuesday, March 19, 2013
Xilinx targets growing ASIC and ASSP gaps
Xilinx Inc. has announced solutions for significant and growing gaps in ASIC and ASSP offerings targeting next-generation smarter networks and data centers. It has been acquiring and developing a SmartCORE IP portfolio and a critical mass of application specialists and services that leverage Xilinx’s All Programmable FPGAs, SoCs, and 3D ICs.
To find out more about how Xilinx's solutions are targeting growing ASIC and ASSP gaps for next-gen smarter networks and data centers, I spoke with Neeraj Varma, director, Sales-India, Xilinx. He said: "Over the past several years, Xilinx has been making a transition from the leading FPGA vendor to a provider of All Programmable Solutions for Smarter Systems. With its All Programmable 7 Series FPGAs, All Programmable SoCs and the Vivado Design Suite, Xilinx now offers a comprehensive set of solutions that provide end-to-end system implementation.
"Through strategic acquisitions, investments in silicon products and IP development, Xilinx has started to replace entire ASSPs and ASICs in the communications market by offering a complete IP cores portfolio which allows customers to design Smarter Systems for networking, communications and data center applications.
"Xilinx is calling this set of IP cores SmartCORE IP, because they are the critical application-specific building blocks needed to develop smarter networking and communications systems. We are responding to market need, and that need has accelerated recently as the viability of ASICs, and more recently ASSPs, has been severely challenged. Xilinx is a generation ahead in SoC and tools, and its leadership at 28nm is borne out by its revenue ramp."
Developing SmartCORE IP portfolio
What is meant by Xilinx acquiring and developing a SmartCORE IP portfolio and a critical mass of application specialists and services?
According to him, devices at the 28nm process node require a new and different set of tools to exploit all their capabilities. That was one of the reasons for Xilinx to invest heavily in resources and time to come up with the Vivado Design Suite: to be able to support large designs and get them into production with minimal effort.
Vivado supports the growing use of IP blocks to reduce the complexity of designs, which is critical in the implementation of complex networking and communications systems. This is one of the main reasons Xilinx spent years developing strategic partnerships and making acquisitions such as Omiino (OTN IP solutions), Modelware (traffic management and packet processing IP solutions), Sarance (Ethernet and Interlaken IP solutions) and Modesat (microwave and E-band backhaul IP solutions), to offer a comprehensive set of IP cores for designing smarter systems for networking, communications and data centre applications.
How are the solutions going to address the challenges with ASICs and ASSPs?
He said that ASICs and ASSPs targeting the communications, networking, and data center equipment markets have been disappearing at a surprisingly rapid pace due to many factors, including escalating IC-design costs and the need for much greater levels of intelligence and adaptability—all driven by wide variance in application and device requirements.
Additionally, the equipment markets no longer accept “me too” equipment design, which means that ASSP-based equipment design has almost vanished due to limited flexibility. These growing gaps are pervasive across all markets. These challenges, coupled with the rapidly increasing design costs and lengthy design cycles for both ASICs and ASSPs, have created significant solution gaps for equipment design teams.
ASSPs and ASICs are either too late to market to meet OEM or operator requirements, are significantly overdesigned to satisfy the superset requirements of many diverse customers, are not a good fit for specific target applications, and/or provide limited ability for customers to differentiate their end products. Equipment vendors face many or all of these gaps when attempting to use the solutions offered by ASIC and ASSP vendors.
The biggest driver in the communications and networking markets is the insatiable need for bandwidth as traffic explodes well beyond the capabilities of networks to support that traffic. However, the need is definitely not bandwidth or transmission capacity at any cost. It’s really a need for more bandwidth and more capacity at lower and lower cost in both wireless and wired networks.
Monday, March 18, 2013
Tensilica acquisition to broaden Cadence's IP portfolio
Last week (March 11, 2013), Cadence Design Systems Inc. entered into a definitive agreement to acquire Tensilica Inc., a leader in dataplane processing IP, for approximately $380 million in cash.
With this acquisition, Tensilica dataplane processing units (DPUs) combined with Cadence design IP will deliver more optimized IP solutions for mobile wireless, network infrastructure, auto infotainment and home applications.
The Tensilica IP also complements industry-standard processor architectures, providing application-optimized subsystems to increase differentiation and get to market faster. Finally, over 200 licensees, including system OEMs and seven of the top 10 semiconductor companies, have shipped over 2 billion Tensilica IP cores.
Talking about the rationale behind Cadence acquiring Tensilica, Pankaj Mayor, VP and head of Marketing, Cadence Design Systems, said: "Tensilica fits and furthers our IP strategy - the combination of Tensilica's DPU and Cadence IP portfolio will broaden our IP portfolio. Tensilica also brings significant engineering and management talent. The combination will allow us to deliver to our customers configurable, differentiated, and application-optimized subsystems that improve time to market."
It is expected that the acquisition will see Tensilica's dataplane IP complement Cadence's and Cosmic Circuits' IP. Cadence had acquired Cosmic Circuits in February 2013.
What are the possible advantages of DPUs over DSPs? Does it also mean a possible end of the road for DSPs?
As per Mayor, DSPs are special-purpose processors targeted at digital signal processing. Tensilica's DPUs are programmable and customizable for a specific function, providing optimal data throughput and processing speed; in other words, Tensilica's DPUs provide a unique combination of customized processing plus DSP. Tensilica's DPUs can outperform traditional DSPs in power and performance.
So, what will happen to the MegaChips design center agreement with Tensilica? Does it still carry on? According to Mayor, right now, Cadence and Tensilica are operating as two independent companies and therefore, Cadence cannot comment until the closing of the acquisition, expected in 30-60 days.
PMC's DIGI 120G supports 10G, 40G and 100G speeds for OTN transport
PMC-Sierra Inc. has launched the PM5440 DIGI 120G, said to be the industry's only single-chip OTN processor supporting 10G, 40G and 100G speeds for OTN transport.
Elaborating, Kevin So, senior product line manager, PMC, said: "PMC is the first to integrate support for 12x10G, 3x40G or 1x100G in a single piece of silicon to address OTN transport (point-to-point), OTN aggregation (multiplexing) and OTN switching deployments. For example, with DIGI 120G, an OEM can design a line card on a P-OTP that supports 12x10G supporting per port configurable multi-service like OC-192/STM-64, 10GE, OTU2 or Fibre Channel."
Using the same chip and same software investment, they can also design a 3x40G card supporting 40GE, OC-768/STM-256 or OTU3. Another card can be designed to support 100GE or OTU4. An OEM can design 10+ cards across multiple platforms leveraging a single R&D investment using DIGI 120G. This also translates into the lowest cost of ownership for the OEMs, while achieving a time to market advantage.
How does OTN allow for flexible aggregation and switching from 1G to 100G? For that matter, what can this device do?
OTN is defined as a carrier-grade protocol to transparently carry, switch and aggregate multi-service traffic, from 1GE all the way to 100GE, over a WDM network. The protocol is an ITU-T standard, and supports ODU0 (which is 1G) to ODU4 (which is 100G). In addition, OTN defines ODUflex, a flexible container that can be adjusted up and down from 1G to 100G in increments of 1G.
PMC’s DIGI 120G supports all these OTN container rates and enables multiplexing and switching of traffic between them. In addition, DIGI 120G provides the ability to scale ODUflex to carry packet traffic ranging from 1G to 100G without service interruption. DIGI 120G is a single-chip solution that uniquely enables the transponders, muxponders and line cards on ROADMs and P-OTPs.
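As a rough sketch of the container model described above, mapping a client rate to the smallest fitting OTN container might look like the following. The rates here follow the simplified 1G-to-100G description in this article, not the exact ITU-T G.709 nominal bit rates:

```python
import math

# Simplified container rates (Gbps) per the description above:
# ODU0 is 1G, ODU4 is 100G; intermediate rates are approximate.
ODU_CONTAINERS = [("ODU0", 1), ("ODU1", 2.5), ("ODU2", 10),
                  ("ODU3", 40), ("ODU4", 100)]

def smallest_container(client_gbps):
    """Return the smallest standard ODU container that fits the client rate."""
    for name, rate in ODU_CONTAINERS:
        if client_gbps <= rate:
            return name
    raise ValueError("client rate exceeds ODU4 (100G)")

def oduflex_size(client_gbps):
    """ODUflex: a flexible container adjustable in 1G increments."""
    return math.ceil(client_gbps)  # round up to the next 1G step

print(smallest_container(10))   # a 10GE client maps to ODU2
print(oduflex_size(3.2))        # a 3.2G packet flow needs a 4G ODUflex
```

The hitless ODUflex scaling mentioned above (G.7044) is what lets that 4G container grow or shrink later without interrupting the service.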
Innovations done
What are the innovations done by the PM5440 DIGI 120G? What if there is some new chip coming out?
Reducing line card power and bill-of-material by more than 50 percent, PMC's DIGI 120G stands uniquely differentiated as:
* Industry’s only single-chip solution delivering 12x10G, 3x40G or 1x100G port densities.
* Industry’s highest number of 10G ports enabling 2x higher density 10G OTN line cards.
* Industry’s highest gain 40G/100G enhanced-FEC extending optical reach by 2x vs GFEC.
* Industry’s only 120G OTN solution with OIF’s OTN-over-Packet Fabric Protocol (OFP).
* First OTN processor to enable hitless packet traffic scaling with ITU-T’s G.hao/G.7044.
* Flexible per port client-mapping of OTN, Ethernet, Storage, IP/MPLS and SONET/SDH.
* Synchronous Ethernet (SyncE), 1588v2 Precision Time Protocol (PTP), and Ethernet Link OAM (802.3ah) delivering per port Carrier Ethernet performance.
To deliver these innovations, PMC integrated well over a billion transistors. The level of silicon integration is unprecedented – requiring engineering capabilities unmatched in the telecom industry. So added that PMC worked closely with tier-1 OEM customers from the start at the requirements phase in order to tailor the solution for their systems. As a result, the DIGI 120G is a key architectural element of their system.
By when does PMC see enterprises 'really' going in for products based on the PM5440 DIGI 120G to support Big Data? And what happens if they still don't?
So noted: "We have been working with our customers for the last few months developing their line cards using DIGI 120G. We are confident they will take their products using DIGI 120G to production in 2013."
ROADM revolution
Does PMC actually see a reconfigurable optical add-drop multiplexer (ROADM) revolution?
According to So, a couple of things are happening in the ROADM market. On the photonics side, products are now available to allow service providers to deploy very flexible wavelength switches that are color independent, direction independent, wavelength contention-free and support flexible ITU grid widths.
On the platform architecture side, we are seeing a move away from traditional muxponders and transponders line card architectures where the client ports are fixed to a specific optical uplink port (wavelength). Instead, OEMs want to de-couple the client ports from the uplink optical capacity for great flexibility and in order to achieve better bandwidth utilization especially as the industry starts deploying 100G wavelengths.
Services in the network, especially those from the metro network edge, are still largely at 1G or 10G rates. To achieve this flexibility, central fabrics are added to the ROADM platform to support OTN switching. PMC’s Metro OTN processor family, including the latest DIGI 120G, enables OEMs to build line cards that can switch OTN and packet traffic simultaneously in these platform architectures.
Finally, is the bandwidth of the common modulation formats for 100G and beyond too broad for ROADMs?
Kevin So concluded: "OTN, as a protocol, is designed to scale to beyond 100G. The standard bodies are already working on this now. ROADMs, as a hardware platform will scale, but new components and technologies will likely be needed to take them beyond 100G."
Tuesday, March 12, 2013
Global semicon sales to grow 6.6 percent in 2013: Cowan LRA model
This is a continuation of my coverage of the fortunes of the global semiconductor industry. I would like to acknowledge and thank Mike Cowan, an independent semiconductor analyst and developer of the Cowan LRA model, who has provided me the latest numbers.
According to the WSTS’s Jan 2013 HBR (posted on March 8th, 2013), January 2013’s actual global semiconductor sales came in at $22.824 billion. This actual sales result for January is 2.9 percent higher than last month’s sales forecast estimate for January, namely $22.180 billion.
Plugging January’s actual sales number into the Cowan LRA forecasting model yields the following quarterly, half-year, and full year sales and sales growth forecast expectations for 2013 compared to 2012 sales depicted in the table.
It should be highlighted that with last month’s publishing of the final 2012 sales result by the WSTS, the Cowan LRA Model for forecasting global semiconductor sales was updated to incorporate the full complement of 2012′s monthly sales numbers, thereby capturing 29 years of historical, global semiconductor (actual) sales numbers as gathered, tracked and published each month by the World Semiconductor Trade Statistics (WSTS) on its website.
As described last month, the necessary mathematical computations required in order to update the complete set of linear regression parameters embedded in the Cowan LRA forecasting model for determining future sales were carried out. The newly derived set of linear regression parameters therefore reflect 29 years (1984 to 2012) of historical global semiconductor sales as the basis for predicting future quarterly and full year sales and sale growth forecast expectations by running the Cowan LRA Model.
Therefore, the table above summarizes the model’s latest, updated 2013 sales and sales growth expectations reflecting the WSTS’s January 2013′s actual sales as calculated by the model’s newly minted set of linear regression parameters.
Note that the latest Cowan LRA Model’s expected 2013 sales growth of 6.6 percent relative to 2012 final sales ($291.562 billion) is more bullish than the WSTS’s adjusted autumn 2012 sales growth forecast of 3.9 percent as well as the WSTS’s autumn 2012′s original forecasted sales growth of 4.5 percent, which was released back during last November.
In addition to forecasting 2013’s quarterly sales estimates, the Cowan LRA Model also provides a forecast expectation for February 2013’s sales, namely $22.436 billion. This sales forecast yields a 3MMA (three-month moving average) forecast for February of $23.571 billion, assuming no or minimal sales revision is made to January’s actual sales.
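The 3MMA arithmetic can be checked directly from the figures quoted above. Note that December 2012's sales value below is back-computed from the stated 3MMA, not a number taken from the article:

```python
# Three-month moving average (3MMA) check, in $ billions.
feb_3mma = 23.571      # stated 3MMA forecast for February 2013
jan_actual = 22.824    # January 2013 actual sales
feb_forecast = 22.436  # February 2013 sales forecast

# December 2012 sales implied by the stated 3MMA (not quoted in the article):
dec_implied = 3 * feb_3mma - jan_actual - feb_forecast
print(round(dec_implied, 3))

# Averaging the three months recovers the stated 3MMA of 23.571.
print(round((dec_implied + jan_actual + feb_forecast) / 3, 3))
```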
Finally, the table provided below details the monthly evolution for 2013’s sales and sales growth forecast predictions as put forth by the Cowan LRA forecasting model dating back to September of last year.
Note that the most recent 2013 sales growth forecast is up compared to the previous two forecasts of 5.5 percent and 3.6 percent, respectively.
It should be mentioned that the previous forecast of 2013 sales growth, namely December 2012's 3.6 percent, was based upon a sales forecast estimate for January 2013, whereas the latest estimate of 6.6 percent utilizes January's actual sales result just released in the WSTS's January 2013 HBR (Historical Billings Report).
Monday, March 11, 2013
Components Direct offers guaranteed traceable E&O inventory!
Components Direct is a leading source for authorized end-of-life and excess electronic components. The products are guaranteed grade A, factory sealed direct from the manufacturer, and inventoried in an ESD 20.20 certified and ISO 9001 certified state-of-the-art facility.
It has a leading cloud-based platform for excess and obsolete (E&O) inventory. In 2012, Avnet and Components Direct entered into a strategic relationship. Components Direct is the exclusive channel for Avnet's factory authorized excess and end-of-life components. Components Direct is headquartered in Milpitas, CA, with locations in the US and Asia.
Compared to leading industry giants, such as Element14 and RS Components, Components Direct, currently, doesn't have a detailed menu showcasing listed products, at least not on the home page, as yet. One hopes that'll make an appearance soon.
Speaking on the mission of Components Direct, Anne Ting, executive VP, Marketing said: "Components Direct is the premier authorized distributor for excess and end-of-life electronic components. We are the only company working directly with manufacturers and their franchised distributors to offer 100 percent guaranteed traceable E&O components as well as technology services to combat counterfeit components and other gray market activity.
"For our supplier partners, we enable them to put excess product back into the control of an authorized source, as opposed to the gray market. For buyers, we provide them with a secure, authorized one-stop shop for excess, obsolete and unsold factory components."
Combating gray market
How important is it to combat the gray market? Why will this endeavor stop/lessen gray market activity?
According to Ting, the gray market is a serious and growing problem. As early as 2008, a study by KPMG and the Alliance for Gray Market and Counterfeit Abatement (AGMA) stated that as much as $58 billion of technology products were passing through the gray market, and the problem has only gotten worse.
The gray market is rampant throughout all industries, with everyone from engineers, to procurement professionals and consumers impacted negatively when the products they purchase are advertised as new and authentic, but in reality could be used, refurbished or even worse, counterfeit.
In fact, a 2012 study by market research firm IHS found that over 12 million counterfeit electronics and semiconductor components have entered the distribution chain since 2007, with 57 percent of all counterfeit parts obsolete or end-of-life components. Many of these parts make their way into mission-critical industries, such as defense and aerospace, where a malfunctioning counterfeit part can mean the difference between life and death.
While provisions in the 2012 National Defense Authorization Act have enabled the government and trade groups to make some progress towards regulating the supply chain to ensure that components are only sourced directly from the manufacturers or their franchised distributors, the problem has not abated. The Act empowers the federal government to hold contractors financially responsible for replacing counterfeit products.
This, together with other changes, puts more responsibility on suppliers of electronic components to have risk mitigation procedures in place. The issue is becoming more topical, and the industry must act in order to comply with the new legislation.
Components Direct takes this problem seriously, and provides supplier insights and tools to help combat gray market activity. In a recent study we conducted for a leading semiconductor supplier of both analog and digital devices, we discovered that over 124 million units of their product were floating in the gray market across 6,500 plus part numbers.
Over 70 percent of the products were found in Asia, and 20 percent also appeared in both North America and EMEA. The product age spanned many years with date codes of less than one year accounting for 22 percent of their gray market product. A further 5 percent had date codes over 11 years, demonstrating that whether you were an OEM looking for the newest product, or a military sub-contractor looking for obsolete components, no end customer is immune to the presence of unauthorized product.
Components Direct’s technology tools and services track gray market activity and provide suppliers with unprecedented visibility into their product leakage in the gray market by part number, region, date code, etc. This data enables suppliers to trace leakage in their supply chain and reduce the amount of unauthorized product getting into the gray market.
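A hypothetical sketch of the kind of aggregation such tracking implies: sightings recorded by part number, region and date code, then summarized per region and per date-code age bracket, along the lines of the study figures quoted above. All data here is invented for illustration:

```python
from collections import defaultdict

# Invented gray-market sightings: (part_number, region, units, date_code_age_years)
sightings = [
    ("PN-1001", "Asia", 500_000, 0.5),
    ("PN-1001", "NA",   120_000, 2.0),
    ("PN-2002", "EMEA",  80_000, 12.0),
]

by_region = defaultdict(int)
recent, very_old, total = 0, 0, 0
for part, region, units, age in sightings:
    by_region[region] += units
    total += units
    if age < 1:
        recent += units    # date codes under one year
    elif age > 11:
        very_old += units  # date codes over 11 years

print(dict(by_region))
print(f"under 1 year: {100 * recent / total:.0f}%")
```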
Additionally, Components Direct provides suppliers and buyers with a secure, factory authorized channel for selling or purchasing 100 percent guaranteed traceable components. "We only sell products that come directly from manufacturers or their franchised distributors, and all our products are inventoried in an ESD 20.20 and ISO 9001 certified facility," said Ting.
As an extension of the manufacturer, Components Direct provides the supply chain buyer with complete confidence and peace of mind that all products originate directly from the manufacturer and have been properly stored, handled and packaged. Sourcing from an authorized source like Components Direct eliminates the risks surrounding product quality, reliability and liability.
Thursday, March 7, 2013
What's next in complex SoC verification?
Functional verification is critical in advanced SoC designs. Abey Thomas, verification competency manager, Embitel Technologies, said that over 70 percent of the effort in the SoC lifecycle is verification. Only one in three SoCs achieves first-silicon success.
Thirty percent of designs need three or more re-spins. Three out of four designs are SoCs with one or more processors. Three out of four designs re-use existing IPs. Almost all of the embedded processor IPs have power controllability. Almost all of the SoCs have multiple asynchronous clock domains.
On average, 75 percent of designs are under 20 million gates. A significant increase in formal checking is approaching. The average number of tests performed has increased exponentially. Regression runs now span several days and weeks. Hardware emulation and FPGA prototyping are rising exponentially. There has been a significant increase in the number of verification engineers involved. A lot of HVLs and methodologies are now available.
Verification challenges
Verification challenges include unexpected conflicts in accessing the shared resource. Complexities can arise due to an interaction between standalone systems. Next, there are arbitration priority related issues and access deadlocks, as well as exception handling priority conflicts. There are issues related to the hardware/software sequencing, and long loops and unoptimized code segments. The leakage power management and thermal management also pose problems.
There needs to be verification of performance and system power management. Multiple power regions are turned ON and OFF. Multiple clocks are also gated ON and OFF. Next come asynchronous clock domain crossings, and issues related to protocol compliance for standard interfaces. There are issues related to system stability and component reliability. Some other challenges include voltage level translators and isolation cells.
Where are we now? It is at clock gating, power gating with or without retention, multi-switching (multi-Vt) threshold transistors, multi-supply multi-voltage (MSMV), DVFS, logic optimization, thermal compensation, 2D-3D stacking, and fab process and substrate level bias control.
So, what's needed? There must be low-power methods that do not impact performance. Careful design partitions are needed. The clock trees must be optimized. Crucial software operations need to be identified at early stages. Also, functional verification needs to be thorough.
Power-hungry processes must be shortlisted. There needs to be compiler-level optimization as well as hardware acceleration based optimization. There should be duplicate registers and branch prediction optimization. Finally, there should be a big-little processor approach.
Present verification trends and methodologies include clock partitions, power partitions, isolation cells, level shifters and translators, serializers-deserializers, power controllers, clock domain managers, and a power information format - CPF or UPF. Low-power related verification covers both power-down and power-up; on power-up, the behavioral processes are re-enabled for evaluation.
Open source verification challenges
First, the EDA vendor decides what to support! Too many versions are released in a short time frame. Object-oriented concepts are used that are sometimes unfit for hardware. Modelling is sometimes done by an engineer who does not know the difference between a clock cycle and a motor cycle! Next, there are too many open source implementations without much documentation. There can be multiple, confusing implementation options as well. In some cases, no open source tools are available. Being open source, tech support is limited.
Power-aware simulation steps perform register/latch recognition from the RTL design. They identify power elements and power control signals, and support UPF- or CPF-based simulation. Power reports are generated, which can be exported to a unique coverage database.
Common pitfalls include wrapper-on-wrapper bugs, e.g., Verilog + e wrapper + SystemVerilog. There is also a dependency on machine-generated functional coverage goals. There may be a disconnect between the design and verification languages. There are meaningless coverage reports and defective reference models, as well as unclear and ambiguous specification definitions. Even proven IP can become buggy due to wrapper conditions.
Tips and tricks
Some early planning helps, and certain steps need to be completed: completion of code coverage targets, completion of functional coverage targets, completion of targeted checker coverage, completion of the correlation between the functional coverage and checker coverage lists, and a complete review of all known bugs, etc.
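The closure checklist above can be sketched as a simple gating function. The metric names and the 100 percent thresholds here are illustrative assumptions, not figures from the talk:

```python
# Illustrative verification-closure gate for the checklist above.
# Metric names and thresholds are assumptions for the sketch.
def verification_closed(metrics, known_bugs_reviewed):
    """Return (closed, unmet_items) given coverage percentages."""
    targets = {
        "code_coverage": 100.0,
        "functional_coverage": 100.0,
        "checker_coverage": 100.0,
        "func_checker_correlation": 100.0,
    }
    unmet = [name for name, goal in targets.items()
             if metrics.get(name, 0.0) < goal]
    if not known_bugs_reviewed:
        unmet.append("known_bug_review")
    return (not unmet), unmet

ok, gaps = verification_closed(
    {"code_coverage": 100.0, "functional_coverage": 98.5,
     "checker_coverage": 100.0, "func_checker_correlation": 100.0},
    known_bugs_reviewed=True)
print(ok, gaps)  # functional coverage is short of its target
```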
Tips and tricks include bridging the gap between the design language and the verification language. Use minimal wrappers to avoid wrapper-level bugs. Review the coverage goals thoroughly. Ensure better interaction between design and verification engineers. Run using basic EDA tool versions to lower costs.
Thirty percent designs needed three or more re-spins. Three out of four designs are SoCs with one or more processors. Three out of four designs re-use existing IPs. Almost all of the embedded processor IPs have power controllability. Almost all of the SoCs have multiple asynchronous clock domains.
An average of 75 percent designs are less than 20 million gates. Significant increase in formal checking is approaching. Average number of tests performed has increased exponentially. Regression runs now span several days and weeks. Hardware emulation and FPGA prototyping is rising exponentially. There has been a significant increase in verification engineers involved. A lot of HVLs and methodologies are now available.
Verification challenges
Verification challenges include unexpected conflicts in accessing the shared resource. Complexities can arise due to an interaction between standalone systems. Next, there are arbitration priority related issues and access deadlocks, as well as exception handling priority conflicts. There are issues related to the hardware/software sequencing, and long loops and unoptimized code segments. The leakage power management and thermal management also pose problems.
There needs to be verification of performance and system power management. Multiple power regions are turned ON and OFF. Multiple clocks are also gated ON and OFF. Next, asynchronous clock domain crossing, and issues related to protocol compliance for standard interfaces. There are issues related to system stability and component reliability. Some other challenges include voltage level translators and isolation cells.
Where are we now? It is at clock gating, power gating with or without retention, multi-switching (multi-Vt) threshold transistors, multi-supply multi-voltage (MSMV), DVFS, logic optimization, thermal compensation, 2D-3D stacking, and fab process and substrate level bias control.
So, what's needed? There must be be low power methods without impacting on performance. Careful design partitions are needed. The clock trees must be optimized. Crucial software operations need to be identified at early stages. Also, functional verification needs to be thorough.
Power hungry processes must be shortlisted. There needs to be compiler level optimization as well as hardware acceleration based optimization. There should be duplicate registers and branch prediction optimization. Finally, there should be big-little processor approach.
Present verification trends and methodologies include clock partitions, power partitions, isolation cells, level shifters and translators, serializers-deserializers, power controllers, clock domain managers, and a power information format (CPF or UPF). Low-power verification covers both power-down and power-up behavior; on power-up, the behavioral processes are re-enabled for evaluation.
Open source verification challenges
First, the EDA vendor decides what to support! Too many versions are released in a short time frame. Object-oriented concepts are used that are sometimes unfit for hardware. Modelling is sometimes done by an engineer who does not know the difference between a clock cycle and a motorcycle! Next, there are too many open source implementations without much documentation. There can be multiple, confusing implementation options as well. In some cases, no open source tools are available, and tech support is limited because the tools are open source.
Power-aware simulation steps perform register/latch recognition from the RTL design. They identify power elements and power control signals, and they support UPF- or CPF-based simulation. Power reports are generated, which can be exported to a unique coverage database.
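As a toy illustration of the behavior a power-aware simulator must model (non-retention registers corrupting on power-down, isolation cells clamping domain outputs, state restored only when retention is used), here is a hypothetical Python sketch; real flows derive all of this from the UPF/CPF description, not hand-written code:

```python
# Toy model of one power domain in a power-aware simulation.
class PowerDomain:
    def __init__(self, regs, retention=False, clamp=0):
        self.regs = dict(regs)       # register name -> value
        self.retention = retention   # retain state across power-down?
        self.clamp = clamp           # isolation clamp value
        self.saved = None
        self.powered = True

    def power_down(self):
        self.powered = False
        if self.retention:
            self.saved = dict(self.regs)  # shadow (retention) registers
        self.regs = {name: 'X' for name in self.regs}  # corrupt state

    def power_up(self):
        self.powered = True
        if self.retention and self.saved is not None:
            self.regs = self.saved  # restore retained state
        # without retention, registers stay 'X' until reset

    def output(self, name):
        # Isolation cell: clamp outputs while the domain is off
        return self.regs[name] if self.powered else self.clamp

d = PowerDomain({'ctrl': 1}, retention=True)
d.power_down()
print(d.output('ctrl'))  # clamped to 0 while the domain is off
d.power_up()
print(d.output('ctrl'))  # restored to 1 via retention
```

The same sketch shows why a non-retention domain must be reset after power-up: its registers remain 'X' and any output computed from them would be meaningless.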
Common pitfalls include wrapper-on-wrapper bugs, e.g. Verilog + e wrapper + SV. There is also a dependency on machine-generated functional coverage goals. There may be a disconnect between the design and verification languages. There are meaningless coverage reports and defective reference models, as well as unclear and ambiguous specification definitions. Even proven IP can become buggy due to a wrapper condition.
Tips and tricks
Some early planning is needed, and certain steps must be completed: code coverage targets, functional coverage targets, targeted checker coverage, correlation between the functional coverage and checker coverage lists, and a complete review of all known bugs.
Tips and tricks include bridging the gap between the design language and the verification language. Minimal wrappers must be used to avoid wrapper-level bugs. There should be a thorough review of the coverage goals, and better interaction between designers and verification engineers. Running with basic EDA tool versions also lowers costs.
Wednesday, March 6, 2013
Unlocking full potential of soft IP
SoC design challenges and needs are diverse. There are many diverse IP blocks, which are time consuming to verify. Picking the correct IP is critically important.
Speaking on the TSMC Soft IP Alliance program, Dan Kochpatcharin, deputy director, TSMC, said that IP sourcing priorities include: is it available, is it from trusted partners, how is the design quality, and what are the specs and cost? Some other points to note are: is the IP verified, has it been silicon proven, what has been tested, and how many are already in production volume?
The TSMC Soft IP Alliance has 5000+ IP titles from 40+ IP vendors. The IP Alliance program has been expanding, leveraging successful IP. More and more customers are concerned about the technology-specific PPA data of soft IP when doing system design.
The Soft IP Alliance has 16 members. The Soft-IP quality assessment, TSMC 9000, is key. A new soft-IP handoff kit was rolled out in Nov. 2012. Major partners, such as Imagination, Sonics and MIPS, have now joined to drive soft-IP quality.
TSMC Soft-IP 9000 is the industry's first QA assessment system for RTL-based IP. TSMC and IP partners co-optimize RTL/process to deliver PPA-optimized IPs.
Mike Gianfagna, Atrenta, spoke on implementing the program with the Atrenta IP kit. Atrenta's SpyGlass is a systematic approach to soft IP quality.
Looking at what's needed for IP assessment, the right abstraction levels must be supported: soft IP is delivered as generators, RTL or gates, and this is where the biggest need lies. The assessment must be comprehensive, easy to use, objective and quantifiable, actionable, and dynamic and scalable. Atrenta and TSMC announced SpyGlass IP kit 2.0 in October 2012.
What does the IP kit check? Many items that would impact the integration/debug time and chip function were found and fixed. Soft IP qualification can be automated. It results in higher quality deliverables. All soft IP can be improved. Primary beneficiaries are chip integrators.
John Bainbridge, Sonics, spoke on the practical results of program participation. Sonics is a leader in system IP for SoCs. It enables designers to integrate any IP from anywhere, anytime.
Sonics helps leading SoC vendors solve some of the most difficult challenges in SoC design. These can be IP integration, high frequency, memory throughput, security, physical design, power management, development costs, and time-to-market.
Sonics is a lead beta partner for TSMC Soft IP 2.0 kit program. It has worked closely with Atrenta and TSMC to ensure a seamless design flow.
Tuesday, March 5, 2013
Seagate launches Wireless Plus and Central
Seagate Technology has announced the Wireless Plus and Central. Futoshi Nizuma, executive director of sales, Japan, South Asia, ASEAN and NZ, said that 2013 is an evolutionary year in storage.
PC growth is flat in mature markets (US/EMEA) as consumer technology choices grow and dollars are competed for. The mobile revolution is in full swing with smartphones, tablets and eReaders becoming ubiquitous. Digital storage usage is more complex with multiple devices, multiple users and anywhere access.
PC/notebooks remain the consumer digital hub, but the replacement cycle is getting longer. The global trend has been slightly up due to China growth. Mobile adoption positively impacts the storage ecosystem. Mobile devices are used in the home and on the road. 2013 is an inflection point. Mobile devices will have a higher installed base than all PCs.
Tablet shipments will increase 54 percent from 2012 to 2013 and smartphones will be in 51 percent of the households. The forecast for mobile data growth is 78 percent CAGR through 2016, reaching 10.8 exabytes. Also, mobile-connected tablets will generate almost as much traffic in 2016 as the entire global mobile network in 2012.
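That 78 percent CAGR can be sanity-checked arithmetically. Assuming a 2011 starting point of roughly 0.6 exabytes per month (my assumption for illustration; the base year and base value are not stated above), five years of compounding lands very close to the 10.8-exabyte figure:

```python
# Compound annual growth: value_n = value_0 * (1 + cagr) ** years
base_eb = 0.6   # assumed 2011 starting point, exabytes/month
cagr = 0.78
years = 5       # 2011 -> 2016
projected = base_eb * (1 + cagr) ** years
print(f"projected 2016 mobile traffic: {projected:.1f} EB/month")
```

The result, about 10.7 EB/month, is consistent with the forecast quoted above.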
We are also witnessing the emergence of the 3Ms -- multi-screen, multi-user and multi-function. In the short term (1-3 years), PCs remain the digital hub in the home, with mobile and connected devices getting added to the ecosystem. In the mid-term (3-6 years) cloud and NAS will be the storage hub for the home while PCs become more of an edge device.
Here, the Seagate Wireless Plus assumes significance. Mobile storage can now be accessed without the web or wires. Also, the Seagate Central allows you to organize and access your digital life.
Now, you can simplify your life and organize all your content, files and documents in one location with automatic and continuous backup for every computer in the home, wirelessly. You can enjoy your content where you want, when you want. Access your music, movies and docs from computers, game consoles, Smart TVs and other connected devices, all throughout the home. You can now enjoy your media on tablets and smartphones. Browse your universe of files from anywhere with the free and intuitive Seagate Media app, available for Apple iOS and Android.
If you own a Samsung smart TV, you can take advantage of the Seagate Media app (downloadable directly from the Samsung App store) to enjoy easy content browsing with your remote control. Central's Remote Access service gives you the ability to upload or download content wherever you have a Wi-Fi or 3G/4G connection, using a Web browser; it's like your own personal and secure cloud.
Friday, March 1, 2013
Flip-Chip: An established platform still in mutation!
Flip-Chip is a chip packaging technique in which the active area of the chip is flipped over to face downward, instead of facing up and being bonded to the package leads with wires from the outside edges of the chip.
Any surface area of the Flip-Chip can be used for interconnection, which is typically done through metal bumps. These bumps are soldered onto the package and underfilled with epoxy. The Flip-Chip allows for a large number of interconnects with shorter distances than wire, which greatly reduces inductance.
According to Lionel Cadix, market and technology analyst, Yole Developpement, France, metal bumps can be made of solder (tin, tin-lead or lead-free alloys), copper, gold and copper-tin or Au-tin alloys. The package substrates are epoxy-based (organic substrates), ceramic based, copper based (leadframe substrates), and silicon or glass based.
In the period 2010-2018, Flip-Chip will likely grow at a CAGR of 19 percent. In 2012, laptop and desktop PCs were the top end products using Flip-Chip, representing 50 percent of the Flip-Chip market by end product with more than 6.2 million wafer starts. PCs are followed by smart TVs and LCD TVs (for LCD drivers), smartphones, and high-performance computers.
The Flip-Chip market in 2012 was around $20 billion, selling approximately 20 billion units in 12" equivalent wafers. Taiwan is so far the no. 1 producer. At least 50 percent of Flip-Chip devices get into end products. By 2018, the Flip-Chip market should grow to $35 billion, selling 68 billion units.
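The growth rates implied by these endpoints can be derived directly (assuming the 2012 and 2018 figures above and six intervening years; this is my arithmetic, not Yole's):

```python
# Implied CAGR between two endpoints: (end / start) ** (1 / years) - 1
def implied_cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# Revenue: $20B (2012) -> $35B (2018); units: 20B -> 68B
print(f"revenue CAGR: {implied_cagr(20, 35, 6):.1%}")
print(f"unit CAGR:    {implied_cagr(20, 68, 6):.1%}")
```

Unit growth (roughly 23 percent a year) outpaces revenue growth (roughly 10 percent a year), which implies declining average selling prices per unit over the period.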
Applications and market focus
Looking at Flip-Chip applications and market focus, Flip-Chip technology is already present in a wide range of applications, from high-volume/consumer applications to low-volume/high-end applications. All these applications have their own requirements, specifications and challenges!
Some of these applications are military and aerospace, medical devices, automobiles, HPC, servers, networks, base stations, etc, in low volumes. It is present in set-top boxes, game stations, smart TVs/displays, desktops/laptops and smartphones/tablets in high volumes. Flip-Chip applications are in imaging, logic 2D SoCs, HB-LEDs, RF, power, analog and mixed-signal, stacked memories, and logic 3D-SiP/SoCs.
In computing applications, for instance, the Intel Core i5 is the first MCM combining a 77mm2 CPU together with a 115mm2 GPU in a 37.5mm-side package. Solder bumps with a pitch of 185μm are used for the silicon-to-substrate (first-level) interconnect. This MCM configuration is suitable for office applications, with relatively modest processing demands. For mobile/wireless applications, there are opportunities for MEMS in smartphones/feature phones. Similarly, Flip-Chip is available for consumer applications.
For microbumping in interposers for FPGAs, the focus is on the Xilinx Virtex-7 HT. Last year, Xilinx announced a single-layer, multi-chip silicon interposer for its 28nm 7 series FPGAs. Key features include two million logic cells for a high level of computational performance and high bandwidth, four slices processed in 28nm, a 25 x 31mm, 100μm-thick silicon interposer, 45μm pitch microbumps and 10μm TSVs, and a 35 x 35mm BGA with 180μm pitch C4 bumps.