Enterprises should start thinking about what needs to move into their respective clouds. Transformative IT is upending the rules. There will be a cloud-based convergence era in the future, according to Avneesh Saxena, Group VP, Domain Research Group, IDC Asia/Pacific. He was presenting on cloud deployment trends in the Asia Pacific region at the recently held Intel APAC Cloud Summit.
According to him, China and India -- with GDP growth of about 9 percent each -- lead the region and are its change agents. He referred to four mega trends:
* Information -- exploding data. Just under 50 percent of the terabytes shipped in 2014 will reside in the public cloud.
* Mobility - third platform for industry growth. Mobile devices, services and applications will grow from 2011. This will be the intelligent economy.
* The technology catalyst. Servers, blades, cores, VMs, data transmission, 10G ports -- all will grow, some by at least 5-10 times.
* IT spending (external spend only) will be worth $282 billion in Asia Pacific excluding Japan (APeJ). Also, 31 percent of CIOs and 25 percent of LoBs (line of business) plan to spend 11-30 percent more.
The top three priorities for CIOs and LoBs are as follows:
* Simplify the IT infrastructure.
* Lower the overall cost structure.
* Harness IT for competitive edge.
Enterprises will be investing more in mobility and analytics. There will be a move toward consolidation, virtualization and better efficiency, and toward a more flexible, agile and scalable infrastructure in the future.
Saxena outlined three key transformational trends.
* Behavior/access -- mobility, analytics.
* Infrastructure/devices -- convergence, virtualization.
* Delivery/consumption -- cloud.
Mobilution is a confluence of factors. It is mobile everything. A lot of the distribution channels are also cloud driven. Analytics-led competitive acceleration is the primary objective of business analytics projects.
Saxena added that there could yet be another disruption -- in the form of micro servers. The idea is to lower the cost of computing per unit of work. Even Intel's own infrastructure will be 75 percent virtualized within three to four years.
There will also be converged infrastructure for private clouds. Besides, server virtualization is ramping up fast. There will be a huge increase in server shipments by 2014.
Next, device proliferation will have an impact on client virtualization. There is a demand to connect all of our devices -- smartphones, iPads, BlackBerrys, tablets, etc.
Evolving cloud business models include C2C, B2C and B2B. Consumer (C2C) clouds are the most popular -- Hotmail, Gmail, Google Docs, etc. B2C clouds come next -- Netflix, Apple, Skype, etc. Finally, there are B2B clouds -- enterprise clouds -- where security and SLAs are the differentiators.
Security/regulation are critical for public clouds. As of now, private clouds are deemed to be more secure than public clouds. Solving cloud security and compliance is a huge revenue opportunity for vendors.
Thursday, July 28, 2011
Reference architecture -- starting point to build and optimize cloud infrastructure
Rekha Raghu, Strategic Program Manager, Intel, Software and Services Group, discussed some reference architecture (RA) case studies.
The Intel Cloud Builders program centers on reference architectures (RAs) -- starting points from which to build and optimize cloud infrastructure. The RA development process takes anywhere from two to three weeks and involves exploration, planning, integration, testing and development. The RAs are said to offer:
* Detailed know-how guides.
* Practical guidance for building and enhancing cloud infrastructure.
* Best-known methods learned through hands-on lab work.
RA case study # 1 – efficient power management
Data center power management involves monitoring and controlling server power and, later, managing and coordinating at the data center level. Dynamic power management operates at the server, rack and data center levels.
Power management use cases help to save money via real-time monitoring, optimized workloads and energy reduction. They allow scaling further via a power guard rail and optimized rack density. They also help prepare for the worst in terms of disaster recovery/business continuity.
Intel also presented a power management RA overview as well as an implementation view. The monitoring, reporting and analysis provide insight into energy use and efficiency, as well as CO2 emissions.
Rack density optimization and the power guard rail enable more servers to be deployed per rack. They improve the opex cost of power delivery per system and extend the capex data center investment by allowing more nodes to be deployed.
As for disaster recovery/business continuity, there is policy-based power throttling per node to bring the data center back to life more quickly and safely. The next step involves inlet temperature monitoring and response based on thermal events (already available in Intel Intelligent Power Node Manager).
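To make the power guard rail and policy-based throttling ideas concrete, here is a minimal sketch of proportional power capping at the rack level. The Node class, its cap field and the capping logic are illustrative placeholders only; this is not the Intel Intelligent Power Node Manager interface, which enforces caps through platform firmware and management consoles.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    measured_watts: float              # current draw reported by the node's power telemetry
    cap_watts: Optional[float] = None  # enforced ceiling, if any

def enforce_rack_guard_rail(nodes: List[Node], rack_budget_watts: float) -> None:
    """If the rack exceeds its power budget, cap every node proportionally."""
    total = sum(n.measured_watts for n in nodes)
    if total <= rack_budget_watts:
        return                         # within budget: leave nodes uncapped
    scale = rack_budget_watts / total
    for n in nodes:
        n.cap_watts = round(n.measured_watts * scale, 1)

rack = [Node("blade-01", 310.0), Node("blade-02", 290.0), Node("blade-03", 420.0)]
enforce_rack_guard_rail(rack, rack_budget_watts=900.0)
for n in rack:
    print(n.name, "capped at", n.cap_watts, "W")
```

The same proportional idea extends upward: rack budgets roll up into row and data center budgets, which is what allows more nodes per rack without exceeding the facility's power envelope.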
Workload-power optimization identifies optimal power reduction without performance impact. Customized analysis is required as each workload draws power differently.
RA case study # 2 – enhanced cloud security
If one looks at the trends in enterprise security, there are shifts in the types of attacks. The platform itself is now a target, not just the software, and stealth and control are the attackers' objectives.
There are also increased compliance concerns. HIPAA, the Payment Card Industry (PCI) standard, etc., require security enforcement and auditing. Changes in architectures require new protections as well. These include virtualization and multi-tenancy, third-party dependencies, and location identification.
Trusted compute pool usage models bring compliance and trust to the cloud. Multi-tenancy can complicate compliance, and software trust is needed despite the physical abstraction. Compliance also requires effective reporting, and VM migration needs to be enforced based on security policy.
The Intel-VMware-HyTrust stack enables trusted compute pools. The outcome is that data integrity is protected and compliance is not violated.
Intel Trusted Execution Technology (TXT) enforces platform control. It allows greater control of the launch stack and enables isolation in the boot process. It also complements runtime protections and reduces support and remediation costs. Hardware-based trust provides verification that is useful for compliance.
The HyTrust appliance enforces policy. It is a virtual appliance that provides unified access control, policy enforcement, and audit-quality logging for the administration of virtual infrastructure.
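As a rough illustration of the trusted-compute-pool policy described above, the sketch below gates VM migration on a host's attested launch state. The class names, fields and policy check are hypothetical; they are not the HyTrust or VMware APIs, only the shape of the decision those products automate.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Host:
    name: str
    attested_trusted: bool        # result of a hardware-rooted (TXT-style) launch measurement

@dataclass
class VM:
    name: str
    requires_trusted_host: bool   # security-policy label, e.g. for a regulated workload

def can_migrate(vm: VM, target: Host) -> bool:
    """Policy check run before a live migration is scheduled."""
    if vm.requires_trusted_host and not target.attested_trusted:
        return False              # migration would violate the compliance policy
    return True

pool = [Host("host-01", attested_trusted=True), Host("host-02", attested_trusted=False)]
vm = VM("billing-db", requires_trusted_host=True)
eligible = [h.name for h in pool if can_migrate(vm, h)]
print("Eligible migration targets:", eligible)   # -> ['host-01']
```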
Intel provides solutions to pro-actively control and audit virtualized data centers.
Intel Cloud Builders program helps utilize proven reference solutions to ease deployments
Billy Cox, director, Cloud Strategy, Software and Services Group, Intel, said that IT and service providers need to define and prioritize IT requirements. Products and technologies go on to take advantage of new capabilities in Intel platforms. The Intel Cloud Builders program helps utilize proven reference solutions to ease your deployments.
Technology to optimize the CloudNode requires orchestration and automation. This involves compliance and security, high-performance I/O, and density and efficiency.
So, what is the need for reference architectures now? Where do you draw the lines for a reference architecture? Enterprises have mostly relied on build-to-order architectures. With the advent of cloud, there is a shift toward configure-to-order architectures.
Cloud hardware architectures generally focus on homogeneous compute, flat networks and distributed storage. The cloud software IaaS stack looks at horizontal management roles. The focus is on service delivery.
The Open Data Center usage models include:
* Secure federation -- provider assurance and compliance monitoring.
* Automation -- VM interoperability and ID control.
* Common management and policy -- regulatory framework.
* Transparency -- service catalog, standard unit of measurement, and carbon footprint, where cloud services become “CO2 aware”.
Cox also referred to data center usage models in 2011, where Intel is delivering products and technologies to address these usage models.
The Intel Cloud Builders program's reference architectures are a starting point from which to build and optimize cloud infrastructure. Solutions are available today to make it easier to build and optimize cloud infrastructure. Intel offers proven, open, interoperable solutions optimized for IA capabilities. It is also establishing the foundation for more secure clouds.
Data center efficiency priorities involve achieving efficiency and reliability by maximizing available capacity and modular build out for growth. Intel has a holistic approach – systems, rack, design and monitoring.
For instance, the Unified Network consolidates traffic on a 10G Ethernet fabric. It simplifies the network by migrating to 10GbE and lowers TCO by consolidating the data and storage networks. A flexible network is the foundation of cloud architecture.
Intel Cloud Builders is easing cloud deployments via proven, interoperable solutions for IT.
Wednesday, July 27, 2011
Intel’s vision for the cloud
According to Allyson Klein, director, Leadership Marketing, Data Center Group, Intel Corp., the compute continuum has arrived. The connected world is becoming larger and more diverse. There will be more than 1 billion new users by 2015.
We are witnessing a sea of new devices, limited only by our creativity -- more than 15 billion connected devices are estimated by 2015. All of these devices are creating a renaissance in the compute experience: pervasive, simple computing. They will once again change the ways we work and live.
They also open a new frontier of insight, simplifying our lives and making our world more efficient. So, what about the cloud? Cloud will be the performance engine of the compute continuum.
A new economic model for computing has been introduced: roughly 600 Apple iPhones require a new server, as do roughly 120 iPads. And this is said to be only the beginning!
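A quick back-of-the-envelope calculation shows how that rule of thumb translates into data center demand; the device counts used here are invented purely for illustration.

```python
# Rule of thumb quoted above: roughly one new server per ~600 smartphones or ~120 tablets.
def servers_needed(smartphones: int, tablets: int,
                   phones_per_server: int = 600, tablets_per_server: int = 120) -> int:
    return smartphones // phones_per_server + tablets // tablets_per_server

# 3,000,000 / 600 + 600,000 / 120 = 5,000 + 5,000 = 10,000 servers
print(servers_needed(smartphones=3_000_000, tablets=600_000))
```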
The data center processor growth has been >2X in 10 years. Data center acceleration is estimated to be >2X in the next five years. Cloud’s contribution to data center growth will be significant. In 2010, cloud was contributing 10 percent. This should double to 20 percent in 2015.
Intel’s strategy for creating the cloud includes:
* IT & service providers -- define and prioritize IT requirements.
* Products & technologies -- take advantage of new capabilities in Intel platforms.
* Intel Cloud Builders -- utilize proven reference solutions to ease your deployments.
The Open Data Center Alliance is a catalyst for change, given that open and interoperable solutions are essential. In October 2010, the Alliance established the first user-driven organization for cloud requirements: 70 IT leaders, joined by Intel as technical advisor, with five technical working groups formed.
In June 2011, the Open Data Center Alliance released the first user-driven requirements for the cloud. It now has four times as many members, representing more than $100 billion in annual IT spend. There have been new technical collaborations as well -- four organizations and four initial solutions providers.
The Alliance endorses immediate use to guide member planning and purchasing decisions.
Tuesday, July 26, 2011
Cloud key strategy for Intel: Liam Keating
"The benefits of cloud are real! We have so far seen $17 million savings to date from our internal cloud efforts," said Liam Keating, Intel APAC IT director and China IT country manager. He was speaking at the Intel APAC Cloud Summit.
Intel currently runs 91 data centers, globally, from over 140, about a couple of years ago. As of now, cloud has become a key strategy for Intel.
If one views Intel's data center profile, it looks something like this:
D - Design - Expanded HPC solutions.
O - Office - Enterprise private cloud.
M - Manufacturing - Factory automation.
E - Enterprise - Enterprise private cloud.
S - Services - Enterprise private cloud.
In the past (in 2009), Intel had 12 percent virtualization, and it had a design grid as well. According to Keating, Intel's experience with grid computing helped shape the company's cloud computing strategy. Currently, Intel boasts over 50 percent virtualization; in the future, this should move to over 75 percent. Keating added that Intel will continue to experiment with and evolve its use of the public cloud.
As for the applications residing on the internal cloud, these include: engineering 5 percent, sales/marketing 19 percent, ERP 13 percent, HR/finance/legal 22 percent, operations/security/manageability 26 percent, and productivity/collaboration 15 percent.
The business benefits are immense. "We are improving the velocity and availability of IT services," Keating said. He outlined five strategic benefits, as below:
* Agility - immediate provisioning.
* Higher responsiveness.
* Lower business costs.
* Flexible configurations.
* Secured infrastructure.
In terms of business velocity, provisioning time has been reduced from 90 days to three hours, and Intel is now on its way to minutes! As for efficiency, server consolidation is at a 20:1 ratio. In terms of capacity, there has been a shift from capacity planning to demand forecasting. Finally, on quality, standard configurations improved consistency and enabled automation.
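A hypothetical sketch of why standard configurations cut provisioning from months to hours: a small, fixed catalog lets a self-service request be validated and fulfilled automatically. The catalog entries and the provision() stub below are illustrative only and do not represent Intel IT's internal tooling.

```python
import uuid

STANDARD_CONFIGS = {            # a small, fixed catalog keeps requests automatable
    "web-small":  {"vcpus": 2, "ram_gb": 4,  "disk_gb": 40},
    "app-medium": {"vcpus": 4, "ram_gb": 16, "disk_gb": 100},
    "db-large":   {"vcpus": 8, "ram_gb": 64, "disk_gb": 500},
}

def provision(config_name: str, owner: str) -> dict:
    """Validate the request against the catalog and return the new VM record."""
    spec = STANDARD_CONFIGS.get(config_name)
    if spec is None:
        raise ValueError(f"{config_name!r} is not a standard configuration")
    return {"vm_id": str(uuid.uuid4()), "owner": owner, **spec}

print(provision("app-medium", owner="finance-bi"))
```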
Intel has learned best-practice lessons from implementing the cloud. First, the cloud terminology itself. There has been leadership support as well as IT-business partnership. Intel has also set short-term priorities -- pervasive virtualization and faster provisioning -- and has learned to manage with data: P2V RoI, measured services, business intelligence (BI) collection, server sizing, etc.
Current challenges facing Intel include asset management and utilization. There is a need to be cognizant of performance saturation, and also understand the degrees of separation. The integrated management view is critical in all of this.
Another challenge is presented by capacity planning, which is shifting to demand forecasting. Quicker provisioning requires a view into the future cloud as well. Next, automation reinforces the workforce too!
Intel's IT division has successfully developed a private cloud. This has aligned the IT strategy to business needs. Business benefits will generate value. Cloud transition is a multi-year journey.
Sunday, July 24, 2011
Dr. Wally Rhines on global semicon industry
Thanks to my friend, Veeresh Shetty at Mentor Graphics, I was able to meet up with Dr. Walden (Wally) C. Rhines, chairman and CEO, Mentor Graphics, as well as with Hanns Windele, VP Mentor Graphics (Europe & India), for a short conversation regarding the global semiconductor industry.
Growth of global semicon industry
First, I sought Dr. Rhines' views on the growth of the global semiconductor industry. Dr. Rhines said: "Capital investment in the foundries has been quite high. TSMC, GlobalFoundries, Samsung, etc., have invested double. In 2012, some of the foundries will run at a lower percentage of capacity. If that happens, foundry wafer prices might fall. However, equipment prices would not decrease."
So, what has the industry learned from the previous recession? He said: "Capacity in the semicon industry was relatively tight in Q408. In 2009, we called it an inventory correction. If we had not had a recession, there would have been a capacity shortage.
"Now, companies seem to have caught up. There was large investment in the manufacturing capacity in 2010, and that has continued into 2011. There is more new capacity coming into foundries by 2012. Investment in memory has been modest. However, fabless companies should find more capacity in 2012."
Hanns Windele added: "The automotive industry was contributing to all of this as well. As of now, 45 percent is consumed by the computer industry, 20 percent by the communications industry, and consumer electronics and automotive account for 5-10 percent, approximately."
It appears that communications is attached to almost everything one buys today. Dr. Rhines also reckoned that PC shipments were holding up well, for now.
He noted that one thing no one wants to do is give up on the PC. "The only reason to have an iPad with 64GB of memory is because the price of NAND flash has come down," he added. "In future, people will find ways to develop better iPads, cell phones, etc. There will be a lot of iPad variations in the future. On the other hand, Apple will innovate, as well."
Windele noted: "More connectivity would be required in the future. Also, last year, Apple paid approximately $3 billion to applications suppliers."
So, what about social networks? Dr. Rhines said: "Social networking is more of a bubble. However, applications are not. They help companies generate revenues." He added, "The network infrastructure is not good yet. However, it will improve."
Friday, July 22, 2011
Creating measurable value through differentiation: Dr. Wally Rhines
According to Dr. Walden (Wally) C. Rhines, chairman and CEO, Mentor Graphics Corp., customers pay a premium for differentiated products. Gross profit margin (GPM) percentage is the best measure for the differentiation of a manufactured product. He was speaking at the Mentor Graphics' EDA Tech Forum 2011 in Bangalore, India.
The difficulty of switching suppliers is proportional to differentiation and GPM. As an example, Apple released the Mac Classic to compete with IBM clones and regain market share in the PC industry. However, it did not gain market ascendancy. It was only when Apple introduced the iPod in 2H '01 that things began changing. Later, it introduced the iPhone in 2H '07. The rest is, for now, history.
Product differentiation is said to be easiest in new and emerging markets. Apple has since invested in semiconductor design, while Nokia has divested. Apple now reduces power and improves performance through design differentiation.
On the other hand, Nokia has divested its IC design, as it is now difficult to create a differentiated ecosystem even for the market leader. It is now using Windows Phone 7 for developing smartphones. The question is: is Android vs. iPhone an analogy to the PC vs. the Mac?
The smartphone market will eventually commoditize. However, this time, there has been substantial differentiation. Product differentiation provides only a temporary advantage; company-created infrastructure sustains differentiation, and a third-party ecosystem drives longer-term differentiation.
Now, Apple isn't the only company with sustainable differentiation. Intel and AMD have also invested in application development, with the differentiation of Intel's x86 MPUs a prime example.
In mature commodity products, system integration reduces cost and power while increasing performance. For example, Texas Instruments' calculators are commoditized and selling well. The practice of involving education in product development has helped TI.
Wednesday, July 20, 2011
Trends in embedded -- smart and green energy: ST
It is such a pleasure interacting with Vivek Sharma, VP, Greater China & South Asia-India Operations, and director, India Design Centers, STMicroelectronics. While presenting the latest trends in embedded technologies, he hoped that there could eventually be a fab in India, by 2015. Speaking about ‘More Moore’ and ‘More than Moore’, he talked about 3D heterogeneous integration and smart sensors, which provide new, high-growth opportunities. Sharma largely touched upon smart and green energy.
India’s opportunities to leapfrog are immense, especially with a median age of 25.9 years. As for the Indian consumption context, India's share was about 3 percent of worldwide consumption in 2009/2010 -- said to be $45 billion, or ~3 percent, of electronics consumption and $6.7 billion, or ~2.5 percent, of semiconductor consumption.
Taking a look at leveraging of electronics by nations (as per 2005 data), Taiwan leads with 15.5 percent of GDP, followed by South Korea at 15.1 percent, China at 12.7 percent, Thailand at 12.4 percent, Germany at 8.3 percent, USA at 5.4 percent, Japan at 4.5 percent, and India at 1.7 percent, respectively.
"More than Moore" diversification has been taking place, especially, by combining SoC and SIP to produce higher value systems.
3D heterogeneous integration has been taking place by integrating multiple functions via 3D/TSV. This involves the vertical stacking and connection of various materials, technologies and functional components together:
* Bio, MEMS and other sensors.
* Digital processing (MCUs, MPUs).
* RF transceivers for data transmission.
* Micro-battery (i.e. thin film).
* Other analog ICs and mixed technologies.
Advantages include integrated multi-functionality, more interconnections, reduced power consumption, smaller packaging, increased yield and reliability, and reduced overall costs.
Smart system integration is another trend, which enables combining “More than Moore” and “More Moore” technologies in a single smart system -- from multi-package on board to multi-chip on package.
Sunday, July 17, 2011
June 2011 global semicon sales expectation for 2011: Cowan LRA model
This is a continuation of my coverage of the fortunes of the global semiconductor industry. I would like to acknowledge and thank Mike Cowan, an independent semiconductor analyst and developer of the Cowan LRA model, who has provided me the latest numbers.
June 2011's "actual" global semiconductor sales number is scheduled to be released by the WSTS, via its monthly HBR (Historical Billings Report), on or about Friday, August 5th. The monthly HBR is posted by the WSTS on its website.
In advance of the upcoming June sales release by the WSTS, Mike Cowan will detail an analysis capability using the Cowan LRA forecasting model to project worldwide semiconductor sales for 2011; namely, the ability to provide a "look ahead" scenario for year 2011's sales forecast range as a function of next month's (in this case June's) "actual" global semiconductor sales estimate.
The output of this "look ahead" modeling capability is captured in the scenario analysis matrix displayed in the table below; the details of these forecast results are also summarized in the paragraphs immediately following the table. (Source: Cowan LRA model, USA.)
In order to facilitate the determination of these "look ahead" forecast numbers, an extended range of possible June 2011 "actual" sales is selected a priori. In this particular scenario analysis, a June 2011 sales range from a low of $27.935 billion to a high of $30.935 billion, in increments of $0.250 billion, was chosen, as listed in the first column of the above table.
This estimated range of possible "actual" sales numbers is centered around a projected June sales forecast estimate of $29.435 billion, as gleaned from last month's Cowan LRA model run (based upon May's WSTS-published "actual" sales number). The corresponding June 3MMA sales forecast estimate is projected to be $25.445 billion. (Note: this assumes no, or only minor, revisions to April's or May's previously published "actual" sales numbers released last month by the WSTS.)
The overall year 2011 sales forecast estimate for each one of the assumed June sales over the pre-selected range of 'actual' sales estimates is calculated by the model, and is shown in the second column of the table.
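For readers who want to reproduce the shape of that scenario matrix, the sketch below steps the assumed June "actual" sales across the stated range and runs each value through a forecast function. The year_2011_forecast() body is only a stand-in placeholder -- the real Cowan LRA model is a linear-regression fit to historical WSTS data that is not reproduced here.

```python
# Build the "look ahead" scenario column: assumed June 2011 sales from $27.935B
# to $30.935B in $0.250B steps, each mapped to a full-year 2011 forecast.
# year_2011_forecast() is a placeholder; the actual Cowan LRA model is a
# regression on historical WSTS sales data and is not shown here.

def year_2011_forecast(assumed_june_sales_b: float) -> float:
    # Purely illustrative linear response centered on the $29.435B June estimate.
    return 300.0 + 1.0 * (assumed_june_sales_b - 29.435)

for step in range(13):                       # 13 rows cover $27.935B .. $30.935B
    june = 27.935 + 0.250 * step
    print(f"assumed June sales ${june:.3f}B -> 2011 forecast ${year_2011_forecast(june):.3f}B")
```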
Friday, July 15, 2011
Very fitting finale to Harry Potter!
What a writer, J.K. Rowling! What a movie, the Harry Potter series! And, of course, what an illustrious star-cast!
I've just returned home after watching Harry Potter and the Deathly Hallows - Part II. And, I can't stop thinking about the movie! Actually, I can't stop thinking about the entire series! What a classic this has turned out to be! For those who haven't read the series, there's nothing to worry about -- your friends will surely provide a running commentary alongside you! ;)
Harry Potter and the Deathly Hallows - Part II starts off from where Part I ended, and thereafter it turns into a literal roller-coaster ride. Starting from the journey to the Gringotts Wizarding Bank, with Hermione Granger impersonating Bellatrix, little time is wasted before Harry, along with Hermione and Ron Weasley, returns to Hogwarts. We are introduced to Aberforth Dumbledore, the younger brother of the late Prof. Albus Dumbledore, and to their late sister Ariana, whose portrait goes and fetches Neville Longbottom from the school.
Now starts the real fun!
From Harry confronting Helena Ravenclaw, the 'Grey Lady'; the Death Eaters attacking Hogwarts; the very brave resistance and defense put up by the school's inhabitants, led by Prof. Minerva McGonagall; the tragic death of Severus Snape at the hands of Voldemort and his pet snake, Nagini; and Snape's final meeting with Harry, after which Harry views Snape's pensieve and learns of Snape's love for Harry's mother, Lily Potter; right up to the moment Harry enters the Forbidden Forest to meet his death -- it's all breathtaking!
The scene in the Forbidden Forest, where Voldemort appears to have 'killed' Harry, is quite chilling!
Thereafter, it is all about good triumphing over evil! But you need to watch the movie yourself, don't you? ;)
All the right messages seem to have been conveyed in Harry Potter and the Deathly Hallows -- Part II. Heroes require help to overcome evil, and so does Harry. Every person has a chance to do what's right -- as shown by Harry -- even though that chance or choice may not turn out to be correct or right. Friends are shown standing up for each other, although, in the end, several fall -- notably Remus Lupin, his wife Nymphadora Tonks, and Fred Weasley. Some battles are personal -- again, Harry's -- and yet bigger than any one person. Harry's rescue of his arch-rival, Draco Malfoy, from the Room of Requirement is a great example of helping someone in great need!
What about the 3D effects? While I am not the right person to comment, one feels the movie would have looked just as fine without the 3D effects -- but never mind!
As with all good things, the Harry Potter series of movies has now come to an end. And boy, don't you feel it? It has been a decade long journey - first, with the books, and later, with the movies. As Harry Potter feels at the end of the book: All was well!
Intel leads industry transformation to open data centers and cloud computing
Intel India held a demonstration of “The-Cloud-in-a-Box,” conducted by Nick Knupffer, marketing program manager, Intel Corp.
According to him, user experience is the driving force in our industry: both device and the cloud. Innovation starts with the best transistors. He added that cloud computing is not only inevitable; it is imperative. Intel is said to have the right solutions required to enable a connected world.
By 2015, there will be more users, over 15 billion connected devices, and naturally, more data -- 1 zettabyte of Internet traffic. Internet and device expansion drives new requirements for IT solutions.
Intel’s Cloud 2015 vision is one of federated, automated and client aware networks. Federated, so that data can be shared securely across public and private clouds. Client aware, so that services can be optimized based on device capability. Automated, so that IT can focus more on innovation and less on management.
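As a purely illustrative sketch of the "client aware" part of that vision, the function below picks a stream rendition from the capabilities a client reports; the device profiles and thresholds are invented for the example and come from no Intel specification.

```python
def pick_video_rendition(device: dict) -> str:
    """Choose a stream based on the screen size and downlink the client reports."""
    if device.get("downlink_mbps", 0) < 1.5:
        return "360p"   # constrained link: serve the lightest rendition
    if device.get("screen_height_px", 0) >= 1080 and device.get("downlink_mbps", 0) >= 8:
        return "1080p"  # big screen and fast link: serve full HD
    return "720p"

print(pick_video_rendition({"screen_height_px": 1136, "downlink_mbps": 3.0}))   # -> 720p
print(pick_video_rendition({"screen_height_px": 2160, "downlink_mbps": 25.0}))  # -> 1080p
```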
Data center processor growth has been >2X in five years. The mobile Internet that runs on Intel is also growing Intel's data center business. Intel has moved to more advanced process nodes, and 22nm is a revolutionary leap in process technology. Today, 70 percent of global CIOs have cloud security top of mind.
Intel is now building the ecosystem around better, faster and stronger security based on Xeon. Here, Intel Advanced Encryption Standard New Instructions (Intel AES-NI) and Intel Trusted Execution Technology (Intel TXT) are prominent.
Telesphere Videoconnect: Videoconferencing in the cloud!
If analysts are to be believed, the videoconferencing and telepresence market will more than double to $5 billion in annual revenue over the next four years. In this context, Phoenix, US-based Telesphere has introduced VideoConnect, a hosted video service that promises to let a business of any size implement videoconferencing quickly and cost-effectively, supporting webcam-equipped PCs, room-based video systems, videophones, smartphones, tablets and softphones.
Telesphere VideoConnect expands the company’s already broad range of cloud-based business communication solutions, such as hosted voice, hosted call center and hosted call recording for businesses.
So, what exactly is Telesphere VideoConnect and how does it make use of the cloud? According to Sanjay Srinivasan, CTO, VideoConnect is a hosted video conferencing service that allows callers to join reservationless video conferences using a variety of endpoints, including video phones, telepresence room systems and PC softphones with webcams, and it also targets the multitude of tablets and smartphones. It supports HD-quality video.
"All of the bridging/conferencing intelligence resides in the cloud and the end users need not have and operate complicated video equipment on premises. Additionally, being in the cloud, it removes the complexities of having to deal with firewall and NAT traversal issues as it is based on industry standard IP protocols. Users join a video conference by dialing in to a bridge, and entering a passcode," he added.
Naturally, that leads to what the hosted infrastructure involves. Srinivasan said: "The infrastructure involves amongst other things the bridging/conferencing systems and network session border controllers to allow seamless NAT/firewall traversal and bandwidth control/management to support a variety of network bandwidth availability situations."
On call bridging, the company says up to 12 simultaneous legs can be enabled. Does this mean that up to 12 users can be on one call on VideoConnect?
"Yes," said Srinivasan. "Up to 12 users/legs can be on a call. Of course, if one of the legs is a conference room, that only counts as one leg independent of the number of people in the room assuming there is one camera in there."
One-to-one calls are kept free, for now. Therefore, does it mean this is a free solution for most users?
He said: "This means that two endpoints on our network are able to have a peer-to-peer video call at no charge. A good example of this might be a multi-site customer that has people having video calls with each other and if they use a video phone with good sized screen, they can use it for ad hoc conferencing as well."
Now, what kind of pricing strategy has Videoconnect firmed up, if at all? "The pricing strategy will be made available in updates coming soon. However, the overall strategy used will be in line with the concept of hosted services in general enabling customers to leverage this capability with monthly fees," concluded Srinivasan.
Available to select Telesphere customers immediately, Telesphere VideoConnect features an intuitive user interface (UI) and hosted infrastructure that combine to create a nearly flat learning curve for employees.
Thursday, July 14, 2011
Ether 1.3.1 phone adaptive antenna solution integrates with smartphones!
San Diego, USA-based Ethertronics Inc., which enables innovative antenna and RF solutions to deliver the best connected experience, has launched Ether 1.3.1, a phone adaptive antenna solution. Ready for integration with smartphones or other classes of phones, Ether 1.3.1 can realize design benefits such as a 50 percent reduction in antenna volume while maintaining compliant performance.
According to Laurent Desclos, president and CEO, Ether 1.3.1 allows an antenna system to dynamically tune itself for optimum performance. Phone form factors are constantly changing throughout the design cycle.
“Current solutions, using passive antennas, require the antenna to be re-tuned with each change to the phone form factor, lengthening the time to market. Ether 1.3.1′s advanced active circuitry is able to adapt to changes in the form factor, reducing the need for lengthy antenna redesigns.”
In addition, Ether 1.3.1 can be designed to take up less volume than other antennas (up to a 50-percent reduction), providing more space for other components, and yet, still remain specification compliant.
Is this solution only suitable for smartphones then? Desclos said that Ether 1.3.1 is not limited to just smartphones. It can be integrated into all tiers of devices such as feature phones and tablets supporting 2G, 3G, and 4G mobile device designs. Ether 1.3.1 is said to be ready for commercial deployment. Several design references have been accomplished to date. Products from OEMs will be announced in the future.
It is said that Ether 1.3.1 allows more freedom in antenna structure design. Elaborating, Desclos said: “Ether 1.3.1 allows more freedom in antenna structure design in a few core areas: size, placement and ability to meet performance specifications. Through the use of active impedance matching techniques, smaller volume antennas can be achieved.”
This is especially important as phone form factors shrink, while more components are added to phones for increased functionality (cameras, GPS, etc.). Ether 1.3.1 can additionally be used to achieve compliance as the antenna system can be dynamically tuned for known challenge areas in specification compliance.
Finally, how can the Ether 1.3.1 solution meet tougher design challenges while shrinking the antenna?
Typically, when the antenna's size is decreased, performance suffers, since there is less volume to cover the required bandwidth. The beauty of active impedance matching is that the technique allows the antenna volume to be reduced by as much as 50 percent while still maintaining compliant performance. As a result, active impedance matching allows for a wide range of designs, since the technique is applicable to a broad range of form factors.
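The tuning behaviour Desclos describes can be pictured as a simple closed loop: the radio periodically measures link quality and switches the matching network to whichever state performs best for the current form factor and loading. The Python sketch below only illustrates that control idea, with assumed names and a made-up quality metric; it is not Ethertronics' algorithm.

# Illustrative sketch of closed-loop adaptive impedance matching (not Ethertronics' algorithm).
# The antenna front end exposes a small set of discrete matching states; the loop
# measures a link-quality metric for each state and keeps the best one.

import random

MATCHING_STATES = [0, 1, 2, 3]  # hypothetical tuner settings

def measure_link_quality(state):
    """Stand-in for a real RSSI or return-loss measurement at a given tuner state."""
    return random.gauss(mu=-70 - 3 * abs(state - 2), sigma=1.0)  # fake dBm figure

def retune():
    """Pick the matching state with the best measured link quality."""
    best_state, best_quality = None, float("-inf")
    for state in MATCHING_STATES:
        quality = measure_link_quality(state)
        if quality > best_quality:
            best_state, best_quality = state, quality
    return best_state, best_quality

# In a handset this loop would re-run whenever the environment changes
# (hand grip, slider open or closed, new band), keeping a small antenna compliant.
state, quality = retune()
print("selected matching state %d at %.1f dBm" % (state, quality))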
Sunday, July 10, 2011
Global forecast estimates based on WSTS's May semicon sales: Cowan LRA model
This is a continuation of my coverage of the fortunes of the global semiconductor industry. I would like to acknowledge and thank Mike Cowan, an independent semiconductor analyst and developer of the Cowan LRA model, who has provided me the latest numbers.
The WSTS posted May 2011's HBR, Historical Billings Report, on its website on Tuesday, July 5th, 2011.
According to the WSTS’s HBR, May’s actual sales came in at $23.494 billion with the corresponding May 3MMA sales at $25.031 billion. It should be noted that two months experienced slight downward sales revisions from last month’s published HBR, namely March (down by $0.147 billion) and April (down by $0.112 billion), respectively.
The Cowan LRA model’s sales forecast estimates for May as published last month were $24.565 billion (actual) and $25.474 billion (3MMA), respectively. Thus, the model’s May MI (Momentum Indicator) came in at minus 4.4 percent.
This indicates (mathematically speaking) that the semiconductor industry's actual May sales fell short of the model's expectation by $1.071 billion and that, most probably, 2011's sales growth will be trending downward for the rest of this year.
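For readers who want to reproduce the Momentum Indicator, it is simply the percentage deviation of the actual monthly sales from the model's prior forecast. The quick Python check below is my own back-of-the-envelope arithmetic, not Cowan's code, and it recovers the minus 4.4 percent quoted above.

# Back-of-the-envelope check of May 2011's Momentum Indicator (all figures in $ billion).
may_actual   = 23.494   # WSTS HBR actual sales for May 2011
may_forecast = 24.565   # Cowan LRA model forecast published last month

mi = (may_actual - may_forecast) / may_forecast * 100.0
shortfall = may_forecast - may_actual

print("MI = %.1f percent" % mi)                 # -> MI = -4.4 percent
print("shortfall = $%.3f billion" % shortfall)  # -> $1.071 billion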
Plugging the latest actual sales number for May into the model yields the updated sales and sales-growth forecast estimates for 2011 summarized in the take-aways below (source: Cowan LRA model).
The key take-aways from comparing the latest versus previous month's forecast results are highlighted below:
* 2011's sales forecast estimate fell by $3.937 billion to $318.391 billion (from last month's sales forecast estimate of $322.328 billion).
* Correspondingly, 2011's sales growth forecast estimate dropped by 1.3 percentage points to 6.7 percent (from last month's 8.0 percent sales growth forecast estimate).
* June 2011's actual sales forecast expectation is $29.435 billion, which corresponds to a June 3MMA sales estimate of $25.445 billion, assuming no (or minor) revisions to April's or May's previously published actual sales (see the quick check below).
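As a rough consistency check on the bullet points above (again my own arithmetic, not the model itself): the 3MMA is just the trailing three-month average of actual sales, and the forecast deltas follow directly from this month's and last month's published estimates.

# Consistency check on the updated forecast figures (all sales in $ billion).

# Year-2011 sales forecast, this month vs. last month
prev_2011, new_2011 = 322.328, 318.391
print("2011 estimate fell by $%.3f billion" % (prev_2011 - new_2011))       # ~3.937

# Growth forecast, this month vs. last month (percentage points)
prev_growth, new_growth = 8.0, 6.7
print("growth estimate fell by %.1f points" % (prev_growth - new_growth))   # 1.3

# June's 3MMA is the trailing average of April, May and June actual sales, so the
# April figure implied by the quoted numbers (assuming no revisions) can be backed out.
june_3mma, may_actual, june_expected = 25.445, 23.494, 29.435
implied_april = 3 * june_3mma - may_actual - june_expected
print("implied April actual ~ $%.3f billion" % implied_april)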
Next month's forecast update, based upon June's actual sales, is expected to be available on or about Friday, Aug 5th, 2011.
Thursday, July 7, 2011
Now, there is EDA software piracy!
Great! That's what was required!! As though software piracy isn't enough, there is now an article about EDA software piracy!!!
According to the article, the anti-piracy committee of the Electronic Design Automation Consortium (EDAC) estimates that 30-40 percent of all EDA software use is via pirated licenses. That's a huge number!
What are the chief reasons for EDA software piracy? Surely, it can't be attributed to the Far East countries alone -- certainly not just to China and Taiwan, or, for that matter, India.
Everyone in the semiconductor industry knows that EDA software is required to design chips, and that hefty license fees are involved, which companies have to pay.
Designing a chip is a very complex activity that requires EDA software. EDA firms send sales teams all over the country. Why, some EDA vendors are also known to form alliances with technical colleges and universities, offering their software to such institutes at a very low cost.
Back in 2006, John Tanner wrote an article in Chip Design, stating: EDA tools shouldn't cost more than the design engineer!
However, how many of these EDA licenses are properly used? And has any EDA vendor that goes out to the technical institutes ever studied a particular institute's usage of its tools?
The recently held Design Automation Conference (DAC) showered praise on itself for a double-digit rise in attendance. Was there any mention of EDA piracy in all of that? No way! And why not?
The reasons are clear: the EDA industry already earns sizeable revenue from the global usage of EDA software. EDA firms are busy trying to keep up with the latest process nodes and develop the requisite tools. New products are constantly being developed, so product R&D is a continuous effort. And in all of this race, the EDA firms are also looking to keep their revenues running high, lest there be an industry climb-down!
Where, then, is the incentive for EDA firms to even check piracy, let alone control it?
An industry friend had this to say regarding EDA software piracy: "It is the inability to use certain 'tool modules' only at certain times. For instance, if an IP company wants to run PrimeTime (Synopsys) just a few times to ensure its timing worthiness before releasing that IP, and doesn't need it after that, it is not possible to get such a short-term license." Cost and unethical practices by stakeholders were some other reasons EDA users cited.
Regarding the status in India specifically, the situation isn't very different from, say, China. Another user said it is not such a prevalent, 'worrisome' aspect yet. Yet another EDA user said that EDA piracy exists more in the sense of 'unauthorized' usage than 'unpaid' usage -- not using the tool for what it is supposed to be used for; for instance, using academic licenses for commercial development.
That leads to the key question: can EDA software piracy be curtailed to some extent? One user feels that, yes, it can -- perhaps Microsoft-style 'detection' technologies already exist. However, another said that the expense of detecting and controlling piracy could well exceed the actual losses, which is probably why the EDA companies are not quite doing it!
Monday, July 4, 2011
SanDisk's iNAND Extreme family of embedded eMMC storage devices for high-end mobile and tablets
SanDisk Corp.'s embedded storage is found in most of the top computing device brands. The company recently launched the iNAND Extreme family of embedded eMMC storage devices for high-end mobile devices and tablets.
Gadi Ben-Gad, product marketing manager for SanDisk, said: "This very high performance line of iNAND products joins the existing iNAND and iNAND Ultra lines, which are very successful in the mobile, tablet and consumer electronics markets. The first generation of these products (iNAND Extreme) will be sampling in a few weeks.
"iNAND Extreme products offer up to 50MB/s write and 80MB/s read sequential performance and very high random performance designed for the next generation of high-end mobile and tablet devices. SanDisk continues to monitor market trends and requirements and diversifying the embedded offering in the market, to answer to the different requirements of the different mobile, tablet and consumer electronics segments."
So, how will SanDisk play a strong role in these areas? According to Ben-Gad, SanDisk works closely with a broad and diverse set of mobile and tablet OEMs. The company also works very closely with the majority of the leading mobile chipset vendors and standardization bodies in the mobile/tablet ecosystem to ensure optimal integration and technological support.
He added: "SanDisk is a fully vertically integrated company with substantial expertise in NAND flash technology, system technology and product design with years of experience in designing embedded and removable mobile storage devices. SanDisk is very well-positioned to understand, develop and support the future storage requirements in mobile, tablet and consumer electronics devices."
Finally, I must thank Ms. Jody Privette Young, LymanPR, for making this happen.
Saturday, July 2, 2011
Applied Vantage Vulcan RTP -- better side of anneal
Applied Materials Inc. has launched the Vantage Vulcan RTP advanced spike anneal system, an innovation in chip manufacturing technology.
Rapid Thermal Processing (RTP) is a semiconductor manufacturing process that heats silicon wafers to high temperatures (up to 1,200°C or greater) within a few seconds. It is often used during semiconductor device manufacturing to enhance desired attributes, such as conductivity.
Sundar Ramamurthy, VP and GM, Front End Products, Silicon Systems Group, Applied Materials, presented on the Applied Vantage Vulcan RTP, which he said will help the company maintain RTP leadership for the next decade.
The Vantage Vulcan RTP provides best-in-class temperature uniformity for higher yield, sharper temperature spikes for faster chips, low-temperature control for new applications and efficient energy usage for a lower carbon footprint.
Applied’s RTP is the technology and marketplace leader, and RTP is a growing ~$500 million market opportunity. Vantage Radiance Plus is a tool of record at virtually every top chip maker, and the Vantage Vulcan is in place at top chip makers for spike anneal. It also happens to be the industry’s greenest RTP solution, as its advanced system design improves the usage of grid energy.
The carbon-footprint savings per system are said to be equivalent to taking four mid-size sedans off the road.
Mobility and connectivity are said to be driving growth in low-power, high-performance chips that find use in smartphones, tablets, mobile PCs and servers. RTP, which heats silicon wafers to ultra-high temperatures on a timescale of a few seconds, is used for anneals and oxidation.
Applied's Vantage Vulcan RTP provides a revolutionary backside heating design. With conventional frontside heating, within-die spike-anneal thermal variability is a problem; the Vulcan system’s backside heating delivers a 3X decrease in within-die thermal variability. The thermal processing roadmap now enables the 28nm node and beyond with sharper spikes and full-range temperature control.
It enables low-temperature regime control, such as closed-loop control from <75°C, unique sensors for accurate, low-temperature measurement and new capability for advanced low-temperature applications.
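To give a feel for what a spike anneal means in numbers, here is a small back-of-the-envelope calculation of my own, with an assumed spike duration since none is quoted here: heating a wafer from room temperature to around 1,200°C within roughly two seconds implies an average ramp rate of several hundred degrees per second, which is why precisely controlled lamp heating is needed.

# Back-of-the-envelope spike-anneal ramp rate. The 2-second duration is an assumption
# for illustration only; the peak temperature comes from the RTP description above.

room_temp_c = 20.0
peak_temp_c = 1200.0
spike_seconds = 2.0   # assumed, for illustration only

ramp_rate = (peak_temp_c - room_temp_c) / spike_seconds
print("average ramp rate ~ %.0f degC per second" % ramp_rate)   # ~590 degC/s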
Friday, July 1, 2011
XConnect's VIE set to make video calls as easy as voice calls!
XConnect, a leader in next-generation interconnection and ENUM-directory services, has launched the global Video Interconnection Exchange (VIE) – the world’s first neutral federation for exchanging video calls across networks, operators, service providers, B2B exchanges and vendors.
VIE is said to make video calling as easy as making a voice call – whether using a laptop, desktop, tablet or mobile phone – from anywhere in the world. By connecting video “islands,” VIE will dramatically accelerate worldwide mass-market adoption of video calling and conferencing by service providers, enterprises and consumers.
So, how exactly is the XConnect VIE really going to improve video/telepresence calling? XConnect CEO Eli Katz, said: "The driver behind VIE is to enable ubiquitous video calling; to make video calling as simple as making a phone call. That is, to allow the many different video services to interconnect, and allow person-to-person video calling regardless of the network, device and video service."
What are the communication charges involved as of now? He added: "VIE - like all of XConnect services - is offered on a modular basis. Customers have a menu of services that they choose to “consume.” Pricing is based on their choices. Typically, charges are structured on subscription (joining fees) and then usage-based charges."
What are these video islands that XConnect VIE shall connect? According to Katz, IP-based services, such as video and HD voice, are usually limited to within the service provider’s own network. In the vast majority of cases, calls that need to go between networks are still reliant on the PSTN interconnect infrastructure, which means IP-based services cannot be supported – creating “islands.”
For video services to work on a cross-network basis, each one of the video services/networks needs to be interconnected via IP, avoiding the PSTN completely. So, currently, a Skype video user can only make video calls to another Skype user; such a user cannot initiate calls to another video service, for example, FaceTime.
Katz added that “video islands” exist due to the interconnection challenges above, as well as interworking problems between different services, such as:
* Not having the prerequisite knowledge that the device being called can support video calling.
* Differences in video and audio codecs implemented by video providers.
* Differences in signalling systems.
* Differences in screen size and frame rates.
He explained: "VIE is built on our Interconnect 2.0 platform – which includes carrier ENUM registry and multimedia IP interconnection hub services. By joining VIE, the video service provider gains an immediate multilateral (one-to-many) IP technical and commercial interconnect with every other VIE member.
"The carrier ENUM registry is utilised to discover if the party being called can support video and the type of video service they support (i.e., codecs). The interconnection hub handles the signalling and interworking challenges to ensure seamless interconnection of the call between the two video networks."
Isn't the number of operators (five) low to start off with? What are the plans for expansion? And, specifically, what are XConnect's plans for Asia?
Katz said that the five initial members of VIE include a mix of service providers and managed telepresence exchange providers, representing approximately 1 million video endpoints.
"VIE is available globally, and we are already in discussions with operators in Asia who have expressed strong interest in joining VIE. We will be making further announcements detailing members in the following months," he noted.