ChipStart BlogSpot

An Analog Solution for Wearables and SMART/SIM CARDS.

Posted by Jeremy Pakosh on Fri, Oct 31, 2014 @ 02:06 PM

Overview

The shift in emphasis from performance to power constraints and cost reduction presents challenges for designers of today's multi-core and system-on-chip (SoC) ICs for mobile applications. Prominent examples are found in the cost- and power-sensitive SMART/SIM card and wearables markets. With the emergence of wearable electronics, a new generation of gadgets is expected to drive brisk growth in demand for electrical and electronic components, with the market value of components amounting to roughly two-thirds of the production cost. Wearable technology market revenue was $4.3 billion in 2012 and is expected to reach $14.0 billion by 2018, growing at an estimated CAGR of 18.93% from 2013 to 2018. Components account for the largest share of overall global wearable technology revenue, with a 66.2% share ($1.83 billion) in 2012 expected to grow to 73.0% of the total market by 2018.

Wearable applications call for components that integrate high functionality and computing power while maintaining good power efficiency. A typical consumer wearable device, as shown in Figure 2, contains various sensors (e.g., for acceleration, temperature, IR and ambient light, pressure), an energy-efficient microprocessor, memory, wired and wireless communication, and power management. Limitations on the size of a wearable device drive the integration of all possible functions into one SoC, and VivEng offers IP that can help SoC developers get to market quickly with highly competitive performance parameters.

Achieving the low power consumption demanded by wearable systems requires designers to consider the full system architecture, including hardware/software partitioning and power management strategies. Power management strategies often include clock scaling and gating, with independent control of multiple power domains. Many multi-core designs and SoC applications employ several low-dropout (LDO) regulators in their overall power management subsystem, providing a voltage source for each power domain.

The primary task for an LDO regulator is to provide a stable output voltage even though the voltage of the external power supply (e.g., battery) changes over time or in response to variations in load current. LDO regulators can be used as standalone components or be embedded in an SoC design. A typical LDO regulator requires an external capacitor to improve the transient response, power supply noise rejection and stability. VivEng offers a family of LDO regulators that do not require an external capacitor, providing the advantages of lower pin counts, smaller packages, and reduced PCB area and BOM cost.

 

SMART CARD SoC

Figure 1 shows an example of a smart card SoC implemented using analog IP from VivEng. The design is partitioned into two main power domains, one for flash memory and another for digital circuitry including CPU and control logic. The power domains each have a dedicated LDO regulator, with one providing 1.8 V for flash memory and the other providing 1.2 V for the digital circuitry.

[Figure 1: Smart card SoC implemented with VivEng analog IP]

The first LDO regulator generates a 1.8 V supply voltage from an unregulated 1.62 V to 5.5 V supply, delivering a maximum load current (ILmax) of 20 mA. The LDO is always enabled and provides output even when the input voltage is insufficient for regulation (i.e., in saturation mode). It also provides the power-up sequencing required by the flash macro, and it does not require any off-chip decoupling capacitor for its operation.

The second LDO regulator generates a 1.2 V supply voltage from an unregulated 1.62 V to 5.5 V supply, and is enabled only after the 1.8 V supply has settled. It offers two selectable operating modes: light-load (Idd = 3 uA, ILmax = 100 uA) and full-load (Idd = 20 uA, ILmax = 20 mA).
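
To see why the selectable light-load mode matters, here is a rough back-of-the-envelope sketch (in Python) using the standard linear-regulator approximation, efficiency ≈ Vout·IL / (Vin·(IL + Idd)). The Idd and ILmax figures come from the description above; the 3.6 V input is an assumed battery voltage for illustration, not a VivEng specification.

    # Rough efficiency/dissipation estimate for the 1.2 V LDO described above.
    # Idd (quiescent current) and IL values come from the post; the 3.6 V input
    # is an assumed battery voltage, not a VivEng specification.

    def ldo_power(v_in, v_out, i_load, i_q):
        """Standard linear-regulator approximation: P_in = V_in * (I_load + I_q)."""
        p_out = v_out * i_load
        p_in = v_in * (i_load + i_q)
        return p_out / p_in, p_in - p_out   # (efficiency, power dissipated in the LDO)

    V_IN, V_OUT = 3.6, 1.2   # assumed battery voltage -> regulated core supply

    for mode, i_load, i_q in [("light-load", 100e-6, 3e-6),
                              ("full-load", 20e-3, 20e-6)]:
        eff, p_diss = ldo_power(V_IN, V_OUT, i_load, i_q)
        print(f"{mode:10s}: efficiency ~ {eff:.1%}, dissipation ~ {p_diss * 1e3:.2f} mW")

Re-running the light-load case with the full-load quiescent current of 20 uA drops the estimated efficiency from roughly 32% to 28%, which is exactly the loss the dedicated light-load mode is there to avoid.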

The design contains several other IP blocks from VivEng in addition to the LDO regulators: a trimmable oscillator that generates a master clock, an intrinsic thermal sensor that monitors die temperature, a photodetector that monitors ambient light intensity, and a thermal random bit generator that generates a stream of random bits.

The ambient light detector raises a flag that can be used (for example) to initiate erasure and shutdown in the event the package is opened and the IC is exposed to light.

The 15 MHz oscillator generates a clock and does not require any external components. The IP features low power consumption, low temperature coefficient for the clock frequency, 50% duty cycle, instant start, and “on-the-fly” frequency adjustment with a 4-bit control bus.

The temperature alarm IP is essentially tied to the voltage reference block. It continuously monitors the die temperature and asserts its output signal when the temperature falls below the low limit or exceeds the high limit.

The thermal random bit generator generates a stream of random data bits based on a non-deterministic physical noise source. The IP has low current consumption and operates at a bit rate from 0.1 to 1 Mbps.

The IP blocks from VivEng use reference voltages and bias currents supplied by an IP block available from the same library. The reference voltage and current block is a high precision bandgap voltage and current reference circuit capable of working over a wide range of supply voltages. The block also generates a power-on reset and brownout signal for Vdd. All of the IP blocks are developed for the TSMC 90 nm flash memory process.  

 

[Figure 2: A typical consumer wearable device]

Summary

VivEng offers a portfolio of IP blocks that can be used standalone in different types of SoCs or as a bundle suited to SIM/SMART card or wearable SoC designs. The IP blocks are all self-contained and do not require any external components. The IP is fully characterized for the TSMC 90 nm flash process, and a transfer to a 65 nm process is in progress.

  • LDO – wide input power supply range, from 1.62 V to 5.5 V
    • Low idle and operating currents for low power consumption
  • Clock oscillator with adjustable frequency
  • Thermal alarm for detecting over/under temperature conditions
  • Ambient light alert with trimmable threshold
  • High-speed random bit generation up to 1 Mbps
  • Bandgap voltage and current reference

 

 

Topics: Wearables, Analog Solutions

Reducing Power Consumption While Increasing SoC Performance

Posted by Jeremy Pakosh on Tue, Sep 09, 2014 @ 10:30 AM

Gregory Recupero — ChipStart


Overview

Designers of today's high-performance multi-client SoCs struggle to achieve the best possible performance/watt for their designs. Every product generation must improve the customer's user experience by delivering more performance, while at the same time battery life must increase. Performance-IP has developed a family of products based on its patent-pending Memory Tracker Technology™. This technology enables products to operate at higher levels of efficiency, reducing power consumption while increasing SoC performance.

Design Challenge

To show the effectiveness of Performance-IP's Memory Tracker Technology™, a typical SoC design will be assembled, and three different approaches will then be implemented to improve system performance. The first approach increases the CPU cache size. The second implements a traditional prefetch engine to reduce memory latency. The final approach uses Performance-IP's Memory Request Optimizer™. For each approach, CPU IPC will be evaluated along with DDR memory power and bandwidth. To maintain consistency across all three design approaches, solutions with similar silicon area were used for the analysis.

Typical SoC Design

A typical SoC design consists of a CPU, memory clients, interconnect, and a memory sub-system. For our design challenge the SoC implementation used is shown in Figure 1.



Design Approach 1 - Increase Cache Size

This approach involved increasing the CPU cache sizes from 4KB to 8KB. To maintain consistent silicon area with all our design solutions, the D-cache and I-cache sizes were increased independently. First, the D-cache size was increased from 4KB to 8KB, while maintaining the standard 4KB I-cache. Application performance and DDR power estimates were computed. Then the I-cache size was increased to 8KB, while maintaining the standard 4KB D-cache. Application performance and DDR power estimates were again computed. The increased cache size design approach is shown in Figure 2.



Design Approach 2 - Typical Prefetch Engine

The Prefetch Engine (PFE) approach required the implementation of a linear prefetch look-ahead. Its basic operation consisted of monitoring when any of the SoC clients fetched data from memory, then fetching the next sequential cache line (in terms of memory address) and storing it in a 'response buffer' within the PFE. This design approach is shown in Figure 3 below.



To improve the efficiency and performance of the PFE, multiple separate response buffers were implemented to hold the prefetched cache lines. A maximum of 16 response buffers could be implemented without exceeding our design area constraints. Each response buffer held the address of the cache line it stored, along with the entire cache line of data. This allowed the response buffers to operate much like a fully associative cache.
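
For a concrete picture of the mechanism, the sketch below is a minimal behavioral model (in Python) of a next-line prefetcher with a small bank of fully associative response buffers. It is an illustrative sketch only; the buffer count, line size, and LRU-style replacement are assumptions, not a description of any particular PFE implementation.

    from collections import OrderedDict

    LINE_BYTES = 32      # assumed cache-line size
    NUM_BUFFERS = 16     # response buffers, searched fully associatively

    class NextLinePrefetcher:
        """Toy PFE model: on every client read, fetch the next sequential line
        into a response buffer. Replacement is LRU-style (an assumption)."""

        def __init__(self):
            self.buffers = OrderedDict()   # line address -> cache line "data"
            self.external_reads = 0        # reads that reach external memory

        def read(self, addr):
            line = addr // LINE_BYTES
            if line in self.buffers:           # prefetch hit: no external read
                self.buffers.move_to_end(line)
            else:                              # demand miss goes to external memory
                self.external_reads += 1
            self._prefetch(line + 1)

        def _prefetch(self, line):
            if line in self.buffers:
                return
            self.external_reads += 1           # prefetches also cost an external read
            self.buffers[line] = f"line {line}"
            if len(self.buffers) > NUM_BUFFERS:
                self.buffers.popitem(last=False)   # evict the least recently used entry

    # A short sequential burst followed by a jump; the jump wastes one prefetch.
    pfe = NextLinePrefetcher()
    for a in list(range(0, 256, 4)) + [4096]:
        pfe.read(a)
    print("external reads issued:", pfe.external_reads)

Counting prefetches that are never consumed is exactly the "false fetch" effect quantified in the results below.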

Design Approach 3 - Performance-IP's Memory Request Optimizer™

The Memory Request Optimizer™ (MRO) design solution is the next generation of memory prefetch and is based on Performance-IP's Memory Tracker Technology™. This product shows how the addition of the Memory Tracker Technology™ can dramatically improve the efficiency of a memory prefetch engine.

This added efficiency allows the MRO to handle multiple read request streams concurrently, analyzing each client's requests to determine which memory requests will yield optimal results. Since the MRO can be dynamically programmed, a system can be tuned for the performance required by the target application. The SoC architecture containing the MRO is shown in Figure 4 below.



The MRO provides four modes of optimization: none, low, moderate, and aggressive. Each mode provides a higher level of performance than the previous one. The low and moderate modes are designed to produce the best performance/watt solution, while the aggressive mode is meant for achieving the highest possible performance.

SoC Application

To demonstrate how each of the design solutions above improves overall processor performance, a baseline system-level Verilog simulation was run using a DDR model with 25 cycles of memory latency. The Verilog simulation performed a full MP3 audio decode, and two parameters were obtained from it.

First, the baseline CPU Instructions Executed Per Cycle (IPC) was calculated. A system with lower memory latency will achieve higher processor IPC, since the processor pipeline won't stall as often while waiting for a data or instruction fetch. This parameter allowed us to measure the performance improvement.

The other parameter recorded was the total number of reads that reached the external memory. Since our design goal is to produce the best performance/watt solution, fewer external reads translate to less system power: every read that reaches external memory consumes approximately 10x more energy than the same read would on-chip. To monitor this parameter, the number of reads to external memory that each design solution generated while decoding the MP3 stream was tabulated. The goal for each design solution was to deliver the highest IPC improvement without generating additional reads to external memory.
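
A back-of-the-envelope version of that tabulation, using the approximately 10x energy ratio quoted above, might look like the Python sketch below. The read counts are placeholder values for illustration, not the actual MP3-decode simulation results; only the 22.8% increase for the prefetch engine is taken from the results discussed later.

    # Relative read-energy estimate: an external read is assumed to cost ~10x the
    # energy of an on-chip read, per the rule of thumb above. Read counts are
    # placeholders, not the MP3-decode simulation numbers.
    ON_CHIP_UNIT, EXTERNAL_UNIT = 1.0, 10.0

    designs = {
        "baseline":        {"on_chip_reads": 900_000, "external_reads": 100_000},
        "prefetch engine": {"on_chip_reads": 900_000, "external_reads": 122_800},  # +22.8%
    }

    for name, d in designs.items():
        energy = d["on_chip_reads"] * ON_CHIP_UNIT + d["external_reads"] * EXTERNAL_UNIT
        print(f"{name:16s}: relative read energy = {energy:,.0f} units")

With these placeholder counts the external reads already account for over half of the read energy, which is why the external-read count is tracked alongside IPC.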

Results

The simulation results for each of the design approaches are reviewed below.

 

Increasing Cache Size - Results

Doubling either the I-cache or D-cache size resulted in the fewest reads to external memory; however, both produced the smallest IPC improvements. This is shown in the table below.



It should be noted that doubling the D-cache size produced better results than doubling the I-cache, most likely because the data changes over time while exhibiting a high degree of temporal locality. Since increasing the D-cache produced better overall results, only the D-cache results will be used for comparison against the other design approaches.

Adding a Prefetch Engine - Results

Adding a prefetch engine generated the largest reduction in memory latency and resulted in the highest IPC improvement. This increase in IPC, however, came at a high cost: the PFE dramatically increased the number of reads to external memory, generating a 22.8% increase in external reads during execution of the MP3 decode application. This is because the PFE fetches data that is never needed. These additional reads consume more power and increase the required memory bandwidth, both of which we set out to reduce. To see whether the prefetch engine could overcome this problem, additional simulations were run with the number of response buffers doubled to 32 and doubled again to 64. This only reduced the increase in external reads to 21% and 18.5%, respectively, without appreciable IPC improvement, confirming the excessive false fetches performed by the PFE. This data is shown in the table below.



Memory Request Optimizer™ Results

The MRO analysis was performed for each optimization mode. The low and moderate optimization modes generated large reductions in latency, which translated into solid IPC improvements. Both of these modes performed very well and did not generate any extra reads to memory; in fact, they reduced the total number of external reads. The aggressive optimization level's results actually approached those of the prefetch engine while generating fewer total reads to external memory. The MRO results are shown in the table below.



Power Savings Results

Our design goal is to produce the best performance/watt solution. With these improvements in performance, we can reduce the processor clock rate and maintain the same level of processing, and reducing the processor clock rate also reduces the dynamic power consumption of our designs. Using the equation for the dynamic contribution to power, we have:

Pdynamic = σ · C · V² · f

In this equation, σ is the activity factor (the portion of the circuit switching), C is the total switching capacitance, V is the voltage, and f is the switching frequency. Because the design is not changing, σ and C remain the same. With σ, C, and V constant, any reduction in frequency results in a proportional reduction in dynamic power.

Based on the processor IPC improvements noted in Tables 1-3, we can reduce the SoC clock frequency while maintaining the same application performance as the baseline. This results in a reduction in power consumption of 5-16%, as shown below in Table 4.
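
As a sanity check on that 5-16% range, the short Python sketch below applies the proportionality directly: with σ, C, and V held constant, slowing the clock by the factor 1/(1 + IPC gain) keeps application performance at the baseline level, so the dynamic power saving is 1 - 1/(1 + IPC gain). The IPC gains plugged in are illustrative assumptions, not the exact values from Tables 1-3.

    # Dynamic power: P = sigma * C * V^2 * f. With sigma, C and V fixed, running at
    # f_new = f_base / (1 + ipc_gain) preserves baseline application performance,
    # so the dynamic power saving is 1 - 1/(1 + ipc_gain).
    # The IPC gains below are illustrative, not the exact Table 1-3 values.

    def dynamic_power_saving(ipc_gain):
        return 1.0 - 1.0 / (1.0 + ipc_gain)

    for label, gain in [("cache doubling", 0.05),
                        ("MRO moderate", 0.12),
                        ("MRO aggressive", 0.19)]:
        saving = dynamic_power_saving(gain)
        print(f"{label:15s}: {gain:.0%} IPC gain -> ~{saving:.1%} dynamic power saving")

With assumed IPC gains in roughly the 5-19% range, the computed savings land at roughly 5% to 16%, consistent with the span quoted above.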



A more aggressive power reduction approach would also involve voltage scaling, because of the squared contribution of voltage to dynamic power consumption. Designs with lower clock rates can tolerate more aggressive voltage scaling, which is another benefit of reducing the required clock rate.

Finally, the DDR power reductions were analyzed. One of the design goals was to reduce the number of reads that reached the external memory sub-system. For this analysis, the baseline simulation was used to determine the percentage of time spent in each of the DDR states and to compute a power estimate from that data. This was then repeated for each of the designs.
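
A simplified version of that state-residency calculation is sketched below in Python. The DDR state list, per-state power numbers, and residency fractions are illustrative assumptions; the post does not publish the underlying DDR model data.

    # Weighted-average DDR power from state residency: P ~= sum(fraction * P_state).
    # State powers (mW) and residency fractions are illustrative assumptions,
    # not the data behind Table 5.
    STATE_POWER_MW = {"active": 250.0, "precharge_standby": 90.0, "self_refresh": 10.0}

    def ddr_power_mw(residency):
        assert abs(sum(residency.values()) - 1.0) < 1e-6, "fractions must sum to 1"
        return sum(frac * STATE_POWER_MW[state] for state, frac in residency.items())

    baseline = {"active": 0.40, "precharge_standby": 0.45, "self_refresh": 0.15}
    faster   = {"active": 0.34, "precharge_standby": 0.44, "self_refresh": 0.22}  # more idle time

    print(f"baseline DDR power ~ {ddr_power_mw(baseline):.1f} mW")
    print(f"faster design      ~ {ddr_power_mw(faster):.1f} mW")

Feeding in each design's actual state residencies is essentially what produces the comparison summarized in Table 5.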

Designs that ran faster would also spend more time in the low power DDR states further helping to reduce power consumption. The results for each of the designs are shown below in Table 5.



The D-cache performed best here because of the large reduction in external DDR reads. Second was the Memory Request Optimizer approach using the moderate optimization mode.

What was not expected was that the MRO's aggressive optimization ended up consuming less DDR power than the baseline simulation. The large performance boost generated by this design allowed the DDR to spend more time in the lower power states, and that saving was just enough to overcome the extra power spent performing the additional 7.8% of reads. The same effect helped the prefetch engine solution, which consumed only an additional 3.4% of DDR power, much less than anticipated; but the power saved could not offset the power expended performing the additional 22.8% of reads, not to mention the required increase in memory bandwidth.

Design Challenge Winner

All three design approaches reduced application latency, but only the prefetch engine and the MRO design solutions translated those latency reductions into appreciable IPC improvements. The additional memory bandwidth required by the prefetch engine, however, made that solution unacceptable. The approaches that delivered appreciable IPC improvement and power reductions while generating fewer reads to the memory sub-system were the low and moderate optimization modes of the MRO. The moderate optimization provided the largest dynamic power savings, combined with a 3% reduction in DDR power, making it the clear performance/watt winner.

Conclusion

The results from this Design Challenge show that incorporating the Memory Tracker Technology™ improves the efficiency of the Memory Request Optimizer™, allowing it to outperform both the cache and prefetch engine design solutions. The same Memory Tracker Technology™ has also been incorporated into Performance-IP's L2+ Cache solution, allowing that cache to warm faster and achieve higher hit rates than level-2 cache designs of the same size; in fact, the L2+ Cache has outperformed some caches up to 4 times as large. So, to save power without sacrificing performance, check out the products offered by Performance-IP.

5 Aspects of a Good User Interface

Posted by Jeremy Pakosh on Wed, Feb 12, 2014 @ 08:41 AM

Designing a good user interface (UI) is not as easy as it sounds: it can often be a juggling contest, trying to succeed in every goal you have for the interface without a conflict of interest somewhere along the line. For example, by trying to make the interface too pleasing to the eye, functionality and ease of use might be impaired.

So, how does one define a UI as being “good”? A good user interface is one that allows the user to carry out their intended actions efficiently and effectively, without causing too much of a distraction. With this in mind, it’s no wonder that the best UIs aren’t the most in-your-face spectacular designs, but rather the ones that work subtly in the background to allow the users to complete their tasks with ease, like a benevolent but nearly invisible presence.

So, without further ado, here are some of the most important aspects to consider when designing a user interface:

1) Intuitive and consistent design

For an interface to be easily useable and navigable, the controls and information must be laid out in an intuitive and consistent fashion. Your users are probably well acquainted with many other interfaces, and you should be too if you want to achieve a level of familiarity for your users. Coming out with an entirely new layout for your interface might sound like a highly rewarding, paradigm-breaking project, but for all practical purposes, if you want users to feel at home then follow the path of your predecessors! Logic of usability should play a big part in the design process: features that are the most frequently used should be the most prominent in the UI and controls should be consistent so that users know how to repeat their actions.

2) Clarity

If a user is not able to understand his or her way around your interface, all the time you spent perfecting the software’s functionality is rendered useless. Both in terms of visual hierarchy and content, there should be no ambiguity about the way your interface operates. One of the difficulties in striving for a clear UI is knowing when to elaborate and when to be concise. As a general rule of thumb, the quicker and easier something can be explained without losing any semantic meaning or factual information, the better! While a UI should be designed so that users can easily run tasks without the help of a manual, it doesn’t hurt to include some clearly labelled help documentation just in case.


3) High responsiveness

For a user to enjoy using your interface, it cannot feel as if the interface is lagging behind their mouse clicks and keyboard taps. If the interface fails to keep up with the demands of the user, it will significantly diminish the user’s experience and can cause frustration, particularly when performing basic tasks. Wherever possible, the interface should move swiftly in pace with the user, and where this isn’t possible, a loading indicator or some other actively updated information should be presented so the user does not feel “disconnected” from the interface. A slow-running interface can give the impression of poor or faulty software, even if this is far from the truth!

4) Maintainability

Call it flexibility if you wish: a UI should have the capacity for new updates to be installed and changes to be integrated without conflict. For instance, you may need to add a feature to the software; if your interface is so convoluted that there is no room to draw attention to this feature without compromising something else or looking unaesthetic, that signifies a flaw in the design.

5) Attractiveness

Aesthetics are by no means the most important part of an interface, and a pretty look cannot make up for a sub-par design. However, so long as you don’t get the sauce confused for the meal, some aesthetically pleasing typography and a pleasant colour scheme can go a long way in making the user feel more at home when using your interface. Again, the aesthetics you choose for your interface must be appropriate for the particular user - so perhaps some market research is required to determine exactly what your users are looking for.

Topics: Argon Design, UI, user, interface

What to Look for When Selecting a 3rd Party IP Core

Posted by Jeremy Pakosh on Mon, Oct 07, 2013 @ 08:36 AM

It’s been years since anyone interested in an IP core would simply say, “OK, I just need an 8051 – what’s the performance and the price?” Today, the bare IP is far from enough. That’s why, when researching third-party vendors, you should ask about all the additional items: peripherals, deliverables, and configurability.

So let’s go step by step. Let’s say an anonymous Mr. J calls DCD and asks for an 8051 IP core. The first step is to choose the appropriate solution, so Mr. J should get a brief comparison like the one shown below:

[Image: 8051 IP core comparison]

Once he decides whether performance, size, or power consumption is most important for his project, we’re ready for step two: choosing peripherals. As mentioned earlier, the 8051 is offered with several different combinations of peripherals:

  • DUSB2 – USB 2.0 device, including:
    • HID – Human Interface Device
    • MS – Mass Storage
    • Audio devices
  • Parallel I/O Ports
  • UARTs
  • Timers / Counters with Compare Capture
  • Watchdog Timer
  • Power Management Unit
  • I2C bus interfaces – Master and Slave
  • Serial Peripheral Interface – SPI Master/Slave
  • Floating Point Math Coprocessors
  • DMA Controller – DMAC
  • 32-bit Multiply Divide Unit
  • Data pointers

“OK, I can find what I need on your list of peripherals, but what about configurability?” asks the anonymous Mr. J. The 3rd party IP core vendor should then answer: “Sure, no problem – configuration has never been easier, thanks to the use of constants in the IP core package.”

[Images: configuration constants in the IP core package]

All of this becomes clear when you look at the trends in EDA: a trustworthy IP vendor will always offer an “all-inclusive” IP core package – tested, verified, and ready for implementation in the final project. The future of the IP core market belongs to the “superset IP core”, which bundles the IP core with its peripherals and… other IP cores that play the role of a subsystem. The customer thus gets an implementation-ready solution. What does that mean? The designer gets a final solution that is fully compatible with the industry standard, yet offers decidedly non-standard performance. No wonder today’s 3rd party IP core vendors sell more differentiated IP than commoditized IP.

And last, but not least – as mentioned, do not forget to ask about the deliverables. When the anonymous Mr. J asks about this, he should get a list consisting more or less (better more) of these elements:

  • Synthesizable VHDL or Verilog source code
  • VHDL or Verilog test bench environments (Active-HDL, NCSim, and ModelSim automatic simulation macros; tests with reference responses)
  • Technical documentation (installation notes, HDL core specification, datasheet, instruction set details, test plan, and code coverage report)
  • Synthesis scripts
  • Example application
  • Technical support (IP core implementation support; x months of maintenance; delivery of minor and major IP core updates; delivery of documentation updates; phone & email support)

Topics: Digital Core Design, Core, DCD, solution, power consumption, IP

A New Look at Digital Signal Processing

Posted by Jeremy Pakosh on Thu, Mar 07, 2013 @ 11:40 AM

Digital Signal Processing is taking on a new look.  Our DSP partner, FireFlyDSP (www.fireflydsp.com), has now joined the Embedded Vision Alliance.  With the ongoing evolution of computer vision technologies, many of these algorithms are migrating into the field of embedded vision and demanding chips with higher performance at lower power and cost. With applications like gesture recognition (in devices such as Microsoft's XBOX Kinect), lane detection and object recognition (in a variety of new vehicles), and face detection (for phones and social media), there are increasing opportunities for new SoCs. 


The team at FireFlyDSP has leveraged their backgrounds in image processing and algorithm optimization to develop industry-leading technology for these growing applications. By working with the Embedded Vision Alliance, FireFlyDSP will bring their insights and innovations to new, vision-related SoC developers.

 

For information on the FireFly DSP products, including the new FireFly64 DSP, contact ChipStart or email info@fireflydsp.com.

Topics: FireFly, DSP, digital signal processing, SoC, embedded vision alliance

System Level Functions Managed Through Control Plane – Is This Acceptable?

Posted by Jeremy Pakosh on Thu, Dec 06, 2012 @ 02:11 PM

The rise of software defined networking (SDN) introduces new ways to control and operate networks. Software programs can now personalize the behavior of the network by reprogramming the switching fabrics with the specific rules they would like to use. As a result, from a silicon perspective the biggest change in network operation is the emergence of large volumes of control plane rule changes that appear “random” compared to the predictable flows traditionally controlled by the network. These changes are random because computer programs, not the network itself, initiate them, so their timing and requirements are unpredictable.

 

Traditional fabrics have been built using structures optimized for match-and-forward tasks, using deep packet inspection and packet classification techniques to learn where packets need to be forwarded in the network. These arrays are long in order to achieve line rate for multiple channels. To maximize performance, they require packets to be inserted at the front of the pipeline, with classified packets coming out the end. Over time these traditional architectures have evolved such that specific, and often specialized, compute and state management is employed.

Introducing random changes via SDN means rule changes must occur during a pipeline stage, which, as their volume grows, causes traditional architectures to operate far less efficiently; eventually both reliability and scalability are compromised. What is emerging is an opportunity for network fabric developers to leverage embedded and general purpose microprocessor architectures as fabrics. These architectures are better optimized for random processing and, in general, will become adequately efficient as SDN grows. When coupled with companion chip sets that provide traditional IP packet processing tasks, such as TCP offload, these architectures become efficient network fabrics.

One of the design challenges when using embedded or general purpose microprocessors is the continued need to employ data and control planes. SoCs that utilize these technologies and the corresponding interconnects to connect multiple cores have been heavily optimized for data plane operations. In networking, however, the control plane is equally important.

SSM is the industry’s first merchant silicon optimized for control plane management using a SoC development methodology.

[Figure: SoC control plane with SSM]

 

SSM complements SoC data plane architectures while filling the void for control plane state management. SSM utilizes a software-based, policy-driven state management approach, which enables SoCs to sufficiently mimic full control and data plane networking. The introduction of SSM into a SoC thus enables full utilization of SoC methodologies for network fabric development.

By utilizing software policies that describe the nature of the SDN network personalization required, SSM can orchestrate the operation of the relevant subsystems on the SoC to create the behavior desired. If these subsystems are implemented using deeply embedded processing or re-programmable logic, specific actions that enforce SDN requests can be accommodated in real time while SSM manages the global state operations. As a result, predictability is restored to the network operations.

The SoC based fabric also becomes extremely flexible, adapting to increasingly diverse random requests. Adaptive fabric operation is achievable by using the SSM subsystem to collect data about how the SoC is utilized. A second core processor can then use this data to monitor fabric use and determine optimal behavior patterns. This processor then chooses the SSM policies that reflect the optimizations. A library of SSM policies maintained in memory acts as a set of real-time middleware options that enable the fabric to constantly recalibrate and maintain optimal behavior.
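
To make that adaptive loop concrete, here is a purely conceptual Python sketch of a policy library driving recalibration. The policy names, metrics, and selection rule are hypothetical illustrations for this article, not SSM's actual interface or policy format.

    # Conceptual sketch: a monitor inspects fabric utilization and picks the policy
    # from an in-memory library whose condition matches. Policy names, metrics and
    # thresholds are hypothetical, not SSM's actual API or policy format.
    POLICY_LIBRARY = [
        # (policy name, predicate over observed utilization metrics)
        ("bulk-flow-optimized",  lambda m: m["rule_changes_per_s"] < 100),
        ("balanced",             lambda m: m["rule_changes_per_s"] < 1000),
        ("random-update-heavy",  lambda m: True),   # fallback for bursty SDN updates
    ]

    def select_policy(metrics):
        """Return the first policy whose predicate matches the observed metrics."""
        for name, matches in POLICY_LIBRARY:
            if matches(metrics):
                return name

    observed = {"rule_changes_per_s": 4200}   # rule updates per second seen by the fabric
    print("recalibrating with policy:", select_policy(observed))

In the architecture described above, the second core processor plays the role of this monitor, swapping policies as utilization patterns shift.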

The introduction of SDN causes traditional network fabric architectures to lose predictability and efficiency. Embedded and general purpose processors are better optimized for the random actions caused by SDN, but they lack specific network tasks and do not conventionally support robust control plane operations. SSM provides a complementary control plane architecture that offers the predictability and efficiency needed to adopt an SoC methodology for network fabrics. Additionally, SSM ushers in an adaptive behavior characteristic that enables SoC based fabrics to maintain optimal efficiency as the diversity of SDN random requests grows over time.


Topics: TSMC, SRAM, custom memory, SoC, algorithmic, memory, embedded memory

Improved ESD Concepts for High Voltage Applications

Posted by Howard Pakosh on Tue, Feb 14, 2012 @ 07:14 AM

A growing set of IC applications requires a high voltage interface. Examples include power management, power conversion, and automotive chips with interfaces typically between 12V and 100V. Mobile devices like cell phones and personal navigation devices today also include interfaces above 10V, for example to control and sense MEMS gyroscope or compass sensors, and most LCD/OLED display technologies require driving voltages between 10 and 40V. Besides the power, MEMS, and display interfaces, many devices include some sort of motor, such as the optical zoom lens and shutter control of digital cameras or the ‘silent mode’ vibrator in cell phones.
 
Though these applications represent fast growth markets, the underlying silicon process technologies lack standardized high performance ESD solutions. The purpose of ESD protection is to provide a safe, robust current path while limiting the voltage drop below the critical voltage determined by the circuit-to-be-protected. Today, different protection clamp types are used in the industry, each with significant performance and cost burdens that prevent generic use. The main problems with traditional solutions are high leakage current, large silicon area consumption and extensive custom (trial and error) development cycles for each process/fab change.
 
Further, to reduce the bill of materials (BOM), system makers are constantly shifting requirements that were once a system/PCB issue onto the IC makers. IC makers designing high voltage applications need robust and reliable ESD technology that can meet this growing set of requirements.
 
Sofics has developed novel ESD devices that solve the drawbacks of current ESD concepts while strongly reducing development and manufacturing cost. The Sofics ‘PowerQubic’ technology is currently used in the development of several products and is being evaluated for automotive (LIN) products. The 40V solution passed very severe automotive requirements such as a 45V load dump (ISO 7637-2), various transient latch-up conditions, and high IEC 61000-4-2 stress pulses. Through the IP alliance partnership with TSMC, these devices are now available in its 0.25um BCD technology with no strings attached (no NRE for standard cells, no royalty).

 

 


Topics: high voltage, hebistor, PowerQubic, latch-up immune, BCD, Electrostatic discharge, ESD, TSMC, high holding voltage, IC

Software Defined Networking Is Changing the Way Silicon IP Is Used

Posted by Jeremy Pakosh on Wed, Jan 18, 2012 @ 08:49 AM

D. Christopher Keil, VP of Business Development, ChipStart LLC


The increasing momentum around adopting software-defined networking (SDN), which communications markets are currently experiencing, is accelerating the need for more comprehensive system management at the system-on-chip (SoC) level.  

SDN introduces the notion of “random” event sequencing, such as changing the flow of a routing or switching path in real time from sources external to the network device. Random event change management, in turn, introduces new architectural challenges for communications SoCs because the state operation sequences to be executed are not predictable, as they are when changes are fully contained within the network device.

Traditionally, changes to routing or switching flows are managed completely within the “closed” network system. The silicon that supports these devices therefore relies architecturally on making changes using uniform, well-coordinated, and predictable system sequences; the entire change is managed within the confines of the switch or router. Predictability helps streamline the way these changes are sequenced at the silicon level, given that other flows are also managed through the fabric.

Maintaining system integrity as a whole and maximizing system performance (making sure there are no excess cycles to perform these changes) is just a matter of optimizing the base architecture.  Packet headers, packet inspection and table management schemes all are finely tuned today based on uniformity and predictability.

But the introduction of random change requests now means the supporting silicon must operate with less predictability. Previously finely tuned switching fabric architectures will need significant redesign, including how the sequence of tasks through IP blocks is performed (order and structure), how data paths behave given a mix of requests, and how arbitration logic prioritizes all of this for resource access.

This realization accelerates the need for more robust system management at the SoC level since the impact of SDN is architectural. The notion that this kind of state management can be handled during device development itself will only further lower overall schema reuse and also complicate maintaining compatibility of solution schemes from chip to chip.

SDN therefore implies not only revisiting the core communications architectures, but also adding significant global system management to the silicon-based architecture so that non-uniform and unpredictable change requests can be sequenced. Only then will system integrity and performance be maintained for random event management.

Topics: system management, network, software defined, SDN, SoC