Search Results

Search found 21 results on 1 page for 'speculative'.

Page 1/1

  • Is there a way to configure timeout for speculative execution in Hadoop?

    - by S.O.
    I have a hadoop job with tasks that are expected to run for a significant length of time (a few minutes). However, hadoop starts speculative execution too soon. I do not want to turn speculative execution off completely, but I want to increase the duration of time hadoop waits before considering a task for speculative execution. Is there a config option to control this timeout? Thanks

    Read the article

  • Navigating through a sea of hype

    - by wouldLikeACrystalBall
    This is a vague, open question, so if you have no interest in these, please leave now. A few years ago it seemed everyone thought the death of desktop software was imminent. Web applications were the future. Everyone would move to cloud-based software-as-a-service systems, and developing applications for specific end-user platforms like Windows would soon become something of a ghetto. Joel's "How Microsoft Lost the API War" was but one of many such pieces sounding the death knell for this way of software development. Flash-forward to 2010, and the hype is all around mobile devices, particularly the iPhone. Software-as-a-Service vendors--even small ones such as YCombinator startups--go out of their way to build custom applications for the iPhone and other smart phone devices; applications that can be quite sophisticated, that run only on specific hardware and software architectures and are thus inherently incompatible. Now some of you are probably thinking, "Well, only the decline of desktop software was predicted; mobile devices aren't desktops." But the term was used by those predicting its demise to mean laptops also, and really any platform capable of running a browser. What was promised was a world where HTML and related standards would supplant native applications and their inherent difficulties. We would all code to the browser, not the OS. But here we are in 2010 with the AppStore bulging and development for the iPad just revving up. A few days ago, I saw someone on Hacker News claim that the future of computing was entirely in small, portable devices. Apparently the future is underpowered, requires dexterous thumbs and induces near-sightedness. How do those who so vehemently asserted one thing now assert the opposite with equal vehemence, without making even the slightest admission of error? And further, how are we as developers supposed to sift through all of this? I bought into the whole web-standards utopianism that was in vogue back in '06-'07 and now feel like it was a mistake. Is there some formula one can apply rather than a mere appeal to experience?

    Read the article

  • Could CouchDB benefit significantly from the use of BERT instead of JSON?

    - by Victor Rodrigues
    I really appreciate CouchDB's attempt to use universal web formats in everything it does: RESTful HTTP methods in every interaction, JSON objects, JavaScript code to customize the database and documents. CouchDB seems to scale pretty well, but the cost of each individual request usually scares 'relational' people away. Many small business applications only ever deal with one machine, and that's all. In this case the scalability talk doesn't say much; we need more performance per request, or people will not use it. BERT (Binary ERlang Term, http://bert-rpc.org/ ) has proven to be a faster and lighter format than JSON, and it is native to Erlang, the language in which CouchDB is written. Could we benefit from that, using BERT documents instead of JSON ones? I'm not talking just about retrieving views, but about everything CouchDB does, including syncing. And, as a consequence, using Erlang functions instead of JavaScript ones. This would modify some original CouchDB principles, because today it is very web oriented. Considering that I imagine few people would make their database API public, and that its data is usually accessed by users through an application, it would be a good deal to be able to configure CouchDB to work faster. HTTP+JSON calls could still be handled by CouchDB, accepting an extra parsing cost in those cases.

    Read the article

  • Will html5 change everything for designers?

    - by Sean Thompson
    What impact do you think html5 will have on the workflow/way graphic design is done for the web? Right now most designers stay in an Adobe tool, doing most of the design work there, and implement some elements with graphics and some with code. Checking out http://www.apple.com/html5/ it seems that almost everything done in a graphic can be done in code. Will designers have to learn very advanced levels of html5 and do the actual design work in the browser, or do you see a more "designer friendly" GUI being made for html/graphics work? Will tools like Photoshop evolve in a way that handles this new lack of image files?

    Read the article

  • Will .Net 4.0 include a new CLR or keep with version 2.0

    - by Rory Becker
    Will .Net 4.0 use a new version of the CLR (v2.1, 3.0) or will it stick with the existing v2.0? Supplementary: Is it possibly going to keep with CLR v2.0 and add DLR v1.0? Update: Whilst this might look like a speculative question which cannot be answered, the VS team appear to be releasing more and more info on VS10 and .Net 4.0 so this may very soon not be the case. (Info available here - http://msdn.microsoft.com/en-us/vstudio/products/cc948977.aspx)

    Read the article

  • Resources for Windows Phone 7 development

    - by geocoin
    Windows Phone 7 has just been unveiled, and right now there is no information publicly available to non-privileged developers/partners. MS are announcing all details of the development chain at the Mix10 conference, so until then this could be a good starting point/collection of online resources as they appear. EDIT: I've added the speculation tag so we can also accumulate speculation. Please mark speculative posts as such; once firm details are announced we can confirm or delete them.

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps making rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in sample statistics for XFS, Ext4+JBD2 and Btrfs, gathered via:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs
        (and likewise for fs/ext4 fs/jbd2, and for fs/btrfs)

    - XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-)
    - Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-)
    - Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-)

    What made up those changes in XFS?

    - Self-describing metadata (CRC32c): a new feature that contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements: with this change, the log space reservation is calculated at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support: both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10, thanks to Dwight Engen's efforts.
    - Split project/group quota inodes: originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock (i.e., with CRC enabled).
    - CONFIG_XFS_WARN: a new lightweight runtime debugging option that can be deployed in production environments.
    - Readahead of log objects during recovery: this change speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling: the main purpose is to deal with inodes carrying post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued in this session, especially around comparisons between Ext4 and Btrfs in different areas; I had to spend the whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because:

    - Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors in XFS than in other filesystems over the past year or more. I attribute that to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for both large and small files: "XFS does not work well for small files" has been an outdated story for years.
    - Best choice (maybe) for distributed PB-scale filesystems: e.g., Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB): Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection that XFS is best on this point? :)
    - XFS deals with transaction concurrency better than Ext4. Why? The maximum log size in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has proven fast and stable under various loads and scenarios; XFS just needs more users, and Btrfs is still maturing.
    Ceph Introduction (led by Li Wang)

    This was a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support. Ceph separates metadata and data storage, so a file access requires two round trips: fetch the metadata from the MDS, then get the data from an OSD; small-file access is therefore limited by network latency. The solution: for small files, store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. In this way they reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature they have only run some small-scale testing, but they really saw noticeable improvements. Test environment: Intel 2 CPU / 12 core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSDs, 1 MDS, 1 MON. Sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production environments for large-scale, PB-level storage, but they could not give a positive answer; it looks like Ceph is not even rolled out across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they'd like to try 6000 nodes in the future.

    Improve Linux swap for flash storage (led by Shaohua Li)

    Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap.

    SWAPOUT, swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but scanning it is a slow linear operation that becomes a bottleneck when finding many adjacent pages for SSD use. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD.

    IO pattern: in most cases the swap IO has an interleaved pattern, because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-cpu cluster was added on top of the previous change; it helps a reclaimer do sequential IO, and the block layer can merge IO more easily.

    TLB flush: if we're reclaiming an active page, we first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During this process we need to clear the PTE twice: first the 'A' (ACCESSED) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline.

    SWAPIN: a page fault does iodepth=1 synchronous IO, but it's a bit of a waste to issue only a single page-sized IO. The obvious solution is swap readahead.
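    The fix described in the next paragraph exposes this through madvise(2). As a reminder of what that call looks like from userspace, here is a minimal hedged sketch; the mapping, sizes and access pattern are illustrative and not from the talk (MADV_WILLNEED itself is the standard Linux hint):

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void)
        {
            size_t len = 64 * 4096; /* 64 pages; illustrative size */

            /* Anonymous mapping standing in for memory that may have
             * been swapped out under pressure. */
            char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) { perror("mmap"); return 1; }

            memset(buf, 0xab, len); /* touch the pages once */

            /* Hint that we will need this range soon; with the swap-prefetch
             * change discussed here, this triggers clustered readahead of
             * swapped-out pages instead of one-page-at-a-time faults. */
            if (madvise(buf, len, MADV_WILLNEED) != 0)
                perror("madvise");

            /* Subsequent accesses should fault in already-read pages. */
            volatile char sum = 0;
            for (size_t i = 0; i < len; i += 4096)
                sum += buf[i];
            (void)sum;

            munmap(buf, len);
            return 0;
        }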
    But the current in-kernel swap readahead is arbitrary (always 8 pages), and it does not perform well for either random or sequential access workloads. Shaohua introduced a new use of the madvise(MADV_WILLNEED) flag to do swap prefetch, so the change lives in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be made there).

    SWAP discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to asynchronous discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard is also optimized to the cluster.

    Misc, lock contention: with many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But there is still some lock contention on very high speed SSDs because of the swapcache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current state of memory compression. There are now three projects (zswap/zram/zcache) which all aim to smooth out swap IO storms and improve performance, but they each have their own pros and cons.

    ZSWAP: implemented on top of the frontswap API, it uses a dynamic allocator named Zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can shrink under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages.

    ZRAM: acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Now both ZRAM and ZSWAP are in the drivers/staging tree, and in the mm community there are discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.

    ZCACHE: handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products to us. The first was RAID5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; it's called DuraWrite, and typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression that is transparent to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases. Another thing worth mentioning is the NV-DRAM memory in NMR/Raptor, which is directly exposed to the host system. Applications can access the NV-DRAM directly via a memory address, using the standard mmap() system call. He said this is very useful for database logging today.
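    To make that NV-DRAM access model concrete, here is a minimal hedged sketch of mmap()-based logging. The device path and record format are hypothetical (the real node depends on the vendor driver and is not from the talk); mmap()/msync() themselves are the standard calls involved:

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            /* Hypothetical device node for the NV-DRAM region. */
            int fd = open("/dev/nvdram0", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }

            size_t len = 4096;
            /* Map the nonvolatile region straight into the address space;
             * stores then persist without a block-IO round trip. */
            char *log = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
            if (log == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

            /* Append a database-style log record with ordinary stores. */
            const char *rec = "txn=42 commit";
            memcpy(log, rec, strlen(rec) + 1);

            /* Make the flush explicit where the hardware needs it. */
            msync(log, len, MS_SYNC);

            munmap(log, len);
            close(fd);
            return 0;
        }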
    NVM products of this kind have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will affect the current OS layers, especially file systems; e.g., journaling may need to be redesigned to fully utilize nonvolatile memory.

    OCFS2 (led by Canquan Shen)

    Without a doubt, Huawei has been the biggest contributor to OCFS2 over the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is supporting ATS (atomic test and set), which can work with DLM at the same time. This idea looks inspired by VMware's VMFS locking; see http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and solve bugs as Linux grows in complexity. Generally, we can do this with the following kinds of tools:

    - Static code checkers, e.g., sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can help check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow; false positives; and a concentrated effort may be needed to get false positives down. In particular, no static checker I have found can follow indirect calls ("OO in C", common in the kernel):

        struct foo_ops {
                int (*do_foo)(struct foo *obj);
        };

        foo->do_foo(foo);

    - Dynamic runtime checkers, e.g., thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites. E.g., Trinity is a great tool that finds many bugs, but it needs a manual model for each syscall. Modern fuzzers are built around automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g., ftrace, which can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run it at all times during debugging.
    - Tools to read/understand source, e.g., grep/cscope work great for many cases, but they do not understand indirect pointers (the OO-in-C model used in the kernel). "Give us all do_foo instances":

        struct foo_ops {
                int (*do_foo)(struct foo *obj);
        } ops = {
                .do_foo = my_foo,
        };

        foo->do_foo(foo);

    It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts reflecting the performance of XFS/Btrfs/Ext4 for small files. About a dozen users raised their hands when I asked who has experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions mainly focused on stability and comparison with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of the various namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than LXC itself.
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two more possible new namespaces for the future: the first is audit, though it is not yet clear whether it should be assigned to the user namespace or not; the other is syslog, but the question is, do we really need it?

    In-memory Compression (Bob Liu)

    The same as at CLSF: a nice introduction that I have already covered above.

    Misc

    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all of those things.

    -- Jeff Liu

    Read the article

  • A metaphor for Drupal module's inner workings

    - by PHP thinker
    What is the best application workflow metaphor for a Drupal module? In PHP frameworks we think MVC-style. How do we think inside Drupal? Assuming I am writing some user-oriented module like Shop, Catalog or Forum. As far as I understand, there are no or few MVC-based modules. Should I generally treat Drupal modules (as sub-applications) as a number of screens connected via forms and hyperlinks, or is there a better way? My question may be a little bit speculative, but I hope someone will share my intent to think in models, not just "scripts".

    Read the article

  • Tuning OS X Virtual Memory

    - by dcolish
    I've noticed some really odd results from vm_stat on OS X 10.6. According to this, it's barely hitting the cache. Searches of pretty much everywhere I could think of turn up little to explain why the rate is so low. I asked a few friends and they're seeing the same thing. What gives, and how can I make it better?

        Mach Virtual Memory Statistics: (page size of 4096 bytes)
        Pages free:                    78609.
        Pages active:                 553411.
        Pages inactive:               191116.
        Pages speculative:              6198.
        Pages wired down:             153998.
        "Translation faults":      116031508.
        Pages copy-on-write:         2274338.
        Pages zero filled:          33360804.
        Pages reactivated:            264378.
        Pageins:                     1197683.
        Pageouts:                      43756.
        Object cache: 20 hits of 1550639 lookups (0% hit rate)
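    For anyone who wants the same counters programmatically, here is a small hedged sketch using the Mach host_statistics() call, which is the API behind vm_stat (error handling kept minimal; the field names are from the public vm_statistics structure):

        #include <mach/mach.h>
        #include <stdio.h>

        int main(void)
        {
            vm_statistics_data_t vs;
            mach_msg_type_number_t count = HOST_VM_INFO_COUNT;

            /* Fetch the counters that vm_stat prints. */
            kern_return_t kr = host_statistics(mach_host_self(), HOST_VM_INFO,
                                               (host_info_t)&vs, &count);
            if (kr != KERN_SUCCESS) {
                fprintf(stderr, "host_statistics failed: %d\n", kr);
                return 1;
            }

            printf("free: %u active: %u inactive: %u speculative: %u wired: %u\n",
                   vs.free_count, vs.active_count, vs.inactive_count,
                   vs.speculative_count, vs.wire_count);
            printf("object cache: %u hits of %u lookups\n", vs.hits, vs.lookups);
            return 0;
        }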

    Read the article

  • Apple has bigger market capitalization than Microsoft

    - by Paja
    Should I worry that my Microsoft programming skills (C#, .NET, WinAPI) will be obsolete in a few years, because everybody will be using Mac OS X, iPhone OS or some other Apple-originated OS? According to Reuters, Apple now has a bigger market capitalization than Microsoft. I know there is probably a huge speculative bubble around Apple's shares, but just look at this graph (taken from Business Insider): I also know Windows currently has 90% of the OS market share, so we (Win devs) should be OK for a few years, but who knows; maybe in 10 years it will be just 40%, and 0% on the mobile phone OS market. Personally, I'm not that worried, because I think Microsoft will fight hard for their desktop OS market share, but I'm interested in your opinions.

    Read the article

  • Such thing as a free lunch

    - by red@work
    There is a lot of hard work that goes on in Red Gate, no doubt. And then there are things we're asked to get involved with that aren't hard and don't feel much like work. What? Give up our free lunch at Red Gate for... a free lunch in a pub? Within an hour, myself and a colleague are at the Railway Vue pub in nearby Impington. This is all part of Red Gate's aim to hire more Software Engineers and Test Engineers, to help Red Gate grow into one of the greatest software companies in the world (it's already the best small software development company in the UK). Phase one, then: buy lunch for Cambridge. Seriously, not just for the targeted engineers, but for anyone who could print the voucher and make it to the nearest of the venues, two of which happen to be pubs. We're here to watch people happily eat a free pub lunch at Red Gate's expense. We also get involved, and I swear I didn't order a beer with the food, but the landlord says I clearly did and I'm not one to argue. Red Gate are offering a free iPad to anyone who comes to interview for a Software Engineer or Test Engineer role. We speak to a few engineers who are genuinely interested. We speak to a couple of DBAs too, and encourage them to make speculative applications; no free iPad on offer for them, but that's not really the point. The point is, everyone should apply to work here! It's that good. We overhear someone ask if 'these vouchers really work?' They do. There's no catch. The free iPad? Again, no catch. If that's what it takes to get talented engineers through our doors for an interview, then that's all good. Once they see where we work and how we work, we think they'll want to come and work with us. The following day, Red Gate decides to repeat the offer, and that means more hard work, this time at The Castle pub. Another landlord mishears 'mineral water' and serves me a beer. There are many more people clutching the printed vouchers, and they all seem very happy to be getting a free lunch from Red Gate. "Come and work for us," we suggest, "lunch is always free!" So if you're a talented engineer, like free lunches and want a free iPad, you know what to do.

    Read the article

  • How should I land my next consulting gig? [closed]

    - by MrOodles
    For the last couple of years, I've been working on speculative projects, expanding my skillset with side work, and paying the bills while having a blast consulting for startups. However, for a number of personal reasons, I need to spend the next 6-9 months maximizing my cash income. I want to put this into effect starting in early August. So that means I have one month to put the necessary client list/portfolio/resume together to start making this happen. As a programmer, I am very proficient in building Django web apps. I can write the necessary SQL, Python, JavaScript, and CSS to build every part of a Django app, and then do the system administration necessary to deploy on AWS using EC2. I can also rig up a CDN to work seamlessly with the app using S3 and CloudFront. I have built GIS applications using GeoDjango and PostGIS, and I have constructed social video apps by implementing Encoding.com as a service to prepare raw video files for consumption on the web. I am also moderately proficient in programming PHP, Java, and C#. I have built web apps in PHP, and desktop apps in Java and C#. I have dabbled with Android applications and iPhone apps, but nothing I would show off. I have experience doing SEO, social media marketing, and content marketing. Many of my clients have needed their apps promoted after they were built, and I was always happy to oblige when I could. I have also worked with biometrics technology, including fingerprints, for government contractors. This was as much a business analyst role as it was a programming gig, as I had to help answer RFPs, make checklists, and work around reams and reams of regulations to build applications that met very large bureaucratic requirements. I only have two real requirements for my next gig(s): 1) Work remotely. I live in North East Ohio, and I don't plan on leaving, but I wouldn't mind traveling one or two weeks out of every month to service clients who need on-site help. 2) $60.00hr-$∞ USD contracting rate. So what should I do for the next 30 days to achieve this? Should I target some large company and learn the requisite buzzwords to impress them? Should I learn some new language or technology? Polish some skill that I already have? Should I build something using my current skillset, or with some new technology? Should I put a website for my consultancy together to market myself? Should I do that using the latest technology x, y, and z? Or should I just slap something up on Tumblr? I'm willing to do anything (moral) over the next 4 weeks to put myself into a position to maximize my income, and I'm open to any and every idea Programmers users may have. Let me hear them.

    Read the article

  • Why does my Intel HDA onboard sound card not have a "Mix" device / channel?

    - by Hanno Fietz
    I want to be able to record what my sound card outputs on the speakers / headphones. This question is all over the interwebs again and again, and there seem to be two outcomes: in your selection of audio input devices, there's a device called "Stereo Mix", or similar, which is the "loopback" device for audio; choose that in your recording tool and you're done. Or: there's no such device, and only speculative posts about why that may be. Now, I'm using ALSA and an Intel HDA chipset on my mainboard under Kubuntu Karmic. I have some 5-10 output channels, and "Mic", "Front Mic" and "Line" for input. All of those are available in KMix, Audacity and other software. No "loopback" / "Mix" / whatever. Do I have to: get some driver / kernel module; set up ALSA in some way; set up my system configuration in some way; or use a software solution (such as JACK)? I had a look at JACK and found it rather hard to understand; it's either an expert tool or just clumsy, I couldn't say. At least, I wasn't able to figure out how to achieve what I wanted. One of my problems seems to be that I don't understand where and how the mixing happens. Are there sound cards which just aren't able to do it? Why does the sound card matter at all, since I could in theory grab the data stream at some point before it goes to the hardware, right?

    Read the article

  • New i7 is slower than old Core 2 Duo? Why? (BIOS programming)

    - by DrChase
    I've always wondered why the companies who make BIOSes either have terrible engineering psychologists or none at all. But without wasting your time further with random speculative questions, my real question is as follows: why does my new computer run slower than my old computer?

    Old computer: Intel Core 2 Duo CPU @ 3.0 GHz (stock), 4GB OCZ DDR2 800 RAM, Wolfdale E8400 mb, nVidia GeForce 8600 GT.

    New computer: Intel Core i7 920 @ ~3.2 GHz, 6GB OCZ DDR3 1066 RAM, EVGA x58 SLI LE motherboard, nVidia GeForce GTX 275.

    Vista x64 Home Premium on both. "Run slower" is defined as: poorer FPS performance in the same games and applications; takes longer to start up; general desktop usage (checking email, opening up files, running exes) is noticeably slower.

    At first I thought I must not have set something up in the BIOS. But I have no idea how to set anything in the BIOS except for "Dummy O.C.", which brought me to ~3.2 GHz. Beyond that I have no idea. I've been reading about "RAM timing", voltages and the like, but I really have no idea about that stuff. I'm a psychologist who has a basic understanding of building his own computers, not a computer scientist. Can someone give me some wisdom that might guide me to the reason my new computer is worse than my older one? I'm sorry if this is a bad question, or not appropriate to SO. I'm just pretty frustrated now, and you all have helped me in the past, so I figured I'd give it a shot. Thanks for your time.

    Read the article

  • Windows Phone 8, possible tablets and what the latest update might mean

    - by Roger Hart
    Microsoft have just announced an update to Windows Phone 8. As one of the five, maybe six people who actually bought a WP8 handset, I found this interesting. Then I read the blog post about it, and rushed off to write somewhat less than a thousand words about a single picture. The blog post announces an extra column of tiles on the start screen, and support for higher resolutions. If we ignore all the usual flummery about how this will make your life better, that (and the rotation lock) sounds a little like stage setting for tablets. Looking at the preview screenshot, I started to wonder.

    What it’s called

    Phablet_5F00_StartScreenProductivity_5F00_01_5F00_072A1240.jpg Pretty conclusive. If you can brand something a “phablet” and sleep at night you’re made of sterner stuff than I am, but that’s beside the point. It’s explicit in the post that Microsoft are expecting a broader range of form factors for WP8, but they stop short of quite calling out tablet size. The extra columns and resolution definitely back that up, so why stop at a 6 inch “phablet”? Sadly, the string of numbers there doesn’t really look like a Lumia model number – that would be a bit tendentious even for a speculative blog post about a single screenshot. “Productivity” is interesting too. I get into this a bit more below, but this is a pretty clear pitch for a business device.

    What it looks like

    Something that would look quite decent on a 7 inch screen, but something a bit too vertical to go toe-to-toe with the Surface. Certainly, it would look a lot better on a large-factor phone than any of the current models. Those tiles are going to get cramped and a bit ugly if the handsets aren’t getting bigger.

    What’s on it

    You have a bunch of missed calls, you rarely text, use a stocks app, and your budget spreadsheet and meeting notes are a thumb-reach away. Outlook is your main form of email. You care enough about LinkedIn to not only install its app but give it a huge live tile. There’s no beating about the bush here, the implicit persona is a corporate exec. With Nokia in the bag and Blackberry pushing daisies, that may not be a stupid play. There’s almost certainly a niche there if they can parlay their corporate credentials into filling it before BYOD (which functionally means an iPhone) reaches the late adopters. The really quite slick WP8 Office implementation ought to help here. This is the face they’ve chosen to present, the cultural milieu they’re normalizing for Windows Phone. It’s an iPhone for Serious Business Grown-ups. Could work, I guess.

    Does it mean anything?

    Is the latest WP8 update a sign that we can expect to see tablets running Windows Phone rather than WinRT? Well, WinRT tablets haven’t exactly taken off, but I’m not quite going to make a leap like that just from a file name and a column of icons. I feel pretty safe, however, conjecturing that Microsoft would like to squeeze a WP8 “phablet” into the palm of every exec who’s ever grumbled about their Blackberry, and this release might get them a bit closer. If it works well incrementing up to larger devices, then that could be a fair hedge against WinRT crashing and burning any harder in the marketplace.

    Read the article

  • What makes people think that NNs have more computational power than existing models?

    - by Bubba88
    I've read on Wikipedia that neural-network functions defined on a field of arbitrary real/rational numbers (along with algorithmic schemas, and the speculative 'transrecursive' models) have more computational power than the computers we use today. Of course it was a page of the Russian Wikipedia (ru.wikipedia.org) and that may not be properly proven, but that's not the only source of such... rumors. Now, the thing that I really do not understand is: how can a string-rewriting machine (NNs are exactly string-rewriting machines, just as Turing machines are; only the programming language is different) be more powerful than a universally capable U-machine? Yes, the descriptive instrument is really different, but the fact is that any function of such a class can be (easily or not) turned into a legal Turing machine. Am I wrong? Do I miss something important? What is the cause of people saying that? I do know that the phenomenon of undecidability is widely accepted today (though not consistently proven, according to what I've read), but I do not really see the smallest chance of NNs being able to solve that particular problem. Add-in: "not consistently proven according to what I've read": I meant that you might want to take a look at the papers of A. Zenkin (a Russian mathematician) from after the mid-90s, where he persuasively postulates the wrongness of G. Cantor's concepts, including transfinite sets, uncountable sets, the diagonalization method (the method used in Turing's undecidability proof), and maybe others. Even Goedel's incompleteness theorems were only proven rigorously in the 21st century. That's all just to plug Zenkin's work into the post, because I don't know how widespread that knowledge is in the CS community, so forgive me if that looked stupid. Thank you!
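    A hedged aside for context (this is the standard analog-computation argument, e.g. the Siegelmann and Sontag line of work on recurrent networks with real weights, not a claim made by the poster): the claimed extra power comes entirely from allowing a single infinite-precision real weight, which can encode a non-computable oracle, for instance

        \[
          w = \sum_{i=1}^{\infty} \chi_H(i)\, 2^{-i},
          \qquad
          \chi_H(i) =
          \begin{cases}
            1 & \text{if Turing machine } i \text{ halts on empty input,} \\
            0 & \text{otherwise.}
          \end{cases}
        \]

    A network permitted exact arithmetic on w can shift out bits of \chi_H and thus "decide" the halting problem, which no Turing machine can, because w itself is not computable. Consistent with the poster's skepticism, any finitely describable network with rational weights is an ordinary string-rewriting machine and remains Turing-equivalent; the extra power lives entirely in the infinite-precision assumption.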

    Read the article

  • stop and split generated sequence at repeats - clojure

    - by fitzsnaggle
    I am trying to make a sequence that will only generate values until it finds the following conditions, and return the listed results:

    case head =
    0 - return {:origin [all generated except 0] :pattern 0}
    1 - return {:origin nil :pattern [all-generated-values] }
    repeated-value - {:origin [values-before-repeat] :pattern [values-after-repeat]}

        ; n = int
        ; x = int
        ; hist - all generated values

        ; Keeps the head below x
        (defn trim-head [head x]
          (loop [head head]
            (if (> head x)
              (recur (- head x))
              head)))

        ; Generates the next head
        (defn next-head [head x n]
          (trim-head (* head n) x))

        ; Generates a whole row -
        ; Rows are a max of x - 1.
        (defn row [x n]
          (iterate #(next-head % x n) n))

        (take (- x 1) (row 11 3))

    Examples of cases to stop before reaching the end of a row:

    [9 8 4 5 6 7 4] - '4' is repeated, so STOP. Return the preceding values as origin and the rest as pattern: {:origin [9 8] :pattern [4 5 6 7]}
    [4 5 6 1] - found a '1', so STOP, and return everything as pattern: {:origin nil :pattern [4 5 6 1]}
    [3 0] - found a '0', so STOP: {:origin [3] :pattern [0]}
    :else, if the sequence reaches a length of x - 1: {:origin [all values generated] :pattern nil}

    The Problem: I have used partition-by with some success to split the groups at the point where a repeated value is found, but would like to do this lazily. Is there some way I can use take-while, or condp, or the :while clause of the for loop to make a condition that partitions when it finds repeats?

    Some attempts:

        (take 2 (partition-by #(= 1 %) (row 11 4)))

        (for [p (partition-by #(stop-match? %) head)
              (iterate #(next-head % x n) n)
              :while (or (not= (last p) (or 1 0 n) (nil? (rest p))]
          {:origin (first p)
           :pattern (concat (second p) (last p))})

    Updates: What I really want to be able to do is find out if a value has repeated and partition the seq without using the index. Is that possible? Something like this:

        (defn row [x n]
          (loop [hist [n]
                 head (gen-next-head (first hist) x n)
                 steps 1]
            (if (>= (- x 1) steps)
              (case head
                0 {:origin [hist] :pattern [0]}
                1 {:origin nil :pattern (conj hist head)}
                ; Speculative from here on out
                (let [p (partition-by #(apply distinct? %) (conj hist head))]
                  (if-not (nil? (next p))
                    ; One partition if no repeats.
                    {:origin (first p) :pattern (concat (second p) (nth 3 p))}
                    (recur (conj hist head)
                           (gen-next-head head x n)
                           (inc steps)))))
              {:origin hist :pattern nil})))

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes
    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems

    There are key characteristics of memory…
    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of Transfer – the number of bits read out of or written into memory at a time
    - Access Method – sequential, direct, random or associative

    From a user's perspective the two most important characteristics of memory are…
    - Capacity
    - Performance – access time, memory cycle time, transfer rate

    The trade-off for memory happens along three axes…
    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time

    This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs…
    - Decreasing cost per bit
    - Increasing capacity
    - Increasing access time
    - Decreasing frequency of access of the memory by the processor

    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache, and it improves performance in two ways…
    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data designated for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles

    Cache memory is substantially faster than main memory. A caching system works as follows: when a processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied. If it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design

    While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures…
    - Cache Addresses
    - Cache Size
    - Mapping Function
    - Replacement Algorithm
    - Write Policy
    - Line Size
    - Number of Caches

    Cache Addresses

    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory.
    The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size

    We would like the size of the cache to be small enough so that the overall average cost per bit is close to that of main memory alone, and large enough so that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function

    Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used…
    - Direct – the simplest technique; maps each block of main memory into only one possible cache line
    - Associative – permits each main memory block to be loaded into any line of the cache
    - Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
    For detailed explanations of each approach, read the text book (page 148 – 154). A concrete sketch of the direct-mapped case follows this section.

    Replacement Algorithm

    For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches…
    - LRU (Least recently used)
    - FIFO (First in first out)
    - LFU (Least frequently used)
    - Random selection

    Write Policy

    When a block resident in the cache is to be replaced, there are two cases to consider: if no writes to that block have happened in the cache, discard it; if a write has occurred, a process needs to be initiated where the changes in the cache are propagated back to the main memory. There are several approaches to achieve this, including…
    - Write Through – all writes to the cache are done to the main memory as well at the point of the change
    - Write Back – when a block is replaced, all dirty bits are written back to main memory
    The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.

    Line Size

    When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that the data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play…
    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.
    The relationship between block size and hit ratio is complex, and no single approach is judged to be the best in all circumstances.
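    To make the direct-mapped case concrete, here is a small hedged sketch of how an address is split into tag, line index and byte offset; the cache geometry is illustrative, not taken from the textbook:

        #include <stdint.h>
        #include <stdio.h>

        /* Illustrative geometry: 64-byte lines, 1024 lines = 64 KiB cache. */
        #define LINE_BYTES 64u
        #define NUM_LINES  1024u

        int main(void)
        {
            uint32_t addr = 0x0040F2A4; /* example memory address */

            uint32_t offset = addr % LINE_BYTES;               /* byte within the line */
            uint32_t index  = (addr / LINE_BYTES) % NUM_LINES; /* which cache line */
            uint32_t tag    = addr / (LINE_BYTES * NUM_LINES); /* distinguishes the many
                                                                  blocks sharing that line */

            printf("addr=0x%08x -> tag=0x%x index=%u offset=%u\n",
                   addr, tag, index, offset);
            return 0;
        }

    On a lookup, the cache uses index to pick the single candidate line and then compares the stored tag; a mismatch is a miss, which is why two hot blocks that share an index keep evicting each other under direct mapping.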
    Pentium 4 and ARM cache organizations

    The processor core consists of four major components:
    - Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
    - Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss, and to access the system I/O resources

    Read the article

  • CodePlex Daily Summary for Tuesday, April 03, 2012

    CodePlex Daily Summary for Tuesday, April 03, 2012

    Popular Releases

    WiX Toolset: WiX v3.6 RC0: WiX v3.6 RC0 (3.6.2803.0) provides support for VS11 and a more stable Burn engine. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/4/3/WiX-v3.6-Release-Candidate-Zero-available

    SageFrame: SageFrame 2.0: SageFrame is an open source ASP.NET web development framework developed using ASP.NET 3.5 with service pack 1 (SP1) technology. It is designed specifically to help developers build dynamic websites by providing core functionality common to most web applications.

    Lightbox Gallery Module for DotNetNuke: 01.11.00: This version of the Lightbox Gallery Module adds the following features/bug fixes: Feature: Read image EXIF information. Bug Fix: URL parsing bug for any folder provider. Minimum System Requirements: Please ensure that your environment meets or exceeds the following system requirements: DotNetNuke® Version 06.00.00 or higher; SQL Server 2005 or later; .Net Framework 3.5 SP1 or later.

    ClosedXML - The easy way to OpenXML: ClosedXML 0.65.0: Aside from many bug fixes we now have Conditional Formatting. The conditional formatting was sponsored by http://www.bewing.nl (big thanks).

    callisto: callisto 2.0.24: Implemented the Ares encryption protocol for supported clients. Also changed the script loading method to read directly from source files.

    sb0t: sb0t 4.61: Added a packet encryption protocol for supported clients.

    iTuner - The iTunes Companion: iTuner 1.5.4475: Fix to parse empty playlists in the iTunes Library.

    ExtAspNet: ExtAspNet v3.1.1: ExtAspNet is a professional set of ASP.NET 2.0 controls based on ExtJS, enabling AJAX web development with no JavaScript, no CSS, no UpdatePanel, no ViewState and no WebServices. Supported browsers: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. License: Apache License 2.0 (Apache). Links: http://extasp.net/ http://bbs.extasp.net/ http://extaspnet.codeplex.com/ http://sanshi.cnblogs.com/

    Document.Editor: 2012.2: What's new for Document.Editor 2012.2: New Save Copy support. New Page Setup support. Minor bug fixes, improvements and speed-ups.

    VidCoder: 1.3.2: Added option for the minimum title length to scan. Added support to enable or disable LibDVDNav. Added option to prompt to delete source files after clearing successfully completed items. Added option to disable remembering recent files and folders. Tweaked number box to only select all on a quick click.

    MJP's DirectX 11 Samples: Light Indexed Deferred Rendering: Implements light-indexed deferred rendering using per-tile light lists calculated in a compute shader, as well as a traditional deferred renderer that uses a compute shader for per-tile light culling and per-pixel shading.

    Pcap.Net: Pcap.Net 0.9.0 (66492): Pcap.Net - March 2012 Release. Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It features almost all WinPcap features and includes a packet interpretation framework. Version 0.9.0 (Change Set 66492): March 31, 2012 release of the Pcap.Net framework. Follow Pcap.Net on Google+. Files: Pcap.Net.DevelopersPack.0.9.0.66492.zip - includes all the tutorial example projects' source files, the binaries in a 3rdParty directory, and the documentation. It include...

    Extended WPF Toolkit: Extended WPF Toolkit - 1.6.0: Want an easier way to install the Extended WPF Toolkit? The Extended WPF Toolkit is available on NuGet.
    What's in the 1.6.0 Release? BusyIndicator, ButtonSpinner, Calculator, CalculatorUpDown, CheckListBox (breaking changes), CheckComboBox (new control), ChildWindow, CollectionEditor, CollectionEditorDialog, ColorCanvas, ColorPicker, DateTimePicker, DateTimeUpDown, DecimalUpDown, DoubleUpDown, DropDownButton, IntegerUpDown, Magnifier, MaskedTextBox, MessageBox, MultiLineTex...

    Media Companion: MC 3.434b Release: General: This release should be the last beta for 3.4xx. If there are no major problems, by the end of the week it will be upgraded to 3.500 Stable! The latest mc_com.exe should be included too! TV: Bug fix - crash when using the XBMC scraper for TV episodes. Bug fix - episode count update when adding new episodes. Bug fix - crash when an actor's name was missing. Enhanced TV scrape progress text. Enhancements made to the missing episodes display. Movies: Bug fix - hide "Play Trailer" when multisaev...

    Better Explorer: Better Explorer 2.0.0.831 Alpha: A new release with: many bugfixes; a changed icon; added code for more failsafe registry usage on x64 systems (the regfix is no longer needed); added ribbon shortcut keys; other fixes. Note: If you have problems opening system libraries, a suggestion was given to copy all of these libraries and then delete the originals. Thanks to Gaugamela for that! (see discussion here: 349015) Note 2: The setup was uploaded again due to a missing file!

    MonoGame - Write Once, Play Everywhere: MonoGame 2.5: The MonoGame team are pleased to announce that MonoGame v2.5 has been released. This release contains important bug fixes, implements optimisations and adds key features. MonoGame now has the capability to use OpenGLES 2.0 on Android and iOS devices, meaning it now supports custom shaders across mobile and desktop platforms. Also included in this release are native orientation animations on iOS devices and better orientation support for Android. There have also been a lot of bug fixes since t...

    Circuit Diagram: Circuit Diagram 2.0 Alpha 3: New in this release: Added components: Microcontroller, Demultiplexer. Flip & rotate components. Open XML files from older versions of Circuit Diagram. Text formatting for components. New CDDX syntax. Other fixes.

    Umbraco CMS: Umbraco 5.1 CMS (Beta): Beta build for testing - please report issues at issues.umbraco.org (Latest uploaded: 5.1.0.123). What's new in 5.1? The full list of changes is on our http://progress.umbraco.org task tracking page. It shows items complete for 5.1, and 5.1 includes items for 5.0.1 and 5.0.2 listed there too. Here are two headline acts: Members: 5.1 adds support for backoffice editing of Members. We support the pairing up of our content type system in Hive with regular ASP.NET Membership providers (we ship a def...

    51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.2.11: One Click Install from NuGet. Changes to Version 2.1.2.11: Code Changes: 1. The project is now licensed under the Mozilla Public Licence 2. 2. User interface control and associated data access layer classes have been added to aid developers integrating 51Degrees.mobi into wider projects such as content management systems or web hosting management solutions. Use the following in a web form or user control to access these new UI components. <%@ Register Assembly="FiftyOne.Foundation" Namespace="...

    ScintillaNET: ScintillaNET 2.5: A slew of bug fixes with a few new features sprinkled in. This release also upgrades the SciLexer and SciLexer64 DLLs to version 3.0.4.
    The official stuff: Issues 32402, 27137, 31548, 30179, 24932, 29701, 31238, 26875, 30052

    New Projects

    3DTweet: 3DTweet is an effort to make tweets appear in an aesthetic manner to users of Windows Phone. It's developed using VS2010 Express.

    Alay Plugin for Windows Live Writer: a project for creating an alay language

    Author-it Copy Folder Plug-in: The Copy Folder Plug-in is an Author-it plug-in that allows the user to create a duplicate of a folder and its contents, including subfolders and objects.

    Author-it Dependencies Plug-in: The Dependencies Plug-in is an Author-it plug-in that allows the user to drill down into object relationships to find objects that selected Author-it objects depend on.

    Author-it Word Count Plug-in: The Word Count Plug-in is an Author-it plug-in that allows the user to get word counts for selected topics.

    Azure Configuration File Comparer: Compares two Azure configuration files and highlights the differences.

    Bootstrap Footer Sticker: Bootstrap extensions for a footer that sticks to the bottom.

    Bulk Approval Webpart: This webpart can be added to any list or document library that requires approval. When the button is clicked it will approve all items. It will also kick-start any workflows attached to the list / library that are set to start when an item has been changed.

    CodeTimer: Times code execution.

    CRM 2011 - WinForm based User Settings Console: A Windows Forms based utility for bulk management of User Settings in CRM 2011.

    Emergency System: An Imagine Cup project.

    Falafel Solution Rename Script: The Falafel Software Solution Rename PowerShell Script makes it easy to reuse an existing solution/project by performing a global rename. If you have a solution named ExampleSolution and want to reuse it as WidgetSolution, this script will rename everything for you.

    Gadgeteer Bluetooth Module: BETA drivers for the .NET Gadgeteer Bluetooth modules.

    Greg's Channel 9 Utilities and Helpers: Utilities and helpers for Microsoft's Channel 9 (C9) site. These are utilities I've created to make my C9 work faster and easier.

    HB's First Time: My first project here.

    HOMMFrame: A study project about frames.

    ICNInformaticaCenter: * ICNInformaticaCenter *

    Insert a Favorite (Bookmark) plugin for Windows Live Writer: This Windows Live Writer plugin allows you to select a Favorite (Bookmark) and insert it into your blog entry.

    Kinect your Metro Style App: The main idea is to let a Metro Style App communicate with the external environment via the Kinect sensor.

    Memo: Memo Intelligence System

    MGR.CommandLineParser: MGR.CommandLineParser is a multi-command line parser. It uses System.ComponentModel.DataAnnotations to declare and validate the commands.

    MusicStoreMVC3: An ASP.NET MVC3 music store sample built with 1. EF4.1 2. MVC3 3. MvcPager 4. Linq

    NN Crew Calculator: Calculator project to help learn C#.

    Reporte Ciudadano: An application for citizens to report problems to the city council.

    Sejrssedler: This is a school project made by four students in Aarhus, Denmark, during the Datamatiker study (2nd year).

    smartcamera: smart camera

    SpecEx: Speculative Execution

    Structure Copier: This small program copies the tree structure of a directory.

    TFS2010 Service Provider: A Web Service (asmx) that is a provider of the Team Foundation Server 2010 object model. It was developed to enable the implementation of client applications for Android and other
platforms, which receives data from the server via Web services.totalem: totalem - free file manager for WindowsWebSharperWithTypeProviders: WebSharper project with Visual Studio 2011 Beta, .Net 4.5 and F# 3.0 with TypeProviders

    Read the article

  • Understanding the value of Customer Experience & Loyalty for the Telecommunications Industry

    - by raul.goycoolea
Worried by economic woes and market forces, especially in mature markets, communications service providers (CSPs) increasingly focus on improving customer experience. In fact, it seems difficult to find a major message by a C-level executive in the developed world that does not include something on "meeting and exceeding customers' needs". Yet in customer satisfaction studies by prominent firms, CSPs frequently fall short of the leadership demonstrated by other industries that take customer-centric approaches to their bottom-line strategies. Consider the following:

Despite the continued impact of the global economic crisis, in July 2010 Apple posted record revenue and net quarterly profit. Those who attribute the results primarily to the iPhone 4 launch should note that Apple also shipped around 30% more Macintosh computers than in the same period the previous year. Even sales of the iPod line increased by 8% in a highly commoditized, shrinking media player market. Finally, Apple began selling iPads during the quarter, with total sales of more than 3 million units.

What does Apple have that the others lack? Some great products (and services), to be sure, but it also excels at customer service and support, marketing, and distribution, and has one of the strongest brands globally. Its products are useful, simple to use, easy to acquire and augment, high quality, and considered very cool. They also evoke such an emotional response from many of Apple's customers that they turn up their noses at competitive products.

In other words, Apple appears to have mastered virtually every aspect of customer experience and the resultant loyalty of its customer base - even in difficult financial times. Through that unwavering customer focus, Apple continues to drive its revenues and profits to new heights. Other customer loyalty leaders like Wal-Mart, Google, Toyota and Honda are also doing well by focusing on customer experience as an essential driver of profitability. Service providers should note this performance and ask themselves how they might leverage the same principles to increase their own profitability. After all, that is what customer experience and loyalty are all about: profitability.

To successfully manage all the critical touch points of customer experience, CSPs must shun the one-size-fits-all approach. They can no longer afford to view customer service fundamentally as an act of altruism - a mentality that dates back to the industry's civil service days, when CSPs were typically government organizations that were critical to economic development and public safety.

As regulators and public officials have pushed, and continue to push, service providers to new heights of reliability - using incentives and punishments - most CSPs already have some of the fundamental building blocks of customer service in place. Yet despite that history and experience, service providers still lag other industries in providing what is seen as good customer service.

As we observed in the TMF's 2009 Insights Research report, Customer Experience Management: Driving Loyalty & Profitability, there has been a resurgence of interest among CSPs. More and more of them have stated ambitions to catch up with other industries, and they are realizing that good customer service is a powerful strategy for increasing business performance and profitability, not an act of goodwill.

CSPs are recognizing the connection between customer experience and profitability, as demonstrated in many studies.
For example, according to research by Bain & Company, a 5 percent improvement in customer retention rates can yield as much as a 75 percent increase in profits for companies across a range of industries.

After decades of customer experience strategy formulation, Bain partner and business author Frederick Reichheld considers "Would you recommend us to a friend?" the ultimate question for a customer. How many times have you or your friends recommended an iPod, iPhone or a Mac? What do your children recommend to their peers? Their peers to them?

There are certain steps service providers have to take to create more personalized relationships with their customers, as well as reduce churn and increase profitability, all while becoming leaner and more agile. First, they have to define customer experience; we define it as the sum of observations, perceptions, thoughts and feelings arising from interactions and relationships between customers and their service provider(s). Virtually every customer touch point - whether directly or indirectly linked to service providers and their partners - contributes to customer perception, satisfaction, loyalty, and ultimately profitability.

Gaining leadership in customer experience and satisfaction will not be a simple task, as it is affected by virtually every customer-facing aspect of the service provider, and in turn impacts the service provider deeply - especially on the all-important bottom line. The scope of issues affecting customer experience is complex and dynamic, and with new services, devices and applications extending the basis of customer experience to domains beyond the direct control of the service provider, it is likely to increase in complexity and dynamism.

Customer loyalty = increased profits

As stated earlier, customer experience programs are not fundamentally altruistic exercises, but a strategic means of improving competitiveness and profitability in the short and long term. Loyalty is essential to deriving long-term profits from customers.

Some of the earliest loyalty programs date back to the 1930s, when packaged goods companies offered embedded coupons for rewards to buyers, and eventually retail chains began offering reward programs to frequent shoppers. These programs continued for decades but were leapfrogged in the 1980s by more aggressive programs from the airlines.

This movement was led by American Airlines, which launched the first full-scale loyalty marketing program of the modern era with the AAdvantage frequent flyer scheme. It was the first to reward frequent fliers with notional air miles that could be accumulated and later redeemed for free travel.

Figure 1: Example of customer-loyalty-driven profit opportunities

Other airlines and travel providers were quick to grasp the incredible value of providing customers with an incentive to use their company exclusively.
Within a few years, dozens of travel industry companies launched similar initiatives, and loyalty programs are now achieving near-ubiquity in many service industries, especially those in which it is difficult to differentiate offerings by product attributes.

The belief is that increased profitability will result from customer retention efforts because:

•    The cost of acquisition occurs only at the beginning of a relationship: the longer the relationship, the lower the amortized cost;
•    Account maintenance costs decline as a percentage of total costs, or as a percentage of revenue, over the lifetime of the relationship;
•    Long-term customers tend to be less inclined to switch and less price sensitive, which can result in stable unit sales volume and increases in dollar-sales volume;
•    Long-term customers may initiate word-of-mouth promotions and referrals, which cost the company nothing and are arguably the most effective form of advertising;
•    Long-term customers are more likely to buy ancillary products and higher-margin supplemental products;
•    Long-term customers tend to be satisfied with their relationship with the company and are less likely to switch to competitors, making it difficult for rivals to enter the market or gain share;
•    Regular customers tend to be less expensive to serve, as they are familiar with the processes involved, require less 'education', and are consistent in their order placement;
•    Increased customer retention and loyalty makes employees' jobs easier and more satisfying. In turn, happy employees feed back into higher customer satisfaction in a virtuous circle.

Figure 2: The virtuous circle of customer loyalty

Figure 2 represents a high-level example of a virtuous cycle driven by customer satisfaction and loyalty, depicting how superiority in product and service offerings, as well as strong customer support by competent employees, leads to higher sales and ultimately profitability. As stated above, this is not a new concept, but succeeding with it is difficult; it has eluded many a company driven to achieve profitability goals. Of course, for this circle to be virtuous, the customer relationship(s) must be profitable.

Trying to maintain the loyalty of unprofitable customers is not a viable business strategy. It is therefore important that marketers can assess the profitability of each customer (or customer segment), and either improve or terminate relationships that are not profitable. This means each customer's 'relationship costs' must be understood and compared to their 'relationship revenue'. Customer lifetime value (CLV) is the most commonly used metric here, as it is generally accepted as a representation of how much each customer is worth in monetary terms, and therefore a determinant of how much a service provider should be willing to spend to acquire or retain that customer.

CLV models make several simplifying assumptions and often involve the following inputs (a minimal sketch of how they combine follows the list):

•    Churn rate represents the percentage of customers who end their relationship with a company in a given period;
•    Retention rate is calculated by subtracting the churn rate percentage from 100;
•    Period/horizon equates to the units of time into which a customer relationship can be divided for analysis. A year is the most commonly used period for this purpose. Customer lifetime value is a multi-period calculation, often projecting three to seven years into the future; in practice, analysis beyond this point is viewed as too speculative to be reliable. The model horizon is the number of periods used in the calculation;
•    Periodic revenue is the amount of revenue collected from a customer in a given period (though this is often extended across multiple periods into the future to understand lifetime value), such as usage revenue, revenues anticipated from cross- and upselling, and often some weighting for referrals by a loyal customer to others;
•    Retention cost describes the amount of money the service provider must spend, in a given period, to retain an existing customer. Again, this is often forecast across multiple periods. Retention costs include customer support, billing, promotional incentives and so on;
•    Discount rate means the cost of capital used to discount future revenue from a customer. Discounting is an advanced method used in more sophisticated CLV calculations;
•    Profit margin is the projected profit as a percentage of revenue for the period. This may be reflected as a percentage of gross or net profit. Again, this is generally projected across the model horizon to understand lifetime value.
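To make the arithmetic concrete, here is a minimal sketch (in Python) of a discounted, multi-period CLV calculation that combines the inputs above. The function name and the subscriber figures are illustrative assumptions, not drawn from any operator's data:

    # Minimal CLV sketch. All names and numbers are illustrative
    # assumptions, not taken from the article.
    def customer_lifetime_value(periodic_revenue, profit_margin, retention_cost,
                                churn_rate, discount_rate, horizon_periods):
        """Discounted multi-period CLV for one customer (or segment)."""
        retention_rate = 1.0 - churn_rate   # retention = 100% minus churn
        survival = 1.0                      # probability the customer is still active
        clv = 0.0
        for period in range(1, horizon_periods + 1):
            period_profit = periodic_revenue * profit_margin - retention_cost
            # Weight expected profit by survival and discount it to present value.
            clv += survival * period_profit / (1.0 + discount_rate) ** period
            survival *= retention_rate
        return clv

    # Hypothetical subscriber: $600 annual revenue at a 35% margin, $40 per
    # year to retain, 20% annual churn, 8% cost of capital, 5-year horizon.
    print(round(customer_lifetime_value(600, 0.35, 40, 0.20, 0.08, 5), 2))

On these assumptions the five-year discounted value comes to roughly $472 per subscriber - the kind of figure an operator would weigh against acquisition and retention spending, such as the roughly $2,000-per-subscriber acquisition price discussed below.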
A strong focus on managing these inputs can help service providers realize stronger customer relationships and profits, but there are obstacles to overcome in calculating CLV accurately, such as the complexity of allocating costs across the customer base. Many costs serve all customers and must be properly allocated across the base, and a simple proportional allocation across the whole base or a segment may not accurately reflect the true cost of serving each customer. This is made worse by the fragmentation of customer information, which is likely to be spread across a variety of product or operations groups and may be difficult to aggregate due to different representations.

In addition, complex account relationships and structures must be taken into consideration, and these may not be understood or properly represented. For example, a profitable customer may have a separate account for a second home or another family member, which may appear to be unprofitable. If the service provider cannot relate the two accounts, CLV is not properly represented, and any resultant cancellation of the apparently unprofitable account may cause the customer to churn from the profitable one.

In summary, if service providers are to realize strong customer relationships and their attendant profits, there must be a very strong focus on data management. This needs to be coupled with analytics that help business managers and those who work in customer-facing functions offer highly personalized solutions to customers, while maintaining profitability for the service provider.

It is clear that acquiring new customers is expensive. Advertising costs, campaign management expenses, promotional service pricing and discounting, and equipment subsidies make a serious dent in a new customer's profitability. That is especially true given the rising subsidies for smartphone users, which service providers hope will result in greater profits from data services in the future. The situation is made worse by falling prices and greater competition in mature markets.

Customer acquisition through industry consolidation isn't cheap either. A North American service provider spent about $2,000 per subscriber in its acquisition of a smaller company earlier this year. While this has allowed it to leapfrog to become the largest mobile service provider in the country, it required a total investment of more than $28 billion (including assumption of the acquiree's debt). While many operating cost synergies clearly made this deal more attractive to the acquiring company, this is certainly an expensive way to acquire customers: the cost per subscriber in this case is not out of line with the prices others have paid for acquisitions.

While growth by acquisition certainly increases overall revenues, it often creates tremendous challenges for profitability. Organic growth through increased customer loyalty and retention is a more effective driver of profit, as well as a stronger predictor of future profitability. Service providers, especially those in mature markets, are increasingly recognizing this and taking steps toward creating a more personalized, flexible and satisfying experience for their customers.

In summary, the clearest path to profitability for companies in virtually all industries is through customer retention and maximization of lifetime value. Service providers would do well to recognize this and focus attention on profitable customer relationships.

    Read the article
