Search Results

Search found 23323 results on 933 pages for 'worst is better'.


  • I'd like to switch from 32-bit to 64-bit within same version

    - by Marty Fried
    I have a 32-bit installation of 11.10 on my 64-bit (4 GB) home AMD system. I have recently read up a bit on 64-bit version, and it seems that it would be a marginally better choice now for me. I have read about several methods to help reinstall all the various apps, using either dpkg's get-selections/set-selections and dselect in various ways, or using synaptic's save/get markings. The problem here is that I've read several variations, and I'm not sure which is best. I have enough disk space to do this with a brand new partition, so I'm not too worried about destroying anything, but I don't really want to make it my life's work, hence my appeal for expert tips. Since it's the same version, would it be safe to copy configuration files from the 32-bit system? I'd guess my home directory and /etc might be enough, and would save at least most of the time to reconfigure. But are there difference in configuration files in either of these directories for 32 vs 64 bits that might cause problems? After reinstalling to 64-bit, I can then continue along the 64 bit path for upgrades, but I thought it would be easier to switch the same version, than to try to reinstall apps and upgrade at the same time. Some methods I've seen suggested, among others: A. From Ubuntu forums On your old system (assuming it is still working), start up Synaptic and go: File->Save Markings and choose a file name along with a location (like a USB drive) that you can use when you have installed your new system). You need to check on the bottom: "Save full state, not only changes" This file contains a list of all your currently installed packages, and when you have installed and booted up your new system (and configured your repositories to the best for your location - as we all do, don't we?) then start up Synaptic and go: File-Read Markings and point it at your saved file, and after that has completed then select Apply to kick off the download & installation of all of those packages you had installed previously! B. From the same discussion: According to section 6.4.9 of the Debian Reference Manual, the following will save both the list of packages installed and their debconf configuration: # dpkg --get-selections "*" >myselections # or use \* # debconf-get-selections > debconfsel.txt and the following will reinstall and reconfigure them: # dselect update # debconf-set-selections < debconfsel.txt # dpkg --set-selections <myselections # apt-get -u dselect-upgrade # or dselect install C. A variation on the above I've seen a lot, this from stackoverflow: dpkg --get-selections > package_list then on the new install: cat package_list | sudo dpkg --set-selections && sudo apt-get dselect-upgrade I don't really understand B, or why it's slightly different than many others.

    Read the article

  • How to suggest using an ORM instead of stored procedures?

    - by Wayne M
    I work at a company that only uses stored procedures for all data access, which makes it very annoying to keep our local databases in sync as every commit we have to run new procs. I have used some basic ORMs in the past and I find the experience much better and cleaner. I'd like to suggest to the development manager and rest of the team that we look into using an ORM Of some kind for future development (the rest of the team are only familiar with stored procedures and have never used anything else). The current architecture is .NET 3.5 written like .NET 1.1, with "god classes" that use a strange implementation of ActiveRecord and return untyped DataSets which are looped over in code-behind files - the classes work something like this: class Foo { public bool LoadFoo() { bool blnResult = false; if (this.FooID == 0) { throw new Exception("FooID must be set before calling this method."); } DataSet ds = // ... call to Sproc if (ds.Tables[0].Rows.Count > 0) { foo.FooName = ds.Tables[0].Rows[0]["FooName"].ToString(); // other properties set blnResult = true; } return blnResult; } } // Consumer Foo foo = new Foo(); foo.FooID = 1234; foo.LoadFoo(); // do stuff with foo... There is pretty much no application of any design patterns. There are no tests whatsoever (nobody else knows how to write unit tests, and testing is done through manually loading up the website and poking around). Looking through our database we have: 199 tables, 13 views, a whopping 926 stored procedures and 93 functions. About 30 or so tables are used for batch jobs or external things, the remainder are used in our core application. Is it even worth pursuing a different approach in this scenario? I'm talking about moving forward only since we aren't allowed to refactor the existing code since "it works" so we cannot change the existing classes to use an ORM, but I don't know how often we add brand new modules instead of adding to/fixing current modules so I'm not sure if an ORM is the right approach (too much invested in stored procedures and DataSets). If it is the right choice, how should I present the case for using one? Off the top of my head the only benefits I can think of is having cleaner code (although it might not be, since the current architecture isn't built with ORMs in mind so we would basically be jury-rigging ORMs on to future modules but the old ones would still be using the DataSets) and less hassle to have to remember what procedure scripts have been run and which need to be run, etc. but that's it, and I don't know how compelling an argument that would be. Maintainability is another concern but one that nobody except me seems to be concerned about.
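
    For making the case concrete, a small side-by-side often helps: below is a hedged sketch of what the LoadFoo pattern above could look like with a micro-ORM such as Dapper (which maps query results onto typed objects over plain ADO.NET). The table and column names are assumed, and this is illustration only, not a claim about how the existing procs are shaped:

        using System.Data.SqlClient;
        using System.Linq;
        using Dapper;   // adds Query<T> extension methods to IDbConnection

        public class Foo
        {
            public int FooID { get; set; }
            public string FooName { get; set; }
        }

        public class FooRepository
        {
            private readonly string _connectionString;

            public FooRepository(string connectionString)
            {
                _connectionString = connectionString;
            }

            // Returns a typed Foo (or null) instead of an untyped DataSet looped over in code-behind.
            public Foo GetById(int fooId)
            {
                using (var connection = new SqlConnection(_connectionString))
                {
                    return connection
                        .Query<Foo>("SELECT FooID, FooName FROM Foo WHERE FooID = @FooID",   // could also call an existing sproc
                                    new { FooID = fooId })
                        .FirstOrDefault();
                }
            }
        }

        // Consumer:
        // Foo foo = new FooRepository(connectionString).GetById(1234);

    A micro-ORM like this can even keep calling the existing stored procedures, which may be an easier sell than a full ORM that wants to own the schema.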

    Read the article

  • How to update off screen bitmap in a surfaceview thread

    - by DKDiveDude
    I have a Surfaceview thread and an off canvas texture bitmap that is being generated (changed), first row (line), every frame and then copied one position (line) down on regular surfaceview bitmap to make a scrolling effect, and I then continue to draw other things on top of that. Well that is what I really want, however I can't get it to work even though I am creating a separate canvas for off screen bitmap. It is just not scrolling at all. I other words I have a memory bitmap, same size as Surfaceview canvas, which I need to scroll (shift) down one line every frame, and then replace top line with new random texture, and then draw that on regular Surfaceview canvas. Here is what I thought would work; My surfaceChanged where I specify bitmap and canvasses and start thread: @Override public void surfaceCreated(SurfaceHolder holder) { intSurfaceWidth = mSurfaceView.getWidth(); intSurfaceHeight = mSurfaceView.getHeight(); memBitmap = Bitmap.createBitmap(intSurfaceWidth, intSurfaceHeight, Bitmap.Config.ARGB_8888); memCanvas = new Canvas(memCanvas); myThread = new MyThread(holder, this); myThread.setRunning(true); blnPause = false; myThread.start(); } My thread, only showing essential middle running part: @Override public void run() { while (running) { c = null; try { // Lock canvas for drawing c = myHolder.lockCanvas(null); synchronized (mSurfaceHolder) { // First draw off screen bitmap to off screen canvas one line down memCanvas.drawBitmap(memBitmap, 0, 1, null); // Create random one line(row) texture bitmap memTexture = Bitmap.createBitmap(imgTexture, 0, rnd.nextInt(intTextureImageHeight), intSurfaceWidth, 1); // Now add this texture bitmap to top of off screen canvas and hopefully bitmap memCanvas.drawBitmap(textureBitmap, intSurfaceWidth, 0, null); // Draw above updated off screen bitmap to regular canvas, at least I thought it would update (save changes) shifting down and add the texture line to off screen bitmap the off screen canvas was pointing to. c.drawBitmap(memBitmap, 0, 0, null); // Other drawing to canvas comes here } finally { // do this in a finally so that if an exception is thrown // during the above, we don't leave the Surface in an // inconsistent state if (c != null) { myHolder.unlockCanvasAndPost(c); } } } } For my game Tunnel Run. Right now I have a working solution where I instead have an array of bitmaps, size of surface height, that I populate with my random texture and then shift down in a loop for each frame. I get 50 frames per second, but I think I can do better by instead scrolling bitmap.

    Read the article

  • Customer Experience and BPM – From Efficiency to Engagement

    - by Ajay Khanna
    Over the last few years, the focus of BPM has mainly been to improve business efficiency: to create more efficient processes, to remove bottlenecks, to automate processes. That still holds true, and why not? Isn’t BPM all about continuous improvement? BPM facilitates and requires business and IT collaboration. But business also requires working with customers. Do we not want to get close to and collaborate with our customers? This is where Social BPM takes BPM a step further. It not only allows people within an organization to collaborate to design exceptional processes, not only lets them collaborate on resolving a case, but also lets them engage with the customers. Engaging with customers means, first of all, connecting with them on their terms and turf. Take a new account opening process. Can a customer call you and initiate the process? Can a customer email you, or go to the website and initiate the process? Can they tweet you and initiate the process? Can they check the status of a process via any channel they like? Can they take a picture of a damaged package delivery and kick off a returns process from their mobile device, with GIS data? Yes, these are various aspects to consider during process design if the goal is better customer experience and engagement. Of course, we want to be efficient and agile, but the focus here needs to be the customer. Now, when the customer is tweeting about your products, posting on Facebook and Yelp about their experience with your company (and your process), you need to seek out that information. You need to gather and analyze the customer’s feedback on social media and use that information to improve the processes and products. This is an excellent source of product and process ideation. So BPM is no longer only about improving back-office process efficiency; it is moving into a new and exciting phase of improving front-line customer-facing processes, customer experience and engagement. Let me know how you think BPM can enhance customer experience.

    Read the article

  • Oracle Service Cloud May 2014 Release – Focus on your driving by JP Saunders

    - by Tuula Fai
    The next time you’re twiddling dials on your car’s dashboard to get the air to blow in the right direction, and the right song to play on the stereo, while pulling on the wires to charge your phone and punching in passwords to re-sync your hands-free headset to take a call, consider this… Does having a better dashboard UI in your car improve your driving performance? The Tesla car has one of the most modern and intuitive dashboards in any commercial car today. It is actually based on the design of a smart phone, which can download apps and updates directly from the cloud.  The 17” touchscreen, Lynx-based dashboard totally integrates all channels and devices, allowing the driver to focus on the smooth driving and power of this luxury (toy) car.  What the folks at Tesla didn't do was avoid the complexity of our needs. Instead, they streamlined them. And, while we might not all be able to afford a Tesla, their approach demonstrates that a modern UI approach can ultimately make a positive difference in our lives and businesses.  This is why the productivity and effectiveness of a Modern Contact Center is many times greater than that of a traditional contact center. Agents in a Modern Contact Center get to focus on the task at hand, the customer engagement, rather than stumbling their way through Lego blocks of complexity.  The Oracle Service Cloud is a modern approach to customer service that empowers your agents to achieve greater focus on improving your operational and strategic success through streamlined business processes.  Here are some of the recent May 2014 release highlights to the Oracle Service Cloud: Performance Enhanced Desktop UI A modern agent desktop interface that optimizes clumsy tasks, logins, screens and workflows and is optimized for agent and system performance. Improvements include performance for drag-and-drop configurable views, saved searches, and improved caching for high-speed performance even during disconnected or slow internet access.  Customer Experience Routing A streamlined automatic way to connect the right customer need to the best agent skills, based on multidimensional variables such as product skills, language skills, workload, call volume to optimize the connection and resolution experience. On-The-Go Mobile Improvements to the Agent mobile app that extend connectivity to websites, and customer surveys that are mobile-ready and rendered for any device, and ensure the customer’s voice is captured while the insight is still top of mind.  Infused Social Engagement Enhancements to infused social capabilities allow agents to respond in social threads directly from within the agent desktop, with the information becoming part of the incident record for automatic actions (such as replay or escalate) triggered off the response. Front-End Siebel Contact Center The market leading online Web Customer Self-Service interface from the Oracle Service Cloud, is now out-of-the-box ready for Oracle Siebel customers. Deploy a new online web self-service interface in a matter of weeks to have customers self-serve and self-solve answers, with escalated incidents routed directly into the Oracle Siebel Contact Center. For more information on the latest enhancements for the Oracle Service Cloud, please see the Oracle Service Cloud May 2014 Capabilities and Benefits. Related blogs: Oracle Service Cloud Feb 2014

    Read the article

  • Push-Based Events in a Services Oriented Architecture

    - by Colin Morelli
    I have come to a point, in building a services oriented architecture (on top of Thrift), that I need to expose events and allow listeners. My initial thought was, "create an EventService" to handle publishing and subscribing to events. That EventService can use whatever implementation it desires to actually distribute the events. My client automatically round-robins service requests to available service hosts which are determined using Zookeeper-based service discovery. So, I'd probably use JMS inside of EventService mainly for the purpose of persisting messages (in the event that a service host for EventService goes down before it can distribute the message to all of the available listeners). When I started considering this, I began looking into the differences between Queues and Topics. Topics unfortunately won't work for me, because (at least for now) all listeners must receive the message (even if they were down at the time the event was pushed, or hadn't made a subscription yet because they haven't completed startup (during deployment, for example) - messages should be queued until the service is available). However, I don't want EventService to be responsible for handling all of the events. I don't think it should have the code to react to events inside of it. Each of the services should do what it needs with a given event. This would indicate that each service would need a JMS connection, which calls into question the value of having EventService at all (as the services could individually publish and subscribe to JMS directly). However, it also couples all of the services to JMS (when I'd rather that there be a single service that's responsible for determining how to distribute events). What I had thought was to publish an event to EventService, which pulls a configuration of listeners from some configuration source (database, flat file, irrelevant for now). It replicates the message and pushes each one back into a queue with information specific to that listener (so, if there are 3 listeners, 1 event would become 3 events in JMS). Then, another thread in EventService (which is replicated, running on multiple hosts) would be pulling from the queue, attempting to make the service call to the "listener", and returning the message to the queue (if the service is down), or discarding the message (if the listener completed successfully). tl;dr If I have an EventService that is responsible for receiving events and delegating service calls to "event listeners" (which are really just endpoints on other services), how should it know how to craft the service call? Should I create a generic "Event" object that is shared among all services? Then, the EventService can just construct this object and pass it to the service call. Or is there a better answer to this problem entirely?
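
    A minimal sketch of the shared "Event" idea from the last paragraph, written in C# purely to illustrate the shape of the contract (the question's stack is Thrift/JMS, and the type and member names here are invented, not an existing API):

        // Illustrative only: a generic event envelope that every service can construct and consume.
        using System;
        using System.Collections.Generic;

        public sealed class EventEnvelope
        {
            public Guid EventId { get; set; }                        // unique per event
            public string EventType { get; set; }                    // e.g. "order.created"
            public string SourceService { get; set; }                // which service raised it
            public DateTime RaisedAtUtc { get; set; }
            public IDictionary<string, string> Payload { get; set; } // loosely typed body; each listener parses what it needs
        }

        public interface IEventListener
        {
            string ListenerEndpoint { get; }        // endpoint resolved via the existing service discovery
            void Handle(EventEnvelope evt);         // the call EventService would make on delivery
        }

        // EventService's fan-out, roughly: one incoming event becomes one queued delivery per configured listener.
        public static class EventFanOut
        {
            public static IEnumerable<KeyValuePair<string, EventEnvelope>> Replicate(
                EventEnvelope evt, IEnumerable<IEventListener> listeners)
            {
                foreach (var listener in listeners)
                    yield return new KeyValuePair<string, EventEnvelope>(listener.ListenerEndpoint, evt);
            }
        }

    With a single envelope type like this, EventService never needs per-listener knowledge beyond an endpoint; the trade-off is that payloads are loosely typed and each listener must validate what it reads.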

    Read the article

  • How to export 3D models that consist of several parts (eg. turret on a tank)?

    - by Will
    What are the standard alternatives for the mechanics of attaching turrets and such to 3D models for use in-game? I don't mean the logic, but rather the graphics aspects. My naive approach is to extend the MD2-like format that I'm using (blender-exported using a script) to include a new set of properties for a mesh: it is anchored in another 'parent' mesh (the anchor is a point and normal in the parent mesh and a point and normal in the child mesh; these will always be collinear, giving the child rotation but not translation relative to the parent point); it has a normal that is aligned with a 'target' (classically this target is the enemy being engaged, but it might be some other vector, e.g. 'the wind' (for sails and flags (and smoke, which is a particle system, but the same principle applies)) or 'upwards' (e.g. so the bodies of riders bend properly when riding a horse up an incline, etc.)); the anchor and target alignments have a maximum, a minimum, and a speed coefficient; and there is game logic for multiple turrets on a model and for deciding which engages which enemy ('primary' and 'secondary' or 'target0' ... 'targetN' or some such annotation will be there). So to illustrate, a classic tank would be made from three meshes: a main body mesh, a turret mesh that is anchored to the top of the main body so it can spin only horizontally, and a barrel mesh that is anchored to the front of the turret and can only move vertically within some bounds. And there might be a fourth flag mesh on top of the turret that is aligned with 'wind', where wind is a function the engine solves that merges the environment's wind angle with the angle and velocity at which the vehicle is travelling, or something fancy. This gives each mesh one degree of freedom relative to its parent. Things with multiple degrees of freedom can perhaps be modelled by zero-vertex connecting meshes? This is where I think the approach I outlined begins to feel inelegant, yet perhaps it's still a workable system? This is why I want to know how it is done in professional games ;) Are there better approaches? Are there formats that already include this information? Is this routine?
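
    A rough sketch of the one-degree-of-freedom attachment described above (anchor plus constrained aim), written in C# with XNA-style math types purely for illustration; the class and field names are invented, and a real exporter/engine would bake the anchor data differently:

        using Microsoft.Xna.Framework;

        public class AttachedMesh
        {
            public AttachedMesh Parent;              // null for the root/hull mesh
            public Matrix AnchorOffset;              // the anchor point and normal in the parent, baked into a transform
            public Vector3 LocalAxis = Vector3.Up;   // the single axis this part may rotate around
            public float MinAngle, MaxAngle;         // alignment limits
            public float TurnSpeed;                  // radians per second (the "speed coeff")
            public float CurrentAngle;

            // Turn toward the desired angle by at most TurnSpeed * dt, clamped to the limits.
            public void AimAt(float desiredAngle, float dt)
            {
                float step = MathHelper.Clamp(desiredAngle - CurrentAngle, -TurnSpeed * dt, TurnSpeed * dt);
                CurrentAngle = MathHelper.Clamp(CurrentAngle + step, MinAngle, MaxAngle);
            }

            // World transform = spin around the local axis, then the anchor offset, then the parent's transform.
            public Matrix WorldTransform(Matrix rootWorld)
            {
                Matrix parentWorld = Parent != null ? Parent.WorldTransform(rootWorld) : rootWorld;
                return Matrix.CreateFromAxisAngle(LocalAxis, CurrentAngle) * AnchorOffset * parentWorld;
            }
        }

    Chaining two such nodes (turret anchored to hull, barrel anchored to turret) reproduces the tank example, and a zero-length intermediate node gives an attachment more than one degree of freedom.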

    Read the article

  • Managing Joomla via Android

    Surprisingly, it was only today that I actually looked for possible solutions to write more content for my blog. For quite some time I have been using my Samsung Galaxy Tab 10.1 for all kinds of social media activities like Google+, FB, etc., but also for my casual mail during the evening hours. And yes, I feel a little bit guilty about missing the chance to use my tablet to write some content here... OK, only a little bit. ;-) These are not the droids you are looking for But those lazy times are over! While searching the Play Store with the expression 'joomla' I got three interesting hits: - Joomla Admin Mobile! - Joooid - Joomla! Security Checklist After reading the reviews I installed the two latter apps. Joomla! Security Checklist The author clearly outlines here that the app is primarily for his own purpose of having a safety checklist at hand at any time. I guess that any reader of this article has an Android-based smartphone or tablet, so that simple app should be part of your toolbox when using Joomla! for your websites. Joooid plugin & app Although I was looking for an app that could work with the default XML-RPC interface of Joomla, I have to admit that this combination of an enhanced Web service suits me better, mainly for performance reasons. The official website has not only the downloads for Joomla versions 1.5 - 2.5 but also very good and easy-to-follow step-by-step instructions to prepare your server for the Android app. It will take you less than 5 minutes to get it up and running. For safety reasons, I recommend that you configure your Web server to have an additional authentication layer on the plugins folder. The smartphone app has the ability to run against HTTP authentication. Personally, I like the look and feel of the app. It is a little bit different compared to the web UI but still easy to use. In fact, this article is the first one written in the Joooid app. At the moment, I only miss the ability to have list tags. Quick and easy Writing full-fledged articles with images, a couple of hyperlinks and some styling here and there should be left to the desktop. At least for the moment. Let's see whether I'm going to change my mind on this during the upcoming months... I'll give it a try, and hope to publish at least once per month using Joooid. Actually, it would be great to have some feedback about other Joomla! clients in the wild.

    Read the article

  • Stuff I learned at Innovate 2011

    - by David Dorf
    After returning from the NRF Innovate 2011 conference, I picked up a few nuggets I thought I'd share here. These thoughts are a bit random, but I hope they're useful nonetheless. Kevin Kelly opened the conference with six verbs that represent the future. They were Screening, Interacting, Sharing, Accessing, Flowing, and Generating. It struck me that these are all ways in which we merge the digital and physical worlds. The internet of things continues to gain momentum. Some buzzwords: deal economy, subscription commerce, discovery (instead of search), curation. That last one, curation, came up over and over. Retailers, especially those in fashion, are finding value in helping their customers organize and present their own collections. Social media has made sharing such collections easy, and mobile lets them take those ideas into the stores. Mannequins are becoming less relevant. I heard from both HauteLook and Gilt Groupe (flash sale retailers) that a large percentage of their visits come from mobile devices, and most of those are iOS devices. I find it interesting that even though Android has passed iPhone in units shipped (and will eventually pass iOS as a whole), it's still the Apple crowd that leads the way. RadioShack mentioned their Holiday Heroes campaign was very successful. They asked their Foursquare users to check in at a gym, coffee shop, and transportation hub as part of being a hero. For this feat, customers were awarded a special badge that was worth 20% off at their next store visit. They claim a 3.5x increase in ticket size vs. regular check-in customers, and a 5x increase vs. those that don't check in at all. I also learned of RadioShack's #28 campaign, which is apparently one of the largest Twitter trends ever. Their partnership with LIVESTRONG has gotten them followers, impressions, and credit for supporting the fight against cancer. The guys at Invodo showed the importance of video to e-commerce. They gave compelling examples of how video can show customers the value of products better than just words. The highlight of the show was Guy Kawasaki's talk on innovation, which was not only informative but also peppered with humor and personality. Back in the early days of the internet boom, Guy turned down the CEO position at Yahoo! because the commute was too long. By his calculation, that was a $2B mistake. There are other good accounts of the conference at the NRF Blog.

    Read the article

  • best way to "introduce" OOP/OOD to team of experienced C++ engineers

    - by DXM
    I am looking for an efficient way, one that doesn't come off as an insult, to introduce OOP concepts to existing team members. My teammates are not new to OO languages. We've been doing C++/C# for a long time, so the technology itself is familiar. However, I look around and, without a major infusion of effort (mostly in the form of code reviews), it seems what we are producing is C code that happens to be inside classes. There's almost no use of the single responsibility principle, abstractions or attempts to minimize coupling, just to name a few. I've seen classes that don't have a constructor but get memset to 0 every time they are instantiated. But every time I bring up OOP, everyone always nods and makes it seem like they know exactly what I'm talking about. Knowing the concepts is good, but we (some more than others) seem to have a very hard time applying them when it comes to delivering actual work. Code reviews have been very helpful, but the problem with code reviews is that they only occur after the fact, so to some it seems we end up rewriting (it's mostly refactoring, but still takes lots of time) code that was just written. Also, code reviews only give feedback to an individual engineer, not the entire team. I am toying with the idea of doing a presentation (or a series) and trying to bring up OOP again along with some examples of existing code that could have been written better and could be refactored. I could use some really old projects that no one owns anymore, so at least that part shouldn't be a sensitive issue. However, will this work? As I said, most people have done C++ for a long time, so my guess is that a) they'll sit there thinking why I'm telling them stuff they already know, or b) they might actually take it as an insult because I'm telling them they don't know how to do the job they've been doing for years if not decades. Is there another approach which would reach a broader audience than a code review would, but at the same time wouldn't feel like a punishment lecture? I'm not a fresh kid out of college who has utopian ideals of perfectly designed code, and I don't expect that from anyone. The reason I'm writing this is because I just did a review of a person who actually had a decent high-level design on paper. However, if you picture classes A - B - C - D, in the code B, C and D all implement almost the same public interface and B/C have one-liner functions, so that the top-most class A is doing absolutely all the work (down to memory management, string parsing, setup negotiations...) primarily in 4 mongo methods and, for all intents and purposes, calls almost directly into D. Update: I'm a tech lead (6 months in this role) and do have the full support of the group manager. We are working on a very mature product and maintenance costs are definitely letting themselves be known.
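
    If a presentation does happen, one concrete device is a tiny before/after example on a slide rather than abstract principle names. The sketch below is invented for illustration (it is not code from any real project) and is in C# since the team works in C++/C#:

        // Before: one "god method" that parses, validates and persists in a single place.
        public class OrderManager
        {
            public void Process(string rawOrder)
            {
                // parse the text, validate fields, compute totals, write to the database, send email...
            }
        }

        // After: each collaborator has a single reason to change; the coordinator only coordinates.
        public class Order { /* fields elided */ }

        public interface IOrderParser    { Order Parse(string rawOrder); }
        public interface IOrderValidator { void Validate(Order order); }
        public interface IOrderStore     { void Save(Order order); }

        public class OrderProcessor
        {
            private readonly IOrderParser _parser;
            private readonly IOrderValidator _validator;
            private readonly IOrderStore _store;

            public OrderProcessor(IOrderParser parser, IOrderValidator validator, IOrderStore store)
            {
                _parser = parser;
                _validator = validator;
                _store = store;
            }

            public void Process(string rawOrder)
            {
                Order order = _parser.Parse(rawOrder);
                _validator.Validate(order);
                _store.Save(order);
            }
        }

    Pairing a slide like this with a refactoring of one of the old, unowned projects keeps the discussion about code rather than about people.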

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed Computing - and more importantly “-as-a-Service” models of computing have a different cost model. This is something that sounds obvious on the surface but it’s often forgotten during the design and coding phase of a project. In on-premises computing, we’re used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or “sunk” cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you’ve already paid for, we don’t often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better. In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don’t buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things. It’s not just that you’re using things that now cost money - it’s that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie back the cost of a series of clicks to what a user will pay to do so, you can set a profit margin that is easy to track. Here’s a case in point: Assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don’t monitor the path of the application, you may not know what you are really using. Since you’re paying by the size of the instance, it’s best to maximize it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit. The key is this: from the very outset - the design - make sure you include metrics to measure for the cost/performance (sometimes these are the same) for your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try and force in the application design you have on premises into your new application structure.
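
    A minimal sketch of the "design the metrics in from the outset" advice, in C#; the operation names and the logging sink are placeholders rather than a specific Azure API, the point is only that every billable call gets timed and attributed:

        using System;
        using System.Diagnostics;

        public static class MeteredCall
        {
            // Wrap any data-store or compute call so its duration is recorded against a named operation.
            public static T Measure<T>(string operation, Func<T> call, Action<string> log)
            {
                Stopwatch sw = Stopwatch.StartNew();
                T result = call();
                sw.Stop();
                log(operation + ": " + sw.ElapsedMilliseconds + " ms");   // feed this into whatever metrics store you use
                return result;
            }
        }

        // Hypothetical usage:
        // var orders = MeteredCall.Measure("GetOrders", () => repository.GetOrders(customerId), Console.WriteLine);

    With per-operation numbers like these, the instance-size, caching and storage decisions described above can be tied back to what each click actually costs.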

    Read the article

  • How to switch off wifi on startup or from the console

    - by mit
    I have installed Ubuntu 10.04 on a laptop. Wifi is switched on by default on startup. I can disable it by right-clicking the network manager icon in the gnome bar. How can I set it so that wifi is switched off by default? Alternatively, how can I switch off wifi on the console? I have already tried the rfkill command, but it does not list any devices and it does not switch off wifi; I tried different parameters. This is a standard install of the Ubuntu 10.04 i386 Desktop Live CD on an IBM T40 Laptop. EDIT A: This is the output of some rfkill commands on my system, and it does not affect the wifi of the laptop: $ rfkill --help Usage: rfkill [options] command Options: --version show version (0.4) Commands: help event list [IDENTIFIER] block IDENTIFIER unblock IDENTIFIER where IDENTIFIER is the index no. of an rfkill switch or one of: <idx> all wifi wlan bluetooth uwb ultrawideband wimax wwan gps fm $ rfkill list $ rfkill list wifi $ rfkill list all $ rfkill list wlan $ sudo rfkill list all $ sudo rfkill block all $ sudo rfkill block wlan $ sudo rfkill block wifi $ EDIT B: Now I found out that sudo ifconfig eth1 down turns it off. And I can turn it on through the gnome network applet again. But the applet does not reflect the change from the command line; it still believes wifi is switched on. After switching it off from the console, I have to switch it off and on again in the applet to switch it back on. Is there a better way? This is what the syslog looks like when I switch wireless off and on again from the network manager: NetworkManager: <info> (eth1): device state change: 3 -> 2 (reason 0) NetworkManager: <info> (eth1): deactivating device (reason: 0). NetworkManager: <info> Policy set '24' (eth0) as default for routing and DNS. NetworkManager: <info> (eth1): taking down device. avahi-daemon[660]: Withdrawing address record for fe80::202:8aff:feba:d798 on eth1. kernel: [ 971.472116] airo(eth1): cmd:3 status:7f03 rsp0:0 rsp1:0 rsp2:0 NetworkManager: <info> (eth1): bringing up device. NetworkManager: <info> (eth1): supplicant interface state: starting -> ready NetworkManager: <info> (eth1): device state change: 2 -> 3 (reason 42) avahi-daemon[660]: Registering new address record for fe80::202:8aff:feba:d798 on eth1.*. kernel: [ 965.512048] eth1: no IPv6 routers present

    Read the article

  • What are the drawbacks of sending XML to browsers and let them apply XSLT?

    - by MainMa
    Context Working as a freelance developer, I have often built websites completely based on XSLT. In other words, on every request, an XML file is generated, containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. Then XSL processes it (with caching, etc.) into an HTML/XHTML page to send to the browser. Its good point is that it makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one which I prefer to other template engines because it's much more powerful than most template engines, and because I know it better and like it. It is also possible, when needed, to give access to the raw XML data on demand for automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since, even with good caching techniques, XSL still degrades overall website performance and requires more CPU server-side. Question Modern browsers have the ability to take an XML file and transform it with an associated XSL file declared in the XML like <?xml-stylesheet href="demo.xslt" type="text/xsl"?>. Firefox 3 can do it. Internet Explorer 8 can do it too. This means that it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). It means that those 50% of users will receive only the XML file on each request, thus reducing their and the server's bandwidth (the XML file being much shorter than its processed HTML analog), and reducing the server's CPU usage. What are the drawbacks of this technique? I thought about several, but they don't apply in this situation: Difficult implementation and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one; the only changes to make are to add an XSL file link to every XML file and to add a browser check. More IO and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of staying on the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (like images, CSS, or JavaScript files actually are). Possibly some problems on the client side, like maybe problems when saving a page in some browsers. Difficulty debugging the code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely look at HTML code on the client side, and in most cases it is unusable directly (whitespace being removed).
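
    The first drawback (choosing per request between raw XML and server-side transformation) can be sketched in a few lines. The post's stack is PHP; the snippet below uses C# and .NET's XslCompiledTransform only to illustrate the decision, and the browser check shown is a deliberately naive assumption:

        using System.Xml.Xsl;

        public static class PageRenderer
        {
            // Naive capability check for illustration; a real one would use a maintained list or feature detection.
            static bool SupportsClientXslt(string userAgent) =>
                userAgent != null && (userAgent.Contains("Firefox") || userAgent.Contains("MSIE 8"));

            public static (string body, string contentType) Render(string xmlPath, string xsltPath, string userAgent)
            {
                if (SupportsClientXslt(userAgent))
                {
                    // Send the XML as-is; it already carries <?xml-stylesheet href="demo.xslt" type="text/xsl"?>.
                    return (System.IO.File.ReadAllText(xmlPath), "text/xml");
                }

                // Fall back to transforming on the server for everyone else.
                var xslt = new XslCompiledTransform();
                xslt.Load(xsltPath);
                using (var writer = new System.IO.StringWriter())
                {
                    xslt.Transform(xmlPath, null, writer);
                    return (writer.ToString(), "text/html");
                }
            }
        }

    Either way the same XML and XSLT remain the single source of truth; only the place where the transformation runs changes.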

    Read the article

  • How to determine the source of a request in a distributed service system?

    - by Kabumbus
    Map/Reduce is a great concept for sorting large quantities of data at once. But what do you do if you have small parts of data and you need to reduce them all the time? Simple example - choosing a service for a request. Imagine we have 10 services. Each provides the services host with sets of request headers and post/get arguments. Each service declares it has 30 unique keys - 10 per set. service A: name id ... Now imagine we have a distributed services host. We have 200 machines with 10 services on each. Each service has 30 unique keys in its sets. But now, to find which service to map the incoming request to, we make our services post unique values that map to those sets. We can have up to, or more than, 10 000 such value sets on each machine for each service. service A machine 1 name = Sam id = 13245 ... service A machine 1 name = Ben id = 33232 ... ... service A machine 100 name = Ron id = 777888 ... So we get 200 * 10 * 30 * 30 * 10 000 == 18 000 000 000, and we get 500 requests per second on our gateway, each containing 45 items, 15 of which are just noise. And our task is to find a service for a request (or at least the machine it is running on). On all machines across the cluster, the same services have the same rules. We can first select which service the request is for via a rules filter (10 * 30), and we will have 200 * 30 * 10 000 == 60 000 000. So... 60 million is definitely a problem... My idea is to map the 30 * 10 000 values onto an artificial neural network like a perceptron that outputs 1 if the 30 words (or rather hashes of the words) from the request match, and returns 0 if fewer match. And I'll send each such perceptron for each service from each machine to the gateway. So I would have a perceptron <-> machine map for each service. Can anyone tell me if my perceptron idea is at least "sane"? Or do people normally do it some other way? Or are there better ANNs for such purposes?
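
    For what the per-service classifier could look like, here is a rough C# sketch of a plain single-layer perceptron over hashed request keys. It is only meant to show the shape of the idea (hash the ~30 keys into a fixed-size feature vector, learn one weight per feature); the feature size, hashing and training loop are assumptions, not a tuned solution:

        using System;

        public class RequestPerceptron
        {
            private readonly double[] _weights;
            private double _bias;

            public RequestPerceptron(int featureBits = 4096)
            {
                _weights = new double[featureBits];
            }

            // Hash each request key into a sparse one-hot feature vector.
            private int[] Features(string[] requestKeys)
            {
                var x = new int[_weights.Length];
                foreach (string key in requestKeys)
                    x[(key.GetHashCode() & 0x7fffffff) % x.Length] = 1;
                return x;
            }

            // 1 = "this request belongs to my service/machine", 0 = it does not.
            public int Predict(string[] requestKeys)
            {
                int[] x = Features(requestKeys);
                double sum = _bias;
                for (int i = 0; i < x.Length; i++)
                    sum += _weights[i] * x[i];
                return sum > 0 ? 1 : 0;
            }

            // Standard perceptron update; train with positive examples from this machine and negatives from others.
            public void Train(string[] requestKeys, int label, double learningRate = 0.1)
            {
                int error = label - Predict(requestKeys);
                if (error == 0) return;
                int[] x = Features(requestKeys);
                for (int i = 0; i < x.Length; i++)
                    _weights[i] += learningRate * error * x[i];
                _bias += learningRate * error;
            }
        }

    Whether a learned classifier actually beats a plain hash lookup of the 30 keys is exactly the question worth benchmarking before committing to the ANN route.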

    Read the article

  • Loading levels from .txt or .XML for XNA

    - by Dave Voyles
    I'm attemptin to add multiple levels to my pong game. I'd like to simply exchange a few elements with each level, nothing crazy. Just the background texture, the color of the AI paddle (the one on the right side), and the music. It seems that the best way to go about this is by utilizing the StreamReader to read and write the files from XML. If there is a better, or more efficient alternative way then I'm all for it. In looking over the XNA Starter Platformer Kit provided by MS it seems that they've done it in this manner as well. I'm perplexed by a few things, however, namely parts within the Level class which aren't commented. /// <summary> /// Iterates over every tile in the structure file and loads its /// appearance and behavior. This method also validates that the /// file is well-formed with a player start point, exit, etc. /// </summary> /// <param name="fileStream"> /// A stream containing the tile data. /// </param> private void LoadTiles(Stream fileStream) { // Load the level and ensure all of the lines are the same length. int width; List<string> lines = new List<string>(); using (StreamReader reader = new StreamReader(fileStream)) { string line = reader.ReadLine(); width = line.Length; while (line != null) { lines.Add(line); if (line.Length != width) throw new Exception(String.Format("The length of line {0} is different from all preceeding lines.", lines.Count)); line = reader.ReadLine(); } } What does width = line.Length mean exactly? I mean I know how it reads the line, but what difference does it make if one line is longer than any of the others? Finally, their levels are simply text files that look like this: .................... .................... .................... .................... .................... .................... .................... .........GGG........ .........###........ .................... ....GGG.......GGG... ....###.......###... .................... .1................X. #################### It can't be that easy..... Can it?
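
    On the uncommented part: width = line.Length just records the length of the first row so the loop can verify that every subsequent row has the same length, i.e. that the level grid is rectangular; the exception is thrown the moment a line of a different length appears. For swapping only a background texture, AI paddle color and song per level, a small key=value text file may be all that is needed - below is a hedged C# sketch (file format, keys and field names are invented for illustration, not taken from the starter kit):

        using System.IO;
        using Microsoft.Xna.Framework;

        public class LevelTheme
        {
            public string BackgroundTexture = "background_default";
            public string Song = "song_default";
            public Color AiPaddleColor = Color.White;

            public static LevelTheme Load(string path)
            {
                var theme = new LevelTheme();
                foreach (string line in File.ReadAllLines(path))
                {
                    string[] parts = line.Split('=');
                    if (parts.Length != 2) continue;                  // skip blank or malformed lines
                    string key = parts[0].Trim(), value = parts[1].Trim();
                    switch (key)
                    {
                        case "background": theme.BackgroundTexture = value; break;
                        case "song":       theme.Song = value; break;
                        case "paddleColor":                           // e.g. paddleColor = 255,0,0
                            string[] rgb = value.Split(',');
                            theme.AiPaddleColor = new Color(int.Parse(rgb[0]), int.Parse(rgb[1]), int.Parse(rgb[2]));
                            break;
                    }
                }
                return theme;
            }
        }

    The tile grid itself can stay in the plain text format the starter kit already uses; the theme file only carries the few per-level settings that change.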

    Read the article

  • Games at Work Part 1: Introduction to Gamification and Applications

    - by ultan o'broin
    Games Are Everywhere How many of you (will admit to) remember playing Pong? OK then, do you play Angry Birds on your phone during work hours? Thought about why we keep playing online, video, and mobile games and what this "gamification" business we're hearing about means for the enterprise applications user experience? In Reality Is Broken: Why Games Make Us Better and How They Can Change the World, Jane McGonigal says that playing computer and online games now provides more rewards for people than their real lives do. Games offer intrinsic rewards and happiness to the players as they pursue more satisfying work and the success, social connection, and meaning that goes with it. Yep, Gran Turismo, Dungeons & Dragons, Guitar Hero, Mario Kart, Wii Boxing, and the rest are all forms of work it seems. Games are, in fact, work taken so seriously that governments now move to limit the impact of virtual gaming currencies on the real financial system. Anyone who spends hours harvesting crops on FarmVille realizes it’s hard work too. Yet games evoke a positive emotion in players who voluntarily stay engaged with games for hours, day after day. Some 183 million active gamers in the United States play on average 13 hours per week. Weekly, 5 million of those gamers play for longer than a working week (45 hours). So why not harness the work put into games to solve real-world problems? Or, in the case of our applications users, real-world work problems? What’s a Game? Jane explains that all games have four defining traits: a goal, rules, a feedback system, and voluntary participation. We need to look at what motivational ideas behind the dynamics of the game—what we call gamification—are appropriate for our users. Typically, these motivators are achievement, altruism, competition, reward, self-expression, and status). Common game techniques for leveraging these motivations include: Badging and avatars Points and awards Leader boards Progress charts Virtual currencies or goods Gifting and giving Challenges and quests Some technology commentators argue for a game layer on top of everything, but this layer is already part of our daily lives in many instances. We see gamification working around us already: the badging and kudos offered on My Oracle Support or other Oracle community forums, becoming a Dragon Slayer implementor of Atlassian applications, being made duke of your favorite coffee shop on Yelp, sharing your workout details with Nike+, or donating to Japanese earthquake relief through FarmVille, for example. And what does all this mean for the applications that you use in your work? Read on in part two...

    Read the article

  • Visual studio Real Dark mode (2010,2012,2013)

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/02/visual-studio-real-dark-mode-201020122013.aspx When Visual Studio 2010 was released 3 years ago, I soon showed a demo to some people of how a dark mode for Visual Studio would be a great idea. Soon we got some theme plugins which let us modify the look of Visual Studio. http://studiostyl.es/ already provides lots of wonderful color schemes that let you modify the theme. These themes also work in WebMatrix 2. WebMatrix 2 has a plugin for themes, made by Yishai Galatzer, that is awesome for WebMatrix 2. In Visual Studio 2012 we got a native dark mode. This means we can configure it without any plugin or other requirement. In this post I have a demo to show you how to use the dark mode that is part of Windows 7 (and Windows 8 too). A few months ago I described a problem where WebMatrix 2 runs slowly; it runs better in Windows 7 dark mode. Windows 7 dark mode simply refers to right click > Personalize > High Contrast theme at the bottom of the window. This setting makes things a little bit faster. When you have set this, you will see that Visual Studio no longer looks good, because its color scheme is now broken. What you need to do now is import a theme from http://studiostyl.es/. When you import one, it will look good again, as shown here. This is the demo look of Windows Phone Express 2010. It will work the same for future versions such as 2012 and 2013. Now your VS looks dark. Everything is dark now. Firefox and IE will not run totally in blackish mode, but you can use Chrome; Chrome is less affected by the dark setting. Now if you benchmark it, you will find that everything that used to take a long time to load now runs fast. Note: this is an experiment. Remember to back up your settings before applying a new theme. All I am doing here is making my VS run faster. If you have any trouble or ideas, please comment. Thanks for reading my post

    Read the article

  • How is programming affected by spatial aptitude?

    - by natli
    The longer I work on a project, the less clear it becomes. It's like I cannot separate various classes/objects anymore in my head. Everything starts mixing up, and it's extremely hard to take it all apart again. I start putting functions in classes where they really don't belong, and make silly mistakes such as writing code that I later find was 100% obsolete; things are no longer clearly mappable in my head. It isn't until I take a step back for several hours (or days sometimes!) that I can actually see what's going on again, and be productive. I usually try to fight through this; I am so passionate about coding that I wouldn't for the life of me know what else I could be doing. This is when stuff can get really weird: I get so up in my head that I sort of lose touch with reality (to some extent) in that various actions, such as pouring a glass of water, no longer happen on a conscious level. It happens on autopilot, during which pretty much all of my conscious concentration (is that even a thing?) is devoted to borderline pointless problem solving (trying to separate elements of code). It feels like a losing battle. So I took an IQ test a while ago (the Wechsler Adult Intelligence Scale, I believe it was) and it turned out my spatial aptitude was quite low. I still got a decent score, just above average, so I won't have to poke things with a stick for a living, but I am a little worried that this is such a handicap when writing/engineering computer programs that I won't ever be able to do it seriously or professionally. I am very much interested in what other people think of this... Could a low spatial aptitude be the cause of the problems described above? Maybe I should be looking more along the lines of ADD or something similar, because I did get diagnosed with ADD at the age of 17 (5 years ago), but the medicine I received didn't seem to affect me that much so I never took it all that seriously. Sorry if I got a little off topic there; I know this is not a mental help board. The question should be clear: How is programming affected by spatial aptitude? As far as I know people are born with low/med/high spatial aptitude, so I think it's interesting to find out if the more fortunate are better programmers by birthright.

    Read the article

  • Convert VARCHAR() columns to NVARCHAR()

    - by ChrisD
    We recently underwent an upgrade that required us to change our database columns from varchar to nvarchar, to support Unicode characters. Digging through the internet, I found a base script which I modified to handle reserved-word table names and maintain the NULL/NOT NULL constraint of the columns. I ran this script:

        use NWOperationalContent -- Your Catalog Name here
        GO
        SELECT 'ALTER TABLE ' + isnull(schema_name(syo.id), 'dbo') + '.[' + syo.name +'] '
            + ' ALTER COLUMN [' + syc.name + '] NVARCHAR(' + case syc.length when -1 then 'MAX'
                ELSE convert(nvarchar(10),syc.length) end + ') '+
                case syc.isnullable when 1 then ' NULL' ELSE ' NOT NULL' END +';'
        FROM sysobjects syo
        JOIN syscolumns syc ON syc.id = syo.id
        JOIN systypes syt ON syt.xtype = syc.xtype
        WHERE syt.name = 'varchar'
            and syo.xtype='U'

    which produced a series of ALTER statements which I could then execute against the tables. In some cases I had to drop indexes, alter the tables, and re-create the indexes. There might have been a better way to do that, but manually dropping them got the job done.

        use NWMerchandisingContent
        GO
        ALTER TABLE Locale DROP CONSTRAINT PK_Locale
        ALTER TABLE Country DROP CONSTRAINT PK_Country
        GO
        ALTER TABLE dbo.[Campaign] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
        ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [UnitOfmeasure] NVARCHAR(200) NULL;
        ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
        ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Imperative] NVARCHAR(MAX) NULL;
        ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Instructions] NVARCHAR(MAX) NULL;
        ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[BundleComponent] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[Bundle] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[Banner] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[Video] ALTER COLUMN [Link] NVARCHAR(512) NOT NULL;
        ALTER TABLE dbo.[Video] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[ProductUsage] ALTER COLUMN [VideoLink] NVARCHAR(512) NOT NULL;
        ALTER TABLE dbo.[ProductUsage] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[Thumbnail] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
        ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [UnitOfMeasure] NVARCHAR(150) NOT NULL;
        ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [SwatchColor] NVARCHAR(50) NOT NULL;
        etc..
        GO
        ALTER TABLE Locale ADD CONSTRAINT PK_Locale PRIMARY KEY (LocaleId)
        ALTER TABLE Country ADD CONSTRAINT PK_Country PRIMARY KEY (CountryId)

    Note that this ALTER is non-destructive to the data. Hope this helps.

    Read the article

  • Updating an Entity through a Service

    - by GeorgeK
    I'm separating my software into three main layers (maybe tiers would be a better term): Presentation ('Views'), Business logic ('Services' and 'Repositories'), and Data access ('Entities' (e.g. ActiveRecords)). What do I have now? In Presentation, I use read-only access to Entities, returned from Repositories or Services, to display data. $banks = $banksRegistryService->getBanksRepository()->getBanksByCity( $city ); $banksViewModel = new PaginatedList( $banks ); // some way to display banks; // example, not real code I find this approach quite efficient in terms of performance and code maintainability, and still safe as long as all write operations (create, update, delete) are performed through a Service: namespace Service\BankRegistry; use Service\AbstractDatabaseService; use Service\IBankRegistryService; use Model\BankRegistry\Bank; class Service extends AbstractDatabaseService implements IBankRegistryService { /** * Registers a new Bank * * @param string $name Bank's name * @param string $bik Bank's Identification Code * @param string $correspondent_account Bank's correspondent account * * @return Bank */ public function registerBank( $name, $bik, $correspondent_account ) { $bank = new Bank(); $bank -> setName( $name ) -> setBik( $bik ) -> setCorrespondentAccount( $correspondent_account ); if( null === $this->getBanksRepository()->getDefaultBank() ) $this->setDefaultBank( $bank ); $this->getEntityManager()->persist( $bank ); return $bank; } /** * Makes the $bank system's default bank * * @param Bank $bank * @return IBankRegistryService */ public function setDefaultBank( Bank $bank ) { $default_bank = $this->getBanksRepository()->getDefaultBank(); if( null !== $default_bank ) $default_bank->setDefault( false ); $bank->setDefault( true ); return $this; } } Where am I stuck? I'm struggling with how to update certain fields in the Bank Entity. Bad solution #1: Making a series of setters in the Service for each setter in Bank - this seems quite redundant, increases the Service interface's complexity and proportionally decreases its simplicity, something to avoid if you care about code maintainability. I try to follow the KISS and DRY principles. Bad solution #2: Modifying Bank directly through its native setters - really bad. If you ever need to move the modification into the Service, it will be a pain. Business logic should remain in the Business logic layer. Plus, there are plans to log all of the actions and maybe even involve user permissions (perhaps through decorators) in the future, so all modifications should be made only through the Service. Possible good solution: Creating an updateBank( Bank $bank, $array_of_fields_to_update ) method - this makes the interface as simple as possible, but there is a problem: one should not try to manually set the isDefault flag on a Bank; this operation should be performed through the setDefaultBank method. It gets even worse when you have relations that you don't want to be directly modified. Of course, you can just limit the fields that can be modified by this method, but how do you tell the method's user what they can and cannot modify? Exceptions?

    Read the article

  • Career Change Need Advice: Professional Web Developer

    - by bikedorkseattle
    I'm hoping to get some advice here on the steps I should take to make a career change into professional web development. I've been working in cancer research for the last 14 years and I need a change. The job market is terrible, the pay is worse, and despite what one would think, the atmosphere is generally un-collegial, even in your own group. Venture funding never returned after the dot-com bust, and with 3 to 5 wars our country is now in, NIH funding is only going to get worse. I know things are not going to get better for my field, sadly, and I know I need to move on. For probably just as long, I have fiddled around with web development; I even run a fairly popular site with close to 1 million pageviews a month that pulls a decent income, but not one stable enough to live off of right now. My skills are OK for being self-taught. I enjoy the fast-paced nature of the web and the tools the community creates and how eager people are to help and share knowledge; it's what science should be. I have been trying to find an entry-level developer job doing standard HTML/CSS/PHP/MySQL/JS/jQuery type work. A good 50%+ of the jobs want someone with a CS degree, and most want 5 years experience. Having no professional experience and no formal education, I know I'm at a huge disadvantage. I am now considering my options on how to move forward professionally. The way I see it, I have basically 3 options. 1. Build up my portfolio of work as much as I can and continue to learn as much as I can on my own. Try to contribute to some open source project when time allows. Network like crazy and go to meetups. Be confident and pray a lot in private. OR 2. While doing the above, do some certification programs in PHP and Java, possibly others. Get a Zend Certification. OR 3. While doing 1, spend a few years getting a CS degree. I've already done the work-full-time-while-going-to-school thing and it doesn't excite me one bit. I didn't have the greatest college experience and am not too eager to return, but I have a family to feed. Is the degree really necessary, or is it more of a rite-of-passage type thing in most instances? I appreciate everyone's input. Thanks for taking the time to respond.

    Read the article

  • mongoDB Management Studio

    - by Liam McLennan
    This weekend I have been in Sydney at the MS Web Camp, learning about web application development. At the end of the first day we came up with application ideas and pitched them. My idea was to build a web management application for MongoDB. mongoDB I pitched my idea, put down the microphone, and then someone asked, "what's mongo?". Good question. MongoDB is a document database that stores JSON-style documents. This is a JSON document for a tweet from twitter: db.tweets.find()[0] { "_id" : ObjectId("4bfe4946cfbfb01420000001"), "created_at" : "Thu, 27 May 2010 10:25:46 +0000", "profile_image_url" : "http://a3.twimg.com/profile_images/600304197/Snapshot_2009-07-26_13-12-43_normal.jpg", "from_user" : "drearyclocks", "text" : "Does anyone know who has better coverage, Optus or Vodafone? Telstra is still too expensive.", "to_user_id" : null, "metadata" : { "result_type" : "recent" }, "id" : { "floatApprox" : 14825648892 }, "geo" : null, "from_user_id" : 6825770, "search_term" : "telstra", "iso_language_code" : "en", "source" : "&lt;a href=&quot;http://www.tweetdeck.com&quot; rel=&quot;nofollow&quot;&gt;TweetDeck&lt;/a&gt;" } A MongoDB server can have many databases, each database has many collections (instead of tables), and a collection has many documents (instead of rows). Development Day 2 of the Sydney MS Web Camp was allocated to building our applications. First thing in the morning I identified the stories that I wanted to implement: Scenario: View databases Scenario: View Collections in a database Scenario: View Documents in a Collection Scenario: Delete a Collection Scenario: Delete a Database Scenario: Delete Documents Over the course of the day the team (3.5 developers) implemented all of the planned stories (except 'delete a database') and also implemented the following: Scenario: Create Database Scenario: Create Collection Lessons Learned I'm new to MongoDB and in the past I have only accessed it from Ruby (for my hare-brained scheme). When it came to implementing our MongoDB management studio we discovered that there is no official MongoDB driver for .NET. We chose to use NoRM, honestly just because it was the only one I had heard of. NoRM was a challenge. I think it is a fine library, but it is focused on mapping strongly typed objects to MongoDB. For our application we had no prior knowledge of the types that would be in the MongoDB database, so NoRM was probably a poor choice. Here are some screens (click to enlarge):

    Read the article

  • Pragmas and exceptions

    - by Darryl Gove
    The compiler pragmas:

    #pragma no_side_effect(routinename)
    #pragma does_not_write_global_data(routinename)
    #pragma does_not_read_global_data(routinename)

    are used to tell the compiler more about the routine being called, and enable it to do a better job of optimising around the routine. If a routine does not read global data, then global data does not need to be stored to memory before the call to the routine. If the routine does not write global data, then global data does not need to be reloaded after the call. The no_side_effect directive indicates that the routine does no I/O, does not read or write global data, and that the result depends only on the input. However, these pragmas should not be used on routines that throw exceptions. The following example indicates the problem:

    #include <iostream>

    extern "C"
    {
      int exceptional(int);
      #pragma no_side_effect(exceptional)
    }

    int exceptional(int a)
    {
      if (a==7)
      {
        throw 7;
      }
      else
      {
        return a+1;
      }
    }

    int a;
    int c=0;

    class myclass
    {
      public:
        int routine();
    };

    int myclass::routine()
    {
      for(a=0; a<1000; a++)
      {
        c=exceptional(c);
      }
      return 0;
    }

    int main()
    {
      myclass f;
      try
      {
        f.routine();
      }
      catch(...)
      {
        std::cout << "Something happened" << a << c << std::endl;
      }
    }

    The routine "exceptional" is declared as having no side effects; however, it can throw an exception. The no-side-effects directive enables the compiler to avoid storing global data back to memory before the function call, and reloading it afterwards, so the loop containing the call to exceptional is quite tight:

    $ CC -O -S test.cpp
    ...
    .L77000061:
    /* 0x0014  38 */  call    exceptional ! params = %o0 ! Result = %o0
    /* 0x0018  36 */  add     %i1,1,%i1
    /* 0x001c     */  cmp     %i1,999
    /* 0x0020     */  ble,pt  %icc,.L77000061
    /* 0x0024     */  nop

    However, when the program is run the result is incorrect:

    $ CC -O t.cpp
    $ ./a.out
    Something happened00

    If the code had worked correctly, the output would have been "Something happened77" - the exception occurs on the seventh iteration. Yet the current code produces a message that uses the original values of the variables 'a' and 'c'. The problem is that the exception handler reads global data, and because of the no-side-effects directive the compiler has not updated the global data before the function call. So these pragmas should not be used on routines that have the potential to throw exceptions.
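    By contrast, the pragmas are appropriate for routines that genuinely have no side effects and cannot throw. A minimal sketch of that kind of use, assuming the same Sun/Oracle Studio compiler (the CC used above) and following the same extern "C" declaration pattern; the routine name and surrounding code are invented for illustration:

    #include <iostream>

    extern "C"
    {
      // A pure routine: no I/O, no global reads or writes, cannot throw,
      // and the result depends only on its argument.
      int cube(int x);
      #pragma no_side_effect(cube)
    }

    int cube(int x)
    {
      return x * x * x;
    }

    int total = 0;

    int main()
    {
      // Because of the pragma, the compiler may keep 'total' and the loop
      // counter in registers across the calls to cube(); since cube() can
      // never throw, no exception handler can ever observe stale globals.
      for (int i = 0; i < 1000; i++)
      {
        total += cube(i);
      }
      std::cout << total << std::endl;
      return 0;
    }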

    Read the article

  • Is CX a new concept?

    - by Isabel F. Peñuelas
    The marketing industry and the web industry have been talking about CX for some time. However, it is only very recently that the concept has reached a common meaning accepted by analysts and the IT community. The new CX model depends on two prior developments: the expansion of social media, and the impact of the new advanced features of mobile devices on brand-customer interaction.

    CXers vs UXers

    First, there is some need to disambiguate between User Experience and Customer Experience. User Experience (UX) is a well-established concept related to the design of user interactions for particular devices. UX people are interested in the multiple touch points of digital interfaces, while CX people are interested in all kinds of interfaces, including physical ones. UX is an evolution of Web Usability, while CX is a marketing concept; UX is an instrument of CX. CX, in fact, is all about Connections and Interactions.

    Connections

    Don Draper, the creative director in Mad Men, understands very well that to market effectively means to connect with people, and that the best way to connect with people is to use the connections people have with other people: understanding social media connections and taking the pulse of customers on those media are strong facilitators of CX strategies.

    Interactions

    We can very simply define CX as the relationship that a customer establishes with a brand through multiple touch points (interactions, channels) over the entire life cycle of his relationship - direct or indirect - with the brand. Interactions can be grouped into Customer Journeys across multiple touch points, defined as the path a customer follows to achieve a goal.

    Processes

    A customer journey today usually starts at the moment the customer surfs the Web; then he makes a purchase decision, purchases the product, requests a particular service, and finally recommends - or does not recommend - the product. Customer Journeys are processes, and to analyze customer journeys there exists today a broad offering of modern Customer Journey tools, actually very similar to the use cases or UML activity diagrams used for IT systems design. In summary, CX is nothing more and nothing less than applying process analysis methods to better understand how to create value through customer interactions across the user's multiple touch points with the brand.

    Read the article

  • Why would you dual-run an app on Azure and AWS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/11/10/why-would-you-dual-run-an-app-on-azure-and-aws.aspx

    I had this question from a viewer of my Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, and thought I’d publish the response. So why would you dual-run your cloud app by hosting it on Azure and AWS? Sounds like a lot of extra development and management overhead. Well, the most compelling reasons are reliability and portability. In 2012 I was working for a client who was making a big investment in the cloud, and at the end of the year we published their first external API for business partners. It was hosted in Azure and used some really nice features to route back into existing on-premise services. We were able to publish a clean, simple API to partners, and hide away the underlying complexity of the internal services while still leveraging them to do all the work. Two days after we went live, we were hit by the Azure SSL certificate expiry outage, and our API was unavailable for the best part of 3 days. Fortunately we had planned a gradual roll-out to partners, so the impact was minimal, but we’d been intending to ramp up quickly, and if the outage had happened a week or two later we would have been in a very bad place. Not least because our app could only run on Azure: we couldn’t package it up for another service without going back and reworking the code. More recently, AWS had an issue with a networking device in one of their data centres which caused an outage that took the best part of a day to resolve. In both scenarios the SLAs are worthless, as you’ll get back a small percentage of your cloud expenditure, which is going to be negligible compared to your costs in dealing with the outage. And if your app is built specifically for AWS or Azure then if there’s an extended outage you can’t just deploy it onto a new set of kit from a different supplier. And the chances are pretty good there will be another extended outage, both for Microsoft and for Amazon. But the chances are small that it will happen to both at the same time. So my basic guidance has been: ignore the SLAs, go for better uptime by using two clouds. As soon as you need to scale beyond a single instance, start by scaling out to another cloud. Then scale out to different data centres in both clouds. Then you’ve got dual-cloud, quadruple-datacentre redundancy, so any more scaling you need can be left to the clouds to auto-scale themselves. By running in both clouds, you’ve made your app portable, so in the highly unlikely event that both AWS and Azure go down in multiple regions, you’ll have a deployment package which will let you spin up a new stack on yet another cloud, without having to rework your solution.

    Read the article
