Search Results

Search found 20715 results on 829 pages for 'non mvc'.


  • Google Analytics Social Tracking implementation. Is Google's example correct?

    - by s_a
    The current Google Analytics help page on Social tracking (developers.google.com/analytics/devguides/collection/gajs/gaTrackingSocial?hl=es-419) links to this page with an example of the implementation: http://analytics-api-samples.googlecode.com/svn/trunk/src/tracking/javascript/v5/social/facebook_js_async.html

    I've followed the example carefully, yet social interactions are not registered. This is the webpage with the non-working setup: http://bit.ly/1dA00dY (obscured domain as per Google's Webmaster Central recommendations for their product forums).

    This is the structure of the page:

    In the <head>:
        - the ga async code copied from the Analytics page
        - a script tag linking to ga_social_tracking.js, stored in the same domain
        - the Twitter JS loading tag

    In the <body>:
        - the fb-root div
        - the Facebook async loading JS, including the _ga.trackFacebook(); call
        - the social buttons afterwards: the Like button markup (with the proper URL) and the Tweet button (with the proper handle)

    That's it. As far as I can tell, I have implemented it exactly like in the example, but Likes and Tweets aren't registered. I have also altered ga_social_tracking.js to register the social interactions as events, adding the code below. It doesn't work either. What could be wrong? Thanks!

    Code added to ga_social_tracking.js:

        var url = document.URL;
        var category = 'Social Media';

        /* Facebook */
        FB.Event.subscribe('edge.create', function(href, widget) {
            _gaq.push(['_trackEvent', category, 'Facebook', url]);
        });

        /* Twitter */
        twttr.events.bind('tweet', function(event) {
            _gaq.push(['_trackEvent', category, 'Twitter', url]);
        });

    Read the article

  • Name of the Countdown Numbers round problem - and algorithmic solutions?

    - by Dai
    For the non-Brits in the audience: there's a segment of a daytime game show where contestants have a set of 6 numbers and a randomly generated target number. They have to reach the target number using any (but not necessarily all) of the 6 numbers, using only arithmetic operators. All calculations must result in positive integers.

    An example: YouTube: Countdown - The Most Extraordinary Numbers Game Ever? A detailed description is given on Wikipedia: Countdown (Game Show).

    For example: the contestant selects 6 numbers - two large (possibilities include 25, 50, 75, 100) and four small (numbers 1..10, each included twice in the pool). Say the numbers picked are 75, 50, 2, 3, 8, 7, with a target number of 812.

        One attempt is (75 + 50 - 8) * 7 - (3 * 2) = 813 (this scores 7 points for a solution within 5 of the target).
        An exact answer would be (50 + 8) * 7 * 2 = 812 (this would have scored 10 points for exactly matching the target).

    Obviously this problem existed before the advent of TV, but the Wikipedia article doesn't give it a name. I've also seen this game at a primary school I attended, where it was called "Crypto" and run as an inter-class competition - but searching for that now reveals nothing. I took part in it a few times, and my dad wrote an Excel spreadsheet that attempted to brute-force the problem. I don't remember how it worked (only that it didn't work, what with Excel's 65535-row limit), but surely there must be an algorithmic solution for the problem. Maybe there's a solution that works the way human cognition does (e.g. in parallel, finding numbers 'close enough', then taking candidates and performing 'smaller' operations).
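
    A rough brute-force sketch in C# (my own illustration - not from the post, and not the spreadsheet mentioned above): it recursively combines every pair of remaining numbers with +, -, * and /, keeps intermediate results as positive integers per the show's rules, and remembers the expression that lands closest to the target. Six numbers are few enough that exhaustive search finishes instantly.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class CountdownSolver
        {
            static long bestDiff = long.MaxValue;
            static string bestExpr = "";

            static void Solve(List<(long Value, string Expr)> nums, long target)
            {
                foreach (var n in nums)
                {
                    long diff = Math.Abs(n.Value - target);
                    if (diff < bestDiff) { bestDiff = diff; bestExpr = n.Expr; }
                }
                if (nums.Count < 2) return;

                for (int i = 0; i < nums.Count; i++)
                for (int j = 0; j < nums.Count; j++)
                {
                    if (i == j) continue;
                    var a = nums[i];
                    var b = nums[j];
                    var rest = nums.Where((_, k) => k != i && k != j).ToList();

                    Combine(rest, target, a.Value + b.Value, $"({a.Expr} + {b.Expr})");
                    Combine(rest, target, a.Value * b.Value, $"({a.Expr} * {b.Expr})");
                    if (a.Value - b.Value > 0)                      // positive results only
                        Combine(rest, target, a.Value - b.Value, $"({a.Expr} - {b.Expr})");
                    if (b.Value != 0 && a.Value % b.Value == 0)     // integer division only
                        Combine(rest, target, a.Value / b.Value, $"({a.Expr} / {b.Expr})");
                }
            }

            static void Combine(List<(long Value, string Expr)> rest, long target, long value, string expr)
            {
                var next = new List<(long Value, string Expr)>(rest) { (value, expr) };
                Solve(next, target);
            }

            static void Main()
            {
                var numbers = new long[] { 75, 50, 2, 3, 8, 7 };
                Solve(numbers.Select(n => (n, n.ToString())).ToList(), 812);
                Console.WriteLine($"{bestExpr} (off by {bestDiff})"); // an exact match for 812 exists, so bestDiff ends up 0
            }
        }

    The "human-like" variant suggested at the end could sit on top of this: seed the search with partial results that are already close to the target and only then try small corrective operations.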

    Read the article

  • Am I the only one this anal / obsessive about code? [closed]

    - by Chris
    While writing a shared lock class for SQL Server for a web app tonight, I found myself writing in the code style below, as I always do:

        private bool acquired;
        private bool disposed;
        private TimeSpan timeout;
        private string connectionString;
        private Guid instance = Guid.NewGuid();
        private Thread autoRenewThread;

    Basically, whenever I'm declaring a group of variables, or writing a SQL statement, or doing any coding activity involving multiple related lines, I always try to arrange them where possible so that they form a bell curve (imagine rotating the text 90 degrees counter-clockwise). As an example of something that peeves the hell out of me, consider the following alternative:

        private bool acquired;
        private bool disposed;
        private string connectionString;
        private Thread autoRenewThread;
        private Guid instance = Guid.NewGuid();
        private TimeSpan timeout;

    In the above example, declarations are grouped (arbitrarily) so that the primitive types appear at the top. When viewing the code in Visual Studio, primitive types are a different color than non-primitives, so the grouping makes sense visually, if for no other reason. But I don't like it because the right margin is less of an aesthetic curve. I've always chalked this up to being OCD or something, but at least in my mind, the code is "prettier". Am I the only one?

    Read the article

  • How to indicate to a web server the language of a resource

    - by Nik M
    I'm writing an HTTP API to a publishing server, and I want resources with representations in multiple languages. A user whose client GETs a resource which has Korean, Japanese and Trad. Chinese representations, and sends Accept-Language: en, ja;q=0.7, should get the Japanese. One resource, identified by one URI, will therefore have a number of different language representations. This seems to me like a totally orthodox use of content negotiation and multiple resource representations.

    But when each translator comes to provide these alternate language representations to the server, what's the correct way to instruct the server which language to store the representation under? I'm having the translators PUT the representation in its entirety to the same URI, but I can't find out how to do this elegantly. Content-Language is a response header, and none of the request headers seem to fit the bill. It seems my options are:

    1. Invent a new request header
    2. Supply additional metadata in a multipart/related document
    3. Provide language as a parameter to the Content-Type of the request, like Content-Type: text/html;language=en

    I don't want to get into the business of extending HTTP, and I don't feel great about bundling extra metadata into the representation. Neither approach seems friendly to HTTP caches either. So option 3 seems like the best way that I can think of, but even then it's decidedly non-standard to put my own specific parameters on a very well established content type. Is there any by-the-book way of achieving this?
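
    For what it's worth, a sketch of what option 3 could look like from a translator's client in C# (my own illustration; the URI, payload and the "language" parameter name are assumptions, not an established convention):

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Text;
        using System.Threading.Tasks;

        class TranslationUploader
        {
            static async Task Main()
            {
                using var client = new HttpClient();

                // The Japanese representation of the resource, tagged via a media-type parameter.
                var content = new StringContent("<html>...</html>", Encoding.UTF8);
                content.Headers.ContentType = new MediaTypeHeaderValue("text/html");
                content.Headers.ContentType.Parameters.Add(new NameValueHeaderValue("language", "ja"));

                // PUT to the same URI that serves all language representations.
                var response = await client.PutAsync("https://publisher.example.com/articles/42", content);
                Console.WriteLine((int)response.StatusCode);
            }
        }

    The server would read the parameter off the Content-Type header and store the body under that language, leaving the negotiated GET behaviour untouched.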

    Read the article

  • Problems when rendering code on Nvidia GPU

    - by 2am
    I am following the OpenGL GLSL Cookbook 4.0. I have rendered a tessellated quad, as you see in the screenshot below, and I am moving the Y coordinate of every vertex using a time-based sin function, as given in the code in the book. This program, as you see in the text in the image, runs perfectly on the built-in Intel HD graphics of my processor, but I have Nvidia GT 555M graphics in my laptop (which, by the way, has switchable graphics). When I run the program on the graphics card, the OpenGL shader compilation fails on the following instruction:

        pos.y = sin.waveAmp * sin(u);

    giving the error:

        Error C1105: Cannot call a non-function

    I know this error is coming from the sin(u) call which you see in the instruction, but I am not able to understand why. When I removed sin(u) from the code, the program ran fine on the Nvidia card. It runs fine with sin(u) on the Intel HD 3000 graphics. Also, if you notice, the program is almost unusable with the Intel HD 3000 graphics; I am getting only 9 FPS, which is not enough - it's too much load for the Intel HD 3000. So, is the sin(x) function not defined in the GLSL implementation provided by the Nvidia drivers, or is it something else?

    Read the article

  • No sound after upgrading to Ubuntu 11.10 from win7

    - by Tilman
    Just as a prefix to my question, I'd like to note that I'm just now entering the world of Linux (unless you count my Android, but that's a very different experience...). I have two computers now that run Ubuntu 11.10. The first I've had very little problems with, aside from figuring out the basics. The second, from which I'm writing this question, has (up to this point) only had one problem: no sound.

    I've read a couple of similar questions and found little help, as the component catalog doesn't have my computer listed. (In fact I'm not surprised; this is a POS I had my mom grab from her work before they officially closed the doors behind them.) It had perfect sound beforehand, and no sound now. sudo lspci -v brings up:

        00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01)
            Subsystem: Intel Corporation Device d608
            Flags: bus master, fast devsel, latency 0, IRQ 45
            Memory at ff980000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
            Capabilities: [100] Virtual Channel
            Capabilities: [130] Root Complex Link
            Kernel driver in use: HDA Intel
            Kernel modules: snd-hda-intel

    Any help would be greatly appreciated; me and my gf just wanna watch a damn movie lol.

    Read the article

  • How do I uninstall GRUB?

    - by ændrük
    A hard drive that I use only for data storage still has GRUB from past Ubuntu installations. How can I remove GRUB from it without harming the rest of the drive's data?

    Background: I occasionally move the data drive between computers with various boot order configurations, so I would like it to be non-bootable in order to avoid having to accommodate it in each computer's BIOS settings. When I power on a computer while only the data drive is attached, the following appears:

        error: no such device: fdf38dd4-9e9d-479d-b830-2a6989958503.
        grub rescue>

    I can confirm from old backups of /etc/fstab that this was the UUID of a root partition that I recently reformatted and which no longer exists. Here's the data drive's partition table and raw master boot record. Please note that I'm not interested in workarounds that don't answer my primary question. I can think of several ways to work around this issue, but it bothers me on principle that I don't know how to directly resolve it. Every installation procedure should have a counterpart uninstallation procedure.

    Read the article

  • Best way to lay out the website when sections of it are almost identical

    - by Linas
    So, I have a minisite for a mobile application that I made. The mobile application is a public transport (transit) schedule viewer for a particular city (let's call it Foo), and I'm trying to sell it via that minisite. I publish that minisite at www.myawesomeapplication.com/foo/. It has the usual "standard" subpages, like "About", "Compatible phones", "Contact", etc.

    Now, I have decided to create analogous mobile applications for other cities, Bar and Baz. These mobile applications (products) would be almost identical to the one for the Foo city, so the minisites for those would (should) look very similar too (except for some artwork and a Foo = Bar replacement).

    The question is: what do you think would be the most logical way to lay out the website in this situation, both from a business and a search engine perspective? In other words, should I just duplicate the /foo/ website to /bar/ and /baz/, or would it be better to try to create a single website under the root path (/)?

    I don't want search engine penalties for almost-duplicate information under /foo/, /bar/ and /baz/, and I also don't want a messy, non-localized website (I guess the user is more likely to buy something if he/she sees "This-and-that is the application for NYC, the city you live in", not "This-and-that is the application for city A, city B, ..., NYC, ..., and city Z.").

    Read the article

  • User switching in XFCE 12.04 with LightDM, and dumping unnecessary GNOME libs

    - by user111120
    I'm an elder non-techie Mac-to-Linux convert trying to play the Linux tech game by ear, so please be gentle! :)

    I am running XFCE Ubuntu 12.04 entirely on an 8-gig flash drive and it's fantastic. I am starting to run into potential space issues (down to 1.0 gig free from 1.9 gigs since installing it last summer), most likely because of growing Thunderbird mail files, and this prompted my question.

    I just installed LightDM on my system because I want the ability to switch users in XFCE if I follow instructions on another blog. They advised using LightDM instead of GDM because LightDM doesn't download GNOME libraries. That's great since I need the space, but my question is: how can I tell whether I already have GNOME libraries installed from other updates and such? And can I minimize having any GNOME libraries? The method for me to switch users entails creating a "fast-user-switch" file in /usr/local/bin; is there any easier way?

    One last thing so I don't have to open another needless thread: while experimenting, I somehow lost the share folder in one of my accounts. Is there any way to get a share folder back?

    Thanks for any tips! Jim in NYC

    Read the article

  • Triple-display setup using AMD drivers

    - by Halik
    I am currently running a dual-display setup with an nVidia 8800 GTS video card on an Ubuntu 12.10 box. The current setup uses nVidia TwinView to render the image on a 1920x1200 display and a 1600x1200 one. I'm planning to add a third, 1280x1024 display to the setup. The change will require me to upgrade my graphics card to one supporting triple displays. I'll probably go with a Sapphire Radeon 7770 (FLEX edition, to avoid additional active DP-DVI adapters).

    Before I invest in the new card, I wanted to ask: how well will the AMD drivers support such a setup? It does not matter whether it's fglrx or the OSS ones. If I remember correctly, when running Fedora on a Radeon X800, I had 'void' areas above and below the working area on my second display. The desktop was rendered at 1920+1280 width and 1200 height (which left 176px of vertical space accessible to my cursor and windows but not displayed on the screen - I'd prefer to avoid that). It may very well have been my misconfiguration back then.

    Generally, are there any solutions from AMD on par with TwinView? Or is it a non-issue at all? Also, I'm wondering about the usual stuff: hardware H.264 decoding support, glitch-free Flash support, and any issues with Compiz/Unity.

    Read the article

  • Solving "XmlSchemaException: The global element '<elementName>' has already been declared"

    - by ChrisD
    I recently encountered this error when I attempted to consume a newly hosted WCF service. The service used the request/response model and had been properly decorated. The response and request objects were marked as DataContracts and had a specified namespace. My WCF service interface was marked as a ServiceContract and shared the namespace attribute value. Everything should have been fine, right?

        [ServiceContract(Namespace = "http://schemas.myclient.com/09/12")]
        public interface IProductActivationService
        {
            [OperationContract]
            ActivateSoftwareResponse ActivateSoftware(ActivateSoftwareRequest request);
        }

    Well, not exactly. Apparently the WSDL generator was having an issue:

        System.Xml.Schema.XmlSchemaException: The global element 'http://schemas.myclient.com/09/12:ActivateSoftwareResponse' has already been declared.

    After digging, I found the problem: the WSDL generator has some reserved suffixes for its entities, including Response, Request and Solicit (see http://msdn.microsoft.com/en-us/library/ms731045.aspx). The error message is actually the result of a naming conflict. The WSDL generator uses the namespace of the service to build its reserved types. The service contract and data contract share a namespace, which, coupled with the Response/Request name suffixes I was using in my class names, resulted in the SchemaException.

    The Fix: Two options:

    1. Rename my data contract entities to use a non-reserved keyword suffix (i.e. change ActivateSoftwareResponse to ActivateSoftwareResp), or;
    2. Change the namespace of the data contracts to differ from the service contract namespace.

    I chose option 2 and changed all my data contracts to use a "http://schemas.myclient.com/09/12/data" namespace value. This avoided the name collision and I was able to produce my WSDL and consume my service.
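
    For illustration, a minimal sketch of what option 2 looks like on the contracts themselves (the LicenseKey and Activated members are placeholders of my own, not from the original service):

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Data contracts move to a ".../data" namespace so their names no longer collide with
        // the Response/Request types the WSDL generator reserves under the service namespace.
        [DataContract(Namespace = "http://schemas.myclient.com/09/12/data")]
        public class ActivateSoftwareRequest
        {
            [DataMember]
            public string LicenseKey { get; set; }
        }

        [DataContract(Namespace = "http://schemas.myclient.com/09/12/data")]
        public class ActivateSoftwareResponse
        {
            [DataMember]
            public bool Activated { get; set; }
        }

        // The service contract keeps its original namespace.
        [ServiceContract(Namespace = "http://schemas.myclient.com/09/12")]
        public interface IProductActivationService
        {
            [OperationContract]
            ActivateSoftwareResponse ActivateSoftware(ActivateSoftwareRequest request);
        }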

    Read the article

  • Capturing BizTalk 2004 SQLAdapter failures

    - by DanBedassa
    I was recently working on a BizTalk 2004 project where I encountered an issue with capturing exceptions (inside my orchestration) originating from an external source - like the database server being down, a non-existing stored procedure, and so on. I thought I might write this up in case it helps someone.

    To reproduce the issue, I just renamed the database to something different. The orchestration was failing at the point where I make a SQL request via a Response-Request port. The exception handlers were bypassed, but I could see a warning in the event log saying: "The adapter failed to transmit message going to send port".

    After scratching my head for a while (as a newbie to BTS 2004) to find a way to catch the exceptions from the SQLAdapter in an orchestration, here is the solution I had:

    - Put the Send and Receive shapes inside a Scope shape.
    - Set the Scope's transaction type to "Long Running".
    - Add a Catch block expecting type "System.Exception".
    - Set the "Delivery Notification" of the associated port to "Transmitted".
    - Change the "Retry Count" of the associated port to 0. (This will make sure BizTalk raises an exception, instead of a warning, and you can capture it.)
    - Now capture and do whatever you need with the exception inside the Catch block.

    Read the article

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world:

    - Draw all opaque objects, front-to-back.
    - Draw all alpha-blended objects, back-to-front.

    Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to the above two rules, there's little to be done in regards to batching.

    I see one possibility to batch while still adhering to the above two rules. Opaque objects can still be drawn out of depth order, because drawing them front-to-back is merely a fill-rate optimization, while state changes may very well be far more expensive than the overdraw caused by drawing out of depth order. However, non-opaque objects - those that require alpha blending, at least - must be drawn back-to-front in order to avoid rendering artifacts.

    Is the loss of the fill-rate optimization for opaques worth the state-batching optimization?
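
    To make the trade-off concrete, here is a hypothetical C# sketch of a render queue that batches opaques by state while keeping alpha-blended objects in strict depth order (the type and field names are my own, not from any particular engine):

        using System.Collections.Generic;
        using System.Linq;

        struct RenderItem
        {
            public int MaterialId;          // proxy for texture / shader / buffer state
            public float DepthFromCamera;
            public bool IsTransparent;
        }

        static class RenderQueue
        {
            public static IEnumerable<RenderItem> Order(IEnumerable<RenderItem> items)
            {
                var opaques = items.Where(i => !i.IsTransparent)
                                   .OrderBy(i => i.MaterialId)      // batch by state first...
                                   .ThenBy(i => i.DepthFromCamera); // ...front-to-back only within a batch
                var transparents = items.Where(i => i.IsTransparent)
                                        .OrderByDescending(i => i.DepthFromCamera); // always back-to-front
                return opaques.Concat(transparents);
            }
        }

    Sorting opaques by material first gives up the strict front-to-back order but stays correct, since the depth buffer resolves visibility; the transparent pass cannot be reordered the same way without artifacts.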

    Read the article

  • So now Google has said no to old browsers when can the rest of us follow suit?

    - by Richard
    Google recently announced that they will no longer support older browsers from Aug 1st:

    http://www.bbc.co.uk/news/technology-13639875
    http://gmailblog.blogspot.com/2011/06/our-plans-to-support-modern-browsers.html

        For this reason, soon Google Apps will only support modern browsers. Beginning August 1st, we'll support the current and prior major release of Chrome, Firefox, Internet Explorer and Safari on a rolling basis. Each time a new version is released, we'll begin supporting the update and stop supporting the third-oldest version.

    There is nothing worse than looking at the patching of code that takes place to support older browsers. If we could all move towards a standards-only web (I'm looking at you, IE9), then surely we could spend more time programming good web apps and less trying to make them run equally on terrible, non-standards-compliant older browsers.

    So when can the rest of us expect to be able to tell our clients that we no longer support older browsers? Because it seems that large corporates will continue to run older browsers, and even if Google Chrome Frame can be installed without admin privileges (it's coming soon, currently in beta), we can't expect all users to be motivated to do this.

    I appreciate any thoughts.

    Read the article

  • .NET vs Windows 8: Rematch!

    - by Simon Cooper
    So, although you will be able to use your existing .NET skills to develop Metro apps, it turns out Microsoft are limiting Visual Studio 2011 Express to Metro-only. From the Express website:

        Visual Studio 11 Express for Windows 8 provides tools for Metro style app development. To create desktop apps, you need to use Visual Studio 11 Professional, or higher.

    Oh dear. To develop any sort of non-Metro application, you will need to pay for at least VS Professional.

    I suspect Microsoft (or at least, certain groups within Microsoft) have a very explicit strategy in mind. By making VS Express Metro-only, developers who don't want to pay for Professional will be forced to make their simple one-shot or open-source applications in Metro. This increases the number of applications available for Windows 8 and Windows mobile devices, which in turn makes those platforms more attractive for consumers. When you use the free VS 11 Express, instead of paying Microsoft, you provide them a service by making applications for Metro, which in turn makes Microsoft's mobile offering more attractive to consumers, increasing their market share.

    Of course, it remains to be seen whether developers forced to jump onto the Metro bandwagon will simply jump ship to Android or iOS instead.

    At least, that's what I think is going on. With Microsoft, who really knows?

    Read the article

  • How to make the members of my Data Access Layer object aware of their siblings

    - by Graham
    My team currently has a project with a data access object composed like so:

        public abstract class DataProvider
        {
            public CustomerRepository CustomerRepo { get; protected set; }
            public InvoiceRepository InvoiceRepo { get; protected set; }
            public InventoryRepository InventoryRepo { get; protected set; }
            // a couple more like the above
        }

    We have non-abstract classes that inherit from DataProvider, and the type of "CustomerRepo" that gets instantiated is controlled by that child class.

        public class FloridaDataProvider : DataProvider
        {
            public FloridaDataProvider()
            {
                CustomerRepo = new FloridaCustomerRepo(); // derived from base CustomerRepository
                InvoiceRepo = new InvoiceRepository();
                InventoryRepo = new InventoryRepository();
            }
        }

    Our problem is that some of the methods inside a given repo would really benefit from having access to the other repos. For example, a method inside InventoryRepository needs to get to customer data to make some determinations, so I need to pass in a reference to a CustomerRepository object.

    What's the best way for these "sibling" repos to be aware of each other and have the ability to call each other's methods as needed? Virtually all the other repos would benefit from having the CustomerRepo, for example, because it is where names/phones/etc. are selected from, and these data elements need to be added to the various objects that are returned out of the other repos. I can't just new-up a plain "CustomerRepository" object inside a method within a different repo, because it might not be the base CustomerRepository that actually needs to run.
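
    One direction this could go, sketched in C# (an illustration, not the only answer): give each repository a reference to its owning DataProvider, so any repo can reach whichever sibling instances the concrete provider chose to create. The GetCustomerName/DescribeReservation members are placeholders of my own:

        public abstract class CustomerRepository
        {
            public abstract string GetCustomerName(int customerId);
        }

        // Each repository receives the provider that owns it.
        public class InventoryRepository
        {
            private readonly DataProvider owner;

            public InventoryRepository(DataProvider owner)
            {
                this.owner = owner;
            }

            public string DescribeReservation(int customerId, int itemId)
            {
                // Uses the sibling CustomerRepo - whichever concrete type
                // (e.g. FloridaCustomerRepo) the derived provider instantiated.
                string name = owner.CustomerRepo.GetCustomerName(customerId);
                return $"Item {itemId} reserved for {name}";
            }
        }

        public abstract class DataProvider
        {
            public CustomerRepository CustomerRepo { get; protected set; }
            public InventoryRepository InventoryRepo { get; protected set; }

            protected DataProvider()
            {
                // Shared repos can be built here; specialized ones in the derived constructor.
                InventoryRepo = new InventoryRepository(this);
            }
        }

    Constructor-injecting only the repos a given repo actually needs (e.g. passing a CustomerRepository into InventoryRepository) is the tighter variant of the same idea, at the cost of more wiring in each derived provider.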

    Read the article

  • Ubuntu Tools for recovering data from damaged USB Flash Drive ~ 10 Gb

    - by PREDA LUCIAN
    I have technical issues with my USB flash drive - a JetFlash®V15 (TS16GJFV15). It's a very critical situation because I cannot see the data on it, and I need a way to recover it ASAP.

    In general, I had that USB flash disk connected non-stop to my laptop. A power surge occurred, and when I came back I saw this problem with it.

    Details regarding the JetFlash®V15 (at present):

    - When I connect it to a USB slot, the LED blinks intermittently and later stays on with a constant light.
    - If I inspect the computer's drives, I find a "Generic USB Flash Disk" (when the stick is connected).
    - If I inspect "Properties", I see the following details:
        - Type: unknown (application/octet-stream)
        - Size: unknown
        - Volume: unknown
        - Accessed: unknown
        - Modified: unknown

    I inspected the stick on 2 different computers (as well as in different USB ports) and the problem was the same: I cannot see the content. I checked with Windows 7 and Ubuntu 10.04, but without success. It worked with both operating systems before this issue.

    I'll appreciate an answer which solves the problem, not an answer which merely confirms it. What do I have to do to recover the information from it (nearly 10 GB)? I'm looking forward to being guided by a technical expert.

    Read the article

  • How to debug a fatal system crash - [graphical loop DOTA2]?

    - by Huw
    Whilst playing DOTA 2, I occasionally, and apparently randomly, seem to experience a fatal crash where the display freezes and the audio loops over approximately the last half second. Now, I'm interested in resolving this, but my trouble is I don't know where to start. The error appears non-reproducible (I've tried returning to games and deploying the same combination of events in hopes of pinning it to a certain shader, etc.), and I don't know which part of 'the stack' it might be coming from.

    Variables that occur to me:

    - I custom-built this system. Did I do something wrong - is my PSU not providing enough power to the graphics card?
    - I am running Steam and DOTA under Linux; could this new software have a bug?
    - Might it be something to do with my ATI Catalyst graphics drivers?
    - Is some other background process interfering?

    I'm usually mid-game when this occurs, so I quickly kill the power and reboot (when I'm lucky I can get back in with only 1-2 minutes lost!). So my question here relates to logs. Where should I start to look, or how might I set up logs to help me pin down a fatal crash of this kind by recording the moments leading up to a crash? Is this something I should push Steam to do, or is there something at the system level? Then perhaps I can return with a more specific question, and perhaps even a bug report :)

    Many thanks in advance.

    Read the article

  • Visit counts in advanced segments not consistent

    - by user671201
    My organization has recently noticed an issue when applying advanced segments to visit counts over different time ranges. With no advanced segments turned on, here are the visit counts for Oct 1st - Oct 4th during the time range Sept 8th - Oct 8th:

        Oct 1 - 7
        Oct 2 - 7
        Oct 3 - 8
        Oct 4 - 5

    Again with no advanced segments turned on, here are the visit counts for Oct 1st - Oct 4th, but with the time range changed to Oct 1st - Oct 4th. As expected, the numbers are exactly the same as above:

        Oct 1 - 7
        Oct 2 - 7
        Oct 3 - 8
        Oct 4 - 5

    Now I turn on the "Non paid search traffic" advanced segment. Here are the visit counts for Oct 1st - Oct 4th during the time range Sept 8th - Oct 8th:

        Oct 1 - 0
        Oct 2 - 0
        Oct 3 - 0
        Oct 4 - 2

    Here is where it gets weird. I keep the advanced segment on and change the time range to Oct 1st - Oct 4th. This is what I get for the exact same dates as above:

        Oct 1 - 4
        Oct 2 - 2
        Oct 3 - 6
        Oct 4 - 5

    We've found the same inconsistency in our other GA profiles that get much more traffic (the above numbers come from one of our specialized topic blogs), but the inconsistency is less pronounced where there are more visits. My question is: why are the visit counts different for different time ranges when advanced segments are turned on, but exactly the same when no advanced segments are applied? Is this a GA bug, or am I missing something about how advanced segments work?

    Read the article

  • kismet on BCM43227

    - by Uttam Baroi
    I am trying to monitor wireless on a Broadcom BCM43227. I used sudo airmon-ng to run the monitoring and got "command not found". I then installed Kismet, and when I run it, I get this:

        uttam@UT:~$ sudo kismet
        Launching kismet_server: //usr/bin/kismet_server
        Suid priv-dropping disabled.  This may not be secure.
        No specific sources given to be enabled, all will be enabled.
        Non-RFMon VAPs will be destroyed on multi-vap interfaces (ie, madwifi-ng)
        Enabling channel hopping.
        Enabling channel splitting.
        NOTICE: Disabling channel hopping, no enabled sources are able to change channel.
        Source 0 (addme): Opening none source interface none...
        FATAL: Please configure at least one packet source. Kismet will not function if no packet sources are defined in kismet.conf or on the command line. Please read the README for more information about configuring Kismet.
        Kismet exiting.
        Done.
        uttam@UT:~$

    I did check a blog post about Kismet on Broadcom that mentions some binary drivers not allowing this. I also ran iwconfig and it reports no wireless extensions - what does that mean? I need a hand with air monitoring... please help, how do I do it?

    Read the article

  • International Pricing of Software [closed]

    - by arachnode.net
    I operate a small company that charges $99 for a piece of software. I'd like to know what would be a fair price for non-US customers. Today I sold a license to a party in South Africa. He told me he had been watching the project for two years while a business justification could be made for the purchase, as SA's currency is nine times weaker than the US dollar.

    I found this resource detailing how much a Big Mac costs in various countries: http://howmuchatyourplace.com/how_much_does/Big%20Mac_cost.php

    I realize that the cost of producing a Big Mac varies from locale to locale, as does the demand for one. I am aware that many software companies charge prices in local currencies that equate to the price in US dollars. I am aware that my costs remain fixed, and obviously I cannot discount the rate at which my time costs me. I'm OK with earning less per sale, as I would rather get my software onto the desktops of those that need it than have them try to write it themselves. Support is light, and I can usually point a user to an existing blog or forum post.

    Being a resident of Hawaii, I am aware that certain goods and services cost more here. Power is up to six times as much per kWh as it is in, say, Seattle, and wages are approximately 60% of what they are for my profession (programmer).

    I'd like to offer my software at a price that would be fair for everyone around the globe. If a currency is 2 foreign units to 1 US dollar, and goods and services cost 50% more, and pay for an equivalent job is 50% of what it is here, should I charge, say, $50 instead of $99? Is there a resource which would allow me to input a price in US dollars and adjust it for a list of international locations?
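
    As a back-of-the-envelope illustration of the kind of adjustment being described (my own sketch; the Big Mac figures below are placeholders, not real data), one could scale the US price by a purchasing-power proxy such as the Big Mac comparison linked above:

        using System;

        class PppPricing
        {
            // Scale the US price by how much cheaper (in USD terms) a reference basket is locally.
            static decimal AdjustPrice(decimal usPrice, decimal localBigMacUsd, decimal usBigMacUsd)
            {
                return Math.Round(usPrice * (localBigMacUsd / usBigMacUsd), 2);
            }

            static void Main()
            {
                decimal usPrice = 99m;
                decimal usBigMacUsd = 4.20m;          // placeholder figure
                decimal southAfricaBigMacUsd = 2.45m; // placeholder figure

                Console.WriteLine(AdjustPrice(usPrice, southAfricaBigMacUsd, usBigMacUsd)); // prints 57.75
            }
        }

    Whether a single proxy like this is fair is exactly the judgment call the question is asking about; it only turns the intuition above into a repeatable formula.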

    Read the article

  • Installing Ubuntu 12.04 on a MacBook Pro 9,2

    - by stariz77
    I seem to have tried all the various suggested methods for installing Ubuntu on a MacBook Pro, but can't get anything that works, and I was wondering if anyone has run into new problems with the latest non-Retina models. I have a Core i7 in my MacBook, and the model identifier is MacBookPro9,2. I have partitioned my HD using Disk Utility and have 700 GB of free space ready for the install (I haven't removed OS X Lion; it is still there in a 50 GB partition).

    Problem: I just get a blank screen with a blinking cursor (unresponsive) in the top left whenever I boot from the disc. I left it for 20 minutes and nothing ever happened. This was without any boot manager, just holding "c" on startup.

    Attempted remedies:

    - I have downloaded the 64-bit Ubuntu ISO from their site 3 times now and burned 4 separate discs, to rule out some kind of corruption or burn error. I burned one in OS X Lion 10.7.4 and 3 on my Windows 7 PC.
    - I tried holding "alt" instead and then navigating to the Windows disc option to boot. The same thing happens: a blank, blinking, unresponsive cursor.
    - I also tried the EFI disc option, which actually brings up a menu (after saying "error prefix is not set") asking if I want to install Ubuntu, test for errors, or partition. All three options lead me to an unresponsive blank screen (some without cursors).
    - I downloaded and installed rEFIt, and if I hold "alt" on startup a Linux penguin ("Boot Linux from CD") appears in my boot options, along with the Apple boot entry and two others that I'm not sure of: "Boot EFI\boto\bootx64.efi from" and "Boot Legacy OS from". The "Boot Linux from CD" option just takes me to the blank blinking cursor screen; again, I left it for 10+ minutes and nothing.
    - I heard that detection of the graphics card might be a problem and that I need to change to nomodeset, but I have tried pressing F6 in all of the boot menus listed above and no options appear.

    Does anyone have any other suggested routes, or can you see what I might have done wrong?

    Read the article

  • Storing editable site content?

    - by hmp
    We have a Django-based website for which we wanted to make some of the content (text, and business logic such as pricing plans) easily editable in-house, and so we decided to store it outside the codebase. Usually the reason is one of the following:

    - It's something that non-technical people want to edit. One example is copywriting for a website - the programmers prepare a template with text that defaults to "Lorem ipsum...", and the real content is inserted later into the database.
    - It's something that we want to be able to change quickly, without the need to deploy new code (which we currently do twice a week). An example would be the features currently available to customers at different tiers of pricing. Instead of hardcoding these, we read them from the database.

    The described solution is flexible, but there are some reasons why I don't like it:

    - Because the content has to be read from the database, there is a performance overhead. We mitigate that by using a caching scheme, but this also adds some complexity to the system.
    - Developers who run the code locally see the system in a significantly different state compared to how it runs on production. Automated tests also exercise the system in a different state. Situations like testing new features on a staging server also get trickier - if the staging server doesn't have a recent copy of the database, it can be unexpectedly different from production. We could mitigate that by committing the new state to the repository occasionally (e.g. by adding data migrations), but it seems like a wrong approach. Is it?

    Any ideas how best to solve these problems? Is there a better approach for handling the content that I'm overlooking?

    Read the article

  • Design Pattern for Skipping Steps in a Wizard

    - by Eric J.
    I'm designing a flexible wizard system that presents a number of screens to complete a task. Some screens may need to be skipped based on answers to prompts on one or more previous screens. The conditions to skip a given screen need to be editable by a non-technical user via a UI. Multiple conditions need only be combined with "and". I have an initial design in mind, but it feels inelegant. I wonder if there's a better way to approach this class of problem.

    Initial Design

    UI: a grid of condition rows, where:

    - The first column allows the user to select a question from a previous screen.
    - The second column allows the user to select an operator applicable to the type of question asked.
    - The third column allows the user to enter one or more values, depending on the selected operator.

    Object Model:

        public enum Operations { ... }

        public class Condition
        {
            int QuestionId { get; set; }
            Operations Operation { get; set; }
            List<object> Parameters { get; private set; }
        }

        List<Condition> pageSkipConditions;

    Controller Logic:

        bool allConditionsTrue = pageSkipConditions.Count > 0;
        foreach (Condition c in pageSkipConditions)
        {
            allConditionsTrue &= Evaluate(previousAnswers, c);
        }
        // ...

        private bool Evaluate(List<Answers> previousAnswers, Condition c)
        {
            switch (c.Operation)
            {
                case Operations.StartsWith:
                    // logic for this operation
                    // etc.
            }
        }
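
    One way the controller side could evolve, sketched below (my own illustration, not a prescription): keep the editable Condition rows exactly as designed, but register each operator as a small strategy keyed by the enum, so the wizard engine only ever ANDs a list of predicates and adding an operator means adding one dictionary entry rather than growing a switch. The answer lookup is simplified here to a question-id-to-text dictionary:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        enum Operations { Equals, StartsWith, Contains }

        class Condition
        {
            public int QuestionId { get; set; }
            public Operations Operation { get; set; }
            public List<object> Parameters { get; } = new List<object>();
        }

        static class ConditionEvaluator
        {
            // One predicate per operator: (answer text, parameters) -> does the condition hold?
            private static readonly Dictionary<Operations, Func<string, List<object>, bool>> Ops =
                new Dictionary<Operations, Func<string, List<object>, bool>>
                {
                    [Operations.Equals]     = (answer, p) => p.Any(v => answer == v?.ToString()),
                    [Operations.StartsWith] = (answer, p) => p.Any(v => answer.StartsWith(v?.ToString() ?? "")),
                    [Operations.Contains]   = (answer, p) => p.Any(v => answer.Contains(v?.ToString() ?? "")),
                };

            public static bool ShouldSkip(IReadOnlyDictionary<int, string> answers, IEnumerable<Condition> conditions)
            {
                var list = conditions.ToList();
                // Skip only when at least one condition exists and all of them hold (AND semantics).
                return list.Count > 0 &&
                       list.All(c => answers.TryGetValue(c.QuestionId, out var answer) &&
                                     Ops[c.Operation](answer, c.Parameters));
            }
        }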

    Read the article

  • Using Queries with Coherence Write-Behind Caches

    - by jpurdy
    Applications that use write-behind caching and wish to query the logical entity set have the option of querying the NamedCache itself or querying the database.

    In the former case, no particular restrictions exist beyond the limitations intrinsic to the Coherence query engine itself. In the latter case, queries may see partially committed transactions (e.g. with a parent-child relationship, the version of the parent may be different than the version of the child objects) and/or significant version skew (the query may see the current version of one object and a far older version of another object). This is consistent with "read committed" semantics, but the read skew may be far greater than would ever occur in a non-cached environment.

    As is usually the case, the application developer may choose to accept these limitations (with the hope that they are sufficiently infrequent), or they may choose to validate the reads (perhaps via a version flag on the objects). This also applies to situations where a third-party application (such as a reporting tool) is querying the database. In many cases, the database may only be in a consistent state after the Coherence cluster has been halted.

    Read the article
