Search Results

Search found 98173 results on 3927 pages for 'maintaining old code'.


  • Making large-scale changes to an economy in a social game

    - by Zach
    Are there any examples or case studies of social games, specifically on Facebook, where the developer has made drastic changes to the economy? I'm specifically interested in examples where the old economy was based on purchasing items with Facebook Credits, and the game then moved to a new model where the same or similar inventory is sold for a soft currency. The closest comparisons I've been able to find so far are iOS games that have gone from purchase models to freemium models, but I haven't found a comparable scenario in a social game besides larger-scale MMOs.

    Read the article

  • The most dangerous SQL Script in the world!

    - by DrJohn
    In my last blog entry, I outlined how to automate SQL Server database builds from concatenated SQL scripts. However, I did not mention how I ensure the database is clean before I rebuild it. Clearly a simple DROP/CREATE DATABASE command would suffice, but you may not have permission to execute such commands, especially in a corporate environment controlled by a centralised DBA team. However, you should at least have database owner permissions on the development database so you can actually do your job! Then you can employ my universal "drop all" script, which will clear down your database before you run your SQL scripts to rebuild all the database objects.

    Why start with a clean database? During the development process, it is all too easy to leave old objects hanging around in the database, which can have unforeseen consequences. For example, when you rename a table you may forget to delete the old table and change all the related views to use the new table. Clearly this means an end-user querying the views will get the wrong data, and your reputation will take a nosedive as a result! Starting with a clean, empty database and then building all your database objects from SQL scripts using the technique outlined in my previous blog means you know exactly what you have in your database. The database can then be repopulated using SSIS and, bingo, you have a data mart "to go".

    My universal "drop all" SQL script: to ensure you start with a clean database, run my universal "drop all" script, which you can download from here: 100_drop_all.zip. By using the database catalog views, the script finds and drops all of the following database objects:

    - Foreign key relationships
    - Stored procedures
    - Triggers
    - Database triggers
    - Views
    - Tables
    - Functions
    - Partition schemes
    - Partition functions
    - XML schema collections
    - Schemas
    - Types
    - Service Broker services
    - Service Broker queues
    - Service Broker contracts
    - Service Broker message types
    - SQLCLR assemblies

    There are two optional sections to the script: drop users and drop roles. Use these at your peril, particularly as you may well remove your own permissions! Note that the script has a verbose mode which displays the SQL commands it is executing; this can be switched on by setting @debug=1. Running this script against one of the system databases is certainly not recommended, so I advise you to keep a USE database statement at the top of the file (a sketch of the approach appears below). Good luck and be careful!!
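    For illustration, here is a minimal sketch of the technique for just two object types, foreign keys and tables, using the sys.foreign_keys and sys.tables catalog views. The database name and variable names are placeholders of mine, not the contents of the downloadable script:

        -- Sketch: drop all foreign keys, then all tables, via the catalog views.
        -- MyDevDatabase is an illustrative name; keep a USE statement here so the
        -- script can never run against a system database by accident.
        USE MyDevDatabase;
        GO

        DECLARE @sql nvarchar(max);
        DECLARE @debug bit;
        SET @sql = N'';
        SET @debug = 1;  -- verbose mode: print the commands before executing

        -- Drop all foreign key constraints first, so tables can be dropped in any order
        SELECT @sql = @sql + N'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.'
                    + QUOTENAME(t.name) + N' DROP CONSTRAINT ' + QUOTENAME(fk.name)
                    + N';' + CHAR(13)
        FROM sys.foreign_keys AS fk
        JOIN sys.tables AS t ON fk.parent_object_id = t.object_id;

        -- Then drop the tables themselves
        SELECT @sql = @sql + N'DROP TABLE ' + QUOTENAME(SCHEMA_NAME(schema_id)) + N'.'
                    + QUOTENAME(name) + N';' + CHAR(13)
        FROM sys.tables;

        IF @debug = 1 PRINT @sql;
        EXEC sys.sp_executesql @sql;

    The real script applies the same pattern to every object type listed above, working in dependency order, which is why foreign key relationships are dropped before the tables they reference.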

    Read the article

  • More Maintenance Plan Weirdness

    - by AjarnMark
    I'm not a big fan of the built-in Maintenance Plan functionality in SQL Server. I like the interface in SQL 2005 better than 2000 (it looks more like building an SSIS package), but it's still a bit of a black box. You don't really know what commands are being run based on the selections you have made, and you can easily make some unwise choices without realizing it, such as shrinking your database on a regular basis. I really prefer to know exactly which commands, with which options, are being run on my servers.

    Recently I had another very strange thing happen with a Maintenance Plan, this time in SQL 2005 SP3. I inherited this server and have done a bit of cleanup on it, but had not yet gotten around to replacing the Maintenance Plans with all my own scripts. However, one of the maintenance plans, which was just responsible for doing LOG backups, was running more frequently than that system needed, and I thought I would just tweak the schedule a bit. So I opened the Maintenance Plan, edited the properties of the Subplan to set a new schedule, saved it, and figured all was good to go. But the next execution of the Scheduled Job that triggers the Maintenance Plan code failed with an error about the owner of the job. Specifically, the error was: "Unable to determine if the owner (OldDomain\OldDBAUserID) of job MaintenancePlanName.Subplan has server access (reason: Could not obtain information about Windows NT group/user 'OldDomain\OldDBAUserID'...". I was really confused because I had previously updated all of the jobs to have current accounts as the owners. At first I thought it was just a fluke, but it happened on the next scheduled cycle, so I investigated further and, sure enough, that job had the old DBA's account listed as the owner. I fixed it and the job successfully ran to completion.

    Now, I don't really like mysteries like that, so I did some more testing and verified that, sure enough, just editing the Subplan schedule and saving the Maintenance Plan caused the Scheduled Job to be recreated with the old credentials. I don't know where it is getting those credentials, but I can only assume they are those of the original creator of the Maintenance Plan, and for some reason it insists on using that ID for the job owner. I looked through the options in SSMS and could not find anything that would let me easily set the value I wanted it to use. I suspect that if I did something like executing sp_changeobjectowner against the Maintenance Plan, it would use that new ID instead. I'm sure there is a good reason that it works this way but, rather than mess around with it much more, I'm just going to spend my time rolling out my replacement scripts instead. A quick fix for the job owner is sketched below.

    Chalk this little hidden oddity up as yet one more reason I'm not a fan of Maintenance Plans.
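    For anyone hitting the same error, here is a minimal sketch of the manual fix, assuming the job can simply be reassigned with msdb.dbo.sp_update_job. The login names are illustrative placeholders, not values from my server:

        -- Find any jobs still owned by the departed account (names are illustrative)
        SELECT j.name, SUSER_SNAME(j.owner_sid) AS owner_login
        FROM msdb.dbo.sysjobs AS j
        WHERE SUSER_SNAME(j.owner_sid) = N'OldDomain\OldDBAUserID';

        -- Reassign the Maintenance Plan's job to a current account
        EXEC msdb.dbo.sp_update_job
            @job_name         = N'MaintenancePlanName.Subplan',
            @owner_login_name = N'NewDomain\CurrentDBAUserID';

    Bear in mind that, as described above, editing the Subplan recreates the job with the old credentials, so the reassignment may need to be repeated after every edit until the plan is replaced.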

    Read the article

  • Google I/O 2012 - Measuring the End-to-End Value of Your App

    Google I/O 2012 - Measuring the End-to-End Value of Your App. Speakers: Neil Rhodes, Nick Mihailovski, Mike Kwong. We've rethought mobile app analytics from the ground up. If you are a mobile app developer, come see what's new from the land of Google Analytics. Understand how to measure the end-to-end value of your app, and improve its performance to drive usage and retention. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers | Views: 69 | Ratings: 4 | Time: 01:04:12 | More in Science & Technology

    Read the article

  • Content API for Shopping Technical Webinar - April 3, 2012

    Content API for Shopping Technical Webinar - April 3, 2012. This webinar is for those interested in getting up and running with the Google Content API for Shopping without worrying about constructing XML or figuring out how to make an HTTP request in your language of choice. We'll show you how to leverage open-source client libraries written by Google engineers so you can focus on the important stuff: your product data. We cover four basic topics: a review of existing resources, a basic primer on using the API, best practices, and using a client library to manage product data. Feel free to follow along on the slides: google-content-api-tools.appspot.com. From: GoogleDevelopers | Views: 1112 | Ratings: 16 | Time: 46:55 | More in Science & Technology

    Read the article

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server side) on a new forum (a brand-new product from a technical point of view) that we launched in Germany a few months ago, and I'm quite surprised by the results I get.

    I monitor our response time using Apache logs and our own implementation of the Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms, where our old product was responding in about 1050 ms. On the other hand, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today, where it was 700 ms three months ago with our old product. I figured that GWT was taking client-side metrics into account, so I added some measures to our Boomerang beacon, and everything looks just fine. I've also run some random pages through YSlow and Google's Page Speed, and everything looks better than it was before. We even have an 82% score on Google's Page Speed tool, which is quite good for a site with some ads on it :)

    Lately, we have signed a deal with Akamai to use two of their products: a CDN for our static files (we were using another CDN before, but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems that these changes have improved our response time from 650 ms to about 500 ms, which is good (still not great, but it is definitely an improvement). But Webmaster Tools continues to report an increasing average response time, where we see it decreasing over the same period.

    Have you ever seen the same kind of weird behaviour on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools, so that we could improve our site and constantly check whether it is what Google wants?

    Edit 2011/07/26: Thanks for your answers, guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one, for now. We probably found an issue on our side with some very slow pages (around 3000 ms!!) and we are trying to fix them. I'll keep you posted as soon as I have more info. Thanks again!

    Read the article

  • Google Top Geek E07

    Google Top Geek E07 (in Spanish). News: 1. The Knowledge Graph is now available in Spanish and several more languages, fully localized. 2. New version of Snapseed for iOS and Android; Gmail for Android and version 2.0 for iOS; a new look for YouTube. 3. 500 million users on Google+ and a new feature: Communities. Plus the searches of the week and the most-watched videos on YouTube. We recommend Picket, an Android app that works in Mexico and gives you local cinema listings. Developer news: 1. Better maps for Android apps with the new API. 2. A picture is worth a thousand words: Place Photos and Radar Search. Links and more information on the blog: programa-con-google.blogspot.com. From: GoogleDevelopers | Views: 80 | Ratings: 11 | Time: 18:09 | More in Science & Technology

    Read the article

  • Google I/O 2012 - Building Android Applications that Use Web APIs

    Google I/O 2012 - Building Android Applications that Use Web APIs. Speaker: Yaniv Inbar. Google offers a large and growing set of back-end services, from AdSense to Tasks to Calendar to Google+, that can enrich your app, and increasingly they have a uniform set of APIs. This session discusses how to use them efficiently and securely, including authenticating safely and with a good user experience, and describes Android-specific app-level optimizations. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers | Views: 563 | Ratings: 12 | Time: 55:14 | More in Science & Technology

    Read the article

  • Various issues linked to my CD drive, when it has a disc in it

    - by Voyagerfan5761
    When I go to the Desktop and click on a media icon (for my flash drive, a CD, whatever it is), the following problems occur, in approximately this sequence:

    1. Nautilus will close if it's open.
    2. The desktop icons disappear.
    3. My Window List shows a button that says "Starting File Manager".
    4. The icons reappear.
    5. The button in the Window List disappears.

    Because of this problem, I can no longer drag and drop media, nor can I right-click to perform actions such as "Eject" and "Safely Remove Drive". The same symptoms occur if I click a media icon (one that is also present on the desktop) in Nautilus' Computer view, though notably not if I click it in the places list on the left. I have confirmed that this problem happens only if there is a CD in the drive (a Matshita UJDA360). Also, inserting a disc into the CD drive appears to kill all running programs and restart Nautilus (or X; I'm not sure). Applications like Brasero and Rhythmbox will not start while there is a disc in the drive. Removing the disc doesn't cause the list of media to update; it must be forced to update by clicking on one of the desktop icons and going through one of the cycles described above.

    It doesn't seem to matter what type of disc is in the drive. This has happened with CD-RWs I burned years ago using Roxio on Windows XP, the Ubuntu disc I installed from (burned with InfraRecorder Portable under Windows XP), and the retail game disc for Star Trek Armada II. The first indication of a problem was Brasero dying when I tried to insert a disc for erasure and rewriting. Since then, I've drafted several different questions on various issues, finally combining them into this one when I realized that having a CD in the drive was the common link.

    Could this be a simple driver issue? If Ubuntu is dynamically detecting my hardware on boot, can I specify drivers for devices that I know will be a problem if the default files are used? I'm beginning to think that my laptop, an old Dell Inspiron 2650, is just too old or proprietary-driver-hungry (or something, maybe RAM-starved) for Ubuntu and Windows XP to play nicely alongside each other. Or maybe I just need to carefully take my wall-wart machine to a coffee shop for an afternoon so I can download updates and such from the Internet, as I lack a home connection.

    Read the article

  • Google+ Platform Office Hours for February 1st 2012

    Google+ Platform Office Hours for February 1st, 2012. Jenny Murphy and Jonathan Beri represented Google; Fraser Cain, Abraham Williams, and Allen Firstenberg joined us from the developer community. This week we discussed the new configuration options for the Google+ Badge. You can read more about these new features on the platform blog: googleplusplatform.blogspot.com. Please join the discussion on our support forum: groups.google.com. Learn more about our office hours on Google Developers: developers.google.com. From: GoogleDevelopers | Views: 4150 | Ratings: 55 | Time: 47:51 | More in Science & Technology

    Read the article

  • Google Chrome Extensions: Launch Event (part 6)

    Google Chrome Extensions: Launch Event (part 6). Video footage from the Google Chrome Extensions launch event on 12/09/09. Nick Baum, product manager for Google Chrome's extension system, presents the gallery approval process, gives tips to extension developers on how to make their extension successful, and discusses the team's short-term plans. From: GoogleDevelopers | Views: 5659 | Ratings: 17 | Time: 08:42 | More in Science & Technology

    Read the article
